A simple demonstration of the perceptron update rule

Simple perceptron

Let's consider the following simple perceptron, with a transfer function given by f(x)=x to keep the maths simple:


Transfer function

The transfer function is given by:

 y= w_1.x_1 + w_2.x_2 + ... + w_N.x_N = \sum\limits_{i=1}^N w_i.x_i
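To make this concrete, here is a minimal Python sketch of this weighted sum (the function name and the example values are just illustrative):

 def forward(weights, inputs):
     # Linear perceptron output: y = w_1.x_1 + w_2.x_2 + ... + w_N.x_N
     return sum(w * x for w, x in zip(weights, inputs))

 # Example with two inputs: 0.5*2.0 + (-1.0)*3.0
 print(forward([0.5, -1.0], [2.0, 3.0]))  # -2.0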

Error (or loss)

In artificial neural networks, the error we want to minimize is:

 E=(y'-y)^2

with:

  • E the error
  • y' the expected output (from the training data set)
  • y the actual output of the network

In practice, and to simplify the maths, this error is divided by two (the factor of 2 cancels when differentiating):

 E=\frac{1}{2}(y'-y)^2
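In code, this halved squared error is a one-liner; here is a minimal sketch (names are illustrative):

 def error(y_expected, y_actual):
     # E = 1/2 * (y' - y)^2
     return 0.5 * (y_expected - y_actual) ** 2

 print(error(1.0, -2.0))  # 0.5 * (1 - (-2))**2 = 4.5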

Gradient descent

The algorithm (gradient descent) used to train the network (i.e. updating the weights) is given by:

 w_i'=w_i-\eta.\frac{dE}{dw_i}

where:

  • w_i the weight before the update
  • w_i' the weight after the update
  • \eta the learning rate
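A single gradient-descent step on one weight is then just the formula above; here is a minimal sketch (the value of eta is arbitrary):

 def update_weight(w, grad, eta=0.1):
     # w_i' = w_i - eta * dE/dw_i, where grad stands for dE/dw_i
     return w - eta * grad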

Differentiating the error

Let's differentiate the error:

 \frac{dE}{dw_i} = \frac{1}{2}\frac{d}{dw_i}(y'-y)^2

Thanks to the chain rule [ (f \circ g)' = (f' \circ g).g' ], the previous equation can be rewritten:

 \frac{dE}{dw_i} = \frac{2}{2}(y'-y)\frac{d}{dw_i} (y'-y) = -(y'-y)\frac{dy}{dw_i}

(the expected output y' comes from the training set and does not depend on w_i, hence \frac{d}{dw_i}(y'-y) = -\frac{dy}{dw_i}).

As  y= w_1.x_1 + w_2.x_2 + ... + w_N.x_N :

 \frac{dE}{dw_i} = -(y'-y)\frac{d}{dw_i}(w_1.x_1 + w_2.x_2 + ... + w_N.x_N) = -(y'-y)x_i
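We can sanity-check this result numerically by comparing -(y'-y).x_i against a finite-difference estimate of dE/dw_i; here is a minimal sketch (all values are illustrative):

 def forward(weights, inputs):
     return sum(w * x for w, x in zip(weights, inputs))

 def error(weights, inputs, y_expected):
     return 0.5 * (y_expected - forward(weights, inputs)) ** 2

 weights, inputs, y_expected = [0.5, -1.0], [2.0, 3.0], 1.0
 y = forward(weights, inputs)  # -2.0

 # Analytic gradient for w_0: -(y' - y) * x_0 = -(1 - (-2)) * 2 = -6
 i, h = 0, 1e-6
 analytic = -(y_expected - y) * inputs[i]

 # Finite-difference estimate: (E(w_0 + h) - E(w_0)) / h
 bumped = list(weights)
 bumped[i] += h
 numeric = (error(bumped, inputs, y_expected) - error(weights, inputs, y_expected)) / h

 print(analytic, numeric)  # both close to -6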

Updating the weights

The weights can be updated with the following formula:

 w_i'=w_i-\eta.\frac{dE}{dw_i} = w_i+\eta(y'-y)x_i

In conclusion:

 w_i'= w_i + \eta(y'-y)x_i
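Putting everything together, here is a minimal training-loop sketch using this rule on a toy dataset generated from y = 2.x_1 - 1.x_2 (the dataset, learning rate, and number of epochs are arbitrary choices):

 def forward(weights, inputs):
     return sum(w * x for w, x in zip(weights, inputs))

 # Toy training set: (inputs, expected output) pairs from y = 2.x_1 - 1.x_2
 data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0), ([1.0, 1.0], 1.0)]
 weights = [0.0, 0.0]
 eta = 0.1

 for epoch in range(100):
     for inputs, y_expected in data:
         y = forward(weights, inputs)
         # w_i' = w_i + eta * (y' - y) * x_i
         weights = [w + eta * (y_expected - y) * x
                    for w, x in zip(weights, inputs)]

 print(weights)  # converges toward [2.0, -1.0]

Note that the weights are updated after each sample, which matches the per-sample error derived above.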

