In the previous post, I described what a Perceptron is. So, if you don't know what a Perceptron is, don't worry: you can learn about it by clicking the button below.

Now we'll learn why the Perceptron needs a learning algorithm, and then we'll discuss the algorithm itself.

So, let’s get started.

Why Do We Need a Perceptron Learning Algorithm?

Did You Know?

The main difference between the McCulloch-Pitts neuron and the Perceptron is the introduction of numerical weights ( w1, w2, w3, … wn ) for the inputs, along with a mechanism for learning these weights.

What are these Weights?

Well, a weight is attached to each input. For example, X1 is an input, but in the Perceptron the effective input becomes X1*W1. These weights are not fixed; they are learnt, and for that learning to happen we need an algorithm that can adjust them.
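As a quick illustrative sketch (the variable names and values here are my own, not from the post), the weighted input a Perceptron sees is just each input multiplied by its weight, summed up:

```python
# Illustrative sketch: computing a Perceptron's weighted input.
# The input and weight values below are hypothetical, chosen for this example.

inputs = [1.0, 0.5, -0.3]    # x1, x2, x3
weights = [0.4, -0.2, 0.7]   # w1, w2, w3 (these are what the algorithm learns)

# Each input is multiplied by its weight, then everything is summed.
weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(weighted_sum)  # 1.0*0.4 + 0.5*(-0.2) + (-0.3)*0.7 = 0.09
```

The learning algorithm's whole job is to find weight values that make this sum come out on the right side of the threshold for every training input.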

So, now we are going to learn the Perceptron Learning Algorithm.

Let's assume:

positive inputs ( P ) with label 1,

and negative inputs ( N ) with label 0,

and initialize W = [ W0, W1, W2, … , Wn ].

What does Convergence mean in this algorithm?

The algorithm converges when all the inputs in the training data set are classified correctly.
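The update loop described above can be sketched in code. This is a minimal, illustrative implementation of the standard Perceptron learning rule under the conventions stated here (P labelled 1, N labelled 0, x augmented with a leading 1 so that w0 acts as the bias); the function name and the toy OR-gate data are my own, not from the post:

```python
# A minimal sketch of the Perceptron Learning Algorithm.
# Convention: P = positive inputs (label 1), N = negative inputs (label 0),
# and each x is augmented with a leading 1 so that w0 acts as a bias.

def train_perceptron(P, N, n_features, max_epochs=1000):
    w = [0.0] * (n_features + 1)          # W = [w0, w1, ..., wn]

    def dot(w, x):
        return sum(wi * xi for wi, xi in zip(w, x))

    for _ in range(max_epochs):
        converged = True
        for x in P + N:
            x_aug = [1.0] + list(x)       # prepend 1 for the bias weight w0
            if x in P and dot(w, x_aug) < 0:
                # Positive input misclassified: add x to w.
                w = [wi + xi for wi, xi in zip(w, x_aug)]
                converged = False
            elif x in N and dot(w, x_aug) >= 0:
                # Negative input misclassified: subtract x from w.
                w = [wi - xi for wi, xi in zip(w, x_aug)]
                converged = False
        if converged:                      # every input classified correctly
            break
    return w

# Toy example (hypothetical data): learning the OR function.
P = [(0, 1), (1, 0), (1, 1)]  # label 1
N = [(0, 0)]                  # label 0
w = train_perceptron(P, N, n_features=2)
```

Note that, as the convergence definition above implies, this loop only terminates on its own when the data is linearly separable; the `max_epochs` cap is there so the sketch always stops.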

We can re-write the algorithm in a simpler way. Consider two vectors, w and x:

w = [ w0, w1, w2, …, wn ] and x = [ 1, x1, x2, …, xn ]

Since w and x are two vectors, their dot product will be:

w · x = w0·1 + w1·x1 + w2·x2 + … + wn·xn
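In code, that dot product and the resulting classification look like this (the weight and input values below are hypothetical, chosen for illustration):

```python
# Sketch: the dot product w . x, with x augmented by a leading 1
# so that w0 serves as the bias term. Values are hypothetical.
w = [-1.0, 1.0, 1.0]   # [w0, w1, w2]
x = [1.0, 0.0, 1.0]    # [1, x1, x2]

dot = sum(wi * xi for wi, xi in zip(w, x))

# The Perceptron outputs 1 when the dot product is >= 0, else 0.
output = 1 if dot >= 0 else 0
```

Writing the bias as w0 inside the vector keeps the update rule uniform: learning adjusts every component of w, bias included, by the same add-or-subtract step.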

By this rule, the Perceptron trains, that is, learns its weights. This is why it is named the Perceptron Learning Algorithm.