First of all, to understand the perceptron you NEED to know "What is the McCulloch-Pitts Neuron?". My request to you is to read that blog post first, because the McCulloch-Pitts neuron and the perceptron are quite similar.


The perceptron is a more general computational model than the McCulloch-Pitts neuron.

As you can see, {X1, X2, X3, …, Xn} are the inputs and Y is the output, and f and g are functions. There are two types of inputs: an excitatory input contributes toward making the neuron fire, while an inhibitory input, if active, can prevent the neuron from firing regardless of the other inputs. Here {X1, X2, X3, …, Xn} are the excitatory inputs.

Here, (w1, w2, w3, …, wn ) are the Weights.

The main difference between the McCulloch-Pitts neuron and the perceptron is the introduction of numerical weights (w1, w2, w3, …, wn) for the inputs and a mechanism for learning these weights.

This equation is the same as the McCulloch-Pitts neuron's; only here the weights (W) are included. These weights are learned and change during training, which was not the case for the McCulloch-Pitts neuron.
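As a sketch, the weighted-sum decision rule described above can be written in Python like this. The specific weights, inputs, and threshold below are illustrative assumptions, not values from the post:

```python
# A minimal sketch of the perceptron decision rule: fire (output 1)
# when the weighted sum of the inputs reaches the threshold theta.

def perceptron(x, w, theta):
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum >= theta else 0

# Example (assumed values): two inputs with equal weights and a
# threshold of 1.5 behave like a logical AND.
print(perceptron([1, 1], [1.0, 1.0], 1.5))  # 1
print(perceptron([1, 0], [1.0, 1.0], 1.5))  # 0
```

Unlike the McCulloch-Pitts neuron, the weights here need not all be equal, and they are meant to be adjusted by a learning procedure rather than fixed by hand.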

In this equation, I just moved theta (θ) from the right side to the left side for simplification.

Look CAREFULLY: here we start at i = 0, which means

{ W0X0 + W1X1 + W2X2 + … + WnXn } ≥ 0

Now, putting W0 = –θ and X0 = 1, we get { –θ + W1X1 + W2X2 + … + WnXn } ≥ 0, which is the same as the PREVIOUS equation, where i starts at 1.

This W0 is called the Bias.
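The bias trick above can be checked numerically: absorbing the threshold θ into the weights as W0 = –θ (with a constant input X0 = 1) gives exactly the same value as subtracting θ from the sum that starts at i = 1. The inputs, weights, and θ below are arbitrary assumed values:

```python
# Verify that the two forms of the perceptron's pre-activation agree.

def weighted_sum_with_theta(x, w, theta):
    # Original form: sum over i = 1..n, with theta moved to the left side.
    return sum(wi * xi for wi, xi in zip(w, x)) - theta

def weighted_sum_with_bias(x, w, theta):
    # Bias form: prepend x0 = 1 and w0 = -theta, then sum from i = 0.
    x0, w0 = 1, -theta
    return w0 * x0 + sum(wi * xi for wi, xi in zip(w, x))

x, w, theta = [0.5, 2.0, 1.0], [0.2, -0.4, 0.9], 0.3
print(weighted_sum_with_theta(x, w, theta) == weighted_sum_with_bias(x, w, theta))  # True
```

Treating the bias as just another weight is convenient because the learning rule can then update it the same way as every other weight.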


From the equation, it should be clear that even a perceptron separates the input space into two halves: all inputs that produce a 1 lie on one side, and all inputs that produce a 0 lie on the other side.
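To make the two halves concrete, here is a small sketch with hand-picked (assumed) weights: in two dimensions the boundary W0 + W1X1 + W2X2 = 0 is a line, and the perceptron outputs 1 on one side of it and 0 on the other.

```python
# A perceptron with fixed weights splits the 2-D input plane into two
# halves along the line w0 + w1*x1 + w2*x2 = 0.

def perceptron(x, w):
    # w[0] is the bias W0; the constant input X0 = 1 is implicit.
    return 1 if w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)) >= 0 else 0

# Assumed weights: boundary line x1 + x2 = 0.5 (implements logical OR).
w = [-0.5, 1.0, 1.0]
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, w))  # only (0, 0) falls on the 0 side
```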

The difference is that the weights can be learned and the inputs can be real-valued.
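The learning mechanism mentioned above is not spelled out in this post, but a common version is the classic perceptron learning rule: nudge each weight (including the bias) whenever the perceptron misclassifies an example. The data, learning rate, and epoch count below are illustrative assumptions:

```python
# A sketch of the perceptron learning rule on real-valued inputs:
# on each mistake, move the weights toward the correct answer.

def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0, 0.0]  # [bias W0, W1, W2], all starting at zero
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] + w[1] * x[0] + w[2] * x[1] >= 0 else 0
            err = target - y          # 0 when correct, otherwise +1 or -1
            w[0] += lr * err          # bias update (constant input X0 = 1)
            w[1] += lr * err * x[0]
            w[2] += lr * err * x[1]
    return w

# Assumed linearly separable data: label 1 roughly when x1 + x2 > 1.
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.4, 0.3), 0), ((0.7, 0.9), 1)]
w = train(data)
for x, t in data:
    y = 1 if w[0] + w[1] * x[0] + w[2] * x[1] >= 0 else 0
    print(x, t, y)  # prediction matches the label after training
```

Because the data is linearly separable, the perceptron convergence theorem guarantees this rule eventually classifies every training example correctly.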

Source: NPTEL’s Deep Learning Course