
# Perceptron

## The Perceptron is a more general computational model than the McCulloch-Pitts neuron.

As you can see in the figure, {x1, x2, x3, …, xn} are the inputs, y is the output, and f and g are the functions. There are two types of inputs: excitatory inputs, whose effect depends on the other inputs, and inhibitory inputs, which act independently of them. Here {x1, x2, x3, …, xn} are the excitatory inputs.

## The main difference between the McCulloch-Pitts neuron and the Perceptron is the introduction of numerical weights (w1, w2, w3, …, wn) for the inputs, and a mechanism for learning these weights.

### In this equation, I just moved theta (θ) from the right-hand side to the left-hand side for simplification.

Look CAREFULLY: here we start at i = 0, which means

{ w0x0 + w1x1 + w2x2 + … + wnxn } ≥ 0

[ Now, putting w0 = −θ and x0 = 1 ], we get { −θ + w1x1 + w2x2 + … + wnxn } ≥ 0, i.e. w1x1 + w2x2 + … + wnxn ≥ θ, which is exactly the PREVIOUS equation, where i starts at 1.
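This "bias trick" can be sketched in code. A minimal example (the function name and the weight/threshold values in the usage are illustrative, not from the source):

```python
def perceptron_output(weights, inputs, theta):
    # Bias trick: fold the threshold theta into the weight vector
    # by prepending w0 = -theta and a constant input x0 = 1.
    w = [-theta] + list(weights)
    x = [1] + list(inputs)
    total = sum(wi * xi for wi, xi in zip(w, x))
    # Fire (output 1) when the weighted sum crosses zero.
    return 1 if total >= 0 else 0

# With weights (1, 1) and theta = 2, this behaves like an AND gate:
print(perceptron_output([1, 1], [1, 1], 2))  # 1
print(perceptron_output([1, 1], [1, 0], 2))  # 0
```

Note that the threshold comparison "≥ θ" has become "≥ 0" only because θ now hides inside w0.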

# NOTE THAT:

From the equation, it should be clear that even a Perceptron separates the input space into two halves: all inputs that produce a 1 lie on one side, and all inputs that produce a 0 lie on the other side.
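This separation can be checked numerically. A small sketch, assuming illustrative weights w = (1, 1) and θ = 1.5 (not values from the source); the boundary is then the line x1 + x2 = 1.5:

```python
# Illustrative weights and threshold: w = (1, 1), theta = 1.5.
w, theta = (1.0, 1.0), 1.5

def output(x):
    # Fires when w . x >= theta, i.e. when x lies on the positive
    # side of the line x1 + x2 = 1.5.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) - theta >= 0 else 0

points = [(0, 0), (0, 1), (1, 0), (1, 1)]
positive = [x for x in points if output(x) == 1]  # one side of the line
negative = [x for x in points if output(x) == 0]  # the other side
print(positive)  # [(1, 1)]
print(negative)  # [(0, 0), (0, 1), (1, 0)]
```

Every input producing 1 satisfies x1 + x2 ≥ 1.5, and every input producing 0 satisfies x1 + x2 < 1.5, so the two output classes sit in the two half-planes the line defines.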

## The difference is that the weights can be learned and the inputs can be real-valued.
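The text mentions a mechanism for learning the weights without spelling it out. A common choice is the classic perceptron learning rule, sketched here under that assumption (function names and the AND-gate training data are illustrative):

```python
def train_perceptron(samples, epochs=10, lr=1.0):
    # Classic perceptron learning rule: on each misclassified sample,
    # nudge the weights by lr * (target - prediction) * input.
    n = len(samples[0][0])
    w = [0.0] * (n + 1)  # w[0] plays the role of -theta (bias trick)
    for _ in range(epochs):
        for x, y in samples:
            xa = [1.0] + list(x)  # augmented input with x0 = 1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, xa)) >= 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, xa)]
    return w

def predict(w, x):
    xa = [1.0] + list(x)
    return 1 if sum(wi * xi for wi, xi in zip(w, xa)) >= 0 else 0

# Learn the AND function from labelled examples:
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(samples)
print([predict(w, x) for x, _ in samples])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a separating weight vector in a finite number of updates; for non-separable data (such as XOR) it never converges.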

Source: NPTEL’s Deep Learning Course