What is an Artificial Neural Network (ANN)?

[et_pb_section fb_built=”1″ _builder_version=”3.26.4″][et_pb_row _builder_version=”3.27.3″][et_pb_column type=”4_4″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

If we break down the name Artificial Neural Network word by word, we get –

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row column_structure=”1_3,1_3,1_3″ _builder_version=”3.26.6″][et_pb_column type=”1_3″ _builder_version=”3.26.6″][et_pb_blurb title=”Artificial” _builder_version=”3.27.3″ header_level=”h2″ header_font=”Times New Roman||||||||”]

Made or produced by human beings rather than occurring naturally, especially as a copy of something natural.

[/et_pb_blurb][/et_pb_column][et_pb_column type=”1_3″ _builder_version=”3.26.4″][et_pb_blurb title=”Neural” _builder_version=”3.27.3″ header_level=”h2″]
Relating to a nerve or the nervous system.
[/et_pb_blurb][/et_pb_column][et_pb_column type=”1_3″ _builder_version=”3.26.4″][et_pb_blurb title=”Network” _builder_version=”3.27.3″ header_level=”h2″]
A group or system of interconnected people or things.
[/et_pb_blurb][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.26.6″][et_pb_column type=”4_4″ _builder_version=”3.26.6″][et_pb_blurb title=”According to Wikipedia” content_max_width=”736px” _builder_version=”3.26.6″]

Artificial neural networks are computing systems that are inspired by, but not necessarily identical to, the biological neural networks that constitute animal brains. Such systems “learn” to perform tasks by considering examples, generally without being programmed with any task-specific rules.

[/et_pb_blurb][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.26.6″][et_pb_column type=”4_4″ _builder_version=”3.26.6″][et_pb_blurb title=”Tec4Tric explains Artificial Neural Network” content_max_width=”736px” _builder_version=”3.27.3″]

An Artificial Neural Network (ANN) is an efficient information-processing system. It consists of a large number of highly interconnected processing elements, called neurons, nodes, or units. Each neuron is connected to the others by connection links, and each connection link is associated with a weight, which contains information about the input. An ANN’s collective behavior is characterized by its ability to learn, recall, and generalize patterns, similar to a human brain. ANNs are used in Machine Learning and Deep Learning for more accurate results.

Each neuron has an internal state, called its activation. The activation signal of a neuron can be transmitted to other neurons.

A Neuron can send only one signal at a time.

[/et_pb_blurb][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.26.6″][et_pb_column type=”4_4″ _builder_version=”3.26.6″][et_pb_blurb title=”What is Biological Neuron?” content_max_width=”725px” _builder_version=”3.26.6″]A biological neuron mainly consists of four parts:

  • Dendrite – Receives signals from other neurons.
  • Synapse – Point of connection to other neurons.
  • Soma – The cell body; it processes the information (acts like a CPU).
  • Axon – Transmits the output of the neuron.
[/et_pb_blurb][/et_pb_column][/et_pb_row][/et_pb_section][et_pb_section fb_built=”1″ _builder_version=”3.27.3″][et_pb_row _builder_version=”3.26.6″][et_pb_column type=”4_4″ _builder_version=”3.26.6″][et_pb_text _builder_version=”3.27.4″]

Evolution of Neural Networks

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row make_equal=”on” _builder_version=”3.27.3″ width=”100%” max_width=”1493px” module_alignment=”center”][et_pb_column type=”4_4″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.3″ text_font_size=”16px” text_orientation=”center” text_text_align=”center”]

Year | Neural Network | Inventor
1943 | McCulloch-Pitts Neuron | McCulloch & Pitts
1949 | Hebb Network | Hebb
1958 – 1988 | Perceptron | Frank Rosenblatt, Block, Minsky & Papert
1960 | Adaline | Widrow & Hoff
1972 | Kohonen self-organizing feature map | Kohonen
1986 | Back-propagation | Rumelhart, Hinton & Williams
1987 | Adaptive Resonance Theory | Carpenter & Grossberg
1988 | Counter-propagation | Grossberg
1988 | Radial basis function | Broomhead & Lowe
1988 | Neocognitron | Fukushima
[/et_pb_text][/et_pb_column][/et_pb_row][/et_pb_section][et_pb_section fb_built=”1″ _builder_version=”3.27.3″][et_pb_row _builder_version=”3.27.3″][et_pb_column type=”4_4″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

First, let me describe the McCulloch-Pitts Neuron

According to Wikipedia, an artificial neuron, or the McCulloch-Pitts neuron, is a mathematical function conceived as a model of biological neurons.

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row column_structure=”2_5,3_5″ _builder_version=”3.27.3″][et_pb_column type=”2_5″ _builder_version=”3.27.3″][et_pb_image src=”https://tec4tric.com/wp-content/uploads/2018/10/mcculouch-pitts.png” _builder_version=”3.27.3″][/et_pb_image][/et_pb_column][et_pb_column type=”3_5″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

As you can see, {X1, X2, X3, …, Xn} ∈ {0,1} are the inputs and Y ∈ {0,1} is the output, while f and g are functions.

g is the pre-activation (aggregation) function and f is the activation function.

There are two types of inputs: excitatory inputs, which contribute to the aggregated sum, and inhibitory inputs, which, when active, force the output to 0 regardless of the other inputs.

Here {X1, X2, X3, …, Xn} ∈ {0,1} are the Excitatory Inputs.

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row column_structure=”3_5,2_5″ _builder_version=”3.27.3″][et_pb_column type=”3_5″ _builder_version=”3.27.3″][et_pb_image src=”https://tec4tric.com/wp-content/uploads/2018/10/mccullouch-pitts1.png” _builder_version=”3.27.3″][/et_pb_image][/et_pb_column][et_pb_column type=”2_5″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

The output will be 1 if g(x) is greater than or equal to (≥) the threshold parameter, and 0 if g(x) is less than (<) the threshold parameter.

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.27.3″][et_pb_column type=”4_4″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

A single McCulloch-Pitts neuron can be used to represent Boolean functions (AND, OR, NOR, etc.) that are linearly separable.

Linear Separability: there exists a line (or plane) such that all inputs that produce a ‘1’ lie on one side of it and all inputs that produce a ‘0’ lie on the other side.
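The thresholding behavior described above can be sketched in Python. This is a minimal illustration, not code from the article; it follows the standard McCulloch-Pitts convention (stated above) that any active inhibitory input forces the output to 0, and the function name and threshold values are hypothetical.

```python
# Sketch of a McCulloch-Pitts neuron with binary inputs.
def mcp_neuron(excitatory, inhibitory, theta):
    """Return 1 when the sum of excitatory inputs reaches theta,
    unless any inhibitory input is active (which forces 0)."""
    if any(inhibitory):              # active inhibitory input -> output 0
        return 0
    g = sum(excitatory)              # g aggregates the binary inputs
    return 1 if g >= theta else 0    # f thresholds the aggregate

# AND over two inputs: fires only when both inputs are 1 (theta = 2)
print(mcp_neuron([1, 1], [], 2))   # 1
print(mcp_neuron([1, 0], [], 2))   # 0
print(mcp_neuron([1, 1], [1], 2))  # 0 (inhibited)
```

With theta = 1 instead of 2, the same neuron computes OR, which is why a single unit covers several linearly separable Boolean functions.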

[/et_pb_text][et_pb_button button_url=”https://tec4tric.com/2018/10/mcculloch-pitts-neuron.html” url_new_window=”on” button_text=”Learn More about McCulloch Pitts Neuron” button_alignment=”center” button_alignment_last_edited=”off|desktop” _builder_version=”3.27.3″][/et_pb_button][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.27.3″][et_pb_column type=”4_4″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

What is Perceptron?

The perceptron is a more general computational model than the McCulloch-Pitts neuron.[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row column_structure=”3_5,2_5″ _builder_version=”3.27.3″][et_pb_column type=”3_5″ _builder_version=”3.27.3″][et_pb_image src=”https://tec4tric.com/wp-content/uploads/2018/10/perceptron.jpg” _builder_version=”3.27.3″ width=”100%” max_width=”76%” module_alignment=”center”][/et_pb_image][/et_pb_column][et_pb_column type=”2_5″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

Here, (w1, w2, w3, …, wn ) are the Weights.

The main difference between the McCulloch-Pitts neuron and the perceptron is the introduction of numerical weights (w1, w2, w3, …, wn) for the inputs, and a mechanism for learning these weights.
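A perceptron's forward pass can be sketched as follows; the weights and threshold below are hypothetical values chosen for illustration, not from the article.

```python
# Sketch of a perceptron: weighted sum of inputs, then a step decision.
def perceptron(x, w, theta):
    s = sum(wi * xi for wi, xi in zip(w, x))  # weighted sum using the weights w
    return 1 if s >= theta else 0             # step activation at threshold theta

# OR gate with weights w = [1, 1] and threshold 1
print(perceptron([0, 1], [1, 1], 1))  # 1
print(perceptron([0, 0], [1, 1], 1))  # 0
```

Unlike the McCulloch-Pitts neuron, each input here is scaled by its own weight before aggregation, and those weights are what a learning algorithm adjusts.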

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row column_structure=”1_2,1_2″ _builder_version=”3.27.3″][et_pb_column type=”1_2″ _builder_version=”3.27.3″][et_pb_button button_url=”https://tec4tric.com/2018/10/what-is-perceptron.html” url_new_window=”on” button_text=”Learn More about Perceptron” _builder_version=”3.27.3″][/et_pb_button][/et_pb_column][et_pb_column type=”1_2″ _builder_version=”3.27.3″][et_pb_button button_url=”https://tec4tric.com/2018/10/perceptron-learning-algorithm.html” url_new_window=”on” button_text=”Perceptron Learning Algorithm” _builder_version=”3.27.3″][/et_pb_button][/et_pb_column][/et_pb_row][et_pb_row column_structure=”3_5,2_5″ _builder_version=”3.26.6″][et_pb_column type=”3_5″ _builder_version=”3.26.6″][et_pb_blurb title=”More about Artificial Neural Network” _builder_version=”3.27.3″ header_level=”h2″]An Artificial Neural Network typically consists of three layers.
  • Input Layer – This layer takes the input.
 
  • Hidden Layer – This layer processes the given data.
 
  • Output Layer – This layer produces output.
The number of hidden layers varies.

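As a rough illustration of the three layers above, here is data flowing through a tiny 2-2-1 network. The weights and the step activation are made-up assumptions for the sketch, not values from the article.

```python
# Sketch: input layer -> one hidden layer -> output layer.
def step(s, theta=0.5):
    return 1 if s >= theta else 0

def layer(x, W):
    # each row of W holds the incoming weights of one neuron in the layer
    return [step(sum(w * xi for w, xi in zip(row, x))) for row in W]

W_hidden = [[0.6, 0.6], [0.4, 0.4]]  # 2 hidden neurons, 2 inputs each (hypothetical)
W_output = [[0.7, 0.0]]              # 1 output neuron, 2 hidden inputs (hypothetical)

x = [1, 1]                        # the input layer takes the input
hidden = layer(x, W_hidden)       # the hidden layer processes it
output = layer(hidden, W_output)  # the output layer produces the output
print(output)  # [1]
```

Adding more hidden layers just means chaining more `layer` calls between the input and the output.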
[/et_pb_blurb][/et_pb_column][et_pb_column type=”2_5″ _builder_version=”3.26.6″][et_pb_image src=”https://tec4tric.com/wp-content/uploads/2019/07/ann-tec4tric-e1564202338179.png” _builder_version=”3.27.3″ custom_margin=”50px||||false|false”][/et_pb_image][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”4.1″][et_pb_column _builder_version=”4.1″ type=”4_4″][et_pb_text _builder_version=”4.1″ module_id=”download” hover_enabled=”0″]

If you want to download the Artificial Neural Networks notes as a PDF, use the button below.

[/et_pb_text][et_pb_button button_text=”Download” _builder_version=”4.1″ button_url=”https://tec4tric.com/wp-content/uploads/2020/01/ANN.pdf” button_alignment=”center” hover_enabled=”0″][/et_pb_button][et_pb_image src=”https://tec4tric.com/wp-content/uploads/2020/01/Artificial-Neural-network-1.jpg” _builder_version=”4.1″ url=”https://youtu.be/Ny3rTb2pQcY” url_new_window=”on” hover_enabled=”0″][/et_pb_image][et_pb_text _builder_version=”4.1″ hover_enabled=”0″]

Watch the Artificial Neural Network tutorial on YouTube

[/et_pb_text][/et_pb_column][/et_pb_row][et_pb_row _builder_version=”3.27.3″][et_pb_column type=”4_4″ _builder_version=”3.27.3″][et_pb_text _builder_version=”3.27.4″]

Important Terminologies of ANN

  • Weights: In an ANN, each neuron is connected to other neurons, and each connection is associated with a weight. The weights contain information about the input, which the neural network uses to solve problems. The weights can be represented as a matrix, also known as the connection matrix, e.g. W = [w1, w2, w3, …, wn].
    [/et_pb_text][et_pb_text _builder_version=”3.27.4″]
    • Bias: A bias unit is an “extra” neuron added to each pre-output layer whose value is always 1. The bias affects the calculation of the net input. It is included by adding a component x0 = 1 to the input vector X, so the input vector becomes X = (1, X1, X2, …, Xn).
    [/et_pb_text][et_pb_text _builder_version=”3.27.4″]
    • Threshold: The threshold is a set value used to calculate the final output of a network; it appears in the activation function. Each application has its own threshold limit. Based on the threshold value, the activation functions are defined and the output is calculated. The threshold value is denoted by “θ”.
    [/et_pb_text][et_pb_text _builder_version=”3.27.4″]
    • Learning Rate: The learning rate, denoted by “α”, controls the amount of weight adjustment at each step of training. It ranges from 0 to 1 and determines the rate of learning at each time step.
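The bias and learning-rate ideas above can be sketched together with a perceptron-style weight update. All numeric values below are hypothetical, chosen only to make the arithmetic visible.

```python
ALPHA = 0.1  # learning rate alpha, between 0 and 1

def update(w, x, target, predicted, alpha=ALPHA):
    x = [1] + list(x)  # prepend x0 = 1 so w[0] acts as the bias weight
    # each weight moves by alpha * error * its corresponding input
    return [wi + alpha * (target - predicted) * xi for wi, xi in zip(w, x)]

w = [0.0, 0.2, -0.4]  # [bias weight, w1, w2], hypothetical starting values
w = update(w, [1, 1], target=1, predicted=0)
print([round(wi, 2) for wi in w])  # [0.1, 0.3, -0.3]
```

A larger α would move the weights further per step; a smaller α makes learning slower but steadier, which is why α is kept between 0 and 1.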
    [/et_pb_text][/et_pb_column][/et_pb_row][/et_pb_section]

    Sayan De

    Sayan De holds a B.Tech in Computer Science & Engineering and is currently pursuing an M.Tech in CSE. His areas of interest are Machine Learning, Deep Learning, Deep NLP, Computer Vision, Data Science, Linux, and a bit of Website Development.
