The history of neural networks dates back to the 1960s, and the name itself comes from the biological networks of neurons that the models are meant to imitate.

A neural network is made up of connected layers of neurons. The size and shape of the network determine its specific function.

Neural networks have been used at almost every level of scientific research, including computer science, biology, and mathematics. (Neural Network History, 1975)

The theory of the neural network was proposed in 1950 and formally defined in 1970 by the U.S. mathematicians Alexander Yee and Donald Neurath. (Neural Network History, 1950)

The theory is based on the similarity between the neural activity within a single brain and the behaviour observed across different brains.

The theory was later expanded in the early 1980s by my colleagues at IIT Kharagpur, and became well known through many scientific articles and popular accounts.

An Extensive Intro to Neural Network History (1960s) — Deep Learning

The actual history of neural networks, however, is slightly different.

The reason we are able to use a BNN for real-time medical diagnosis is that, for the approach to scale, the network must be trained to high accuracy, which means it must be able to generate a very large number of results, and the algorithm must also run sufficiently quickly. (Neural Network History, 1962)

Training BNNs involves continuously adjusting the network until it reaches state-of-the-art accuracy, which is done by minimizing the average error over all the recent trials.
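The training loop described above can be sketched in a few lines. This is a minimal illustration, not the method from the text: it fits a single linear neuron by plain gradient descent, minimizing the mean squared error over a batch of trials. The data, learning rate, and step count are all illustrative assumptions.

```python
# Illustrative trials: inputs x and targets y generated by the rule y = 2x + 1.
# (Assumed toy data, not from the original study.)
trials = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

w, b = 0.0, 0.0   # parameters of a single linear neuron, initialized to zero
lr = 0.1          # learning rate (assumed)

def mean_error(w, b):
    """Average squared error over all trials -- the quantity being minimized."""
    return sum((w * x + b - y) ** 2 for x, y in trials) / len(trials)

for step in range(2000):
    # Gradient of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in trials) / len(trials)
    gb = sum(2 * (w * x + b - y) for x, y in trials) / len(trials)
    # Adjust the parameters a small step downhill.
    w -= lr * gw
    b -= lr * gb

print(round(w, 3), round(b, 3))
```

After enough steps the parameters converge toward the generating rule (w ≈ 2, b ≈ 1), and the average error over the trials approaches zero.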

“This is still quite an experimental tool, but we are getting close to the point where we can start doing simulations on people,” neuroscientist Francisco Coelho of Duke University in Durham, North Carolina, told Nature. (Neural Network History, 1965)

“We can actually experiment with the algorithms and see how they play out in our brains. We’ll be able to test them in larger and larger populations, and get more and more accurate results.”

The network is modelled on neurons in the visual cortex.

In 1973, Neural Network was published as a book by David M. Edelman and George P. Jensen. (Neural Network History, 1973)

Initially it was written by David B. Edelman, and in the 1970s it was taught, in school (and some not-so-school) settings, that the Net was to be used for problems such as differentiating functions of given inputs.

Computers are mainly used for this purpose. The present study was made with such use in mind, and after digging deeply into the Net, I became convinced that it is the right tool. (Neural Network History, 1970)

In the 1990s, neural networks were used to solve routine physics problems, such as detecting fractals and finding the best path to a destination.

More recently, neural networks have been used in everything from pharmaceuticals to genomics.