Much of the technology we use is bio-inspired and one example of such biomimicry is the use of neural networks in ML/AI. What exactly is a neural network and how is that biomimicry?
Well, neural networks get their name from neurons, which are cells found in the brain. Neurons are responsible for sending electrical signals within the brain, down the spinal cord, and out to control muscular movement. Neurons transmit information to one another across junctions called synapses, which can be either electrical or chemical. For the sake of this post, though, we’ll stick to electrical. I will greatly simplify the complex mechanism: essentially, an action potential travels down the length of the neuron to the end of the cell (the synaptic terminal) and depolarizes it (makes the voltage in that area less negative), so that the channels in that region open and the signal is transmitted to the next neuron. This continues, neuron by neuron, until the signal reaches where it needs to be.
How is this similar to a neural network? Neural networks have multiple layers, and each layer takes an input from the previous layer, applies a function, and spits out another value, much like a neuron, which receives a signal from an adjacent neuron, sends it down its length, and transmits it to the next neuron. The function applied in the neural network, called an activation function, acts like the channels at the end of the neuron, spitting out a value that depends on the size of the input. A common choice of activation function is ReLU. ReLU is really simple: it returns max(0, input), so it avoids any complications with exponents, division, etc., and unlike the sigmoid function it doesn’t squash its outputs into the 0–1 range, so large inputs produce proportionally large outputs. The value from the activation function then gets passed on to the next layer, and the cycle continues.
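To make the analogy concrete, here’s a minimal sketch of one layer’s forward pass in plain Python. The weights, biases, and inputs are made-up numbers chosen purely for illustration, and `layer_forward` is a hypothetical helper name, not a function from any library:

```python
def relu(x):
    # ReLU activation: returns max(0, input). Like the channels at the
    # synaptic terminal, it only passes a signal along when the incoming
    # value is positive; anything below the threshold is clipped to 0.
    return max(0.0, x)

def layer_forward(inputs, weights, biases):
    # Each neuron in the layer takes a weighted sum of the previous
    # layer's outputs (its incoming "signals"), adds a bias, then
    # applies the activation function to decide what it passes on.
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(relu(total))
    return outputs

# Two inputs feeding a layer of three neurons (all values invented):
x = [0.5, -1.0]
W = [[0.2, 0.8],   # weights for neuron 1
     [-0.5, 0.1],  # weights for neuron 2
     [1.0, 1.0]]   # weights for neuron 3
b = [0.0, 0.5, 0.9]
print(layer_forward(x, W, b))
```

Note that the first neuron’s weighted sum comes out negative, so ReLU clips it to 0, i.e. that neuron stays "silent," just as a neuron that isn’t depolarized enough won’t fire.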
Although this is a simplification of the complex biological process of neural synapses, hopefully this post was able to clearly explain how neural networks are a biomimicry of neurons. Thanks for reading!