I once had the same feeling, but it was dispelled by acknowledging that NN neurons are not even approximations of how neurons work. At most, NNs are inspired by the topology of a subset of neurons, and that's where the similarity between NNs and biological neurons stops. It's like the connection between objects in real life and objects in programming: both are useful abstractions inspired by things in the real world, but the similarities stop there.
Neurons have a lot going on: they send and receive signals through a multitude of mediums, not just neural impulses, and the connections they form with other neurons are plastic. Neurons also don't have simplistic activation functions; they're capable of doing a lot more with the information they receive and send. And gradient descent and backpropagation don't take place in any part of the brain.
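To make the "simplistic activation function" point concrete: the entire computation of a standard NN unit is a weighted sum pushed through a fixed nonlinearity. This is a minimal sketch (the weights and inputs are just made-up numbers), not any particular library's implementation:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A standard NN 'neuron': weighted sum of inputs, then a fixed activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# The whole "behavior" of one unit is this single number:
activation = artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], 0.2)
```

That's the full extent of the abstraction; everything a biological neuron does with neurotransmitters, timing, and structural plasticity is absent.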
Through that lens, I see NNs as really complex and impressive Markov chain generators. They can produce results that look intelligent, but it's just statistical correlation, not at all how the brain works.
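The Markov chain analogy is easy to make concrete: a toy word-level chain just samples "what tends to follow what" from observed text, with no model of meaning. A minimal sketch (the corpus and function names here are purely illustrative):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8):
    """Walk the chain by repeatedly sampling a follower of the last word."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

chain = build_chain("the brain is complex and the brain is plastic")
sample = generate(chain, "the")
```

Output can look locally coherent while being nothing but frequency statistics; the debate is whether NNs differ from this in kind or only in scale.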
What attributes does a system need for you to accept its comparison to a brain/neuron?
Without defining what's essential, I'm hesitant to call the comparison insufficient. If a topological subset of neurons isn't good enough, what do we need in addition, or instead? If we stuff NNs full of complicated (how complicated?) activation functions, does that new system do the trick? Or add...47 new "neuron" variants? Or swap the learning scheme from gradient descent to something fancier? (For that matter, do we even know what the brain's learning scheme is, and why gradient descent/backprop isn't an acceptable, if extremely crude, approximation of it?)
The brain is unimaginably intricate, and our models are hilariously simple in contrast, of course. But which of those mismatches are differences in kind vs. differences in magnitude?
> I see NNs as if they're like really complex and impressive Markov chain generators.
Funnily enough, that’s how I see brains. They start out as a few neurones basically implementing ‘hard wired’ logic, then some feedback loops form and next thing you know they’re asking “why am I here?”