You missed the crucial part: it's not about the synapses themselves, but about the organisation they form that brings out consciousness.
At the moment we have absolutely no idea what basic organisation (of synapses) it stems from, something we could build upon, nor any idea of the general organisation of the whole system that makes consciousness an emergent property.
This actually creates a good argument for considering current LLMs and other generative AI as steps in the right direction. Specifically, while we have "no idea of what basic organisation (of synapses) it stems from", we do know a few things about this organization which creates consciousness - such as:
- It is simple enough to be reachable by a dumb, brute-force, random-walking, incremental process of evolution;
- It scales well, conferring survival benefits from the start, and at every point of the way, from the first neuron, up to the human brain;
With those in mind, it absolutely makes sense for us to find a relatively simple compute/organizational system and scale it all the way to consciousness (or a set of those that scale together) - because that's exactly how evolution must have done it, since it's structurally the only thing it can do.
Having done a bit of biology, I would not want to use the words "evolution" and "simple" in the same sentence. ;-) Certainly not for things that have evolved over many generations.
(Even artificial) evolution sometimes does really weird things.
I don't mean to diss evolution. It is simple in terms of a program, the simplest it could possibly be - but it's also massively parallel; every molecule of or around any living thing participates in it. All life, constantly, everywhere, all at once. And then it had literal billions of years to run on this rock, to get to the point of complex life in a complex environment.
Like a brute-force enumeration running on a supercomputing platform, it's mighty, but it's also simple.
Also, the process is simple. The output not so much.
> It is simple enough to be reachable by a dumb, brute-force, random-walking, incremental process of evolution;
I'm not an expert in this field, but from what I've heard, brute-force natural selection is the naive explanation we're given in school. There are many more factors at play beyond random chance: for example, there is also sexual selection. Sexual selection favours some characteristic that isn't necessarily an advantage in the current environment, but is somehow preferred by the opposite sex. According to some research, the reason we lost the penis bone that other primates have is sexual selection.
One intuition that helps is to see evolution as a random search through solution space. Another thing to realize is that -in evolutionary algorithms- the distribution of random trials will be strongly biased around existing solutions.
It helps to understand that in some situations, a search algorithm with some level of randomness can arrive at a solution faster, on average, than a purely systematic approach. In other situations, a search with added randomization might be slower, but its ability to escape local optima (to some degree) means that it is much more likely to find the global optimum (or at least a better local optimum :-P ).
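As a toy illustration of that last point (everything here - the landscape, the parameters, the function names - is made up for the example, not taken from any real system), here is a sketch comparing plain greedy hill climbing against a mutation-based search whose trials cluster around the current best solution but occasionally jump at random:

```python
import random

# Toy 1-D fitness landscape: a local peak (value 5 at index 2)
# and a global peak (value 9 at index 8).
landscape = [1, 3, 5, 4, 2, 1, 4, 7, 9, 6]

def greedy_hill_climb(start):
    """Deterministically move to the best neighbour until stuck."""
    x = start
    while True:
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = max(neighbours, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return x  # no neighbour improves: a (possibly local) optimum
        x = best

def mutation_search(start, steps=500, seed=0):
    """(1+1)-style search: most trials are small mutations of the
    current best, but occasional large random jumps can escape
    local optima."""
    rng = random.Random(seed)
    x = start
    for _ in range(steps):
        if rng.random() < 0.2:
            cand = rng.randrange(len(landscape))  # rare big jump
        else:
            # small mutation: step one position left or right
            cand = min(max(x + rng.choice([-1, 1]), 0), len(landscape) - 1)
        if landscape[cand] >= landscape[x]:  # keep only non-worse trials
            x = cand
    return x

print(landscape[greedy_hill_climb(0)])  # 5 -- stuck on the local peak
print(landscape[mutation_search(0)])    # almost surely 9, the global peak
```

The greedy climber starting at index 0 reliably gets trapped on the local peak; the randomized search, with its trial distribution biased around the current best plus occasional jumps, almost always finds the global one - which is the trade-off the paragraph above describes.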
Interesting points. But I would add that sexual selection can select for some trait that is actually counterproductive, a famous example being peacock tails. Female peacocks use them to evaluate the health of potential partners, so it is an indirect measure of fitness to the environment. But at the same time, it is clear that healthy peacocks would still be better off without carrying around such big tails. If tails weren't so important for reproduction, they would have likely shrunk by now. I wonder how a comparison with the methods you mentioned would capture this.
“What created the only example of consciousness we know of?” Daniel asked.
“Evolution.”
“Exactly. But I don’t want to wait three billion years, so I need to make the selection process a great deal more refined, and the sources of variation more targeted.”
Julie digested this. “You want to try to evolve true AI? Conscious, human-level AI?”
“Yes.” Daniel saw her mouth tightening, saw her struggling to measure her words before speaking.
“With respect,” she said, “I don’t think you’ve thought that through.”
“On the contrary,” Daniel assured her. “I’ve been planning this for twenty years.”
“Evolution,” she said, “is about failure and death. Do you have any idea how many sentient creatures lived and died along the way to Homo sapiens? How much suffering was involved?”
“Part of your job would be to minimise the suffering.”
“Minimise it?” She seemed genuinely shocked, as if this proposal was even worse than blithely assuming that the process would raise no ethical concerns. “What right do we have to inflict it at all?”
Daniel said, “You’re grateful to exist, aren’t you? Notwithstanding the tribulations of your ancestors.”
“I’m grateful to exist,” she agreed, “but in the human case the suffering wasn’t deliberately inflicted by anyone, and nor was there any alternative way we could have come into existence. If there really had been a just creator, I don’t doubt that he would have followed Genesis literally; he sure as hell would not have used evolution.”