Am I the only one who's skeptical about the feasibility of AI? As I see it, there are two ways to think about AI. First, there's the kind of AI that arises from software emulating parts of the human brain, based on our current understanding of its inner workings, to produce human-like intelligence: even if the mechanisms differ from those actually employed by the brain, the output is similar in responses and depth of reasoning. Then there's the AI that stems from creating an artificial brain by reverse-engineering the human brain. But we are an awfully long way from doing that, mostly because we can't expect to unravel in a few decades what evolution has spent millennia perfecting.
It looks to me, a layman, that the only approach that holds any water is the first one. But then again, it mostly looks like people are implementing software based on a flawed understanding of cognitive functions and basically hoping that something magic happens. How can a scattershot approach like this ever produce anything even remotely resembling human intelligence?
Evolution did just fine, and it's far more scattershot than our current efforts at AI.
You're also crucially missing the possibility that someone comes up with an intelligent algorithm that doesn't mirror the human brain in much detail, but still manages to outperform it. Think of flight: inventing flapping machines didn't turn out to be very useful, but we figured out a workaround that was far more efficient. The most interesting (terrifying?) AI research is along these lines.
You're being a bit parochial, specifically anthropocentric. Why a human brain? If you consider it possible that other intelligences exist in the universe, they probably won't have human brains... Or consider other animals on Earth: given several million years and the right conditions, do you consider it possible that intelligence (at our level) could evolve again - and I don't just mean in apes, dolphins and elephants? Especially fascinating in this connection are parrots, crows and other corvids, which are obviously quite intelligent but have a completely different neuroanatomy in their higher centers from that of mammals.
Are we really so special? Apart from being first (that we know of)? The history of science is a history of iconoclasm against anthropocentrism!
Of course, copying what we know works seems like a reasonable starting point.
Secondly, who said it has to be done within a few decades? If it is done within centuries or millennia, it is still done. "Infeasible" would mean it can't be done, ever.
Personally, I tend towards 50 years off... as it has been for a long time (and probably will continue to be) <- this is a joke. I'm saying it's a looong way off.
"mostly because we can't expect to unravel in a few decades what evolution has spent millennia perfecting" - that's not the kind of thinking that got us driving cars, flying planes, coding computers and altering genes. Evolution doesn't have a brain.
I'm extremely skeptical of this, even though I spend a good deal of my time trying to automate machine-to-machine communication (which gets me thinking about this a lot). I also think there's a good amount of hubris in these scientists' belief that the brain is something that can be emulated by computing power of any kind... are we sure it's an apt analogy (the brain as a computer)?
Why not? I keep seeing these discussions pointing out the futility of thinking about the brain as a computer, but still don't see powerful arguments to back that postulate up, other than the 'brain too hard, computer too easy, so brain no computer'.
Brain as a computer should, in my opinion, be the default position for these discussions. Why? Consider the old and tired brain-made-of-matter argument: there's no reason to think there's something magical or supernatural inside the brain, so treat it as an organized collection of atoms doing cool stuff. The default position cannot be magic; it has to be something that can be disproved or ruled out.
Some parts of it seem to work, as far as we know, in a (suspiciously) algorithmic way; in other words, a highly abstract step-by-step chain of actions can be identified for a given part of the brain.
Why not start with the crazy assumption that the whole brain acts as a computer (the theoretical concept), and then identify which parts of it fail the analogy? The key part here is the word 'fail': it should not mean 'too complex for any computer we have built', nor 'we don't know any algorithm that does that'; it should mean that there are parts that inherently cannot be modelled, under any circumstances, by the very definition of an algorithm.
If some part is discovered not to fit the analogy, only then should you question whether the analogy in question is apt or not.
It's totally fair for one to start with that assumption, and stick with it until proven otherwise. But it's just as fair for me to be skeptical of the analogy. Before the concept of computers existed, people (very smart people) thought complex organisms worked just like mechanical machines (http://en.wikipedia.org/wiki/Mechanical_philosophy), and I'm sure there were - and are - many similarities and likenesses to be drawn. However, that analogy wasn't correct, as we now know. So we're on to the current thinking. Fair enough.
I also think it's a jump to go from "if the brain isn't a computer, then it's magical". There's a lot of room in between. And there are plenty of reasons to think that what goes on inside the brain cannot be mimicked by a computer or algorithms as we currently know them. We don't even know what consciousness is! We should at least admit as much...
I really liked the point you made about mechanical philosophy. This is science in action: when one paradigm can no longer serve as a valid model, a revised one arises with none of its weaknesses but more virtues - in this case, the computing-machine philosophy.
>And there are plenty of reasons to think that what goes on inside the brain cannot be mimicked by a computer or algorithms as we currently know them. We don't even know what consciousness is! We should at least admit as much...
I agree, it's a huge jump! And that's precisely my point. The brain-as-a-computer paradigm has nothing to do with the idea that an i5 core can't recognize cats; it's the theoretical concept of a computing machine that is used when arguing in favour of the BaaC paradigm.
Consciousness is precisely what doesn't fit in the BaaC paradigm. So the research should start from there.
I'm curious whether the definition of consciousness will have to change in the near future. Exciting times!
Where is the hubris in attempting to do something without knowing whether it's possible? What is it about AI that seems to make people defensive? Is it the idea that human intelligence might not be as special as we think it is?
Religious people, even if not strict, always have this reaction. I don't see this clashing with religion, but 'they' do. Then again, I'm not religious, although I was raised with the Bible at home and in school and know it by heart.
I guess that if you're religious, then yeah, almost by definition, you would be skeptical of assertions like these. Although, to the parent comment's point, I'm not defensive at all about looking into AI. I find it fascinating and exciting, and though I'm not trained in it, I read as much as I can about it and don't want to stop any research or questions into it at all.
But hubris - yes, it is hubris. Because there is no scientific basis for the assertion that we will cross that chasm into 'true' AI, and thus it's based just as much on faith as any religious belief. And it's hubris because they claim a scientific basis where there is none.
When there is a scientific basis or proof that we've reached (or will reach) this 'singularity', you won't see me complaining. I'm not anti-science. I just don't think it's ever going to happen.
On a semi-related note, does anyone else find it kind of odd that Ray Kurzweil's calculations for when the singularity will occur happen to land just about when his natural life will end (statistically speaking)? These projections are driven mostly by ego and faith, very little by science...
I said exactly that about Kurzweil's predictions here on HN a while ago; others have the same issue: fear of death moves their predictions toward the end of their own life. That's not weird, though; if you don't get to see it yourself, what's the point? Sure, it's nice for future generations, but that's not really how most people think.
About religion and science: the difference comes down to definitions. IF you accept some definition X as being strong AI, then when we reach it, we have a scientific reality. 'The chasm' and 'true' AI are vague in scientific terms, but in science we accept definitions of how nature works, and if you hold those definitions true, there is no reason why it won't be reached, since there is nothing 'special' in the fabric of our brains that we couldn't copy given advanced enough, ehm, take your pick: biology, nanotech, electronics, 3D printing, etc. If, however, you cannot agree on definitions and have that (to me, alien) quality of accepting mystery above all, then sure, it's all a matter of belief or not. Not a good conversation starter, as we'd be done after 2s, but heh.
I don't know tluyben, we've been having this conversation now for....24 hours or so. I think 2s is a bit of a low estimate ;)
Seriously though, I don't think the Conversation needs to end there (the conversation with a capital C - ours can end whenever we want). I do indeed believe in 'mystery above all'. I actually think that's a lovely way of putting it. Because mysteries are just unknowns, and without unknowns, what happens to scientific exploration? Do we just assume we know everything? And then the exploration stops. I'll be more explicit than that as well - I believe in God, and I am somewhat religious. I don't think that disqualifies me from any interesting conversations.
I think you're making an assumption when you say that if you hold scientific definitions true, then there is no reason why it won't be reached. Science says nothing about future certainty. It is composed of models whose intent is to reflect reality, testable hypotheses to build and refine those models, and the results of the tests of those hypotheses to validate or disprove them. We have no model (other than some vague calculations of the brain's processing power), no testable hypotheses and no results for these projections. It's not science.
But I would say I've proven you wrong that this isn't a good conversation!
You are right, it's a good convo. And one worth continuing. I also do not assume we know everything, but I think we can. And I also think it's giving up to just assign the stuff we don't understand to 'something we cannot understand or see or ever find, but that is there and is intelligent on top of all that'.
Also, like I mentioned before, I have no clue how AI would clash with the existence of a God or with religion. And so I don't understand why religious people get so upset about it. There are many things we improved on without trying to take God's place (as I guess that's what it's all about, according to religious people); when we made the wheel, did we better God's work, or try to outdo it by showing that wheels are more efficient than legs for a lot of things? I don't see the difference with copying, or even improving on, intelligence. So what that clash is I don't know, but religious people seem to get downright aggressive when you talk about strong AI, which gives me, and many others, even more incentive to just lump them in with the crazies.
I was born in the Dutch Bible belt, and I was raised with religion in school, where I had to learn verses by heart and recite them every day; the kids too slow to learn them (a small village with a lot of inbreeding) were punished for not being able to, and I was punished for asking questions like: didn't God create these people too, so why punish them for something they cannot do? My aunt used to give me physics books for my birthday written by religious professors; actual physics books with quantum mechanics and string theory. And although I did not believe in God at all from a very early age (mostly because none of the people who tried to push me into Christianity wanted to answer any critical questions), and I don't and never will understand how someone can believe in most of the things religion dictates, those books my aunt gave me showed that not everyone was a crackpot, and that there is actually no reason (and there isn't, ffs) why physics, AI or evolutionary theory would not simply rhyme with religion. They are not mutually exclusive, as so many (I cannot find another way of saying it) misguided individuals seem to think, especially in the US. I assume you are not one of them, as you don't mind critical discussion etc.
No, I'm not one of those people. And I agree with basically everything you've said here (except for the part about the possibilities of AI and the limitations of mystery - but we've covered that!). I find the closed-mindedness of many people so disappointing. I think asking these questions is so important. I also find others' experiences with religion fascinating. Thank you for sharing. My experience was extremely different - and the community I was (and am) part of doesn't strike me as closed-minded. As you note, there are many religious people who don't see a conflict between science and religion. I count myself as one of them.
Well, if you look at them from very far away and keep your ears closed, they kind of resemble birds. And they do resemble birds more than other stuff in the air. I think that analogy will probably hold up: the eventual strong AI will resemble a brain in complexity and will have a lot in common with it, but it will not be the same.
Well, no. You're right. I don't know whether that invalidates my point though. It may not be necessary to model the human brain to achieve machine intelligence. It may also be that any sufficient machine intelligence would wind up modelling the human brain to a limited but necessary degree, in a similar way that birds and airplanes share wings with similar lift effects, but different mechanics. Until there's something real to point at, I think it's semantics.
I agree with your assessment on those two avenues. One thing that throws people off is that the AI/Machine Learning community is constantly selling their models as "human like" when the models are really only inspired by the human brain by extremely loose analogies.
There is another approach, what I would call the Airplane approach, since it is to the brain what airplanes are to birds. That is, to base machine intelligence on a new kind of mathematical logic that hasn't been invented yet.
Actually, there are some areas where evolution still has an advantage. A biologist was explaining to me how artificial enzymes (man-made proteins) can at best speed up a chemical reaction by a factor of 10^3 - 10^6, while natural enzymes typically speed them up by 10^9 - 10^12. I might have got those numbers wrong, but his point was that natural enzymes beat artificial ones by a wide margin.
Is it feasible? Yes, absolutely. Are we anywhere near to 'human-level' AI? No, not by a long shot. How long will it take? Will we ever make it? Very hard to know.
Well... that definition keeps changing. By past definitions, we've already passed human-level a few times. And for some tasks we have passed it, or soon will. So what is your definition of 'human-level' intelligence? Don't copy Wikipedia; make one up in strict language (and please don't say consciousness). By a lot of the definitions people make, or have made, up, we've passed it.
I'm not going by a definition. Getting an airtight definition is pretty hard. I'm talking in a fuzzy "You'll know it when you see it" sense. Sure, we get human or even superhuman performance at various constrained tasks, and that's great. But what about say, iron man's Jarvis? Still in the realm of science fiction.
We get superhuman performance on most tasks we use computers for now; people forget that easily. Not too long ago, before electronic calculators, we used people as calculators; it was a good job for which you needed a brain, and those people would consider today's computers superhuman. A Watson hooked up to an Asimo, presented a few hundred years back (I don't think you'd actually have to go back that far; my grandparents would not see it as less than human, probably more), would be considered God himself.
And we are on HN, mostly smart people here who vastly overestimate 'normal people'. It's nice that we (me included) assume a human can be taught to do anything other humans can (within some margins), but for now this is not true either. And if we want this empirical-evidence thing going on: if a (kind of) Turing test were done with a large part of the population who had not been told they were taking one - for instance, if we let a human with an earpiece walk around a village in Arkansas, walk up to an average person, and relay the interaction for Watson (or something like it) - it would usually succeed. It would in my village, for 100% sure; I could actually write a knowledge-based script for talking to a lot of people, and they would not see the difference. So I understand what you mean, but I don't think, in a Chinese-room kind of way (and that thought experiment, as many have shown, doesn't matter), that we are that far off. When we reach your level of input/output you will 'see it', but still, because you don't have a definition, you'll deny it. I would wager that we are already there, in the 'fuzzy' sense, for at least 40% (I think it's a lot more) of the population. My grandparents, bless them, definitely think they are talking to a human when they call to book a railway ticket (which has been a steadily improving AI for a little under 20 years now); for their 'fuzzy', it's been solved and strong AI exists.