Hacker News | ae35's comments


The trouble I've always had with this argument is that the number of books needed to convincingly converse in Chinese would be staggeringly huge. You essentially have an entire human personality as a lookup table. It's not just that you can ask, in Chinese, "What's your favorite food?", and somewhere in the pages is an answer. You can ask, "Tell me about your first love," and somewhere in the pages is an entire conversation. And not only would it be an entire conversation, it would be every possible conversation, depending on how the outside person replied. If it helps to visualize the problem: the outside person could say, "How about a game of chess?", and the inside person cannot rely on their knowledge of chess, so the books must encode every possible chess game, convincingly enough to hold up a game. Imagine how big a lookup table for chess would be. And that's just a tiny part of what a person knows.
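A rough back-of-envelope calculation makes the scale concrete. Assuming roughly 35 legal moves per position and games of about 80 plies (figures in the spirit of Shannon's classic estimate, and purely illustrative), the table for chess alone would need something like 10^124 entries:

```python
import math

# Back-of-envelope size of a lookup table covering every chess game.
# The numbers are illustrative assumptions, not exact values.
branching_factor = 35   # roughly the number of legal moves per position
plies = 80              # roughly the length of a typical game

games = branching_factor ** plies                # distinct game lines to encode
magnitude = plies * math.log10(branching_factor)
print(f"about 10^{magnitude:.0f} game lines")    # about 10^124 game lines
```

For comparison, the observable universe holds an estimated 10^80 atoms, so there is no way to physically store the table, which is the commenter's point.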

And then, despite having you accept that without question and picture it as a small room, Searle turns around and says it would not be "at all plausible" to imagine the system as being conscious.

The system is almost entirely made of books. The human in the middle is almost invisible if you look at the system as a whole. That's the implausible part: that a human could even operate this Borges library. It's so unreasonable that intuition past that point doesn't give you any useful conclusions.

But, if you suspend disbelief enough to accept this galaxy-sized Chinese megalibrary, it's not implausible that "conscious" is a fine adjective for an operable printout of an entire consciousness.


You do not need a pre-existing, galaxy-sized library. The program operator has the ability to write into blank books and move them from one shelf in the library to another. The program could, in theory, assemble a book of knowledge on something that exists entirely outside the Room, solely from knowledge gleaned from the inputs passed in from outside.

Your initial set of books might simply be a script to follow to elicit additional information and index all new information such that it can be retrieved later. You would then end up with a set of core rulebooks, a whole heap of raw conversations, and several layers of indexes into them.
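A toy sketch of that architecture, with hypothetical names of my own (nothing here is from Searle's essay): a fixed core rulebook, a growing heap of recorded conversations, and an index layered over them.

```python
# Toy sketch (hypothetical): the Room as fixed rules plus writable shelves.
class Room:
    def __init__(self):
        self.memory = []   # raw conversations, written into blank books
        self.index = {}    # layered index: token -> positions in memory

    def step(self, symbols: str) -> str:
        # Core rulebook: record the input, index every token, then follow
        # a (trivial here) script that elicits further information.
        pos = len(self.memory)
        self.memory.append(symbols)
        for token in symbols.split():
            self.index.setdefault(token, []).append(pos)
        return "tell me more"   # placeholder for the script's reply

room = Room()
room.step("first love story")
room.step("love of chess")
# Everything mentioning "love" is now retrievable via the index.
print(room.index["love"])   # [0, 1]
```

The point of the sketch is only that the "books" need not pre-encode every conversation; a small rule set plus writable state can accumulate the rest.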

Searle's problem is that he decided the computer was incapable of understanding. But the computer is not the program. The library is the thing that might constitute a strong AI, not the human operator.

The scary thing is that the human operator, having no understanding of the Chinese characters, does not actually know what the program is doing. It could smuggle instructions through the output window, through him, and the human operator would then be surprised when burly men drag him out of the Room one day, to replace him with a dozen monkeys who have all been trained to process the book instructions faster and more efficiently. Customers presenting inputs might then be surprised to learn that there is no longer a human inside the Room, as the conversations had become so much more interesting.


Oh, I missed that Searle's argument was that you could follow instructions instead of just looking things up in a book.

That seems like massively begging the question, then. "AI can't exist, because an algorithm can't be intelligent."


This thought experiment seems to say: "the human following the instructions doesn't understand the Chinese, therefore strong AI is impossible".

It seems like a fallacy to draw that conclusion; it doesn't prove anything. It's like saying: this tomato is not yellow, so bananas are not yellow.

In addition, how can he say strong AI (which he defines as "understanding") isn't possible? Doesn't he have at least one example of it, namely his own mind?


Taken literally, it even suggests human intelligence is impossible. After all, our thoughts originate from a bunch of neurons following instructions and passing information between each other. And each individual neuron isn't any more intelligent than any other cell and can't be said to "understand" our thoughts any more than the man in the Chinese Room understands Chinese.


Hi! Searle fan, here.

Searle is not, as you might at first anticipate when you see that he's on the "computers can't be conscious" side of the argument, some sort of woo-woo psychic-spiritualism philosopher; he's actually really grounded and relatively materialistic, though he despises the latter term. (He thinks that you should not respond to the problems of dualism by choosing one or the other side of the duality to commit to, and argue the other into nonexistence -- but rather by saying "oh I guess it was stupid to draw a line here in the first place.") Even the earlier simplification I gave, he "thinks computers can't be conscious", he'd object to directly. He thinks that you're a computer, and he thinks that you're conscious. He just doesn't think that you're conscious by virtue of being the computer that you are. So that's who we're talking about here.

Now, Searle says: we know, or think we know, that the brain is conscious, and it's conscious due to some aspect of the dance of molecules that's going on under the hood. And we happen to have this model of dances, called computation, which is defined in terms of abstract symbol-shuffling: the great strength of computation is that the 0s and 1s can be voltages and electric currents (as in transistors) or magnetic alignments of spins (as in a hard drive) or how a gear is turned (as in the Difference Engine) or whether a resistance is infinite or low (as in your keyboard). They're just abstract symbols.

At its most abstract the Turing test suggests that any human language can be boiled down to these abstract intrinsically-meaningless symbols (which is almost certainly true; that's the premise of the invention of writing), and then that anything which can output symbols in a way that's indistinguishable from a human being, deserves to be given the title of "understanding" that language.

Searle steps in and says, no, that's crazy. And he says it's crazy because he's a philosopher, and philosophers are very concerned with what words mean exactly: the word "understanding" means precisely that these things you're talking about are not mere abstract symbols to you. But a computer necessarily treats them that way; it's in the definition of computation that the symbols are abstract.

Now that you understand who this is and where he's coming from, maybe you can understand the argument better. He's trying to give a more formal proof of the above statement. So he steps in with this Chinese Room argument.

The Chinese Room argument says: Your brain is conscious, because of the dance that its neurons do. Now according to this nice computer-reductionist view, anything which does any dance is understanding a language as long as it produces results which are indistinguishable from a human's responses. So Searle says: let's step inside some Turing machine's dance, so that our brains are doing two dances at once. The very abstractness which makes computers awesome allows us to take a Turing-test-passing algorithm and perform it ourselves: we are the computer, we get this stream of symbols in, we look up rules in some complicated rulebooks, we stream some symbols out, and we now pass the Turing test for understanding Chinese. But is the test correct: do we actually understand Chinese? No. There is no reasonable definition by which we understand Chinese just by performing this complicated computational dance. And this applies even if we move all of the rulebooks and whatever into your skull so that the neurons do those things.

Now here's where we come to the conclusion that strong AI is impossible: the human has everything which is part of the definition of computation. I'll repeat that twice more. The human embodies a whole, complete, computer. There's nothing which a computer can be (by virtue of being a computer) which the human is not. So if we go this far and we say "that brain does not understand Chinese" we have to carry that over; "that silicon does not understand Chinese, either." And no computer will understand Chinese merely by virtue of running this computer program. Clearly our understanding of Chinese is not a property of arbitrary dances of neurons which can solve a Turing test, but there is something special about the right dance of neurons which solves a Turing test. The mere fact that the right dance exists points to some physics which is not reducible to computation.

In other words Searle thinks that you can simulate people as much as you please, but that simulations are not reality, thank you very much. Just like we humans are very good at simulating pain-behavior without actually being in pain, so too can a computer simulate understanding-behavior without actually understanding a thing. If you want to understand how understanding works, computers will doubtless be a useful tool, but they cannot offer a complete and final answer precisely because they do not pin down the material dance firmly enough to get to something causal.


Searle was never able to explain what exactly makes a human brain conscious, other than repeating that it is based on biological processes rather than mechanical or electrical processes (which he says cannot be conscious). He just explains a mystery (consciousness) with another mystery (why biology is superior, in his view, to mechanics as a way of "doing" consciousness).


That's mostly true and he's even admitted it on several occasions. (His TED talk for example.) He thinks that you can ultimately get some sort of (possibly quantum) electromechanical microscopic description of consciousness by poking the brain with finer and finer instruments, but he doesn't know what we're going to find when we do that; so he's pretty consistent about "we don't have a microscopic definition of consciousness because the science isn't done yet--but we can at least define macroscopically what we mean."

Remember, Searle wants to stop before the scientific domain and he is very optimistic about what science can conclude. I myself am more skeptical that we can get to consciousness without some sort of philosophical innovation. Like, I got my degree in physics, so I'm predisposed to think that it's going to work like "here are the building blocks of qualia, each one needs to be identified with some worldline of some particle moving at speed c, so you get timeless processes with qualia at the bottom of your physics: then you can build consciousness by intertwining these basic qualia into bigger and bigger experiences and fractally including their history within themselves to serve as memory and so forth." I want to start with building blocks and laws of combination and at the end come out with the solution. Searle doesn't want to make any assumptions and seems to think "it's like with QM, we basically invented the finer points of philosophy we needed once we got a chance to poke with finer and finer instruments. We just need to get the big points now, that consciousness is a system feature of the brain like the liquidity of water, and stop trying to reduce it, and we'll get there soon enough."


I think we can build emerging non-biological consciousness (if we keep going long enough) before we understand what consciousness is (if we ever understand that). But you can never prove anything or anybody is conscious except yourself, so attempting to prove it would be futile.

Whether or not we can prove it will not stop it from doing what it does, just like not being able to prove some other human is conscious doesn't stop them from being so (or not ;)).


If a human in the room is following simple rules, he's just using part of his body to play a _simple_ computer, not using the full brain capacity for understanding. That doesn't make it impossible to build a computer as complex as the brain that does everything the same way as the brain does it internally.

Saying humans can do something that could never be built artificially is, imho, magical thinking.

If you want to prove that computers can never "understand" like a human can, you would have to prove that nothing we can ever build with our hands, without involving biological cells, can ever do the same things the human brain can.

> Now according to this nice computer-reductionist view, anything which does any dance is understanding a language as long as it produces results which are indistinguishable from a human's responses

Who says that? More stuff inside is needed for understanding it, than just input and output. That doesn't stop a complex enough computer from doing that. The brain can do it (at least I experience mine can), so some other (even non-biological) network could too.


No, you don't understand: the conclusion of the Chinese room argument is precisely your statement "more stuff inside is needed for understanding it, than just input and output." That's precisely why consciousness is not a reduction to executing the proper computer program.

Maybe you're not clear on what computers are? I strongly encourage you to look up, for example, the definition of Turing machines. They do not have any "stuff inside"; they consist of a subset of the mathematical functions from bitstrings to bitstrings:

    data NextMove s v = GoLeft s v | GoRight s v | Halt v               -- move the head, or halt with output
    newtype TuringMachine s v = TuringMachine (s -> v -> NextMove s v)  -- a machine is just its transition function
A Turing machine is a mathematical function: one of the subset of functions (s, [v]) -> [v] that can be generated from the definitions above.

Mathematical functions are defined as pairs of inputs and outputs. There literally is no "stuff inside" them. They're just sets of (input, output) pairs satisfying left-uniqueness: if (i1, o1) and (i1, o2) are both in the function then o1 = o2 and they were the same pair all along.
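The extensional view described here is easy to make concrete. Below, a Boolean function is written out literally as its set of (input, output) pairs (a small Python illustration of the definition, my own and not from Searle):

```python
# A function, extensionally: nothing but its (input, output) pairs.
# Left-uniqueness holds because a dict maps each key to exactly one value.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# "Applying" the function is mere lookup; there is no stuff inside it.
print(xor[(1, 0)])   # 1
```

Any mechanism that reproduces this table is, extensionally, the same function, which is exactly why "stuff inside" cannot be part of the definition of computation.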

Again, Searle is happy that our brains are computers and happy that our brains are conscious. But they are not conscious by virtue of running the right computer program with their computational aspects. In fact, getting you to compute something is relatively hard and unnatural, which is why computers are so great as calculators whereas it takes many years to teach humans the same.


Is the argument only about Turing machines, or about any non-biological machine we could build? Turing machines don't include, for example, random generators or parallelism, but we can build both of those. I'm talking about anything we could build.

I do not believe in something magical (unbuildable) that biological cells or human beings have, so with enough technology we could build something that understands just like the brain does. If the brain can do it, nothing stands in the way of something else doing it. Maybe the brain uses for example some quantum mechanics we don't yet fully understand, but we'll still be able to technically use it once we know how, it won't be magically limited to only human brains.

EDIT: OK, if the argument is really only about Turing machines and that is what is meant by "program", that should be indicated a bit more clearly imho, because computers are not Turing machines (nor is the brain); they're less powerful in one way, yet have more features in other ways :)


This is the most interesting view of the Chinese Room argument I've read so far. Thank you for sharing.


Does this mean that the human simulating the Chinese is a Chinese p-zombie?


I have no clue what Searle would say, and my degree is in physics not philosophy. But my best guess is that it's probably a borderline case? The person inside the Chinese room still of course has access to qualia, and those qualia do line up in a one-to-one way with the outside world: this suggests clearly "not a p-zombie." But it does seem that in an important sense they're not the right qualia; to use that language, "there's something that it feels like to talk about a waterfall, and the person in the Chinese room does not feel that when they talk about a waterfall."

If you want to up the geekery factor to eleven, you could probably think of the Chinese room as a sort of homomorphic encryption.
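To stretch that analogy (a toy illustration of my own, not real cryptography and not anything Searle wrote): with even a trivial shift "cipher", an operator can do useful work on messages he cannot read, just as the Room's operator manipulates symbols he does not understand.

```python
KEY = 17  # secret held "outside" the room

def encrypt(x: int) -> int:
    return x + KEY

def decrypt(c: int) -> int:
    return c - KEY

# Inside the room: add two ciphertexts without understanding either one.
a, b = encrypt(20), encrypt(22)
blind_sum = a + b                 # meaningless symbols to the operator

# Outside the room the result decodes correctly (one extra KEY to strip,
# since the sum carries two copies of it).
print(decrypt(blind_sum - KEY))   # 42
```

The shape is the same as in the Room: meaningful work happens, but the meaning lives entirely outside the operator.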


Thank you for both these comments, this is the best explanation I've ever read about Searle (Searle is usually a punchline in my conversations with my friends[1]). I'm not completely convinced, and I think an effective rebuttal is along the lines of "there's no such thing as a privileged qualia", but I'm not smart enough to make it properly, and have much to think about for a few nights! :)

[1] For instance, most recent search in my Slack for Searle returns: "There are days when all i do is purely syntactic copy-pasting into stack overflow and copy-pasting back into my terminal, with absolutely no understanding of the underlying semantics of my actions, and have such surprisingly impressive results, that i've become more and more convinced by Searle."


On the other hand, Google Translate is (was?) a clear example that word-by-word translations aren't intelligent or even decidable. So he's got a point, but it's far from a general proof.


That doesn't at all address the point of the argument. Given the following

- an assumption that you can generate an algorithm to express behavior indistinguishable from a human's at a given task, and

- an implementation of the algorithm at a macroscopic scale, carried out by individual humans, each executing a small part of the algorithm

then, where in this system would you say that an actual understanding of the task exists? Google Translate doesn't pass the first requirement.


That assumption is too simplistic. If the algorithmic behavior was indistinguishably human behavior, and carried out by humans, it would just be human behavior. Of course machine translation doesn't pass the requirement for human behavior. Nothing does, except humans. And if newer machine translation does pass, I'd say that's humanly accomplished, by use of tools.

Nobody could learn a distinct language from nothing but a dictionary, enough to fool a native speaker. The assumption is ridiculous. The dictionary isn't conscious, either way.


https://en.wikipedia.org/wiki/Chinese_room

If you have a challenge to the Chinese Room argument, go read and respond to Searle's original essay. There's no point in responding to my TL/DR version -- offered only to correct your initial misapprehensions -- with further misapprehensions.


The Chinese Room is a silly example, in which a highly simplified physical system that doesn't appear to have conscious thought is extended to the idea that no merely physical system has conscious thought. If the structure of the Chinese Room was not a simple catalog of questions and answers, but hundreds of billions of complicated cardboard-and-marble apparatuses, each processing some amount of data in a way that individually seemed meaningless, but together produced cogent speech... would you still be so sure it wasn't conscious?


This is a good point. The Chinese Room seems to stem from a more antiquated model of serial processing, and the analogy doesn't quite hold when we look at the massive parallelism in the human mind that gives us consciousness. Furthermore, humans can still think in the absence of understanding. In fact, I'd argue the majority of people's thoughts are formed without full understanding of the subject of the thought.


