John Searle’s Chinese room argument is a staple of popular philosophy and, in particular, the philosophy of artificial intelligence. Laid out in his seminal paper “Minds, Brains, and Programs,” it appears, at least at first glance, to capture our intuitive reluctance to ascribe the seemingly rich qualia present in our own human experience to “mere symbol manipulators,” as Searle puts it. That said, it is my view, and the view of many scientists and philosophers, that the argument is deeply flawed in a variety of ways. Below I will detail three of my own objections to it.
Objection 1: Misleading Analogy
Searle poses the Chinese room argument to show how ‘ridiculous’ it is that a man inside a room, manipulating what are to him meaningless symbols, could constitute consciousness. There are two main problems with this framing of the argument:
Complexity of such a Program
Searle phrases the Chinese room as some simple input-output box in which the operator manipulates a couple of symbols and returns some output. This is a vast and naïve understatement of the complexity such a program would have. For this to be a reasonable scenario, that man would either have to manipulate symbols at speeds many times greater than the speed of light allows (something which makes no physical sense), or perhaps be replaced by a whole team of billions of men, each working at a significant fraction of that limit (more plausible, but obviously still ridiculous).
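To make the scale concrete, here is a rough back-of-envelope sketch. The figures are commonly cited ballpark values (on the order of 10^14 synapses and average firing rates around 10 Hz), and the assumption of one rule lookup per second for a lone operator is mine; the point is the orders of magnitude, not the exact numbers.

```python
# Rough back-of-envelope estimate of how badly a lone rule-following
# operator undershoots the brain he is meant to stand in for.
# All figures are ballpark values, not measurements.

SYNAPSES = 1e14            # ~100 trillion synapses (order of magnitude)
FIRING_RATE_HZ = 10        # rough average firing rate per neuron

# Treat each synaptic event as one "symbol manipulation" the room must perform.
brain_ops_per_second = SYNAPSES * FIRING_RATE_HZ     # ~1e15 ops/s

OPERATOR_OPS_PER_SECOND = 1   # assume one rule lookup per second for one man

shortfall = brain_ops_per_second / OPERATOR_OPS_PER_SECOND
years_per_second = shortfall / (60 * 60 * 24 * 365)

print(f"Brain-equivalent operations per second: ~{brain_ops_per_second:.0e}")
print(f"Operator shortfall: ~{shortfall:.0e}x")
print(f"Wall-clock time to simulate one second: ~{years_per_second:.1e} years")
```

On these assumptions, a single operator would need tens of millions of years to produce one second’s worth of processing, which is the sense in which the ‘simple’ room hides an absurd amount of work.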
Now to this you might object, “yeah, but in principle we could, right?” Well sure, in principle we could, and in principle we could simulate entire universes, alternate histories of Earth, brute-force a cure for cancer, and so on by just manipulating symbols. But at this point the analogy Searle tries to make breaks down, and it is no more convincing than simply stating “machines can’t be conscious.”
The Human Mind is a Machine
However, even more pressing is the neurological objection. Searle argues that the operator of the Chinese room doesn’t understand Chinese, and that he is simply following instructions. This is a fair point, but if we are to accept it at face value, we have to do the same for the inner workings of our own minds. I imagine Searle would think that he in fact understands English, but considering that all his mental processes boil down to the statistical processes of neurons firing in response to a particular influx of inputs via his senses, and nothing more, I would ask: what differentiates him from a computer whose inner processes are also modeled as statistical processes of neurons firing in response to a particular influx of inputs? Is it that one is made of flesh and the other of silicon and transistors? That one is born of natural selection and the other of man?
Is it that my reductionist view is unsatisfactory? If so, why doesn’t the room as a whole understand Chinese? Certainly the parts of the room/brain (i.e. the physically impossible men/neurons) don’t ‘understand’ what they are doing, right? And yet the human mind as a whole does, as Searle points out. So unless there is something specific to humans, other than a capability to reason about the world and provide responses to it, the brain and the machine are indistinguishable, no? If not, then there’s something else there. Something… ephemeral. Something… bullshit.
This ‘something’ is the intuition that we have a spirit, or soul, or whatever you’d like to call it. And it’s nothing but that: an intuition. Just a product of our evolved psychology and culture. To be sure, such an intuition is useful for man to have. I imagine that a pack of gloomy and existential cavemen wouldn’t be very effective hunter-gatherers. But usefulness is not our goal in discussing such a concept, and certainly not the goal of philosophy.
Objection 2: Syntax is not Semantics
The main point Searle is trying to convince the reader of is this:
- Syntax is not sufficient for semantics.
- Programs are entirely characterized by their syntax.
- Human mental states have semantic aspects.
- Therefore, computers cannot be minds.
And this is where the fundamental problem arises again: “what is semantics, and why can’t a computer have it?” What Searle, and all others who come up with fanciful and misleading descriptions of Chinese rooms, philosophical zombies, etc., are dancing around is the age-old question of dualism vs. physicalism. If you believe in physicalism, then it is necessarily the case that there can be nothing ‘special’ about a human brain that distinguishes it from a computer: both are physical systems obeying the same physical laws, and no empirical fact contradicts this.
However, you can still save your belief that humans are somehow different or ‘special’ and have semantic mental states in a way computers can’t, if you believe in the immaterial; more specifically, that humans have some ephemeral semantic ‘object’ tied to their mental states. While this solves the problem, it is of course just a conversation-stopping “just because” type of belief that has no basis in observable fact (duh, that’s the whole point of the immaterial). At this point we may as well posit immaterial solutions to all our philosophical problems:
- “Are there moral truths?”
- “Yes they exist in an ephemeral sense that is totally unobservable and untestable, yet still apply to us and our actions.”
- “Oh that’s great, now I don’t have to give up on my intuitive beliefs that killing is wrong ‘just because’, or that eating dog is bad but eating cow is ok. How did you figure this out?”
- “I just felt that it was true. I mean, isn’t it obvious? If it wasn’t then killing wouldn’t be wrong!”
I’m certainly being facetious, but I’m also certain that any argument in favor of something similar is either a glorified version of the circular reasoning above or empirically false. Our evolved propensity to believe in something gives it no weight in any truth-bearing dimension.
Objection 3: Searle doesn’t know what a Turing Machine is
Searle’s Chinese room argument, unlike most philosophical arguments, makes statements about a mathematical theory: in particular, the theory of Turing machines. This is his downfall, as he clearly does not understand them beyond the level of simple ‘symbol manipulators’.
Richard Yee has written a paper addressing exactly this point.
The gist of it is that Searle fails to realize that there are actually two Turing machines in the Chinese room: a universal Turing machine, and the Turing machine it is simulating. His error is his conflation of the two. The box, that is, Searle and his pieces of paper, is the universal Turing machine. Its input isn’t just the Chinese text, but also the English instructions for how to manipulate that text.
The Turing machine that actually does the Chinese processing is the one he is simulating by interpreting the English instructions given to him. This simulated Turing machine’s input is indeed just the Chinese text.
So when he executes the Chinese-processing program he is doing symbol manipulation, but when he interprets the English rules that tell him how to do that manipulation, he is using his ‘intuition’ and ‘human understanding’. Searle can only introspect on the universal computation he is performing, and the argument focuses only on this purely symbolic computation. But the actual computation of interest, the simulated Turing machine responding to Chinese, is not accessible to Searle. He wouldn’t be able to tell whether it was processing inputs the way a real Chinese speaker might.
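The distinction is easier to see in code. The sketch below is mine, not Yee’s: a toy rule table plays the part of the simulated machine, and the run function plays the part of the operator slavishly executing it. The loop that applies the rules attaches no meaning whatsoever to the symbols it shuffles; whatever behaviour the machine exhibits lives entirely in the rule table it is given.

```python
# A minimal sketch of the two-machine point (a toy example, not Yee's code):
# `run` is the operator / universal machine; `rules` is the simulated machine.
# The real Chinese room would need a rule table astronomically larger than this.

# Each rule: (state, symbol_read) -> (symbol_to_write, head_move, next_state)
rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_",  0, "halt"),
}

def run(rules, tape):
    """Blindly apply the rule table until the machine halts.
    This function manipulates symbols it attaches no meaning to."""
    tape = list(tape)
    state, head = "start", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return "".join(tape)

print(run(rules, "010011_"))  # -> "101100_": the loop never knows it "flipped bits"
```

Introspecting on the run loop, as Searle in effect does, tells you nothing about what the simulated machine is doing; the interesting computation is the one defined by the rule table, and that is precisely the computation the operator has no access to.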
He is analogous to a neuron in a human mind, or a transistor in a computer chip. Just because a neuron doesn’t know Chinese doesn’t mean the system it is part of doesn’t. This is a form of the systems reply to the Chinese room argument.
Searle has replied to this view:
Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.
Clearly he still does not understand the theory and its implications, as he still believes it is the human and the paper that are supposed to know Chinese, not the Turing machine they simulate.
Yee boils Searle’s argument down to this:
- If computation alone were sufficient for human understanding of Chinese, then in principle there would be a program for it.
- Let Searle execute such a program exactly as a computer would.
- In so doing, Searle detects no human understanding of Chinese. Hence, it does not occur in the room.
This argument is clearly shortsighted, and one can make that even more apparent by replacing “Searle” with “neuron” and “room” with “skull”.
Conclusion
It would seem to me that the only thing Searle’s argument formalizes is our inherent and seemingly unshakeable bias that if consciousness is a thing, then only humans, or at most biological entities, can have it. That bias is unfounded, and it ignores the mechanical construction of ourselves and of everything else in the universe.
On top of this, it misinterprets what Turing machines are and how computation works. Indeed, upon closer inspection, his argument boils down to “human minds are different just because”.
The only way out of this issue is belief in something immaterial inherent to all humans, and while such a belief is certainly a panacea for the problems that plague philosophy, it is not exactly a satisfying one.