The pattern of Searle's anti-AI analogies can be generalized as follows:
- (1) We set out to show that entity A cannot possibly perform action X (in this case, A is "a future computerized AI" and X is "thinking/understanding")
- (2) Conjure up a mental image of entity B, which Searle has carefully chosen for the purposes of this argument.
- (3) Entity B is analogous in principle to entity A, albeit on a much smaller/slower/less powerful scale -- indeed, in terms of some of their important properties, we can even say they are mathematically equivalent.
- (4) Our intuition tells us that the mental image of entity B performing action X is just plain silly. (And indeed, our intuition is generally right about this, though not always for the right reasons)
- (5) Therefore, by analogy, the idea of entity A performing action X must also be just plain silly. If entity A seems to be performing action X, it must only be a simulation, nothing more.
SEARLE'S DRINKING BUDDY: So the amateur baseball league I play in recently decided to allow aluminum bats, and as a result I've hit three home runs already this season!
SEARLE: No you didn't. You only simulated hitting those home runs.
SEARLE: Listen, we know wooden bats can be used to hit home runs. Now, consider: an aluminum baseball bat can be approximated as, roughly, an aluminum cylinder, right?
BUDDY: More or less, sure.
SEARLE: Well, an unrolled paper clip is also an aluminum cylinder, right? And given a large enough paper clip, you could probably unroll it, cut it to size, hold it over the plate with both hands, swing it at a baseball, and maybe even hit the ball past the outfield fence from time to time.
BUDDY: Hmmmm... in principle, I suppose I agree, though it seems rather far-fetched.
SEARLE: (retrieves a paper clip from his pocket, unrolls it, holds it with two fingers, and swings it like a bat) So, am I playing baseball now?
BUDDY: Well, obviously you're not playing baseball, you're just pretendi--
SEARLE: Ah hah! Yes exactly, using an aluminum paper clip, one can only simulate the act of playing baseball! And we already agreed that there's no significant difference in principle between the aluminum cylinder that is the unrolled paper clip and the aluminum cylinder that is the baseball bat, you see. No no no, one could never play baseball with an aluminum bat, it would only be a simulation. A bat has to have "the right stuff" -- which in our universe, appears to be something to do with the intrinsic properties of wood.
BUDDY: You're a dick.
Okay, so I'm sure some extreme baseball purists might agree with Searle that it's not "really baseball" if you play with an aluminum bat. For them, please move on to my next example.
But for the rest of us: however many differences we might list between an unrolled paper clip and an aluminum baseball bat (different tensile properties, somewhat different shape, etc.), I would challenge any reader to claim that an unrolled paper clip is less similar to a baseball bat than the computer you are using to read this blog post is to toilet paper. John Searle wants us to swallow the clear lie that because you could in principle make a Turing-complete system out of a roll of toilet paper, there are therefore no important differences between a futuristic computer AI (far more powerful than the computer you are using to read this post) and his buttwiper-cum-Turing machine. In comparison, fantasy-Searle's assertion that there are no important differences between an unrolled paper clip and an aluminum baseball bat is, I would contend, a much smaller lie.
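The "mathematical equivalence" being traded on here is Turing equivalence: a Turing machine is just a transition table plus a tape of symbols, and the tape can be made of anything that holds symbols -- silicon, paper clips, or toilet paper squares. A minimal sketch (a hypothetical toy machine, not anyone's real implementation) makes the point that the substrate is computationally irrelevant, even though it is wildly relevant in practice:

```python
# A tiny Turing machine: the "program" is a transition table, and the "tape"
# is any mutable sequence of symbols. A Python list stands in here, but in
# principle the tape could be squares of toilet paper -- that is the whole
# (and the only) content of the equivalence claim.

def run(transitions, tape, state="start", pos=0):
    """Run the machine until it enters the 'halt' state; return the tape."""
    while state != "halt":
        symbol = tape[pos]
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write                     # write the new symbol
        pos += 1 if move == "R" else -1       # move the head left or right
    return tape

# A one-state machine that flips every bit and halts at the first blank ("_").
flipper = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run(flipper, list("0110_")))  # -> ['1', '0', '0', '1', '_']
```

The equivalence says only that the same transition table computes the same function on any substrate; it says nothing about whether a given substrate could run it at a useful speed, which is precisely the distinction Searle's analogy erases.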
If you are hung up on the mathematical equivalence between various incarnations of Turing machine, try this on for size:
JOHN SEARLE'S COLLEGE PHYSICS PROFESSOR: So you see, using techniques of advanced calculus, Albert Einstein was able to derive all of the equations that govern special relativity.
SEARLE: (raises hand) Um, I don't think so. I think Albert Einstein just simulated his derivations. He wasn't really deriving anything.
PROFESSOR: You again?! Well, what inane objection do you have this time?
SEARLE: Well see, I taught my precocious 9-year-old nephew Jeffrey to solve simple algebraic equations, like "3x + 4 = 19". He's actually quite good at it!
PROFESSOR: That's nice, your point?
SEARLE: Suppose I taught him to compute the limit of simple functions too. He's a bright kid; I'm pretty sure that given enough time I could do it. Now, I learned in one of my other classes that calculus is founded entirely on algebra and limits. Correct?
PROFESSOR: Yes, more or less....
SEARLE: So if Jeffrey were in some parallel world where he stayed nine years old forever, in principle there's no reason he couldn't apply his algebraic knowledge to the development of advanced calculus. Given infinite time of course.
PROFESSOR: I think I see where this is goi-
SEARLE: And given infinite time, remember, there's no reason -- at least not in principle -- why little Jeffrey couldn't then go on to use all of his calculus knowledge to derive the equations of special relativity. In principle. But now the coup de grâce -- do you truly think Jeffrey could ever really derive all those complex equations?
PROFESSOR: (sigh) No, John, I don't think your 9-year-old nephew Jeffrey could ever really derive the equations that govern special relativity. That mental image is silly.
SEARLE: Yes, it is silly. But how is that different from your laughable contention that Einstein derived the equations that govern special relativity using calculus? As we've seen from our Little Jeffrey thought experiment, calculus can't really be used to derive the equations after all!
PROFESSOR: (eye roll) So, moving on...
Again, the trick is to get us to agree to the in-principle plausibility of a fantastic scenario -- in this case, an immortal fourth-grader who eventually studies advanced mathematics -- and then to redirect the ludicrous absurdity of that imagined scenario against the scenario being analogized. But if we are actually paying attention, the ludicrous absurdity is due to particulars of the imagined scenario that are not shared by the original scenario, not to those which are carried over by the analogy.
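The reduction the dialogue leans on is, by the way, perfectly real, which is what makes the "in principle" step so easy to grant: a derivative really is nothing but the limit of an algebraic difference quotient, the kind of thing Jeffrey could evaluate by rote, one algebraic step at a time. A sketch (with an arbitrarily chosen function and step size, purely for illustration):

```python
# "Calculus is just algebra plus limits," made concrete: a derivative is the
# limit, as h -> 0, of the purely algebraic quantity (f(x+h) - f(x)) / h.

def difference_quotient(f, x, h):
    """Pure algebra: the slope of the secant line through x and x + h."""
    return (f(x + h) - f(x)) / h

def derivative(f, x, h=1e-6):
    """Approximate the limit as h -> 0 by evaluating at a small fixed h."""
    return difference_quotient(f, x, h)

# d/dx of x**2 at x = 3 is exactly 6; the rote procedure gets arbitrarily close.
print(derivative(lambda x: x * x, 3.0))
```

Each individual step is trivial algebra; the gulf between "each step is trivial" and "Jeffrey rederives special relativity" is exactly the gulf Searle's argument asks us to ignore.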
I think the reason so many philosophers fall for Searle's seductively snarky analogies is that philosophers are used to playing in a mental sandbox where "in principle" is all that matters. They shift around counterfactuals and possible worlds and infinite resources without a second thought to the real-world plausibility of it all -- which 99% of the time is quite appropriate in the world of philosophy. I'll say in a moment why this is one of those 1% times where philosophers cannot ignore real-world plausibility (I actually already stated it in an earlier paragraph, but I intend to expound on it and clarify), but first, a digression. (As Richard Dawkins said in Unweaving the Rainbow, "what is this life if, full of stress, we have no freedom to digress?")
As an engineer, I find Searle's mental errors positively glaring. Engineers deal day to day in real-world plausibility, and we're used to the idea that some things that are possible in principle, given infinite time and infinite resources, are in reality patently absurd.
I would like to suggest, in all humility, that being a research engineer makes me particularly qualified to understand Searle's error. The process of inventing a new idea in an engineering discipline involves frequent perceptual shifts back and forth between what is possible in theory and what is reducible to practice.
To illustrate this, I would like to recount, in the most general terms possible (I don't want to have it be like the last time I talked about my job on my blog), the process by which two of my co-workers and I developed the idea for a new invention. While ruminating about a prior conversation with my project leader, I had an idea of the form, "Wouldn't it be cool if we could do X?" Well, for clarity let me pick a concrete example, which I will intentionally choose to be as far removed from what I actually do as possible: Let's say my idea was, "Wouldn't it be cool if we could grow an apple that would naturally have the company's logo imprinted on it?" (Never mind whether this is a good idea or not; I'm just picking an example.) I had no idea how this might be technologically accomplished, but it seemed at least in principle possible, and -- in the case of the real invention, at least -- it seemed like there could be a market for it.
I brought it back to my project leader and he agreed it sounded like a cool idea, so I set about trying to figure out how it could be implemented in the real world. This was perceptual shift #1 -- from a pie-in-the-sky possibility, to the search for plausibility. After another day or so, I came up with a somewhat ad hoc, but most definitely workable idea. I took it to one of my co-workers and started a chalkboard discussion (this guy happens to strongly prefer chalkboards to whiteboards -- go figure!), and soon another team member overheard us and joined us in bouncing around ideas.
After I had described my technical implementation of how we could get apples to grow with a company logo naturally imprinted on them, both of my colleagues started to see other quite different potential uses for the underlying technology. For instance, one of them suggested (in our fantasy analogy) that a feedback loop in the technology could cause the apple to also be imprinted with its own unique individual weight as it grew. The other noticed that the same technology might be modified so that the apple appeared perfectly normal for all intents and purposes, but had organically "grown" an RFID tag inside of it (you know, to combat the terrible scourge of apple thieves on our society). It's important to mention that many of these new ideas presented their own new technical challenges, despite all of them being a riff on the technology I had diagrammed on the chalkboard.
Never mind that this is a silly and almost-certainly-impossible invention -- the actual one, which I can't speak about for obvious reasons, was (I think) far more useful and also much more implementable. But I hope the reader can suspend disbelief long enough to see that there had now been a second perceptual shift, one that began its journey in the world of the plausible and wound up back in the unlimited realm of the possible. None of the three of us would have thought of those pie-in-the-sky ideas without first being inspired by the technology I had conceived -- and I wouldn't have bothered to work out the technology if it hadn't originated in the germ of a different pie-in-the-sky idea.
I think the particular invention I am working on is ultimately fairly modest, and so there will probably be only one last perceptual shift, as we reduce these ideas to practice. But for really revolutionary inventions (digital computer, anyone?) this ricocheting between the possible and the plausible can continue for quite some time, decades even, spawning all kinds of unexpected results. Indeed, in a research engineer, the ability to shift seamlessly from a discussion of what could be done in theory, to how it might be accomplished in practice, back to what else that practice could in theory accomplish, and so on ad infinitum, is a central job qualification. So needless to say I am quite accustomed to it.
Okay, end of digression. Back to Searle. As I alluded to very early in this post, the fatal flaw in Searle's style of argument is that there is a fundamental contradiction between step #3 (where we imagine a scenario with infinite time and infinite resources) and step #4 (where we try to apply our everyday intuitions to such a scenario). The former is quite alright if we are playing in the enclosed sandbox of philosophy, where "possible worlds" can be stretched and contorted to all kinds of radical scenarios. But the latter -- the employment of our intuitions -- forces us back into reality. If philosophers want to utilize their human intuitions about what's "silly" or "absurd", rather than rely on formal reasoning, they must take into account the real world. After all, what is a philosopher's idea of "silliness" or "absurdity" based on other than her real world experience!
To put it as forcefully as possible: In a "possible world" that allows the Chinese Room to actually work -- one in which the guy in the room is immortal, the "room" contains a continent's (or more) worth of "books", and the universe goes on indefinitely and unchangingly -- none of our intuitions even remotely apply. To turn it on its head, we can imagine a possible world filled with super-intelligent immortal aliens who are quite used to carrying on conversations that span quadrillions of years -- maybe to these aliens, it is not so far-fetched that a very special roll of Turing-blinged out toilet paper might "think" in a way analogous to the way these puny short-lived humans "think". The Turingified toilet paper might take a little longer to make up its mind, but that could even be an asset rather than a weakness in such an eternal slow-moving universe. Our immortal aliens might even find it rather pathetically cute when these egotistical humans keep insisting there is something about their idiotic nigh-instantaneous lives that transcends those of the equally simple-minded yet much more patient and well-mannered Toiletpaperians!
Indeed, the mental image of a roll of toilet paper that "thinks" is arguably less fantastic than the actual practical conditions that would allow it (or the Chinese Room) to happen. I can quite easily imagine a cartoonish mental image of a roll of toilet paper with a thought bubble coming out of it saying, "Oh dear god, not again!", even if I think no such thing could exist in the real world. On the other hand, my primate brain is not capable of imagining a time span of a thousand years, let alone the millions of years (or more, probably astronomically more) it would take for the Chinese Room to answer a single question.
Thinking toilet paper, thirsty beer cans, and the Chinese Room are silly ideas -- but not for the reasons John Searle thinks they are. He tricks us into drawing a false analogy, without realizing that the important differences which divide the absurd from the plausible actually lie outside the scope of the analogy.