Apropos of nothing (actually, apropos of a drunken discussion I had with the drummer from my band last week) I thought I'd give my opinion on the plausibility of P-zombies. In a nutshell: I think P-zombies are plausible in principle, but I strongly suspect that constraints of the physical world make them ultimately impossible, even given unlimited technological capability.
Well, let me back up. Depending on how strictly you define P-zombies, they might just be completely implausible and absurd. Neurological zombies (defined by Wikipedia as a zombie that "has a human brain and is otherwise physically indistinguishable from a human; nevertheless, it has no conscious experience") are clearly nonsensical. Since we know with a high degree of certainty that conscious experience arises in the brain, the idea of two physically identical brains, one with conscious experience and one without, is just stupid. It would be like if I tried to assert that you could have a fruit that was physically identical to an orange, down to each and every molecule, except that it was actually an apple. What the fuck does that even mean? It's word salad.
But I think behavioral zombies are more interesting. Wikipedia defines a behavioral zombie as one that "is behaviorally indistinguishable from a human and yet has no conscious experience." Now we're talking. For instance, we could quite easily imagine an object that looked and tasted exactly like an orange, but was actually composed of a cloud of nanobots that manipulated your nerve cells in just the right way. Whether that kind of technology is practically achievable is irrelevant -- we know that it doesn't violate any physical laws, so it's easy to agree that there is a possible world containing such a "philosophical orange".
The question then becomes, could you have a being that looked and acted like a human, that was otherwise indistinguishable from one from the outside, and yet had no conscious experience? In other words, if you had a perfect simulation of a human (but built from different components), would that simulation necessarily have conscious experience, or is it possible that it would be a "zombie" in terms of its internal life?
As a first-level answer -- which I will revise shortly -- I am going to say that there is indeed a possible world that contains such behavioral zombies. I say this because I do not think conscious experience arises from the outputs of our brains; rather, I think it arises from the structure of our brains.
As an analogy, let us imagine two computer programs: One of them calculates the Mandelbrot fractal according to a fixed bounding box and with preset parameters, and displays it on the screen. The other reads an (uncompressed or losslessly compressed) image file containing a pre-rendered Mandelbrot fractal, does some irrelevant computations to eat up an appropriate amount of compute cycles, and then displays it on the screen. The output of each program is identical. Their effect on compute resources is identical (you could even have the first program open the same image file, but not actually read the contents). We could say they are behaviorally identical. However, we cannot say that both programs "compute a fractal". The first one does, the second one clearly does not. The property "computes fractal" is a function of the structure of the program, not of its outputs.
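To make that concrete, here's a rough Python sketch of the two programs. The pre-rendered file name and the rendering details are made up and don't matter -- what matters is that the outputs match while the structures differ:

    import numpy as np

    def compute_mandelbrot(width=400, height=400, max_iter=100):
        """Program 1: actually computes the fractal for a fixed bounding box."""
        xs = np.linspace(-2.0, 1.0, width)
        ys = np.linspace(-1.5, 1.5, height)
        c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
        z = np.zeros_like(c)
        escape = np.zeros(c.shape, dtype=int)
        for i in range(max_iter):
            mask = np.abs(z) <= 2
            z[mask] = z[mask] ** 2 + c[mask]
            escape[mask] = i
        return escape  # iteration counts, ready to be colored and displayed

    def load_mandelbrot(path="mandelbrot_prerendered.npy"):
        """Program 2: the 'zombie' -- burns some cycles, then loads a canned image."""
        for _ in range(10_000_000):
            pass                  # irrelevant busywork to eat up a comparable runtime
        return np.load(path)      # identical pixels, but nothing was computed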
Similarly, I think that the property "feels pain", for example, is a function of structure rather than of outputs. As a trivial example, I could imagine a computer program that outputs the text, "Please don't press Enter", and then if you press Enter it outputs, "Ouch! Stop it!" Clearly, this computer program does not "feel pain", even though it manifests pain-like behavior. On the other hand, it's fairly obvious that pain must arise from a physical phenomenon, particularly given the tragic but thankfully rare genetic abnormality that prevents individuals from feeling pain (these unfortunate individuals typically don't live very long, for reasons that ought to be clear to anyone who thinks about it for a moment). Or do opponents of physicalism really believe that it is these individuals' "souls" which are anomalous rather than their bodies? Please.
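In case it isn't obvious just how trivial I mean, the entire "program" could be something like this:

    print("Please don't press Enter")
    input()                    # wait for the user to press Enter anyway
    print("Ouch! Stop it!")    # pain-like output, but nothing in here feels anything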
It seems to me that all qualia, not just pain, must be a function of structure. So in principle, a behavioral zombie would be possible. Depending upon the internal structure of this perfect human simulator, we can imagine that it might not have conscious experience, even though the outputs are the same. It is "faking it", just like the program that displays a Mandelbrot fractal from a pre-generated image file, rather than computing it on demand.
But now I revise my answer: I seriously doubt that any such human simulator is possible without having an internal structure that would give rise to a phenomenon that would qualify as conscious experience. I suspect it just can't be done, not even with unlimited technological capability.
Returning once again to the Mandelbrot analogy, now let's modify the first program so that it accepts as input a bounding box and certain other parameters, e.g. how many iterations to compute before assuming a point is within the Mandelbrot set. Now the "zombie" program has a serious problem. Even if the first program is constrained to accept a finite set of possible input parameters, and the second program comes pre-packaged with all possible outputs and then accesses the correct one, it's no longer behaviorally identical to the first -- it consumes a finite-but-nigh-inconceivable amount of storage space. We're talking rooms filled with nothing but stacked optical drives holding astronomical numbers of losslessly compressed images. In fact, it would not be too difficult to expand the first program so that the number of elements in the set of possible inputs was greater than the number of particles in the known universe. Such a program would be trivial to write (hell, I've done it myself) as long as it had the property "computes fractals". But any zombie program which lacks the internal property "computes fractals" can never replicate the behavior of the real deal.
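Again, just to make the asymmetry concrete, here's a sketch of the two approaches once parameters enter the picture. The grid sizes are pulled out of thin air purely for illustration, but they're enough to show where the storage blows up:

    import numpy as np

    # The real program just generalizes the earlier one: hand it any bounding
    # box and iteration limit, and it computes the image on demand.
    def compute_mandelbrot(xmin, xmax, ymin, ymax, max_iter, width=400, height=400):
        xs = np.linspace(xmin, xmax, width)
        ys = np.linspace(ymin, ymax, height)
        c = xs[np.newaxis, :] + 1j * ys[:, np.newaxis]
        z = np.zeros_like(c)
        escape = np.zeros(c.shape, dtype=int)
        for i in range(max_iter):
            mask = np.abs(z) <= 2
            z[mask] = z[mask] ** 2 + c[mask]
            escape[mask] = i
        return escape

    # The "zombie" needs one pre-rendered image per possible input tuple.
    # Even a coarse grid of 10,000 values per coordinate and 1,000 iteration
    # limits is (10**4)**4 * 10**3 = 10**19 stored images.
    PRERENDERED = {}  # (xmin, xmax, ymin, ymax, max_iter) -> path to a stored image

    def zombie_mandelbrot(xmin, xmax, ymin, ymax, max_iter):
        return np.load(PRERENDERED[(xmin, xmax, ymin, ymax, max_iter)])

The real function is a dozen lines; the zombie's lookup table is the part that won't fit in the universe.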
Worse, even if we allow the physical constraints to grow indefinitely, programs which truly "compute fractals" will always have a leg up on those that don't. For any given "zombie fractal" program that successfully mimics the behavior of a true fractal-computing program, I can always expand the set of possible inputs to the fractal-computing program by one. In other words, for any given possible world where there can exist a "zombie fractal" program that perfectly mimics the behavior of fractal-computing program X, one can always devise a fractal-computing program Y for which no "zombie fractal" equivalent can exist in that possible world.
By the same token, it seems highly unlikely to me that any entity which lacks the internal property "has conscious experiences" could ever mimic the behavior of a human being. Even though there's no philosophical principle that rules it out, you could just never do it, for the same reason that you could never write a computer program that output fractals on demand but didn't actually compute them.[1]
Note that this does not bar the possibility of a perfect human simulator. Just as two fractal-computing programs might use different algorithms internally, the same could be said of two consciousness-experiencing entities. In fact, it's quite conceivable that you could have a perfect human simulator which had a radically different conscious experience from the human it was mimicking. But that it wouldn't have any sort of rich internal life to speak of? Seems implausible.
Incidentally, I think this same issue is the problem with John Searle's Chinese Room thought experiment. The idea that the Chinese Room "understands" Chinese seems absurd to us because we are picturing a guy flipping through a book, and we say, "How can the book itself be a mind with understanding?" But that's because the idea that a guy flipping through a book -- or even a room full of books -- could pass as someone who understood Chinese is just silly.
First of all, if the guy is just executing instructions (rather than using his own mental faculties to choose appropriate response phrases), then in order to pass a Turing test, we're talking about a ridiculous number of "books". Second of all, in order to simulate a Chinese speaker, the guy is going to have to provide responses in a reasonable amount of time. Hint: He can't, not in the traditional formulation of the Chinese Room. In fact, if the guy is just going to blindly execute instructions from "books" and still pass a Turing test, I would argue that in a real scenario, the guy would grow old and die before he'd even answered the first question.
In order to make the scenario realistic, these "books" would have to constitute an immense, thoroughly cross-referenced "library" of inconceivable size, and the "guy" executing the instructions would have to be some sort of magical demon who could zoom from entry to entry in the books with inconceivable speed. As we refine the scenario to be more and more realistic, all of a sudden the contents of the Chinese Room do start to look like something that we might reasonably call a "mind".
So the allure of the Chinese Room experiment lies in the implausible supposition of a very simple structure producing complex behavior. The Chinese Room is analogous to our "zombie fractal" program that simply stores all possible outputs of the real fractal-computing program. Either you change the internal functioning of the Chinese Room in such a way that it legitimately has a "mind" and "understanding", or else your thought experiment degenerates into something that could only exist in a universe that didn't resemble ours in the slightest (in which case our interpretations of what that would mean are pretty much irrelevant).
As one last point, I would point out that under this mental model, there could exist some entities which possess conscious experience but whose behavior could be duplicated by a "zombie". Return once again to the example of the computer program which pretends to experience pain when you hit Enter. We could, in fact, produce such a program that did experience pain, if the internal structure exhibited the proper traits. Hell, I could build one myself: Take the original computer program, but now every time the user presses Enter, it causes an electrical shock to be delivered to a human sitting in a sealed room at some remote location. The system as a whole now does have the trait "experiences pain", even though the observable inputs and outputs are easily mimicked by a system which doesn't.
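A sketch of what I mean, with the shock-delivery call being an obvious stand-in for hardware I am not actually proposing to wire up:

    def deliver_shock():
        # Hypothetical stand-in for signaling the remote apparatus that shocks
        # the person in the sealed room -- purely illustrative.
        pass

    print("Please don't press Enter")
    input()                    # same observable behavior as the earlier fake...
    deliver_shock()            # ...but the system as a whole now includes a real pain event
    print("Ouch! Stop it!")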
I bring this up because I think it clarifies my position: Behavioral zombies are maybe philosophically plausible, depending on how expansive you want to get with your definition of "possible worlds". But because the behavior of humans is so rich and complex and difficult to perfectly mimic, it would seem that a behavioral zombie is impossible in any world which even remotely resembles our own. In order to achieve those inputs and outputs, the algorithms used would by physical necessity need to be structured in such a way that they would produce "conscious experience" as a byproduct.
[1] There is a slight caveat to this: If we are going to allow "all possible worlds" to include even quite silly ones, then you could imagine a possible world in which somehow this human simulator could access enough storage space to store all possible input/output pairs of a human brain as we know it. I suppose this possible world could have a behavioral zombie of sorts. But it also would not bear any resemblance whatsoever to the universe that we live in -- it's doubtful humans could even exist in such a world. To state it more strongly, I sincerely doubt that there exists any possible world that allows the existence of human X and a behavioral zombie that mimics human X. To talk of a "possible world" where all possible input/output pairs of a human brain could be stored by brute force is no more useful to understanding our universe than it would be to assert the existence of a "possible world" where there was no such thing as matter or energy. What are you even discussing then?