Tuesday, June 29, 2010

I have been convinced of the orthogonality of "agnosticism" vs. "atheism"

A post over at Pharyngula finally pushed me over the edge into accepting the claim that agnosticism-ness is orthogonal to atheism-ness, i.e. whether or not a person is agnostic is independent of whether or not they are an atheist.

I have understood this claim for some time: those who assert it are defining agnosticism as being about knowability, while atheism is about presumptive belief1. But I was hesitant to accept it, because it seemed to me that this was not the definition by which most people understood agnosticism.

I have changed my mind. And the reason is because I do not think there is any other useful definition of agnostic.

Using the orthogonal definitions, which I now believe to be the only acceptable definitions, very few atheists are not also agnostics. A non-agnostic atheist would be the rare bird who says something along the lines of, "Your claims may be unfalsifiable, but I have received personal revelation which says they are false!" Yeah, I don't know anybody like that either. Virtually all atheists explicitly reject falsifiable gods (like Thor or Yahweh) and tentatively reject unfalsifiable gods, while recognizing the by-definition unknowability of the verity of unfalsifiable claims.

The only alternative definition I can think of is one in which an agnostic not only insists that unfalsifiable claims are unknowable, but furthermore asserts that the odds of any given unfalsifiable claim being true are close to 50/50. First of all, I doubt many people actually think that. And second of all, if anybody does think that, they are grade A stupid.

I suppose we could also define it such that an agnostic recognizes the unfalsifiability of theistic claims, and at that point immediately stops pondering the question any further. But we've already got a word for that: "incurious".

1I insert the word "presumptive" to make it clear that I am talking about the "I believe there are five beers remaining in my fridge" type of belief, not the "I believe in Jesus" or "I believe emacs is better than vi" type of belief. (though emacs is indubitably better than vi, but I digress...) If I looked in my fridge and saw six beers, I would feel comfortable revising my estimate. On the other hand, if (hypothetically speaking) someone showed me scientific evidence that vi was superior to emacs, I would be forced to assume some supernatural explanation, e.g. that Satan, as a well-known vi user, tampered with the evidence to fool us into thinking vi was better, even though the man pages tell us otherwise.

Friday, June 25, 2010

Some gentle criticisms of I Am a Strange Loop

Man, there's been a lot of philosophy on this blog lately, and I'm not sure anybody is really reading my logorrheic ramblings. Oh well, it's good to get these thoughts down anyway.

First, a brief preface: Readers may have noticed that when I have a post specifically about something Dawkins or Hitchens or one of those guys has said, it's almost always somewhat critical. The reason is that I agree with these guys so often, I don't feel it necessary to mention when I do. I only feel compelled to record my thoughts on a topic when I disagree. Moreover, those tend to be the topics that are really interesting, at least to me.

So it is with the criticisms I am about to make of Douglas Hofstadter's I Am a Strange Loop. This is a tremendously fantastic book. In particular, I will always be indebted to Hofstadter for his illuminating lay explanation of Gödel's proof of incompleteness. In addition, he conjures up some quite wonderful, vivid metaphors, which have helped clarify in my mind a picture of how consciousness can emerge from the simple neurological reactions in the human brain.

Okay, with that out of the way, I now want to turn my attention to what I think is a systematic error in Hofstadter's thinking about the human brain, one which, though it never entirely undermines this wonderful work, becomes more and more troublesome as the book progresses. (Admittedly, I am only about two-thirds of the way through, but I am on a passage that I think perfectly highlights the problem I want to address, and I don't want to lose the thought.) Very much in the spirit of Hofstadter's writing, let me begin with an elaborate analogy.

Imagine an alternate history of science where particle physics, molecular chemistry, and anatomical biology all developed in parallel at roughly the same rate. At the same time that enlightened thinkers in biology are just starting to let go of the vitalist model, physicists are working on refining the Bohr model of the atom, and molecular chemists are working out the mathematics of protein folding. (Never mind how terribly implausible this is, it's just a thought experiment!)

Biologists are increasingly convinced that vitalism is simply not plausible based on recent advances in particle physics, but they are still baffled at such things as how a cell membrane holds together, and how it is possible for these cells to "decide" to work together to build a larger organism. It is such a vexing problem that some biologists mount a reactionary defense of vitalism. (One particularly snarky biologist devises a thought experiment he calls the Chinese Marbles, observing that marbles are sort of vaguely like elementary particles, and then asking his colleagues to imagine the ridiculous image of a gargantuan Chinese man made by using marbles in place of elementary particles, built up to form marble-molecules, marble-cells, marble-organs, etc.)

Still, a brave subset of biologists, convinced by the total lack of evidence for any sort of vitalist theory, press on trying to make sense of it all. A particularly bright one by the name of Hofflas Dougstadter develops a rather rich, metaphor-filled model for how this might work. He pens a book called I Am a Strange Organ, which, among many other great achievements, makes a rock-solid case that the study of anatomical systems can pretty safely ignore particle physics -- that, in fact, trying to incorporate particle physics into our understanding of biology may add information but does not contribute one iota of comprehensibility to the picture. He even makes the bold claim that, on the level of an entire human body, the organs, the blood cells, the skeleton, the muscles, etc., are in a sense more real than the elementary particles of which they are composed. He doesn't deny (like the vitalists do) that particle physics ultimately underpins all of it, but he rightly observes that, on a biological level, causality gets a little fuzzy here -- are the particles pushing around our organs, or is it mechanisms within our organs that are pushing around the particles?

In making this great insight, however, Dougstadter has largely ignored the importance of molecular chemistry. He gives scant mention to enzymes, proteins, lipids, etc. This leads him to become quite obsessed with the idea of suspended animation. He observes that when an organism is frozen, the anatomical structure remains. The particles making up the structure may have slowed down quite a bit, but why would that matter? After all, the particles are still coming together to make the structure, so is this frozen organism a completely different type of object, or is it the same type, just with slowed-down particles? It never occurs to him that the freezing process could cause widespread havoc at the molecular level, with expanding lattices of frozen water molecules rupturing cell membranes, proteins denaturing, etc.

If you'll excuse my rather over-detailed Hofstadterian metaphor (hey, I've been reading his book; perhaps I've got a "soul shard" of Hofstadter inside my brain that is taking over my fingers and causing me to write like him!), let me now finally get around to saying what I mean by it. The real-world counterparts of the players in this story are, I'm sure, obvious. When I talk of particle physics, molecular chemistry, and anatomical biology, what I am analogizing to are levels at which we can model the brain: respectively, the level of an individual neuron; the level of major structures such as the various cortices, etc.; and the philosophical/cognitive level in which Hofstadter specializes.

Hofstadter's case for ignoring the lowest level when we attempt to understand the nature of consciousness is rock-solid. The firing of neurons tells us nothing about consciousness, except maybe to impose some physical limitations, and possibly to explain some of the frailty of our humanness. I suspect the "reductionist" view he is attacking is somewhat of a strawman (I don't think anyone seriously thinks that way) but he is very eloquent in expressing his anti-reductionist-but-still-materialist viewpoint, and I think it is one I very much agree with.

But he ignores this middle level at his peril. The specifics of the large physical structures in our brain may not be crucial for the more general idea of consciousness, but I think examining the particular instantiation of those structures in the human brain can shed important light on the kind(s) of architecture(s) that are necessary to support consciousness -- you see, while Hofstadter makes a good case that "a strange loop of perception" is necessary for consciousness, and in fact explains quite a bit of the mystery, I tend to doubt that it is sufficient. Moreover, even if the kinds of structures we see in the brain have nothing to do with general consciousness, they certainly have quite a bit to do with human consciousness!

To understand what I mean when I say that "the particular instantiation of those structures in the human brain can shed important light on the kind(s) of architecture(s) that are necessary to support consciousness", try to imagine a truly conscious, sapient AI. I think those who are serious about cognitive science and have kept up to date with the latest research are pretty much aware that such an AI is not going to simply be a vast-but-undifferentiated neural network. The software would need I/O routines, visual and auditory processing algorithms, most probably a speech and grammar subsystem, etc. The computer running it may be an undifferentiated sea of bits and gates, but the computer is not self-aware -- the software is. And the software is certainly not homogeneous. (Remember this differentiation between the homogeneous hardware and the highly structured software; I will return to it later.)

Perhaps this is closed-minded of me, but I imagine that any type of computer AI which experiences anything we might recognize as consciousness will have an internal software architecture that, at the 10,000-foot level, bears some resemblance to the architecture of our brains. Oh, it will be much neater, much less tightly coupled, and maybe some problems it will solve in a completely different way. But if it's going to possess consciousness, or at least anything like the consciousness we know, it's going to have to be pre-built with some of these components.

I hedged somewhat in the above paragraph using phrases like "as we know it" and "that we might recognize". But even though I was hedging when I talked about consciousness in general, I still think my point needs no hedging whatsoever if our goal is to understand human consciousness, with all of its unique flavors. And I think that is very much part of Hofstadter's central goal, especially when he speaks of things like "soul shards", etc.

I would perhaps be erecting a strawman if I implied that Hofstadter's argument was that all of human consciousness could be understood by the dance of symbols in this strange loop of perception he has defined. Surely he is not really implying this (though at times he seems to verge on it), but let me say why it is wrong anyway.

There is evidence, for example, that our left brain (or is it the right? I always get them backwards) is making up a constant explanatory narrative for our actions -- a post facto explanatory narrative, in fact! (The evidence comes from case studies of individuals who have undergone a corpus callosotomy, if I recall)

This "smoothing out" of our internal narrative is surely quite a critical component of our consciousness -- without it, it seems like the "I" would become dangerously unstable. We would be constantly aware of our actions seeming to arise from nowhere, from beyond our conscious control. It would be like the sensation we get at the doctor's office when she tests our knee reflex, except instead of it being this once-per-checkup physical event, it would be an all-day-every-day mental event. You might be sitting at dinner, and, being thirsty, you reach for your glass -- except instead of just being only vaguely conscious of it, as you would now, it would seem like somebody else was moving your hand. Instead of saying, "Heh, I took a drink without even thinking about it," the sensation would be more like, "I stopped paying attention to my arm for ten seconds, and something possessed it and grabbed my glass!" A person in this condition would surely go insane, and might even come to viscerally doubt their own sense of "I"-ness.

I'm sure there are many other examples. To what extent do the speech/grammar centers in our brain impose a specific type of structure on our manipulation of symbols? How about our sensory organs as a means of arbitrating the boundary of "I"? These all seem to be very important to what it means to be human and have first-person experience, and yet none of it can be formed out of an initially homogeneous dance of symbols. A particular heterogeneous structure needs to be imposed first, before consciousness can emerge.

I said I would return to the analogy of a hypothetical sapient AI, and the distinction between its largely homogeneous hardware composed of bits and logic vs. its highly structured software architecture. In a very loose sense, the design of the software can be compared to the development process encoded by our genes (I say very loosely, because I don't mean to imply DNA is anything like a blueprint or a top-down "design"... but, as the IDiots are fond of pointing out, DNA does give rise to structures that exhibit many of the properties of design, and for purposes of this analogy, that is sufficient). If we imagine a developing fetus whose brain grows at an entirely ordinary rate, but none of the neural connections ever differentiate -- it's just a vast homogeneous field of gray matter -- this baby will not be born alive, let alone conscious or with the potential for consciousness.

Again, I think it would be attacking a strawman if I implied this was literally Hofstadter's position. He is taking much of this for granted (usually with good justification!), to try and explain the sensation of "I"-ness. And at that I think he does a pretty good job overall, though as I mentioned I think his picture of consciousness could become richer if he started to think about the roles that some of these middle level structures play (e.g. the "smoothing out" of the strange loop performed by our ongoing rationalizing narrative).

But I do think that some of his positions unknowingly rely on very similar "blank slate"-like assumptions about the human mind. In particular, he takes the concept of "soul shards" to the level of asserting that the self-referential symbols representing person B in person A's brain actually constitute a "consciousness" with a first-person experience. (I was going to cite a passage, but I don't have the book with me right now -- will update this post later) I do not think he can support this statement in light of the "middle level" of neurological structure I have been referring to.

His position is that, since it is the pattern that matters rather than the substrate, a coarse self-referential model of another person can "execute" its software on your brain -- much the same way as, for example, the hardware of a Mac can emulate the hardware of a PC. It's all just a pattern of symbols, and the system for executing it does not matter.

Hogwash, I say! And the problem is in a hidden ambiguity in the italicized phrase of the previous paragraph. What exactly do we mean by "substrate"? Do we mean the vast sea of more-or-less homogeneous neurons? Well, we can't mean that if our substrate is supposed to support consciousness. This would be like taking our hypothetical sapient AI, and wiping the software. The hardware is just a "useless" sea of bits and gates. The substrate for the symbols that represent the AI software may indeed be this undifferentiated raw computational network, but the substrate for the AI's consciousness, for the AI's "I" (AI2?), is the software! The software can't work without the hardware, but the consciousness can't work without the software.

In analogizing between brain hardware/software and computer hardware/software, Hofstadter has allowed himself to make false parallels, parallels to levels that don't correspond to each other. He thinks of the totality of the physical brain as analogous to the computer hardware, and our conscious selves as analogous to the computer software1. But this analogy doesn't work, because if our physical brain were analogous to the computer hardware, it could no more support consciousness than could the "useless" sea of bits and gates into which we transformed our poor hypothetical AI when we wiped its software program in the previous paragraph.

A better analogy would be to think of the neurons in our brain as analogous to the computer hardware, the large structures in our brain (the "middle level") as analogous to the computer software, and our emergent consciousness as analogous to the current state of the computer as the sapient AI program is running. Now, every emergent phenomenon is capable of being supported by its substrate all the way up and down both levels of the analogy.

And herein lies why I think his idea of someone else's "software" executing in our own brains and having its own first-person consciousness is bunk. In my improved analogy, we do not have an analogue of our loved one's software running on our hardware; we have an analogue of our loved one's state stored within our state (albeit these are beautiful recursive strange loop-y states!). And here's the kicker: it seems tremendously unlikely that various components of the analogue of person B's state in person A's brain are being encoded/decoded and manipulated by the same analogous software that exists in both brains. Or to state it more plainly, the mapping of state-to-software in person A's analogue of person B's "soul" bears no resemblance to the original mapping -- the mapping that gave rise to consciousness.

Yes, we have within our brains a coarse-grained copy of the self-referential symbol(s) that make up our loved ones, and in that sense Hofstadter's "soul shards" idea is quite real (and uplifting, I might add). But that symbolic pattern is not being read/manipulated by the right software to cause the emergence of a conscious, first-person entity.

Let me embark on one more Hofstadterian analogy, and then I'm done. As I said before, while I mostly agree with what (I think) Hofstadter means by it, the phrase "the pattern matters and not the substrate" has some ambiguity, and can actually be quite false if you misinterpret it.

So let's say I have the blueprint for a house, and I have two houses that have been built using this blueprint. The houses are in different locations (obviously!), are made from different individual trees (obviously!), but I would assert that we could make them sufficiently identical that just about anyone would agree they are the "same" house, with all the same properties (being able to live in it being arguably the most important property). For some, the varieties of wood and the types of materials would have to be identical; others might say you could use different but similar varieties of wood and still have the "same" house. But no matter -- we can make the houses arbitrarily close, even if they are using different substrates, with the substrate in this case being defined as the foundation on which it is built, the individual trees used to build it, etc.

But is the blueprint "the same" as the house? The pattern is the same, right? So does the substrate -- which in this case means physical materials vs. a 2-dimensional diagram -- matter? Well yes it most certainly does! We can make the blueprint arbitrarily detailed, giving a 2-dimensional cross-section of not just each floor, but of every inch of elevation in the house, or even of every single micron of elevation, if you wish. The "substrate" -- as we have defined it in this case -- still matters. No matter how detailed the blueprint becomes, it will never possess the property "you can live in it" (at least not comfortably!).

So depending on how we defined "substrate" in each individual case, maybe it does matter. In the case of human "souls", you can have quite an elaborate representation of another person within your brain, and as it becomes arbitrarily rich in detail, I think it is a sufficiently good copy of a part of them that the phrase "soul shard" is quite appropriate and evocative. But due to how our brains are structured and operate, that "soul shard" will never get to inhabit, say, our visual cortex, or our somatosensory cortex -- and it seems highly likely to me there are similarly large structures in our brain that are critical to the experience of first-person consciousness, but which the "soul shard" has no access to.

No matter how big a "soul shard" we take on, our brains are wired in such a way that it can never develop an independent first-person consciousness. Our deceased loved ones do live on in a very important way, in our shared hopes and dreams and our memories of them and all the things Hofstadter describes -- but that "spark", that ineffable "I"? I just don't see it. That part is gone, because nobody is (nor is anyone currently capable of) churning the right symbols through the right software.

1For what it's worth, I would have totally bought into this analogy not two days ago, and it is Hofstadter's scintillating ideas that have brought the flaws of this analogy into focus for me -- again, I am only criticizing him at such length because he is right about so damn much, that the areas where (in my opinion) he goes wrong are certain to be quite fascinating!

Thursday, June 24, 2010

More hilarious unintentional fundie irony

A 10-year-old boy who earlier made news for his support of gay rights has been invited to serve as Grand Marshal in a Pride parade -- and the despicable American Family Association is calling it "child abuse". That's fairly predictable though. The funny part is their description of what makes it child abuse:

"It's shameful that adults would abuse a brain-washed child in this way," AFA president Tim Wildmon writes in a press release. "He's obviously just parroting the nonsense he's been told by manipulative adults..."


Wait a minute... "parroting the nonsense he's been told by manipulative adults?" Isn't that the entire point of Sunday School?

I thought they were going to say something about exposure to sexual perversion. But they're focusing on the brainwashing? Seriously?!?

If this type of "brainwashing" is child abuse, then the AFA is also forced to agree with Richard Dawkins when he says that raising a child to believe in religion is child abuse. It's the same fucking thing. Now, I personally think Dawkins' remark is maybe a little over-the-top (though I agree with the sentiment), but if the AFA wants to be logically consistent then they hav-

Oh wait. I almost just used "AFA" and "logically consistent" in the same sentence. Nevermind.

John Searle, Paperclip Baseball, and the Process of Invention

In a recent post, I gave my opinion on John Searle's Chinese Room thought experiment -- basically, that it's a trick that gets you to ignore relevant questions of scale. I am currently reading Douglas Hofstadter's I Am a Strange Loop, where he also takes Searle to task for similarly flippant dismissals of the possibility of legitimately conscious AI, such as an analogy to a roll of toilet paper with dots and Xs printed on it, representing a Turing machine. Of course this snide metaphor makes exactly the same error as the Chinese Room thought experiment, so much so that it inspired me to think about it some more and to expand on what I said previously.

The pattern of Searle's anti-AI analogies could be generalized thusly:
  1. We set out to show that entity A cannot possibly perform action X (in this case, A is "a future computerized AI" and X is "thinking/understanding").
  2. Conjure up a mental image of entity B, which Searle has carefully chosen for the purposes of this argument.
  3. Entity B is analogous in principle to entity A, albeit on a much smaller/slower/less powerful scale -- indeed, in terms of some of their important properties, we can even say they are mathematically equivalent.
  4. Our intuition tells us that the mental image of entity B performing action X is just plain silly. (And indeed, our intuition is generally right about this, though not always for the right reasons.)
  5. Therefore, by analogy, the idea of entity A performing action X must also be just plain silly. If entity A seems to be performing action X, it must only be a simulation, nothing more.
Again, what this ignores is that scale matters, particularly when we remember that step #4 invokes our intuitions. Assuming infinite time or some other infinite resource in order to avoid problems of scale invalidates our intuitions, because our intuitions are firmly rooted in a world with finite time and finite resources. I will return to this point about the invalidity of intuition in Searle's thought experiments, but first allow me to extend this pattern of argumentation to a couple of hypothetical conversations between Searle, his drinking buddy, and his college physics professor.

SEARLE'S DRINKING BUDDY: So the amateur baseball league I play in recently decided to allow aluminum bats, and as a result I've hit three home runs already this season!

SEARLE: No you didn't. You only simulated hitting those home runs.

BUDDY: Um....

SEARLE: Listen, we know wooden bats can be used to hit home runs. Now, consider, an aluminum baseball bat can be approximated by saying it is, roughly, an aluminum cylinder, right?

BUDDY: More or less, sure.

SEARLE: Well, an unrolled paper clip is also an aluminum cylinder, right? And given a large enough paper clip, you could probably unroll it, cut it to size, hold it over the plate with both hands, swing it at a baseball, and maybe even hit the ball past the outfield fence from time to time.

BUDDY: Hmmmm... in principle, I suppose I agree, though it seems rather far-fetched.

SEARLE: (retrieves a paper clip from his pocket, unrolls it, holds it with two fingers, and swings it like a bat) So, am I playing baseball now?

BUDDY: Well, obviously you're not playing baseball, you're just pretendi--

SEARLE: Ah hah! Yes exactly, using an aluminum paper clip, one can only simulate the act of playing baseball! And we already agreed that there's no significant difference in principle between the aluminum cylinder that is the unrolled paper clip and the aluminum cylinder that is the baseball bat, you see. No no no, one could never play baseball with an aluminum bat, it would only be a simulation. A bat has to have "the right stuff" -- which in our universe, appears to be something to do with the intrinsic properties of wood.

BUDDY: You're a dick.

Okay, so I'm sure some extreme baseball purists might agree with Searle that it's not "really baseball" if you play with an aluminum bat. For them, please move on to my next example.

But for the rest of us: however many differences we might point out between an unrolled paper clip and an aluminum baseball bat (different tensile properties, somewhat different shape, etc.), I would challenge any reader to say that the computer you are using to read this blog post is more similar to toilet paper than an unrolled paper clip is to a baseball bat. John Searle wants us to swallow the clear lie that, because you could in principle make a Turing-complete system out of a roll of toilet paper, there are therefore no important differences between a futuristic computer AI (far more powerful than the computer you are using to read this post) and his buttwiper-cum-Turing machine. In comparison, fantasy-Searle's assertion that there are no important differences between an unrolled paper clip and an aluminum baseball bat is actually a much smaller lie, I would contend.
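
For what it's worth, the mathematical equivalence itself is perfectly real, and it really is independent of the medium. Here's a toy sketch in Python -- entirely my own throwaway example, nothing taken from Searle or Hofstadter -- of the same trivial Turing machine (binary increment) run on two differently-labelled tapes. The medium never changes the answer; the only thing that changes as the program gets more interesting is the number of steps you'd have to sit through, which is precisely the thing Searle's analogies wave away.

    def run_tm(rules, tape, head, state="carry", blank="_"):
        """Run a Turing machine until it halts; return the final tape and the step count."""
        steps = 0
        while state != "halt":
            symbol = tape.get(head, blank)
            write, move, state = rules[(state, symbol)]
            tape[head] = write
            head += move
            steps += 1
        return tape, steps

    # Binary increment: start on the least significant bit and push a carry leftward
    # until a 0 (or a blank) absorbs it.
    rules = {
        ("carry", "1"): ("0", -1, "carry"),
        ("carry", "0"): ("1", 0, "halt"),
        ("carry", "_"): ("1", 0, "halt"),
    }

    class ToiletPaperTape(dict):
        """Same mapping interface; pretend each cell is a square of toilet paper."""

    silicon = {0: "0", 1: "1", 2: "1"}      # reads "011", i.e. 3
    quilted = ToiletPaperTape(silicon)      # an identical tape on a much sillier medium

    print(run_tm(rules, silicon, head=2))   # ({0: '1', 1: '0', 2: '0'}, 3) -- "100", i.e. 4
    print(run_tm(rules, quilted, head=2))   # identical answer, identical step count

Swap in a rule table complicated enough to hold up its end of a conversation and the answer is still medium-independent -- but the step count becomes the entire story.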

If you are hung up on the mathematical equivalence between various incarnations of Turing machine, try this on for size:

JOHN SEARLE'S COLLEGE PHYSICS PROFESSOR: So you see that, using techniques of advanced calculus, Albert Einstein was able to derive all of the equations that govern special relativity.

SEARLE: (raises hand) Um, I don't think so. I think Albert Einstein just simulated his derivations. He wasn't really deriving anything.

PROFESSOR: You again?! Well, what inane objection do you have this time?

SEARLE: Well see, I taught my precocious 9-year-old nephew Jeffrey to solve simple algebraic equations, like "3x + 4 = 19". He's actually quite good at it!

PROFESSOR: That's nice, your point?

SEARLE: Suppose I taught him to compute the limit of simple functions too. He's a bright kid; I'm pretty sure that given enough time I could do it. Now, I learned in one of my other classes that the foundation of calculus is all based on algebra and limits. Correct?

PROFESSOR: Yes, more or less....

SEARLE: So if Jeffrey were in some parallel world where he stayed nine years old forever, in principle there's no reason he couldn't apply his algebraic knowledge to the development of advanced calculus. Given infinite time of course.

PROFESSOR: I think I see where this is goi-

SEARLE: And given infinite time, remember, there's no reason -- at least not in principle -- why little Jeffrey couldn't then go on to use all of his calculus knowledge to derive the equations of special relativity. In principle. But now the coup de grâce -- do you truly think Jeffrey could ever really derive all those complex equations?

PROFESSOR: (sigh) No, John, I don't think your 9-year-old nephew Jeffrey could ever really derive the equations that govern special relativity. That mental image is silly.

SEARLE: Yes, it is silly. But how is that different from your laughable contention that Einstein derived the equations that govern special relativity using calculus? As we've seen from our Little Jeffrey thought experiment, calculus can't really be used to derive the equations after all!

PROFESSOR: (eye roll) So, moving on...

Again, the trick is to agree to the in principle plausibility of a fantastic scenario -- in this case, an immortal fourth-grader who eventually studies advanced mathematics -- and to then redirect the ludicrous absurdity of the imagined scenario in order to invalidate the scenario being analogized. But if we are actually paying attention, the ludicrous absurdity is due to particulars of the imagined scenario that are not shared by the original scenario, rather than those which are carried over from the analogy.

I think the reason so many philosophers fall for Searle's seductively snarky analogies is that philosophers are used to playing in a mental sandbox where "in principle" is all that matters. They shift around counterfactuals and possible worlds and infinite resources without a second thought to the real-world plausibility of it all -- which 99% of the time is quite appropriate in the world of philosophy. I'll say in a moment why this is one of those 1% of times where philosophers cannot ignore real-world plausibility (I actually already stated it in an earlier paragraph, but I intend to expound on it and clarify), but first, a digression. (As Richard Dawkins said in Unweaving the Rainbow, "what is this life if, full of stress, we have no freedom to digress?")

To an engineer like me, Searle's mental errors seem positively glaring. Engineers deal day to day in real-world plausibility, and we're used to the idea that some things that are in-principle possible, given infinite time and infinite resources, are in reality patently absurd.

I would like to assert, in all humility, that I think I am particularly qualified to understand Searle's error as a result of being a research engineer. The process of inventing a new idea in an engineering discipline involves frequent perceptual shifts back and forth between what is possible in theory and what is reducible to practice.

To illustrate this, I would like to recount, in the most general terms possible (I don't want to have it be like the last time I talked about my job on my blog), the process by which two of my co-workers and I developed the idea for a new invention. While ruminating about a prior conversation with my project leader, I had an idea of the form, "Wouldn't it be cool if we could do X?" Well, for clarity let me pick a concrete example, which I will intentionally choose to be as far removed from what I actually do as possible: Let's say my idea was, "Wouldn't it be cool if we could grow an apple that would naturally have the company's logo imprinted on it?" (Never mind whether this is a good idea or not, I'm just picking an example.) I had no idea how this might be technologically accomplished, but it seemed at least in principle possible, and -- in the case of the real invention, at least -- it seemed like there could be a market for it.

I brought it back to my project leader and he agreed it sounded like a cool idea, so I set about to try to figure out how it could be implemented in the real world. This was perceptual shift #1 -- from a pie-in-the-sky possibility, to the search for plausibility. After another day or so, I came up with a somewhat ad hoc, but most definitely workable idea. I brought it back to the first co-worker and started a chalkboard discussion (this guy happens to strongly prefer chalkboards to whiteboards -- go figure!), and soon another team member overheard us and joined us in bouncing around ideas.

After I had described my technical implementation of how we could get apples to grow with a company logo naturally imprinted on it, both of my colleagues started to see other quite different potential uses for the underlying technology. For instance, one of them suggested (in our fantasy analogy) that a feedback loop in the technology could cause the apple to also be imprinted with its own unique individual weight as it grew. The other noticed that the same technology might be modified so that the apple appeared perfectly normal for all intents and purposes, but it had organically "grown" an RFID tag inside of it (you know, to combat the terrible scourge of apple thieves on our society). It's important to mention that many of these new ideas presented their own new technical challenges, despite all of them being a riff off of the technology I had diagrammed on the chalkboard.

Never mind that this is a silly and almost-certainly-impossible invention -- the actual one, which I can't speak about for obvious reasons, was (I think) far more useful and also much more implementable. But I hope the reader can suspend disbelief long enough to see that there had now been a second perceptual shift, one that started its journey in the world of the plausible and wound up back in the unlimited realm of the possible. None of the three of us would have thought of those pie-in-the-sky ideas without first being inspired by the technology I had conceived -- and I wouldn't have bothered to work out the technology if it hadn't originated in the germ of a different pie-in-the-sky idea.

I think the particular invention I am working on is ultimately fairly modest, and so there will probably be only one last perceptual shift, as we reduce these ideas to practice. But for really revolutionary inventions (digital computer, anyone?) this ricocheting between the possible and the plausible can continue for quite some time, decades even, spawning all kinds of unexpected results. Indeed, in a research engineer, the ability to seamlessly shift between a discussion of what could be done in theory, to how it might be accomplished in practice, back to what else that practice could in theory accomplish, and so on ad infinitum, is a central job qualification. So needless to say I am quite accustomed to it.

Okay, end of digression. Back to Searle. As I alluded to very early in this post, the fatal flaw in Searle's style of argument is that there is a fundamental contradiction between step #3 (where we imagine a scenario with infinite time and infinite resources) and step #4 (where we try to apply our everyday intuitions to such a scenario). The former is quite alright if we are playing in the enclosed sandbox of philosophy, where "possible worlds" can be stretched and contorted to all kinds of radical scenarios. But the latter -- the employment of our intuitions -- forces us back into reality. If philosophers want to utilize their human intuitions about what's "silly" or "absurd", rather than rely on formal reasoning, they must take into account the real world. After all, what is a philosopher's idea of "silliness" or "absurdity" based on other than her real world experience!

To put it as forcefully as possible: In a "possible world" that allows the Chinese Room to actually work -- one in which the guy in the room is immortal, the "room" contains a continent's (or more) worth of "books", and the universe goes on indefinitely and unchangingly -- none of our intuitions even remotely apply. To turn it on its head, we can imagine a possible world filled with super-intelligent immortal aliens who are quite used to carrying on conversations that span quadrillions of years -- maybe to these aliens, it is not so far-fetched that a very special roll of Turing-blinged out toilet paper might "think" in a way analogous to the way these puny short-lived humans "think". The Turingified toilet paper might take a little longer to make up its mind, but that could even be an asset rather than a weakness in such an eternal slow-moving universe. Our immortal aliens might even find it rather pathetically cute when these egotistical humans keep insisting there is something about their idiotic nigh-instantaneous lives that transcends those of the equally simple-minded yet much more patient and well-mannered Toiletpaperians!

Indeed, the mental image of a roll of toilet paper that "thinks" is arguably less fantastic than the actual practical conditions that would allow it (or the Chinese Room) to happen. I can quite easily imagine a cartoon-ish mental image of a roll of toilet paper with a thought bubble coming out of it saying, "Oh dear god, not again!", even if I think no such thing could exist in the real world. On the other hand, my primate brain is not capable of imagining a time span of a thousand years, let alone a span of the millions of years (or more, probably astronomically more) it would take for the Chinese Room to answer a single question.
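
Just to make the scale problem concrete, here's some back-of-envelope arithmetic in Python. Every number is invented -- pick your own guess for how many hand-executed rule lookups a single sentence of fluent Chinese would require; the conclusion barely depends on it.

    seconds_per_lookup = 5                      # flipping pages, copying symbols by hand (made up)
    seconds_per_year   = 60 * 60 * 24 * 365

    for lookups_per_reply in (10**6, 10**9, 10**12):
        years = lookups_per_reply * seconds_per_lookup / seconds_per_year
        print(f"{lookups_per_reply:>16,} lookups -> {years:>12,.1f} years per sentence")

    # Even the absurdly charitable low guess gives a couple of months per sentence;
    # the less silly guesses give lifetimes to geological ages.

Intuitions calibrated on conversations measured in seconds have nothing useful to say about that regime.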

Thinking toilet paper, thirsty beer cans, and the Chinese Room are silly ideas -- but not for the reasons John Searle thinks they are. He tricks us into drawing a false analogy, without realizing that the important differences which divide the absurd from the plausible actually lie outside the scope of the analogy.

Wednesday, June 23, 2010

Earthquake!

I just felt an earthquake for the first time in my life! Exciting!

Upstate New York is not particularly seismically active, and though there have been a few earthquakes during my lifetime that I could, in principle, have felt, until now I was always asleep or otherwise didn't happen to notice them.

They are saying it is a fairly big one -- magnitude 5.5! -- centered on the Ontario/Quebec border. For perspective, the last time an earthquake of that magnitude occurred in that seismic zone was 1944.

Sadly, I didn't realize I was feeling an earthquake while it was in progress. I thought somebody was doing some work on the ceiling downstairs! I was about to go down and tell them I was worried they were going to collapse the floor if they didn't stop, and then the earthquake stopped. I didn't realize what it was until I got an e-mail from a friend about it. heh...

Monday, June 21, 2010

A potential explanation for heterogeneity of hereditary propensity for religiosity/credulity?

I was making a comment over at Bruce Hood's blog and a thought occurred to me that I think is worthy of its own blog post.

Pardon me for failing to dig up a citation for this, but there appears to be growing evidence that propensity for religiosity and other o'er-credulous beliefs has a hereditary component. In other words, there could be genes that make you more likely to believe without evidence (or to state it conversely, there could be genes that make you more skeptical).

One problem with this is that it takes very specific conditions to allow genotypic heterogeneity in a population. If there is a clear selective advantage afforded to one phenotype over another, then under normal conditions the corresponding genotype should eventually come to dominate the entire population.

Now, I'm totally a layperson in this regard, and I won't pretend that I understand the mathematics behind what enables heterogeneity, nor that I could list more than a couple of conditions that allow it. But I happen to know offhand that one way you can get heterogeneity is via a host/parasite relationship.

The idea goes like this: Host phenotype A is resistant to parasite phenotype X, but vulnerable to parasite phenotype Y. Host phenotype B is the reverse, i.e. resistant to Y but vulnerable to X. All other things being equal, the prevalence of each phenotype in each population will tend to oscillate, with shifted (or is it opposite?) phases. If the parasite population is dominated by X, then natural selection will cause the host population to tilt towards A... after which natural selection then causes the parasite population to tilt to Y, causing the host population to shift to B, etcetera ad infinitum.
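
If it helps, here's that oscillation as a toy sketch in Python. The selection coefficients and starting frequencies are completely made up; the only point is that the two frequencies end up chasing each other around rather than either one going to fixation.

    def step(pA, pX, s=0.3, t=0.3):
        """One generation of frequency-dependent selection (made-up coefficients).
        Host A is vulnerable to parasite Y; host B is vulnerable to parasite X."""
        pB, pY = 1 - pA, 1 - pX
        wA, wB = 1 - s * pY, 1 - s * pX    # each host suffers from the parasite it can't resist
        wX, wY = 1 + t * pB, 1 + t * pA    # each parasite thrives on the host it can infect
        pA = pA * wA / (pA * wA + pB * wB)
        pX = pX * wX / (pX * wX + pY * wY)
        return pA, pX

    pA, pX = 0.6, 0.5                      # starting frequencies of host A and parasite X
    for gen in range(201):
        if gen % 20 == 0:
            print(f"gen {gen:3d}: host A = {pA:.2f}, parasite X = {pX:.2f}")
        pA, pX = step(pA, pX)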

I'm sure there's an evolutionary biologist reading this right now screaming about the horrible state of evolution education in America's schools, but I think I at least got the gist of it... maybe?

Okay, so now on to inherited credulity... one account for why humans are so damn credulous, given by Michael Shermer, is that Type I errors (false positives) tend to be far less costly than Type II errors (false negatives), therefore natural selection will favor individuals who tend to be biased towards Type I errors. Shermer's favored example is the rustle in the grass that might be a tiger. If you assume it's a tiger, and you're wrong, then you waste a little time and energy fleeing from a predator that wasn't there. If you assume it's the wind and you're wrong, then you are dinner.

One thing that's always bothered me about Shermer's account is that he rarely, if ever, explicitly brings probability into it. It's not enough that the cost of a Type I error is less than that of a Type II error -- what matters is that the cost of a Type I error, multiplied by the probability of committing one, is less than the equivalent expected cost for a Type II error. Presumably, most rustles in the grass are in fact not tigers, so for his account to truly show that natural selection would favor "patternicity" requires a bit more than what he has explicitly stated.
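
To put some invented numbers on that: even if only one rustle in a thousand really is a tiger, fleeing can still win on expected cost, because the two kinds of error are so lopsided.

    p_tiger    = 0.001       # chance a given rustle is actually a tiger (invented)
    cost_flee  = 10          # time/calories wasted on a false alarm (invented units)
    cost_eaten = 1_000_000   # fitness cost of being dinner (invented units)

    expected_cost_of_fleeing  = (1 - p_tiger) * cost_flee    # you fled and it was only wind
    expected_cost_of_ignoring = p_tiger * cost_eaten         # you shrugged and it was a tiger

    print(expected_cost_of_fleeing, expected_cost_of_ignoring)   # ~10 vs 1000
    print(expected_cost_of_fleeing < expected_cost_of_ignoring)  # True: twitchiness pays

Of course, Shermer's conclusion only goes through here because I stacked the costs and probabilities that way -- which is exactly the step his account glosses over.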

It was while complaining about Shermer's omission on Bruce Hood's blog that it occurred to me that we might have a potential solution here for the problem of heterogeneity in inherited credulity. I now copy-paste from my comment on Bruce's blog, with minor edits:

It's just conceivable that there could be an interaction between patternicity and certain environmental conditions (e.g. prevalence of food, prevalence of predation, etc.) that mirrors the interaction between host and parasite — an interaction we already know to mathematically support heterogeneity in a population.

If you'll pardon a "Just So" flight of fancy on my part: We could imagine a population of prey organisms where some individuals are "strongly" biased towards Type I errors, and others are "weakly" biased (whatever that would mean, but run with me here for a second). If conditions of predation were static, we’d expect one of the two phenotypes to become dominant. But if the local density of predators has a cyclic fluctuation, then the ratio of the two phenotypes in the prey population could conceivably track that cycle. In years with an unusually high-density predator population, the "weakly"-biased individuals tend to get eaten before they can reproduce, favoring the "strongly"-biased individuals; while in years with a low predator population, the "weakly"-biased individuals are able to gather more food than their cousins and therefore out-reproduce them.

Furthermore, this very shift could in itself exert a selective influence on the predator population. As the prey population becomes dominated by "weakly"-biased individuals, it's boom time for the predators. Those dumb prey animals just stand there like they didn't even notice that rustling in the grass. The predator population grows, which as we already postulated (remember, this is all wild speculation) will cause the "strongly"-biased phenotype to be favored in the prey population. Suddenly it seems like the prey have smartened up, as they flee even before it's possible they could have even seen the predator. More predators starve, their population wanes, and the cycle begins again with the "weakly"-biased prey being favored.
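
Since I'm speculating wildly anyway, here's the same "Just So" story as a toy Python sketch, with every coefficient invented: predator density and the fraction of "strongly"-biased prey push each other around, and the two quantities end up chasing each other in a cycle rather than settling into a single ratio.

    def year(strong, predators, a=0.8, b=0.5):
        """strong = fraction of prey strongly biased toward false alarms;
        predators = predator density in arbitrary units; a and b are invented."""
        # When predators are thick on the ground, the jumpy prey survive to breed;
        # when predators are scarce, the relaxed foragers out-reproduce them.
        w_strong = 1 + a * (predators - 0.5)
        w_weak   = 1 - a * (predators - 0.5)
        strong = strong * w_strong / (strong * w_strong + (1 - strong) * w_weak)
        # Predators boom when the prey are mostly relaxed, and starve when they're jumpy.
        predators = min(1.0, max(0.0, predators + b * (0.5 - strong)))
        return strong, predators

    strong, predators = 0.5, 0.8
    for y in range(41):
        if y % 4 == 0:
            print(f"year {y:2d}: strongly-biased prey = {strong:.2f}, predators = {predators:.2f}")
        strong, predators = year(strong, predators)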

There's a boatload of math required before we could state that it's truly plausible, but it sounds to me to be at least possible in principle. IANAEvolutionary Biologist, nor do I have any prayer of working out the mathematical model myself, so for now this will have to stand as merely another "Just So" story by an armchair biologist.

Hey, wait a minute, postulating a superficially-plausible-sounding evolutionary explanation for human behavior, and then throwing it out there like it's fact without even bothering to test it for plausibility against a mathematical model? I think I've found a second career in evolutionary psychology!

Saturday, June 12, 2010

Dawkins is not a neurologist, and neither am I, so let's have at it!

Richard Dawkins has a new featured article on his website that contains some musings about a subject in which he is very much not an expert: neurology. Since I am not an expert in neurology either, I suppose that makes me eminently qualified to play the other side. And in any case, it touches on a subject that is highly relevant to my most recent post: qualia. So let the battle of the Armchair Neurologists commence!

Dawkins' main idea is that, if we assume that there is no absolute binding between the physical wavelengths of light and the qualia of color (and this seems a highly reasonable assumption), then it also stands to reason that a red-green colorblind person has "qualia receptors" (my phrase, not Dawkins') that they have never used. If one could electrically stimulate that part of the brain, so Dawkins suggests, the colorblind person would have the unique experience of "seeing" a totally new color that they could never have imagined, thus shedding light on some significant philosophical issues. (I'm surprised Dawkins never thought to explicitly cite Mary's Room, since that's basically the core thought experiment of the philosophical problem he is describing.)

I'm mostly with Dawkins on all of this -- and I will give my consonant opinion on Mary's Room at the end of this post, which is weird timing because I was going to put it in the last post but decided it was too long and rambling already -- but from what little I know of neurology, I suspect his idea about the neurological experiment on a colorblind person will be unable to produce the desired results.

As I mentioned before, I think it's a quite safe assumption that there is no absolute binding between physical phenomena and the qualia they stimulate in the brain -- though in practice I imagine that the qualia I experience when I see the color "red" are pretty damn similar to what most humans experience. The alternative is certainly possible, i.e. that each person's wavelength/qualia mapping is clean-sheet original, but it seems far more likely that, within a species, those mappings are going to be pretty similar from individual to individual. Admittedly I am basing this mostly on intuition -- but I do think there is one shred of evidence, namely that we all tend to mostly agree on which colors are similar to each other. If our individual wavelength/qualia mappings had no relationship to each other whatsoever, then maybe we'd all agree which wavelength was called "red", but we might argue vehemently about whether that color was more similar to "magenta" or "chartreuse". On the other hand, Dawkins' speculation about bats "seeing" color (by co-opting "qualia receptors" that serve vision in our brains, and using them to process auditory stimuli) is eminently reasonable. While I imagine that organisms with nearly identical genetic makeup have very similar stimuli/qualia mappings, all bets are off when you are comparing two different species.

Where my understanding of neurology diverges from Dawkins' is on the idea that a red-green colorblind person has significant patches of unused "qualia receptors". This seems highly unlikely to me based on the way the brain tends to self-organize. It seems likely to me that the visual qualia receptors for these colors would simply have been "invaded" by connections for nearby colors.

Allow me, if you will, to engage in a bit of wild speculation and story-telling here, which is probably complete bullshit, as it is nothing more than blind intuition based on a layman's understanding of brain development, paired with a hazy recollection of a brief primer on the neurology of vision I got during a 3-day crash course in color science several years ago. With that caveat out of the way, I imagine that the development of color vision -- at least in terms of these theoretical "qualia receptors" -- goes something like this:

I imagine there is a field of cells somewhere which serves a function analogous to the cortical homunculus, except that it represents the qualia of color (and in fact, it seems not unlikely that this field of cells could be a part of the neurological correlate to the cortical homunculus itself). In early life, our genetic program establishes a binding at certain key points on this "visual homunculus", if you will. Since our sensation of hue is two-dimensional (red-green/yellow-blue), perhaps our DNA is "hard-coded" (I analogize liberally here) to wire the signal for 100% red to one region, 100% green to a region spaced as far away as possible, and 100% yellow and 100% blue to regions in between. This establishes four "calibration points" which, since they are hard-coded in our genetic program, are more or less the same among all normal individuals.

As we experience color as infants, the brain automatically populates the gaps in the "visual homunculus" in relation to the calibration points. For instance, a color sensation that mildly stimulated both the 100% red and 100% blue cells could cause a new, stronger wiring to be built at 50% red and 50% blue. As more and more colors are experienced, the gaps are gradually filled in -- according to the calibration points, but also with the sloppiness and imprecision that is typical of biological systems. There's nothing in the genetic program that says, "The visual signal for 75% red/25% yellow shall be routed to this approximate region of qualia receptors" -- it just winds up (approximately) there over time, as new stimuli/qualia bindings are populated by recognizing their proximity to existing stimuli/qualia bindings.

I'm sure if any neurologist reads this they are already cringing, but at the risk of making myself look even stupider, let me plow further ahead into territory in which I am completely unqualified to speak: In this fable I have concocted, what happens to a red-green colorblind individual? Their initial four calibration points get wired up just like the genetic program says, but then no stimuli with a red or green component are received. This has two effects: First, the intermediate qualia receptors get populated with wirings based strictly on stimuli from the yellow-blue spectrum. Second, since we know that the strength of neural connectivity is primarily a function of use, the initial red-green calibration bindings would have a tendency to atrophy, with those regions of the "visual homunculus" perhaps even being invaded and overtaken by intermediate yellow-blue spectrum bindings.

In this fantasy world, what would happen if Dawkins' experiment were performed? What if we stimulated in a red-green colorblind person the region of the "visual homunculus" that was supposed to have been hard-wired to 100% green? He would simply see something pretty similar to what he normally sees when shown the color green -- a hue somewhere intermediate between yellow and blue. Counterintuitively, the qualia he experiences might actually be akin to the qualia I experience when I see the color green -- but he would have no frame of reference to recognize that this was distinct from the qualia he and I experience when we see the color red. In fact, given the plasticity of the brain, maybe colorblind individuals experience the qualia for the color red and green simultaneously when they see either green or red. Who knows, I'm just making shit up anyway, right?
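
For the hell of it, here is that fable as a toy Python sketch -- a crude Kohonen-style self-organizing map of my own devising, with every number invented and zero pretense of neurological accuracy. It has the four hard-coded calibration points, an "infancy" consisting of nothing but yellow-blue stimuli, and then Dawkins' probe of the region that was genetically earmarked for green.

    import math, random

    N = 21                                    # units along a 1-D strip of "qualia receptors"
    units = [[0.0, 0.0] for _ in range(N)]    # each unit holds an opponent-color code (rg, yb)
    # Genetically "hard-coded" calibration points: red and green as far apart as
    # possible, yellow and blue in between.
    for i, color in {0: [1.0, 0.0], 20: [-1.0, 0.0], 7: [0.0, 1.0], 14: [0.0, -1.0]}.items():
        units[i] = list(color)

    def experience(color, lr=0.2, sigma=3.0):
        """Wire the map a little toward an incoming color, around its best-matching unit."""
        best = min(range(N), key=lambda i: (units[i][0] - color[0]) ** 2
                                         + (units[i][1] - color[1]) ** 2)
        for i in range(N):
            g = lr * math.exp(-((i - best) ** 2) / (2 * sigma ** 2))
            units[i][0] += g * (color[0] - units[i][0])
            units[i][1] += g * (color[1] - units[i][1])

    random.seed(1)
    # A red-green colorblind infancy: every color ever experienced lies on the yellow-blue axis.
    for _ in range(5000):
        experience([0.0, random.uniform(-1.0, 1.0)])

    # Dawkins' proposed experiment: poke the region that was hard-wired for 100% green.
    rg, yb = units[20]
    print(f"former 'green' unit now codes rg={rg:+.2f}, yb={yb:+.2f}")
    # Its red-green component has decayed to roughly zero: stimulating it just yields
    # yet another yellow-blue-ish hue, which is the fable's prediction.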

Okay, so this concludes my idle speculation. All of the above is likely to be complete bullshit. But from what little I know about neurology and fetal/infant brain development, it would seem it has to be something like that, and it's easier to tell a highly-specific fable than to speak in general and guarded terms for several paragraphs straight.

As evidence that my "something like that" hypothesis seems reasonable, I would point to the experiences of those who have disorders involving the cortical homunculus (e.g. phantom limb, body integrity identity disorder). It seems that, if a section of the somatosensory cortex ceases to receive stimulation, it is not uncommon for those areas to be "invaded" by sensory cells for the adjacent body regions (if you aren't a Wikipedia hater, see the last section of this paragraph for some descriptions of this). This takes place even in adults -- how much more pronounced would we expect the effect to be in newborn infants, especially those who never experienced the missing stimulation to begin with!

So Dawkins' experiment is a cool idea, and the article probably serves as a good introduction to the idea of qualia. Moreover, the idea about bats "hearing" in color and rhinos "smelling" in color is pretty cool, and quite plausible I would say. (Though given that the mammalian somatosensory cortex spent most of its time evolving in tiny rodents for whom smell was far more important than vision, it might be more fair to say that humans "see in scents") But I'm guessing the answer to his final question is: No, it doesn't really work that way. Ah well.

Okay, now, as promised, since it's highly relevant to this discussion, my opinion on Mary's Room: Neglecting for the moment some serious technical implausibilities in the scenario as described, I think the entire issue has been clouded by ambiguity over the word "knowledge". So rather than mess with that semantical can of worms, let me take an uber-functionalist position on this and rephrase the question as: When Mary emerges from the room and sees color for the first time, does it stimulate regions of her brain in a profoundly new and novel way? The answer here is obvious: Yes, of course! And thus, qualia do exist, not as some dualist phenomenon separate from the physical world, but rather as a certain type of mental stimulation within the brain.

Like the problems with the Chinese Room, I think the trick with Mary's Room relies on getting the listener to accept that two concepts are analogous, and to then assert that an extremely complex phenomenon can be embodied in an extremely simple one. The latter seems absurd, but that's because it is absurd -- not because there is something in the analogy we are missing, but because a complex thing and a simple thing are not equivalent even if they are in most ways analogous.

In the case of the Chinese Room, the complex thing is a Turing machine programmed to speak and understand Chinese, with enough storage capacity and processing power to carry on a conversation; and the simple thing is a guy with a fucking stack of books. Because humans are bad at thinking about scale, we don't realize the stack of books is the size of the moon and the guy is dead before he even comes up with a single sentence. So the two are analogous in the sense that a guy who is executing instructions from a book is more or less "Turing complete", but who cares?1

In the case of Mary's Room, the complex-to-simple shenanigans are in the premise of Mary being a "super-scientist", and what exactly that implies. We tend to subconsciously parse the phrase "she knows everything there is to know about color vision" as meaning that she has abstract knowledge of how it all works. In other words, her brain has produced an abstract representation of the brain's response to color stimuli.

But then the dualist proponent does a switcheroo, and says, "Ah hah! You said she's a super-scientist and knows everything there is to know about it. Therefore, she must know what it's like to see the color, right??!" But this is stupid if you put it in terms of brain stimuli. What we are really asking for when we say she has learned what it's like to see the color is that she has not only created an abstract representation of the color in her brain, but that she has "magically" stimulated regions of the visual cortex just by knowing those regions of the cortex exist. That's not just a "super-scientist", that's some kind of magic wizard. A "super-scientist" as we imagine it is no more comparable to this version of Mary than a guy in a room with a book is comparable to the world's most epic supercomputer.

The word "knowledge" is really what clouds the issue, because at the start of the thought experiment we implicitly define it as abstract knowledge of facts, while at the end of the thought experiment we have defined knowledge as any sort of brain stimuli. By analogy, let's say I see an object on the other side of the room and I decide I'm going to go pick it up. One might argue that have all of the "knowledge" there is to know about the location of the object. But obviously the pattern of neurons firing in my brain will be very different depending on whether I walk over and pick up the object, or if I continue to sit on my lazy ass writing an over-long blog post on philosophy.

If you restate Mary's Room with "knowledge" explicitly and consistently defined, then it becomes absurd. If we define "knowledge" broadly as any sort of pattern of neural firings, then either the answer becomes an obvious "yes" (but with qualia being nothing more than a pattern of neural firings) or else the experiment would be better called "Magic Mary", since we are asking her to do the impossible. On the other hand, if we define "knowledge" narrowly as abstract factual information, then the answer is "no, but so fucking what?" Clearly there are things that exist in the physical world other than neural firing patterns representing abstract knowledge of facts. You might as well argue that since Mary doesn't gain any knowledge when something happens behind her back, all things that happen behind her back must exist in some separate epiphenomenal supernatural world independent of physical reality. Um, no.

Yes, qualia exist, but they are nothing more than patterns of neurons firing in the brain. Whether or not you want to call that "knowledge" is boring-ass semantics.

1On a side note, as I understand it one of the main objections to this counter-argument against the Chinese Room is that by relying on speed and complexity as the thing that distinguishes an understanding mind from a guy executing instructions from a book, one has failed to define a sharp quantum boundary between the conscious and the mechanistic -- to which my reply can only be, "No shit Sherlock, get over it!" Seriously, what philosopher worth her salt hasn't figured this out already? Does a rock have consciousness? Does a prion have consciousness? Does a virus have consciousness? Does a bacterium have consciousness? Does a patch of moss have consciousness? Does a mushroom have consciousness? Does a stalk of broccoli have consciousness? Does a worm have consciousness? Does an ant have consciousness? (What about those species of ants in which some individuals are born as immobile storage vessels for food? Do they have consciousness?) Does a trout have consciousness? Does a hummingbird have consciousness? Does an eagle have consciousness? Does a mouse have consciousness? Does a lemur have consciousness? Does a pig have consciousness? Does a dog have consciousness? Does a dolphin have consciousness? Does a chimpanzee have consciousness? Does a newborn human infant have consciousness? Does an adult human have consciousness? Does a severely brain-damaged person have consciousness? Does a person in a persistent vegetative state have consciousness? If you didn't answer at least one of the previous questions with a "no," at least one with a "yes," and at least one with an "I'm not sure," then you are a fucking idiot. It's a continuum, Searle, deal with it!

Friday, June 11, 2010

My Thoughts on Philosophical Zombies

Apropos of nothing (actually, apropos of a drunken discussion I had with the drummer from my band last week) I thought I'd give my opinion on the plausibility of P-zombies. In a nutshell: I think P-zombies are plausible in principle, but I strongly suspect that constraints of the physical world make them ultimately impossible, even given unlimited technological capability.

Well, let me back up. Depending on how strictly you define P-zombies, they might just be completely implausible and absurd. Neurological zombies (defined by Wikipedia as a zombie that "has a human brain and is otherwise physically indistinguishable from a human; nevertheless, it has no conscious experience") are clearly nonsensical. Since we know with a high degree of certainty that conscious experience arises in the brain, the idea of two physically identical brains, one with conscious experience and one without, is just stupid. It would be like if I tried to assert that you could have a fruit that was physically identical to an orange, down to each and every molecule, except that it was actually an apple. What the fuck does that even mean? It's word salad.

But I think behavioral zombies are more interesting. Wikipedia defines a behavioral zombie as one that "is behaviorally indistinguishable from a human and yet has no conscious experience." Now we're talking. For instance, we could quite easily imagine an object that looked and tasted exactly like an orange, but was actually composed of a cloud of nanobots that manipulated your nerve cells in just the right way. Whether that kind of technology is practically achievable is irrelevant -- we know that it doesn't violate any physical laws, so it's easy to agree that there is a possible world containing such a "philosophical orange".

The question then becomes, could you have a being that looked, acted, and from the outside was otherwise indistinguishable from a human, and yet had no conscious experience? In other words, if you had a perfect simulation of a human (but built from different components), would that simulation necessarily have conscious experience, or is it possible that it would be a "zombie" in terms of its internal life?

As a first-level answer -- which I will revise shortly -- I am going to say that there is indeed a possible world that contains such behavioral zombies. The reason I say this is that I do not think conscious experience arises from the outputs of our brains, but rather, I think it arises as a result of the structure of our brains.

As an analogy, let us imagine two computer programs: One of them calculates the Mandelbrot fractal according to a fixed bounding box and with preset parameters, and displays it on the screen. The other reads an (uncompressed or losslessly compressed) image file containing a pre-rendered Mandelbrot fractal, does some irrelevant computations to eat up an appropriate amount of compute cycles, and then displays it on the screen. The output of each program is identical. Their effect on compute resources is identical (you could even have the first program open the same image file, but not actually read the contents). We could say they are behaviorally identical. However, we cannot say that both programs "compute a fractal". The first one does, the second one clearly does not. The property "computes fractal" is a function of the structure of the program, not of its outputs.
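
To make the analogy concrete, here's a rough sketch of those two programs in Python. The function names, the image file name, and the parameters are all just made up for illustration, and a real version would draw to the screen instead of returning raw pixel values:

    # Program 1: genuinely computes the Mandelbrot fractal over a fixed
    # bounding box with preset parameters.
    def escape_time(c, max_iter=100):
        z = 0j
        for i in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return i                # point escaped: not in the set
        return max_iter                 # assumed to be in the set

    def compute_fractal(width=400, height=400):
        # Fixed bounding box: real part in [-2, 1], imaginary part in [-1.5, 1.5].
        return [[escape_time(complex(-2 + 3 * x / width, -1.5 + 3 * y / height))
                 for x in range(width)]
                for y in range(height)]

    # Program 2: the "zombie" version. It just reads a pre-rendered image
    # (hypothetical file) and burns some cycles so its resource usage looks
    # the same from the outside.
    def fake_fractal(path="mandelbrot_prerendered.png"):
        for _ in range(10 ** 7):        # irrelevant busywork
            pass
        with open(path, "rb") as f:     # never interprets a single pixel
            return f.read()

Both put the same picture in front of you; only the first has the property "computes a fractal".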

Similarly, I think that the property "feels pain", for example, is a function of structure rather than of outputs. As a trivial example, I could imagine a computer program that outputs the text, "Please don't press Enter", and then if you press Enter it outputs, "Ouch! Stop it!" Clearly, this computer program does not "feel pain", even though it manifests pain-like behavior. On the other hand, it's fairly obvious that pain must arise from a physical phenomenon, particularly given the tragic but thankfully rare genetic condition -- congenital insensitivity to pain -- that prevents individuals from feeling it (these unfortunate individuals typically don't live very long, for reasons that ought to be clear to anyone who thinks about it for a moment). Or do opponents of physicalism really believe that it is these individuals' "souls" which are anomalous rather than their bodies? Please.
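
In case it's not obvious just how trivial such a pain-faking program would be, here it is in its entirety -- my own toy version, in Python:

    # Exhibits pain-like behavior; feels nothing whatsoever.
    print("Please don't press Enter.")
    while True:
        input()                 # blocks until the user presses Enter
        print("Ouch! Stop it!")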

It seems to me that all qualia, not just pain, must be a function of structure. So in principle, a behavioral zombie would be possible. Depending upon the internal structure of this perfect human simulator, we can imagine that it might not have conscious experience, even though the outputs are the same. It is "faking it", just like the program that displays a Mandelbrot fractal from a pre-generated image file, rather than computing it on demand.

But now I revise my answer: I seriously doubt that any such human simulator is possible without having an internal structure that would give rise to a phenomenon that would qualify as conscious experience. I suspect it just can't be done, not even with unlimited technological capability.

Returning once again to the Mandelbrot analogy, now let's modify the first program so that it accepts as input a bounding box and certain other parameters, e.g. how many iterations to compute before assuming a point is within the Mandelbrot set. Now the "zombie" program has a serious problem. Even if the first program is constrained to accept a finite set of possible input parameters, and the second program comes pre-packaged with all possible outputs and then accesses the correct one, it's still no longer behaviorally identical to the first -- it consumes a finite-but-nigh-inconceivable amount of storage space. We're talking rooms filled with nothing but stacked optical drives with astronomical numbers of losslessly compressed images stored on them. In fact, it would not be too difficult to expand the first program so that the number of elements in the set of possible inputs was greater than the number of particles in the known universe. Such a program would be trivial to write (hell, I've done it myself) as long as it had the property "computes fractals". But any zombie program which lacks the internal property "computes fractals" can never replicate the behavior of the real deal.
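
To put some extremely rough numbers on that last claim -- and these are back-of-the-envelope figures I'm inventing purely for illustration, assuming the bounding box is four 64-bit floats and the iteration cap is a 32-bit integer:

    # Back-of-the-envelope count of distinct inputs to the parameterized
    # fractal program. The exact encoding is arbitrary; only the order of
    # magnitude matters.
    possible_inputs = (2 ** 64) ** 4 * (2 ** 32)    # four coordinates plus an iteration cap
    particles_in_universe = 10 ** 80                # commonly cited rough estimate

    print(len(str(possible_inputs)))                # 87 digits, i.e. on the order of 10^86
    print(possible_inputs > particles_in_universe)  # True, by a factor of several million

Even with that fairly modest parameterization, the pre-computed "zombie" version would need more distinct storage slots than there are particles to build them out of.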

Worse, even if we allow the physical constraints to grow indefinitely, programs which truly "compute fractals" will always have a leg up on those that don't. For any given "zombie fractal" program that successfully mimics the behavior of a true fractal-computing program, I can always expand the set of possible inputs to the fractal-computing program by one. In other words, for any given possible world where there can exist a "zombie fractal" program that perfectly mimics the behavior of fractal-computing program X, one can always devise a fractal-computing program Y for which no "zombie fractal" equivalent can exist in that possible world.
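
Here's a silly little illustration of that move in Python -- the lookup-table format is invented, and a real "zombie" would of course store actual images rather than placeholder bytes:

    # Whatever finite lookup table the "zombie" ships with, we can always
    # construct a legal input that it doesn't cover: just bump the
    # iteration cap past anything in the table.
    def find_missing_input(lookup_table):
        # Keys are assumed to be (xmin, xmax, ymin, ymax, max_iter) tuples.
        used_caps = {key[4] for key in lookup_table}
        new_cap = max(used_caps, default=0) + 1
        return (-2.0, 1.0, -1.5, 1.5, new_cap)      # differs from every key in its last slot

    zombie_table = {(-2.0, 1.0, -1.5, 1.5, cap): b"precomputed image" for cap in range(1, 101)}
    print(find_missing_input(zombie_table))         # (-2.0, 1.0, -1.5, 1.5, 101)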

By the same token, it seems highly unlikely to me that any entity which lacks the internal property "has conscious experiences" could ever mimic the behavior of a human being. Even though there's no philosophical principle that rules it out, you could just never do it, for the same reason that you could never write a computer program that could output fractals on demand but didn't actually compute them.1

Note that this does not bar the possibility of a perfect human simulator. Just as two fractal-computing programs might use different algorithms internally, the same could be said of two consciousness-experiencing entities. In fact, it's quite conceivable that you could have a perfect human simulator which had a radically different conscious experience from the human it is mimicking. But that it wouldn't have any sort of rich internal life to speak of? Seems implausible.

Incidentally, I think this same issue is the problem with John Searle's Chinese Room thought experiment. The idea that the Chinese Room "understands" Chinese seems absurd to us because we are picturing a guy flipping through a book, and we say, "How can the book itself be a mind with understanding?" But that's because the idea that a guy flipping through a book -- or even a room full of books -- could pass as someone who understood Chinese is just silly.

First of all, if the guy is just executing instructions (rather than using his own mental faculties to choose appropriate response phrases), then in order to pass a Turing test, we're talking about a ridiculous number of "books". Second of all, in order to simulate a Chinese speaker, the guy is going to have to provide responses in a reasonable amount of time. Hint: He can't, not in the traditional formulation of the Chinese Room. In fact, if the guy is just going to blindly execute instructions from "books" and still pass a Turing test, I would argue that in a real scenario, the guy would grow old and die before he'd even answered the first question.
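
Just to hang a (completely invented, but I think charitable) number on that: suppose producing one fluent reply takes on the order of a trillion elementary look-up-and-copy steps, and the guy manages one step per second, around the clock, without ever sleeping:

    # Invented-for-illustration figures; the point is the order of magnitude.
    steps_per_reply = 10 ** 12                  # elementary lookup/copy steps per fluent reply
    seconds_per_year = 60 * 60 * 24 * 365
    print(steps_per_reply / seconds_per_year)   # roughly 31,700 years for the first answer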

In order to make the scenario realistic, these "books" would constitute an immense, thoroughly cross-referenced "library" of inconceivable size; and the "guy" executing the instructions would have to be some sort of magical demon who could zoom from entry to entry in the books with inconceivable speed. As we refine the scenario to be more and more realistic, all of a sudden the contents of the Chinese Room do start to look like something that we might reasonably call a "mind".

So the allure of the Chinese Room experiment lies in the implausible supposition of a very simple structure producing complex behavior. The Chinese Room is analogous to our "zombie fractal" program that simply stores all possible outputs of the real fractal-computing program. Either you change the internal functioning of the Chinese Room in such a way that it legitimately has a "mind" and "understanding", or else your thought experiment degenerates into something that could only exist in a universe that didn't resemble ours in the slightest (in which case our interpretations of what that would mean are pretty much irrelevant).

As one last point: under this mental model, there could exist some entities which possess conscious experience but whose behavior could be duplicated by a "zombie". Return once again to the example of the computer program which pretends to experience pain when you hit Enter. We could, in fact, produce such a program that did experience pain, if the internal structure exhibited the proper traits. Hell, I could build one myself: Take the original computer program, but now every time the user presses Enter, it causes an electrical shock to be delivered to a human sitting in a sealed room at some remote location. The system as a whole now does have the trait "experiences pain", even though the observable inputs and outputs are easily mimicked by a system which doesn't.

I bring this up because I think it clarifies my position: Behavioral zombies are maybe philosophically plausible, depending on how expansive you want to get with your definition of "possible worlds". But because the behavior of humans is so rich and complex and difficult to perfectly mimic, it would seem that a behavioral zombie is impossible in any world which even remotely resembles our own. In order to achieve those inputs and outputs, the algorithms used would by physical necessity need to be structured in such a way that they would produce "conscious experience" as a byproduct.

1 There is a slight caveat to this: If we are going to allow "all possible worlds" to include even quite silly ones, then you could imagine a possible world in which somehow this human simulator could access enough storage space to store all possible input/output pairs of a human brain as we know it. I suppose this possible world could have a behavioral zombie of sorts. But it also would not bear any resemblance whatsoever to the universe that we live in -- it's doubtful humans could even exist in such a world. To state it more strongly, I sincerely doubt that there exists any possible world that allows the existence of human X and a behavioral zombie that mimics human X. To talk of a "possible world" where all possible input/output pairs of a human brain could be stored brute force is no more useful to understanding our universe than it would be to assert the existence of a "possible world" where there was no such thing as matter or energy. What are you even discussing then?

Monday, June 7, 2010

What, pray tell, can Karl Giberson's idea of a bad conservative be?!

I was reading The Nation's big article on the Templeton Foundation, and this quote from Karl Giberson really struck me:

According to his lifelong friend Jay Norwalk, Templeton "is exceedingly scrupulous about keeping his personal life separate from the foundation." By most accounts, this has been the case. Physicist Karl Giberson, a self-described liberal who has been a close collaborator on various foundation projects, adds, "To me, Jack Templeton represents the way you want conservatives to be."

Really??? We already knew Giberson is no friend of aggressive secularism, but even still, this really surprised me. Putting aside what one thinks of the Templeton Foundation, it seems rather odd to me that a self-professed liberal would be praising Jack Templeton's politics, in light of this fact from the prior paragraph in The Nation's article:

Jack Templeton's money has also gone to the Swift Boat Veterans for Truth and to ads by the neoconservative group Freedom's Watch. In 2008 he and his wife gave more than $1 million to support California's Proposition 8, which banned same-sex marriage.

Now, I know Giberson was praising Templeton's (alleged) separation of his politics from his personal life. But seriously? This is "the way you want conservatives to be"?!? Funding attack ads that are blatantly false? Financing a campaign to deny minority rights at the ballot box?

I'm sorry, Giberson, but that's not even close to how I want conservatives to be. As has been pointed out numerous times by folks such as Andrew Sullivan, a real conservative ought to be in favor of gay marriage, as it represents an expansion of actual (non-fake) family values, as well as possibly a significant cost savings according to recent analysis. And anyway, how can someone who professes to believe in small government want a whole separate constitutional amendment specifically defining marriage?

I do think that a conservative counterbalance is an important factor in a healthy representative democracy. But only if it's actual conservatism. The way I want conservatives to be is: skeptical of any type of increased spending (multi-trillion dollar nation-building adventures used to be something liberals did and conservatives criticized), generally opposed to laws that impinge on personal freedom for a supposed societal benefit (it's supposed to be liberals that are trying to legislate people into being good, isn't it?), and supportive of actual (non-fake) constitutional principles.

I say this as an unapologetic liberal -- despite having a certain libertarian idealistic streak, I recognize that on the Real Planet Earth, expansive social programs are both humane and economically beneficial in the long run; that despite the risk of totalitarianism it entails, the government does sometimes have to tell people (and especially businesses) how to behave; and that sometimes social progress needs to be helped along by governmental compensation for past mistakes (e.g. affirmative action and such). But there also needs to be a sane opposing force to help keep government action in check.

Tea Partiers and gay marriage opponents and Jack Templeton are not that sane opposing force. They are assholes and bigots, and are either stupid, ignorant, crazy, immoral, or some combination of the four. If Giberson is saying that the ideal conservative is a bigot who is nice to his liberal friends in person, but still uses his wealth and power to quash the rights of minorities, I have one thing to say to him: Fuck off.

If you believe in Hell and you aren't an Evangelical, screw you

It occurred to me the other day that anyone who really believes in Hell as a literal place of eternal suffering for non-believers ought to be out there every day trying as hard as they can to convince other people of "the truth". In particular, I'm thinking of the vast masses of Catholics who just callously go about their day-to-day business believing that most of their friends and neighbors will burn forever in a lake of fire. That's stone cold, yo.

Of course, it's also far less annoying than evangelizing. So really this is a purely academic point. It's totally fucked up that someone would believe I was going to experience maximal suffering forever and ever, and yet not do anything about it -- but if the alternative is annoying the piss out of me, I guess I won't complain too much.

Most of the theists I have any respect for don't believe in Hell anyway, or at least not as a place of eternal and/or unbearable suffering. It's hard for me to understand how anyone could believe in Hell and not go completely insane in a matter of days, so I guess this is all a moot point.