Friday, June 25, 2010

Some gentle criticisms of I Am a Strange Loop

Man, there's been a lot of philosophy on this blog lately, and I'm not sure anybody is actually reading my logorrheic ramblings. Oh well, it's good to get these thoughts down anyway.

First, a brief preface: Readers may have noticed that almost any time I post specifically about something Dawkins or Hitchens or one of those guys has said, it's somewhat critical. The reason is that I agree with these guys so often, I don't feel it necessary to mention when I do. I only feel compelled to record my thoughts on a topic when I disagree. Moreover, those tend to be the topics that are really interesting, at least to me.

So it is with the criticisms I am about to make of Douglas Hofstadter's I Am a Strange Loop. This is a tremendously fantastic book. In particular, I will always be indebted to Hofstadter for his illuminating lay explanation of Gödel's incompleteness proof. In addition, he conjures up some quite wonderful, vivid metaphors, which have helped clarify in my mind a picture of how consciousness can emerge from the simple neurological reactions in the human brain.

Okay, with that out of the way, I now want to turn my attention to what I think is a systematic error in Hofstadter's thinking about the human brain, one which, though it never entirely undermines this wonderful work, becomes more and more troublesome as the book progresses. (Admittedly, I am only about two-thirds of the way through, but I am on a passage that I think perfectly highlights the problem I want to address, and I don't want to lose the thought.) Very much in the spirit of Hofstadter's writing, let me begin with an elaborate analogy.

Imagine an alternate history of science where particle physics, molecular chemistry, and anatomical biology all developed in parallel at roughly the same rate. At the same time that enlightened thinkers in biology are just starting to let go of the vitalist model, physicists are working on refining the Bohr model of the atom, and molecular chemists are working out the mathematics of protein folding. (Never mind how terribly implausible this is; it's just a thought experiment!)

Biologists are increasingly convinced that vitalism is simply not plausible based on recent advances in particle physics, but they are still baffled at such things as how a cell membrane holds together, and how it is possible for these cells to "decide" to work together to build a larger organism. It is such a vexing problem that some biologists mount a reactionary defense of vitalism. (One particularly snarky biologist devises a thought experiment he calls the Chinese Marbles, observing that marbles are sort of vaguely like elementary particles, and then asking his colleagues to imagine the ridiculous image of a gargantuan Chinese man made by using marbles in place of elementary particles, built up to form marble-molecules, marble-cells, marble-organs, etc.)

Still, a brave subset of biologists, convinced by the total lack of evidence for any sort of vitalist theory, press on trying to make sense of it all. A particularly bright one by the name of Hofflas Dougstadter develops a rather rich, metaphor-filled model for how this might work. He pens a book called I Am a Strange Organ, which, among many other great achievements, makes a rock-solid case that the study of anatomical systems can pretty safely ignore particle physics -- that, in fact, trying to incorporate particle physics into our understanding of biology may add information but does not contribute one iota of comprehensibility to the picture. He even makes the bold claim that, on the level of an entire human body, the organs, the blood cells, the skeleton, the muscles, etc., are in a sense more real than the elementary particles of which they are composed. He doesn't deny (like the vitalists do) that particle physics ultimately underpins all of it, but he rightly observes that, on a biological level, causality gets a little fuzzy here -- are the particles pushing around our organs, or is it mechanisms within our organs that are pushing around the particles?

In making this great insight, however, Dougstadter has largely ignored the importance of molecular chemistry. He gives scant mention to enzymes, proteins, lipids, etc. This leads him to become quite obsessed with the idea of suspended animation. He observes that when an organism is frozen, the anatomical structure remains. The particles making up the structure may have slowed down quite a bit, but why would that matter? After all, the particles are still coming together to make the structure, so is this frozen organism a completely different type of object, or is it the same type, just with slowed-down particles? It never occurs to him that the freezing process could cause widespread havoc at the molecular level, with expanding lattices of frozen water molecules rupturing cell membranes, proteins denaturing, etc.

If you'll excuse my rather over-detailed Hofstadterian metaphor (hey, I've been reading his book; perhaps I've got a "soul shard" of Hofstadter inside my brain that is taking over my fingers and causing me to write like him!), let me now finally get around to saying what I mean by it. The real-world counterparts of the players in this story are, I'm sure, obvious. When I talk of particle physics, molecular chemistry, and anatomical biology, I am analogizing to levels at which we can model the brain: respectively, the level of the individual neuron; the level of major structures such as the various cortices, etc.; and the philosophical/cognitive level in which Hofstadter specializes.

Hofstadter's case for ignoring the lowest level when we attempt to understand the nature of consciousness is rock-solid. The firing of neurons tells us nothing about consciousness, except maybe to impose some physical limitations, and possibly to explain some of the frailty of our humanness. I suspect the "reductionist" view he is attacking is something of a strawman (I don't think anyone seriously thinks that way), but he is very eloquent in expressing his anti-reductionist-but-still-materialist viewpoint, and it is one I very much agree with.

But he ignores this middle level at his peril. The specifics of the large physical structures in our brain may not be crucial to the more general idea of consciousness, but I think examining the particular instantiation of those structures in the human brain can shed important light on the kind(s) of architecture(s) that are necessary to support consciousness. You see, while Hofstadter makes a good case that a "strange loop of perception" is necessary for consciousness, and in fact explains quite a bit of the mystery, I tend to doubt that it is sufficient. Moreover, even if the kinds of structures we see in the brain have nothing to do with consciousness in general, they certainly have quite a bit to do with human consciousness!

To understand what I mean when I say that "the particular instantiation of those structures in the human brain can shed important light on the kind(s) of architecture(s) that are necessary to support consciousness", try to imagine a truly conscious, sapient AI. I think those who are serious about cognitive science and have kept up to date with the latest research are pretty much aware that such an AI is not going to simply be a vast-but-undifferentiated neural network. The software would need I/O routines, visual and auditory processing algorithms, most probably a speech and grammar subsystem, etc. The computer running it may be an undifferentiated sea of bits and gates, but the computer is not self-aware -- the software is. And the software is certainly not homogeneous. (Remember this distinction between the homogeneous hardware and the highly structured software; I will return to it later.)
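
To make that contrast concrete, here is a minimal Python sketch. Every class and component name is hypothetical, my own invention for illustration -- the point is only the shape: specialized, interacting software subsystems sitting atop homogeneous hardware.

```python
# A purely illustrative sketch; all names are hypothetical.

class VisionSubsystem:
    """Stand-in for visual processing: raw input -> higher-level symbols."""
    def process(self, pixels):
        return {"edges": sum(pixels) % 7}  # toy computation

class LanguageSubsystem:
    """Stand-in for a speech/grammar component."""
    def parse(self, utterance):
        return utterance.lower().split()

class SelfModel:
    """Stand-in for the self-referential 'strange loop': the system's
    coarse model of itself, updated as it perceives."""
    def __init__(self):
        self.narrative = []

    def update(self, percepts):
        self.narrative.append(("I perceived", percepts))

class SapientAI:
    """Highly structured software. The hardware underneath (RAM, gates)
    is an undifferentiated sea of bits; the structure lives up here."""
    def __init__(self):
        self.vision = VisionSubsystem()
        self.language = LanguageSubsystem()
        self.self_model = SelfModel()

    def step(self, pixels, utterance):
        percepts = (self.vision.process(pixels),
                    self.language.parse(utterance))
        self.self_model.update(percepts)
        return percepts

ai = SapientAI()
ai.step([0, 1, 1, 0], "Hello world")
```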

Perhaps this is closed-minded of me, but I imagine that any type of computer AI which experiences anything we might recognize as consciousness will have an internal software architecture that, at the 10,000-foot level, bears some resemblance to the architecture of our brains. Oh, it will be much neater, much less tightly coupled, and maybe some problems it will solve in a completely different way. But if it's going to possess consciousness, at least anything like the consciousness we know, it's going to have to be pre-built with some of these components.

I hedged somewhat in the above paragraph using phrases like "as we know it" and "that we might recognize". But even though I was hedging when I talked about consciousness in general, I still think my point needs no hedging whatsoever if our goal is to understand human consciousness, with all of its unique flavors. And I think that is very much part of Hofstadter's central goal, especially when he speaks of things like "soul shards", etc.

I would perhaps be erecting a strawman if I implied that Hofstadter's argument was that all of human consciousness could be understood by the dance of symbols in this strange loop of perception he has defined. Surely he is not really implying this (though at times he seems to verge on it), but let me say why it is wrong anyway.

There is evidence, for example, that our left brain (or is it the right? I always get them backwards) is making up a constant explanatory narrative for our actions -- a post facto explanatory narrative, in fact! (The evidence comes from case studies of individuals who have undergone a corpus callosotomy, if I recall.)

This "smoothing out" of our internal narrative is surely quite a critical component of our consciousness -- without it, it seems like the "I" would become dangerously unstable. We would be constantly aware of our actions seeming to arise from nowhere, from beyond our conscious control. It would be like the sensation we get at the doctor's office when she tests our knee reflex, except instead of it being this once-per-checkup physical event, it would be an all-day-every-day mental event. You might be sitting at dinner, and, being thirsty, you reach for your glass -- except instead of just being only vaguely conscious of it, as you would now, it would seem like somebody else was moving your hand. Instead of saying, "Heh, I took a drink without even thinking about it," the sensation would be more like, "I stopped paying attention to my arm for ten seconds, and something possessed it and grabbed my glass!" A person in this condition would surely go insane, and might even come to viscerally doubt their own sense of "I"-ness.

I'm sure there are many other examples. To what extent do the speech/grammar centers in our brain impose a specific type of structure on our manipulation of symbols? How about our sensory organs as a means of arbitrating the boundary of "I"? These all seem to be very important to what it means to be human and have first-person experience, and yet none of it can be formed out of an initially homogeneous dance of symbols. A particular heterogeneous structure needs to be imposed first, before consciousness can emerge.

I said I would return to the analogy of a hypothetical sapient AI, and the distinction between its largely homogeneous hardware composed of bits and logic vs. its highly structured software architecture. In a very loose sense, the design of the software can be compared to the developmental process encoded by our genes (I say very loosely, because I don't mean to imply DNA is anything like a blueprint or a top-down "design"... but, as the IDiots are fond of pointing out, DNA does give rise to structures that exhibit many of the properties of design, and for the purposes of this analogy, that is sufficient). If we imagine a developing fetus whose brain grows at an entirely ordinary rate, but whose neural connections never differentiate -- it's just a vast homogeneous field of gray matter -- this baby will not be born alive, let alone conscious or with the potential for consciousness.

Again, I think it would be attacking a strawman to imply this was literally Hofstadter's position. He takes much of this for granted (usually with good justification!) in order to explain the sensation of "I"-ness. And at that I think he does a pretty good job overall, though as I mentioned, I think his picture of consciousness would become richer if he began to think about the roles that some of these middle-level structures play (e.g., the "smoothing out" of the strange loop performed by our ongoing rationalizing narrative).

But I do think that some of his positions unknowingly rely on very similar "blank slate"-like assumptions about the human mind. In particular, he takes the concept of "soul shards" to the level of asserting that the self-referential symbols representing person B in person A's brain actually constitute a "consciousness" with a first-person experience. (I was going to cite a passage, but I don't have the book with me right now -- will update this post later.) I do not think he can support this claim in light of the "middle level" of neurological structure I have been referring to.

His position is that, since it is the pattern that matters rather than the substrate, a coarse self-referential model of another person can "execute" its software on your brain -- much the same way as, for example, a Mac can emulate the hardware of a PC. It's all just a pattern of symbols, and the system that executes it does not matter.
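
As a loose illustration of what emulation actually demands -- a toy sketch of my own, not anything from the book -- consider how one machine runs a program written for another. It works only because the host reproduces the guest's execution rules exactly:

```python
# A toy emulator (my own illustration): the host executes a program written
# for a hypothetical guest instruction set. The pattern (the program) runs
# on a foreign substrate -- but only because run_guest faithfully implements
# the guest's semantics, instruction by instruction.

def run_guest(program, memory):
    """Interpret a tiny made-up instruction set: LOAD, ADD, STORE."""
    acc = 0
    for op, addr in program:
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        else:
            raise ValueError(f"unknown opcode: {op}")
    return memory

memory = {0: 2, 1: 3, 2: 0}
run_guest([("LOAD", 0), ("ADD", 1), ("STORE", 2)], memory)
assert memory[2] == 5  # the guest program computed 2 + 3 on host hardware
```

The catch, as the comments note, is that the pattern transfers only when the executing system faithfully honors the original mapping from symbols to behavior. Keep that caveat in mind for what follows.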

Hogwash, I say! And the problem is a hidden ambiguity in that phrase above: "it is the pattern that matters rather than the substrate." What exactly do we mean by "substrate"? Do we mean the vast sea of more-or-less homogeneous neurons? Well, we can't mean that if our substrate is supposed to support consciousness. This would be like taking our hypothetical sapient AI and wiping the software. The hardware is just a "useless" sea of bits and gates. The substrate for the symbols that represent the AI software may indeed be this undifferentiated raw computational network, but the substrate for the AI's consciousness, for the AI's "I" (AI2?), is the software! The software can't work without the hardware, but the consciousness can't work without the software.

In analogizing between brain hardware/software and computer hardware/software, Hofstadter has allowed himself to make false parallels, parallels to levels that don't correspond to each other. He thinks of the totality of the physical brain as analogous to the computer hardware, and our conscious selves as analogous to the computer software[1]. But this analogy doesn't work, because if our physical brain were analogous to the computer hardware, it could no more support consciousness than could the "useless" sea of bits and gates into which we transformed our poor hypothetical AI when we wiped its software program in the previous paragraph.

A better analogy would be to think of the neurons in our brain as analogous to the computer hardware, the large structures in our brain (the "middle level") as analogous to the computer software, and our emergent consciousness as analogous to the current state of the computer as the sapient AI program runs. Now every emergent phenomenon is supported by its proper substrate, up and down both sides of the analogy.
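
In code terms -- again a toy sketch with invented names, just to pin down the analogy -- the candidate for consciousness is the evolving state of the running program: not the program text, and certainly not the bare hardware.

```python
# Three levels, mapped onto my analogy:
#   Hardware      <-> neurons (homogeneous substrate)
#   Software      <-> the brain's large structures (the "middle level")
#   Running state <-> emergent consciousness

class Hardware:
    """A uniform field of cells -- analogous to the sea of neurons."""
    def __init__(self, size):
        self.cells = [0] * size

class Software:
    """Differentiated structure imposed on the substrate -- analogous to
    the cortices and other specialized regions."""
    def __init__(self, hardware):
        self.hw = hardware

    def step(self, stimulus):
        # Structured rules pushing the substrate around.
        self.hw.cells[stimulus % len(self.hw.cells)] += 1

hw = Hardware(8)
program = Software(hw)
for stimulus in (3, 5, 3):
    program.step(stimulus)

# The analogue of consciousness is neither hw nor program, but the state
# the running program has built up: here, the contents of hw.cells.
print(hw.cells)  # [0, 0, 0, 2, 0, 1, 0, 0]
```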

And herein lies why I think his idea of someone else's "software" executing in our own brains and having its own first-person consciousness is bunk. In my improved analogy, we do not have an analogue of our loved one's software running on our hardware; we have an analogue of our loved one's state stored within our state (albeit these are beautiful, recursive, strange-loopy states!). And here's the kicker: it seems tremendously unlikely that the components of the analogue of person B's state in person A's brain are being encoded/decoded and manipulated by the same analogous software that exists in both brains. Or to state it more plainly: the mapping of state-to-software in person A's analogue of person B's "soul" bears no resemblance to the original mapping -- the mapping that gave rise to consciousness.
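
Here is that distinction in miniature, as a toy Python model of my own (hypothetical names throughout; nothing like this appears in the book): person A stores a coarse snapshot of person B's state as inert data, and A's own software reads that data, but the snapshot is never executed by the software that originally gave rise to B's "I".

```python
# Toy model: a "soul shard" is stored state, not running software.

class Mind:
    def __init__(self, name):
        self.name = name
        self.state = {}    # the running state: the emergent "I"
        self.shards = {}   # coarse copies of *other* minds' state

    def live(self, experience):
        """One's own software updating one's own state -- the original mapping."""
        self.state[experience] = self.state.get(experience, 0) + 1

    def remember(self, other):
        """Store a coarse-grained snapshot of another mind: data, not a process."""
        self.shards[other.name] = dict(other.state)  # frozen copy

alice, bob = Mind("Alice"), Mind("Bob")
bob.live("sunset")
bob.live("sunset")
alice.remember(bob)

# alice.shards["Bob"] == {"sunset": 2}: a real, even touching, record of Bob.
# But it is inert data read by Alice's software; Bob's state-to-software
# mapping -- the one that gave rise to Bob's "I" -- is not running anywhere.
```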

Yes, we have within our brains a coarse-grained copy of the self-referential symbol(s) that make up our loved ones, and in that sense Hofstadter's "soul shards" idea is quite real (and uplifting, I might add). But that symbolic pattern is not being read/manipulated by the right software to cause the emergence of a conscious, first-person entity.

Let me embark on one more Hofstadterian analogy, and then I'm done. As I said before, while I mostly agree with what (I think) Hofstadter means by it, the phrase "the pattern matters and not the substrate" has some ambiguity, and can actually be quite false if you misinterpret it.

So let's say I have the blueprint for a house, and I have two houses that have been built using this blueprint. The houses are in different locations (obviously!) and are made from different individual trees (obviously!), but I would assert that we could make them sufficiently identical that just about anyone would agree they are the "same" house, with all the same properties (being able to live in it being arguably the most important property). For some, the varieties of wood and the types of materials would have to be identical; others might say you could use different but similar varieties of wood and still have the "same" house. But no matter -- we can make the houses arbitrarily close, even though they use different substrates, with the substrate in this case being defined as the foundations on which they are built, the individual trees used to build them, etc.

But is the blueprint "the same" as the house? The pattern is the same, right? So does the substrate -- which in this case means physical materials vs. a two-dimensional diagram -- matter? Well yes, it most certainly does! We can make the blueprint arbitrarily detailed, giving a two-dimensional cross-section of not just each floor, but of every inch of elevation in the house, or even of every single micron of elevation, if you wish. The "substrate" -- as we have defined it in this case -- still matters. No matter how detailed the blueprint becomes, it will never possess the property "you can live in it" (at least not comfortably!).

So depending on how we define "substrate" in each individual case, it may very well matter. In the case of human "souls", you can have quite an elaborate representation of another person within your brain, and as it becomes arbitrarily rich in detail, I think it becomes a sufficiently good copy of a part of them that the phrase "soul shard" is quite appropriate and evocative. But because of how our brains are structured and operate, that "soul shard" will never get to inhabit, say, our visual cortex, or our somatosensory cortex -- and it seems highly likely to me that there are other large structures in our brain that are critical to the experience of first-person consciousness, but to which the "soul shard" has no access.

No matter how big a "soul shard" we take on, our brains are wired in such a way that it can never develop an independent first-person consciousness. Our deceased loved ones do live on in a very important way -- in our shared hopes and dreams, in our memories of them, and in all the things Hofstadter describes -- but that "spark", that ineffable "I"? I just don't see it. That part is gone, because nobody is churning (nor is anyone currently capable of churning) the right symbols through the right software.

[1] For what it's worth, I would have totally bought into this analogy not two days ago, and it is Hofstadter's scintillating ideas that have brought the flaws of this analogy into focus for me -- again, I am only criticizing him at such length because he is right about so damn much that the areas where (in my opinion) he goes wrong are certain to be quite fascinating!
