I will be arguing that people have minds because they, or their brains, are biological computers. The biological variety of computer differs in many ways from the kinds of computers engineers build, but the differences are superficial. When evolution created animals that could benefit from performing complex computations, it thereby increased the likelihood that some way of performing them would be found. The way that emerged used the materials at hand, the cells of the brain. But the same computations could have been performed using different materials, including silicon. It may sound odd to describe what brains do as computation, but, as we shall see, when one looks at the behavior of neurons in detail, it is hard to avoid the conclusion that their purpose is to compute things. Of course, the fact that some neurons appear to compute things does not rule out that those same neurons might do something else as well, maybe something more important; and there are many more neurons whose purpose has not yet been fathomed.
Even if it turns out that the brain is a computer, pure and simple, an explanation of mind will not follow as some kind of obvious corollary. We see computers around us all the time, none of which has a mind. Brains appear to make contact with a different dimension. Even very simple animals seem to be conscious of their surroundings, at least to the extent of feeling pleasure and pain, and when we look into the eyes of complex animals such as our fellow mammals, we see depths of soul. In humans the mind has reached its earthly apogee, where it can aspire to intelligence, morality, and creativity.
So if minds are produced by computers, we will have to explain how. Several different mechanisms have been proposed, not all of them plausible. One is that they might "excrete" mind in some mysterious way, as the brain is said to do. This is hardly an explanation, but it has the virtue of putting brains and computers in the same unintelligible boat. A variant of this idea is that mind is "emergent" from complex systems, in the way that wetness is "emergent" from the properties of hydrogen and oxygen atoms when mixed in great numbers to make water.
I think we can be more specific about the way in which computers can have minds. Computers manipulate information, and some of this information has a "causative" rather than a purely "descriptive" character. That is, some of the information a computer manipulates is about entities that exist because of the manipulation itself. I have in mind entities such as the windows one sees on the screens of most computers nowadays. The windows exist because the computer behaves in a way consistent with their existing. When you click "in" a window, the resulting events occur because the computer determines where the mouse-guided cursor is on the screen and which window that location belongs to. It makes these determinations by running algorithms that consult blocks of stored data that describe what the windows are supposed to look like. These blocks of data, called data structures, describe the windows in the same way that the data structures at IRS Central describe you. But there is a difference. You don't exist because of the IRS's data structures, but that's exactly the situation the window is in. The window exists because of the behavior of the computer, which is guided by the very data structures that describe it. The data structures denote something that exists because of the data structure denoting it: the data structure is a wish that fulfills itself, or, less poetically, a description of an object that brings the object into being. Such a novel and strange phenomenon ought to have interesting consequences. As I shall explain, the mind is one of them.
An intelligent computer, biological or otherwise, must make and use models of its world. In a way this is the whole purpose of intelligence, to explain what has happened and to predict what will happen. One of the entities the system must have models of is itself, simply because the system is the most ubiquitous feature of its own environment. At what we are pleased to call "lower" evolutionary levels, the model can consist of simple properties that the organism assigns to the parts of itself it can sense. The visual system of a snake must classify the snake's tail as "not prey." It can do this by combining proprioceptive and visual information about where its tail is and how it's moving. Different parts of its sensory field can then be labeled "grass," "sky," "possibly prey," "possible predator," and "tail." The label signals the appropriateness of some behaviors and the inappropriateness of others. The snake can glide over its tail, but it mustn't eat it.
The self-models of humans are much more complex. We have to cope with many more ways that our behavior can affect what we perceive. In fact, there are long intervals when everything we perceive involves us. In social settings, much of what we observe is how other humans react to what we are doing or saying. Even when one person is alone in a jungle, she may still find herself explaining the appearance of things partly in terms of her own observational stance. A person who did not have beliefs about herself would appear to be autistic or insane. We can confidently predict that if we meet an intelligent race on another planet they will have to have complex models of themselves, too, although we can't say so easily what those models will look like.
I will make two claims about self-models that may seem unlikely at first, but become obvious once understood:
1. Everything you think you know about yourself derives from your self-model.
2. A self-model does not have to be true to be useful.
The first is almost a tautology, although it seems to contradict a traditional intuition, going back to Descartes, that we know the contents of our minds "immediately," without having to infer them from "sense data" as we do for other objects of perception. There really isn't a contradiction, but the idea of the self-model makes the tradition evaporate. When I say that "I" know the contents of "my" mind, who am I talking about? An entity about whom I have a large and somewhat coherent set of beliefs, that is, the entity described by the self-model. So if you believe you have free will, it's because the self-model says that. If you believe you have immediate and indubitable knowledge of all the sensory events your mind undergoes, that's owing to the conclusions of the self-model. If your beliefs include "I am more than just my body," and even "I don't have a self-model," it's because it says those things in your self-model. As Thomas Metzinger (1995b) puts it, "since we are beings who almost constantly fail to recognize our mental models as models, our phenomenal space is characterized by an all-embracing naive realism, which we are incapable of transcending in standard situations."
You might suppose that a self-model would tend to be accurate, other things being equal, for the same reason that each of our beliefs is likely to be true: there's not much point in having beliefs if they're false. This supposition makes sense up to a point, but in the case of the self-model we run into a peculiar indeterminacy. For most objects of belief, the object exists and has properties regardless of what anyone believes. We can picture the beliefs adjusting to fit the object, with the quality of the belief depending on how good the fit is (Searle 1983). But in the case of the self, this picture doesn't necessarily apply. A person without a self-model would not be a fully functioning person, or, stated otherwise, the self does not exist prior to being modeled. Under these circumstances, the truth of a belief about the self is not determined purely by how well it fits the facts; some of the facts derive from what beliefs there are. Suppose that members of one species have belief P about themselves, and that this enables them to survive better than members of another species with belief Q about themselves. Eventually everyone will believe P, regardless of how true it is. However, beliefs of the self-fulfilling sort alluded to above will actually become true because everyone believes them. As Nietzsche observed, "The falseness of a judgment is ... not necessarily an objection to a judgment.... The question is to what extent it is life-promoting ..., species-preserving ..." (Nietzsche 1886, pp. 202-203). For example, a belief in free will is very close (as close as one can get) to actually having free will, just as having a description of a window inside a computer is (almost) all that is required to have a window on the computer's screen.
I will need to flesh this picture out considerably to make it plausible. I suspect that many people will find it absurd or even meaningless. For one thing, it seems to overlook the huge differences between the brain and a computer. It also requires us to believe that the abilities of the human mind are ultimately based on the sort of mundane activity that computers engage in. Drawing windows on a screen is trivial compared to writing symphonies, or even to carrying on a conversation. It is not likely that computers will be able to do either in the near future. I will have to argue that eventually they will be able to do such things.