The last step consists in applying the transparency constraint to the internal representation of the relation between subject and perceptual object, and to the relation between agent and goal. If, for instance, the phenomenal model of one's own perceptual states contains a transparent representation of their causal history, then inevitably convolved global states will result, the content of which can only be truthfully described by the system itself as (e.g.) "I myself [= the content of a transparent self-model] am now seeing this object [= the content of a transparent object representation], and I am seeing it with my own eyes" [= the simple story about immediate sensory perception, which sufficed for the brain's evolutionary purposes]. The phenomenal self is a virtual agent perceiving virtual objects in a virtual world. This agent doesn't know that it possesses a visual cortex, and it does not know what electromagnetic radiation is: It just sees "with its own eyes"—by, as it were, effortlessly directing its visual attention. This virtual agent does not know that it possesses a motor system which, for instance, needs an internal emulator for fast, goal-driven reaching movements. It just acts "with its own hands." It doesn't know what a sensorimotor loop is—it just effortlessly enjoys what researchers in the field of virtual reality design call "full immersion," which for them is still a distant goal. To achieve this global effect, what is needed is a dynamic and transparent subject-object relation that episodically integrates the self-model and those perceptual objects which cause the changes in its content, by telling an internal story about how these changes came about. This story does not have to be the true story; it may well be a greatly simplified internal confabulation that has proved to be functionally adequate.
Based on the arguments given above, I claim that phenomenal subjectivity emerges precisely at this stage: As soon as the system transparently models itself as an epistemic or causal agent, you have a transparent representation of episodic subject-object relations. For philosophers, of course, the new distinction between phenomenal intentionality and unconscious processes bearing intentional content will not be too surprising a move. It certainly is exciting that we presently witness this notion surfacing at the frontier of neuroscientific theory formation as well (see, e.g., Damasio 1994, 1999; Damasio and Damasio 1996a: 172, 1996b: 24; chapter 7, this volume; Delacour 1997: 138; LaBerge 1997: 150, 172).
Why would a concise research program for the neural correlate of self-consciousness (the NCSC) be of the highest relevance for understanding phenomenal experience? If all the above is true (or if it at least points in the right direction), then it should prove to be more than heuristically fruitful. The vast majority of phenomenal states are subjective states in the way I have just analyzed: Not only are they elements of a coherent internal model of reality used by the system; not only are they activated within a window of presence; not only does their phenomenal content supervene entirely on internal functional and physical properties; but they are bound into a transparently centered representational space. The maximally salient focus of conscious experience will always be constituted by the object-component of the phenomenal model of the intentionality-relation, with the subject-component, the self-model, providing a source of invariance and stability. If I am correct—and that is what it actually means when one says that such states are subjective states—then a straightforward empirical prediction follows: Under standard conditions a very large class of phenomenal states should become episodically integrated with the current self-model on a very small time scale, as attention, volition, and cognition wander around in representational space, selecting ever new object-components for the conscious first-person perspective. Global availability of information means availability for transient, dynamical integration into the currently active self-model, generating a "self in the act of knowing." In other words, the self-model theory of subjectivity can serve to mark out a specific and highly interesting class of neural correlates of consciousness.
And that is why the NCSC is important: Only if we find the neural and functional correlates of the phenomenal self will we be able to discover a more general theoretical framework into which all data can fit. Only then will we have a chance to understand what we are actually talking about when we say that phenomenal experience is a subjective phenomenon. It is for this reason that I have introduced two new theoretical entities in this chapter, the notion of a "transparent self-model" and the concept of the "phenomenal model of the intentionality-relation." Two predictions are associated with them. First, if—all other constraints held constant—the self-model of a conscious system were to become fully opaque, then the phenomenal target property of experiential "selfhood" would disappear. Second, if the phenomenal model of the intentionality-relation collapses or cannot be sustained in a given conscious system, phenomenal states may exist, but they will no longer be experientially subjective states, because the phenomenal first-person perspective has disappeared in this system. Intentionality-modeling is a necessary condition for perspectivalness.
In conclusion, let me once again illustrate the central thought of the argument by a metaphor. Interestingly, the point of this metaphor is that it contains a logical mistake: We are systems that were configured by evolution in such a way that they constantly confuse themselves with the content of their phenomenal self-model. In other words, we are physical systems that, on the level of phenomenal representation, are not able to differentiate between themselves and the content of their currently active self-model. We know ourselves only under a representation, and we are not able to subjectively represent this very fact. The evolutionary advantage of the underlying dynamical process of constantly confusing yourself with your own self-model is obvious: It makes a selfless biological system egotistic by generating a very robust self-illusion. Now here is the logical mistake: Whose illusion could that be? It makes sense to speak of truth and falsity, of knowledge and illusion, only if you already have an epistemic agent in the sense of a system possessing conceptualized knowledge in a strong propositional sense. But this is not the case: We have just solved the homunculus problem; there is nobody in there who could be wrong about anything. All you have is a functionally grounded self-modeling system under the condition of a naive-realistic self-misunderstanding. So, if you really wanted to carry this metaphor even further, what I have been saying in this chapter is that the conscious self is an illusion which is no one's illusion.