Step 3: Transparency and Naive Realism

The antireductionist reply to the theoretical model sketched in this essay is obvious and straightforward. There seems to be no necessary connection between the functional and representational basis properties and the phenomenal target properties of "mineness," "selfhood," and "perspectivalness." Everything described so far could, of course, happen without the instantiation of these phenomenal properties. It is conceivable, a property dualist might argue, that a biological information-processing system opens a centered representational space and then always embeds a model of itself into the model of reality active within this space without automatically generating a phenomenal self. An active, dynamical "self-model" is still just a representation of the system; it is a system model, not an instance of genuine self-consciousness. In order for the functional property of centeredness to contribute to the phenomenal property of perspectivalness, the model of the system has to become a phenomenal self. From a philosophical point of view, the cardinal question is this: What is needed, by conceptual necessity, to make a phenomenal first-person perspective emerge from a representational space that is already functionally centered? In short, how do you get from the functional property of "centeredness" and the representational property of "self-modeling" to the phenomenal property of "selfhood"?

The answer lies in what one might call the "semantic transparency" of the data structures used by the system. Terminological details21 aside, the general idea is that the representational vehicles22 employed by the system are transparent in the sense that they do not contain the information that they are models on the level of their content (see Metzinger 1993; Van Gulick 1988a, 1988b). In our present context, "transparency" means that we are systems unable to recognize our own representational instruments as representational instruments. That is why we "look through" those representational structures, as if we were in direct and immediate contact with their content, with what they represent for us (see also row 2 of table 20.1).
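To make this notion of transparency vivid, one might sketch it as a toy data structure. This is a purely illustrative sketch of the idea just stated, not anything from the source; all class and method names here are hypothetical. The point it isolates is that introspective access reaches only the content of a representational state, never its vehicle properties, so from the inside a representation is indistinguishable from direct contact with what it represents.

```python
# Toy sketch of "semantic transparency" (illustrative only; names are hypothetical).
from dataclasses import dataclass

@dataclass
class Representation:
    content: str       # what the state represents, e.g., "red apple ahead"
    vehicle_info: str  # facts about the carrier, e.g., "internal model state"

class TransparentSystem:
    def __init__(self) -> None:
        self.states: list[Representation] = []

    def perceive(self, fact: str) -> None:
        # The system models the world by activating internal vehicles.
        self.states.append(
            Representation(content=fact, vehicle_info="internal model state")
        )

    def introspect(self) -> list[str]:
        # Crucially, introspection returns only content, never vehicle_info:
        # the information "this is a model" is not available on the level of
        # content, so the system "looks through" its own states.
        return [s.content for s in self.states]

system = TransparentSystem()
system.perceive("red apple ahead")
print(system.introspect())  # ['red apple ahead'] -- appears as direct contact
```

On this toy picture, opacity would amount to giving introspection access to vehicle_info as well; transparency is simply the architectural absence of that access.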

Again, one may move downward and speculate about certain functional properties of the internal instruments the system uses to represent the world, and itself, to itself. A simple functional hypothesis might be that the respective data structures are activated in such a fast and reliable way that the system itself is no longer able to recognize them as such (e.g., because of the lower temporal resolution of metarepresentational processes; see, e.g., Metzinger 1995c). This can then be supplemented by a plausible teleofunctionalist assumption: for biological systems like ourselves, which always had to minimize computational load and find simple but viable solutions, naive realism was a functionally adequate "background assumption" for achieving reproductive success. In short, there has been no evolutionary pressure on our representational architecture to overcome the naive realism inherent in semantic transparency. The decisive step of my argument consists in applying this point to the self-model.
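The temporal-resolution hypothesis can be put in crudely numerical terms. The following is my own illustration, with parameter values that are entirely hypothetical: if first-order representations are rebuilt faster than the metarepresentational process can sample them, that process only ever encounters finished products, never their construction, and those products consequently appear as simply "given."

```python
# Toy numerical sketch of the temporal-resolution hypothesis
# (illustrative only; the parameter values are hypothetical).

FIRST_ORDER_UPDATE_MS = 10  # hypothetical vehicle-construction cycle
META_SAMPLING_MS = 100      # hypothetical metarepresentational sampling period

def construction_visible(update_ms: int, sampling_ms: int) -> bool:
    """Metarepresentation can track construction only if it samples at
    least as fast as first-order states are rebuilt (Nyquist-style
    reasoning, used loosely here)."""
    return sampling_ms <= update_ms

# With the assumed numbers, the construction process is invisible from
# the inside: its products appear as ready-made, i.e., transparent.
print(construction_visible(FIRST_ORDER_UPDATE_MS, META_SAMPLING_MS))  # False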
