The dynamical systems approach to mind has been presented as a scientific revolution, proposing a brand new framework of theoretical notions, methods and experiments, totally different from anything the symbolic paradigm or sub-symbolic connectionism might envisage. No doubt, partial differential equations describing flows through a membrane are different from either recursive rules of symbol manipulation or backpropagated reassignments of weights. But couldn't a connectionist network be a good approximation for dealing with continuous causal feedbacks through massively parallel computation? As surveyed by Eliasmith (1996), this issue has many aspects, of which I shall consider only one, related to the continuous or discrete character of time.
The common idea of a dynamical system is that of a differentiable manifold M together with a vector field V defined over M, the action of V being expressed by a set of (partial) differential equations.13 Once M and V are suitably specified, this idea already covers a vast range of applications, but it can be further generalised, for instance, by relaxing the differentiability condition. In any case, in order to deal with the local/global properties of M and the way V is sensitive to changes in position and direction, what matters is geometry.
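To make the notion concrete, here is a minimal sketch (my illustration, not drawn from the text): the state space is taken to be M = R², the vector field V is that of a damped pendulum, and the action of V is followed by explicit Euler integration of the associated differential equations.

```python
import math

# Illustrative only: a vector field V over the state space M = R^2.
# V assigns to each state (theta, omega) its instantaneous direction of change,
# here for a damped pendulum: theta'' + c*theta' + (g/l)*sin(theta) = 0.
def V(theta, omega, damping=0.5, g_over_l=9.8):
    return omega, -damping * omega - g_over_l * math.sin(theta)

def trajectory(theta0, omega0, dt=0.01, steps=5000):
    """Follow the flow of V from an initial state by explicit Euler steps."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        dtheta, domega = V(theta, omega)
        theta += dt * dtheta
        omega += dt * domega
    return theta, omega

theta, omega = trajectory(1.0, 0.0)
# With damping, the rest state (0, 0) acts as a point attractor:
# every nearby trajectory spirals into it.
```

The geometric vocabulary of the text (trajectories, attractors) is directly visible here: the trajectory is a curve in M, and the rest state is the attractor toward which it flows.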
The state space of a dynamical system over M has a geometry strictly depending on the geometry of M. As soon as cognitive systems are treated as just dynamical systems, logic is on the way to being subsumed, but not replaced, by geometry, although logic can be subsumed by geometry in other ways too, which do not imply such a treatment. In any case, logical structure is not necessarily opposed to geometric structure, as appropriate geometric (more generally, topological) constructions allow for many aspects of logic to be recovered from the underlying geometric structure (see Peruzzi 1994a).
In the "dynamicistic" approach, any talk of cause in the cognitive domain is translated into the language of state spaces, trajectories and attractors. This translation, in either deterministic or indeterministic form, requires that the cognitive "subject" be no longer treated as a system isolated from the actual interactions with the external physical environment. Rather, it has to be treated as a system whose behaviour is described by an appropriate set of partial differential equations.
Saying that such equations correspond to a non-linear system evolving in continuous time is not enough. They have to be such that the dimension of the low-level global state space (with many degrees of freedom) is "reducible" to that of a space corresponding to high-level cognitive, but no less embodied, states, conceptualised and linguistically expressed as discrete representations. Along this "vertical" process, the system's degrees of freedom decrease while coordinative structure increases.
Here, reference to representations calls for an immediate warning: such a picture is not prevalent among the proponents of a dynamical systems approach, as the main trend aims at the full elimination of representations. The fundamental reason for such elimination is that representing involves a map between two sharply distinct domains, whereas "dynamicists" replace maps with couplings. Since the boundary between coupled (sub)systems is largely indeterminate, the real cognitive system is identified with the global system embracing brain, body and environment (see Port & van Gelder 1995).
Unfortunately, this claim coincides with one of the most controversial answers to Searle's Chinese Room argument. Is the whole Room our best candidate for the bearer of the understanding of meaning? But the Room is not isolated, so why not the ecological system on the surface of the Earth? Why not the solar system? Where ought we to stop the slippery slope of such nested systems in order to attribute intentionality? However essential the causal chain is in leading the universe to host thinking beings, the ascription of cognitive skills to the universe, rather than to humans, is as suggestive as it is misleading, exactly like a botanist's claim that the Earth is performing chlorophyll synthesis since plants on Earth perform it. Nor can we say that a boundary's indeterminacy implies the ascription of cognitive activity to the all-embracing system, if not by another slippery slope argument. Our skin is a surface with many holes, and yet it remains a boundary through which a continuous flow of energy occurs. The holes notwithstanding, we keep our body sharply distinct from the external environment, for very good reasons; and, after all, if cognition is framed in terms of system couplings, the systems to be coupled have to be identified.
If system A is considered as a subsystem of B, then not only is A's domain included in B's domain but other conditions have to be satisfied, among which the property of A's structure being embedded into B's structure is a very strict one. Still, the embedding of A into B is not necessarily full (it is full iff any relation among A's components comes from the restriction of some B-relation to A's domain and any B-relation admits such a restriction). Thus, A can have highly specific properties due to amplified fluctuations, the control of which allows for the stability, and hence for the existence, of A.
As cognition is approached in terms of dynamical couplings, its study becomes an integral part of natural science and, in particular, "mental" properties become stable configurations due to the self-organisation of physical systems. Isn't this decisive progress in understanding . . . understanding?
Qualifications are needed. As regards a charge frequently addressed to dynamical models of cognition, namely, that they are metaphorical, I confine myself to noting that, to some extent, all theoretical models are such. But this is by itself no obstacle to objectivity. First, the nature of basic metaphors is, so to say, dynamics-laden, for (i) every metaphorical pattern derives its sense from a small set of perception-action patterns, and (ii) such "generating" patterns are expressed by means of topological dynamics. Second, there are objective criteria to determine whether one metaphorical map is more adequate than another - a methodological issue that does not specifically concern models of cognition, however.
If the case made for (i) and (ii) in Peruzzi (2000) is correct, then the specific problems to be solved by dynamical systems theory are related to the constitution of the ground state space B. It is over B that a hierarchy of spaces endowed with suitable operator algebras is defined, leading to the emergence of meaningful "representations". To achieve this aim, we have to identify the collective variables and parameters out of which macro-states and their transitions, in the form of basic perception-action schemata, come to be expressible. Once such identification is achieved, the constraints on the epigenetic landscape guiding the transformation of micro-quantities into macro-qualities show up (through the "slaving principle"), and the dimensional collapse of the ground B-dynamics into low-dimensional state spaces and state-transitions is on the right track to explanation.
Now, continuity of motions is a feature of action by contact, on which the localizability of causal interactions is grounded, but it is also a feature of time. As mentioned above, one of the reasons why the dynamical approach presents itself as essentially different from, and more adequate than, connectionism is that it makes reference to continuous time, whereas networks and their learning algorithms are indexed by discrete time, which is not the time of natural phenomena, if only because the "clock" pace varies from system to system. At best, connectionist models provide discrete approximations to continuity, but any such approximation is "essentially" limited.14 In Robert Port's words, "The touchstone of a thoroughly dynamical approach is the study of phenomena that occur in continuous time".
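The limitation just mentioned can be made tangible with a small numerical sketch (mine, not the author's): a discrete-time update rule can shadow a continuous flow ever more closely as its clock pace shrinks, but for any fixed pace the approximation error remains.

```python
import math

# Illustration: the continuous flow of dx/dt = -x is x(t) = e^{-t}.
# A discrete-time update x <- x + dt * (-x) (an Euler step, structurally
# like a network update indexed by discrete time) only approximates it.
def discrete_flow(x0, t, dt):
    x = x0
    for _ in range(int(round(t / dt))):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)                     # continuous-time value x(1), x0 = 1
coarse = discrete_flow(1.0, 1.0, 0.1)      # coarse clock pace
fine = discrete_flow(1.0, 1.0, 0.001)      # fine clock pace
# The finer pace tracks continuous time better, yet remains discrete:
# the error shrinks with dt but never vanishes for dt > 0.
```

This is the sense in which discrete approximation is "essentially" limited: refinement improves accuracy without ever delivering the continuum itself.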
Recovering "background" continuity is problematic if one starts from discrete computational systems physically embedded in the environment. By contrast, it is unproblematic to reach discreteness from continuity (as witnessed by the range of possible metrics for any given Riemannian manifold). In between, there are the constraints associated with relevant cyclic phenomena of different periods. The very possibility of their synchronization, as emphasized by Giuseppe Longo (this volume), is one such constraint.15
A simple example is at hand. By replacing the reals with the integers, the quotient map Z → Z12 is familiar through its implementation in a watch display. This map shows how a set of states or events can be re-parameterised by a finite cyclic group whose domain is included in that of the larger group Z, and yet Z12 is not embeddable into Z (while the additive group of the integers is a subgroup of the reals). In addition, the cycles induced by feedback loops in a given system can be linearly sequenced, as is no less familiar from the example of the watch. If we have to recognise the existence of what is referred to by Kelso (1995) as "circular causality", we also have to face the task of embedding such causal loops into a linear order that is no less causal. Of course, we don't want to say that the presence of plants caused the Earth's formation. Thus, rather than talking of backward or "circular" causality, the amalgamation of cycles with the arrow of time simply needs suitable coordinates, thus passing from a loop in the base, time-indexed, space to an anticipatory feedback in the fiber space. In other words, a helix is not a circle, but of course the vertical projection of an upward helix of constant radius onto the plane is a circle.
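The group-theoretic point can be spelled out in a few lines of code (my illustration; the names are mine):

```python
# The quotient map q: Z -> Z12, as implemented by a watch display.
def q(n):
    return n % 12

# Circular structure: 9 o'clock plus 5 hours reads 2 o'clock.
assert q(9 + 5) == 2

# Why Z12 is not embeddable into Z as a group: Z is torsion-free
# (no nonzero element summed with itself finitely often gives 0),
# while in Z12 twelve copies of 1 give 0.
assert q(12 * 1) == 0 and 12 * 1 != 0

# The helix: pair the cyclic reading with the linear count of elapsed
# hours. Projecting out the second coordinate recovers the circle.
helix = [(q(n), n) for n in range(30)]
```

The pairs in `helix` amalgamate the cycle with the arrow of time, exactly as the text's helix does: the loop lives in the first coordinate, the linear causal order in the second.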
The necessity of continuous time for establishing a demarcation line between connectionist and dynamical models has been doubted, for instance by noting that dynamical systems can have discrete state spaces - and that cellular automata already show good approximations to continuous evolution. Moreover, there are analog computers too. Does it follow from this sort of reply that it does not matter whether a cognitive system is a computational or a dynamical system? On the one hand, a dynamical systems picture of cognition allows a direct and smooth embedding of minds (in the extended sense) within the domain of natural science, which is not allowed by symbolic or subsymbolic models. On the other hand, a computational approach to high-level cognitive skills, as in the domain of grammar and logic, provides models whose effectiveness seems to be well beyond the range of differential equations.
The laws of physics are (differential) equations, whereas common assertions (in ordinary language) about the world are not. But physicists make use of a discrete set of notions in order to write such equations and word usage in every-day life takes account of continuous processes, though only with qualitative approximation. This intriguing dialectics of the continuous and the discrete is another relevant constraint on our picture of causality.
As objections have been raised to the actual novelty of dynamical systems with respect to connectionism, so objections can be raised to the exclusion of representations from any dynamics. What is the net gain in saying that all of cognition is but computation? Or dynamics? Since the methods of each science are, or can be, enriched by the methods of others, purely methodological debates risk becoming lucubrations on the sex of angels. What really matters is the growth of our understanding of cognitive skills. Such growth is helped by the fact that science is rich in feedback loops through the cross-fertilisation of different methods, exactly as cognition is through neural, perception-action and ecological models. Once again, there is an implicit consistency requirement which is heuristically fruitful, provided different layers of structure are not confused.
It is not by chance that classical representational theories of semantic competence put almost exclusive emphasis on nouns rather than verbs, since the meaning of most verbs concerns continuous motions. In fact, whereas the range of nouns is extremely vast, any cognitively relevant kind of motion can be categorised into a small family of basic topological patterns. If we start from dynamics in order to understand statics, then the stable reference of a noun emerges from algebraic invariants corresponding to the constrained extraction of patterns in the state space of a dynamical system, not the other way around. Then the very existence of semantics shows up as the tip of a self-sustaining iceberg of attractor basins. Category theory provides the most general and flexible framework to deal with these various levels of structure and their correlations. As a consequence, there is no need to follow Brooks (1991) in claiming that representations can be eliminated. For representations are now approached as non-static attractors, the task being rather that of explaining how representations emerge as conscious tags for the basins of a perceptual and sensory-motor dynamics.
If we concede that an increasing number of cognitive domains will be successfully framed in dynamical terms, the problem becomes: how is it that the self-representation of human beings as symbol-manipulators has been so successful for at least some high-level cognitive domains? Even if this were just an illusion, what made this illusion possible? I suggest the hypothesis that the high-level qualitative state-space of a cognitive agent is organised as an algorithmic structure in the same way as the topology of a space is coded in its path group.16 As discrete invariants provide essential (though possibly insufficient) information on a continuum, so attractors of a dynamical system are the source of conceptual patterns and their manipulation.
This hypothesis agrees with recent research on the computational power of dynamical systems whose architecture does not conform to that of a Turing machine (see Churchland & Sejnowski 1992), the implicit suggestion being that the Church-Turing Thesis might be rejected. The idea behind such a hypothesis goes beyond the remark, common among computer scientists, that the design of an actual computer is far from that of a Turing machine, or the remark that at least some features of mind call for analogical, rather than digital, computation. Much work has yet to be done to make clear in which sense top-down control is still achievable as we try to expand the range of computability by means that are acclaimed candidates for non-computable procedures (just think of chaotic systems).
At issue is not just which form of language is adequate (or handiest) for building a theory of mind. By saying that one form is better than the other for dealing with certain topics, and that the converse holds for other topics, we are back at square one. There are two questions to answer: (1) Can the (high-level) computational emerge from the dynamical according to a dynamical model of emergence? (2) Can the (low-level) dynamical be recovered from the computational according to a computational model of reduction?
If both answers are negative, the way is paved to restore dualism. If both are positive, we are back at the logical-empiricist picture of "equivalent descriptions". As far as I know, there is no evidence for dualism, and any general argument provided for the existence of globally equivalent descriptions is flawed. This is because, for any pair of supposedly equivalent descriptions, it is impossible to exclude that there is an empirically relevant context that makes the difference. In particular, as concerns the computational and the dynamical description of cognitive structure, there are phenomena that are explained in one way and not in the other. Moreover, there is no evidence that this is just a contingent state of affairs. (For instance, no computational model parallels the well-known differential equations for the flow of sodium and potassium through the neuronal membrane.)
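The membrane equations alluded to are the Hodgkin-Huxley system; as a concrete sketch (my illustration, not the author's), the FitzHugh-Nagumo model below is a standard two-variable reduction of Hodgkin-Huxley, integrated by explicit Euler steps. All parameter values are conventional textbook choices, not taken from the text.

```python
# FitzHugh-Nagumo: a reduced model of the voltage dynamics driven by
# sodium/potassium flow through the neuronal membrane.
#   dv/dt = v - v^3/3 - w + I   (fast, voltage-like variable)
#   dw/dt = (v + a - b*w)/tau   (slow recovery variable)
def fhn_step(v, w, I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01):
    dv = v - v**3 / 3 - w + I
    dw = (v + a - b * w) / tau
    return v + dt * dv, w + dt * dw

v, w = -1.0, 1.0
vs = []
for _ in range(20000):          # simulate 200 time units
    v, w = fhn_step(v, w)
    vs.append(v)
# With sustained input I in the oscillatory regime, the model settles into
# repetitive spiking: v keeps swinging between depolarised and
# repolarised values along a limit cycle.
```

The point of the example is exactly the one in the text: the explanatory content lies in the differential equations and their limit cycle, and the code above is only a discrete numerical shadow of that continuous dynamics.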
If we are not content with remaining at square one, we have to search for a phenomenon preventing a positive answer to both questions. Hence, there is a matter of fact about what explanatory setting has to be chosen. I would suggest a positive answer to (1) only: there is a necessary and sufficient level of vertical architecture that is causally explanatory and avoids both unbounded downward reduction and cognitivistic dualism of form and content. The resulting perspective is no longer confined to the realm of abstract possibilities, and recognising that cognition is inseparable from action further strengthens the evidence supporting this option.17 Further evidence is to be expected from research on the self-organisation of the brain, as a (sub-) system whose growth is coupled with sensory-motor feedbacks induced by action. Along a line more directly related to dynamical models, Freeman (1999) has proposed an approach to the neurophysiological grounds of cognition, intended to avoid both "reductive" materialism and cognitivism.
The main point remains that the cognitive structures of a living being are inseparable from the dynamics of bodily interactions with the environment's affordances. As rightly emphasised by Brian Hopkins (this volume), this does not mean that, setting mentalese and neuronese apart, all causal information is "ecological", already out there, ready to be picked up. Thus, for example, information about time-to-contact and surface texture constrains without determining action, which is, in turn, the source of further information. Within the dialectics of this virtuous circle, the agent's goal-oriented intention (as an anticipated selection of a future state, such as grasping an object) is another aspect that cannot be explained either in isolation or in static terms. Self-organisation, however, involves more than one level, and kind, of mathematical structure. Constraints and bounds on this many-layered system of mind narrow the window within which the consistency requirement of horizontal and vertical causality is satisfied.