Model

1. An abstract or concrete *system that represents another, usually more complex or less familiar, system.

2. A schematic representation that accounts for properties of a system, often used to infer additional properties and to predict the outcome of manipulations.

All models (modus, Latin for 'a measure', 'standard of measurement') are analogies (things similar but not identical; from the Greek for 'proportionate'). Some are only *metaphors, others are sets of quantitative relations among components of the modelled system, and yet others fall somewhere in between. In the context of the present discussion, it is useful to distinguish among (a) mathematical models, (b) diagrammatic models, and (c) the use of the term 'model' to describe a *simple system that could potentially illuminate phenomena of interest in what is considered a more complex system. Although different in type and complexity, all models share some elementary methodological aims, pros, and cons.

Basically, models are heuristics devised to explain and predict (Lakatos 1978; *paradigm). A profound question is whether we can do without them, i.e. whether they are merely mental aids or an inherent necessity for the human understanding of nature (Duhem 1914; Hesse 1963). Some will argue that even what seems to us the most accurate depiction of a natural phenomenon is still a schematic model distilled through human cognition (*internal representation). Another issue is when a detailed representation loses its 'modelness', and hence its usefulness as a simplifying and explanatory agent. Borges (1964) has something to say about it:

... In that Empire, the Art of Cartography reached such Perfection that the map of one Province alone took up the whole of a City, and the map of the Empire, the whole of a Province. In time, those Unconscionable Maps did not satisfy and the Colleges of Cartographers set up a Map of the Empire which had the size of the Empire itself and coincided with it point by point. Less Addicted to the Study of Cartography, Succeeding Generations understood that this Widespread Map was Useless and not without Impiety they abandoned it to the Inclemencies of the Sun and the Winters. In the Deserts of the West some mangled Ruins of the Map lasted on, inhabited by Animals and Beggars; in the whole Country there are no other relics of the Disciplines of Geography.

It is not uncommon to find great minds captivated by their pet model to such an extent that they lose sight of the initial question and reach a stage where the model itself must be modelled in order to simplify it. This, of course, is still an utterly legitimate and potentially rewarding intellectual pursuit, but its relevance to the original research objective deserves scrutiny.

1. Mathematical models. These usually combine an attempt to *reduce biology to the exact sciences with the quest for a powerful descriptive and predictive language. In disciplines such as physics, models could refer to a theory (Nagel 1979). As there are not yet truly comprehensive formal theories with theory-derived laws in the neurosciences, even the most 'formal' models of learning and memory are not formal in the full sense of the term, but are rather a hypothesis or experimental *generalization expressed in mathematical notation. In the field of memory research, there are influential formal models at different *levels of analysis. An example at the level of *synaptic *plasticity is provided by models that employ the Hebb type of *algorithm (Hebb 1949), which assumes that the alteration in synaptic weight is a function of the correlation of pre- and postsynaptic activity. An example at the level of behavioural learning is provided by models that employ the Rescorla-Wagner algorithm (Rescorla and Wagner 1972), which proposes that in associative learning, the change in the associative strength of a stimulus with a *reinforcer depends upon the concurrent associative strength of all present stimuli with that reinforcer. Many other examples exist of semiquantitative models at the system and behavioural levels (e.g. Raaijmakers and Shiffrin 1992).
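For concreteness, the following is a minimal sketch, not part of the original entry, of the two update rules just mentioned; the parameter values (learning rate, stimulus salience, reinforcer asymptote) and the blocking demonstration are illustrative assumptions, not canonical choices.

```python
def hebb_update(w, pre, post, eta=0.1):
    """Hebb-type rule: the change in synaptic weight is a function of the
    correlation (here simply the product) of pre- and postsynaptic activity."""
    return w + eta * pre * post


def rescorla_wagner_trial(V, present, lam, alpha=0.3, beta=0.5):
    """Rescorla-Wagner rule: on each trial, every present stimulus changes its
    associative strength in proportion to the difference between the asymptote
    supported by the reinforcer (lam) and the summed associative strength of
    ALL stimuli present on that trial."""
    total = sum(V[s] for s in present)   # concurrent associative strength
    for s in present:
        V[s] += alpha * beta * (lam - total)
    return V


# Illustration: after a light (L) is trained alone to near asymptote, a tone (T)
# added in compound gains little strength -- the blocking effect the model captures.
V = {'L': 0.0, 'T': 0.0}
for _ in range(20):
    rescorla_wagner_trial(V, present=['L'], lam=1.0)
for _ in range(20):
    rescorla_wagner_trial(V, present=['L', 'T'], lam=1.0)
print(V)   # V['L'] is close to 1.0, V['T'] remains small
```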

A particularly important class of mathematical models is that of artificial neural networks (ANN; McCulloch and Pitts 1943; Amit 1989; Fausett 1994; Mehrotra et al. 1997). These models deal with the collective behaviour of systems that consist of a large number of interconnected computational units ('neurons'). Signals are passed between units over connections, which modulate the signal in a characteristic way (typically by weighting it). Each unit applies an activation function to its net input to determine its output signal. Networks are characterized by the architecture of connectivity, the algorithm that determines the weights on the connections, and the activation function of the units. The collective behaviour of such networks appears to mimic various dynamic properties of neuronal circuits, such as the representation of *percepts, learning, and *retrieval. Certain types of ANN are implemented in technological systems that need to perceive, recognize, and learn from experience.
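The toy network below, again only a sketch with assumed layer sizes and a sigmoid activation, illustrates the three ingredients listed above: a connectivity architecture, weights on the connections, and a per-unit activation function (a learning rule for adjusting the weights is omitted).

```python
import math
import random


def sigmoid(x):
    """Activation function: maps a unit's net input to its output signal."""
    return 1.0 / (1.0 + math.exp(-x))


class TinyNetwork:
    """A fully connected feedforward architecture with one hidden layer."""

    def __init__(self, n_in, n_hidden, n_out):
        # Connection weights, randomly initialized; a learning algorithm
        # (e.g. Hebbian or error-driven) would adjust these with experience.
        self.w_hidden = [[random.gauss(0.0, 0.5) for _ in range(n_in)]
                         for _ in range(n_hidden)]
        self.w_out = [[random.gauss(0.0, 0.5) for _ in range(n_hidden)]
                      for _ in range(n_out)]

    def forward(self, x):
        # Each unit sums its weighted inputs (its net input) and applies the
        # activation function to determine its output signal.
        hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)))
                  for row in self.w_hidden]
        return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
                for row in self.w_out]


net = TinyNetwork(n_in=4, n_hidden=3, n_out=2)
print(net.forward([1.0, 0.0, 0.5, 0.2]))
```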

2. Diagrammatic models. These are rather common, often intended as a didactic tool or as a rudimentary functional explanation in familiar terms. Such models run the danger of being perceived as the real world rather than an analytic tool. Textbooks and papers provide many examples of such models: block and flowchart diagrams of *intracellular signal transduction cascades (Figure 39, p. 136), graphs of *phases in *acquisition, *consolidation, and *retrieval of learned information, or *maps of interconnecting brain circuits (e.g. *limbic system). Models of this kind commonly echo the contemporary technological *zeitgeist, borrowing, for example, from electrical engineering and computer science (*metaphor).

3. Simple systems as models. Here 'model' is usually a figure of speech more than a real model. The justification for the usage of the term is sometimes questionable and the outcome of this usage potentially problematic. There are two major types of so-called 'simple models'. One is organisms or naturally occurring biological phenomena that are used to cast light on properties of other organisms or phenomena. Examples include the use of animal systems in studying human cognition and disease (e.g. *dementia), the use of one species as a 'model' for learning and memory in remote species, or of a simple type of learning as a 'model' for other, more complex types of learning (for selected examples, see Thompson and Spencer 1966; Clause 1993; Suppes et al. 1994; D'Mello and Steckler 1996; Eichenbaum 1997a; Eisenstein 1997; Gallagher and Rapp 1997; Milner et al. 1998; Robbins 1998; also *Aplysia, *Drosophila, *monkey, *mouse, *rat). It should not be forgotten that 'simple organisms' did not evolve to serve as models for other organisms. Although one does expect similarity to increase in inverse proportion to phylogenetic distance, rats and mice are not models for humans; they are rats and mice. If forgotten, this trivial truth may lead to erroneous conclusions about the properties of human brain, behaviour, and pathology, and promote fishing expeditions for *red herrings.

Another caveat concerning the use of simple organisms as models involves the distinction between homology and analogy. Homology (Greek for 'same reason', 'in agreement') refers to having the same phylogenetic or ontogenetic origin but not necessarily the same form or function.1 Analogy refers to having the same form or function but not the same phylogenetic or ontogenetic origin. Whether one should expect to unveil homologies or analogies depends on the species used as a model, the physiology and behaviour modelled, and the level of analysis. For example, using invertebrate learning to model mammalian learning may unveil homology at the molecular and cellular level, but at best only analogy at the circuit and behavioural level.

The other type of simple system commonly referred to as a model involves artificial manipulation of biological systems. A most popular example is *long-term potentiation (Bliss and Collingridge 1993). Such model systems unravel processes and mechanisms that might serve as candidate components of the real thing, e.g. of synaptic plasticity in the behaving brain under physiological conditions. A common problem with this type of model is that its practitioners may become trapped in the *homunculus fallacy: trusting that the *reduced model system should display the properties of the complex system that it is supposed to model. Parts of a whole are not expected to display the properties of the whole, and if they do, one should suspect that the simplification has gone too far.

Despite all the caveats, models of all three aforementioned types are indispensable, both as conceptual and as practical tools. They unveil phenomena, processes, and mechanisms that could later be pursued in the more complex and less tangible system. The trick is probably in always remembering to distinguish between the role of a system as a model and what it really is, and in having the courage to abandon the model, in spite of the great investment and affection, when its use raises too many discrepancies with the original research goal.

Selected associations: Metaphor, Map, Paradigm, Simple system, System

1. For the history and usage of this term, see Donoghue (1992).
