The Franciscan theologian William of Ockham (also spelled Occam; 1285-1347), later of Oxford, was a most influential medieval scholastic (McCord Adams 1987; Colish 1997). In his analysis of the Universe and of the human ability to perceive it, Ockham proposed that phenomena are better explained in terms of the simplest causes rather than more complex ones. Admittedly, he was not the first to suggest such a principle of parsimony. Moreover, his principle was rather qualified: 'No plurality should be assumed', said Ockham, 'unless it can be proved (a) by reason, or (b) by experience, or (c) by some infallible authority' (McCord Adams 1987), meaning that the Bible, the Saints, and the Church can make exceptions, and, furthermore, that God is not in the game: 'There are many things that God does with more that he could do with fewer' (ibid.). In spite of his hesitation to regard the principle as a sweeping universal, Ockham's name became associated with it forever. Almost half a millennium later, the French philosopher de Condillac referred to Ockham's principle *metaphorically as 'the razor of the Nominalists',1 paving the way to the current idiom 'Ockham's razor' (Safire 1999). It became and remained 'a most fruitful principle in logical analysis' (Russell 1945).
Ockham's razor has a derivative in the behavioural sciences, termed Lloyd Morgan's canon: 'In no case may we interpret an action as the outcome of the exercise of a higher psychical faculty, if it can be interpreted as the outcome of the exercise of one which stands lower in the psychological scale' (Morgan 1894). Lloyd Morgan's canon was a cautionary reaction to overenthusiastic accounts of animal intelligence, which had come to dominate late-nineteenth-century psychology in the wake of Darwin's theory of evolution: 'There is no fundamental difference between man and the higher mammals in their mental faculties. With respect to animals very low on the scale ... their mental powers are much higher than might have been expected' (Darwin 1871). *Anthropomorphism became a trend in leading circles of the discipline of animal behaviour. It attained its pinnacle in the *classic book on animal intelligence by Romanes (1882), in which the author based generous interpretations of animal cognition on anecdotes obtained from secondary sources. Against that background, a canon of parsimony was utterly justified.
To the neuroscientist at the turn of the twenty-first century, Ockham's razor is still a useful guideline, but in practice it is easier to quote than to use, and in any case must not be followed blindly. The major initial successes of modern biology relied on an Ockham's-razor-guided world view; modern *neurogenetics drew from the same conceptual source (Benzer 1967). *Models of artificial neuron-like networks focused in their early days on minimalistic neuronal units, but soon afterwards their designers appreciated the need for at least partial mimicking of real-life complexity (Segev 1992). The problem is that, in general, when one scratches beneath the surface, biological *systems display much more complexity than first expected. For example, a brief period of naive hope that a map of *intracellular signal transduction cascades was around the corner gave way to the realization that the intricacy and complexity of these signalling networks and their interactions are overwhelming, and that their analysis requires radical rethinking not only of the methodologies but also of the education of biologists (Alberts 1998). Enzymes and *receptors are amazingly intricate machines with multiple regulatory sites and permutational states. The same holds for the regulatory apparatuses of gene expression (Lewin 1994; *immediate early genes). The living cell is hence packed with highly elaborate miniature machines, whose understanding in great detail is the subject matter of gargantuan efforts (Alberts and Miake-Lye 1992; Bray 1995). And on top of it all, organisms and neural preparations initially regarded as *'simple systems' soon ceased to be simple (Byrne and Kandel 1996; Wolpaw 1997; see also Wasserman and Miller 1997 on the demise of simple explanations of *classical conditioning).
Even the rejection of anthropomorphism is not so trivial nowadays; for example, is the suggestion that in certain conditioning protocols the rabbit becomes *consciously aware of the associated events in conflict with Ockham's razor (Clark and Squire 1998; *declarative memory)?
The difficulty beneath all this is that we simply do not know whether the complexity that we detect is real and non-parsimonious. In Ockham's own words, we cannot determine whether God did 'with more that he could do with fewer' (see above). Neither can we simply conclude that a biological system, assumed to be moulded by aeons of opportunistic evolution, evolved to take the simplest route to its goal. The remedy to this dilemma is to identify what information a system encodes at each *level of its organization, be it a molecular ensemble within the neuron or a neuronal ensemble within the brain, and then to find out whether the so-called 'complexity' is indeed essential for the representation of the relevant information. If it turns out that many detected variations in individual elements, say enzymes in signalling networks, are irrelevant to the encoding of critical information (e.g. Barkai and Leibler 1997), then there is better hope for a simple explanation of the operation of the seemingly complex system. Admittedly, it is hard to believe that nature has taken all these pains to ensure highly intricate regulation of proteins and cells only to end up with systems in which this complexity does not matter. We are thus back to square one.
So maybe the solution to the dilemma presented by Ockham's razor lies at the pragmatic level: we should adhere to the maxim of parsimony merely as a reminder that we had better focus first on the simplest facets of our experimental systems, because in real life biology is too complex for us to approach otherwise. And the overall take-home message is that it is not easy to handle biological systems with tools borrowed from logic unless we understand what the logic of the system is.2
Selected associations: Anthropomorphism, Clever Hans, Declarative memory, Observational learning, Reduction
1Nominalists deny the existence of universals. Universals are *generalizations of knowledge, abstract properties and relationships, which contrast with particulars, i.e. instantiated objects. Generally speaking, nominalists believe only in the existence of particulars, whereas their opponents, the realists, believe in addition in universals (Armstrong 1989). Compare also 'token' vs. 'type' in *system. Nominalists and realists come in multiple versions, but a discussion of these variants in the present context would certainly defy Ockham's razor.
2Which brings us back to the question of whether, to fully understand, we must be able to fully produce the *system; see *criterion, *observational learning.