Francis Bacon, trusting that 'the subtlety of nature is greater many times over than the subtlety of the senses and understanding' (Bacon 1620), distinguished four classes of 'idols' (illusions) that beset the human mind: Idols of the 'Tribe' (inherent in the *a priori limited capacity of the species' senses and mind), of the 'Cave' (resulting from the individual's education and experience), of the 'Market-Place' (originating in social influence and public opinion), and of the 'Theatre' (stemming from dogmas and illusory knowledge). The analysis of error and bias in science has since become richer and more sophisticated, but the basic illusions still haunt us: those that stem from the senses, faulty logic, acquired prejudices, and suffocating *paradigms. Science has learned to cope with the shortcomings of the senses, yet finds it rather difficult to struggle with other faults of human nature, be they conscious or not.
Bias can be explicit (definition 1) or implicit (definitions 1 and 2). But even if explicit, it must be distinguished from deliberate distortion, which falsifies the data. That deplorable disease will not be discussed further here. At the other end of the spectrum stand the 'idols of the tribe', the elementary sensory and cognitive illusions that bias our picture of reality and usually transcend culture, education, and profession (Gregory 1966; Kahneman and Tversky 1982); they will not be referred to here either.
In the context of the present discussion, it is methodologically useful to distinguish four major domains in which bias could emerge: the behaviour of the experimental *subject, that of the experimenter, the interaction between the subject and the experimenter, and the scientific community that judges the research project. A notable source of potential *perceptual, *attentional, mnemonic, and judgement bias in the subject is the emotional state (Power and Dalgleish 1997). For example, depression imposes a bias toward recalling unpleasant rather than pleasant memories (Clark and Teasdale 1982; see also 'mood congruency' under *state-dependent learning). In some situations, interactions unknown to the experimenter among individual subjects in a shared experimental situation could lead to biased responses by the subjects and *artefacts on the side of the experimenter (e.g. Heyes et al. 1994; *observational learning). In addition, multiple sources of bias stem from implicit interactions of the subject with the *context and the experimenter. In many behavioural experiments, the subject is more actively involved than the experimenter is inclined to admit. The subject pays attention to the experimental demands, may try to extract *cues about the objective of the test, reacts to involuntary signs emitted by the experimenter (*Clever Hans), and sometimes attempts to comply with a perceived goal (Pierce 1908). The cues that convey an experimental 'hypothesis' to the subject and hence influence the subject's behaviour are termed 'demand characteristics' (Orne 1962). Their influence on the behavioural outcome of an experiment has mostly been studied in humans, but they clearly exist in experiments involving other species as well. Demand characteristics may lead to biased responses by the subjects and to potential artefacts on the side of the experimenter. And finally, the experimenter him- or herself is a potential source of bias (Rosenthal and Rubin 1978; Martin and Bateson 1993).
An almost trivial source is self-deception, motivated by a wish to obtain certain results but not others (a potential negative spin-off of *scoopophobia). In such situations, minor acts of sampling bias and even data selection throughout the experiment could accumulate into a significant distortion of the overall outcome.
Proper *controls in the experimental design are a must if one wishes to minimize bias due to the subject, the experimenter, or experimenter-subject interactions. For example, some facets of bias can be reduced by strictly following a 'blind' design, in which the person making the measurements does not know the treatment each subject has received until after the experiment is over. In human experiments (such as those that test the effect of drugs on behaviour), a 'double-blind' design should be followed, in which the subject, too, is unaware of the treatment. Furthermore, experimenters must be well aware of their own behaviour. For example, the location and the bodily gestures of the experimenter could markedly bias the behaviour of a *rat or *mouse in a *maze. The design and execution of reliable learning and memory experiments is a complex mixture of science and art, and at least the science part (Martin and Bateson 1993; Kerlinger and Lee 2000) should be mastered before the first experiment is trusted.
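The logistics of a blind design can be caricatured in a few lines of code. The sketch below is purely illustrative (the function and variable names are invented here, not drawn from the entry): treatments are assigned at random, the scorer is handed only opaque subject codes, and the key linking codes to treatments stays sealed until all measurements are recorded.

```python
import random

def blind_assignments(subject_ids, treatments, seed=None):
    """Assign treatments and return (blinded_codes, sealed_key).

    The scorer sees only the opaque codes; the sealed key is opened
    only after all measurements have been recorded.
    """
    rng = random.Random(seed)
    # Randomly allocate a treatment to each subject.
    assignments = [rng.choice(treatments) for _ in subject_ids]
    # Opaque codes replace identifying labels on everything the scorer sees.
    codes = {sid: f"S{idx:03d}" for idx, sid in enumerate(subject_ids)}
    # The key mapping codes back to treatments is kept sealed during scoring.
    sealed_key = {codes[sid]: trt for sid, trt in zip(subject_ids, assignments)}
    return codes, sealed_key
```

In a double-blind variant, the person administering the treatment would likewise receive only coded doses prepared by a third party, so that neither subject nor experimenter can infer the condition.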
But the ordeal of overcoming bias in the experimental design and in its execution is not over even when the manuscript is finally ready for publication. The idols of the market-place and of the theatre could still pose substantial obstacles. The attitude of referees and editors is sometimes biased by *zeitgeist, by a prevalent conceptual paradigm, or, even worse, by the fame of the senior author or the institution in which the work was done. The refusal over years to accept papers on *conditioned taste aversion, because it seemed to defy some ideas about what conditioning should be (Garcia 1981; *classical conditioning), provides but one example of referees and editors being biased by a conceptual paradigm. In other cases, the wish of referees and editors to appear politically correct in their scientific milieu or in society at large may also introduce bias into the scientific literature.
Selected associations: Control, Culture, Paradigm, Subject