## A procedure for solving a problem or achieving a goal in a finite number of steps

'Begin at the beginning', said the King of Hearts, 'and go on till you come to the end: then stop'. He thus provided the White Rabbit with an algorithm (Carroll 1865). The term 'algorithm' is derived from the Latinization of the name of one of the most creative mathematicians in medieval Islam, Al-Khwarizmi (780-c.850; Boyer 1989; Colish 1997). In modern times algorithmics is a field fundamental to the science of computing (Harel 1987). In the neurosciences algorithms are encountered in multiple contexts (Marr 1982; Hinton 1989; Churchland and Sejnowski 1992). One of these is in *models of biological learning. It is noteworthy that in discussions of such models the terms 'law', 'rule', and 'algorithm' are sometimes intermixed. It is therefore useful to distinguish among them. A 'law' is a scientifically proven formal statement with theoretical underpinning that describes a quantitative relationship between entities. Strictly speaking, there are not yet bona fide 'laws' specific to the discipline of biological memory. It is sensible, therefore, not to misuse the term. 'Rule' describes a standard procedure for solving a class of problems. It is hence close to 'algorithm'. However, they are not equivalent. 'Algorithm' is a formal term referring to a detailed recipe, whereas 'rule' may be vaguer. Furthermore, 'rule' may connote knowledge by the executing agent of the input-output relationship; 'algorithm' does not. A system can execute algorithms perfectly without having the faintest idea what it is doing, why it is done, and what the outcome is likely to be. As there is no *a priori reason to assume that biological learning at the *synaptic or circuit *level is governed by a knowledgeable supervisor (*homunculus), it does not make much sense to claim that synapses or circuits follow 'rules'; rather, they execute algorithms.
Finally, an assumption (usually tacit) of the neuroscience of learning, and an incentive for the analysis of *simple systems, is that a great variety of biological learning systems, in different species, share general laws/rules/algorithms. This posit makes sense if evolution is considered, but is definitely not itself a law, and its generality must be scrutinized in every experimental system anew (e.g. Seligman 1970).

The most popular algorithms in neuroscience are synaptic ones, associated with a postulate of synaptic *plasticity dubbed 'Hebb's postulate'. In its original version it states the following: 'When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased' (Hebb 1949; for rudimentary precedents see James 1890; Kappers 1917). In a Hebbian synapse, the increase in synaptic weight is thus a function of the correlation of pre- and postsynaptic activity. Hebb postulated the process to account for experience-dependent modification of local nodes in *cell assemblies. In formal notation, Hebb's postulate is of the type wij(t+1) = wij(t) + Δwij(t), where Δwij(t) = f[ai(t), aj(t)]; wij is the strength ('weight') of the connection from presynaptic unit uj to postsynaptic unit ui, Δwij(t) is the change in synaptic strength, and aj(t) and ai(t) are measures of pre- and postsynaptic activity, respectively (Brown et al. 1990). Each step in the algorithm is thus a computation of the aforementioned type, and the algorithm consists of proceeding step-by-step over time (at a more *reduced level, the Hebbian computation itself is based on multiple subordinate algorithms, such as summation and multiplication, but this need not concern us here). The original 'Hebbian' became a generic term as well as a reference point for many variants of synaptic modification algorithms. Terms composed of 'Hebb-plus-a-modifier', marking their relationship to the Hebbian, are common, and sometimes a bit confusing. For example, 'anti-Hebb' is used to describe rather different types of algorithms that culminate in a decrement of synaptic efficacy (e.g. Lisman 1989; Bell et al. 1993; *long-term potentiation, *metaplasticity).
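The step-by-step character of the Hebbian computation can be sketched in a few lines of code. Hebb's postulate does not fix the function f, so the simple product form f[ai(t), aj(t)] = η·ai(t)·aj(t) and the learning rate η used here are illustrative assumptions, not part of the postulate itself.

```python
# A minimal sketch of the Hebbian update w_ij(t+1) = w_ij(t) + f[a_i(t), a_j(t)],
# assuming (as one common, illustrative choice) f = eta * a_i * a_j.

def hebbian_step(w, a_pre, a_post, eta=0.1):
    """One step of the algorithm: weight change proportional to the
    product of pre- and postsynaptic activity."""
    return w + eta * a_post * a_pre

# Repeated correlated pre- and postsynaptic firing strengthens the synapse.
w = 0.5
for _ in range(3):
    w = hebbian_step(w, a_pre=1.0, a_post=1.0)
# w has grown from 0.5 to 0.8: efficacy increases with correlated activity,
# and the algorithm is simply this computation repeated over time.
```

Note that if either activity term is zero, the product form yields no change, capturing the requirement that cell A "takes part in firing" cell B.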
Over the years multiple attempts have been made to demonstrate how Hebbian algorithms might be implemented in synapses in "development and learning (e.g. Lisman 1989; Fregnac and Shulz 1994; Buonomano and Merzenich 1998; Lechner and Byrne 1998; but see a critical review in Cruikshank and Weinberger 1996).

A discipline in which synaptic learning algorithms became particularly popular and useful is that of artificial neural networks (ANN; Fausett 1994). These are artificial systems (i.e. either abstract *models or the physical implementations of such models) composed of a large number of interconnected computational units ('neurons'). Signals are passed between neurons over connections, which manipulate the signal in a characteristic way. Each neuron applies an activation function to its net input to determine its output signal. Specific networks are characterized by the pattern of their connectivity ('architecture'), the algorithm that determines the weights on the connections, and the activation function of the neurons. The collective behaviour of such networks can mimic various dynamic properties of neuronal circuits, such as *perception and learning. Certain subclasses of ANN use Hebbian algorithms to achieve 'unsupervised' learning (see above) in local nodes. Other algorithms implement 'supervised' learning, in which some type of global information or 'instructor' informs the node what the desired end-point is. An algorithm of the latter type that has gained considerable popularity is 'back-propagation' (or 'back-propagation of errors'). Here the error for each unit (the desired minus the actual output) is calculated at the output of the network and recursively propagated backward into the network, so that ultimately the weights of the connections are adjusted to approach the desired output vector of the network (Rumelhart et al. 1986a).
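The back-propagation scheme described above can be sketched for a toy network. The layer sizes (2 inputs, 2 hidden units, 1 output), logistic activation function, learning rate, and single training pattern below are all illustrative assumptions; only the general recipe (compute the output error, propagate it backward, adjust the weights) comes from the text.

```python
import math
import random

# A minimal sketch of back-propagation of errors for a 2-2-1 network.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, target, W1, W2, eta=0.5):
    # Forward pass: each unit applies the activation function to its net input.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    # Output error: desired minus actual, scaled by the activation slope.
    delta_out = (target - y) * y * (1.0 - y)
    # Recursively propagate the error backward to the hidden units.
    delta_h = [delta_out * W2[j] * h[j] * (1.0 - h[j]) for j in range(len(h))]
    # Adjust the weights toward the desired output.
    W2 = [W2[j] + eta * delta_out * h[j] for j in range(len(h))]
    W1 = [[W1[j][i] + eta * delta_h[j] * x[i] for i in range(len(x))]
          for j in range(len(h))]
    return W1, W2, y

random.seed(1)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(2)]
for _ in range(200):
    W1, W2, y = train_step([1.0, 0.0], 1.0, W1, W2)
# After repeated trials the network's output y approaches the target 1.0.
```

The key point is the order of operations: the hidden-layer errors are computed from the output error *before* the output weights are updated, which is what makes the propagation "backward".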

A number of algorithms have been proposed to underlie learning at the more global levels of brain and behaviour (Thorndike 1911; Dickinson 1980; Wasserman and Miller 1997). An influential one is associated with the Rescorla and Wagner model of learning (1972; for precursors, see Hull 1943; Bush and Mosteller 1951). Basically, Rescorla and Wagner posited that in *associative learning, changing the associative strength of a stimulus with a *reinforcer depends upon the concurrent associative strength of all present stimuli with that reinforcer; if in a given training trial the composite associative strength is already high, learning will be less effective. In formal notation, Rescorla and Wagner proposed that ΔVX = αXβR(λR − VΣ), where ΔVX is the change produced by a given training trial in the strength of the association (VX) between stimulus X and reinforcer R; αX and βR are learning-rate parameters (associability parameters) representing properties such as the intensity and salience of X and R; λR is the maximal conditioning supportable by R; and VΣ is the total associative strength, with respect to R, of all the stimuli present on the aforementioned trial. The expression (λR − VΣ) can be said to represent the disparity between expectation and reality on a given trial; the smaller it is, the weaker the learning. In other words, as many a reader might have concluded from their own experience, the amount of learning is proportional to the amount of *surprise (see also *attention). Here again, each step in the algorithm is a computation of the aforementioned type, and the algorithm consists of proceeding step-by-step over time. The Rescorla-Wagner model can explain multiple behavioural phenomena in conditioning, including cases of *cue revaluation (Dickinson 1980; Wasserman and Miller 1997; *classical conditioning).
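The trial-by-trial character of the Rescorla-Wagner update, ΔVX = αXβR(λR − VΣ), can be sketched directly. The parameter values below (α = 0.3, βR = λR = 1.0) are illustrative assumptions; the structure of the update follows the equation above.

```python
# A minimal sketch of the Rescorla-Wagner trial update.
# Illustrative parameter values: alpha = 0.3, beta_R = lam_R = 1.0.

def rw_trial(V, present, alpha, beta_R=1.0, lam_R=1.0):
    """Update the associative strengths V of all stimuli present on one trial."""
    V_sum = sum(V[s] for s in present)   # composite associative strength, V_sigma
    surprise = lam_R - V_sum             # disparity between expectation and reality
    for s in present:
        V[s] += alpha[s] * beta_R * surprise
    return V

V = {"light": 0.0, "tone": 0.0}
alpha = {"light": 0.3, "tone": 0.3}
for _ in range(5):
    rw_trial(V, ["light"], alpha)        # light alone: V grows toward lam_R,
                                         # with ever smaller increments
# Blocking: on a subsequent compound light+tone trial, V_sum is already high,
# so the tone acquires only a small associative strength.
rw_trial(V, ["light", "tone"], alpha)
```

The compound trial at the end illustrates the model's signature prediction: because the composite strength VΣ is computed over *all* present stimuli, a well-trained light leaves little surprise for the added tone to exploit.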

Over the years multiple attempts have been made to account for the operation of selected brain regions by proposing identified synaptic and circuit algorithms (for notable examples, see Marr 1969; Albus 1971; Zipser and Andersen 1988). At the current state of the art in brain research, synapses and model circuits still provide a more suitable arena than whole real-life circuits for identifying and testing learning algorithms, because the input-output relationships of real-life brain circuits are seldom understood in reasonable detail, if at all. Still, advances are being made at more global levels of brain function as well; for example, Schultz et al. (1997) report that in the course of multitrial instrumental training, *dopaminergic activity in the primate brain encodes expectations and prediction errors for reward. The dopaminergic neuromodulatory system may thus be part of a circuit that performs computations of the type (λR − VΣ) in the Rescorla-Wagner model.

New classes of algorithms are expected to emerge at the cellular, circuit, and system levels with the intensification of the mechanistic revolution in biology. One of these days, much of descriptive neurobiology is bound to give way to a science of biological engineering, in which algorithms and quantitative relations will become the rule rather than the exception. This has profound implications concerning the proper education of future neurobiologists (e.g. Alberts 1998).

Selected associations: Learning, Models, Level, Plasticity, Synapse
