20.1 The Problem of Global Optimization

Problems of finding maxima and minima (optimization) are common in biological modeling. We have already encountered them in the context of parameter estimation, where we minimized the error between data and a function. Optimization also arises in models of the evolutionary process (e.g., optimal foraging), because a valuable working hypothesis when addressing questions of adaptation in organisms is that the observed traits maximize individual Darwinian fitness (i.e., reproductive contribution to future generations). A related application arises when modeling biological systems as control systems: systems that are able to adjust parameters in order to maintain system dynamics within some specified operating range. For example, in mammals, heart rate increases when oxygen demand rises through physical exertion, so that a constant amount of oxygen is delivered to vital organs. This can be considered an optimization problem, since the system is "attempting" to minimize deviations of oxygen delivery rates from normal (acceptable) values. A third application arises when dynamic models must adjust flow rates of physical quantities between compartments so as to adhere to a physical law. For example, Caldwell et al. (1986, Chapter 17) modeled radiant heat absorption by leaves as part of a canopy-level photosynthesis model. Since there was no analytical solution for the heat flow into each layer of leaves in the canopy given only the input radiation, they used an iterative approximation that minimized the difference between the energy input at the top of the canopy and the total amount of energy absorbed, based on a model of the effects of higher leaf levels on lower ones.

The above applications share a common feature: they are all based on real-valued, continuous functions. That is, least-squares minimization in parameter estimation, individual fitness as a function of foraging efficiency, and balancing energy budgets all fit this pattern. When the functions to optimize are "simple," there are robust and well-studied methods (e.g., nonlinear regression). There are, however, two situations in which these methods have difficulty: global optimization of nonlinear functions with many local extrema, and combinatorial optimization.

Hill-climbing methods such as the Nelder-Mead simplex will usually find the closest extremum, but this may not be the global optimum. In Sec. 7.4, we recommended that the locality of the solution be tested by starting the simplex at several different initial parameter values. However, as we will see below, there are better approaches. Combinatorial optimization problems are those in which the quantity to optimize is not a simple, real-valued function of continuous parameters. As the name suggests, the goal of these problems is not to find parameter values, but to find the best way to combine objects. A classical example is the traveling salesperson problem, in which the task is to find the sequence of cities to visit that minimizes total distance traveled. Both of these aspects of optimization are hard, and new computational techniques using biological metaphors have been developed to deal with them. Foremost among these are methods based on analogies with evolution by natural selection.

New optimization techniques are useful only if they address interesting biological questions. In this chapter, the models used as examples will attempt to answer the following. (1) What are the best days to irrigate a field crop to get the highest yield and use the least water? (2) What set of parameters best predicts dry mass accretion in a cotton crop simulator? (3) To survive when competing for food, is it better for bean weevils to stay and fight with each other or to eat fast and run? (4) How can a lizard know when to chase an insect and when to pass it by? All of these questions are optimization problems that can be answered with evolutionary computation.

Based on the observation that biological evolution by means of natural selection produces organisms progressively better fitted to a given environment, several computer scientists [e.g., Fogel et al. 1966; Holland 1975 (re-issued and updated in 1992)] proposed an analogy between natural selection and general optimization algorithms. While there are many biological situations in which we would not expect organisms to be optimally adapted to their environment, the general relationship is strong enough to encourage computer scientists. The basic analogy is that potential solutions to an optimization problem are analogous to the phenotypes (observable traits) of organisms, and the proximity of a potential solution to the true solution is analogous to Darwinian fitness. If there are differences within a population of potential solutions, then some will be "fitter" than others and, as in biological evolution, the best potential solutions will contribute the most to the next iteration of the algorithm, just as more fit organisms contribute more offspring to the next generation.

A large family of algorithms uses this basic analogy and has been subsumed under the label evolutionary computation. The algorithms differ in their interpretations of the basic elements of the analogy and in their computer implementations. Below, we briefly survey some of the alternatives and describe in more detail one especially popular approach.

Let P(k) denote a population of N potentially optimal solutions at algorithm iteration k. Most approaches to evolutionary computation use the following general evolutionary algorithm (Bäck 1994).

1. Initialize P(0) with random solutions.

2. Evaluate the fitness of each element of the initial P(0).

3. Recombine elements of the current P(k) to form a new P'(k).

4. Mutate elements of P'(k) to form P''(k).

5. Evaluate the fitness of each element of P''(k).

6. Select the best of the P''(k) to form a new P(k).

7. Repeat steps 3-7 with k = k + 1 until a stopping criterion is met.
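The seven steps above can be sketched in Python. The particular operators used here (arithmetic crossover for recombination, Gaussian mutation, truncation selection) and the example fitness function are illustrative choices, not prescribed by the general algorithm:

```python
import random

def evolve(fitness, n_pop=30, n_genes=2, bounds=(-5.0, 5.0),
           mut_sigma=0.3, n_iter=200, seed=0):
    """Generic evolutionary algorithm: initialize, recombine, mutate, select."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Steps 1-2: initialize P(0) with random solutions (fitness of each
    # element is evaluated via the sort key below).
    pop = [[rng.uniform(lo, hi) for _ in range(n_genes)] for _ in range(n_pop)]
    for k in range(n_iter):
        # Step 3: recombine elements of P(k) to form P'(k) (arithmetic crossover).
        offspring = []
        for _ in range(n_pop):
            p1, p2 = rng.sample(pop, 2)
            w = rng.random()
            offspring.append([w * x + (1.0 - w) * y for x, y in zip(p1, p2)])
        # Step 4: mutate P'(k) to form P''(k) (Gaussian perturbation, clipped to bounds).
        offspring = [[min(hi, max(lo, x + rng.gauss(0.0, mut_sigma))) for x in ind]
                     for ind in offspring]
        # Steps 5-6: evaluate fitness and keep the best N to form P(k+1).
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:n_pop]
    return max(pop, key=fitness)

# Usage: maximize f(x, y) = -(x^2 + y^2); the optimum is at (0, 0).
best = evolve(lambda v: -(v[0] ** 2 + v[1] ** 2))
```

Note that the selection step here is elitist (parents compete with offspring), which is only one of the choices discussed below; the methods surveyed in this chapter differ precisely in how these operators are defined.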

The differences among the methods depend on the class of problems (and solutions) attacked, methods to evaluate fitness, choice of the potential solutions to retain for the next generation, and techniques to modify the current set of potential solutions to produce variation in the population. In this discussion, we include as evolutionary algorithms the following techniques: simulated annealing, evolutionary programming, evolution strategies, genetic algorithms, and genetic programming.

Simulated annealing (SA) is based on an analogy with physical thermal annealing: a process used to create crystals by heating a substance to liquid and allowing it to cool. If the cooling proceeds sufficiently slowly, pure crystals will form because the individual molecules will succeed in reaching an energy minimum given the states of their neighbors. If cooling is too fast, not all molecules can orient properly before their thermal energy is removed, and imperfect crystals are the result. Imperfections are not necessarily bad; different types of metal are produced by different cooling rates.

The basic approach to SA is straightforward: (1) generate a single random solution to the problem, (2) calculate the cost or quality of the solution (i.e., "energy"), (3) if the solution is better than the previous best, accept the current solution, (4) if the solution is worse than the previous best, accept the current solution with some probability, and (5) repeat from step (1) until a stopping criterion is satisfied. SA is a special case of the general evolutionary algorithm because it uses a population size of 1 and does not perform recombination among existing solutions (step 3).
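The five steps can be sketched as follows. The acceptance rule exp(-ΔC/c) and the rule for reducing the control value c anticipate the discussion below; the cost function and the random step generator are illustrative placeholders:

```python
import math
import random

def simulated_annealing(cost, init, step, c0=2.0, alpha=0.99, n_iter=1000, seed=0):
    """Minimize cost() following SA steps (1)-(5)."""
    rng = random.Random(seed)
    current = best = init
    c = c0
    for _ in range(n_iter):
        candidate = step(current, rng)              # step (1): generate a candidate
        d_cost = cost(candidate) - cost(current)    # step (2): change in "energy"
        if d_cost < 0:                              # step (3): better -> accept
            current = candidate
        elif rng.random() < math.exp(-d_cost / c):  # step (4): worse -> accept sometimes
            current = candidate
        if cost(current) < cost(best):              # remember the best solution seen
            best = current
        c *= alpha                                  # cool the control "temperature"
    return best

# Usage: a 1-D cost function with several local minima; the global
# minimum lies near x = -0.3.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
best = simulated_annealing(f, init=4.0, step=lambda x, r: x + r.gauss(0.0, 0.5))
```

Because inferior candidates are sometimes accepted, the search can cross the ridges separating the local minima that would trap a pure hill-descender started at x = 4.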

The purpose of step (4) in SA is to avoid local minima by sometimes accepting poorer solutions. This allows the proposed solution to jump out of local minimum energy traps. The probability functions used vary greatly among applications. In general, the probability of accepting a poorer solution increases with the control "temperature" (van Laarhoven and Aarts 1987):

q_k(c) = exp(-ΔC(k)/c) / Q(c)

where k is the iteration number, q_k(c) is usually called the Boltzmann probability, Q(c) is a normalization function, c is the control constant analogous to temperature, and ΔC(k) is the difference between the costs of the current solution and the previous best. If ΔC(k) < 0, the new configuration is accepted as the best. If ΔC(k) > 0, the choice to retain the inferior current solution is made, in effect, by a biased coin toss: if q_k(c) is greater than a uniform random deviate from the interval 0-1, then the current solution becomes the best solution, even though its quality is less than that of the previous best.

When c is large, q_k(c) is large, causing the algorithm to accept "inferior" solutions relatively frequently. This permits the algorithm to continue searching for the global minimum when it is in the vicinity of a local minimum. To converge on a solution, however, the acceptance of inferior solutions must eventually become unlikely. This is accomplished by reducing c, analogous to cooling the medium in real annealing. In minimization problems, it is typical to choose the new control value as c_{k+1} = f(c_k), where f() is the cooling schedule.
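The effect of the control value is easy to see numerically. Taking the normalization Q(c) as 1 (a common simplification, assumed here), the acceptance probability for a fixed cost increase behaves as follows:

```python
import math

def accept_prob(d_cost, c):
    """Probability of accepting a solution worse by d_cost at control value c,
    with the normalization Q(c) taken as 1."""
    return math.exp(-d_cost / c)

# The same cost increase of 1.0 is accepted often when c is large ("hot")
# and almost never when c is small ("cold").
hot = accept_prob(1.0, c=10.0)   # exp(-0.1), about 0.90
cold = accept_prob(1.0, c=0.1)   # exp(-10), about 4.5e-5
```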

The algorithm for reducing c is not specified and, generally, is chosen by a combination of intuition and iterative trials. Common approaches (van Laarhoven and Aarts 1987) include: (1) linear decrement: f(c_k) = a·c_k, where a is a number slightly less than 1.0; a can be chosen by fixing the final control value and the maximum number of iterations to be performed. (2) Nonlinear decrement:
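For the decrement rule in (1), a can be derived from the chosen initial value, final value, and iteration count; a short computation (the numbers are arbitrary, illustrative choices):

```python
# The decrement c_{k+1} = a * c_k implies c_K = a**K * c_0, so fixing the
# initial value c_0, the final value c_K, and the iteration count K
# determines a.
c_0, c_K, K = 1.0, 1.0e-3, 500        # illustrative values
a = (c_K / c_0) ** (1.0 / K)          # slightly less than 1.0
schedule = [c_0 * a ** k for k in range(K + 1)]
```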
