As stated in the previous section, the majority of rate equations that describe cellular systems are nonlinear with respect to their dependent variables and their parameters. Unfortunately, this means that we can no longer use the techniques of linear algebra, and so we rely on iterative-search methods for parameter estimation. For nonlinear estimation we start with an initial trial set of parameter values and progressively refine these by iteratively applying an algorithm that adjusts the values so as to decrease the value of a merit function.

The Levenberg-Marquardt algorithm is the standard one used in nonlinear parameter estimation. It relies on two different methods for the minimization of the merit function. The first is the steepest-descent, or gradient-descent, method, which decreases the value of the merit function by using information on the direction in which the rate of decrease is greatest. For example, if χ² (Eqn [6.8]) is the merit function, then the direction of steepest descent is given by the negative gradient, −∇χ²(φ), where

[∇χ²(φ)]_k = ∂χ²(φ)/∂φ_k

for each parameter φ_k.

By taking a step of size a in the direction −∇χ²(φ), φ_next is estimated from φ_current using

φ_next = φ_current − a ∇χ²(φ_current) .  [6.16]
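As a concrete sketch of this iteration (an illustration, not code from the text): the loop below repeatedly applies the update of Eqn [6.16] to a sum-of-squares merit function. The model y = φ₁x/(φ₂ + x), the data, and the step size are all invented for the example, and the gradient is approximated by finite differences.

```python
import numpy as np

# Illustrative steepest-descent loop for a chi-squared merit function.
# The model y = phi[0]*x/(phi[1] + x) and the data are invented for this sketch.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y_obs = np.array([0.29, 0.48, 0.65, 0.80, 0.88])

def chi2(phi):
    """Sum-of-squares merit function (unit weights assumed)."""
    y_fit = phi[0] * x / (phi[1] + x)
    return np.sum((y_obs - y_fit) ** 2)

def grad_chi2(phi, h=1e-6):
    """Gradient of chi2 by central finite differences."""
    g = np.zeros_like(phi)
    for k in range(len(phi)):
        d = np.zeros_like(phi)
        d[k] = h
        g[k] = (chi2(phi + d) - chi2(phi - d)) / (2 * h)
    return g

phi = np.array([0.5, 0.5])   # initial trial parameter set
alpha = 0.1                  # step size, the 'a' of Eqn [6.16]
for _ in range(200):
    phi = phi - alpha * grad_chi2(phi)   # Eqn [6.16]
```

Note how the step simply follows −∇χ²; as the text observes below, progress slows near the minimum because the gradient itself shrinks there.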

The second method of determining φ_next from φ_current is to evaluate the merit function using a second-order Taylor approximation around φ_current. Again, taking Eqn [6.8] as the merit function, the second-order approximation of χ² around φ_current is

χ²(φ) ≈ χ²(φ_current) + ∇χ²(φ_current)·(φ − φ_current) + ½ (φ − φ_current)·H·(φ − φ_current) ,  [6.17]

where H is the matrix of second derivatives, called the Hessian. It is defined as follows:

H_kl = ∂²χ²(φ)/∂φ_k ∂φ_l .  [6.18]

In this case φ_next is selected to be the set of parameter values which minimizes Eqn [6.17]. This set is found by setting ∇χ²(φ) = 0; in other words, we solve

∇χ²(φ_next) = ∇χ²(φ_current) + H·(φ_next − φ_current) = 0  [6.19]

for φ_next. The solution is

φ_next = φ_current − H⁻¹·∇χ²(φ_current) .  [6.20]

This method is called Newton's method, or the inverse-Hessian method.
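An inverse-Hessian step can be sketched in the same style as before; here the gradient and Hessian are both formed by finite differences and the update of Eqn [6.20] is applied a few times from a starting point already near the minimum. The model and data are again invented for illustration.

```python
import numpy as np

# Illustrative Newton (inverse-Hessian) iteration for a chi-squared merit
# function.  The model y = phi[0]*x/(phi[1] + x) and data are invented.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y_obs = np.array([0.29, 0.48, 0.65, 0.80, 0.88])

def chi2(phi):
    y_fit = phi[0] * x / (phi[1] + x)
    return np.sum((y_obs - y_fit) ** 2)

def grad(phi, h=1e-5):
    """Gradient of chi2 by central finite differences."""
    g = np.zeros(2)
    for k in range(2):
        d = np.zeros(2)
        d[k] = h
        g[k] = (chi2(phi + d) - chi2(phi - d)) / (2 * h)
    return g

def hessian(phi, h=1e-4):
    """Hessian of chi2, Eqn [6.18], by differencing the gradient."""
    H = np.zeros((2, 2))
    for k in range(2):
        d = np.zeros(2)
        d[k] = h
        H[:, k] = (grad(phi + d) - grad(phi - d)) / (2 * h)
    return H

phi = np.array([0.9, 0.9])   # start close to the minimum
for _ in range(5):
    # Solving H.step = grad is equivalent to step = H^-1 . grad chi2
    step = np.linalg.solve(hessian(phi), grad(phi))
    phi = phi - step          # Eqn [6.20]
```

Started this close to the minimum, a handful of Newton steps drives the gradient essentially to zero, illustrating the fast convergence discussed next.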

The reason that the Levenberg-Marquardt method employs the two iterative methods is that the reliability and efficiency of each method vary depending on how close the initial parameter estimates are to their final values. For example, if the initial estimate φ_current is very close to the global minimum, the inverse-Hessian method converges quickly, while the steepest-descent method converges relatively slowly because the gradient becomes progressively smaller as the minimum is approached. Conversely, if the estimate φ_current is far from the minimum, the value given for φ_next by the inverse-Hessian method can be even farther from the minimum than the previous φ_current.
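This interplay is usually implemented with a damping factor λ that interpolates between the two updates: a large λ biases the step toward a short, diagonally scaled descent step, while a small λ recovers the inverse-Hessian step. A minimal sketch follows, with an invented model and data and the common Gauss-Newton approximation to the Hessian (2JᵀJ, where J is the Jacobian of the residuals).

```python
import numpy as np

# Minimal Levenberg-Marquardt loop.  lam large -> scaled steepest descent;
# lam small -> inverse-Hessian step.  Model and data are invented.
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y_obs = np.array([0.29, 0.48, 0.65, 0.80, 0.88])

def residuals(phi):
    return y_obs - phi[0] * x / (phi[1] + x)

def chi2(phi):
    return np.sum(residuals(phi) ** 2)

def jacobian(phi):
    """Analytic derivatives of the residuals with respect to phi."""
    denom = phi[1] + x
    return np.column_stack([-x / denom, phi[0] * x / denom ** 2])

phi, lam = np.array([0.3, 2.0]), 1e-3   # poor initial guess, small damping
for _ in range(50):
    J, r = jacobian(phi), residuals(phi)
    g = 2 * J.T @ r                      # gradient of chi2
    H = 2 * J.T @ J                      # Gauss-Newton Hessian approximation
    A = H + lam * np.diag(np.diag(H))    # damp via the diagonal of H
    trial = phi - np.linalg.solve(A, g)
    if chi2(trial) < chi2(phi):
        phi, lam = trial, lam / 10       # success: behave more like Newton
    else:
        lam *= 10                        # failure: behave more like descent
```

The accept/reject rule on χ² is what lets the algorithm 'switch' automatically: failed Newton-like steps inflate λ until short descent steps succeed, and successful steps deflate it again.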

How does the Levenberg-Marquardt algorithm 'know' when to switch between the two methods? The answer is as follows: the algorithm first uses the diagonal elements of the Hessian matrix to specify the order of magnitude of the step (i.e., a in Eqn [6.16]) that is taken in the steepest-descent method; hence Eqn [6.16] becomes
