B1 Nonlinear Models

Single-equation nonlinear models are those in which the dependent variable y depends in a nonlinear fashion on at least one of the parameters in the model. In general, nature is nonlinear, so it is not surprising that nonlinear models crop up in a wide variety of instrumental experiments used in chemistry and biochemistry. A few of these models are listed in Table 2.5. Note that in each of them y depends nonlinearly on at least one of the parameters b1, ..., bk. Linear regression is not applicable to such models, but nonlinear regression can be applied generally: to nonlinear models, to curvilinear models, and to models with no closed-form representations.

As in linear regression, the goal of nonlinear regression is to find the absolute minimum in the error sum with respect to all the parameters. If we elaborate S in eq. (2.9) for a nonlinear model and set the first derivative with respect to each parameter equal to zero, we will find that the set of simultaneous equations does not yield closed form solutions. The alternative is to find the minimum S by numerical methods, using a sequence of repetitive mathematical operations called a minimization algorithm. This is what is done by nonlinear regression programs.
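
To make this concrete, here is a minimal Python sketch of minimizing S numerically for the first-order decay model of Table 2.5. It is a sketch only, not any particular program discussed in this chapter: the data, noise level, and initial guesses are hypothetical, and SciPy's general-purpose Nelder-Mead minimizer stands in for the algorithms described below.

```python
import numpy as np
from scipy.optimize import minimize

# First-order decay model from Table 2.5: y = b1 * exp(-b2 * x).
def model(b, x):
    return b[0] * np.exp(-b[1] * x)

# The error sum S: the sum of squared deviations of the data from the model.
def error_sum(b, x, y):
    return np.sum((y - model(b, x)) ** 2)

# Hypothetical noisy decay data, for illustration only.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 25)
y = 2.0 * np.exp(-1.3 * x) + rng.normal(0.0, 0.02, x.size)

# Setting dS/db1 = dS/db2 = 0 gives no closed-form solution here, so S is
# minimized numerically, starting from initial guesses for b1 and b2.
result = minimize(error_sum, x0=[1.0, 1.0], args=(x, y), method="Nelder-Mead")
print(result.x)  # best-fit values of b1 and b2
```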

A mountain climbing analogy can be used to help understand how nonlinear regression works. When Sir Edmund Hillary climbed Mt. Everest, he and his team did not proceed to the summit in one long, fast journey. They made a series of small journeys, one per day. At the end of each day, Sir Edmund and his colleagues reviewed their progress and planned the course of action for the next day's climb. In this way an efficient and successful assault on the summit was made.

Minimization algorithms used in nonlinear regression analysis can be viewed in a way similar to climbing a large mountain. We define a parameter space, with one axis for each parameter and an additional axis for the error sum S. For a specific model, this space includes an upside down mountain called an error surface. For a two parameter model, parameter space is three-dimensional. The axes represent the two parameters in the x-y plane, and S on the z axis (Figure 2.4).
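
The error surface itself is easy to tabulate for a two-parameter model. The sketch below (again with hypothetical decay data) evaluates S over a grid of (b1, b2) values; plotting S against b1 and b2 would give a surface like the idealized one in Figure 2.4.

```python
import numpy as np

# Hypothetical decay data, as in the earlier sketch (noise omitted).
x = np.linspace(0.0, 5.0, 25)
y = 2.0 * np.exp(-1.3 * x)

# Evaluate S(b1, b2) over a grid of parameter values.
b1_grid, b2_grid = np.meshgrid(np.linspace(0.5, 3.5, 60),
                               np.linspace(0.2, 2.4, 60))
S = np.zeros_like(b1_grid)
for i in range(b1_grid.shape[0]):
    for j in range(b1_grid.shape[1]):
        resid = y - b1_grid[i, j] * np.exp(-b2_grid[i, j] * x)
        S[i, j] = np.sum(resid ** 2)

# The grid point with the smallest S approximates the global minimum
# of the error surface sketched in Figure 2.4.
i_min, j_min = np.unravel_index(np.argmin(S), S.shape)
print(b1_grid[i_min, j_min], b2_grid[i_min, j_min], S[i_min, j_min])
```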

The error surface describes the family of paths that the minimization algorithm may take to the minimum S. As in Sir Edmund's expedition, there can be a large number of possible paths. The minimum is approached in a series of steps, with each successive step planned on the basis of the previous one. Each step is designed to come a little closer to the minimum, until the absolute minimum is reached.

Table 2.5 Forms of Some Nonlinear Single-Equation Models

Model                              Experiment
y = b1 exp(-b2 x)                  First-order decay kinetics
y = b1 exp[-(x - b2)^2/(2b3)]      Gaussian peak shape, NMR, IR, chromatography, etc.
                                   Sigmoid shape, polarography, steady-state voltammetry [6]
                                   Thermodynamic linkage [7], e.g., diffusion of ligand-protein complex [5]

Figure 2.4 Idealized error surface for a nonlinear regression analysis.

Computer programs for nonlinear regression analysis begin from a set of best guesses for the parameters in the model, as provided by the user. The minimization algorithm varies the parameters in a repetitive series of successive computational cycles, which are the steps leading toward the minimum S. At the end of each cycle, a new, and presumably smaller, value of S is computed from the new set of parameters found by the algorithm. These cycles are called iterations. A method based on successive computational cycles designed to approach the final answer is called an iterative method.
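
The bare-bones Gauss-Newton loop below illustrates this iterative structure. Gauss-Newton is one choice of minimization algorithm among several, not necessarily the one used by any particular program; the data and initial guesses are hypothetical.

```python
import numpy as np

# Hypothetical decay data for y = b1 * exp(-b2 * x).
x = np.linspace(0.0, 5.0, 25)
y = 2.0 * np.exp(-1.3 * x)

b = np.array([1.0, 1.0])  # user-supplied initial guesses for b1, b2
for cycle in range(50):   # each pass through the loop is one iteration
    r = y - b[0] * np.exp(-b[1] * x)  # residuals at current parameters
    # Jacobian of the model with respect to b1 and b2.
    J = np.column_stack((np.exp(-b[1] * x),
                         -b[0] * x * np.exp(-b[1] * x)))
    step, *_ = np.linalg.lstsq(J, r, rcond=None)  # Gauss-Newton step
    b = b + step  # new parameter set; S should be smaller than before
    S = np.sum((y - b[0] * np.exp(-b[1] * x)) ** 2)
    print(cycle, b, S)
    if np.all(np.abs(step) < 1e-10):  # stop once the steps are negligible
        break
```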

The goal of the minimization algorithm is to find the absolute minimum in the error sum S. Recall that the minimum S can be described by closed form equations for linear regression problems, but not for nonlinear models. The absolute minimum in the error sum S is the absolute or global minimum in the error surface.

The process of approaching and finding the minimum S is called convergence. This process can be envisioned graphically on an error surface in three-dimensional space (Figure 2.4). The initial guesses of the two parameters place the starting point of the calculation at an initial point on the error surface. The iterative minimization algorithm provides a stepwise journey along the error surface, which ends eventually at the global minimum.
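
One way to visualize this journey is to record the point (b1, b2, S) after each iteration. The sketch below does so with a callback, again using hypothetical data and SciPy's Nelder-Mead minimizer as a stand-in for the algorithms described here.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical decay data, as before.
x = np.linspace(0.0, 5.0, 25)
y = 2.0 * np.exp(-1.3 * x)

def error_sum(b):
    return np.sum((y - b[0] * np.exp(-b[1] * x)) ** 2)

# Record the point (b1, b2, S) after each iteration; the sequence traces
# the stepwise journey along the error surface of Figure 2.4.
path = []
minimize(error_sum, x0=[1.0, 1.0], method="Nelder-Mead",
         callback=lambda b: path.append((b[0], b[1], error_sum(b))))
for point in path:
    print(point)  # from the initial point toward the global minimum
```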

In general, the coordinate system containing the error surface has k + 1 dimensions: one for each of the k parameters and one for S. The approach to the minimum S, starting from a set of initial best guesses for the parameters, can be viewed as the journey of a point toward the minimum of an error surface in a (k + 1)-dimensional orthogonal coordinate system. (In practice, the coordinates are not always fully orthogonal.)

Figure 2.4 represents an error surface for a model with two parameters, b1 and b2. Here k = 2, and the coordinate system is three-dimensional. The z axis corresponds to S, and the x and y axes correspond to the parameters b1 and b2. From initial guesses of b1 and b2, the initial point P1 [b1, b2, S1] is located on the error surface. By systematic variation of the parameters, the algorithm employed by the program to minimize S causes this point to travel toward the point Pmin [b1, b2, S0]. This is called the point of convergence, where S0 is the absolute or global minimum on the error surface. The values of b1 and b2 at S0 are the best values of these parameters with respect to the experimental data according to the least squares principle.

Reliable programs for nonlinear regression should contain criteria that automatically test for convergence. These convergence limits are designed to tell the program when the minimum S has been reached, that is, when the best values of the parameters have been found. At this point the program stops and displays the final parameter values, along with statistics and graphs describing goodness of fit.

Convergence criteria are usually based on achieving a suitably small rate of change of S, of the parameter values, or of statistics such as χ². For example, a program might terminate when the rate of change of S over 10 cycles is <0.02%. An alternative criterion might demand that the change in every parameter value is <0.005% on successive cycles. These criteria are included as logical statements in the regression programs and are usually tested at each cycle. Default values of the convergence limits suffice for most problems; in a few special cases the user may want to change these limits, which is usually possible by simple changes to one or two lines of the program code.
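
A convergence test of this kind reduces to a few lines of logic. The sketch below checks the limits quoted above; for simplicity it compares successive cycles, whereas the example in the text tracks S over 10 cycles.

```python
# Relative convergence limits, matching the figures quoted above.
S_TOL = 0.0002       # rate of change of S < 0.02%
PARAM_TOL = 0.00005  # change in each parameter < 0.005%

def converged(S_old, S_new, b_old, b_new):
    """Return True when either convergence criterion is satisfied."""
    s_ok = abs(S_new - S_old) / abs(S_old) < S_TOL
    b_ok = all(abs(bn - bo) / abs(bo) < PARAM_TOL
               for bo, bn in zip(b_old, b_new))
    return s_ok or b_ok
```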

