Linear least squares methods can be used for all linear models. Recall that the equations for parameters obtained from unweighted linear least squares are derived by assuming that there are no errors in the xi and that the errors in the yi(meas) have equal variances. This assumes explicitly that ey has about the same value for a series of measured yi, independent of the actual value of yi. In the Beer's law example, we estimated a standard error in the absorbance of 0.003 units. This means that, on average, all the absorbance values in Tables 2.2 and 2.3 have ey = 0.003.

The equal variance assumption that justifies the use of wi = 1 can be verified experimentally. In the previous example, each yi could be measured several times. From these results, a set of standard deviations sy can be estimated, one for each yi. If the sy are approximately the same for all yi, then the equal variance assumption holds.
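As a minimal sketch of this check, assuming numpy is available, the hypothetical replicate readings below stand in for repeated measurements of each yi:

```python
import numpy as np

# Hypothetical replicate absorbance readings: four repeats at each of three y levels.
replicates = {
    0.10: [0.098, 0.101, 0.100, 0.099],
    0.50: [0.497, 0.503, 0.500, 0.501],
    0.90: [0.898, 0.902, 0.899, 0.901],
}

# One sample standard deviation sy per level; if they are all about the
# same size, the equal-variance assumption (wi = 1) is reasonable.
s_y = {y: np.std(vals, ddof=1) for y, vals in replicates.items()}
for y, s in s_y.items():
    print(f"y = {y:.2f}  s_y = {s:.4f}")
```

Here the three sy values come out nearly equal, so unweighted regression would be justified for this (invented) data set.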

There are other possibilities for the distribution of errors. For example, suppose that the errors in yi are directly proportional to yi. Intuitively, we need to place less weight on the results with larger errors. To achieve this, we minimize an error sum that contains a weighting factor wi = 1/yi^2. The correct form of the error sum (cf. eq. (2.9)) becomes

S = SUM wi [yi(meas) - yi(calc)]^2 (2.19)

The resulting equations for b1 and b2 will be different from those for the unweighted case. The new equations must be derived by using the expressions in eq. (2.10) with S as defined in eq. (2.19). This new set of simultaneous equations is solved for b1 and b2.

Another type of error distribution in yi(meas) results for experiments using detectors that count events, such as photon or electron counters. In this case, errors in yi depend on yi^1/2, and wi = 1/yi. Yet another set of equations for b1 and b2 results from this type of weighting. One common error in regression analysis involves using an unweighted regression when the distribution of errors requires a weighting factor.
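A weighted straight-line fit of this kind can be sketched with numpy by solving the weighted normal equations directly. The (x, y) data and the choice of counting-statistics weights wi = 1/yi below are illustrative assumptions, not from the text:

```python
import numpy as np

# Hypothetical (x, y) data for a straight-line model y = b1*x + b2.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Counting-detector weights: errors scale as y^(1/2), so wi = 1/yi.
w = 1.0 / y

# Minimize S = sum(wi * (yi - b1*xi - b2)^2) by solving the
# weighted normal equations (X^T W X) b = X^T W y.
X = np.column_stack([x, np.ones_like(x)])
W = np.diag(w)
b1, b2 = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(b1, b2)
```

Changing only the weight vector `w` (for example to `1/y**2` for errors proportional to y, or to all ones for the unweighted case) reuses the same normal-equation machinery, which previews the point made later that a general regression program can absorb weighting without a fresh derivation per problem.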

In many cases it is possible to linearize a nonlinear model. However, it is necessary to consider how the linearization of a model changes the distribution of errors in y and x. We need to find out whether unweighted linear regression remains applicable after linearization [1]. As an example we consider the model for first-order decay, which is relevant to radioactive elements, chemical kinetics, luminescence, and many other situations [6]. The model is y = y0 exp(-kt) + b (2.20)

where y is the observed signal at time t, k is the rate constant for the decay, and y0 is the response at t = 0. The dependent variable y is a nonlinear function of the rate constant k, and b is a time-invariant background parameter. If b = 0 or if b can be subtracted from y, we can linearize eq. (2.20) by taking the logarithm of both sides:

ln(y - b) = ln y0 - kt (2.21)

Linear regression with eq. (2.21) can now be undertaken, but we must realize that the new dependent variable is not y but ln(y - b). We must consider the error distribution of this new dependent variable in the linearized model. To follow this idea further, suppose we obtain data on the decay of a chemical species by using an absorbance spectrophotometer. We have seen that for such data, the constant variance assumption allows the use of wi = 1. Suppose the constant error in y is ey = 0.001 absorbance units. Because the error in y is constant, we can use it to compute the error in ln y, assuming b = 0.

Table 2.4 shows the resulting error in the dependent variable ln y for three different y values. Here, ln y was computed from the value of y, and e(ln y) was computed from the spread ln(y + ey) - ln(y - ey). We see in Table 2.4 that the error in ln y is not independent of ln y. Even though ey is independent of y, the error in the new dependent variable ln y is not independent of

Table 2.4

| y     | ey     | ln y   | e(ln y) |
|-------|--------|--------|---------|
| 1.000 | ±0.001 | 0      | ±0.001  |
| 0.100 | ±0.001 | -2.303 | ±0.01   |
| 0.010 | ±0.001 | -4.605 | ±0.1    |

its magnitude. The value of e(ln y) increases as ln y becomes more negative. The assumption wi = 1 is not justified for the linearized model in eq. (2.21). A rather complex weighting function is required for linear regression using this model.
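The numbers in Table 2.4 are easy to reproduce; taking half the spread ln(y + ey) - ln(y - ey) as the ± uncertainty gives the tabulated values:

```python
import math

# Constant absolute error in y (absorbance units), as in the text.
e_y = 0.001

# Half the spread ln(y + ey) - ln(y - ey) is the +/- error in ln y.
for y in (1.000, 0.100, 0.010):
    e_ln_y = 0.5 * (math.log(y + e_y) - math.log(y - e_y))
    print(f"y = {y:.3f}  ln y = {math.log(y):7.3f}  e(ln y) = ±{e_ln_y:.4f}")
```

The error in ln y grows by a factor of about ten for each factor-of-ten drop in y, which is exactly the nonuniform error distribution that invalidates wi = 1 for the linearized model.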

Another complication is introduced if the background of the first-order decay experiment drifts with time. We can take this into account by adding a linear background term mt + b, where m is the slope of the background signal. Then the model becomes y = y0 exp(-kt) + mt + b. (2.22)

Taking the logarithm of both sides of this equation no longer provides a linear equation:

ln y = ln[y0 exp(-kt) + mt + b] (2.23)

Now ln y depends on the background terms mt + b inside the logarithm, so it cannot be expressed as a linear function of t. On the other hand, nonlinear regression is directly applicable to models such as eq. (2.22).
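A direct nonlinear fit of eq. (2.22) can be sketched as follows, assuming scipy is available and using synthetic noiseless data with invented parameter values; this is an illustration, not the text's worked example:

```python
import numpy as np
from scipy.optimize import curve_fit

# Model of eq. (2.22): first-order decay on a drifting linear background.
def model(t, y0, k, m, b):
    return y0 * np.exp(-k * t) + m * t + b

# Synthetic "measured" data generated from known (invented) parameters.
t = np.linspace(0.0, 10.0, 50)
true = (2.0, 0.5, 0.02, 0.1)          # y0, k, m, b
y = model(t, *true)

# Direct nonlinear fit; no linearization or background subtraction needed.
popt, pcov = curve_fit(model, t, y, p0=(1.0, 1.0, 0.0, 0.0))
print(popt)   # should be close to (2.0, 0.5, 0.02, 0.1)
```

Note that all four parameters, including the background slope and intercept, are estimated in one step, which is precisely what the linearization route cannot do here.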

Linear regression can also be used to analyze data with curvilinear models. Typical examples are polynomials in x; for example, y = b1x^3 + b2x^2 + b3x + b4 (2.24)

Because the dependent variable y is a linear function of the parameters b1 ... bk in these models, linear regression can be used. Assuming that ey is independent of y, eq. (2.9) is the correct form of the error sum S, and its derivatives with respect to b1 ... bk are set equal to zero. The resulting simultaneous equations are solved for b1 ... bk as usual. However, this procedure will result in a different set of equations for each curvilinear model.
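For the cubic of eq. (2.24), "linear in the parameters" means the fit reduces to ordinary least squares on a design matrix whose columns are x^3, x^2, x, and 1. A sketch with numpy and invented coefficients:

```python
import numpy as np

# Hypothetical noiseless data from the cubic model of eq. (2.24).
x = np.linspace(-2.0, 2.0, 25)
b_true = (1.5, -0.5, 2.0, 0.3)        # b1, b2, b3, b4
y = b_true[0]*x**3 + b_true[1]*x**2 + b_true[2]*x + b_true[3]

# The model is linear in b1..b4, so unweighted linear least squares
# applies: solve the least-squares problem on the design matrix
# [x^3, x^2, x, 1].
X = np.column_stack([x**3, x**2, x, np.ones_like(x)])
b_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b_fit)
```

A different polynomial order simply means a different design matrix, which is the "different set of equations for each curvilinear model" noted above.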

We shall see in the next section that nonlinear regression is a convenient, easy-to-use general solution to all of these difficulties. It can be used to obtain direct fits of nonlinear models to experimental data with or without background components. These analyses do not require closed-form equations for the parameters but depend on iterative numerical algorithms. Therefore, weighting can be approached in a general way without a new derivation for each separate problem. In principle, the same basic regression program can be used to fit any model.
