C2 Dependent and Independent Variables Are Not Statistically Equivalent

In the preceding sections, we identified the dependent variables in our models as the quantities measured in the experiment. The independent variable is time or whatever other quantity is controlled in the experiment. A few examples are listed in Table 3.12.

Table 3.11 Use of the Extra Sum of Squares F Test^a to Distinguish between Summed Exponential Models

Model   No. parameters   F(2, 49) exponential   F(2, 49) table
1       2
2       4                                       @80% CL = 2.42
3       6                                       @90% CL = 3.19

^a Analysis of 55 data points; see [1] for details.

Table 3.12 Examples of Variables in Typical Experiments

Experiment                  Dependent variable, y      Independent variable, x
Spectrophotometry           Absorbance, intensity      Time, wavelength, frequency
Electron spectroscopy       Electron counts            Binding energy
Potentiometric titrations   pH or potential            Volume of titrant
Chromatography              Detector response          Time
Voltammetry                 Current                    Potential
Calorimetry                 Heat absorbed or evolved   Time or temperature

It is important to realize that when nonlinear regression algorithms are used to minimize S in eq. (2.9), they are minimizing the errors in the dependent variable y with respect to the model. This can be seen from the right-hand side of the equation:

S = Σj wj [yj(meas) - yj(calc)]²    (2.9)

Only y appears there; the minimization ignores errors in x. For this reason, it is important to keep the identities of x and y consistent with their respective roles as independent and dependent variables. In practice, this means that models should be written as y = F(x, b1, ..., bk), and not as x = F(y, b1, ..., bk). These two forms are not statistically equivalent in terms of the minimized function S.
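This point can be sketched numerically. The short example below is illustrative only (the linear model, parameter values, and data are invented for the demonstration): the objective S sums squared residuals in y alone, so errors in x never enter the quantity being minimized.

```python
import numpy as np

def linear(x, b0, b1):
    """Illustrative model y = F(x, b0, b1)."""
    return b0 * x + b1

def S(b, x, y, model):
    """Sum of squares minimized by regression: residuals in y only."""
    return np.sum((y - model(x, *b)) ** 2)

x = np.linspace(0.0, 10.0, 11)
y = linear(x, 2.0, 1.0)              # exact data, so S = 0 at the true b

print(S((2.0, 1.0), x, y, linear))   # residuals vanish at the true parameters
print(S((2.1, 1.0), x, y, linear))   # any other b gives S > 0
```

Note that S measures only vertical (y-direction) distances from the curve; an error in a measured x value would merely shift which point of the curve each yj is compared against, without appearing in S itself.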

A simple example of the consequences of interchanging dependent and independent variables was discussed by Kateman, Smit, and Meites [5]. They showed what happens when the linear model

y = b0x + b1    (3.16)

is converted to the algebraically equivalent form

x = b'0y + b'1.    (3.17)

Thus, algebraically, we have

b0 = 1/b'0 and b1 = -b'1/b'0.    (3.18)

However, when eqs. (3.16) and (3.17) were both fitted to the same data set by linear regression, the equalities in eq. (3.18) did not hold. Specifically, errors of +0.4% in b0 and -2.1% in b1 were found. These errors, arising solely from the computation, are too large to be acceptable. This example illustrates the principle that the identities of the independent and dependent variables, which are dictated by the nature of the experiment, must be kept intact during regression analysis.
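The numerical consequence is easy to reproduce. The sketch below uses synthetic noisy data (the slope, intercept, and noise level are invented, not the data set of ref. [5]) and ordinary linear regression via np.polyfit; fitting y on x and then x on y gives slopes that violate eq. (3.18):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)   # noisy linear data

b0, b1 = np.polyfit(x, y, 1)      # regression of y on x: y = b0*x + b1
b0p, b1p = np.polyfit(y, x, 1)    # regression of x on y: x = b'0*y + b'1

# Algebraically b0 should equal 1/b'0, but the two fits minimize
# different sums of squares, so the identity fails for noisy data:
print(b0, 1.0 / b0p)
```

For positively correlated data, 1/b'0 in fact always comes out larger than b0 (a consequence of the Cauchy-Schwarz inequality applied to the two slope estimates), with equality only when the data fall exactly on a line, so any noise at all breaks eq. (3.18).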
