Many models used in a nonlinear regression analysis converge with no difficulty. In a few cases, however, minor problems may be encountered. In this section, we discuss some practical considerations concerning convergence, which should make the solution of these problems easier.
Our first concern involves the signs of parameters. Suppose we have a model in which all the parameters should be positive to make physical sense. For example, they might be concentrations of reagents, equilibrium constants, or rate constants. Obviously, these parameters cannot have negative values. Suppose we run a nonlinear regression analysis and find that one of the parameters, say b1, consistently goes negative. In extreme cases b1 might stay negative, and the analysis might converge with a negative value of b1. This would be a disconcerting result if, for example, b1 represents a rate constant for a chemical reaction for which we have obtained products!
It should be realized in the preceding example that the program has found a minimum in the error sum in a physically unrealistic region of parameter space. Both of the solutions to this dilemma that follow strive to keep the value of the error sum away from physically meaningless regions.
The first solution (Table 4.6, method 1) involves simply inserting a logical statement into the part of the program that computes y(calc), so that a parameter that becomes negative is forced back toward positive values. This increases the error sum for that cycle and, one hopes, redirects the search for the minimum in the correct direction. A second solution is to penalize the values of y(calc), greatly increasing the error sum whenever a parameter goes negative (Table 4.6, method 2). Both solutions can be tried in a given situation. In some cases, it may be necessary to force more than one parameter to remain positive; appropriate logical statements similar to those in Table 4.6 can be employed. Similar approaches can be used to keep parameters negative.
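The two methods might be sketched as follows. This is a minimal illustration, not the code of Table 4.6 itself: the single-exponential model y(calc) = b0·exp(−b1·x), the reset value of 1e-6, and the penalty weight of 1e6 are all hypothetical choices made for the example, in which b1 is the parameter to be kept positive.

```python
import numpy as np

def ycalc_method1(params, x):
    """Method 1 (logical statement in the y(calc) routine): if b1 has
    drifted negative during a cycle, force it back to a small positive
    value before computing y(calc).  Model and reset value are
    illustrative assumptions."""
    b0, b1 = params
    if b1 < 0:
        b1 = 1e-6  # push the parameter back into the physical region
    return b0 * np.exp(-b1 * x)

def error_sum_method2(params, x, y_obs):
    """Method 2 (penalty on the error sum): compute the error sum
    normally, then add a large penalty term whenever b1 is negative,
    steering the minimizer away from the unphysical region."""
    b0, b1 = params
    y_calc = b0 * np.exp(-b1 * x)
    s = np.sum((y_obs - y_calc) ** 2)
    if b1 < 0:
        s += 1e6 * b1 ** 2  # penalty grows the further b1 strays negative
    return s
```

Either function can be dropped into the error-sum evaluation of a general minimization routine; the penalty form has the advantage of leaving the error surface smooth near b1 = 0, while the logical reset acts more abruptly.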