A preliminary view of how we obtain a numerical solution for y[t] is as follows. We first substitute a value of t into f[t, y] in Eqn [1.21] and thus determine the slope of the curve. At the outset, the only known value of y[t] is the one at the point (t0, y0), so we begin our solution there. We compute the slope of the curve at t = t0 and then draw a short tangent line through (t0, y0). If we denote the increment in t by h, we arrive at the new coordinate t1 = t0 + h. This value of t, coupled with the equation for a straight line [y = y0 + (t1 - t0) slope], yields the new value y = y1. We then substitute the new values y1 and t1 into the slope formula to obtain f[t1, y1]. By cycling through (i.e., iterating) this process, we obtain a series of joined straight-line segments (a portion of a polygon) that approximates the true function. This is called Euler's point-slope method of numerical integration of a differential equation.

Consider the application of Euler's method to the exponential function.

Q: Use Euler's method to solve y'[t] = y[t] with y[0] = 1 and compare this solution graphically with the analytical solution.

A: First, we use DSolve to generate an analytical solution to this differential equation in the usual manner.

solution = DSolve[{yanalytical'[t] == yanalytical[t],
    yanalytical[0] == 1}, yanalytical[t], t]
yanalytical[t_] = yanalytical[t] /. solution;

To solve this differential equation by Euler's method, we begin by specifying the initial condition and choosing a step size, say, h = 0.5.

h = 0.5; yem[0] = 1;

Now apply Euler's method iteratively using the Do command for 5 iterations, and then put the results into a Table of ordered pairs (t, y[t]).

Do[yem[i] = yem[i - 1] + h yem[i - 1], {i, 1, 5}];
emTable = Table[{i h, yem[i]}, {i, 0, 5}]

{{0, 1}, {0.5, 1.5}, {1., 2.25}, {1.5, 3.375}, {2., 5.0625}, {2.5, 7.59375}}
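For readers who want to see the same recursion outside Mathematica, the point-slope update yem[i] = yem[i - 1] + h yem[i - 1] can be sketched in Python. This is only a hedged translation of the steps above; the names euler and f are our own, not from the text.

```python
def euler(f, t0, y0, h, n):
    """Euler's point-slope method: take n steps from (t0, y0),
    advancing along the tangent line y_i = y_{i-1} + h * f(t_{i-1}, y_{i-1})."""
    points = [(t0, y0)]
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)   # move along the local tangent line
        t = t + h
        points.append((t, y))
    return points

# y'[t] = y[t] with y(0) = 1, h = 0.5, five steps -- reproduces emTable above
print(euler(lambda t, y: y, 0.0, 1.0, 0.5, 5))
# -> [(0.0, 1.0), (0.5, 1.5), (1.0, 2.25), (1.5, 3.375), (2.0, 5.0625), (2.5, 7.59375)]
```

Because f[t, y] = y here, each step simply multiplies the previous value by (1 + h) = 1.5, which is why the tabulated values grow geometrically.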

Graphically compare the two results using ListPlot to plot the table of points in emTable and the familiar Plot function to graph the analytical solution. Notice that in the following functions we suppress the initial output of the ListPlot and Plot commands by using the option DisplayFunction -> Identity; this allows us to produce a single combined plot using the Show function.

gph1 = ListPlot[emTable, PlotStyle -> {PointSize[0.025]},
   DisplayFunction -> Identity];
gph2 = ListPlot[emTable, PlotJoined -> True,
   DisplayFunction -> Identity];
gph3 = Plot[yanalytical[t], {t, 0, 2.5}, DisplayFunction -> Identity];

Show[gph1, gph2, gph3,
   DisplayFunction -> $DisplayFunction, AxesLabel -> {"t", "y"}];

Figure 1.9. The analytical solution of Eqn [1.20] and the numerical solution (large points) obtained by using Euler's point-slope method.


Do[expr, {i, imin, imax, di}]       evaluate expr with i running from imin to imax in steps of di. If di is omitted, the step size is 1

Table[expr, {i, imin, imax, di}]    make a list of the values of expr with i running from imin to imax in steps of di. If di is omitted, the step size is 1

ListPlot[list, option -> value]     plot the points (x1, y1), ... in list. See the help browser for a list of options

Loops, lists, and plotting lists.

It can be seen from Fig. 1.9 that there are dangers in approximating a smoothly curved solution by a series of linear segments. It is therefore important to find ways of taking into account the likely curvature of the real solution, which of course we do not know a priori. It is a marvel of modern numerical analysis that algorithms have been developed that estimate and minimize this error. To this end, there are two basic categories of numerical integration: (1) single-step methods, which rely on information about the curve one point at a time; they are said to be direct methods because they do not iterate a solution (see much more on this topic below); and (2) multi-step methods, in which each successive point on the curve is found with fewer evaluations of the slope function (the derivative) than in (1), but iterations are required to arrive at a sufficiently accurate value. Most methods of this type are called predictor-corrector methods, and one of them is the default algorithm used by NDSolve in Mathematica. These methods are harder to start than single-step methods, but this disadvantage is offset by the side benefit of obtaining an estimate of the error as the solution proceeds. In practice with Mathematica the peculiarities of the various methods are transparent to the user, and the most modern algorithm is implemented unless the default is overridden by specific commands (see the Mathematica book for details).
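To illustrate the predictor-corrector idea in miniature (this is not the specific algorithm inside NDSolve, whose internals are hidden from the user), here is a hedged Python sketch that uses an Euler step as the predictor and an iterated trapezoidal rule as the corrector. The function name, argument names, and the fixed number of corrector iterations are our own choices for illustration.

```python
import math

def predictor_corrector(f, t0, y0, h, n, corrections=3):
    """Predictor-corrector sketch for y' = f(t, y):
    predict  y* = y + h f(t, y)                         (Euler step)
    correct  y* = y + (h/2) [f(t, y) + f(t + h, y*)]    (trapezoidal rule, iterated)"""
    t, y = t0, y0
    ys = [y0]
    for _ in range(n):
        slope = f(t, y)
        y_new = y + h * slope                   # predictor
        for _ in range(corrections):            # iterate the corrector
            y_new = y + 0.5 * h * (slope + f(t + h, y_new))
        t, y = t + h, y_new
        ys.append(y)
    return ys

# y' = y with y(0) = 1: after one step of h = 0.5 the corrected value
# lies much closer to e^0.5 than the plain Euler value of 1.5
print(predictor_corrector(lambda t, y: y, 0.0, 1.0, 0.5, 1))
```

The gap between the predicted and corrected values also gives a crude error estimate at each step, which is the "side benefit" mentioned above.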
