Notice the similarity between the right-hand side of Eq. (25.18) and the trapezoidal rule

[Eq. (21.3)]. The connection between the two methods can be formally demonstrated by

starting with the ordinary differential equation


dy/dx = f(x)

This equation can be solved for y by integration:

\int_{y_i}^{y_{i+1}} dy = \int_{x_i}^{x_{i+1}} f(x)\,dx   (25.19)

which yields

y_{i+1} - y_i = \int_{x_i}^{x_{i+1}} f(x)\,dx   (25.20)


or

y_{i+1} = y_i + \int_{x_i}^{x_{i+1}} f(x)\,dx   (25.21)

Now, recall from Chap. 21 that the trapezoidal rule [Eq. (21.3)] is defined as

\int_{x_i}^{x_{i+1}} f(x)\,dx \cong \frac{f(x_i) + f(x_{i+1})}{2}\,h   (25.22)

where h = x_{i+1} - x_i. Substituting Eq. (25.22) into Eq. (25.21) yields

y_{i+1} = y_i + \frac{f(x_i) + f(x_{i+1})}{2}\,h   (25.23)

which is equivalent to Eq. (25.18).

Because Eq. (25.23) is a direct expression of the trapezoidal rule, the local truncation

error is given by [recall Eq. (21.6)]

E_t = -\frac{f''(\xi)}{12} h^3   (25.24)

where \xi is between x_i and x_{i+1}. Thus, the method is second order because the second derivative of the ODE is zero when the true solution is a quadratic. In addition, the local and global errors are O(h^3) and O(h^2), respectively. Therefore, decreasing the step size decreases the error at a faster rate than for Euler's method. Figure 25.11, which shows the result of using Heun's method to solve the polynomial from Example 25.1, demonstrates this behavior.
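The error behavior described above can be checked numerically. The sketch below (illustrative code, not from the text) applies Euler's method and the single-corrector Heun method to the polynomial ODE of Example 25.1, y' = -2x^3 + 12x^2 - 20x + 8.5 with y(0) = 1, and prints the errors at x = 1 for two step sizes; halving h cuts Heun's error by roughly 4 (global O(h^2)) but Euler's only by roughly 2 (global O(h)).

```python
def f(x, y):
    # Right-hand side of the ODE from Example 25.1 (depends on x only)
    return -2 * x**3 + 12 * x**2 - 20 * x + 8.5

def true_y(x):
    # Analytical solution for the initial condition y(0) = 1
    return -0.5 * x**4 + 4 * x**3 - 10 * x**2 + 8.5 * x + 1

def euler_step(x, y, h):
    return y + f(x, y) * h

def heun_step(x, y, h):
    # Euler predictor, then trapezoidal corrector [Eq. (25.23)]
    y_pred = y + f(x, y) * h
    return y + (f(x, y) + f(x + h, y_pred)) / 2 * h

def integrate(step, h, x_end):
    x, y = 0.0, 1.0
    while x < x_end - 1e-12:
        y = step(x, y, h)
        x += h
    return y

# Halving h cuts Heun's global error by ~4 (O(h^2)); Euler's only by ~2 (O(h))
for h in (0.5, 0.25):
    e_euler = abs(true_y(1.0) - integrate(euler_step, h, 1.0))
    e_heun = abs(true_y(1.0) - integrate(heun_step, h, 1.0))
    print(f"h = {h}: Euler error = {e_euler:.5f}, Heun error = {e_heun:.5f}")
```

Note that the comparison is made at x = 1; because this particular f is a polynomial in x alone, some grid points (such as x = 4) happen to have vanishing trapezoidal error, which would obscure the rate comparison.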

25.2.2 The Midpoint (or Improved Polygon) Method

Figure 25.12 illustrates another simple modification of Euler's method. Called the midpoint method (or the improved polygon or the modified Euler), this technique uses Euler's method to predict a value of y at the midpoint of the interval (Fig. 25.12a):

y_{i+1/2} = y_i + f(x_i, y_i)\,\frac{h}{2}   (25.25)


FIGURE 25.11 Comparison of the true solution with a numerical solution using Euler's and Heun's methods for the integral of y' = -2x^3 + 12x^2 - 20x + 8.5.





FIGURE 25.12 Graphical depiction of the midpoint method. (a) Eq. (25.25) and (b) Eq. (25.27).


Then, this predicted value is used to calculate a slope at the midpoint:

y'_{i+1/2} = f(x_{i+1/2}, y_{i+1/2})   (25.26)

which is assumed to represent a valid approximation of the average slope for the entire

interval. This slope is then used to extrapolate linearly from x_i to x_{i+1} (Fig. 25.12b):

y_{i+1} = y_i + f(x_{i+1/2}, y_{i+1/2})\,h   (25.27)

Observe that because y_{i+1} is not on both sides, the corrector [Eq. (25.27)] cannot be applied iteratively to improve the solution.
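Equations (25.25) through (25.27) translate into a few lines of code. The following is a sketch (not from the text) of a single midpoint step, exercised on the polynomial of Example 25.1:

```python
def midpoint_step(f, x, y, h):
    y_mid = y + f(x, y) * h / 2          # Eq. (25.25): predictor to the midpoint
    slope_mid = f(x + h / 2, y_mid)      # Eq. (25.26): slope at the midpoint
    return y + slope_mid * h             # Eq. (25.27): extrapolate the full step

# First step of the polynomial ODE from Example 25.1, y(0) = 1
f = lambda x, y: -2 * x**3 + 12 * x**2 - 20 * x + 8.5
print(midpoint_step(f, 0.0, 1.0, 0.5))  # -> 3.109375
```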

As in the previous section, this approach can also be linked to Newton-Cotes integration formulas. Recall from Table 21.4 that the simplest Newton-Cotes open integration formula, which is called the midpoint method, can be represented as

\int_a^b f(x)\,dx \cong (b - a)\,f(x_1)

where x_1 is the midpoint of the interval (a, b). Using the nomenclature for the present case, it can be expressed as

\int_{x_i}^{x_{i+1}} f(x)\,dx \cong h\,f(x_{i+1/2})

Substitution of this formula into Eq. (25.21) yields Eq. (25.27). Thus, just as the Heun

method can be called the trapezoidal rule, the midpoint method gets its name from the

underlying integration formula upon which it is based.

The midpoint method is superior to Euler's method because it utilizes a slope estimate at the midpoint of the prediction interval. Recall from our discussion of numerical differentiation in Sec. 4.1.3 that centered finite divided differences are better approximations of derivatives than either forward or backward versions. In the same sense, a centered approximation such as Eq. (25.26) has a local truncation error of O(h^2) in comparison with the forward approximation of Euler's method, which has an error of O(h). Consequently, the local and global errors of the midpoint method are O(h^3) and O(h^2), respectively.

25.2.3 Computer Algorithms for Heun and Midpoint Methods

Both the Heun method with a single corrector and the midpoint method can be easily

programmed using the general structure depicted in Fig. 25.7. As in Fig. 25.13a and b,

simple routines can be written to take the place of the Euler routine in Fig. 25.7.

However, when the iterative version of the Heun method is to be implemented, the modifications are a bit more involved (although they are still localized within a single module). We have developed pseudocode for this purpose in Fig. 25.13c. This algorithm can be combined with Fig. 25.7 to develop software for the iterative Heun method.
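As a rough sketch of how the iterative Heun pseudocode of Fig. 25.13c might be rendered in Python (the test ODE y' = 4e^{0.8x} - 0.5y with y(0) = 2 is an illustrative choice, not taken from this excerpt):

```python
import math

def heun_iter(f, x, y, h, es=0.01, maxit=20):
    """One step of Heun's method with corrector iteration (after Fig. 25.13c).

    es is the stopping criterion as a percent relative error; maxit caps
    the number of corrector passes.
    """
    dy1dx = f(x, y)
    ye = y + dy1dx * h                        # Euler predictor
    for _ in range(maxit):
        yeold = ye
        dy2dx = f(x + h, ye)
        ye = y + (dy1dx + dy2dx) / 2 * h      # trapezoidal corrector
        ea = abs((ye - yeold) / ye) * 100     # approximate percent error
        if ea <= es:
            break
    return ye

# Illustrative ODE where the corrector actually iterates because f depends on y
f = lambda x, y: 4 * math.exp(0.8 * x) - 0.5 * y
print(heun_iter(f, 0.0, 2.0, 1.0))
```

Because this f depends on y, each corrector pass changes the end-of-interval slope, so the loop genuinely refines the estimate; for an f that depends on x only, one pass would suffice.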

25.2.4 Summary

By tinkering with Euler's method, we have derived two new second-order techniques. Even though these versions require more computational effort to determine the slope, the accompanying reduction in error will allow us to conclude in a subsequent section (Sec. 25.3.4) that the improved accuracy is usually worth the effort. Although there are certain cases where easily programmable techniques such as Euler's method can be applied to advantage, the Heun and midpoint methods are generally superior and should be implemented if they are consistent with the problem objectives.

As noted at the beginning of this section, the Heun (without iterations), the midpoint method, and, in fact, the Euler technique itself are versions of a broader class of one-step approaches called Runge-Kutta methods. We now turn to a formal derivation of these approaches.

25.3 Runge-Kutta Methods


Runge-Kutta (RK) methods achieve the accuracy of a Taylor series approach without

requiring the calculation of higher derivatives. Many variations exist but all can be cast

in the generalized form of Eq. (25.1):

y_{i+1} = y_i + \phi(x_i, y_i, h)\,h   (25.28)

where \phi(x_i, y_i, h) is called an increment function, which can be interpreted as a representative slope over the interval. The increment function can be written in general form as

\phi = a_1 k_1 + a_2 k_2 + \cdots + a_n k_n   (25.29)

(a) Simple Heun without Iteration

SUB Heun(x, y, h, ynew)
  CALL Derivs(x, y, dy1dx)
  ye = y + dy1dx · h
  CALL Derivs(x + h, ye, dy2dx)
  Slope = (dy1dx + dy2dx)/2
  ynew = y + Slope · h
  x = x + h
END SUB


(b) Midpoint Method

SUB Midpoint(x, y, h, ynew)
  CALL Derivs(x, y, dydx)
  ym = y + dydx · h/2
  CALL Derivs(x + h/2, ym, dymdx)
  ynew = y + dymdx · h
  x = x + h
END SUB


(c) Heun with Iteration

SUB HeunIter(x, y, h, ynew)
  es = 0.01
  maxit = 20
  CALL Derivs(x, y, dy1dx)
  ye = y + dy1dx · h
  iter = 0
  DO
    yeold = ye
    CALL Derivs(x + h, ye, dy2dx)
    slope = (dy1dx + dy2dx)/2
    ye = y + slope · h
    iter = iter + 1
    ea = |(ye − yeold)/ye| · 100%
    IF (ea ≤ es OR iter ≥ maxit) EXIT
  END DO
  ynew = ye
  x = x + h
END SUB


FIGURE 25.13 Pseudocode to implement the (a) simple Heun, (b) midpoint, and (c) iterative Heun methods.


where the a's are constants and the k's are

k_1 = f(x_i, y_i)   (25.29a)

k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)   (25.29b)

k_3 = f(x_i + p_2 h, y_i + q_{21} k_1 h + q_{22} k_2 h)   (25.29c)

  ⋮

k_n = f(x_i + p_{n-1} h, y_i + q_{n-1,1} k_1 h + q_{n-1,2} k_2 h + \cdots + q_{n-1,n-1} k_{n-1} h)   (25.29d)

where the p's and q's are constants. Notice that the k's are recurrence relationships. That is, k_1 appears in the equation for k_2, which appears in the equation for k_3, and so forth. Because each k is a functional evaluation, this recurrence makes RK methods efficient for computer calculations.
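The recurrence structure of the k's maps naturally onto a loop. The sketch below (illustrative, not from the text) evaluates one step of a general explicit RK method given arbitrary coefficient arrays a, p, and q:

```python
def rk_step(f, x, y, h, a, p, q):
    """One step of a general explicit RK method, Eqs. (25.28)-(25.29d).

    a : weights (a_1, ..., a_n)
    p : nodes (p_1, ..., p_{n-1})
    q : rows of coupling constants; row j-1 holds the q's multiplying
        k_1..k_{j-1} in the argument of k_j
    """
    ks = [f(x, y)]                     # k_1
    for j in range(1, len(a)):
        # each new k reuses every k computed so far -- the recurrence
        y_arg = y + sum(qjm * km for qjm, km in zip(q[j - 1], ks)) * h
        ks.append(f(x + p[j - 1] * h, y_arg))
    return y + sum(ai * ki for ai, ki in zip(a, ks)) * h

# With n = 2, a = (1/2, 1/2), p_1 = q_11 = 1 this reproduces Heun's method
f = lambda x, y: -2 * x**3 + 12 * x**2 - 20 * x + 8.5
print(rk_step(f, 0.0, 1.0, 0.5, a=(0.5, 0.5), p=(1.0,), q=((1.0,),)))  # -> 3.4375
```

Swapping in a = (0, 1) with p_1 = q_11 = 1/2 instead yields the midpoint method of the previous section.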

Various types of Runge-Kutta methods can be devised by employing different numbers of terms in the increment function as specified by n. Note that the first-order RK method with n = 1 is, in fact, Euler's method. Once n is chosen, values for the a's, p's, and q's are evaluated by setting Eq. (25.28) equal to terms in a Taylor series expansion (Box 25.1). Thus, at least for the lower-order versions, the number of terms, n, usually represents the order of the approach. For example, in the next section, second-order RK methods use an increment function with two terms (n = 2). These second-order methods will be exact if the solution to the differential equation is quadratic. In addition, because terms with h^3 and higher are dropped during the derivation, the local truncation error is O(h^3) and the global error is O(h^2). In subsequent sections, the third- and fourth-order RK methods (n = 3 and 4, respectively) are developed. For these cases, the global truncation errors are O(h^3) and O(h^4), respectively.

25.3.1 Second-Order Runge-Kutta Methods

The second-order version of Eq. (25.28) is

y_{i+1} = y_i + (a_1 k_1 + a_2 k_2)\,h   (25.30)

where

k_1 = f(x_i, y_i)   (25.30a)

k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)   (25.30b)

As described in Box 25.1, values for a_1, a_2, p_1, and q_{11} are evaluated by setting Eq. (25.30) equal to a Taylor series expansion to the second-order term. By doing this, we derive three equations to evaluate the four unknown constants. The three equations are

a_1 + a_2 = 1   (25.31)

a_2 p_1 = \frac{1}{2}   (25.32)

a_2 q_{11} = \frac{1}{2}   (25.33)


Because we have three equations with four unknowns, we must assume a value of one of the unknowns to determine the other three. Suppose that we specify a value for a_2. Then Eqs. (25.31) through (25.33) can be solved simultaneously for

a_1 = 1 - a_2   (25.34)

p_1 = q_{11} = \frac{1}{2a_2}   (25.35)
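Since Eqs. (25.34) and (25.35) fix a_1, p_1, and q_{11} once a_2 is chosen, the entire family of second-order methods can be generated from a single parameter. The following sketch (illustrative code, not from the text) takes one step on the polynomial of Example 25.1 for three common choices of a_2:

```python
def rk2_step(f, x, y, h, a2):
    # Eqs. (25.34)-(25.35): once a2 is chosen, the rest of the family follows
    a1 = 1.0 - a2
    p1 = q11 = 1.0 / (2.0 * a2)
    k1 = f(x, y)
    k2 = f(x + p1 * h, y + q11 * k1 * h)
    return y + (a1 * k1 + a2 * k2) * h

f = lambda x, y: -2 * x**3 + 12 * x**2 - 20 * x + 8.5
for a2, name in ((0.5, "Heun, single corrector"), (1.0, "midpoint"), (2 / 3, "Ralston")):
    print(f"a2 = {a2:.4f} ({name}): y(0.5) = {rk2_step(f, 0.0, 1.0, 0.5, a2)}")
```

The three a_2 values correspond to the named versions developed in Sec. 25.3.1; because this ODE's solution is quartic rather than quadratic, the three choices give different answers.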

Box 25.1 Derivation of the Second-Order Runge-Kutta Methods

The second-order version of Eq. (25.28) is

y_{i+1} = y_i + (a_1 k_1 + a_2 k_2)\,h   (B25.1.1)

where

k_1 = f(x_i, y_i)   (B25.1.2)

k_2 = f(x_i + p_1 h, y_i + q_{11} k_1 h)   (B25.1.3)

To use Eq. (B25.1.1) we have to determine values for the constants a_1, a_2, p_1, and q_{11}. To do this, we recall that the second-order Taylor series for y_{i+1} in terms of y_i and f(x_i, y_i) is written as [Eq. (25.11)]

y_{i+1} = y_i + f(x_i, y_i)\,h + \frac{f'(x_i, y_i)}{2!} h^2   (B25.1.4)

where f'(x_i, y_i) must be determined by chain-rule differentiation (Sec. 25.1.3):

f'(x_i, y_i) = \frac{\partial f(x, y)}{\partial x} + \frac{\partial f(x, y)}{\partial y} \frac{dy}{dx}   (B25.1.5)

Substituting Eq. (B25.1.5) into (B25.1.4) gives

y_{i+1} = y_i + f(x_i, y_i)\,h + \left( \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} \frac{dy}{dx} \right) \frac{h^2}{2!}   (B25.1.6)

The basic strategy underlying Runge-Kutta methods is to use algebraic manipulations to solve for values of a_1, a_2, p_1, and q_{11} that make Eqs. (B25.1.1) and (B25.1.6) equivalent.

To do this, we first use a Taylor series to expand Eq. (B25.1.3). The Taylor series for a two-variable function is defined as [recall Eq. (4.26)]

g(x + r, y + s) = g(x, y) + r \frac{\partial g}{\partial x} + s \frac{\partial g}{\partial y} + \cdots

Applying this method to expand Eq. (B25.1.3) gives

f(x_i + p_1 h, y_i + q_{11} k_1 h) = f(x_i, y_i) + p_1 h \frac{\partial f}{\partial x} + q_{11} k_1 h \frac{\partial f}{\partial y} + O(h^2)

This result can be substituted along with Eq. (B25.1.2) into Eq. (B25.1.1) to yield

y_{i+1} = y_i + a_1 h f(x_i, y_i) + a_2 h f(x_i, y_i) + a_2 p_1 h^2 \frac{\partial f}{\partial x} + a_2 q_{11} h^2 f(x_i, y_i) \frac{\partial f}{\partial y} + O(h^3)

or, by collecting terms,

y_{i+1} = y_i + [a_1 f(x_i, y_i) + a_2 f(x_i, y_i)]\,h + \left[ a_2 p_1 \frac{\partial f}{\partial x} + a_2 q_{11} f(x_i, y_i) \frac{\partial f}{\partial y} \right] h^2 + O(h^3)   (B25.1.7)


Now, comparing like terms in Eqs. (B25.1.6) and (B25.1.7), we determine that for the two equations to be equivalent, the following must hold:

a_1 + a_2 = 1

a_2 p_1 = \frac{1}{2}

a_2 q_{11} = \frac{1}{2}

These three simultaneous equations contain the four unknown constants. Because there is one more unknown than the number of equations, there is no unique set of constants that satisfy the equations. However, by assuming a value for one of the constants, we can determine the other three. Consequently, there is a family of second-order methods rather than a single version.
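The box's conclusion is easy to verify numerically: every member of the family satisfies the same three constraint equations, so all members give identical (and exact) results whenever the true solution is quadratic. A sketch (illustrative, not from the text), using the ODE dy/dx = 2x + 3 whose solution is y = x^2 + 3x + 1:

```python
def rk2_family_step(f, x, y, h, a2):
    # Any member of the second-order family: a1 = 1 - a2, p1 = q11 = 1/(2*a2)
    k1 = f(x, y)
    k2 = f(x + h / (2 * a2), y + k1 * h / (2 * a2))
    return y + ((1 - a2) * k1 + a2 * k2) * h

# ODE whose solution is the quadratic y = x**2 + 3x + 1, so dy/dx = 2x + 3
f_quad = lambda x, y: 2 * x + 3
exact = 1.25**2 + 3 * 1.25 + 1                  # y(1.25) = 6.3125
for a2 in (0.4, 0.5, 2 / 3, 1.0):
    step = rk2_family_step(f_quad, 1.0, 5.0, 0.25, a2)
    print(f"a2 = {a2:.4f}: y(1.25) = {step}  (exact = {exact})")
```

Every a_2 produces 6.3125 here; the choices only diverge on problems whose solutions are more complicated than a quadratic.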


Because we can choose an infinite number of values for a_2, there are an infinite number of second-order RK methods. Every version would yield exactly the same results if the solution to the ODE were quadratic, linear, or a constant. However, they yield different results when (as is typically the case) the solution is more complicated. We present three of the most commonly used and preferred versions:

Heun Method with a Single Corrector (a_2 = 1/2). If a_2 is assumed to be 1/2, Eqs. (25.34) and (25.35) can be solved for a_1 = 1/2 and p_1 = q_{11} = 1. These parameters, when substituted into Eq. (25.30), yield

y_{i+1} = y_i + \left( \frac{1}{2} k_1 + \frac{1}{2} k_2 \right) h   (25.36)

where

k_1 = f(x_i, y_i)   (25.36a)

k_2 = f(x_i + h, y_i + k_1 h)   (25.36b)

Note that k_1 is the slope at the beginning of the interval and k_2 is the slope at the end of the interval. Consequently, this second-order Runge-Kutta method is actually Heun's technique without iteration.

The Midpoint Method (a_2 = 1). If a_2 is assumed to be 1, then a_1 = 0, p_1 = q_{11} = 1/2, and Eq. (25.30) becomes

y_{i+1} = y_i + k_2 h   (25.37)

where

k_1 = f(x_i, y_i)   (25.37a)

k_2 = f\left( x_i + \frac{1}{2} h,\; y_i + \frac{1}{2} k_1 h \right)   (25.37b)

This is the midpoint method.

Ralston's Method (a_2 = 2/3). Ralston (1962) and Ralston and Rabinowitz (1978) determined that choosing a_2 = 2/3 provides a minimum bound on the truncation error for the second-order RK algorithms. For this version, a_1 = 1/3 and p_1 = q_{11} = 3/4, which yields

y_{i+1} = y_i + \left( \frac{1}{3} k_1 + \frac{2}{3} k_2 \right) h   (25.38)

where

k_1 = f(x_i, y_i)   (25.38a)

k_2 = f\left( x_i + \frac{3}{4} h,\; y_i + \frac{3}{4} k_1 h \right)   (25.38b)


EXAMPLE 25.6 Comparison of Various Second-Order RK Schemes

Problem Statement. Use the midpoint method [Eq. (25.37)] and Ralston's method [Eq. (25.38)] to numerically integrate Eq. (PT7.13)

f(x, y) = -2x^3 + 12x^2 - 20x + 8.5

from x = 0 to x = 4 using a step size of 0.5. The initial condition at x = 0 is y = 1.

Compare the results with the values obtained using another second-order RK algorithm,

that is, the Heun method without corrector iteration (Table 25.3).

Solution. The first step in the midpoint method is to use Eq. (25.37a) to compute

k_1 = -2(0)^3 + 12(0)^2 - 20(0) + 8.5 = 8.5

However, because the ODE is a function of x only, this result has no bearing on the second step, the use of Eq. (25.37b) to compute

k_2 = -2(0.25)^3 + 12(0.25)^2 - 20(0.25) + 8.5 = 4.21875

Notice that this estimate of the slope is much closer to the average value for the interval (4.4375) than the slope at the beginning of the interval (8.5) that would have been used for Euler's approach. The slope at the midpoint can then be substituted into Eq. (25.37) to predict

y(0.5) = 1 + 4.21875(0.5) = 3.109375   \varepsilon_t = 3.4\%

The computation is repeated, and the results are summarized in Fig. 25.14 and Table 25.3.
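The hand computation above is straightforward to check in code. This sketch (illustrative, not from the text) steps both methods across [0, 4] with h = 0.5, using the analytical solution y = -0.5x^4 + 4x^3 - 10x^2 + 8.5x + 1 (the integral of f with y(0) = 1) to compute the percent relative error:

```python
def f(x, y):
    return -2 * x**3 + 12 * x**2 - 20 * x + 8.5

def true_y(x):
    # Analytical solution used to compute the percent relative error
    return -0.5 * x**4 + 4 * x**3 - 10 * x**2 + 8.5 * x + 1

def midpoint_step(x, y, h):
    return y + f(x + h / 2, y + f(x, y) * h / 2) * h        # Eq. (25.37)

def ralston_step(x, y, h):
    k1 = f(x, y)
    k2 = f(x + 0.75 * h, y + 0.75 * k1 * h)                 # Eq. (25.38b)
    return y + (k1 / 3 + 2 * k2 / 3) * h                    # Eq. (25.38)

h, x, y_mid, y_ral = 0.5, 0.0, 1.0, 1.0
while x < 4.0 - 1e-12:
    y_mid, y_ral = midpoint_step(x, y_mid, h), ralston_step(x, y_ral, h)
    x += h
    et = (true_y(x) - y_mid) / true_y(x) * 100
    print(f"x = {x:.1f}: midpoint = {y_mid:.6f} (et = {et:.1f}%), Ralston = {y_ral:.6f}")
```

The first line of output reproduces the step computed by hand above: midpoint = 3.109375 with et = 3.4%.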
