
Chapter 4 | Numerical Differentiation and Integration

Before everything starts, it is necessary to introduce an approach to reducing truncation error: Richardson's extrapolation.

Richardson's Extrapolation

Suppose for each $h$ we have a formula $N(h)$ that approximates an unknown value $M$, and suppose the truncation error has the form

$$ M - N(h) = K_1 h + K_2 h^2 + K_3 h^3 + \cdots, $$

for some unknown constants $K_1, K_2, K_3, \dots$. This is an $O(h)$ approximation.

First, we apply a transformation to eliminate the $K_1 h$ term:

$$
\begin{aligned}
M &= N(h) + K_1 h + K_2 h^2 + K_3 h^3 + \cdots, \\
M &= N\left(\frac{h}{2}\right) + K_1 \frac{h}{2} + K_2 \frac{h^2}{4} + K_3 \frac{h^3}{8} + \cdots.
\end{aligned}
$$

Eliminating the $h$ term, we have

$$ M = \left[N\left(\frac{h}{2}\right) + \left(N\left(\frac{h}{2}\right) - N(h)\right)\right] + K_2\left(\frac{h^2}{2} - h^2\right) + K_3 \left(\frac{h^3}{4} - h^3\right) + \cdots. $$

Define $N_1(h) \equiv N(h)$ and

$$ N_2(h) = N_1 \left(\frac{h}{2}\right) + \left(N_1\left(\frac{h}{2}\right) - N_1(h)\right). $$

Then we have an $O(h^2)$ approximation formula for $M$:

$$ M = N_2(h) - \frac{K_2}{2}h^2 - \frac{3K_3}{4}h^3 - \cdots. $$

Repeating this process to eliminate the $h^2$ term, we obtain

$$ M = N_3(h) + \frac{K_3}{8}h^3 + \cdots, $$

where

$$ N_3(h) = N_2 \left(\frac{h}{2}\right) + \frac{1}{3}\left(N_2\left(\frac{h}{2}\right) - N_2(h)\right). $$

We can repeat this process recursively, and finally we get the following conclusion.

Richardson's Extrapolation

If $M$ is in the form

$$ M = N(h) + \sum\limits_{j = 1}^{m - 1}K_jh^j + O(h^m), $$

then for each $j = 2, 3, \dots, m$, we have an $O(h^j)$ approximation of the form

$$ N_j(h) = N_{j - 1}\left(\frac{h}{2}\right) + \frac{1}{2^{j - 1} - 1}\left(N_{j - 1}\left(\frac{h}{2}\right) - N_{j - 1}(h)\right). $$

It can also be used when the truncation error has the form

$$ \sum\limits_{j = 1}^{m - 1}K_jh^{\alpha_j} + O(h^{\alpha_m}). $$

[Figure chap4_0]
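To make the recursion concrete, here is a minimal Python sketch of the extrapolation table; the formula $N(h)$ used in the demo is an illustrative choice, not something from these notes.

```python
# Minimal sketch of Richardson's extrapolation for an error series in powers h^j,
# assuming N(h) is some O(h) approximation formula we can evaluate.
def richardson(N, h, levels):
    """Return the triangular table T[i][j] = N_{j+1}(h / 2**i)."""
    T = [[N(h / 2**i)] for i in range(levels)]   # first column: N_1 at halved step sizes
    for i in range(1, levels):
        for j in range(1, i + 1):
            # N_j(h) = N_{j-1}(h/2) + (N_{j-1}(h/2) - N_{j-1}(h)) / (2^{j-1} - 1)
            T[i].append(T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (2**j - 1))
    return T

# Illustrative use: approximate M = e by N(h) = (1 + h)**(1 / h), whose error
# expands in integer powers of h, and watch the last diagonal entry converge.
import math
table = richardson(lambda h: (1 + h)**(1 / h), 0.1, 5)
print(table[-1][-1], math.e)
```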

Numerical Differentiation

First Order Differentiation

Suppose $\{x_0, x_1, \dots, x_n\}$ are distinct points in some interval $I$ and $f \in C^{n + 1}(I)$. Then we have the Lagrange interpolating polynomial,

$$ f(x) = \sum\limits_{k = 0}^{n}f(x_k)L_k(x) + \frac{f^{(n + 1)}(\xi(x))}{(n + 1)!}\prod\limits_{i = 0}^{n}(x - x_i). $$

Differentiating $f(x)$ and substituting $x_j$ gives

$$ f'(x_j) = \sum\limits_{k = 0}^{n}f(x_k)L_k'(x_j) + \frac{f^{(n + 1)}(\xi(x_j))}{(n + 1)!} \prod\limits_{\substack{k = 0 \\ k \ne j}}^n(x_j - x_k), $$

which is called an $(n + 1)$-point formula for approximating $f'(x_j)$.

For convenience, we only discuss the equally spaced case. Suppose the interval $[a, b]$ is divided into $n$ parts, and denote

$$ h = \frac{b - a}{n}, \qquad x_i = a + ih. $$

When $n = 1$, we simply get the two-point formula,

$$ f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{h}{2}f''(\xi), $$

which is known as the forward-difference formula. Conversely, replacing $h$ with $-h$ gives the backward-difference formula,

$$ f'(x_0) = \frac{f(x_0) - f(x_0 - h)}{h} + \frac{h}{2}f''(\xi). $$

When $n = 2$, we get the three-point formulae. Due to symmetry, there are only two distinct ones.

$$
\begin{aligned}
f'(x_0) &= \frac{1}{2h}[-3f(x_0) + 4f(x_0 + h) - f(x_0 + 2h)] + \frac{h^2}{3}f^{(3)}(\xi), \\
& \quad \text{for some } \xi \in [x_0, x_0 + 2h], \\
f'(x_0) &= \frac{1}{2h}[f(x_0 + h) - f(x_0 - h)] - \frac{h^2}{6}f^{(3)}(\xi), \\
& \quad \text{for some } \xi \in [x_0 - h, x_0 + h].
\end{aligned}
$$

When $n = 4$, we get the five-point formulae. The following two are the most useful.

$$
\begin{aligned}
f'(x_0) &= \frac{1}{12h}[-25f(x_0) + 48f(x_0 + h) - 36f(x_0 + 2h) \\
& \quad + 16f(x_0 + 3h) - 3f(x_0 + 4h)] + \frac{h^4}{5}f^{(5)}(\xi), \\
& \quad \text{for some } \xi \in [x_0, x_0 + 4h], \\
f'(x_0) &= \frac{1}{12h}[f(x_0 - 2h) - 8f(x_0 - h) + 8f(x_0 + h) - f(x_0 + 2h)] + \frac{h^4}{30}f^{(5)}(\xi), \\
& \quad \text{for some } \xi \in [x_0 - 2h, x_0 + 2h].
\end{aligned}
$$
[Figure chap4_2]
[Figure chap4_3]
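As a quick numerical sanity check of these formulas, here is a small sketch; the test function, point, and step size are illustrative assumptions.

```python
# Sketch comparing difference formulas on an illustrative function f(x) = sin(x)
# at x0 = 1.0; the exact derivative is cos(1.0).
import math

f, x0, h = math.sin, 1.0, 0.1

forward  = (f(x0 + h) - f(x0)) / h                                              # O(h)
central3 = (f(x0 + h) - f(x0 - h)) / (2 * h)                                    # O(h^2)
central5 = (f(x0 - 2*h) - 8*f(x0 - h) + 8*f(x0 + h) - f(x0 + 2*h)) / (12 * h)   # O(h^4)

exact = math.cos(x0)
for name, val in [("forward", forward), ("3-point", central3), ("5-point", central5)]:
    print(f"{name:8s} {val:.10f}  error {abs(val - exact):.2e}")
```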

Differentiation with Richardson's Extrapolation

Consider the three-point midpoint formula together with its full error expansion,

$$ f'(x_0) = \frac{1}{2h}[f(x_0 + h) - f(x_0 - h)] - \frac{h^2}{6}f^{(3)}(x_0) - \frac{h^4}{120}f^{(5)}(x_0) - \cdots. $$

In this case, for Richardson's extrapolation we take

$$ N_1(h) = \frac{1}{2h}[f(x_0 + h) - f(x_0 - h)]. $$

Eliminating the $h^2$ term, we have

$$ f'(x_0) = N_2(h) + \frac{h^4}{480}f^{(5)}(x_0) + \cdots, $$

where

$$ N_2(h) = N_1 \left(\frac{h}{2}\right) + \frac{1}{3}\left(N_1\left(\frac{h}{2}\right) - N_1(h)\right). $$

Continuing this procedure, and noting that the error expansion contains only even powers of $h$, we have

$$ N_j(h) = N_{j - 1}\left(\frac{h}{2}\right) + \frac{1}{4^{j - 1} - 1}\left(N_{j - 1}\left(\frac{h}{2}\right) - N_{j - 1}(h)\right). $$
Example

Suppose $x_0 = 2.0$, $h = 0.2$, and $f(x) = xe^x$; the exact value of $f'(x_0)$ is 22.167168. The extrapolation process is shown below.

[Figure chap4_1]
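A minimal sketch reproducing this example (the number of extrapolation levels is an arbitrary choice):

```python
# Sketch: Richardson's extrapolation applied to the three-point midpoint formula
# for f(x) = x * e^x at x0 = 2.0, h = 0.2; exact value f'(2) = 3e^2 ≈ 22.167168.
import math

def N1(h, f=lambda x: x * math.exp(x), x0=2.0):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

h, levels = 0.2, 4
T = [[N1(h / 2**i)] for i in range(levels)]
for i in range(1, levels):
    for j in range(1, i + 1):
        # even-power error series, hence the 4^{j-1} - 1 denominator
        T[i].append(T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (4**j - 1))

print(T[-1][-1], 3 * math.exp(2))   # both ≈ 22.167168
```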

Higher-Order Differentiation

Take the second-order derivative as an example. From the Taylor polynomial about the point $x_0$, we have

$$
\begin{aligned}
f(x_0 + h) &= f(x_0) + f'(x_0)h + \frac{1}{2}f''(x_0)h^2 + \frac{1}{6}f'''(x_0)h^3 + \frac{1}{24}f^{(4)}(\xi_1)h^4, \\
f(x_0 - h) &= f(x_0) - f'(x_0)h + \frac{1}{2}f''(x_0)h^2 - \frac{1}{6}f'''(x_0)h^3 + \frac{1}{24}f^{(4)}(\xi_{-1})h^4,
\end{aligned}
$$

where $x_0 - h < \xi_{-1} < x_0 < \xi_1 < x_0 + h$.

Adding these equations and rearranging, we get

$$ f''(x_0) = \frac{1}{h^2}[f(x_0 - h) - 2f(x_0) + f(x_0 + h)] - \frac{h^2}{12}f^{(4)}(\xi), $$

where $x_0 - h < \xi < x_0 + h$ and $f^{(4)}(\xi) = \dfrac{1}{2}\left(f^{(4)}(\xi_1) + f^{(4)}(\xi_{-1})\right)$ (by the intermediate value theorem).
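A minimal check of this central second-difference formula, with an illustrative test function:

```python
# Sketch: central difference for f''(x0), with f(x) = exp(x) so f''(1) = e.
import math

f, x0, h = math.exp, 1.0, 0.05
d2 = (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / h**2
print(d2, math.e, abs(d2 - math.e))   # error is O(h^2), roughly h^2/12 * e here
```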

Error Analysis

Now we examine the formula

$$ f'(x_0) = \frac{1}{2h}[f(x_0 + h) - f(x_0 - h)] - \frac{h^2}{6}f^{(3)}(\xi). $$

Suppose that in evaluating $f(x_0 + h)$ and $f(x_0 - h)$ we encounter roundoff errors $e(x_0 + h)$ and $e(x_0 - h)$, so that the computed values $\tilde f(x_0 + h)$ and $\tilde f(x_0 - h)$ satisfy

$$
\begin{aligned}
f(x_0 + h) &= \tilde f(x_0 + h) + e(x_0 + h), \\
f(x_0 - h) &= \tilde f(x_0 - h) + e(x_0 - h).
\end{aligned}
$$

Thus the total error is

$$ R = f'(x_0) - \frac{\tilde f(x_0 + h) - \tilde f(x_0 - h)}{2h} = \frac{e(x_0 + h) - e(x_0 - h)}{2h} - \frac{h^2}{6}f^{(3)}(\xi). $$

Moreover, suppose the roundoff errors $e$ are bounded by some number $\varepsilon > 0$ and $f^{(3)}(x)$ is bounded by some number $M > 0$. Then

$$ |R| \le \frac{\varepsilon}{h} + \frac{h^2}{6}M. $$

Minimizing this bound shows that theoretically the best choice of $h$ is $\sqrt[3]{3\varepsilon / M}$. But in practice we cannot compute such an $h$, since we usually know nothing about $f^{(3)}(x)$.

In short, we should be aware that the step size $h$ can be neither too large nor too small.
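This trade-off is easy to observe numerically. The sketch below (function, point, and the sweep of step sizes are illustrative) shows the total error first decreasing with $h$ and then growing again once roundoff dominates.

```python
# Sketch: truncation error vs. roundoff for the central difference formula.
# For f(x) = sin(x) at x0 = 1.0, the bound eps/h + M*h^2/6 suggests an optimal
# h near (3*eps/M)^(1/3), around 1e-5 in double precision.
import math

f, x0, exact = math.sin, 1.0, math.cos(1.0)
for k in range(1, 13):
    h = 10.0**(-k)
    approx = (f(x0 + h) - f(x0 - h)) / (2 * h)
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.3e}")
```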

Numerical Integration

Numerical Quadrature

Similarly to the differentiation case, suppose $\{x_0, x_1, \dots, x_n\}$ are distinct points in some interval $I$ and $f \in C^{n + 1}(I)$, so that we have the Lagrange interpolating polynomial,

$$ f(x) = \sum\limits_{k = 0}^{n}f(x_k)L_k(x) + \frac{f^{(n + 1)}(\xi(x))}{(n + 1)!}\prod\limits_{i = 0}^{n}(x - x_i). $$

Integrating $f(x)$, we get

$$
\begin{aligned}
\int_a^b f(x) dx &= \int_a^b \sum\limits_{k = 0}^{n}f(x_k)L_k(x) dx + \int_a^b \frac{f^{(n + 1)}(\xi(x))}{(n + 1)!} \prod\limits_{k = 0}^n (x - x_k)dx \\
&= \sum\limits_{i = 0}^n A_i f(x_i) + \underbrace{\frac{1}{(n + 1)!} \int_a^b \prod_{i = 0}^n(x - x_i)f^{(n + 1)}(\xi(x))dx}_{E(f)},
\end{aligned}
$$

where

$$ A_i = \int_a^b L_i(x)dx. $$

Definition 4.0

The degree of accuracy, or precision, of a quadrature formula is the largest positive integer $n$ such that the formula is exact for $x^k$ for each $k = 0, 1, \dots, n$.

As before, we again assume the equally spaced case.

Theorem 4.0 | (n + 1)-point closed Newton-Cotes formulae

Suppose $x_0 = a$, $x_n = b$, and $h = (b - a) / n$. Then $\exists\ \xi \in (a, b)$ s.t.

$$ \int_a^b f(x)dx = \sum\limits_{i = 0}^n A_i f(x_i) + \frac{h^{n + 3}f^{(n + 2)}(\xi)}{(n + 2)!}\int_0^n t^2(t - 1)\cdots(t - n)dt, $$

if $n$ is even and $f \in C^{n + 2}[a, b]$, and

$$ \int_a^b f(x)dx = \sum\limits_{i = 0}^n A_i f(x_i) + \frac{h^{n + 2}f^{(n + 1)}(\xi)}{(n + 1)!}\int_0^n t(t - 1)\cdots(t - n)dt, $$

if $n$ is odd and $f \in C^{n + 1}[a, b]$, where

$$ A_i = \int_a^b L_i(x) dx = \int_a^b \prod\limits_{\substack{j = 0 \\ j \ne i}}^n \frac{(x - x_j)}{(x_i - x_j)} dx. $$

[Figure chap4_6: (n + 1)-point closed Newton-Cotes formulae]

$n = 1$: Trapezoidal rule

$$ \int_{x_0}^{x_1}f(x)dx = \frac{h}{2}[f(x_0) + f(x_1)] - \frac{h^3}{12}f''(\xi), \quad \text{for some } x_0 < \xi < x_1. $$

$n = 2$: Simpson's rule

$$ \int_{x_0}^{x_2}f(x)dx = \frac{h}{3}[f(x_0) + 4f(x_1) + f(x_2)] - \frac{h^5}{90}f^{(4)}(\xi), \quad \text{for some } x_0 < \xi < x_2. $$

$n = 3$: Simpson's Three-Eighths rule

$$ \int_{x_0}^{x_3}f(x)dx = \frac{3h}{8}[f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)] - \frac{3h^5}{80}f^{(4)}(\xi), \quad \text{for some } x_0 < \xi < x_3. $$
[Figure chap4_4: Trapezoidal rule]
[Figure chap4_5: Simpson's rule]
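A direct transcription of these three closed rules into Python, with an illustrative integrand for checking:

```python
# Sketch: basic closed Newton-Cotes rules on a single interval [a, b].
import math

def trapezoid(f, a, b):
    h = b - a
    return h / 2 * (f(a) + f(b))

def simpson(f, a, b):
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

def simpson38(f, a, b):
    h = (b - a) / 3
    return 3 * h / 8 * (f(a) + 3 * f(a + h) + 3 * f(a + 2 * h) + f(b))

# Illustrative check on ∫_0^1 e^x dx = e - 1.
f, exact = math.exp, math.e - 1
for rule in (trapezoid, simpson, simpson38):
    val = rule(f, 0.0, 1.0)
    print(rule.__name__, val, abs(val - exact))
```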

Theorem 4.1 | (n + 1)-point open Newton-Cotes formulae

Suppose $x_{-1} = a$, $x_{n + 1} = b$, and $h = (b - a) / (n + 2)$. Then $\exists\ \xi \in (a, b)$ s.t.

$$ \int_a^b f(x)dx = \sum\limits_{i = 0}^n A_i f(x_i) + \frac{h^{n + 3}f^{(n + 2)}(\xi)}{(n + 2)!}\int_{-1}^{n + 1} t^2(t - 1)\cdots(t - n)dt, $$

if $n$ is even and $f \in C^{n + 2}[a, b]$, and

$$ \int_a^b f(x)dx = \sum\limits_{i = 0}^n A_i f(x_i) + \frac{h^{n + 2}f^{(n + 1)}(\xi)}{(n + 1)!}\int_{-1}^{n + 1} t(t - 1)\cdots(t - n)dt, $$

if $n$ is odd and $f \in C^{n + 1}[a, b]$, where

$$ A_i = \int_a^b L_i(x) dx = \int_a^b \prod\limits_{\substack{j = 0 \\ j \ne i}}^n \frac{(x - x_j)}{(x_i - x_j)} dx. $$

[Figure chap4_7: (n + 1)-point open Newton-Cotes formulae]

$n = 0$: Midpoint rule

$$ \int_{x_{-1}}^{x_1}f(x)dx = 2hf(x_0) + \frac{h^3}{3}f''(\xi), \quad \text{for some } x_{-1} < \xi < x_1. $$

Composite Numerical Integration

Motivation: Although the Newton-Cotes formulae give better and better approximations as $n$ increases, they are based on interpolating polynomials and therefore inherit the oscillatory nature of high-degree polynomials.

Instead, we take a piecewise approach to numerical integration, applying low-order Newton-Cotes formulae on each subinterval. These are the techniques applied most often.

Theorem 4.2 | Composite Trapezoidal rule

Let $f \in C^2[a, b]$, $h = (b - a) / n$, and $x_j = a + jh$ for $j = 0, 1, \dots, n$. Then $\exists\ \mu \in (a, b)$ s.t.

$$ \int_a^b f(x)dx = \frac{h}{2}\left[f(a) + 2 \sum\limits_{j = 1}^{n - 1}f(x_j) + f(b)\right] - \frac{b - a}{12}h^2 f''(\mu). $$

[Figure chap4_8]
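A minimal sketch of Theorem 4.2 in code (the integrand in the demo is illustrative):

```python
# Sketch of the Composite Trapezoidal rule with n subintervals.
import math

def composite_trapezoid(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b) + 2 * sum(f(a + j * h) for j in range(1, n))
    return h / 2 * s

# Illustrative check: ∫_0^π sin(x) dx = 2; the error decreases like h^2.
for n in (4, 8, 16):
    approx = composite_trapezoid(math.sin, 0.0, math.pi, n)
    print(n, approx, abs(approx - 2.0))
```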

Theorem 4.3 | Composite Simpson's rule

Let $f \in C^4[a, b]$, $n$ be even, $h = (b - a) / n$, and $x_j = a + jh$ for $j = 0, 1, \dots, n$. Then $\exists\ \mu \in (a, b)$ s.t.

$$ \int_a^b f(x)dx = \frac{h}{3}\left[f(a) + 2 \sum\limits_{j = 1}^{(n / 2) - 1}f(x_{2j}) + 4 \sum\limits_{j = 1}^{n / 2}f(x_{2j - 1}) + f(b)\right] - \frac{b - a}{180}h^4 f^{(4)}(\mu). $$

[Figure chap4_9]
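The corresponding sketch for the Composite Simpson's rule (again with an illustrative integrand):

```python
# Sketch of the Composite Simpson's rule; n must be even.
import math

def composite_simpson(f, a, b, n):
    assert n % 2 == 0, "n must be even"
    h = (b - a) / n
    s = f(a) + f(b)
    s += 2 * sum(f(a + 2 * j * h) for j in range(1, n // 2))
    s += 4 * sum(f(a + (2 * j - 1) * h) for j in range(1, n // 2 + 1))
    return h / 3 * s

# Illustrative check: ∫_0^π sin(x) dx = 2; the error decreases like h^4.
for n in (4, 8, 16):
    approx = composite_simpson(math.sin, 0.0, math.pi, n)
    print(n, approx, abs(approx - 2.0))
```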

Theorem 4.4 | Composite Midpoint rule

Let $f \in C^2[a, b]$, $n$ be even, $h = (b - a) / (n + 2)$, and $x_j = a + (j + 1)h$ for $j = -1, 0, \dots, n + 1$. Then $\exists\ \mu \in (a, b)$ s.t.

$$ \int_a^b f(x)dx = 2h \sum\limits_{j = 0}^{n / 2}f(x_{2j}) + \frac{b - a}{6} h^2 f''(\mu). $$

[Figure chap4_10]

Stability

Composite integration techniques are all stable w.r.t roundoff error.

Example: Composite Simpson's rule

Suppose $f(x_i)$ is approximated by $\tilde f(x_i)$ with

$$ f(x_i) = \tilde f(x_i) + e_i. $$

Then the accumulated roundoff error is

$$ e(h) = \left|\frac{h}{3}\left[e_0 + 2\sum\limits_{j = 1}^{(n / 2) - 1}e_{2j} + 4\sum\limits_{j = 1}^{n / 2}e_{2j-1} + e_n\right]\right|. $$

Suppose the $e_i$ are uniformly bounded by $\varepsilon$; then

$$ e(h) \le \frac{h}{3}\left[\varepsilon + 2 \left(\frac{n}{2} - 1\right)\varepsilon + 4 \cdot \frac{n}{2}\, \varepsilon + \varepsilon \right] = nh\varepsilon = (b - a)\varepsilon. $$

That means that even if we divide the interval into more parts, the roundoff error bound does not grow, so the method is quite stable.

Romberg Integration

Romberg integration combines the Composite Trapezoidal rule and Richardson's extrapolation to derive a more accurate approximation.

Suppose we divide the interval $[a, b]$ into $m_1 = 1$, $m_2 = 2$, $\dots$, $m_n = 2^{n - 1}$ subintervals respectively. For each division, the step size is $h_k = (b - a) / m_k = (b - a) / 2^{k - 1}$.

We then use $R_{k, 1}$ to denote the corresponding composite trapezoidal approximation,

$$ \int_a^b f(x) dx = \underbrace{\frac{h_k}{2} \left[f(a) + f(b) + 2 \sum\limits_{i = 1}^{2^{k - 1} - 1} f(a + ih_k)\right]}_{R_{k, 1}} - \frac{(b - a)}{12}h^2_kf''(\mu_k). $$

These approximations satisfy the recursive formula

$$ R_{k, 1} = \frac{1}{2}\left[R_{k - 1, 1} + h_{k - 1}\sum\limits_{i = 1}^{2^{k - 2}}f(a+(2i-1)h_k)\right]. $$
[Figure chap4_11]

Theorem 4.5

The Composite Trapezoidal rule can be written with an alternative error term in the form

$$ \int_a^b f(x) dx - R_{k, 1} = \sum\limits_{i = 1}^{\infty} K_i h^{2i}_k, $$

where $K_i$ depends only on $f^{(2i-1)}(a)$ and $f^{(2i-1)}(b)$.

This nice theorem makes Richardson's extrapolation available for reducing the truncation error. Since the error expansion contains only even powers of $h_k$, as in Differentiation with Richardson's Extrapolation we have the following formula.

Romberg Integration

$$ R_{k, j} = R_{k, j - 1} + \frac{R_{k, j - 1} - R_{k - 1, j - 1}}{4^{j - 1} - 1}, $$

which is an $O(h^{2j}_k)$ approximation.

[Figure chap4_12]
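A compact sketch of the whole Romberg table, combining the recursive $R_{k,1}$ formula with the extrapolation step (the integrand in the demo is illustrative):

```python
# Sketch of Romberg integration: R[k][0] is the composite trapezoidal value with
# 2^k subintervals, built recursively; each further column applies Richardson's
# extrapolation with denominator 4^j - 1 for the j-th extrapolation.
import math

def romberg(f, a, b, n):
    R = [[0.0] * n for _ in range(n)]
    h = b - a
    R[0][0] = h / 2 * (f(a) + f(b))
    for k in range(1, n):
        h /= 2
        # R_{k,1} = 1/2 [R_{k-1,1} + h_{k-1} * sum of f at the new midpoints]
        R[k][0] = 0.5 * R[k - 1][0] + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4**j - 1)
    return R

# Illustrative check: ∫_0^π sin(x) dx = 2.
print(romberg(math.sin, 0.0, math.pi, 5)[-1][-1])   # ≈ 2.000000
```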

Adaptive Quadrature Methods

Motivation: With equal spacing, it can happen that one half of the interval is already well approximated while only the other half needs further subdivision. Here we introduce adaptive quadrature methods based on Simpson's rule.

First, we want to determine how much precision we gain by applying Simpson's rule on two subintervals and summing the results, compared with applying Simpson's rule on the whole interval.

From Simpson's rule with $h = (b - a) / 2$, we have

$$ \int_a^b f(x) dx = S(a, b) - \frac{h^5}{90}f^{(4)}(\mu), $$

where

$$ S(a, b) = \frac{h}{3}[f(a) + 4f(a + h) + f(b)]. $$

If we divide $[a, b]$ into two subintervals and apply Simpson's rule to each (namely, apply the Composite Simpson's rule with $n = 4$ and step size $h / 2$), we have

$$ \int_a^b f(x) dx = S\left(a, \frac{a + b}{2}\right) + S\left(\frac{a + b}{2}, b\right) - \frac{1}{16}\left(\frac{h^5}{90}\right)f^{(4)}(\tilde \mu). $$

Moreover, assume $f^{(4)}(\mu) \approx f^{(4)}(\tilde \mu)$. Then

$$ S\left(a, \frac{a + b}{2}\right) + S\left(\frac{a + b}{2}, b\right) - \frac{1}{16}\left(\frac{h^5}{90}\right)f^{(4)}(\tilde \mu) \approx S(a, b) - \frac{h^5}{90}f^{(4)}(\mu), $$

so

$$ \frac{h^5}{90}f^{(4)}(\mu) \approx \frac{16}{15}\left[S(a, b) - S\left(a, \frac{a + b}{2}\right) - S\left(\frac{a + b}{2}, b\right)\right]. $$

Then,

$$
\begin{aligned}
\left|\int_a^b f(x) dx - S\left(a, \frac{a + b}{2}\right) - S\left(\frac{a + b}{2}, b\right)\right| &= \left|\frac{1}{16}\left(\frac{h^5}{90}\right)f^{(4)}(\tilde \mu)\right| \\
&\approx \frac{1}{15} \left|S(a, b) - S\left(a, \frac{a + b}{2}\right) - S\left(\frac{a + b}{2}, b\right)\right|.
\end{aligned}
$$

This result means that the subdivided sum approximates $\int_a^b f(x) dx$ about 15 times better than it agrees with $S(a, b)$. Thus, suppose we have a tolerance $\varepsilon$ across the interval $[a, b]$. If

$$ \left|S(a, b) - S\left(a, \frac{a + b}{2}\right) - S\left(\frac{a + b}{2}, b\right)\right| < 15\varepsilon, $$

then we expect to have

$$ \left|\int_a^bf(x)dx - S\left(a, \frac{a + b}{2}\right) - S\left(\frac{a + b}{2}, b\right)\right| < \varepsilon, $$

and the subdivided sum is accepted as a sufficiently accurate approximation to $\int_a^bf(x)dx$.

Conclusion

Suppose we have a tolerance $\varepsilon$ on $[a, b]$, and we want it distributed uniformly. Then on a subinterval $[p, q] \subseteq [a, b]$ with $q - p = k(b - a)$, we allow a tolerance of $k\varepsilon$.

Moreover, suppose the Simpson approximation on $[p, q]$ is $S$, while the Simpson approximations on $[p, (p + q) / 2]$ and $[(p + q) / 2, q]$ are $S_1$ and $S_2$ respectively.

Then the criterion for accepting $S_1 + S_2$ on $[p, q]$ (rather than subdividing further) is

$$ |S_1 + S_2 - S| < M \cdot k \varepsilon, $$

where $M$ is usually taken to be 10 rather than the 15 derived above, as a safety margin for the error in the assumption $f^{(4)}(\mu) \approx f^{(4)}(\tilde \mu)$.
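A recursive sketch of this strategy with $M = 10$; the integrand and tolerance in the demo are illustrative choices.

```python
# Sketch of adaptive Simpson quadrature based on the acceptance criterion above.
import math

def simpson(f, a, b):
    h = (b - a) / 2
    return h / 3 * (f(a) + 4 * f(a + h) + f(b))

def adaptive_simpson(f, a, b, tol):
    m = (a + b) / 2
    S, S1, S2 = simpson(f, a, b), simpson(f, a, m), simpson(f, m, b)
    if abs(S1 + S2 - S) < 10 * tol:        # accept: estimated error |S1+S2-S|/15 < tol
        return S1 + S2
    # otherwise subdivide, splitting the tolerance between the two halves
    return adaptive_simpson(f, a, m, tol / 2) + adaptive_simpson(f, m, b, tol / 2)

# Illustrative check: ∫_0^π sin(x) dx = 2 to tolerance 1e-8.
print(adaptive_simpson(math.sin, 0.0, math.pi, 1e-8))
```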

Gaussian Quadrature

Instead of the equal spacing used in the Newton-Cotes formulae, the selection of the points $x_i$ also becomes part of the problem. Gaussian quadrature aims to construct a formula $\sum\limits_{k = 0}^n A_kf(x_k)$ that approximates $\int_a^b w(x)f(x)dx$ with degree of precision $2n + 1$ using $n + 1$ points, where $w(x)$ is a weight function. (Compare with the equally spaced strategy, which only has precision of about $n$.)

That means we need to determine $x_i$ and $A_i$ (in total $2n + 2$ unknowns) such that the formula is exact for $f(x) = 1, x, \dots, x^{2n + 1}$ (in total $2n + 2$ equations). The selected points $x_i$ are called Gaussian points.

Problem

Theoretically, since we have $2n + 2$ unknowns and $2n + 2$ equations, we can solve for $x_i$ and $A_i$. But the equations are not linear!

Thus we give the following theorem, which finds the Gaussian points without solving the nonlinear equations. Recall the definition of a weight function, orthogonality of polynomials, and the construction of a set of orthogonal polynomials.

Theorem 4.6

$x_0, \dots, x_n$ are Gaussian points iff $W(x) = \prod\limits_{k = 0}^n (x - x_k)$ is orthogonal to every polynomial of degree no greater than $n$ on the interval $[a, b]$ w.r.t the weight function $w(x)$.

Proof

$\Rightarrow$

If $x_0, \dots, x_n$ are Gaussian points, then the degree of precision of the formula $\int_a^b w(x)f(x)dx \approx \sum\limits_{k = 0}^n A_k f(x_k)$ is at least $2n + 1$.

Then $\forall\ P(x) \in \Pi_n$, $\deg(P(x)W(x)) \le 2n + 1$. Thus

$$ \int_a^b w(x)P(x)W(x)dx = \sum\limits_{k = 0}^n A_k P(x_k)\underbrace{W(x_k)}_{0} = 0. $$

$\Leftarrow$

$\forall\ P(x) \in \Pi_{2n + 1}$, write $P(x) = W(x)q(x) + r(x)$ with $q(x), r(x) \in \Pi_{n}$. Then

$$
\begin{aligned}
\int_a^b w(x)P(x) dx &= \int_a^b w(x)W(x)q(x)dx + \int_a^b w(x)r(x)dx \\
&= \sum\limits_{k = 0}^n A_k r(x_k) = \sum\limits_{k = 0}^n A_k P(x_k),
\end{aligned}
$$

where the first integral vanishes by the orthogonality assumption, the second is evaluated exactly because the interpolatory formula is exact for polynomials of degree at most $n$, and $r(x_k) = P(x_k)$ since $W(x_k) = 0$. Hence the degree of precision is at least $2n + 1$.

Recall that the set of orthogonal polynomials $\{\varphi_0, \dots, \varphi_n, \dots\}$ is linearly independent and that $\varphi_{n + 1}$ is orthogonal to every polynomial of degree no greater than $n$.

Thus we can take $\varphi_{n + 1}(x)$ to be $W(x)$, and the roots of $\varphi_{n + 1}(x)$ are the Gaussian points.

General Solution

Problem: Assume

$$ \int_a^b w(x)f(x)dx \approx \sum\limits_{k = 0}^n A_k f(x_k). $$

Step.1 Construct the set of orthogonal polynomials on the interval $[a, b]$ w.r.t $w(x)$ by the Gram-Schmidt process, from $\varphi_0(x) \equiv 1$ up to $\varphi_{n + 1}(x)$.

Step.2 Find the roots of $\varphi_{n + 1}(x)$, which are the Gaussian points $x_0, \dots, x_n$.

Step.3 Solve the linear system given by requiring exactness for $f(x) = 1, x, \dots, x^n$, and obtain $A_0, \dots, A_n$; by Theorem 4.6 the resulting formula then has precision $2n + 1$. A sketch of this procedure is given below.
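Here is a minimal sketch of these three steps, assuming sympy is available for the symbolic integrals; the weight function, interval, and value of $n$ in the demo are illustrative.

```python
# Minimal sketch of the general Gaussian-quadrature construction (Steps 1-3),
# assuming sympy; the demo weight, interval, and n are illustrative choices.
import sympy as sp

x = sp.symbols('x')

def gauss_rule(w, a, b, n):
    """Return Gaussian points and coefficients A_k for int_a^b w(x) f(x) dx
    built from n + 1 points."""
    inner = lambda p, q: sp.integrate(w * p * q, (x, a, b))
    # Step 1: Gram-Schmidt from phi_0 = 1 up to phi_{n+1}.
    phis = [sp.Integer(1)]
    for k in range(1, n + 2):
        p = x**k
        for q in phis:
            p -= inner(x**k, q) / inner(q, q) * q
        phis.append(sp.expand(p))
    # Step 2: the Gaussian points are the roots of phi_{n+1}.
    pts = sorted(sp.Poly(phis[-1], x).nroots())
    # Step 3: solve the linear system from exactness for 1, x, ..., x^n.
    A = sp.symbols(f'A0:{n + 1}')
    eqs = [sp.Eq(sum(A[i] * pts[i]**m for i in range(n + 1)),
                 sp.integrate(w * x**m, (x, a, b))) for m in range(n + 1)]
    sol = sp.solve(eqs, A)
    return pts, [sol[a_] for a_ in A]

# Illustrative check: w(x) = 1 on [-1, 1] with n = 1 gives the 2-point
# Gauss-Legendre rule, points ±1/sqrt(3) and coefficients 1, 1.
pts, coeffs = gauss_rule(sp.Integer(1), -1, 1, 1)
print(pts, coeffs)
```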

Although the method above is theoretically available, it is tedious. However, some special cases have already been worked out, and with a suitable transformation we can use them to solve the general problem too.

Legendre Polynomials

A typical set of orthogonal polynomials is the Legendre polynomials.

Legendre Polynomials

$$ P_k(x) = \frac{1}{2^k k!} \frac{d^k}{dx^k}(x^2 - 1)^k, $$

or equivalently defined recursively by

$$
\begin{aligned}
& P_0(x) = 1,\ P_1(x) = x, \\
& P_{k + 1}(x) = \frac{1}{k + 1}\left((2k + 1)x P_k(x) - k P_{k - 1}(x)\right).
\end{aligned}
$$

Property

  • $\{P_n(x)\}$ are orthogonal on $[-1, 1]$ w.r.t the weight function $w(x) \equiv 1$:

    $$ (P_j, P_k) = \begin{cases} 0, & j \ne k, \\ \dfrac{2}{2k + 1}, & j = k. \end{cases} $$

  • $P_n(x)$ is a polynomial of degree $n$ (below we list the monic versions, obtained by dividing by the leading coefficient).

  • $P_n(x)$ is even or odd according to the parity of $n$, i.e. $P_n(-x) = (-1)^n P_n(x)$.

  • The $n$ roots of $P_n(x)$ are all real, distinct, and lie in $(-1, 1)$.

The first few of them, written in monic form, are

$$
\begin{aligned}
P_0(x) &= 1, \\
P_1(x) &= x, \\
P_2(x) &= x^2 - \frac13, \\
P_3(x) &= x^3 - \frac35 x, \\
P_4(x) &= x^4 - \frac67 x^2 + \frac{3}{35}.
\end{aligned}
$$

The following table gives the pre-calculated values.

[Table chap4_13]

To map the interval $[-1, 1]$ to $[a, b]$, we use the linear change of variables

$$ t = \frac{2x - a - b}{b - a} \Leftrightarrow x = \frac12 [(b - a)t + a + b]. $$
[Figure chap4_14]

Thus we have

$$ \int_a^b f(x) dx = \int_{-1}^1 f\left(\frac{(b - a)t + (b + a)}{2}\right)\frac{b - a}{2}dt. $$

The formula using the roots of $P_{n + 1}(x)$ is called the Gauss-Legendre quadrature formula.

Example

Approximate $\int_1^{1.5} e^{-x^2}dx$ (the exact value to 7 decimal places is 0.1093643).

Solution.

$$ \int_1^{1.5} e^{-x^2}dx = \frac14 \int_{-1}^1 e^{-(t + 5)^2 / 16}dt. $$

Using the two-point formula (roots $\pm 0.57735$, coefficients $1$),

$$ \int_1^{1.5} e^{-x^2}dx \approx \frac14 \left[e^{-(0.57735 + 5)^2 / 16} + e^{-(-0.57735 + 5)^2 / 16}\right] = 0.1094003. $$

Using the three-point formula (roots $\pm 0.77460$ and $0$, coefficients $0.55556$ and $0.88889$),

$$
\begin{aligned}
\int_1^{1.5} e^{-x^2}dx &\approx \frac14 \left[0.55556e^{-(0.77460 + 5)^2 / 16} + 0.88889e^{-5^2 / 16} \right. \\
& \quad \left. + 0.55556e^{-(-0.77460 + 5)^2 / 16}\right] \\
& = 0.1093642.
\end{aligned}
$$
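As a cross-check of this example, the sketch below uses NumPy's `numpy.polynomial.legendre.leggauss` for the nodes and weights together with the change of variables above; the helper function name is my own.

```python
# Sketch: Gauss-Legendre quadrature for ∫_1^1.5 exp(-x^2) dx ≈ 0.1093643.
import numpy as np

def gauss_legendre(f, a, b, npts):
    t, w = np.polynomial.legendre.leggauss(npts)   # nodes and weights on [-1, 1]
    x = 0.5 * ((b - a) * t + a + b)                # map [-1, 1] -> [a, b]
    return 0.5 * (b - a) * np.sum(w * f(x))

f = lambda x: np.exp(-x**2)
for npts in (2, 3):
    print(npts, gauss_legendre(f, 1.0, 1.5, npts))
# prints ≈ 0.1094003 for 2 points and ≈ 0.1093642 for 3 points
```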

Chebyshev Polynomials

The Chebyshev polynomials are also a typical set of orthogonal polynomials; we do not discuss them in detail here.

The formula using the roots of $T_{n + 1}(x)$ is called the Gauss-Chebyshev quadrature formula.


Last updated: 2023.11.21 17:00:13 CST
Created: 2022.11.10 01:04:43 CST