Does not the depth of love surge with an impossible longing to coincide, without the slightest difference, with the beloved? Does not that longing drive people to make the impossible possible from the opposite extreme, leading them down a tragic path of estrangement? Since lovers can never be completely alike, why not have them devote themselves to being utterly unlike each other, letting that estrangement act fully upon coquetry?

# Chapter 1: Fundamental Mathematical Background

Theorem: If the function $f$ is differentiable at $x_0$, then $f$ is continuous at $x_0$.

Notation: $C[a,b]$ stands for the set of continuous functions defined on $[a,b]$.

Theorem (Generalized Rolle's Theorem): Suppose $f\in C[a,b]$ is $n$ times differentiable on $(a,b)$. If $f(x)$ is zero at the $n+1$ distinct numbers $x_0,x_1,\dots,x_n$ in $[a,b]$, then a number $c$ in $(a,b)$ exists with $f^{(n)}(c)=0$.

Theorem (Taylor's Theorem): $f(x)=P_n(x)+R_n(x)$, where $P_n(x)=\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^{k}$ and $R_n(x)=\frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_0)^{n+1}$, with $\xi(x)$ between $x$ and $x_0$.


Definition: If $p^*$ is an approximation to $p$, the absolute error is $|p-p^*|$, and the relative error is $\frac{|p-p^*|}{|p|}$, provided that $p\neq 0$.

Definition: The number $p^*$ is said to approximate $p$ to $t$ significant digits if $t$ is the largest nonnegative integer for which $\frac{|p-p^*|}{|p|}<5\times 10^{-t}$.


Definition: An algorithm is said to be stable if small changes in the initial data produce correspondingly small changes in the final results. Some algorithms are stable only for certain choices of initial data; these are called conditionally stable.

Definition: Suppose that $E_0$ denotes an initial error and $E_n$ represents the magnitude of the error after $n$ subsequent operations.

  • If $E_n\approx nCE_0$, where $C$ is a constant independent of $n$, then the growth of error is said to be linear.

  • If $E_n\approx C^nE_0$ for some $C>1$, then the growth of error is said to be exponential.

Definition: Suppose $\{\beta_n\}_{n=1}^{\infty}$ is a sequence known to converge to zero, and $\{\alpha_n\}_{n=1}^{\infty}$ converges to a number $\alpha$.

  • If a positive constant $K$ exists with $|\alpha_n-\alpha|\leq K|\beta_n|$ for large $n$, then we say that $\{\alpha_n\}_{n=1}^\infty$ converges to $\alpha$ with rate of convergence $O(\beta_n)$, written $\alpha_n=\alpha+O(\beta_n)$.

Property (Operator errors): suppose $\alpha_n=\alpha+O(\varepsilon_\alpha)$ and $\beta_n=\beta+O(\varepsilon_\beta)$. Then:

  • $\alpha_n+\beta_n=\alpha+\beta+O(\varepsilon_\alpha+\varepsilon_\beta)$.
  • $\alpha_n\beta_n=\alpha\beta+\alpha O(\varepsilon_\beta)+\beta O(\varepsilon_\alpha)+O(\varepsilon_\alpha\varepsilon_\beta)$.

Example: Let $\alpha_n=\frac{n+1}{n^2}$ and $\beta_n=\frac{1}{n}$. Then $|\alpha_n-0|=\frac{n+1}{n^2}<\frac{n+n}{n^2}=2\cdot\frac{1}{n}$, so $\alpha_n=0+O(\frac{1}{n})$.

Definition: Suppose

$$\lim_{x\rightarrow 0}G(x)=0,\qquad \lim_{x\rightarrow 0}F(x)=L$$

If a positive constant $K$ exists with $|F(x)-L|\leq K|G(x)|$ for sufficiently small $x$, then we write $F(x)=L+O(G(x))$.

Example: $\cos(x)+\frac{1}{2}x^2=1+O(x^4)$.


Definition: A mathematical problem is said to be well-posed if a solution:

  • exists,
  • is unique,
  • depends continuously on the input data.

Otherwise, the problem is ill-posed.

# Chapter 2: Approximate Solutions of Equations in One Variable

Definition: A system of $m$ nonlinear equations in $n$ unknowns can alternatively be represented by defining a function $\mathbf{f}$, mapping $\mathbb{R}^n$ into $\mathbb{R}^m$, by

$$\mathbf{f}(\mathbf{x})=(f_1(\mathbf{x}),\dots,f_m(\mathbf{x}))^T$$

  • The functions $f_1,f_2,\dots,f_m$ are the coordinate functions of $\mathbf{f}$.

  • The function $\mathbf{f}$ is continuous at $\mathbf{x}_0\in D$ provided $\lim_{\mathbf{x}\rightarrow \mathbf{x}_0}\mathbf{f}(\mathbf{x})=\mathbf{f}(\mathbf{x}_0)$.

Theorem: Let $f$ be a function from $D\subset \mathbb{R}^n$ into $\mathbb{R}$ and $\mathbf{x}_0\in D$. If

$$\left|\frac{\partial f(\mathbf{x})}{\partial x_j}\right|\leq K,\quad j=1,2,\dots,n$$

whenever $|x_j-x_{0j}|\leq\delta$, then $f$ is continuous at $\mathbf{x}_0$.

Theorem: Suppose that $f\in C[a,b]$ and $f(a)f(b)<0$. The Bisection method generates a sequence $\{p_n\}_{n=1}^\infty$ approximating a zero $p$ of $f$ (i.e. $f(p)=0$) with

$$|p_n-p|\leq\frac{b-a}{2^n}$$

So $p_n=p+O(\frac{1}{2^n})$.
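The bisection iteration can be sketched in Python (a minimal illustration; the function name, tolerance, and test equation are my own choices, not from the notes):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: assumes f is continuous on [a, b] and f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = a + (b - a) / 2          # midpoint; |p - root| <= (b - a) / 2
        if f(p) == 0 or (b - a) / 2 < tol:
            return p
        if f(a) * f(p) < 0:          # the sign change is in [a, p]
            b = p
        else:                        # the sign change is in [p, b]
            a = p
    return a + (b - a) / 2

# sqrt(2) as the zero of f(x) = x^2 - 2 on [1, 2]
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each step halves the bracketing interval, which is exactly the $O(\frac{1}{2^n})$ rate stated in the theorem.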

# Fixed Point Method

Definition: A fixed point of a given function $g:\mathbb{R}\rightarrow\mathbb{R}$ is a value $x^*$ such that $x^*=g(x^*)$.


For a given equation $f(x)=0$, there may be many equivalent fixed point problems $x=g(x)$ with different choices for $g$.

Example: $f(x)=0\Leftrightarrow g(x)=x$; we can choose $g(x)=x-f(x)$ or $g(x)=x+3f(x)$.

To approximate the fixed point of a function $g(x)$, we choose an initial approximation $p_0$ and generate the sequence by

$$p_n=g(p_{n-1}),\quad n=1,2,3,\dots$$

If the sequence converges to $p$ and $g(x)$ is continuous, then we have

$$p=\lim_{n\rightarrow\infty}p_n=\lim_{n\rightarrow\infty}g(p_n)=g(\lim_{n\rightarrow \infty}p_n)=g(p)$$

This technique is called fixed point iteration (or functional iteration).


Theorem: If $g\in C[a,b]$ and $g(x)\in[a,b]$ for all $x\in[a,b]$, then $g$ has a fixed point in $[a,b]$. Moreover, if $g'(x)$ exists on $(a,b)$ and a positive constant $k<1$ exists with $|g'(x)|\leq k$ for all $x\in(a,b)$, then the fixed point is unique.

  • Existence follows from the Intermediate Value Theorem applied to $g(x)-x$; uniqueness follows by contradiction, since two fixed points $p\neq q$ would give $|p-q|=|g'(\xi)||p-q|\leq k|p-q|<|p-q|$.

Corollary: By the Mean Value Theorem, $|p_n-p|=|g(p_{n-1})-g(p)|=|g'(\xi)||p_{n-1}-p|$ for some $\xi\in(a,b)$, so $|p_n-p|\leq k^n|p_0-p|$. Since $k<1$, we have $p_n=p+O(k^n)$.

What's more, $|p_n-p|\leq k^n|p_0-p|\leq k^n\max(p_0-a,\,b-p_0)$, and for $m>n$,

$$|p_m-p_n|\leq |p_m-p_{m-1}|+|p_{m-1}-p_{m-2}|+\dots+|p_{n+1}-p_n|\leq \frac{k^n}{1-k}|p_1-p_0|$$

Letting $m\rightarrow\infty$ gives $|p-p_n|\leq \frac{k^n}{1-k}|p_1-p_0|$.
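A minimal sketch of functional iteration $p_n=g(p_{n-1})$ in Python (names and the stopping rule are my own; stopping on $|p_n-p_{n-1}|$ is a common practical choice):

```python
import math

def fixed_point(g, p0, tol=1e-12, max_iter=200):
    """Iterate p_n = g(p_{n-1}) until successive iterates agree within tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# g(x) = cos(x) maps [0, 1] into itself and |g'(x)| <= sin(1) < 1 there,
# so the theorem above guarantees a unique fixed point and convergence
p = fixed_point(math.cos, 1.0)
```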

# Newton’s Method

Suppose that $f\in C^2[a,b]$ and $x^*$ is a solution of $f(x)=0$. Let $\bar{x}\in[a,b]$ be an approximation to $x^*$ such that $f'(\bar{x})\neq 0$ and $|\bar{x}-x^*|$ is small. By the Taylor polynomial:

$$f(x)=f(\bar{x})+(x-\bar{x})f'(\bar{x})+\frac{(x-\bar{x})^2}{2}f''(\xi)$$

where $\xi$ lies between $x$ and $\bar{x}$. So

$$0=f(x^*)\approx f(\bar{x})+(x^*-\bar{x})f'(\bar{x})\implies x^*\approx\bar{x}-\frac{f(\bar{x})}{f'(\bar{x})}$$

Theorem: Let $f\in C^2[a,b]$. If $p\in[a,b]$ is such that $f(p)=0$ and $f'(p)\neq 0$, then there exists a $\delta>0$ such that for any $p_0\in[p-\delta,p+\delta]$, the sequence $p_n=p_{n-1}-\frac{f(p_{n-1})}{f'(p_{n-1})}$ converges to $p$.

  • The proof is based on the fixed point iteration with $g(x)=x-\frac{f(x)}{f'(x)}$.
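The iteration $p_n=p_{n-1}-f(p_{n-1})/f'(p_{n-1})$ as a Python sketch (names and the sample polynomial are my own):

```python
def newton(f, df, p0, tol=1e-12, max_iter=50):
    """Newton's method; df is the derivative f'."""
    p = p0
    for _ in range(max_iter):
        dfp = df(p)
        if dfp == 0:
            raise ZeroDivisionError("f' vanished during the iteration")
        p_next = p - f(p) / dfp      # one Newton step
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

# root of f(x) = x^3 - 2x - 5, starting close enough to it (p0 = 2)
root = newton(lambda x: x ** 3 - 2 * x - 5, lambda x: 3 * x ** 2 - 2, 2.0)
```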

# Secant Method

Similar to Newton's method, but with the derivative replaced by a difference quotient:

$$p_{n}=p_{n-1}-\frac{f(p_{n-1})}{\dfrac{f(p_{n-1})-f(p_{n-2})}{p_{n-1}-p_{n-2}}}$$
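A Python sketch of the secant iteration (my own names; note it needs two starting values instead of a derivative):

```python
def secant(f, p0, p1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with f' replaced by a difference quotient."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:                 # flat secant line: cannot continue
            return p1
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)
    return p1

root = secant(lambda x: x * x - 2, 1.0, 2.0)   # approximates sqrt(2)
```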

# Method of False Position

An improved Secant Method. If $f(p_0)f(p_1)<0$, we can choose

$$p_n=\begin{cases} p_{n-1}-\dfrac{f(p_{n-1})}{\dfrac{f(p_{n-1})-f(p_{n-2})}{p_{n-1}-p_{n-2}}} & \text{if } f(p_{n-1})f(p_{n-2})<0\\[3ex] p_{n-1}-\dfrac{f(p_{n-1})}{\dfrac{f(p_{n-1})-f(p_{n-3})}{p_{n-1}-p_{n-3}}} & \text{if } f(p_{n-1})f(p_{n-3})<0 \end{cases}$$

  • Line 1 passes through $(p_{n-1},f(p_{n-1}))$ and $(p_{n-2},f(p_{n-2}))$.

  • Line 2 passes through $(p_{n-1},f(p_{n-1}))$ and $(p_{n-3},f(p_{n-3}))$.

We choose which line to use according to where the zero is bracketed: in $[p_{n-2},p_{n-1}]$ or in $[p_{n-1},p_{n-3}]$.

# Error Analysis for Iteration

Definition: If positive constants $\lambda$ and $\alpha$ exist with

$$\lim_{n\rightarrow\infty}\frac{|p_{n+1}-p|}{|p_n-p|^{\alpha}}=\lambda$$

then $\{p_n\}_{n=0}^\infty$ converges to $p$ of order $\alpha$, with asymptotic error constant $\lambda$.

The higher the order, the more rapidly the sequence converges.

# Fixed point method

Theorem: If $|g'(x)|\leq k<1$ for all $x\in(a,b)$, then the fixed point iteration converges linearly to the unique fixed point.

  • Proof:

$$\lim_{n\rightarrow\infty}\frac{|p_{n+1}-p|}{|p_n-p|^1}=\lim_{n\rightarrow\infty}\frac{|g(p_n)-g(p)|}{|p_n-p|}=\lim_{n\rightarrow\infty}|g'(\xi_n)|=|g'(p)|=\text{constant},\qquad \xi_n\in(p_n,p)$$

So $\alpha=1$. If $g'(p)=0$, the order might be higher.

What's more, if $g'(p)=0$ and $|g''(x)|\leq M$ near $p$, then:

$$\lim_{n\rightarrow\infty}\frac{|p_{n+1}-p|}{|p_n-p|^2}=\lim_{n\rightarrow\infty}\frac{|g(p_n)-g(p)|}{|p_n-p|^2}=\lim_{n\rightarrow\infty}\frac{|g(p)+g'(p)(p_{n}-p)+\frac{g''(\xi_n)}{2}(p_n-p)^2-g(p)|}{|p_n-p|^2}=\lim_{n\rightarrow\infty}\frac{|g''(\xi_n)|}{2}\leq\frac{M}{2}$$

So $\alpha=2$ when $g''$ is bounded.


If $|g'(p)|>1$, then the iteration diverges for any starting point other than $p$.

# From Root Finding to Fixed Point

For the root finding problem $f(x)=0$, we let $g(x)$ take the form

$$g(x)=x-\phi(x)f(x),\quad \phi(x)\neq 0$$

so that $g(x)=x\Rightarrow f(x)=0$.

Since $g'(x)=1-\phi'(x)f(x)-\phi(x)f'(x)$, setting $x=p$ gives $g'(p)=1-\phi(p)f'(p)$, and $g'(p)=0$ if and only if $\phi(p)=\frac{1}{f'(p)}$.

Definition: A solution $p$ of $f(x)=0$ is a zero of multiplicity $m$ of $f$ if for $x\neq p$ we can write

$$f(x)=(x-p)^mq(x),\qquad \lim_{x\rightarrow p}q(x)\neq 0$$

For Newton's method, $g(x)=x-f(x)/f'(x)$, and we have

$$g(x)=x-\frac{(x-p)^mq(x)}{m(x-p)^{m-1}q(x)+(x-p)^mq'(x)}=x-(x-p)\frac{q(x)}{mq(x)+(x-p)q'(x)}$$

$$\lim_{x\rightarrow p}g'(x)=1-\frac{1}{m}\neq 0\quad(m>1)$$

Note: I have some doubts here, since everything is in the limit sense; at $x=p$, $g(x)$ is not even defined.

But this is numerical computation, and in practice $x$ always differs slightly from $p$.

# Aitken’s Δ2 Method

Suppose $\{p_n\}_{n=0}^\infty$ is a linearly convergent sequence ($\alpha=1$) with limit $p$:

$$\lim_{n\rightarrow\infty}\frac{p_{n+1}-p}{p_n-p}=\lambda\neq 0$$

So when $n$ is large enough:

$$\frac{p_{n+1}-p}{p_n-p}\approx\frac{p_{n+2}-p}{p_{n+1}-p}\implies p\approx p_n-\frac{(p_{n+1}-p_n)^2}{p_{n+2}-2p_{n+1}+p_n}$$

Aitken's $\Delta^2$ method defines a new sequence:

$$q_n=p_n-\frac{(p_{n+1}-p_n)^2}{p_{n+2}-2p_{n+1}+p_n}$$

$\{q_n\}_{n=0}^\infty$ converges to $p$ more rapidly than $\{p_n\}_{n=0}^\infty$.

Definition: The forward difference $\Delta p_n$ of a sequence $\{p_n\}_{n=0}^\infty$ is defined by:

$$\Delta p_n=p_{n+1}-p_n$$

So $q_n=p_n-\frac{(\Delta p_n)^2}{\Delta^2p_n}$.

Theorem: $\lim_{n\rightarrow\infty}\frac{q_n-p}{p_n-p}=0$, so $\{q_n\}$ converges faster than $\{p_n\}$.
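A small sketch of the $\Delta^2$ formula applied to the iterates of $g(x)=\cos(x)$, a linearly convergent fixed-point sequence (names are my own):

```python
import math

def aitken(seq):
    """q_n = p_n - (Δp_n)^2 / Δ²p_n for each consecutive triple of the sequence."""
    out = []
    for p0, p1, p2 in zip(seq, seq[1:], seq[2:]):
        out.append(p0 - (p1 - p0) ** 2 / (p2 - 2 * p1 + p0))
    return out

# linearly convergent fixed-point iterates p_{n+1} = cos(p_n)
p = [1.0]
for _ in range(10):
    p.append(math.cos(p[-1]))
q = aitken(p)    # accelerated sequence, closer to the fixed point
```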

# Steffensen’s Method (Aitken’s Method for fixed point iteration)

Repeat:

$$\begin{cases}p_0\\p_1=g(p_0)\\p_2=g(p_1)\\p_3=p_0-\frac{(\Delta p_0)^2}{\Delta^2p_0}\end{cases}\qquad \begin{cases}p_3\\p_4=g(p_3)\\p_5=g(p_4)\\p_6=p_3-\frac{(\Delta p_3)^2}{\Delta^2p_3}\end{cases}\qquad \begin{cases}p_6\\p_7=g(p_6)\\p_8=g(p_7)\\p_9=p_6-\frac{(\Delta p_6)^2}{\Delta^2p_6}\end{cases}\qquad\dots
$$

Theorem: If $g'(p)\neq 1$ and there exists a $\delta>0$ such that $g\in C^3[p-\delta,p+\delta]$, then Steffensen's method gives quadratic ($\alpha=2$) convergence for any $p_0\in[p-\delta,p+\delta]$.
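The grouping above (two fixed-point steps, then one $\Delta^2$ step) can be sketched as follows (my own names and tolerances):

```python
import math

def steffensen(g, p0, tol=1e-12, max_iter=50):
    """Each pass does p1 = g(p), p2 = g(p1), then one Aitken Δ² update."""
    p = p0
    for _ in range(max_iter):
        p1 = g(p)
        p2 = g(p1)
        denom = p2 - 2 * p1 + p      # Δ²p
        if denom == 0:               # iterates already (numerically) settled
            return p2
        p_next = p - (p1 - p) ** 2 / denom
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

p = steffensen(math.cos, 1.0)        # fixed point of cos(x)
```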

# Zeros of Polynomials and Muller’s Method

Find a root of a given polynomial:

$$f(x)=x^n+a_1x^{n-1}+\dots+a_{n-1}x+a_n$$

Initially we have three points $(x_0,f(x_0)),(x_1,f(x_1)),(x_2,f(x_2))$, and we draw a parabola through them.

The parabola meets the x-axis at two points; we choose the one closer to $x_2$ as $x_3$, and repeat the iteration, drawing the parabola through $(x_1,f(x_1)),(x_2,f(x_2)),(x_3,f(x_3))$.

When $|x_{n+1}-x_n|<\varepsilon$, the iteration terminates and we have found the root.
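One way to implement Muller's method (my own variable names; following the usual formulation, the new iterate is $x_3=x_2-2c/(b\pm\sqrt{b^2-4ac})$ with the sign chosen to maximize the denominator's magnitude, and `cmath` lets the iteration follow complex roots):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=100):
    """Fit P(x) = a(x-x2)^2 + b(x-x2) + c through the last three iterates
    and move to the root of P closest to x2."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)
        b = d2 + h2 * a
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # larger-magnitude denominator -> root of the parabola closest to x2
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# real root of x^3 + x - 1 (approximately 0.6823)
root = muller(lambda x: x ** 3 + x - 1, 0.0, 0.5, 1.0)
```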

# Chapter 3: Interpolation and Polynomial Approximation

Theorem (Weierstrass Approximation Theorem): Suppose $f$ is defined and continuous on $[a,b]$. For each $\varepsilon>0$, there exists a polynomial $P(x)$ defined on $[a,b]$ with the property that

$$\forall x\in[a,b],\;|f(x)-P(x)|<\varepsilon$$

# The n-th Lagrange Interpolating Polynomial

$$L_{n,k}(x)=\prod_{i=0,i\neq k}^n\frac{x-x_i}{x_k-x_i},\qquad L_{n,k}(x_j)=\begin{cases}0&j\neq k\\1&j=k\end{cases},\qquad P(x)=\sum_{k=0}^nf(x_k)L_{n,k}(x)$$

Theorem: Suppose $x_0,x_1,\dots,x_n$ are distinct numbers in the interval $[a,b]$ and $f\in C^{n+1}[a,b]$. Then for each $x\in[a,b]$, a number $\xi(x)\in(a,b)$ exists with

$$f(x)=P_n(x)+\frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_0)(x-x_1)\dots(x-x_n)$$

where $P_n(x)$ is the Lagrange interpolating polynomial.

The error is related to $h=|b-a|$. If $|f^{(n+1)}|\leq M_{n+1}$, then:

$$|f(x)-P_1(x)|=\frac{|f''(\xi)|}{2}|(x-x_0)(x-x_1)|\leq\frac{M_2h^2}{8},\qquad |f(x)-P_n(x)|\leq \frac{M_{n+1}}{n!}h^n$$

Runge Phenomenon: for equally spaced nodes, as $n$ grows the difference between the function and the interpolating polynomial can become larger rather than smaller.
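Direct evaluation of $P(x)=\sum_k f(x_k)L_{n,k}(x)$ can be sketched as follows (my own names; the check relies on uniqueness: a degree-2 interpolant of $x^2$ reproduces it exactly):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        L = 1.0                      # basis polynomial L_{n,k}(x)
        for i, xi in enumerate(xs):
            if i != k:
                L *= (x - xi) / (xk - xi)
        total += yk * L
    return total

xs = [0.0, 1.0, 2.0]
ys = [x * x for x in xs]             # samples of f(x) = x^2
value = lagrange_eval(xs, ys, 1.5)   # exact: 1.5^2 = 2.25
```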

# Data Approximation and Neville’s Method

Theorem: Consider $n+1$ points $x_0,x_1,\dots,x_n$, and the Lagrange polynomials on the node sets omitting $x_j$ and $x_i$ respectively:

$$P_1(x)=\sum_{k=0,k\neq j}^nL_{n,k}(x)f(x_k),\qquad P_2(x)=\sum_{k=0,k\neq i}^nL_{n,k}(x)f(x_k)$$

Then $P(x)=\dfrac{(x-x_j)P_1(x)-(x-x_i)P_2(x)}{x_i-x_j}$.

So the Lagrange polynomial can be generated recursively:

$$(x_1,x_2,x_3,\dots,x_{n-1},x_n)$$
$$\Rightarrow(x_1,x_2,\dots,x_{n-1})+(x_2,x_3,\dots,x_n)$$
$$\Rightarrow (x_1,x_2,\dots,x_{n-2})+2(x_2,x_3,\dots,x_{n-1})+(x_3,x_4,\dots,x_n)$$
$$\Rightarrow \dots$$

# Divided Difference

Definition: The zeroth divided difference of the function $f$ with respect to $x_i$, denoted $f[x_i]$, is simply the value of $f$ at $x_i$:

$$f[x_i]=f(x_i)$$

while the first divided difference of $f$ is defined as

$$f[x_i,x_{i+1}]=\frac{f[x_{i+1}]-f[x_i]}{x_{i+1}-x_i}$$

and the second divided difference is

$$f[x_i,x_{i+1},x_{i+2}]=\frac{f[x_{i+1},x_{i+2}]-f[x_{i},x_{i+1}]}{x_{i+2}-x_i}$$

Theorem: $P_n(x)$ can be rewritten as

$$P_n(x)=f[x_0]+f[x_0,x_1](x-x_0)+f[x_0,x_1,x_2](x-x_0)(x-x_1)+\dots+f[x_0,\dots,x_n](x-x_0)\dots(x-x_{n-1})$$

which is known as Newton's interpolatory divided-difference formula.
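The divided-difference coefficients and the Newton form can be computed in $O(n^2)$; a minimal sketch (names mine):

```python
def newton_coeffs(xs, ys):
    """In-place table: returns [f[x0], f[x0,x1], ..., f[x0,...,xn]]."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Horner-style evaluation of the Newton form of P_n(x)."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [1.0, 2.0, 4.0]
ys = [1.0, 4.0, 16.0]                # samples of f(x) = x^2
c = newton_coeffs(xs, ys)            # [1.0, 3.0, 1.0]
value = newton_eval(xs, c, 3.0)      # P(3) = 9.0, since P reproduces x^2
```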

Theorem: a number $\xi$ exists in $(a,b)$ with

$$f[x_0,x_1,\dots,x_n]=\frac{f^{(n)}(\xi)}{n!}$$


Definition: the forward difference is defined by:

$$\Delta f(x_i)=f(x_{i+1})-f(x_i)$$

If $x_{i+1}-x_i\equiv h$ for all $i$ and $x=x_0+sh$, then

$$P_n(x)=f[x_0]+f[x_0,x_1]sh+f[x_0,x_1,x_2]s(s-1)h^2+\dots=\sum_{k=0}^nf[x_0,x_1,\dots,x_k]C_s^kk!h^k$$

Since $f[x_0,x_1,\dots,x_k]=\frac{1}{k!h^k}\Delta^kf(x_0)$, this gives

$$P_n(x)=\sum_{k=0}^nC_s^k\Delta^kf(x_0)$$

This is known as the Newton Forward Difference Formula.

Definition: the backward difference is defined by:

$$\nabla f(x_i)=f(x_i)-f(x_{i-1})$$

So we have

$$f[x_n,x_{n-1}]=\frac{1}{h}\nabla f(x_n),\qquad f[x_n,x_{n-1},x_{n-2},\dots,x_{n-k}]=\frac{1}{k!h^k}\nabla^kf(x_n)$$

$$P_n(x)=\sum_{k=0}^n(-1)^kC_{-s}^k\nabla^k f(x_n),\qquad\text{where } C_{-s}^k=\frac{-s(-s-1)\dots(-s-k+1)}{k!}$$

This is known as the Newton Backward Difference Formula.

# Hermite Interpolation

Problem: Given $n+1$ distinct numbers $x_0,x_1,\dots,x_n$ and non-negative integers $k_0,k_1,\dots,k_n$, construct the osculating polynomial approximating a function $f\in C^m[a,b]$, where $m=\max(k_0,k_1,\dots,k_n)$.

The osculating polynomial $H(x)$ requires:

$$f^{(j)}(x_i)=H^{(j)}(x_i),\qquad j=0,1,\dots,k_i,\quad i=0,1,\dots,n$$

The degree of this osculating polynomial is at most $K=\sum_{i=0}^nk_i+n$, since the number of conditions to be satisfied is $K+1$.


If $k_0=k_1=\dots=k_n=1$, then the Hermite polynomial is:

$$H_{2n+1}(x)=\sum_{j=0}^n f(x_j)H_{n,j}(x)+\sum_{j=0}^nf'(x_j)\hat{H}_{n,j}(x)$$

where

$$H_{n,j}(x)=[1-2(x-x_j)L'_{n,j}(x_j)]L^2_{n,j}(x),\qquad \hat{H}_{n,j}(x)=(x-x_j)L^2_{n,j}(x)$$

The error is

$$f(x)=H_{2n+1}(x)+\frac{\prod_{k=0}^n(x-x_k)^2}{(2n+2)!}f^{(2n+2)}(\xi)$$

# Piecewise Linear Interpolation

Instead of constructing one high-degree polynomial to approximate the function, we can construct several segments.

First construct the basis functions:

$$l_0(x)=\frac{x-x_1}{x_0-x_1},\ x_0\leq x\leq x_1;\qquad l_i(x)=\begin{cases}\dfrac{x-x_{i-1}}{x_i-x_{i-1}}&x_{i-1}\leq x\leq x_i\\[1.5ex] \dfrac{x-x_{i+1}}{x_i-x_{i+1}}&x_{i}\leq x\leq x_{i+1}\end{cases};\qquad l_n(x)=\frac{x-x_{n-1}}{x_n-x_{n-1}},\ x_{n-1}\leq x\leq x_n$$

Then the piecewise linear interpolant is:

$$P(x)=\sum_{i=0}^nl_i(x)f(x_i)$$

The error bound is:

$$|f(x)-P(x)|\leq\frac{h^2}{8}M$$

where $h=\max_i(x_{i+1}-x_i)$ and $M=\max|f''(x)|$.
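Evaluating the piecewise linear interpolant only needs the segment bracketing $x$; a sketch (names mine):

```python
from bisect import bisect_right

def piecewise_linear(xs, ys, x):
    """Linear interpolation between the two nodes bracketing x (xs ascending)."""
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = bisect_right(xs, x) - 1      # index of the segment [xs[i], xs[i+1]]
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1 - t) * ys[i] + t * ys[i + 1]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]                # samples of f(x) = x^2
v = piecewise_linear(xs, ys, 1.5)        # (1 + 4) / 2 = 2.5 on segment [1, 2]
```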


When considering the osculating polynomial with $k_0=k_1=\dots=k_n=1$, we can construct:

$$P(x)=\sum_{i=0}^n H_i(x)f(x_i)+\hat{H}_i(x)f'(x_i)$$

where

$$H_i(x)=\begin{cases}\left(1+2\dfrac{x-x_i}{x_{i-1}-x_i}\right)\left(\dfrac{x-x_{i-1}}{x_i-x_{i-1}}\right)^2&x_{i-1}\leq x\leq x_i\\[1.5ex] \left(1+2\dfrac{x-x_i}{x_{i+1}-x_i}\right)\left(\dfrac{x-x_{i+1}}{x_i-x_{i+1}}\right)^2&x_{i}\leq x\leq x_{i+1}\end{cases}$$

$$\hat{H}_i(x)=\begin{cases}(x-x_i)\left(\dfrac{x-x_{i-1}}{x_i-x_{i-1}}\right)^2&x_{i-1}\leq x\leq x_i\\[1.5ex] (x-x_i)\left(\dfrac{x-x_{i+1}}{x_i-x_{i+1}}\right)^2&x_i\leq x\leq x_{i+1}\end{cases}$$

# Chapter 4: Numerical Differentiation and Integration

Formulas:

$$f'(x_0)\approx \frac{f(x_0+h)-f(x_0)}{h},\qquad f'(x_0)\approx \frac{f(x_0)-f(x_0-h)}{h},\qquad f'(x_0)\approx \frac{f(x_0+h)-f(x_0-h)}{2h}$$
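The three quotients differ in accuracy: the one-sided ones are $O(h)$ while the centered one is $O(h^2)$. A quick numerical check (test function and step size are my own choices):

```python
import math

def forward_diff(f, x0, h):
    return (f(x0 + h) - f(x0)) / h            # O(h) truncation error

def central_diff(f, x0, h):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)  # O(h^2) truncation error

h = 1e-4
fwd = forward_diff(math.sin, 1.0, h)
cen = central_diff(math.sin, 1.0, h)
exact = math.cos(1.0)                         # (sin x)' = cos x
```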

# (n+1)-Point Formula

Since

$$f(x)=\sum_{k=0}^nf(x_k)L_{n,k}(x)+\frac{(x-x_0)\dots(x-x_n)}{(n+1)!}f^{(n+1)}(\xi)$$

we have

$$f'(x_j)=\sum_{k=0}^nf(x_k)L'_{n,k}(x_j)+\frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{k=0,k\neq j}^n(x_j-x_k)$$

Note that this formula for $f'$ is only useful at the nodes $x=x_0,x_1,\dots,x_n$, where the error term simplifies.

When $n=2$ and $x_1=x_0+h,\ x_2=x_0+2h$:

$$f'(x_0)=\frac{1}{2h}[-3f(x_0)+4f(x_0+h)-f(x_0+2h)]+\frac{h^2}{3}f^{(3)}(\xi)$$


By the Taylor polynomial, we have

$$f(x_0+h)=f(x_0)+f'(x_0)h+\frac{1}{2}f''(x_0)h^2+\frac{1}{6}f^{(3)}(x_0)h^3+\frac{1}{24}f^{(4)}(\xi_1)h^4$$

$$f(x_0-h)=f(x_0)-f'(x_0)h+\frac{1}{2}f''(x_0)h^2-\frac{1}{6}f^{(3)}(x_0)h^3+\frac{1}{24}f^{(4)}(\xi_{-1})h^4$$

Therefore:

$$f(x_0+h)+f(x_0-h)=2f(x_0)+f''(x_0)h^2+\frac{h^4}{24}[f^{(4)}(\xi_1)+f^{(4)}(\xi_{-1})]$$

Suppose $f^{(4)}$ is continuous; then there exists $\xi\in[x_0-h,x_0+h]$ satisfying

$$f^{(4)}(\xi)=\frac{1}{2}[f^{(4)}(\xi_1)+f^{(4)}(\xi_{-1})]$$

So

$$f''(x_0)=\frac{1}{h^2}[f(x_0-h)-2f(x_0)+f(x_0+h)]-\frac{h^2}{12}f^{(4)}(\xi)$$
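A quick numerical check of the three-point second-derivative formula (test function and step are my own choices):

```python
import math

def second_diff(f, x0, h):
    """Central three-point approximation to f''(x0); O(h^2) truncation error."""
    return (f(x0 - h) - 2 * f(x0) + f(x0 + h)) / (h * h)

approx = second_diff(math.exp, 1.0, 1e-4)
exact = math.exp(1.0)                # (e^x)'' = e^x
```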

# Richardson’s Extrapolation

Suppose that for each number $h\neq 0$ we have a formula $N(h)$ that approximates an unknown value $M$, and that the truncation error involved has the form:

$$M-N(h)=K_1h+K_2h^2+K_3h^3+\dots$$

Then we have

$$M=N\!\left(\frac{h}{2}\right)+K_1\frac{h}{2}+K_2\frac{h^2}{4}+\dots$$

Therefore

$$M=\left[N\!\left(\frac{h}{2}\right)+N\!\left(\frac{h}{2}\right)-N(h)\right]+K_2\left(\frac{h^2}{2}-h^2\right)+\dots=2N\!\left(\frac{h}{2}\right)-N(h)+O(h^2)$$

Let $N_2(h)=2N(\frac{h}{2})-N(h)$; then

$$M=N_2(h)-\frac{K_2}{2}h^2-\frac{3K_3}{4}h^3-\dots\implies 3M=4N_2\!\left(\frac{h}{2}\right)-N_2(h)+O(h^3)$$


In general, if $M$ can be written in the form

$$M=N_1(h)+\sum_{j=1}^{m-1}K_jh^j+O(h^m)$$

then

$$N_j(h)=N_{j-1}\!\left(\frac{h}{2}\right)+\frac{N_{j-1}(h/2)-N_{j-1}(h)}{2^{j-1}-1},\qquad M=N_j(h)+O(h^j)$$
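The recurrence can be organized as a triangular table; a sketch (my own names), applied to the $O(h)$ forward-difference formula so its error expansion $K_1h+K_2h^2+\dots$ matches this setting:

```python
import math

def richardson(N, h, levels):
    """Extrapolation table for a formula with error K1*h + K2*h^2 + ...
    Column j applies the divisor 2^j - 1, eliminating the h^j error term."""
    table = [[N(h / 2 ** i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            prev = table[i][j - 1]
            table[i].append(prev + (prev - table[i - 1][j - 1]) / (2 ** j - 1))
    return table[-1][-1]

# forward difference is O(h): approximate (sin)'(1) = cos(1)
N = lambda h: (math.sin(1.0 + h) - math.sin(1.0)) / h
best = richardson(N, 0.1, 4)
```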