# Chapter 1: Fundamental Mathematical Preliminaries

Theorem: If the function $f$ is differentiable at $x_0$, then $f$ is continuous at $x_0$.

Notation: $C[a,b]$ stands for the set of continuous functions defined on $[a,b]$.

Theorem (Generalized Rolle’s Theorem): Suppose $f\in C[a,b]$ is $n$ times differentiable on $(a,b)$. If $f(x)$ is zero at the $n+1$ distinct numbers $x_0,x_1,...,x_n$ in $[a,b]$, then a number $c$ in $(a,b)$ exists with $f^{(n)}(c)=0$.

Theorem (Taylor’s Theorem): $f(x)=P_n(x)+R_n(x)$, where $P_n(x)=\sum_{k=0}^n\frac{f^{(k)}(x_0)}{k!}(x-x_0)^{k}$ and $R_n(x)=\frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_0)^{n+1}$, with $\xi(x)$ between $x$ and $x_0$.

Definition: If $p^*$ is an approximation to $p$, the absolute error is $|p-p^*|$, and the relative error is $\frac{|p-p^*|}{|p|}$, provided that $p\neq 0$.

Definition: The number $p^*$ is said to approximate $p$ to $t$ significant digits if $t$ is the largest nonnegative integer for which $\frac{|p-p^*|}{|p|}<5\times 10^{-t}$.

Definition: An algorithm is said to be stable if small changes in the initial data produce correspondingly small changes in the final results. Some algorithms are stable only for certain choices of initial data; such algorithms are called conditionally stable.

Definition: Suppose that $E_0$ denotes an initial error and $E_n$ represents the magnitude of an error after $n$ subsequent operations.

• If $E_n\approx nCE_0$, where $C$ is a constant independent of $n$, then the growth of error is said to be linear.

• If $E_n\approx C^nE_0$, for some $C>1$, then the growth of error is said to be exponential.

Definition: Suppose $\{\beta_n\}^{\infty}_{n=1}$ is a sequence known to converge to zero, and $\{\alpha_n\}^{\infty}_{n=1}$ converges to a number $\alpha$.

• If a positive constant $K$ exists with $|\alpha_n-\alpha|\leq K|\beta_n|$ for large $n$, then we say that $\{\alpha_n\}^\infty_{n=1}$ converges to $\alpha$ with rate of convergence $O(\beta_n)$, written $\alpha_n=\alpha+O(\beta_n)$.

Property (propagation of errors): suppose $\alpha_n=\alpha+O(\varepsilon_\alpha)$ and $\beta_n=\beta+O(\varepsilon_\beta)$. Then:

• $\alpha_n+\beta_n=\alpha+\beta+O(\varepsilon_\alpha+\varepsilon_\beta)$.
• $\alpha_n\beta_n=\alpha\beta+\alpha O(\varepsilon_\beta)+\beta O(\varepsilon_\alpha)+O(\varepsilon_\alpha\varepsilon_\beta)$.

Example: $\alpha_n=\frac{n+1}{n^2},\beta_n=\frac{1}{n}$. $|\alpha_n-0|=\frac{n+1}{n^2}\leq\frac{n+n}{n^2}=2\cdot\frac{1}{n}$. So $\alpha_n=0+O(\frac{1}{n})$.

Definition: Suppose

$\lim_{x\rightarrow 0}G(x)=0,\lim_{x\rightarrow 0}F(x)=L$

If a positive constant $K$ exists with $|F(x)-L|\leq K|G(x)|$ for sufficiently small $x$, then we write $F(x)=L+O(G(x))$.

Example: $\cos(x)+\frac{1}{2}x^2=1+O(x^4)$.

Definition: A mathematical problem is said to be well-posed if a solution:

• exists,
• is unique,
• depends continuously on the input data.

Otherwise, the problem is ill-posed.

# Chapter 2: Approximate Solutions of Equations in One Variable

Definition: A system of $m$ nonlinear equations in $n$ unknowns can alternatively be represented by defining a function $\textbf{f}$, mapping $\mathbb{R}^n$ into $\mathbb{R}^m$, by

$\textbf{f}(\textbf{x})=(f_1(\textbf{x}),...,f_m(\textbf{x}))^T$

• The functions $f_1,f_2,...,f_m$ are the coordinate functions of $\textbf{f}$.

• The function $\textbf{f}$ is continuous at $\textbf{x}_0\in D$ provided $\lim_{\textbf{x}\rightarrow \textbf{x}_0}\textbf{f}(\textbf{x})=\textbf{f}(\textbf{x}_0)$.

Theorem: Let $f$ be a function from $D\subset \mathbb{R}^n$ into $\mathbb{R}$ and $\textbf{x}_0\in D$. If constants $\delta>0$ and $K>0$ exist with

$\left|\frac{\partial f(\textbf{x})}{\partial x_j}\right|\leq K,\quad j=1,2,...,n$

whenever $|x_j-x_{0j}|\leq\delta$ for each $j$, then $f$ is continuous at $\textbf{x}_0$.

Theorem: Suppose that $f\in C[a,b]$ and $f(a)f(b)<0$. The Bisection method generates a sequence $\{p_n\}^\infty_{n=1}$ approximating a zero $p$ of $f$ ($f(p)=0$) with

$|p_n-p|\leq\frac{b-a}{2^n}$

So $p_n=p+O(\frac{1}{2^n})$.
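Under the sign-change hypothesis of the theorem, the method can be sketched in Python as follows (the function name `bisect`, the tolerance, and the iteration cap are illustrative choices, not from the notes):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping a sign change inside."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = (a + b) / 2
        if f(p) == 0 or (b - a) / 2 < tol:
            return p
        if f(a) * f(p) < 0:
            b = p          # the root lies in [a, p]
        else:
            a = p          # the root lies in [p, b]
    return (a + b) / 2

# Root of x^2 - 2 on [1, 2] is sqrt(2).
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```

Each step halves the interval, matching the $|p_n-p|\leq(b-a)/2^n$ bound above.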

## Fixed-Point Method

Definition: A fixed point of a given function $g:\mathbb{R}\rightarrow\mathbb{R}$ is a value $x^*$ such that $x^*=g(x^*)$.

For a given equation $f(x)=0$, there may be many equivalent fixed-point problems $x=g(x)$ with different choices for $g$.

Example: for $f(x)=0$, we can choose $g(x)=x-f(x)$ or $g(x)=x+3f(x)$; in either case $g(x)=x\Leftrightarrow f(x)=0$.

To approximate the fixed point of a function $g(x)$, we choose an initial approximation $p_0$, and generate the sequence by:

$p_n=g(p_{n-1}),n=1,2,3,...$

If the sequence converges to $p$ and $g(x)$ is continuous, then we have

$p=\lim_{n\rightarrow\infty}p_n=\lim_{n\rightarrow\infty}g(p_n)=g(\lim_{n\rightarrow \infty}p_n)=g(p)$

This technique is called fixed point iteration (or functional iteration).

Theorem: If $g\in C[a,b]$ and $g(x)\in[a,b]$ for all $x\in[a,b]$, then $g$ has a fixed point in $[a,b]$. Moreover, if $g'(x)$ exists on $(a,b)$ and a positive constant $k<1$ exists with $|g'(x)|\leq k$ for all $x\in(a,b)$, then the fixed point is unique.

• Existence follows by applying the Intermediate Value Theorem to $h(x)=g(x)-x$; uniqueness is proved by contradiction using the Mean Value Theorem.

Corollary: By the Mean Value Theorem, $|p_n-p|=|g(p_{n-1})-g(p)|=|g'(\xi)||p_{n-1}-p|$ where $\xi\in(a,b)$. So $|p_n-p|\leq k^n|p_0-p|$. If $k<1$, the fixed point is unique and we have $p_n=p+O(k^n)$.

Moreover, $|p_n-p|\leq k^n|p_0-p|\leq k^n\max(p_0-a,\,b-p_0)$, and

$|p-p_n|\leq \sum_{i=n}^{\infty}|p_{i+1}-p_i|\leq(k^n+k^{n+1}+...)|p_1-p_0|\leq \frac{k^n}{1-k}|p_1-p_0|$
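The iteration $p_n=g(p_{n-1})$ can be sketched directly (names and stopping criterion are illustrative; the example uses $g(x)=\cos x$, which satisfies $|g'|<1$ near its fixed point):

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=200):
    """Fixed point iteration: p_n = g(p_{n-1}) until iterates stabilize."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("did not converge")

# cos is a contraction near its fixed point p = cos(p) ~ 0.739.
p = fixed_point(math.cos, 1.0)
```

Convergence here is linear with rate roughly $|g'(p)|=|\sin p|\approx 0.67$, consistent with the $k^n$ bound above.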

## Newton’s Method

Suppose that $f\in C^2[a,b]$ and $x^*$ is a solution of $f(x)=0$. Let $\bar{x}\in[a,b]$ be an approximation to $x^*$ such that $f'(\bar{x})\neq 0$ and $|\bar{x}-x^*|$ is very small. By Taylor’s theorem:

$f(x)=f(\bar{x})+(x-\bar{x})f'(\bar{x})+\frac{(x-\bar{x})^2}{2}f''(\xi)$

where $\xi$ lies between $x$ and $\bar{x}$. So

$0=f(x^*)\approx f(\bar{x})+(x^*-\bar{x})f'(\bar{x})\\ x^*\approx\bar{x}-\frac{f(\bar{x})}{f'(\bar{x})}$

Theorem: Let $f\in C^2[a,b]$, if $p\in[a,b]$ is such that $f(p)=0$ and $f'(p)\neq 0$, then there exists a $\delta>0$ such that for any $p_0\in[p-\delta,p+\delta]$, $p_n=p_{n-1}-\frac{f(p_{n-1})}{f'(p_{n-1})}$ converges to $p$.

• The proof is based on the fixed point iteration $g(x)=x-\frac{f(x)}{f'(x)}$.
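A minimal sketch of the iteration in the theorem (function names and tolerances are illustrative):

```python
def newton(f, df, p0, tol=1e-12, max_iter=50):
    """Newton's method: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1})."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / df(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("did not converge")

# sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Starting close enough to the root (as the theorem requires), the number of correct digits roughly doubles per step.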

## Secant Method

Similar to Newton’s method:

$p_{n}=p_{n-1}-\frac{f(p_{n-1})}{\frac{f(p_{n-1})-f(p_{n-2})}{p_{n-1}-p_{n-2}}}$
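The same update, with the derivative replaced by the difference quotient, can be sketched as follows (names and tolerances are illustrative):

```python
def secant(f, p0, p1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with f' replaced by a
    difference quotient through the two latest iterates."""
    for _ in range(max_iter):
        slope = (f(p1) - f(p0)) / (p1 - p0)
        p2 = p1 - f(p1) / slope
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    raise RuntimeError("did not converge")

root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

Unlike Newton's method, no derivative is needed, at the cost of a slightly lower order of convergence.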

## Method of False Position

An improvement of the Secant method that keeps the root bracketed. If $f(p_0)f(p_1)<0$, we choose:

$p_n=\begin{cases} p_{n-1}-\frac{f(p_{n-1})}{\frac{f(p_{n-1})-f(p_{n-2})}{p_{n-1}-p_{n-2}}}\\ p_{n-1}-\frac{f(p_{n-1})}{\frac{f(p_{n-1})-f(p_{n-3})}{p_{n-1}-p_{n-3}}}\\ \end{cases}$

depending on whether $f(p_{n-1})f(p_{n-2})<0$ or $f(p_{n-1})f(p_{n-3})<0$.

• Line 1 passes through $(p_{n-1},f(p_{n-1}))$ and $(p_{n-2},f(p_{n-2}))$.

• Line 2 passes through $(p_{n-1},f(p_{n-1}))$ and $(p_{n-3},f(p_{n-3}))$.

We choose which line to use according to where the zero is bracketed: between $p_{n-2}$ and $p_{n-1}$, or between $p_{n-1}$ and $p_{n-3}$.
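The bracket-keeping idea can be sketched as follows. This is the standard variant that stores the two bracketing endpoints explicitly rather than tracking $p_{n-2}$/$p_{n-3}$ indices; all names are illustrative:

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """False position: a secant step, but the next point always
    replaces the endpoint with the same sign, so [a, b] keeps
    bracketing the root."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("need f(a) * f(b) < 0")
    p = a
    for _ in range(max_iter):
        p_new = b - fb * (b - a) / (fb - fa)   # secant line zero
        fp = f(p_new)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
        if fa * fp < 0:
            b, fb = p_new, fp
        else:
            a, fa = p_new, fp
    return p

root = false_position(lambda x: x * x - 2, 1.0, 2.0)
```

Keeping the sign change guarantees convergence for continuous $f$, though typically only at a linear rate.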

## Error Analysis for Iteration

Definition: If positive constants $\lambda$ and $\alpha$ exist with

$\lim_{n\rightarrow\infty}\frac{|p_{n+1}-p|}{|p_n-p|^{\alpha}}=\lambda$

then $\{p_n\}^\infty_{n=0}$ converges to $p$ of order $\alpha$, with asymptotic error constant $\lambda$.

The higher the order, the more rapidly the sequence converges.

### Fixed-point method

Theorem: If $|g'(x)|\leq k<1$ for any $x\in(a,b)$, then the fixed point iteration converges linearly to the unique fixed point.

• Proof:

$\lim_{n\rightarrow\infty}\frac{|p_{n+1}-p|}{|p_n-p|^1}=\lim_{n\rightarrow\infty}\frac{|g(p_n)-g(p)|}{|p_n-p|}=\lim_{n\rightarrow\infty}|g'(\xi_n)|=|g'(p)|,\quad \xi_n\text{ between }p_n\text{ and }p$

So $\alpha=1$. If $g'(p)=0$, the order may be higher.

Moreover, if $g'(p)=0$ and $|g''(x)|\leq M$ near $p$, then we have:

$\lim_{n\rightarrow\infty}\frac{|p_{n+1}-p|}{|p_n-p|^2}=\lim_{n\rightarrow\infty}\frac{|g(p_n)-g(p)|}{|p_n-p|^2}\\ =\lim_{n\rightarrow\infty}\frac{|g(p)+g'(p)(p_{n}-p)+\frac{g''(\xi_n)}{2}(p_n-p)^2-g(p)|}{|p_n-p|^2}\\ =\lim_{n\rightarrow\infty}\frac{|g''(\xi_n)|}{2}=\frac{|g''(p)|}{2}\leq\frac{M}{2}$

So $\alpha=2$ when $g'(p)=0$ and $g''$ is bounded.

If $|g'(p)|>1$, the iteration diverges for any starting point other than $p$.

## From Root Finding to Fixed Point

For root finding problem $f(x)=0$, we let $g(x)$ be in the form

$g(x)=x-\phi(x)f(x),\phi(x)\neq 0$

so $g(x)=x\Rightarrow f(x)=0$.

Since $g'(x)=1-\phi'(x)f(x)-\phi(x)f'(x)$, letting $x=p$ gives $g'(p)=1-\phi(p)f'(p)$, so $g'(p)=0$ if and only if $\phi(p)=\frac{1}{f'(p)}$.

Definition: A solution $p$ of $f(x)=0$ is a zero of multiplicity $m$ of $f(x)$ if for $x\neq p$, we can write

$f(x)=(x-p)^mq(x)\\ \lim_{x\rightarrow p}q(x)\neq 0$

When it comes to Newton’s method, where $g(x)=x-f(x)/f'(x)$, we have

$g(x)=x-\frac{(x-p)^mq(x)}{m(x-p)^{m-1}q(x)+(x-p)^mq'(x)}\\ =x-(x-p)\frac{q(x)}{mq(x)+(x-p)q'(x)}\\ \lim_{x\rightarrow p}g'(x)=1-\frac{1}{m}$

so for a zero of multiplicity $m>1$, $g'(p)\neq 0$ and Newton’s method converges only linearly.
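The effect of multiplicity can be checked numerically. Below, plain Newton on a double root gains only about one binary digit per step (error halves, since $g'(p)=1-\frac12$), while scaling the step by $m$ (the standard modified Newton step $x-m\,f(x)/f'(x)$) hits the root immediately on this example. All names are illustrative:

```python
def newton_step(f, df, p, m=1):
    """One step of (modified) Newton's method: p - m * f(p) / f'(p)."""
    return p - m * f(p) / df(p)

f = lambda x: (x - 1.0) ** 2       # double root (multiplicity m = 2) at x = 1
df = lambda x: 2.0 * (x - 1.0)

p_plain = p_mod = 2.0
for _ in range(10):
    p_plain = newton_step(f, df, p_plain)         # m = 1: error halves each step
    if df(p_mod) != 0:                            # guard: f'/f vanish at the root
        p_mod = newton_step(f, df, p_mod, m=2)    # m = 2: exact after one step here
```

After ten steps the plain iterate is still about $2^{-10}$ away from the root, while the modified iterate reached it on the first step.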

## Aitken’s $\Delta^2$ Method

Suppose $\{p_n\}^\infty_{n=0}$ is a linearly convergent sequence ($\alpha=1$) with limit $p$:

$\lim_{n\rightarrow\infty}\frac{p_{n+1}-p}{p_n-p}=\lambda\neq 0$

So when $n$ is large enough:

$\frac{p_{n+1}-p}{p_n-p}\approx\frac{p_{n+2}-p}{p_{n+1}-p}\\ p\approx p_n-\frac{(p_{n+1}-p_n)^2}{p_{n+2}-2p_{n+1}+p_n}$

Aitken’s $\Delta^2$ method is to define a new sequence:

$q_n=p_n-\frac{(p_{n+1}-p_n)^2}{p_{n+2}-2p_{n+1}+p_n}$

$\{q_n\}^\infty_{n=0}$ will converge to $p$ more rapidly than $\{p_n\}^\infty_{n=0}$.

Definition: The forward difference $\Delta p_n$ of a sequence $\{p_n\}^\infty_{n=0}$ is defined by:

$\Delta p_n=p_{n+1}-p_n$

So $q_n=p_n-\frac{(\Delta p_n)^2}{\Delta^2p_n}$.

Theorem: $\lim_{n\rightarrow\infty}\frac{q_n-p}{p_n-p}=0$, so $q_n$ converges faster than $p_n$.
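The acceleration can be observed on the linearly convergent iteration $p_n=\cos(p_{n-1})$ (the helper name `aitken` is illustrative):

```python
import math

def aitken(p):
    """Apply Aitken's delta-squared formula to a list of iterates."""
    return [p[n] - (p[n + 1] - p[n]) ** 2 / (p[n + 2] - 2 * p[n + 1] + p[n])
            for n in range(len(p) - 2)]

# Linearly convergent fixed point iteration p_n = cos(p_{n-1}).
p = [0.5]
for _ in range(8):
    p.append(math.cos(p[-1]))
q = aitken(p)
```

With nine iterates $p_0,\dots,p_8$, the last accelerated value $q_6$ (built from $p_6,p_7,p_8$) is already far closer to the fixed point $p\approx 0.7390851$ than $p_6$ itself.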

## Steffensen’s Method (Aitken’s $\Delta^2$ Method for Fixed-Point Iteration)

Repeat:

$\begin{cases}p_0\\p_1=g(p_0)\\p_2=g(p_1)\\p_3=p_0-\frac{(\Delta p_0)^2}{\Delta^2p_0}\end{cases} \begin{cases}p_3\\p_4=g(p_3)\\p_5=g(p_4)\\p_6=p_3-\frac{(\Delta p_3)^2}{\Delta^2p_3}\end{cases} \begin{cases}p_6\\p_7=g(p_6)\\p_8=g(p_7)\\p_9=p_6-\frac{(\Delta p_6)^2}{\Delta^2p_6}\end{cases}\cdots$

Theorem: If $g'(p)\neq 1$, and there exists a $\delta>0$ such that $g\in C^3[p-\delta,p+\delta]$, then Steffensen’s method gives a quadratic ($\alpha=2$) convergence for any $p_0\in[p-\delta,p+\delta]$.
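Each pass of the scheme above (two iterations of $g$, then one Aitken combination) can be sketched as follows; names and tolerances are illustrative:

```python
import math

def steffensen(g, p0, tol=1e-12, max_iter=50):
    """One pass per loop: p1 = g(p), p2 = g(p1), then the Aitken
    delta-squared combination becomes the next iterate."""
    p = p0
    for _ in range(max_iter):
        p1 = g(p)
        p2 = g(p1)
        denom = p2 - 2 * p1 + p
        if denom == 0:              # iterates coincide: fixed point reached
            return p2
        p_new = p - (p1 - p) ** 2 / denom
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("did not converge")

p = steffensen(math.cos, 0.5)
```

Compared to plain fixed-point iteration on $g=\cos$, which needs dozens of steps, this converges in a handful of passes, consistent with the quadratic convergence claimed by the theorem.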

## Zeros of Polynomials and Muller’s Method

Find the roots of a given polynomial:

$f(x)=x^n+a_1x^{n-1}+...+a_{n-1}x+a_n$

Initially we have three points $(x_0,f(x_0)),(x_1,f(x_1)),(x_2,f(x_2))$; we draw a parabola through them.

The parabola generally meets the x-axis at two points; we choose the one closer to $x_2$ as $x_3$, then repeat the iteration, drawing a parabola through $(x_1,f(x_1)),(x_2,f(x_2)),(x_3,f(x_3))$.

When $|x_{n+1}-x_n|<\varepsilon$, the iteration terminates and the root has been found.
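A sketch of the parabola-fitting step using complex arithmetic (`cmath`), so that complex zeros can also be found; the coefficient formulas are the standard divided-difference ones for the interpolating parabola, and all names are illustrative:

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-10, max_iter=100):
    """Fit a parabola through the three latest points, then step to
    the parabola's root closer to x2."""
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)          # parabola coefficients at x2
        b = d2 + h2 * a
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # Larger-magnitude denominator gives the root closer to x2.
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    raise RuntimeError("did not converge")

# Real root of x^3 - 2x - 5 is about 2.0945514815.
root = muller(lambda x: x ** 3 - 2 * x - 5, 0.0, 1.0, 2.0)
```

Writing the quadratic root as $x_3=x_2-2c/(b\pm\sqrt{b^2-4ac})$ avoids cancellation when $b$ and the square root are close in magnitude.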

# Chapter 3: Interpolation and Polynomial Approximation

Theorem (Weierstrass Approximation Theorem): Suppose $f$ is defined and continuous on $[a,b]$, for each $\varepsilon>0$, there exists a polynomial $P(x)$ defined on $[a,b]$, with the property that:

$\forall x\in[a,b],\;|f(x)-P(x)|<\varepsilon$

## The n-th Lagrange Interpolating Polynomial

$L_{n,k}(x)=\prod_{i=0,i\neq k}^n\frac{x-x_i}{x_k-x_i}\\ L_{n,k}(x_i)=\begin{cases}0&i\neq k\\1&i=k\end{cases} \\ P(x)=\sum_{k=0}^nf(x_k)L_{n,k}(x)$

Theorem: Suppose $x_0,x_1,..,x_n$ are distinct numbers in the interval $[a,b]$ and $f\in C^{n+1}[a,b]$, then for each $x\in[a,b]$, a number $\xi(x)\in(a,b)$ exists with

$f(x)=P_n(x)+\frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_0)(x-x_1)...(x-x_n)$

Where $P_n(x)$ is the Lagrange Interpolating polynomial.
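Evaluating the Lagrange form directly can be sketched as follows (function and variable names are illustrative):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        L = 1.0                      # build L_{n,k}(x) factor by factor
        for i, xi in enumerate(xs):
            if i != k:
                L *= (x - xi) / (xk - xi)
        total += yk * L
    return total

# Interpolating a degree-2 polynomial at 3 nodes reproduces it exactly.
xs = [0.0, 1.0, 2.0]
ys = [x * x + 1 for x in xs]   # samples of f(x) = x^2 + 1
value = lagrange(xs, ys, 1.5)  # exact value: 1.5^2 + 1 = 3.25
```

Since the error term contains $f^{(n+1)}$, the interpolant is exact whenever $f$ itself is a polynomial of degree at most $n$, as this example checks.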

The error is related to the node spacing $h$. If the nodes are equally spaced with spacing $h$ and $|f^{(n+1)}|\leq M_{n+1}$, then:

$|f(x)-P_1(x)|=\frac{|f''(\xi)|}{2}|(x-x_0)(x-x_1)|\leq\frac{M_2h^2}{8}\\ |f(x)-P_n(x)|\leq \frac{M_{n+1}}{4(n+1)}h^{n+1}$

Runge phenomenon: for equally spaced nodes, increasing $n$ can make the difference between the function and the interpolating polynomial larger, especially near the endpoints of the interval.

## Data Approximation and Neville’s Method

Theorem: Consider $n+1$ distinct points $x_0,x_1,...,x_n$. Let $P_1(x)$ be the Lagrange polynomial interpolating $f$ at all of these nodes except $x_j$, and let $P_2(x)$ be the one interpolating $f$ at all of these nodes except $x_i$ ($i\neq j$).

Then $P(x)=\frac{(x-x_j)P_1(x)-(x-x_i)P_2(x)}{x_i-x_j}$ is the Lagrange polynomial interpolating $f$ at all $n+1$ nodes.

So the Lagrange polynomial can be generated recursively.

$(x_1,x_2,x_3,...,x_{n-1},x_n)\\ \Rightarrow(x_1,x_2,..,x_{n-1})+(x_2,x_3,...,x_n)\\ \Rightarrow (x_1,x_2,...,x_{n-2})+2(x_2,x_3,...,x_{n-1})+(x_3,x_4,...,x_n)\\ \Rightarrow ...$
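The recursion can be organized into a triangular table and evaluated in place, which is Neville's method (the array name `q` is illustrative):

```python
def neville(xs, ys, x):
    """Neville's method: combine interpolants on overlapping node sets
    recursively; q[i] ends up holding the full interpolant's value."""
    q = list(ys)
    n = len(xs)
    for j in range(1, n):                  # column of the Neville table
        for i in range(n - 1, j - 1, -1):  # update in place, bottom up
            q[i] = ((x - xs[i - j]) * q[i] - (x - xs[i]) * q[i - 1]) \
                   / (xs[i] - xs[i - j])
    return q[-1]

xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]           # samples of f(x) = x^2 + 1
value = neville(xs, ys, 1.5)   # degree-2 interpolant is exact here: 3.25
```

Each table entry combines two interpolants on node sets that overlap in all but one point, exactly as in the theorem above.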

## Divided Differences

Definition: The zeroth divided difference of the function $f$ with respect to $x_i$, denoted $f[x_i]$, is simply the value of $f$ at $x_i$:

$f[x_i]=f_i=f(x_i)$

while the first divided difference of $f$ is defined as

$f[x_i,x_{i+1}]=\frac{f[x_{i+1}]-f[x_i]}{x_{i+1}-x_i}$

the second divided difference is

$f[x_i,x_{i+1},x_{i+2}]=\frac{f[x_{i+1},x_{i+2}]-f[x_{i},x_{i+1}]}{x_{i+2}-x_i}$

Theorem: $P_n(x)$ can be rewritten as

$P_n(x)=f[x_0]+f[x_0,x_1](x-x_0)+f[x_0,x_1,x_2](x-x_0)(x-x_1)+...$

which is known as Newton’s interpolatory divided difference formula.
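Computing the coefficients and evaluating the Newton form can be sketched as follows (the in-place table update and helper names are illustrative):

```python
def divided_differences(xs, ys):
    """Return the coefficients f[x0], f[x0,x1], ... of Newton's form,
    computed by overwriting a single array column by column."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate Newton's form by nested (Horner-like) multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]                 # samples of f(x) = x^2 + 1
coef = divided_differences(xs, ys)   # here: [1.0, 1.0, 1.0]
value = newton_eval(xs, coef, 1.5)   # 1 + 1*(x) + 1*x(x-1) at x=1.5 -> 3.25
```

A practical advantage over the Lagrange form: adding a new node appends one coefficient without recomputing the old ones.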

Theorem: a number $\xi$ exists in $(a,b)$ with

$f[x_0,x_1,...,x_n]=\frac{f^{(n)}(\xi)}{n!}$

Definition: the forward difference is defined by:

$\Delta f(x_i)=f(x_{i+1})-f(x_i)$

If $x_{i+1}-x_i\equiv h$ for all $i$ and $x=x_0+sh$, then

$P_n(x)=f[x_0]+f[x_0,x_1]sh+f[x_0,x_1,x_2]s(s-1)h^2+...\\ =\sum_{k=0}^nf[x_0,x_1,...,x_k]C_s^kk!h^k\\ \because f[x_0,x_1,..,x_k]=\frac{1}{k!h^k}\Delta^kf(x_0)\\ \therefore P_n(x)=\sum_{k=0}^nC_s^k\Delta^kf(x_0)$

The formula is named as Newton Forward Difference Formula.

Definition: the backward difference is defined by:

$\bigtriangledown f(x_i)=f(x_i)-f(x_{i-1})$

So we have

$f[x_n,x_{n-1}]=\frac{1}{h}\bigtriangledown f(x_n)\\ f[x_n,x_{n-1},x_{n-2},...,x_{n-k}]=\frac{1}{k!h^k}\bigtriangledown^kf(x_n)\\ P_n(x)=\sum_{k=0}^n(-1)^kC_{-s}^k\bigtriangledown^k f(x_n)\\ where\;C_{-s}^k=\frac{-s(-s-1)...(-s-k+1)}{k!}$

The formula is named as Newton Backward Difference Formula.

## Hermite Interpolation

Problem: Given $n+1$ distinct numbers $x_0,x_1,...,x_n$ and non-negative integers $k_0,k_1,...,k_n$, construct the osculating polynomial approximating a function $f\in C^m[a,b]$, where $m=\max(k_0,k_1,...,k_n)$.

The osculating polynomial $H(x)$ requires:

$f(x_0)=H(x_0),f'(x_0)=H'(x_0),...,f^{(k_0)}(x_0)=H^{(k_0)}(x_0)\\ f(x_1)=H(x_1),f'(x_1)=H'(x_1),...,f^{(k_1)}(x_1)=H^{(k_1)}(x_1)\\ ......\\ f(x_n)=H(x_n),f'(x_n)=H'(x_n),...,f^{(k_n)}(x_n)=H^{(k_n)}(x_n)\\$

Obviously the degree of this osculating polynomial is at most $K=\sum_{i=0}^nk_i+n$, since the number of conditions to be satisfied is $K+1$.

If $k_0=k_1=...=k_n=1$, then the Hermite polynomial is:

$H_{2n+1}(x)=\sum_{j=0}^n f(x_j)H_{n,j}(x)+\sum_{j=0}^nf'(x_j)\hat{H}_{n,j}(x)\\ where\;H_{n,j}(x)=[1-2(x-x_j)L'_{n,j}(x_j)]L^2_{n,j}(x)\\ \hat{H}_{n,j}(x)=(x-x_j)L^2_{n,j}(x)$

the error is

$f(x)=H_{2n+1}(x)+\frac{\prod_{k=0}^n(x-x_k)^2}{(2n+2)!}f^{(2n+2)}(\xi)$

## Piecewise Linear Interpolation

Instead of constructing a single high-degree polynomial to approximate the function, we can build the interpolant from several low-degree segments.

First construct the basis functions:

$l_0(x)=\frac{x-x_1}{x_0-x_1},x_0\leq x\leq x_1\\ l_i(x)=\begin{cases}\frac{x-x_{i-1}}{x_i-x_{i-1}}&x_{i-1}\leq x\leq x_i\\\frac{x-x_{i+1}}{x_i-x_{i+1}}&x_{i}\leq x\leq x_{i+1} \end{cases}\\ l_n(x)=\frac{x-x_{n-1}}{x_n-x_{n-1}},x_{n-1}\leq x\leq x_n$

Then the piecewise interpolation linear polynomial is:

$P(x)=\sum_{i=0}^nl_i(x)f(x_i)$

The error bound is:

$|f(x)-P(x)|\leq\frac{h^2}{8}M$

where $h=\max_i(x_{i+1}-x_i)$ and $M=\max_{a\leq x\leq b}|f''(x)|$.
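The $\frac{h^2}{8}M$ bound can be checked numerically; the sketch below interpolates $\sin x$ on $[0,1]$ with $h=0.25$ (so the bound is $h^2/8\cdot\max|\sin|\leq 0.0079$), and all names are illustrative:

```python
import math

def piecewise_linear(xs, ys, x):
    """Evaluate the piecewise linear interpolant at x (xs sorted)."""
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])     # local coordinate in [0, 1]
    return (1 - t) * ys[i] + t * ys[i + 1]

xs = [i * 0.25 for i in range(5)]             # h = 0.25 on [0, 1]
ys = [math.sin(x) for x in xs]
err = abs(piecewise_linear(xs, ys, 0.3) - math.sin(0.3))
```

Unlike a single high-degree polynomial, refining $h$ always shrinks the error here (at rate $O(h^2)$), with no Runge phenomenon.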

When considering the osculating polynomial with $k_0=k_1=...=k_n=1$ (piecewise cubic Hermite interpolation), we can construct:

$P(x)=\sum_{i=0}^n H_i(x)f(x_i)+\hat{H}_i(x)f'(x_i)$

where

$H_i(x)=\begin{cases}(1+2\frac{x-x_i}{x_{i-1}-x_i})(\frac{x-x_{i-1}}{x_i-x_{i-1}})^2&x_{i-1}\leq x\leq x_i\\ (1+2\frac{x-x_i}{x_{i+1}-x_i})(\frac{x-x_{i+1}}{x_i-x_{i+1}})^2&x_{i}\leq x\leq x_{i+1} \end{cases}\\ \hat{H}_i(x)=\begin{cases}(x-x_i)(\frac{x-x_{i-1}}{x_i-x_{i-1}})^2&x_{i-1}\leq x\leq x_i\\ (x-x_i)(\frac{x-x_{i+1}}{x_i-x_{i+1}})^2&x_i\leq x\leq x_{i+1} \end{cases}$

# Chapter 4: Numerical Differentiation and Numerical Integration

Formula:

$f'(x_0)\approx \frac{f(x_0+h)-f(x_0)}{h}\\ f'(x_0)\approx \frac{f(x_0)-f(x_0-h)}{h}\\ f'(x_0)\approx \frac{f(x_0+h)-f(x_0-h)}{2h}$

## (n+1)-Point Formulas

$\because f(x)=\sum_{k=0}^nf(x_k)L_{n,k}(x)+\frac{(x-x_0)...(x-x_n)}{(n+1)!}f^{(n+1)}(\xi)\\ \therefore f'(x_j)=\sum_{k=0}^nf(x_k)L'_{n,k}(x_j)+\frac{f^{(n+1)}(\xi)}{(n+1)!}\prod_{k=0,k\neq j}^n(x_j-x_k)$

Note that this formula is only meaningful at the nodes, i.e. for $x=x_0,x_1,...,x_n$.

When $n=2$, $x_1=x_0+h$, and $x_2=x_0+2h$:

$f'(x_0)=\frac{1}{2h}[-3f(x_0)+4f(x_0+h)-f(x_0+2h)]+\frac{h^2}{3}f^{(3)}(\xi)$
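The accuracy gap between the $O(h)$ one-sided difference and the $O(h^2)$ formulas above can be observed directly (the test function $\sin$ at $x_0=1$ and the step $h$ are illustrative choices):

```python
import math

def forward_diff(f, x, h):
    """O(h) forward difference."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """O(h^2) central difference."""
    return (f(x + h) - f(x - h)) / (2 * h)

def three_point_endpoint(f, x, h):
    """O(h^2) three-point endpoint formula (uses only one side of x)."""
    return (-3 * f(x) + 4 * f(x + h) - f(x + 2 * h)) / (2 * h)

h = 0.01
exact = math.cos(1.0)                          # (sin)'(1)
e_fwd = abs(forward_diff(math.sin, 1.0, h) - exact)
e_cen = abs(central_diff(math.sin, 1.0, h) - exact)
e_end = abs(three_point_endpoint(math.sin, 1.0, h) - exact)
```

The endpoint formula matches the central one in order while needing values on only one side of $x_0$, which is what makes it useful at interval boundaries.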

By Taylor’s theorem, we have

$f(x_0+h)=f(x_0)+f'(x_0)h+\frac{1}{2}f''(x_0)h^2+\frac{1}{6}f^{(3)}(x_0)h^3+\frac{1}{24}f^{(4)}(\xi_1)h^4\\ f(x_0-h)=f(x_0)-f'(x_0)h+\frac{1}{2}f''(x_0)h^2-\frac{1}{6}f^{(3)}(x_0)h^3+\frac{1}{24}f^{(4)}(\xi_{-1})h^4\\$

Therefore:

$f(x_0+h)+f(x_0-h)=2f(x_0)+f''(x_0)h^2+\frac{h^4}{24}[f^{(4)}(\xi_1)+f^{(4)}(\xi_{-1})]$

Suppose $f^{(4)}$ is continuous, then there exists $\xi\in[x_0-h,x_0+h]$ satisfying

$f^{(4)}(\xi)=\frac{1}{2}[f^{(4)}(\xi_1)+f^{(4)}(\xi_{-1})]$

So

$f''(x_0)=\frac{1}{h^2}[f(x_0-h)-2f(x_0)+f(x_0+h)]-\frac{h^2}{12}f^{(4)}(\xi)$

## Richardson’s Extrapolation

Suppose that for each number $h\neq 0$ we have a formula $N(h)$ that approximates an unknown value $M$ and that the truncation error involved with the approximation has the form:

$M-N(h)=K_1h+K_2h^2+K_3h^3+...$

Then we have

$M=N(\frac{h}{2})+K_1\frac{h}{2}+K_2\frac{h^2}{4}+...$

Therefore

$M=[N(\frac{h}{2})+N(\frac{h}{2})-N(h)]+K_2(\frac{h^2}{2}-h^2)+...\\ =2N(\frac{h}{2})-N(h)+O(h^2)$

let $N_2(h)=2N(\frac{h}{2})-N(h)$, then

$\because M=N_2(h)-\frac{K_2}{2}h^2-\frac{3K_3}{4}h^3-...\\ \therefore3M=4N_2(\frac{h}{2})-N_2(h)+O(h^3)$

In general, if $M$ can be written in the form:

$M=N_1(h)+\sum_{j=1}^{m-1}K_jh^j+O(h^m)$

then

$N_j(h)=N_{j-1}(\frac{h}{2})+\frac{N_{j-1}(h/2)-N_{j-1}(h)}{2^{j-1}-1}\\ M=N_j(h)+O(h^j)$
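The general recurrence can be sketched as a triangular table; below it is applied to the forward difference, whose error expansion $K_1h+K_2h^2+\cdots$ has exactly the assumed form. The function name `richardson` and the test function $e^x$ at $x=0$ are illustrative choices:

```python
import math

def richardson(N, h, m):
    """Richardson extrapolation for M = N(h) + K1 h + K2 h^2 + ...
    Column j removes the O(h^j) term via
    N_j(h) = N_{j-1}(h/2) + (N_{j-1}(h/2) - N_{j-1}(h)) / (2^(j-1) - 1)."""
    # First column: N_1 evaluated at h, h/2, h/4, ...
    table = [[N(h / 2 ** i) for i in range(m)]]
    for j in range(2, m + 1):
        prev = table[-1]
        table.append([prev[i + 1] + (prev[i + 1] - prev[i]) / (2 ** (j - 1) - 1)
                      for i in range(len(prev) - 1)])
    return table[-1][-1]

# Forward difference approximating f'(0) = 1 for f = exp.
f, x = math.exp, 0.0
N = lambda h: (f(x + h) - f(x)) / h
approx = richardson(N, 0.1, 4)
```

Even with the coarse step $h=0.1$, four columns reduce the crude $O(h)$ approximation to roughly $O(h^4)$ accuracy, using only inexpensive re-evaluations at halved steps.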