Ngô Quốc Anh

June 2, 2009

Finite difference method: Local truncation error and consistency

Truncation error, or local truncation error, is the error made by a numerical algorithm through taking a finite number of steps or terms in a computation. It is present even with infinite-precision arithmetic, because it is caused by truncating the infinite Taylor series used to form the algorithm.

The use of arbitrarily small steps in numerical computation is prevented by round-off error, which is the consequence of using finite-precision floating-point numbers on computers.
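The competition between the two error sources can be seen in the simplest possible setting. The script below (an illustration of our own; the test function and step sizes are arbitrary choices) approximates f'(1) for f = exp by the forward difference (f(x+h) - f(x))/h. The truncation error decreases like (h/2)|f''| as h shrinks, until round-off error, which grows roughly like eps/h, takes over.

```python
import numpy as np

# Forward-difference approximation of f'(x) for f = exp at x = 1.
# Truncation error ~ (h/2)|f''(x)| shrinks with h; round-off error ~ eps/h
# grows, so the total error is typically smallest near h ~ sqrt(eps) ~ 1e-8.
f, x = np.exp, 1.0
exact = np.exp(x)  # f'(x) = exp(x)

for h in [1e-2, 1e-5, 1e-8, 1e-11, 1e-14]:
    approx = (f(x + h) - f(x)) / h
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
```

In double precision the total error typically bottoms out near h of order 10^{-8}; below that, decreasing h makes the answer worse, which is exactly why arbitrarily small steps are not usable in practice.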

Let F_{i,j}(u)=0 represent the difference equation approximating the PDE at the (i,j)-th mesh point, with exact solution u. For example,

\displaystyle {F_{i,j}}\left( u \right) = \frac{{{u_{i,j + 1}} - {u_{i,j}}}}{k} - \frac{{{u_{i - 1,j}} - 2{u_{i,j}} + {u_{i + 1,j}}}}{{{h^2}}}

represents the difference equation approximating the following PDE

\displaystyle\frac{{\partial u}} {{\partial t}} = \frac{{{\partial ^2}u}} {{\partial {x^2}}}

at the (i,j)-th mesh point. If u is replaced by U at the mesh points of the difference equation, where U is the exact solution of the PDE, the value of F_{i,j}(U) is called the local truncation error T_{i,j} at the (i,j)-th mesh point. Clearly, F_{i,j}(U) measures the amount by which the exact solution values of the PDE at the mesh points fail to satisfy the difference equation at the point (ih, jk).

Continuing the above example gives us

\displaystyle {T_{i,j}} = {F_{i,j}}\left( U \right) = \frac{{{U_{i,j + 1}} - {U_{i,j}}}}{k} - \frac{{{U_{i - 1,j}} - 2{U_{i,j}} + {U_{i + 1,j}}}}{{{h^2}}}.

Using Taylor expansions, it is easy to express T_{i,j} in terms of powers of h and k and partial derivatives of U at (ih, jk). This is how the local truncation error is computed in practice.
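This Taylor computation can be mechanized with a computer algebra system. The sketch below (our own illustration; the symbol names are arbitrary) expands F_{i,j}(U) for the explicit scheme above about the point (x, t), recovering T = U_t + (k/2)U_tt - U_xx - (h^2/12)U_xxxx plus higher-order terms.

```python
import sympy as sp

# Symbolic expansion of F_{i,j}(U) for the explicit scheme approximating
# u_t = u_xx (symbol names are illustrative).
x, t, h, k = sp.symbols('x t h k', positive=True)
U = sp.Function('U')

F = (U(x, t + k) - U(x, t)) / k \
    - (U(x - h, t) - 2 * U(x, t) + U(x + h, t)) / h**2

# Expand about (x, t): first in k, then in h; doit() resolves Subs objects.
T = F.series(k, 0, 2).removeO()
T = T.series(h, 0, 4).removeO()
T = sp.expand(T.doit())
sp.pprint(T)
# Mathematically: U_t + (k/2) U_tt - U_xx - (h^2/12) U_xxxx + higher order
```

Since U satisfies U_t = U_xx, the surviving terms give T_{i,j} = O(k) + O(h^2).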

It is sometimes possible to approximate a parabolic or hyperbolic equation by a finite-difference scheme that is stable (i.e. limits the amplification of all the components of the initial conditions), but which has a solution that converges to the solution of a different differential equation as the mesh lengths tend to zero. Such a difference scheme is said to be inconsistent or incompatible with the PDE.
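The classic instance of such a scheme is the DuFort–Frankel approximation of u_t = u_xx (our choice of example; it is not named in the text). Its truncation error contains a (k/h)^2 u_tt term, so it is consistent only if k/h -> 0 as the mesh is refined. The check below (illustrative, with an arbitrarily chosen evaluation point) plugs the exact solution U = e^{-pi^2 t} cos(pi x) into the scheme: with k = h the residual stalls near u_tt, while with k = h^2 it tends to zero.

```python
import numpy as np

# Exact solution of u_t = u_xx: U = exp(-pi^2 t) cos(pi x).
U = lambda x, t: np.exp(-np.pi**2 * t) * np.cos(np.pi * x)

def dufort_frankel_residual(h, k, x=0.3, t=0.5):
    """F(U) for the DuFort-Frankel scheme
    (u_{i,j+1} - u_{i,j-1})/(2k)
        = (u_{i-1,j} - u_{i,j+1} - u_{i,j-1} + u_{i+1,j})/h^2."""
    return ((U(x, t + k) - U(x, t - k)) / (2 * k)
            - (U(x - h, t) - U(x, t + k) - U(x, t - k) + U(x + h, t)) / h**2)

for h in [0.1, 0.05, 0.025, 0.0125]:
    print(f"h = {h:7.4f}   k = h: {dufort_frankel_residual(h, h):+.4f}"
          f"   k = h^2: {dufort_frankel_residual(h, h**2):+.6f}")
```

With k/h fixed at 1 the residual converges to (d^2 U/dt^2)(0.3, 0.5), about 0.41, rather than to zero: the scheme is consistent with the hyperbolic equation u_t + (k/h)^2 u_tt = u_xx instead of the heat equation.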

The real importance of the concept of consistency lies in a theorem by Lax which states that if a linear finite-difference equation is consistent with a properly posed linear initial-value problem then stability guarantees convergence of u to U as the mesh lengths tend to zero. Consistency can be defined in either of two equivalent but slightly different ways.

The more general definition is as follows. Let L(U)=0 represent the PDE in the independent variables x and t, with exact solution U. Let F(u)=0 represent the approximating finite-difference equation with exact solution u. Let v be a continuous function of x and t with a sufficient number of continuous derivatives to enable L(v) to be evaluated at the point (ih,jk).

Then the truncation error T_{i,j}(v) at the point (ih,jk) is defined by

\displaystyle T_{i,j}(v) = F_{i,j}(v) - L(v_{i,j}).

If T_{i,j}(v) \to 0 as h \to 0 and k \to 0, the difference equation is said to be consistent or compatible with the PDE. Most authors put v = U because L(U)=0.

For example, let us consider the following parabolic equation

\displaystyle u_t = \nu u_{xx} - \kappa u, \quad a<x<b, \quad 0<t<T


\displaystyle u(x,0)=f(x), \quad a \leq x \leq b


\displaystyle u_x(a,t)=g_1(t), \quad u_x(b,t)=g_2(t), \quad 0 \leq t \leq T,

where \nu >0 and \kappa \geq 0.

Backward Euler in time + second-order central finite differences in space.

We divide the interval a \leq x \leq b into N equal parts and the interval 0 \leq t \leq T into M equal parts, so that h = (b-a)/N, k = T/M, and the (i,j)-th mesh point has the form

\displaystyle \left( {a + i\frac{{b - a}} {N},j\frac{T} {M}} \right).

We are now in a position to discretize the problem as follows

\displaystyle\frac{{{u_{i,j + 1}} - {u_{i,j}}}}{k} = \nu  \frac{{{u_{i - 1,j + 1}} - 2{u_{i,j + 1}} + {u_{i + 1,j + 1}}}}{{{h^2}}}  - \kappa {u_{i,j}}

where 1 \leq i \leq N - 1. At i=0, that is x=a, one has

\displaystyle\frac{{{u_{1,j + 1}} - {u_{ - 1,j + 1}}}} {{2h}} = g_1\left( t \right)

and at i=N, that is x=b, one obtains

\displaystyle\frac{{{u_{N + 1,j + 1}} - {u_{N - 1,j + 1}}}} {{2h}} = g_2\left( t \right).
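The interior equations together with the boundary conditions above (imposed on u_x via central differences and ghost points) form one tridiagonal linear system per time step. The sketch below (function and parameter names are ours; a dense `numpy.linalg.solve` is used for brevity where a tridiagonal solver would be more efficient) implements the scheme. Taking f(x) = cos(pi x), nu = 1, kappa = 0, g_1 = g_2 = 0 gives the exact solution e^{-pi^2 t} cos(pi x), whose x-derivative indeed vanishes at both endpoints.

```python
import numpy as np

def backward_euler(f, g1, g2, nu, kappa, a, b, T, N, M):
    """Backward Euler in time, central differences in space, for
    u_t = nu*u_xx - kappa*u with u_x(a,t) = g1(t), u_x(b,t) = g2(t)."""
    h = (b - a) / N
    k = T / M
    x = a + h * np.arange(N + 1)
    r = nu * k / h**2

    # Left-hand matrix of (I - r L) u^{j+1} = (1 - k kappa) u^j + forcing
    A = np.zeros((N + 1, N + 1))
    for i in range(1, N):
        A[i, i - 1] = -r
        A[i, i] = 1 + 2 * r
        A[i, i + 1] = -r
    # boundary rows after eliminating the ghost values u_{-1}, u_{N+1}
    A[0, 0], A[0, 1] = 1 + 2 * r, -2 * r
    A[N, N - 1], A[N, N] = -2 * r, 1 + 2 * r

    u = f(x)
    for j in range(M):
        t_next = (j + 1) * k
        rhs = (1 - k * kappa) * u          # explicit reaction term -kappa*u_{i,j}
        rhs[0] -= 2 * r * h * g1(t_next)   # from eliminating u_{-1,j+1}
        rhs[N] += 2 * r * h * g2(t_next)   # from eliminating u_{N+1,j+1}
        u = np.linalg.solve(A, rhs)
    return x, u
```

For example, with N = 40, M = 400, T = 0.1 the computed solution agrees with e^{-pi^2 t} cos(pi x) to within about 10^{-3}, consistent with first-order accuracy in k and second-order accuracy in h.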

The local truncation error and consistency

The above discretization can be rewritten as follows: eliminating the ghost values u_{-1,j+1} and u_{N+1,j+1} by means of the boundary conditions gives

\displaystyle\frac{{{u_{i,j + 1}} - {u_{i,j}}}}{k} = \nu \frac{{{u_{i - 1,j + 1}} - 2{u_{i,j + 1}} + {u_{i + 1,j + 1}}}}{{{h^2}}} - \kappa {u_{i,j}}

where 1 \leq i \leq N - 1 with the following boundary conditions

\displaystyle\frac{{{u_{0,j + 1}} - {u_{0,j}}}}{k} = \nu \frac{{2{u_{1,j + 1}} - 2{u_{0,j + 1}} - 2h{g_1}\left( t \right)}}{{{h^2}}} - \kappa {u_{0,j}}


\displaystyle\frac{{{u_{N,j + 1}} - {u_{N,j}}}}{k} = \nu \frac{{2{u_{N - 1,j + 1}} - 2{u_{N,j + 1}} + 2h{g_2}\left( t \right)}}{{{h^2}}} - \kappa {u_{N,j}}.

Now we have

\displaystyle {F_{i,j}}\left( U \right) = \frac{{{U_{i,j + 1}} - {U_{i,j}}}}{k} - \nu \frac{{{U_{i - 1,j + 1}} - 2{U_{i,j + 1}} + {U_{i + 1,j + 1}}}}{{{h^2}}} + \kappa {U_{i,j}}.

By Taylor expansion about the mesh point (ih, jk),

\displaystyle {U_{i + 1,j + 1}} = {U_{i,j}} + \left[ h\left( \frac{\partial U}{\partial x} \right)_{i,j} + k\left( \frac{\partial U}{\partial t} \right)_{i,j} \right] + \frac{1}{2}\left[ h^2\left( \frac{\partial^2 U}{\partial x^2} \right)_{i,j} + 2hk\left( \frac{\partial^2 U}{\partial x\,\partial t} \right)_{i,j} + k^2\left( \frac{\partial^2 U}{\partial t^2} \right)_{i,j} \right] + \cdots


\displaystyle {U_{i - 1,j + 1}} = {U_{i,j}} + \left[ -h\left( \frac{\partial U}{\partial x} \right)_{i,j} + k\left( \frac{\partial U}{\partial t} \right)_{i,j} \right] + \frac{1}{2}\left[ h^2\left( \frac{\partial^2 U}{\partial x^2} \right)_{i,j} - 2hk\left( \frac{\partial^2 U}{\partial x\,\partial t} \right)_{i,j} + k^2\left( \frac{\partial^2 U}{\partial t^2} \right)_{i,j} \right] + \cdots


\displaystyle {U_{i,j + 1}} = {U_{i,j}} + k\left( \frac{\partial U}{\partial t} \right)_{i,j} + \frac{k^2}{2}\left( \frac{\partial^2 U}{\partial t^2} \right)_{i,j} + \cdots


Substituting these expansions into F_{i,j}(U) gives

\displaystyle {F_{i,j}}\left( U \right) = \left( \frac{\partial U}{\partial t} \right)_{i,j} + \frac{k}{2}\left( \frac{\partial^2 U}{\partial t^2} \right)_{i,j} - \nu\left( \frac{\partial^2 U}{\partial x^2} \right)_{i,j} - \nu k\left( \frac{\partial^3 U}{\partial x^2\,\partial t} \right)_{i,j} - \frac{\nu h^2}{12}\left( \frac{\partial^4 U}{\partial x^4} \right)_{i,j} + \kappa U_{i,j} + \cdots

which yields

\displaystyle {F_{i,j}}\left( U \right) = \left[ \left( \frac{\partial U}{\partial t} \right)_{i,j} - \nu\left( \frac{\partial^2 U}{\partial x^2} \right)_{i,j} + \kappa U_{i,j} \right] + \frac{k}{2}\left( \frac{\partial^2 U}{\partial t^2} \right)_{i,j} - \nu k\left( \frac{\partial^3 U}{\partial x^2\,\partial t} \right)_{i,j} - \frac{\nu h^2}{12}\left( \frac{\partial^4 U}{\partial x^4} \right)_{i,j} + \cdots

Since U is the exact solution of the PDE, the bracketed term vanishes. Thus, the principal part of the local truncation error is

\displaystyle \frac{k}{2}\left( \frac{\partial^2 U}{\partial t^2} \right)_{i,j} - \nu k\left( \frac{\partial^3 U}{\partial x^2\,\partial t} \right)_{i,j} - \frac{\nu h^2}{12}\left( \frac{\partial^4 U}{\partial x^4} \right)_{i,j},

so that

\displaystyle {T_{i,j}} = O\left( k \right) + O\left( {{h^2}} \right).
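The predicted orders are easy to confirm numerically (an illustration of ours; the evaluation point and step sizes are arbitrary). Evaluating F_{i,j}(U) for the exact solution U = e^{-pi^2 t} cos(pi x) of u_t = u_xx (nu = 1, kappa = 0) and then refining h -> h/2, k -> k/4 should scale T ~ C_1 k + C_2 h^2 by roughly 1/4:

```python
import numpy as np

def F(Ufun, x, t, h, k, nu=1.0, kappa=0.0):
    # F_{i,j}(U) for the backward-Euler / central-difference scheme of the text
    return ((Ufun(x, t + k) - Ufun(x, t)) / k
            - nu * (Ufun(x - h, t + k) - 2 * Ufun(x, t + k)
                    + Ufun(x + h, t + k)) / h**2
            + kappa * Ufun(x, t))

# Exact solution of u_t = u_xx (nu = 1, kappa = 0)
U = lambda x, t: np.exp(-np.pi**2 * t) * np.cos(np.pi * x)

T1 = F(U, 0.3, 0.5, h=1e-2, k=1e-4)
T2 = F(U, 0.3, 0.5, h=0.5e-2, k=0.25e-4)  # h -> h/2, k -> k/4
print(T1 / T2)  # close to 4 when T = O(k) + O(h^2)
```

The observed ratio is very nearly 4, as both the O(k) and the O(h^2) contributions are quartered under this refinement.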


In view of this expression, the local truncation error tends to zero as h \to 0 and k \to 0, so the difference scheme is consistent with the PDE.
