In mathematics, finite-difference methods are numerical methods for approximating the solutions to differential equations using finite difference equations to approximate derivatives.
Finite-difference methods approximate the solutions to differential equations by replacing derivative expressions with approximately equivalent difference quotients. That is, because the first derivative of a function f is, by definition,

f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h},

then a reasonable approximation for that derivative would be to take

f'(a) \approx \frac{f(a+h) - f(a)}{h}

for some small value of h. In fact, this is the forward difference equation for the first derivative. Using this and similar formulae to replace derivative expressions in differential equations, one can approximate their solutions without the need for calculus.
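As an illustrative sketch of the forward difference quotient (the choice of sin, the point x = 1, and the step size are examples for demonstration, not taken from the text):

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) with the forward difference quotient (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Approximate the derivative of sin at x = 1; the exact value is cos(1).
approx = forward_difference(math.sin, 1.0, 1e-6)
exact = math.cos(1.0)
print(approx, exact)
```

With a step size of 10^-6 the two printed values agree to several digits, even though no symbolic differentiation was performed.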
Derivation from Taylor’s polynomial
Assuming the function whose derivatives are to be approximated is properly behaved, by Taylor's theorem,

f(x_0 + h) = f(x_0) + \frac{f'(x_0)}{1!} h + \frac{f''(x_0)}{2!} h^2 + \cdots + \frac{f^{(n)}(x_0)}{n!} h^n + R_n(x),

where n! denotes the factorial of n, and R_n(x) is a remainder term, denoting the difference between the Taylor polynomial of degree n and the original function. Again using the first derivative of the function f as an example, by Taylor's theorem,

f(x_0 + h) = f(x_0) + f'(x_0) h + R_1(x),

which, with some minor algebraic manipulation, is equivalent to

f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{R_1(x)}{h},

so that for h sufficiently small,

f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}.
Accuracy and order
The error in a method’s solution is defined as the difference between its approximation and the exact analytical solution. The two sources of error in finite difference methods are round-off error, the loss of precision due to computer rounding of decimal quantities, and truncation error or discretization error, the difference between the exact solution of the finite difference equation and the exact quantity assuming perfect arithmetic (that is, assuming no round-off).
To use a finite difference method to attempt to solve (or, more generally, approximate the solution to) a problem, one must first discretize the problem's domain. This is usually done by dividing the domain into a uniform grid. Note that this means that finite-difference methods produce sets of discrete numerical approximations to the derivative, often in a "time-stepping" manner.
An expression of general interest is the local truncation error of a method. Typically expressed using Big-O notation, local truncation error refers to the error from a single application of a method. That is, it is the quantity f'(x_i) - f'_i if f'(x_i) refers to the exact value and f'_i to the numerical approximation. The remainder term of a Taylor polynomial is convenient for analyzing the local truncation error. Using the Lagrange form of the remainder from the Taylor polynomial for f(x_0 + h), which is

R_n(x_0 + h) = \frac{f^{(n+1)}(\xi)}{(n+1)!} h^{n+1}, \quad x_0 < \xi < x_0 + h,

the dominant term of the local truncation error can be discovered. For example, again using the forward-difference formula for the first derivative, knowing that f(x_i) = f(x_0 + i h),

f(x_0 + i h) = f(x_0) + f'(x_0) \, i h + \frac{f''(\xi)}{2!} (i h)^2,

and with some algebraic manipulation, this leads to

\frac{f(x_0 + i h) - f(x_0)}{i h} = f'(x_0) + \frac{f''(\xi)}{2!} i h,

and further noting that the quantity on the left is the approximation from the finite difference method and that the quantity on the right is the exact quantity of interest plus a remainder, clearly that remainder is the local truncation error. A final expression of this example and its order is

\frac{f(x_0 + i h) - f(x_0)}{i h} = f'(x_0) + O(h).
This means that, in this case, the local truncation error is proportional to the step size.
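The first-order behavior can be checked numerically: halving h should roughly halve the error of the forward-difference approximation. A small sketch (again using sin at x = 1 as an example):

```python
import math

def forward_difference(f, x, h):
    """Forward difference quotient approximating f'(x)."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # exact derivative of sin at x = 1
errors = [abs(forward_difference(math.sin, 1.0, h) - exact)
          for h in (0.1, 0.05, 0.025, 0.0125)]

# Each halving of h should roughly halve the error: first-order accuracy, O(h).
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print(ratios)
```

The printed ratios cluster near 2, confirming that the truncation error is proportional to the step size.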
Example: the one-dimensional heat equation

Consider the normalized one-dimensional heat equation with homogeneous Dirichlet boundary conditions,

U_t = U_{xx}, \quad U(0, t) = U(1, t) = 0, \quad U(x, 0) = U_0(x).

Partition the domain with a uniform mesh in space and in time, so that consecutive space points are separated by h and consecutive time points by k, and let u_j^n denote the numerical approximation of U(x_j, t_n).

Using a forward difference at time t_n and a second-order central difference for the space derivative at position x_j ("FTCS") we get the recurrence equation

\frac{u_j^{n+1} - u_j^n}{k} = \frac{u_{j+1}^n - 2 u_j^n + u_{j-1}^n}{h^2}.
This is an explicit method for solving the one-dimensional heat equation. We can obtain u_j^{n+1} from the other values this way:

u_j^{n+1} = (1 - 2r) u_j^n + r u_{j-1}^n + r u_{j+1}^n,

where r = k / h^2. So, knowing the values at time n, one can obtain the corresponding values at time n + 1 using this recurrence relation. u_0^n and u_J^n must be replaced by the boundary conditions; in this example they are both 0.
This explicit method is known to be numerically stable and convergent whenever r \le 1/2. The numerical errors are proportional to the time step and to the square of the space step:

\Delta u = O(k) + O(h^2).
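The explicit update above can be sketched in a few lines. The grid size, the hat-function initial condition, and the value r = 0.4 are illustrative assumptions, not part of the text:

```python
def ftcs_step(u, r):
    """One explicit FTCS update for u_t = u_xx with zero boundary values.

    u holds the values at time level n (boundaries included);
    r = k / h**2 should satisfy r <= 0.5 for stability.
    """
    new = u[:]  # boundaries u[0] and u[-1] stay fixed at 0
    for j in range(1, len(u) - 1):
        new[j] = (1 - 2 * r) * u[j] + r * (u[j - 1] + u[j + 1])
    return new

# Illustrative initial condition: a hat function on an 11-point grid.
u = [0.0] * 11
u[5] = 1.0
r = 0.4  # within the stability bound r <= 1/2
for _ in range(10):
    u = ftcs_step(u, r)
print(u)
```

Because r \le 1/2, every coefficient in the update is nonnegative, so the computed values stay between the boundary value 0 and the initial maximum, mirroring the maximum principle of the heat equation.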
If we use the backward difference at time t_{n+1} and a second-order central difference for the space derivative at position x_j ("BTCS") we get the recurrence equation

\frac{u_j^{n+1} - u_j^n}{k} = \frac{u_{j+1}^{n+1} - 2 u_j^{n+1} + u_{j-1}^{n+1}}{h^2}.
This is an implicit method for solving the one-dimensional heat equation.
We can obtain u_j^{n+1} from solving a system of linear equations:

(1 + 2r) u_j^{n+1} - r u_{j-1}^{n+1} - r u_{j+1}^{n+1} = u_j^n.
The scheme is always numerically stable and convergent but usually more numerically intensive than the explicit method, as it requires solving a system of linear equations on each time step. The errors are linear over the time step and quadratic over the space step:

\Delta u = O(k) + O(h^2).
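Since the BTCS system is tridiagonal, each time step can be solved in linear time, for instance with the Thomas algorithm. A minimal sketch, with the same illustrative hat-function initial condition as before and a deliberately large r = 1 (which would be unstable for the explicit scheme):

```python
def btcs_step(u, r):
    """One implicit BTCS update for u_t = u_xx with zero boundary values.

    Solves the tridiagonal system
        (1 + 2r) u_j - r u_{j-1} - r u_{j+1} = u_j^n
    for the interior unknowns using the Thomas algorithm.
    """
    m = len(u) - 2              # number of interior unknowns
    a = [-r] * m                # sub-diagonal (a[0] unused)
    b = [1.0 + 2 * r] * m       # main diagonal
    c = [-r] * m                # super-diagonal (c[-1] unused)
    d = u[1:-1]                 # right-hand side (slice makes a copy)
    # Forward elimination
    for i in range(1, m):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # Back substitution
    x = [0.0] * m
    x[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return [0.0] + x + [0.0]

u = [0.0] * 11
u[5] = 1.0
for _ in range(10):
    u = btcs_step(u, 1.0)  # r = 1 is fine here: BTCS is unconditionally stable
print(u)
```

Even at r = 1, the computed solution decays smoothly toward zero, illustrating the unconditional stability of the implicit scheme.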
Finally, if we use the central difference at time t_{n+1/2} and a second-order central difference for the space derivative at position x_j ("CTCS") we get the recurrence equation

\frac{u_j^{n+1} - u_j^n}{k} = \frac{1}{2} \left( \frac{u_{j+1}^{n+1} - 2 u_j^{n+1} + u_{j-1}^{n+1}}{h^2} + \frac{u_{j+1}^n - 2 u_j^n + u_{j-1}^n}{h^2} \right).
This formula is known as the Crank-Nicolson method. We can obtain u_j^{n+1} from solving a system of linear equations:

(2 + 2r) u_j^{n+1} - r u_{j-1}^{n+1} - r u_{j+1}^{n+1} = (2 - 2r) u_j^n + r u_{j-1}^n + r u_{j+1}^n.
The scheme is always numerically stable and convergent, but usually more numerically intensive, as it requires solving a system of linear equations on each time step.
The errors are quadratic over the time step and formally of the fourth degree regarding the space step:

\Delta u = O(k^2) + O(h^4).
However, near the boundaries, the error is often O(h^2) instead of O(h^4). Usually the Crank-Nicolson scheme is the most accurate scheme for small time steps. The explicit scheme is the least accurate and can be unstable, but it is also the easiest to implement and the least numerically intensive. The implicit scheme works best for large time steps.
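The Crank-Nicolson system is tridiagonal like the BTCS one, only with a different right-hand side. A minimal sketch, again with an illustrative hat-function initial condition and r = 1:

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system via the Thomas algorithm (inputs are copied)."""
    b, d = b[:], d[:]
    n = len(d)
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def crank_nicolson_step(u, r):
    """One Crank-Nicolson (CTCS) update for u_t = u_xx with zero boundary values:

        (2 + 2r) u_j^{n+1} - r u_{j-1}^{n+1} - r u_{j+1}^{n+1}
            = (2 - 2r) u_j^n + r u_{j-1}^n + r u_{j+1}^n
    """
    m = len(u) - 2
    a = [-r] * m
    b = [2.0 + 2 * r] * m
    c = [-r] * m
    d = [(2 - 2 * r) * u[j] + r * (u[j - 1] + u[j + 1])
         for j in range(1, m + 1)]
    return [0.0] + solve_tridiagonal(a, b, c, d) + [0.0]

u = [0.0] * 11
u[5] = 1.0
for _ in range(10):
    u = crank_nicolson_step(u, 1.0)  # stable even with r = 1
print(u)
```

The solution decays toward zero as with BTCS, but averaging the spatial difference over the two time levels is what buys the scheme its second-order accuracy in time.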