A root-finding algorithm is a numerical method, or algorithm, for finding a value $x$ such that $f(x) = 0$, for a given function $f$. Such an $x$ is called a root of the function $f$.
Suppose $f$ is a continuous function defined on the interval $[a, b]$, with $f(a)$ and $f(b)$ of opposite sign. By the Intermediate Value Theorem, there exists a number $p$ in $(a, b)$ with $f(p) = 0$. Although the procedure will work when there is more than one root in the interval $(a, b)$, we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving of subintervals of $[a, b]$ and, at each step, locating the half containing $p$.
To begin, set $a_1 = a$ and $b_1 = b$, and let $p_1$ be the midpoint of $[a, b]$; that is,
$$p_1 = a_1 + \frac{b_1 - a_1}{2} = \frac{a_1 + b_1}{2}.$$
If $f(p_1) = 0$, then $p = p_1$, and we are done. If $f(p_1) \neq 0$, then $f(p_1)$ has the same sign as either $f(a_1)$ or $f(b_1)$. When $f(p_1)$ and $f(a_1)$ have the same sign, $p \in (p_1, b_1)$, and we set $a_2 = p_1$ and $b_2 = b_1$. When $f(p_1)$ and $f(a_1)$ have opposite signs, $p \in (a_1, p_1)$, and we set $a_2 = a_1$ and $b_2 = p_1$. We then reapply the process to the interval $[a_2, b_2]$.
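The halving procedure above can be sketched in Python. The tolerance, iteration cap, and the example function $f(x) = x^3 + 4x^2 - 10$ on $[1, 2]$ are illustrative choices, not part of the original text:

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Bisection sketch: repeatedly halve [a, b], keeping the half whose
    endpoint values have opposite signs, until the half-width is below tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = a
    for _ in range(max_iter):
        p = a + (b - a) / 2          # midpoint of the current interval
        fp = f(p)
        if fp == 0 or (b - a) / 2 < tol:
            return p
        if fa * fp > 0:              # f(p) and f(a) same sign: root in (p, b)
            a, fa = p, fp
        else:                        # opposite signs: root in (a, p)
            b = p
    return p

# Example: f(x) = x^3 + 4x^2 - 10 has a root near 1.36523 in [1, 2]
root = bisection(lambda x: x**3 + 4*x**2 - 10, 1.0, 2.0)
```

Note the midpoint is computed as `a + (b - a) / 2` rather than `(a + b) / 2`, a common safeguard against floating-point cancellation for large endpoints.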
A number $p$ is a fixed point for a given function $g$ if $g(p) = p$. In this section we consider the problem of finding solutions to fixed-point problems and the connection between the fixed-point problems and the root-finding problems we wish to solve. Root-finding problems and fixed-point problems are equivalent in the following sense:
Given a root-finding problem $f(p) = 0$, we can define functions $g$ with a fixed point at $p$ in a number of ways, for example, as $g(x) = x - f(x)$ or as $g(x) = x + 3f(x)$. Conversely, if the function $g$ has a fixed point at $p$, then the function defined by $f(x) = x - g(x)$ has a zero at $p$.
Although the problems we wish to solve are in the root-finding form, the fixed-point form is easier to analyze, and certain fixed-point choices lead to very powerful root-finding techniques.
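A fixed point can be sought by simple iteration, $p_n = g(p_{n-1})$. The sketch below assumes this iteration and a particular convergent example, $g(x) = (x + 2/x)/2$, whose fixed point is $\sqrt{2}$ (i.e., the root of $f(x) = x^2 - 2$); neither detail is taken from the original text:

```python
def fixed_point(g, p0, tol=1e-10, max_iter=100):
    """Fixed-point iteration sketch: generate p_n = g(p_{n-1}) until
    successive iterates differ by less than tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("fixed-point iteration did not converge")

# Assumed example: g(x) = (x + 2/x)/2 has fixed point sqrt(2)
approx = fixed_point(lambda x: (x + 2 / x) / 2, 1.0)
```

Not every choice of $g$ works: the naive $g(x) = x - f(x)$ for $f(x) = x^2 - 2$ diverges from most starting points, which is why the choice of fixed-point form matters so much in practice.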
- Newton’s method
Newton's (or the Newton-Raphson) method is one of the most powerful and well-known numerical methods for solving a root-finding problem. With an initial approximation $p_0$, Newton's method generates the sequence $\{p_n\}_{n=0}^{\infty}$ by
$$p_n = p_{n-1} - \frac{f(p_{n-1})}{f'(p_{n-1})}, \qquad n \geq 1.$$
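The iteration can be sketched as follows; the stopping test on successive iterates and the example problem $\cos x = x$ (i.e., $f(x) = \cos x - x$) are assumed for illustration:

```python
import math

def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    """Newton's method sketch: p_n = p_{n-1} - f(p_{n-1}) / f'(p_{n-1}),
    stopping when successive approximations agree to within tol."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / fprime(p)   # one Newton step
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("Newton's method did not converge")

# Assumed example: solve cos(x) = x, so f(x) = cos(x) - x, f'(x) = -sin(x) - 1
root = newton(lambda x: math.cos(x) - x,
              lambda x: -math.sin(x) - 1,
              math.pi / 4)
```

Each step requires evaluating both $f$ and $f'$; when the derivative is unavailable or expensive, the Secant method below avoids it.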
- The Secant method
To circumvent the problem of the derivative evaluation in Newton's method, we introduce a slight variation. By definition,
$$f'(p_{n-1}) = \lim_{x \to p_{n-1}} \frac{f(x) - f(p_{n-1})}{x - p_{n-1}}.$$
Letting $x = p_{n-2}$, we have
$$f'(p_{n-1}) \approx \frac{f(p_{n-2}) - f(p_{n-1})}{p_{n-2} - p_{n-1}}.$$
Using this approximation for $f'(p_{n-1})$ in Newton's formula gives the Secant method:
$$p_n = p_{n-1} - \frac{f(p_{n-1})\,(p_{n-1} - p_{n-2})}{f(p_{n-1}) - f(p_{n-2})}, \qquad n \geq 2.$$
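A sketch of the Secant iteration, reusing the assumed example $f(x) = \cos x - x$ from above (starting values and tolerance are illustrative):

```python
import math

def secant(f, p0, p1, tol=1e-10, max_iter=100):
    """Secant method sketch: replace f'(p_{n-1}) in Newton's formula with
    the difference quotient through the two most recent iterates."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # secant step
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1                        # shift the iterate pair
        p1, f1 = p2, f(p2)
    raise RuntimeError("secant method did not converge")

# Assumed example: solve cos(x) = x with starting values 0.5 and pi/4
root = secant(lambda x: math.cos(x) - x, 0.5, math.pi / 4)
```

Only one new function evaluation is needed per step, since $f(p_{n-1})$ is carried over from the previous iteration.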
- The method of False Position
The method of False Position (also called Regula Falsi) generates approximations in the same manner as the Secant method, but it includes a test to ensure that the root is bracketed between successive iterations. Although it is not a method we generally recommend, it illustrates how bracketing can be incorporated.
First choose initial approximations $p_0$ and $p_1$ with $f(p_0) \cdot f(p_1) < 0$. The approximation $p_2$ is chosen in the same manner as in the Secant method, as the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_1, f(p_1))$. To decide which secant line to use to compute $p_3$, we check $f(p_2) \cdot f(p_1)$. If this value is negative, then $p_1$ and $p_2$ bracket a root, and we choose $p_3$ as the $x$-intercept of the line joining $(p_1, f(p_1))$ and $(p_2, f(p_2))$. If not, we choose $p_3$ as the $x$-intercept of the line joining $(p_0, f(p_0))$ and $(p_2, f(p_2))$, and then interchange the indices on $p_0$ and $p_1$.
In a similar manner, once $p_3$ is found, the sign of $f(p_3) \cdot f(p_2)$ determines whether we use $p_2$ and $p_3$ or $p_3$ and $p_1$ to compute $p_4$. In the latter case a relabeling of $p_2$ and $p_1$ is performed.
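The bracketing variant can be sketched as a secant step followed by the sign test described above; the stopping criterion and the example $f(x) = \cos x - x$ are again assumptions for illustration:

```python
import math

def false_position(f, p0, p1, tol=1e-10, max_iter=100):
    """Method of False Position sketch: a secant step, plus a bracketing
    test so the root always stays between the two working points."""
    f0, f1 = f(p0), f(p1)
    if f0 * f1 > 0:
        raise ValueError("f(p0) and f(p1) must have opposite signs")
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)   # x-intercept of the secant line
        f2 = f(p2)
        if f2 == 0 or abs(p2 - p1) < tol:
            return p2
        if f2 * f1 < 0:
            p0, f0 = p1, f1    # p1 and p2 bracket the root: shift as usual
        # otherwise keep p0 (this is the relabeling that preserves the bracket)
        p1, f1 = p2, f2
    raise RuntimeError("false position did not converge")

# Assumed example: solve cos(x) = x on the bracket [0.5, pi/4]
root = false_position(lambda x: math.cos(x) - x, 0.5, math.pi / 4)
```

The only change from the Secant sketch is the sign test before shifting the pair of points, which is exactly the bracketing safeguard the text describes.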
Source: Richard L. Burden and J. Douglas Faires, Numerical Analysis, 8th edition, Thomson/Brooks/Cole, 2005.