Ngô Quốc Anh

August 1, 2009

Solutions of Equations in One Variable

A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f.

Suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b) of opposite sign. By the Intermediate Value Theorem, there exists a number p in (a, b) with f(p) = 0. Although the procedure will work when there is more than one root in the interval (a,b), we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving of subintervals of [a, b] and, at each step, locating the half containing p.

To begin, set a_1=a and b_1=b, and let p_1 be the midpoint of [a,b]; that is,

\displaystyle p_1 = a_1 + \frac{b_1-a_1}{2}=\frac{a_1+b_1}{2}.

If f(p_1)=0, then p=p_1, and we are done. If f(p_1)\ne 0, then f(p_1) has the same sign as either f(a_1) or f(b_1). When f(p_1) and f(a_1) have the same sign, p \in (p_1, b_1), and we set a_2=p_1 and b_2 = b_1. When f(p_1) and f(a_1) have opposite signs, p \in (a_1, p_1), and we set a_2 = a_1 and b_2 = p_1. We then reapply the process to the interval [a_2, b_2].
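The bisection procedure above can be sketched in a few lines of code. This is a minimal illustration, not the book's algorithm verbatim; the function name, tolerance, and iteration cap are choices made here for the example.

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        p = a + (b - a) / 2  # midpoint of the current bracket
        if f(p) == 0 or (b - a) / 2 < tol:
            return p
        # Keep the half of the interval whose endpoints still bracket the root.
        if f(a) * f(p) > 0:
            a = p
        else:
            b = p
    return a + (b - a) / 2

# Example: the root of x^3 + 4x^2 - 10 on [1, 2].
root = bisection(lambda x: x**3 + 4 * x**2 - 10, 1, 2)
```

Note that the midpoint is computed as a + (b - a)/2 rather than (a + b)/2; both are mathematically equal, but the former is slightly more robust in floating-point arithmetic.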

A number p is a fixed point for a given function g if g(p) = p. In this section we consider the problem of finding solutions to fixed-point problems and the connection between the fixed-point problems and the root-finding problems we wish to solve. Root-finding problems and fixed-point problems are equivalent in the following sense:

Given a root-finding problem f(p) = 0, we can define functions g with a fixed point at p in a number of ways, for example, as g(x) = x-f(x) or as g(x) = x + 3f(x). Conversely, if the function g has a fixed point at p, then the function defined by f(x) = x-g(x) has a zero at p.


Although the problems we wish to solve are in the root-finding form, the fixed-point form is easier to analyze, and certain fixed-point choices lead to very powerful root-finding techniques.
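The basic fixed-point technique is simply to iterate g. The sketch below is illustrative (the tolerance and iteration cap are choices made here); g(x) = cos(x) is a standard convergent example, and its fixed point solves the root-finding problem f(x) = x - cos(x) = 0, matching the equivalence described above.

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=200):
    """Iterate p = g(p) until successive iterates agree to within tol."""
    p = p0
    for _ in range(max_iter):
        p_next = g(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("fixed-point iteration did not converge")

# g(x) = cos(x) has a unique fixed point near 0.739; the same p is a
# zero of f(x) = x - cos(x).
p = fixed_point(math.cos, 1.0)
```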


  • Newton’s method

Newton’s (or the Newton-Raphson) method is one of the most powerful and well-known numerical methods for solving a root-finding problem. With an initial approximation p_0, Newton’s method generates the sequence \{p_n\}_{n=1}^\infty by

\displaystyle {p_{n + 1}} = {p_n} - \frac{{f\left( {{p_n}} \right)}} {{f'\left( {{p_n}} \right)}}, \quad n \geqslant 0.
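The iteration above translates directly into code. This sketch requires the derivative f' to be supplied explicitly; the function names and stopping criterion are illustrative choices for this example.

```python
def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n)."""
    p = p0
    for _ in range(max_iter):
        p_next = p - f(p) / fprime(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("Newton's method did not converge")

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

Starting from p_0 = 1, the iterates converge quadratically, roughly doubling the number of correct digits at each step.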


To circumvent the problem of the derivative evaluation in Newton’s method, we introduce a slight variation known as the Secant method. By definition,

\displaystyle f'\left( {{p_n}} \right) = \mathop {\lim }\limits_{x \to {p_n}} \frac{{f(x) - f({p_n})}}{{x - {p_n}}}.

Letting x=p_{n-1}, we have

\displaystyle f'\left( {{p_n}} \right) \approx \frac{{f\left( {{p_{n - 1}}} \right) - f\left( {{p_n}} \right)}} {{{p_{n - 1}} - {p_n}}}

which yields

\displaystyle {p_{n + 1}} = {p_n} - \frac{{f({p_n})\left( {{p_{n - 1}} - {p_n}} \right)}}{{f({p_{n - 1}}) - f({p_n})}}.
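The resulting Secant iteration needs two starting approximations but no derivative. The sketch below is a minimal illustration of the formula derived above; names and tolerances are choices made for the example.

```python
def secant(f, p0, p1, tol=1e-12, max_iter=100):
    """Secant method: Newton's iteration with f' replaced by a difference quotient."""
    for _ in range(max_iter):
        # x-intercept of the line joining (p0, f(p0)) and (p1, f(p1))
        p2 = p1 - f(p1) * (p0 - p1) / (f(p0) - f(p1))
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2  # slide the pair of approximations forward
    raise RuntimeError("secant method did not converge")

# Example: sqrt(2) as the positive root of f(x) = x^2 - 2.
root = secant(lambda x: x**2 - 2, 1.0, 2.0)
```

Each step reuses the previous function value, so the method costs only one new evaluation of f per iteration, versus one evaluation of f and one of f' for Newton's method.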


  • The method of False Position

The method of False Position (also called Regula Falsi) generates approximations in the same manner as the Secant method, but it includes a test to ensure that the root is bracketed between successive iterations. Although it is not a method we generally recommend, it illustrates how bracketing can be incorporated.

First choose initial approximations p_0 and p_1 with f(p_0) f(p_1) < 0. The approximation p_2 is chosen in the same manner as in the Secant method, as the x-intercept of the line joining (p_0, f(p_0)) and (p_1, f(p_1)). To decide which secant line to use to compute p_3, we check f(p_2) f(p_1). If this value is negative, then p_1 and p_2 bracket a root, and we choose p_3 as the x-intercept of the line joining (p_1, f(p_1)) and (p_2, f(p_2)). If not, we choose p_3 as the x-intercept of the line joining (p_0, f(p_0)) and (p_2, f(p_2)), and then interchange the indices on p_0 and p_1.


In a similar manner, once p_3 is found, the sign of f(p_3)f(p_2) determines whether we use p_2 and p_3 or p_3 and p_1 to compute p_4. In the latter case a relabeling of p_2 and p_1 is performed.
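The bracketing logic described above can be sketched as follows. This is an illustrative implementation, not the book's pseudocode; the stopping test on |f(p)| and the iteration cap are choices made here.

```python
def false_position(f, p0, p1, tol=1e-10, max_iter=200):
    """Regula Falsi: secant-style updates that always keep the root bracketed."""
    if f(p0) * f(p1) >= 0:
        raise ValueError("f(p0) and f(p1) must have opposite signs")
    p2 = p1
    for _ in range(max_iter):
        # x-intercept of the line joining (p0, f(p0)) and (p1, f(p1))
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))
        if abs(f(p2)) < tol:
            return p2
        if f(p1) * f(p2) < 0:
            p0, p1 = p1, p2  # p1 and p2 bracket the root: drop p0
        else:
            p1 = p2          # keep p0 so the bracket is preserved
    return p2

# Example: the root of x^3 + 4x^2 - 10 on [1, 2].
root = false_position(lambda x: x**3 + 4 * x**2 - 10, 1.0, 2.0)
```

Unlike the Secant method, one endpoint may be retained for many iterations, which guarantees the bracket but typically slows convergence to linear.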

Source: Richard L. Burden and J. Douglas Faires, Numerical Analysis, 8th edition, Thomson/Brooks/Cole, 2005.
