Ngô Quốc Anh

April 25, 2010

Explicit One-Step Schemes for the Advection Equation: The upwind scheme

Filed under: Analysis 9 (MA5265), PDEs — Ngô Quốc Anh @ 1:56

Let us consider a very simple class of schemes for solving the advection equation, known as the upwind schemes. Precisely, let us consider

\displaystyle u_t+cu_x=0, \quad x \in \mathbb R, t>0

together with the following initial data

\displaystyle u(x,0)=u_0(x), \quad x\in \mathbb R.

It is well-known that the above problem has a unique solution given by u(x, t) = u_0(x - ct), which is a traveling wave of speed c (moving to the right when c > 0).

Since the upwind schemes are well-known and appear in most numerical analysis textbooks, what I am trying to do here is to give some basic ideas together with their motivation.

First of all, the original upwind schemes are just explicit one-step schemes. Upwind schemes use an adaptive or solution-sensitive finite difference stencil in order to respect the direction in which information propagates through the flow field. In other words, upwind schemes discretize hyperbolic partial differential equations by using differencing biased in the direction determined by the sign of the characteristic speeds. Historically, the origin of upwind methods can be traced back to the work of Courant, Isaacson, and Rees, who proposed the CIR method.

Discretization. Like the finite difference methods, we need to use a grid with points (x_j, t^n) defined by

x_j = j\Delta x, \quad j \in \mathbb Z, \qquad t^n = n\Delta t, \quad n \geqslant 0,

for some spatial grid size \Delta x and time step \Delta t.
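
To fix ideas, here is a minimal sketch of the simplest member of the class for c > 0, namely the first-order scheme u_j^{n+1} = u_j^n - \frac{c\Delta t}{\Delta x}(u_j^n - u_{j-1}^n), whose stencil is biased toward the upwind side. It assumes Python with NumPy; the grid, the Gaussian initial data, and the CFL number 0.8 are illustrative choices.

    import numpy as np

    c, dx, dt = 1.0, 0.01, 0.008            # CFL number c*dt/dx = 0.8 <= 1
    x = np.arange(0.0, 2.0, dx)
    u = np.exp(-100 * (x - 0.5) ** 2)       # initial data u_0: a Gaussian pulse
    nu = c * dt / dx
    for n in range(100):
        # upwind update for c > 0: differencing biased toward the upwind side j-1
        u[1:] -= nu * (u[1:] - u[:-1])      # u[0] stays fixed (inflow value 0)
    print(x[np.argmax(u)])                  # the pulse has moved right by about c*100*dt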


April 22, 2010

The Bramble-Hilbert lemma

Filed under: Analysis 9 (MA5265) — Ngô Quốc Anh @ 14:27

In numerical analysis, the Bramble-Hilbert lemma, named after James H. Bramble and Stephen R. Hilbert, bounds the error of an approximation of a function u by a polynomial of order at most k in terms of derivatives of u of order k+1. Both the error of the approximation and the derivatives of u are measured by L^p norms on a bounded domain in \mathbb R^n.

Theorem. Over a sufficiently regular domain \Omega (for example, a Lipschitz domain; see the discussion below), there exists a constant C(\Omega) such that

\displaystyle\mathop {\inf }\limits_{v \in {P_k}(\Omega )} {\left\| {u - v} \right\|_{{W^{k + 1,p}}(\Omega )}} \leqslant C(\Omega ){\left| u \right|_{{W^{k + 1,p}}(\Omega )}}

for every u \in W^{k+1,p}(\Omega) where \| \cdot \| and | \cdot| denote the norm and semi-norm of the Sobolev space W^{k+1,p}(\Omega).

This is similar to classical numerical analysis, where, for example, the error of linear interpolation of u can be bounded using the second derivative of u. However, the Bramble-Hilbert lemma applies in any number of dimensions, not just one, and the approximation error and the derivatives of u are measured by more general norms involving averages, not just the maximum norm.

Additional assumptions on the domain are needed for the Bramble-Hilbert lemma to hold. Essentially, the boundary of the domain must be “reasonable”. For example, domains that have a spike or a slit with zero angle at the tip are excluded. Lipschitz domains are reasonable enough; these include convex domains and domains with C^1 boundary.

The main use of the Bramble-Hilbert lemma is to prove bounds on the error of interpolation of function u by an operator that preserves polynomials of order up to k, in terms of the derivatives of u of order k+1. This is an essential step in error estimates for the finite element method. The Bramble-Hilbert lemma is applied there on the domain consisting of one element.
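
As a sanity check of the kind of estimate the lemma delivers, take k = 1 in 1D: piecewise linear interpolation commits an error of order h^2 controlled by second derivatives. A minimal numerical check, assuming Python with NumPy (the test function \sin(\pi x) and the meshes are illustrative choices):

    import numpy as np

    # Interpolate u(x) = sin(pi x) by piecewise linears on meshes of size h = 1/n;
    # a Bramble-Hilbert-type bound predicts an O(h^2) error.
    u = lambda x: np.sin(np.pi * x)
    for n in (10, 20, 40, 80):
        xs = np.linspace(0.0, 1.0, n + 1)        # interpolation nodes
        xf = np.linspace(0.0, 1.0, 10 * n + 1)   # fine points to measure the error
        err = np.max(np.abs(u(xf) - np.interp(xf, xs, u(xs))))
        print(n, err)                            # err drops by about 4 when h is halved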


February 24, 2010

FEM for heat equations in 2D

Filed under: Analysis 9 (MA5265) — Ngô Quốc Anh @ 19:01

In a previous topic, we introduced the finite element method via the Poisson equation in 1D. Note that the FEM has some advantages over the FDM; for example, the mesh need not be uniform. Today, we discuss how to apply the FE method to some evolution equations. What we are going to do is to study the heat equation in 2D.

In the literature, the simplest model of the heat equation in 2D over a domain \Omega is given as follows

\displaystyle \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x_1^2} - \frac{\partial^2 u}{\partial x_2^2} = f(t,x_1,x_2)

where f is a given function. For simplicity, we write u_t - \Delta u = f(t,x). Unlike the heat equation in 1D, we cannot illustrate the solution u in a single picture, even for a few values of t. The point is that u has three variables, so a full picture of u would have to be drawn in 4D, which is impossible.

Due to the presence of t, we normally need to impose an initial condition, i.e., the solution u at t=0. The way to solve this equation numerically is to use, for example, the Backward Euler Method. We denote by dt the time step; if we already know the solution u at t=t_0, then we can solve for u at t=t_0+dt. The Backward Euler Method applied to this model equation gives

\displaystyle\frac{{u({t_0} + dt,x) - u({t_0},x)}}{{dt}} - \Delta u({t_0} + dt,x) = f({t_0} + dt,x).

Usually, for the sake of clarity, by n we mean that we are working on the n-th step; therefore, at the n-th step the time t is given by t_n=n\,dt, and the solution u is denoted by u^n(x), which is actually u(t_n, x). The time-discretized equation is then given by

\displaystyle\frac{{{u^n}(x) - {u^{n - 1}}(x)}}{{dt}} - \Delta {u^n}(x) = f({t_n},x).

We assume T is a triangulation of \Omega, that means, we divide the domain \Omega into small triangles. The vertices of the triangles, usually called nodes, are the points at which we want to approximate u; say there are M of them. We then denote by V_h the finite-dimensional subspace of H^1(\Omega) spanned by a basis \{\varphi_1,...,\varphi_M\} such that each u_h \in V_h is piecewise linear, in the sense that u_h is linear on each triangle of T.

The accompanying picture shows a case where \Omega is a ball. The idea of the FEM is to approximate the solution u by some u_h \in V_h such that u(x)=u_h(x) at every node x. Let us talk about the basis \{\varphi_1,...,\varphi_M\}. The function \varphi_i attached to the i-th node is chosen so that {\varphi_i}({\rm node}_i)=1 and {\varphi_i}({\rm node}_j)=0 for every j \ne i.

We are now interested in finding weak solutions to the heat equations. In this situation, one tries to find a function u_h \in V_h which satisfies the following equation

\displaystyle\frac{1}{{dt}}\int_\Omega {u_h^n(x)v(x)dx} - \frac{1}{{dt}}\int_\Omega {u_h^{n - 1}(x)v(x)dx} + \int_\Omega {\nabla u_h^n(x)\nabla v(x)dx} = \int_\Omega {f({t_n},x)v(x)dx}

for every v\in V_h.

Assume

\displaystyle u_h^i(x) = \sum\limits_{k = 1}^M {\xi _k^i{\varphi _k}(x)}

we then have

\displaystyle\begin{gathered} \sum\limits_{k = 1}^M {\xi _k^n\left( {\int_\Omega {{\varphi _k}(x)v(x)dx} + dt\int_\Omega {\nabla {\varphi _k}(x)\nabla v(x)dx} } \right)} \hfill \\ \qquad\qquad= \sum\limits_{k = 1}^M {\xi _k^{n - 1}\int_\Omega {{\varphi _k}(x)v(x)dx} } + dt\int_\Omega {f({t_n},x)v(x)dx}. \hfill \\ \end{gathered}

If we apply the above equation for v(x) being \varphi_j(x), we then have

\displaystyle\begin{gathered} \sum\limits_{k = 1}^M {\xi _k^n\left( {\int_\Omega {{\varphi _k}(x){\varphi _j}(x)dx} + dt\int_\Omega {\nabla {\varphi _k}(x)\nabla {\varphi _j}(x)dx} } \right)} \hfill \\ \qquad\qquad= \sum\limits_{k = 1}^M {\xi _k^{n - 1}\int_\Omega {{\varphi _k}(x){\varphi _j}(x)dx} } + dt\int_\Omega {f({t_n},x){\varphi _j}(x)dx} , \qquad j = \overline {1,M} . \hfill \\ \end{gathered}

In terms of linear algebra, the above equations can be rewritten as a linear system (B+dt\,A) \xi^n = B\xi^{n-1}+b where

\displaystyle B = \left( {\begin{array}{*{20}{c}} {\int_\Omega {{\varphi _1}(x){\varphi _1}(x)dx} } & \cdots & \cdots & {\int_\Omega {{\varphi _1}(x){\varphi _M}(x)dx} } \\ \vdots & \ddots & {} & \vdots \\ \vdots & {} & \ddots & \vdots \\ {\int_\Omega {{\varphi _M}(x){\varphi _1}(x)dx} } & \cdots & \cdots & {\int_\Omega {{\varphi _M}(x){\varphi _M}(x)dx} } \\ \end{array} } \right)

and

\displaystyle A = \left( {\begin{array}{*{20}{c}} {\int_\Omega  {{\nabla\varphi _1}(x){\nabla\varphi _1}(x)dx} } & \cdots & \cdots &  {\int_\Omega {{\nabla\varphi _1}(x){\nabla\varphi _M}(x)dx} } \\ \vdots & \ddots  & {} & \vdots \\ \vdots & {} & \ddots & \vdots \\  {\int_\Omega {{\nabla\varphi _M}(x){\nabla\varphi _1}(x)dx} } & \cdots &  \cdots & {\int_\Omega {{\nabla\varphi _M}(x){\nabla\varphi _M}(x)dx} } \\  \end{array} } \right)

and

\displaystyle b = dt\left( {\begin{array}{*{20}{c}} {\int_\Omega {f({t_n},x){\varphi _1}(x)dx} } \\ \vdots \\ \vdots \\ {\int_\Omega {f({t_n},x){\varphi _M}(x)dx} } \\ \end{array} } \right).

The rest of our task is to solve this system of linear equations.
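
To make this concrete, here is a minimal sketch of how A, B, b and one Backward Euler step could be coded, assuming Python with NumPy/SciPy, P1 (piecewise linear) elements, the exact formula for the element mass matrix, and one-point quadrature for the load. The arrays points (node coordinates) and triangles (triples of node indices) describing the triangulation are assumed given, and boundary conditions are ignored (which amounts to a natural Neumann condition); imposing Dirichlet data would additionally require restricting the system to interior nodes.

    import numpy as np
    from scipy.sparse import lil_matrix
    from scipy.sparse.linalg import spsolve

    def assemble(points, triangles):
        """Stiffness matrix A and mass matrix B for P1 elements."""
        N = len(points)
        A, B = lil_matrix((N, N)), lil_matrix((N, N))
        for tri in triangles:
            p = points[tri]                        # 3x2 vertex coordinates
            G = np.vstack([np.ones(3), p.T])       # rows [1; x; y]
            area = 0.5 * abs(np.linalg.det(G))
            grads = np.linalg.inv(G)[:, 1:]        # row i = grad(phi_i), constant per triangle
            A_loc = area * grads @ grads.T         # int_T grad(phi_i) grad(phi_j)
            B_loc = area / 12.0 * (np.ones((3, 3)) + np.eye(3))  # exact P1 mass matrix
            for a in range(3):
                for b in range(3):
                    A[tri[a], tri[b]] += A_loc[a, b]
                    B[tri[a], tri[b]] += B_loc[a, b]
        return A.tocsr(), B.tocsr()

    def load_vector(f, t_n, points, triangles):
        """Vector of int_Omega f(t_n, x) phi_j dx via one-point (centroid) quadrature."""
        F = np.zeros(len(points))
        for tri in triangles:
            p = points[tri]
            area = 0.5 * abs(np.linalg.det(np.vstack([np.ones(3), p.T])))
            F[tri] += area * f(t_n, p.mean(axis=0)) / 3.0  # each phi_j = 1/3 at the centroid
        return F

    def backward_euler_step(A, B, xi_prev, F, dt):
        """Solve (B + dt A) xi^n = B xi^{n-1} + dt F, the system derived above."""
        return spsolve(B + dt * A, B @ xi_prev + dt * F)

    # One step: xi = backward_euler_step(A, B, xi, load_vector(f, t_n, points, triangles), dt)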

Source: http://en.wikipedia.org/wiki/Finite_element_method and a book by Claes Johnson.

February 19, 2010

Numerical Quadrature Over Triangles

Filed under: Analysis 9 (MA5265) — Ngô Quốc Anh @ 21:17

In engineering analysis it is often necessary to numerically evaluate surface integrals over a particular geometric shape. The triangle is one of the most widely used shapes used to evaluate surface integrals arising in techniques such as the Finite Element Method (FEM) and the Method of Moments (MoM) in electromagnetics.

Numerical quadrature over triangles is typically done by sampling the function being integrated at discrete points on the triangle, multiplying each sample by a specially chosen quadrature weight, and summing these up, i.e.

\displaystyle I = \frac{1}{A} \iint_S F(r)\, ds \approx \sum_i w_i F(r_i)

where F(r_i) is the function sampled at quadrature point r_i on the triangle, and w_i is the quadrature weight for that point. The quadrature operation results in a quantity that is normalized by the triangle’s area A, hence the \frac{1}{A} coefficient.

I will present a set of quadrature points and weights useful for performing typical quadratures. These quadrature points are given in terms of what are called barycentric coordinates, also commonly referred to as area coordinates. Given a triangle defined by the three vertices v_1, v_2 and v_3, any point r inside that triangle can be written in terms of the barycentric coordinates \beta_1, \beta_2 and \beta_3 as

\displaystyle r = \beta_1 v_1 + \beta_2 v_2 + \beta_3 v_3.

Given a set of barycentric coordinates (quadrature points), each location r_i on the triangle may be found and the integrated function sampled. The weights to be presented are what are called Gauss-Legendre quadrature weights, the derivation of which I will not show here. Read numerical integration for a more in-depth discussion on this.

Below are tables of sampling points and weights. Apart from the edge midpoints in the three-point rule, the points lie in the interior of the triangle. There exist other quadrature formulas with points on the triangle edges; however, they are not suitable for the analysis that I do, so I don’t have them on hand.

One-point quadrature (center of triangle):

i    \beta_1       \beta_2       \beta_3       w_i
1    0.33333333    0.33333333    0.33333333    1.00000000

Three-point quadrature (midpoints of edges):

i    \beta_1       \beta_2       \beta_3       w_i
1    0.50000000    0.00000000    0.50000000    0.33333333
2    0.00000000    0.50000000    0.50000000    0.33333333
3    0.50000000    0.50000000    0.00000000    0.33333333

Four-point quadrature (note the negative weight at the center):

i    \beta_1       \beta_2       \beta_3       w_i
1    0.33333333    0.33333333    0.33333333   -0.56250000
2    0.60000000    0.20000000    0.20000000    0.52083333
3    0.20000000    0.60000000    0.20000000    0.52083333
4    0.20000000    0.20000000    0.60000000    0.52083333

Seven-point quadrature:

i    \beta_1       \beta_2       \beta_3       w_i
1    0.33333333    0.33333333    0.33333333    0.22500000
2    0.05961587    0.47014206    0.47014206    0.13239415
3    0.47014206    0.05961587    0.47014206    0.13239415
4    0.47014206    0.47014206    0.05961587    0.13239415
5    0.79742699    0.10128651    0.10128651    0.12593918
6    0.10128651    0.79742699    0.10128651    0.12593918
7    0.10128651    0.10128651    0.79742699    0.12593918

(In each rule the weights sum to 1, as the normalization by A above requires.)
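
As an illustration, here is a small sketch in Python (NumPy only) applying the seven-point rule: each sample point is formed as r_i = \beta_1 v_1 + \beta_2 v_2 + \beta_3 v_3, and the plain (un-normalized) integral is recovered by multiplying the weighted sum by the area A. The test integrand at the end is an arbitrary choice whose exact integral is 1/12.

    import numpy as np

    # Seven-point rule: (beta_1, beta_2, beta_3) and normalized weight w_i.
    SEVEN_POINT = [
        ((0.33333333, 0.33333333, 0.33333333), 0.22500000),
        ((0.05961587, 0.47014206, 0.47014206), 0.13239415),
        ((0.47014206, 0.05961587, 0.47014206), 0.13239415),
        ((0.47014206, 0.47014206, 0.05961587), 0.13239415),
        ((0.79742699, 0.10128651, 0.10128651), 0.12593918),
        ((0.10128651, 0.79742699, 0.10128651), 0.12593918),
        ((0.10128651, 0.10128651, 0.79742699), 0.12593918),
    ]

    def integrate_over_triangle(F, v1, v2, v3, rule=SEVEN_POINT):
        """Approximate the (un-normalized) integral of F over the triangle v1 v2 v3."""
        v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
        e1, e2 = v2 - v1, v3 - v1
        area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
        mean = sum(w * F(b1 * v1 + b2 * v2 + b3 * v3) for (b1, b2, b3), w in rule)
        return area * mean

    # Check on the unit right triangle: the integral of x^2 equals 1/12.
    print(integrate_over_triangle(lambda r: r[0] ** 2, (0, 0), (1, 0), (0, 1)))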

Source: http://everything2.com/title/Numerical+Quadrature+Over+Triangles

January 22, 2010

Finite element method: A brief introduction via a 2-point boundary value problem for the Laplace equation in 1D

Filed under: Small Exercises, Analysis 9 (MA5265), Scientific Research — Ngô Quốc Anh @ 17:05

This post follows an earlier topic where I discussed how to use the Finite Difference Method in numerical analysis. Today, we will study the so-called Finite Element Method, an improvement over the Finite Difference Method. We first consider the following problem

-u''(x)=f(x) where 0 \leq x \leq 1 with boundary conditions u(0)=0=u(1).

The above problem is usually called a two-point boundary value problem for the Laplace equation in 1D.

We first consider this problem using the Finite Difference Method. The idea of the Finite Difference Method is to split the interval [0,1] equally into n pieces by x_i=\frac{i}{n} where i=\overline{0,n}. We then approximate u'' by

\displaystyle u''({x_i}) \approx \frac{{{u_{i - 1}} - 2{u_i} + {u_{i + 1}}}}{{{h^2}}}, \quad i=\overline{1,n-1}

where h:=\frac{1}{n}. Therefore, we have the following system of linear equations

\displaystyle\frac{{ - 1}}{{{h^2}}}\left( {\begin{array}{*{20}{c}} { - 2} & 1 & {} & {} \\ 1 & { - 2} & \ddots & {} \\ {} & \ddots & \ddots & 1 \\ {} & {} & 1 & { - 2} \\ \end{array} } \right)\left( {\begin{array}{*{20}{c}} {{u_1}} \\ {{u_2}} \\ \vdots \\ {{u_{n - 1}}} \\ \end{array} } \right) = \left( {\begin{array}{*{20}{c}} {f({x_1})} \\ {f({x_2})} \\ \vdots \\ {f({x_{n - 1}})} \\ \end{array} } \right).

The idea of the Finite Element Method is to split the interval [0,1] into n pieces by x_i where i=\overline{0,n}, but not necessarily equally. In order to deal with the problem, we shall look for weak solutions. In the sense of distributions, u is called a weak solution if

\displaystyle\int_0^1 {\nabla u\nabla vdx} = \int_0^1 {f(x)vdx} ,\quad\forall v \in H_0^1([0,1]).

Under some conditions, for example if f \in L^2([0,1]), the existence of a weak solution is well understood via the Lax-Milgram theorem.

Let V_h be the collection of piecewise linear maps g on [0,1], i.e., maps that are linear on every interval [x_{i-1},x_i] where i=\overline{1,n}. We also suppose that g satisfies g(0)=g(1)=0. Clearly the space V_h is finite-dimensional, with a basis \{\phi_i\}_{i=1}^{n-1} of “hat functions” constructed as follows: \phi_i is the element of V_h whose nodal values are

\displaystyle {\phi _i}({x_j}) = \left\{ \begin{gathered}1,\quad j = i, \hfill \\ 0 , \quad j \ne i. \hfill \\ \end{gathered}\right.

It is easy to see that \phi_i \in H_0^1([0,1]).

We will find u as a linear combination of these functions \phi_i, denoted by u_h; to be exact, we will approximate u by u_h such that u(x_i)=u_h(x_i) for every i=\overline{1,n-1}. Precisely, let

\displaystyle {u_h}(x) = \sum\limits_{i = 1}^{n - 1} {{\xi _i}{\phi _i}(x)}

we then have

\displaystyle {\xi _i}= u({x_i}), \quad i = \overline {1,n - 1} .

In the equation defining the notion of weak solution, we replace v by \phi _i and u by u_h to get

\displaystyle\int_0^1 {\left( {\sum\limits_{j= 1}^{n - 1} {u({x_j}){{\phi '}_j}(x)} } \right){{\phi '}_i}(x)dx} = \int_0^1 {f(x){\phi _i}(x)dx}

which yields

\displaystyle\sum\limits_{j = 1}^{n - 1} {u({x_j})\int_0^1 {{{\phi '}_j}(x){{\phi '}_i}(x)dx} } = \int_0^1 {f(x){\phi _i}(x)dx}.

The above identity can be rewritten as a linear system Ax=b as follows

\displaystyle\left( {\begin{array}{*{20}{c}} {({\phi '_1},{\phi '_1})} & {({\phi '_2},{\phi '_1})} & \cdots & {({\phi '_{n - 1}},{\phi '_1})} \\ {({\phi '_1},{\phi '_2})} & {({\phi '_2},{\phi '_2})} & \cdots & {({\phi '_{n - 1}},{\phi '_2})} \\ \vdots & \vdots & \ddots & \vdots \\ {({\phi '_1},{\phi '_{n - 1}})} & {({\phi '_2},{\phi '_{n - 1}})} & \cdots & {({\phi '_{n - 1}},{\phi '_{n - 1}})} \\ \end{array} } \right)\left( {\begin{array}{*{20}{c}} {u({x_1})} \\ {u({x_2})} \\ \vdots \\ {u({x_{n - 1}})} \\ \end{array} } \right) = \left( {\begin{array}{*{20}{c}} {(f,{\phi _1})} \\ {(f,{\phi _2})} \\ \vdots \\ {(f,{\phi _{n - 1}})} \\ \end{array} } \right)

where we use the following notation

\displaystyle (f,g) = \int_0^1 {f(x)g(x)dx}.
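
For hat functions, the inner products (\phi'_j, \phi'_i) can be computed in closed form: with h_i = x_i - x_{i-1}, one finds (\phi'_i, \phi'_i) = 1/h_i + 1/h_{i+1} and (\phi'_i, \phi'_{i+1}) = -1/h_{i+1}, all other entries being zero, so the matrix is tridiagonal. Here is a minimal sketch in Python with NumPy, where the load integrals (f,\phi_i) are additionally approximated by the one-point rule f(x_i)(h_i + h_{i+1})/2 (an extra approximation, not made in the text):

    import numpy as np

    def fem_1d(f, x):
        """P1 FEM for -u'' = f on [0,1] with u(0) = u(1) = 0.
        x: nodes 0 = x_0 < x_1 < ... < x_n = 1, not necessarily uniform.
        Returns xi_1, ..., xi_{n-1}, approximating u at the interior nodes."""
        n = len(x) - 1
        h = np.diff(x)                      # h[i-1] = x_i - x_{i-1}
        A = np.zeros((n - 1, n - 1))        # stiffness matrix (phi'_j, phi'_i)
        b = np.zeros(n - 1)                 # load vector (f, phi_i)
        for i in range(1, n):               # loop over interior nodes x_i
            A[i - 1, i - 1] = 1.0 / h[i - 1] + 1.0 / h[i]
            if i < n - 1:
                A[i - 1, i] = A[i, i - 1] = -1.0 / h[i]
            b[i - 1] = f(x[i]) * (h[i - 1] + h[i]) / 2.0  # int phi_i = (h_{i-1}+h_i)/2
        return np.linalg.solve(A, b)

    # Example: f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
    x = np.linspace(0.0, 1.0, 21)
    xi = fem_1d(lambda t: np.pi ** 2 * np.sin(np.pi * t), x)
    print(np.max(np.abs(xi - np.sin(np.pi * x[1:-1]))))  # small quadrature-size error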

August 16, 2009

On the stability of the Runge-Kutta 4 (RK4)


In the literature, the so-called RK4 is given as follows: we first define the coefficients

\displaystyle \begin{gathered}{K_{1}}= f\left({{t_{i}},{y_{i}}}\right),\hfill\\ {K_{2}}= f\left({{t_{i}}+\frac{1}{2}\Delta t,{y_{i}}+\frac{1}{2}\Delta t{K_{1}}}\right),\hfill\\ {K_{3}}= f\left({{t_{i}}+\frac{1}{2}\Delta t,{y_{i}}+\frac{1}{2}\Delta t{K_{2}}}\right),\hfill\\ {K_{4}}= f\left({{t_{i}}+\Delta t,{y_{i}}+\Delta t{K_{3}}}\right),\hfill\\ \end{gathered}

then

\displaystyle {y_{i + 1}} = {y_i} + \frac{{\Delta t}} {6}\left( {{K_1} + 2{K_2} + 2{K_3} + {K_4}} \right).

This is one of the most important iterative methods for the approximation of solutions of ordinary differential equations

\displaystyle y' = f(t, y), \quad y(t_0) = y_0.

This technique was developed around 1900 by the German mathematicians C. Runge and M.W. Kutta. In order to study its stability, we use the model problem

\displaystyle y' = \lambda y, \quad \Re \lambda < 0.

In other words, we replace f(t,y) by \lambda y. Then the stability condition on the time step \Delta t comes from the requirement

\displaystyle \left| \frac{y_{i+1}}{y_i} \right| \leq 1.

Applying the above discussion to the RK4 method, we see that

\displaystyle\begin{gathered}{K_{4}}=\lambda\left({{y_{i}}+\Delta t{K_{3}}}\right),\hfill\\ {K_{3}}=\lambda\left({{y_{i}}+\frac{1}{2}\Delta t{K_{2}}}\right),\hfill\\ {K_{2}}=\lambda\left({{y_{i}}+\frac{1}{2}\Delta t{K_{1}}}\right),\hfill\\ {K_{1}}=\lambda{y_{i}},\hfill\\ \end{gathered}

which implies

\displaystyle\begin{gathered}{K_{4}}=\lambda\left({{y_{i}}+\Delta t{K_{3}}}\right) =\lambda{y_{i}}\left({1+\Delta t\lambda\left({1+\frac{1}{2}\Delta t\lambda\left({1+\frac{1}{2}\Delta t\lambda}\right)}\right)}\right),\hfill\\ {K_{3}}=\lambda\left({{y_{i}}+\frac{1}{2}\Delta t{K_{2}}}\right) =\lambda{y_{i}}\left({1+\frac{1}{2}\Delta t\lambda\left({1+\frac{1}{2}\Delta t\lambda}\right)}\right),\hfill\\ {K_{2}}=\lambda\left({{y_{i}}+\frac{1}{2}\Delta t{K_{1}}}\right) =\lambda{y_{i}}\left({1+\frac{1}{2}\Delta t\lambda }\right),\hfill\\ {K_{1}}=\lambda{y_{i}}.\hfill\\ \end{gathered}

Thus

\displaystyle\frac{{{y_{i+1}}}}{{{y_{i}}}}= 1+\frac{{\Delta t}}{6}\left[{\lambda+2\lambda\left({1+\frac{1}{2}\Delta t\lambda }\right)+2\lambda\left({1+\frac{1}{2}\Delta t\lambda\left({1+\frac{1}{2}\Delta t\lambda}\right)}\right)+\lambda\left({1+\Delta t\lambda\left({1+\frac{1}{2}\Delta t\lambda\left({1+\frac{1}{2}\Delta t\lambda}\right)}\right)}\right)}\right]

which is nothing but

\displaystyle\frac{{{y_{i+1}}}}{{{y_{i}}}}= 1+\Delta t\lambda+\frac{{{{\left({\Delta t\lambda }\right)}^{2}}}}{2}+\frac{{{{\left({\Delta t\lambda }\right)}^{3}}}}{6}+\frac{{{{\left({\Delta t\lambda }\right)}^{4}}}}{{24}}.

Therefore, writing z = \Delta t \lambda, the stability condition is given as follows

\displaystyle \left| {1 + z + \frac{{{z^2}}} {2} + \frac{{{z^3}}} {6} + \frac{{{z^4}}} {{24}}} \right| \leq 1, \quad \Re z < 0.
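
This region can be explored numerically; for instance, on the negative real axis the condition holds precisely for -2.785\ldots \leqslant z \leqslant 0, a classical fact which the following quick check, assuming Python with NumPy, reproduces:

    import numpy as np

    def R(z):
        """Amplification factor of RK4 for y' = lambda*y, with z = lambda*dt."""
        return 1 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

    z = np.linspace(-3.0, 0.0, 300001)
    stable = np.abs(R(z)) <= 1.0
    print(z[stable].min())   # about -2.7853: the real stability limit of RK4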

August 15, 2009

Summation by parts (Abel sum formula)


Suppose \{f_k\} and \{g_k\} are two sequences. Then,

\displaystyle\sum_{k=m}^n f_k(g_{k+1}-g_k) = \left[f_{n+1}g_{n+1} - f_m g_m\right] - \sum_{k=m}^n g_{k+1}(f_{k+1}- f_k).

Using the forward difference operator \Delta, it can be stated more succinctly as

\displaystyle\sum_{k=m}^n f_k\Delta g_k = \left[f_{n+1} g_{n+1} - f_m g_m\right] - \sum_{k=m}^n g_{k+1}\Delta f_k.

Note that summation by parts is an analogue to the integration by parts formula,

\displaystyle\int f\,dg = f g - \int g df.

We also have the following identity

\displaystyle\sum_{k=m}^n f_k(g_k-g_{k-1}) = \left[f_{n+1}g_n - f_m g_{m-1}\right] - \sum_{k=m}^n g_k(f_{k+1}- f_k).

Using the forward difference operator \Delta on the shifted sequence (so that \Delta g_{k-1} = g_k - g_{k-1} is a backward difference at k), it can be stated more succinctly as

\displaystyle\sum_{k=m}^n f_k\Delta g_{k-1} = \left[f_{n+1}g_n - f_m g_{m-1}\right] - \sum_{k=m}^n g_k \Delta f_k.
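
Both identities are easy to verify numerically; here is a quick check of the first one on random sequences, assuming Python with NumPy:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 2, 10
    f = rng.standard_normal(n + 2)    # f_0, ..., f_{n+1}
    g = rng.standard_normal(n + 2)    # g_0, ..., g_{n+1}

    k = np.arange(m, n + 1)           # k = m, ..., n
    lhs = np.sum(f[k] * (g[k + 1] - g[k]))
    rhs = f[n + 1] * g[n + 1] - f[m] * g[m] - np.sum(g[k + 1] * (f[k + 1] - f[k]))
    print(np.isclose(lhs, rhs))       # True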

August 1, 2009

Solutions of Equations in One Variable


A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f.

Suppose f is a continuous function defined on the interval [a, b], with f(a) and f(b) of opposite sign. By the Intermediate Value Theorem, there exists a number p in (a, b) with f(p) = 0. Although the procedure will work when there is more than one root in the interval (a,b), we assume for simplicity that the root in this interval is unique. The method calls for a repeated halving of subintervals of [a, b] and, at each step, locating the half containing p.

To begin, set a_1=a and b_1=b, and let p_1 be the midpoint of [a,b]; that is,

\displaystyle p_1 = a_1 + \frac{b_1-a_1}{2}=\frac{a_1+b_1}{2}.

If f(p_1)=0, then p=p_1, and we are done. If f(p_1)\ne 0, then f(p_1) has the same sign as either f(a_1) or f(b_1). When f(p_1) and f(a_1) have the same sign, p \in (p_1, b_1), and we set a_2=p_1 and b_2 = b_1. When f(p_1) and f(a_1) have opposite signs, p \in (a_1, p_1), and we set a_2 = a_1 and b_2 = p_1. We then reapply the process to the interval [a_2, b_2].
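
The procedure just described translates directly into code. A minimal sketch in Python, where the tolerance, the iteration cap, and the test equation x^3 + 4x^2 - 10 = 0 on [1,2] are illustrative choices:

    def bisection(f, a, b, tol=1e-10, max_iter=100):
        """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
        if f(a) * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            p = a + (b - a) / 2            # midpoint of the current bracket
            if f(p) == 0 or (b - a) / 2 < tol:
                break
            if f(a) * f(p) > 0:            # f(p), f(a) share a sign: root in (p, b)
                a = p
            else:                          # otherwise the root lies in (a, p)
                b = p
        return p

    # Root of x^3 + 4x^2 - 10 on [1, 2] (about 1.36523)
    print(bisection(lambda x: x**3 + 4 * x**2 - 10, 1.0, 2.0))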

A number p is a fixed point for a given function g if g(p) = p. In this section we consider the problem of finding solutions to fixed-point problems and the connection between fixed-point problems and the root-finding problems we wish to solve. Root-finding problems and fixed-point problems are equivalent in the following sense:

Given a root-finding problem f(p) = 0, we can define functions g with a fixed point at p in a number of ways, for example, as g(x) = x-f(x) or as g(x) = x + 3f(x). Conversely, if the function g has a fixed point at p, then the function defined by f(x) = x-g(x) has a zero at p.

Although the problems we wish to solve are in the root-finding form, the fixed-point form is easier to analyze, and certain fixed-point choices lead to very powerful root-finding techniques.
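
For instance, the generic fixed-point iteration p_{n+1} = g(p_n) takes only a few lines. A minimal sketch in Python; the example g(x) = cos x, whose fixed point solves cos x = x, is an illustrative choice:

    import math

    def fixed_point(g, p0, tol=1e-12, max_iter=100):
        """Iterate p_{n+1} = g(p_n) until successive values agree to tol."""
        p = p0
        for _ in range(max_iter):
            p_next = g(p)
            if abs(p_next - p) < tol:
                return p_next
            p = p_next
        return p

    # g(x) = cos x has a unique fixed point, about 0.7390851
    print(fixed_point(math.cos, 1.0))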

  • Newton’s method

Newton’s (or the Newton-Raphson) method is one of the most powerful and well-known numerical methods for solving a root-finding problem. With an initial approximation p_0, Newton’s method generates the sequence \{p_n\}_{n=1}^\infty by

\displaystyle {p_{n + 1}} = {p_n} - \frac{{f\left( {{p_n}} \right)}} {{f'\left( {{p_n}} \right)}}, \quad n \geqslant 0.
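
A minimal sketch in Python (the stopping rule |p_{n+1} - p_n| < tol and the example cos x - x = 0, the same root as in the fixed-point example, are illustrative choices):

    import math

    def newton(f, fprime, p0, tol=1e-12, max_iter=50):
        """Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n)."""
        p = p0
        for _ in range(max_iter):
            p_next = p - f(p) / fprime(p)
            if abs(p_next - p) < tol:
                return p_next
            p = p_next
        return p

    # Solve cos x = x, i.e. f(x) = cos x - x, starting from p0 = 1
    print(newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0))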

To circumvent the problem of the derivative evaluation in Newton’s method, we  introduce a slight variation. By definition,

\displaystyle f'({p_n}) = \mathop {\lim }\limits_{x \to {p_n}} \frac{{f(x) - f({p_n})}}{{x - {p_n}}}.

Letting x=p_{n-1}, we have

\displaystyle f'\left( {{p_n}} \right) \approx \frac{{f\left( {{p_{n - 1}}} \right) - f\left( {{p_n}} \right)}} {{{p_{n - 1}} - {p_n}}}

which yields

\displaystyle {p_{n + 1}} = {p_n} - \frac{{f({p_n})\left( {{p_{n - 1}} - {p_n}} \right)}}{{f({p_{n - 1}}) - f({p_n})}}.

This is precisely the Secant method.

  • The method of False Position

The method of False Position (also called Regula Falsi) generates approximations in the same manner as the Secant method, but it includes a test to ensure that the root is bracketed between successive iterations. Although it is not a method we generally recommend, it illustrates how bracketing can be incorporated.

First choose initial approximations p_0 and p_1 with f(p_0) f(p_1) < 0. The approximation p_2 is chosen in the same manner as in the Secant method, as the x-intercept of the line joining (p_0, f(p_0)) and (p_1, f(p_1)). To decide which secant line to use to compute p_3, we check f(p_2) f(p_1). If this value is negative, then p_1 and p_2 bracket a root, and we choose p_3 as the x-intercept of the line joining (p_1, f(p_1)) and (p_2, f(p_2)). If not, we choose p_3 as the x-intercept of the line joining (p_0, f(p_0)) and (p_2, f(p_2)), and then interchange the indices on p_0 and p_1.

In a similar manner, once p_3 is found, the sign of f(p_3)f(p_2) determines whether we use p_2 and p_3 or p_3 and p_1 to compute p_4. In the latter case a relabeling of p_2 and p_1 is performed.
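
A minimal sketch of this bookkeeping in Python (the tolerance and iteration cap are illustrative):

    import math

    def false_position(f, p0, p1, tol=1e-12, max_iter=100):
        """Regula Falsi: secant steps with a test that keeps the root bracketed."""
        q0, q1 = f(p0), f(p1)
        if q0 * q1 > 0:
            raise ValueError("f(p0) and f(p1) must have opposite signs")
        p2 = p1
        for _ in range(max_iter):
            p2 = p1 - q1 * (p1 - p0) / (q1 - q0)   # x-intercept of the secant line
            q2 = f(p2)
            if q2 == 0 or abs(p2 - p1) < tol:
                break
            if q2 * q1 < 0:        # p1 and p2 bracket the root, so drop p0 ...
                p0, q0 = p1, q1
            p1, q1 = p2, q2        # ... otherwise keep p0 (indices interchanged)
        return p2

    # Same root as before: f(x) = cos x - x with p0 = 0.5, p1 = 1
    print(false_position(lambda x: math.cos(x) - x, 0.5, 1.0))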

Source: Richard L. Burden and J. Douglas Faires, Numerical Analysis, 8th edition, Thomson/Brooks/Cole, 2005.

July 24, 2009

Linear shooting method


In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to the solution of an initial value problem. The following exposition may be clarified by this illustration of the shooting method.

For a boundary value problem of a second-order ordinary differential equation, the method is stated as follows. Let

y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y(t_1) = y_1

be the boundary value problem. Let y(t, a) denote the solution of the initial value problem

y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y'(t_0) = a

Define the function F(a) as the difference between y(t_1, a) and the specified boundary value y_1.

F(a) = y(t_1, a) - y_1.

If the boundary value problem has a solution, then F has a root, and that root is just the value of y'(t_0) which yields a solution y(t) of the boundary value problem. The usual methods for finding roots may be employed here, such as the bisection method or Newton’s method.

Linear shooting method

The boundary value problem is linear if f has the form

f(t, y(t), y'(t))=p(t)y'(t)+q(t)y(t)+r(t).

In this case, the solution to the boundary value problem is usually given by

y(t) = y_{(1)}(t)+\frac{y_1-y_{(1)}(t_1)}{y_{(2)}(t_1)}y_{(2)}(t)

where y_{(1)}(t) is the solution to the initial value problem

y''(t) = f(t, y(t), y'(t)),\quad y(t_0) = y_0, \quad y'(t_0) = 0,

and y_{(2)}(t) is the solution to the initial value problem

y''(t) = p(t)y'(t)+q(t)y(t),\quad y(t_0) = 0, \quad y'(t_0) = 1.

See the proof for the precise condition under which this result holds.

Remark

Note that y_{(2)} is a nontrivial solution of the homogeneous equation, since y'_{(2)}(t_0) = 1; under the usual hypotheses guaranteeing a unique solution of the boundary value problem, one moreover has y_{(2)}(t_1) \ne 0. The formula

y(t) = y_{(1)}(t)+\frac{y_1-y_{(1)}(t_1)}{y_{(2)}(t_1)}y_{(2)}(t)

comes from the fact that we need to find y(t) as a combination of y_{(1)}(t) and y_{(2)}(t); since the equation is linear and y_{(2)} solves the homogeneous equation, any such combination is again a solution. In this manner, y(t) should be of the form

y(t) = y_{(1)}(t) + Cy_{(2)}(t) for all t.

At t=t_0,

{y_0} = y({t_0}) = \underbrace {{y_{(1)}}({t_0})}_{{y_0}} + C{y_{(2)}}({t_0})

which requires C{y_{(2)}}({t_0}) = 0; this is automatically satisfied, since y_{(2)}(t_0)=0 by construction.
At t=t_1,

{y_1} = y({t_1}) = {y_{(1)}}({t_1}) + C{y_{(2)}}({t_1})

which implies that

C = \frac{{{y_1} - {y_{(1)}}({t_1})}} {{{y_{(2)}}({t_1})}}.

Therefore,

y(t) = {y_{(1)}}(t) + \frac{{{y_1} - {y_{(1)}}({t_1})}} {{{y_{(2)}}({t_1})}}{y_{(2)}}(t).
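
Putting the two initial value problems together, here is a minimal sketch in Python using scipy.integrate.solve_ivp. The function linear_shooting and the sample problem y'' = y, y(0) = 0, y(1) = 1 (exact solution \sinh t/\sinh 1) are illustrative, as are the tolerances:

    import numpy as np
    from scipy.integrate import solve_ivp

    def linear_shooting(p, q, r, t0, t1, y0, y1, n=101):
        """Solve y'' = p(t) y' + q(t) y + r(t), y(t0) = y0, y(t1) = y1."""
        ts = np.linspace(t0, t1, n)
        # y_(1): the full equation with y(t0) = y0, y'(t0) = 0
        s1 = solve_ivp(lambda t, Y: [Y[1], p(t) * Y[1] + q(t) * Y[0] + r(t)],
                       (t0, t1), [y0, 0.0], t_eval=ts, rtol=1e-10, atol=1e-10)
        # y_(2): the homogeneous equation with y(t0) = 0, y'(t0) = 1
        s2 = solve_ivp(lambda t, Y: [Y[1], p(t) * Y[1] + q(t) * Y[0]],
                       (t0, t1), [0.0, 1.0], t_eval=ts, rtol=1e-10, atol=1e-10)
        C = (y1 - s1.y[0, -1]) / s2.y[0, -1]    # the constant derived above
        return ts, s1.y[0] + C * s2.y[0]

    # y'' = y, y(0) = 0, y(1) = 1; exact solution sinh(t)/sinh(1)
    ts, y = linear_shooting(lambda t: 0.0, lambda t: 1.0, lambda t: 0.0,
                            0.0, 1.0, 0.0, 1.0)
    print(np.max(np.abs(y - np.sinh(ts) / np.sinh(1.0))))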

Example

A boundary value problem is given as follows by Stoer and Bulirsch.

w''(t) = \frac{3}{2} w^2, \quad w(0) = 4, \quad w(1) = 1.

The initial value problem

w''(t) = \frac{3}{2} w^2, \quad w(0) = 4, \quad w'(0) = s

was solved for s = -1, -2, -3, ..., -100, and F(s) = w(1,s) - 1 is plotted in the first figure. Inspecting the plot of F, we see that there are roots near -8 and -36. Some trajectories of w(t,s) are shown in the second figure.

Solutions of the initial value problem were computed by using the LSODE algorithm, as implemented in the mathematics package GNU Octave. Stoer and Bulirsch state that there are two solutions, which can be found by algebraic methods. These correspond to the initial conditions w'(0) = -8 and w'(0) = -35.9 (approximately).

https://i0.wp.com/upload.wikimedia.org/wikipedia/commons/8/87/Shooting_method_error.png

The function F(s) = w(1,s)-1.

https://i0.wp.com/upload.wikimedia.org/wikipedia/commons/2/2d/Shooting_method_trajectories.png

Trajectories w(t,s) for s = w'(0) equal to -7, -8, -10, -36, and -40 (red, green, blue, cyan, and magenta, respectively). The point (1,1) is marked with a red diamond.
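
This computation is easy to reproduce. A sketch in Python with SciPy, where the brackets [-10, -5] and [-40, -30] are read off the plot of F, and the tolerances are illustrative:

    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def F(s):
        """F(s) = w(1, s) - 1 for w'' = 1.5 w^2, w(0) = 4, w'(0) = s."""
        sol = solve_ivp(lambda t, Y: [Y[1], 1.5 * Y[0] ** 2],
                        (0.0, 1.0), [4.0, s], rtol=1e-10, atol=1e-10)
        return sol.y[0, -1] - 1.0

    print(brentq(F, -10.0, -5.0))     # close to -8
    print(brentq(F, -40.0, -30.0))    # close to -35.9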

Source: http://en.wikipedia.org/wiki/Shooting_method

June 30, 2009

A quick introduction to Simplex Method via an example

Filed under: Analysis 9 (MA5265), Miscellaneous, Scientific Research — Ngô Quốc Anh @ 14:36

The original page presents each step as a picture of a simplex tableau, so only the narration is reproduced here.

The original problem is stated first, with the objective function Z = x1 + 2x2 - x3 shown in green (see step 1 of the Simplex Method). Slack variables, shown in red, are then introduced to turn the inequality constraints into equations (step 2). This produces the simplex tableau; comparing its red symbols with Z = x1 + 2x2 - x3, the blue numbers form the “ISM”. Note that the first simplex tableau has no z-column (correction by Steve Hulet). Following steps 3, 4, 5 of the Simplex Method, one handles the indicators, ratios, and pivots: four row operations are needed to pivot on the number “5” encircled in red, and the results of those row operations form the next tableau, whose blue numbers are the new ISM.

Since one indicator (namely -1/5) remains negative, we must repeat steps 3-9. The pivot is first changed to “1”, and then four more row operations pivot on the number “16/5” encircled in red. Here there was a tie for the least non-negative ratio: either row 1 or row 2 could have become the pivot row, and either choice leads to the final tableau after one additional pivoting; three last row operations finish the job.

All indicators {0, 0, 49/16, 0, 1/16, 3/8} are now zero or bigger (the “13” in the lower right corner is NOT an indicator). Thus, as in step 8 of the Simplex Method, the last tableau is a FINAL TABLEAU and the row operations of the Simplex Method are done. The basic solution read off from this tableau is the solution to our original problem:

[1st] Set equal to 0 all variables NOT associated with the blue ISM. Each column of the final tableau has a label naming its variable.

[2nd] Convert each row of the final tableau (except the bottom row) back into equation form to find the values of the remaining variables. The value of the objective function is in the lower right corner of the final tableau.
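
The problem data itself lives in the images, but it can be reconstructed from the fragments above: the objective Z = x1 + 2x2 - x3, the pivots on 5 and 16/5, the indicator -1/5, the final indicators, and the value 13 are all consistent with the reconstruction below, which should nevertheless be treated as an assumption. A sketch in Python using scipy.optimize.linprog:

    from scipy.optimize import linprog

    # Reconstructed data (the tableaux live in images on the source page):
    # maximize Z = x1 + 2 x2 - x3 subject to
    #   2 x1 +   x2 +   x3 <= 14
    #   4 x1 + 2 x2 + 3 x3 <= 28
    #   2 x1 + 5 x2 + 5 x3 <= 30,   x1, x2, x3 >= 0.
    c = [-1.0, -2.0, 1.0]            # linprog minimizes, so negate Z
    A_ub = [[2, 1, 1], [4, 2, 3], [2, 5, 5]]
    b_ub = [14, 28, 30]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
    print(res.x, -res.fun)           # x = (5, 4, 0) with optimal value Z = 13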

Source: http://math.uww.edu/~mcfarlat/s-prob.htm

More: http://people.hofstra.edu/stefan_Waner/realworld/tutorialsf4/unit4_3.html
