Ngô Quốc Anh

February 28, 2010

Wave equations: The L^2-norm estimates

Filed under: Nghiên Cứu Khoa Học, PDEs — Tags: — Ngô Quốc Anh @ 12:06

We now discuss the L^2-norm of solutions to homogeneous wave equations. Let u(x,t) be a solution of the Cauchy problem

\displaystyle u_{tt}- \Delta u =0, \quad x \in \mathbb R^3, t>0

with the following conditions

\displaystyle u(x,0)=f(x), \quad u_t(x,0)=g(x), \quad x\in \mathbb R^3.

Assume that f, g \in C_0^\infty(\mathbb R^3). If

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_{\mathbb R^3} {u{{(x,t)}^2}dx} = 0

then

\displaystyle\int_{\mathbb R^3} {g(x)dx} = 0.

Proof. Let R>0 such that

\displaystyle {\rm supp}(f),{\rm supp}(g) \subset B_R = \{x \in \mathbb R^3, |x|<R\}.

It is not difficult to verify that

\displaystyle {\rm supp}(u(\cdot,t)) \subset \{ x\in \mathbb R^3, t-R<|x|<t+R\}

for any t>R.

By using the Green formula, we get the following equality from the equation

\displaystyle\int_{{\mathbb{R}^3}} {{u_{tt}}(x,t)dx} = \int_{{\mathbb{R}^3}} {\Delta u(x,t)dx} = 0.

Therefore, we have

\displaystyle\frac{d}{{dt}}\int_{{\mathbb{R}^3}} {{u_t}(x,t)dx} = 0,

and

\displaystyle\int_{{\mathbb{R}^3}} {{u_t}(x,t)dx} = \int_{{\mathbb{R}^3}} {g(x)dx},

and

\displaystyle\int_{{\mathbb{R}^3}} {u(x,t)dx} = \int_{{\mathbb{R}^3}} {f(x)dx} + t\int_{{\mathbb{R}^3}} {g(x)dx}

by the initial conditions. On the other hand, by the Cauchy–Schwarz inequality, it follows that

\displaystyle\begin{gathered} \int_{{\mathbb{R}^3}} {u(x,t)dx} \leqslant {\left( {\int_{{\mathbb{R}^3}} {u{{(x,t)}^2}dx} } \right)^{\frac{1}{2}}}{\left( {\int_{t - R < |x| < t + R} {dx} } \right)^{\frac{1}{2}}} \hfill \\ \qquad= {\left( {\frac{8}{3}\pi R} \right)^{\frac{1}{2}}}{\left( {3{t^2} + {R^2}} \right)^{\frac{1}{2}}}{\left( {\int_{{\mathbb{R}^3}} {u{{(x,t)}^2}dx} } \right)^{\frac{1}{2}}} \hfill \\ \end{gathered}

for any t>R. As t \to \infty, the left-hand side grows linearly in t whenever \int_{\mathbb R^3} g(x)dx \ne 0, while the right-hand side is o(t) by hypothesis. This is a contradiction provided that

\displaystyle\int_{{\mathbb{R}^3}} {g(x)dx} \ne 0.

We now try to find the limit

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_{\mathbb  R^3} {u{{(x,t)}^2}dx}

in the general case. For simplicity, we assume f \equiv 0. We will show that there exists a non-negative constant c such that

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_{\mathbb   R^3} {u{{(x,t)}^2}dx} = c.

Proof. Let \widehat u denote the Fourier transform of u with respect to the variable x. By taking the Fourier transform of the wave equation and the initial conditions we can get

\displaystyle\widehat u(\xi ,t) = \widehat g(\xi )\frac{{\sin (\left| \xi \right|t)}}{{\left| \xi \right|}}.

Therefore

\displaystyle\begin{gathered} \int_{{\mathbb{R}^3}} {u{{(x,t)}^2}dx} = \frac{1}{{{{(2\pi )}^3}}}\int_{{\mathbb{R}^3}} {|\widehat u(\xi ,t){|^2}d\xi } \hfill \\ \qquad= \frac{1}{{{{(2\pi )}^3}}}\int_{{\mathbb{R}^3}} {|\widehat g(\xi ){|^2}\frac{{{{\sin }^2}(\left| \xi \right|t)}}{{{{\left| \xi \right|}^2}}}d\xi } \hfill \\ \qquad= \frac{1}{{2{{(2\pi )}^3}}}\int_{{\mathbb{R}^3}} {{{\left| \xi \right|}^{ - 2}}|\widehat g(\xi ){|^2}d\xi } - \frac{1}{{2{{(2\pi )}^3}}}\int_{{\mathbb{R}^3}} {{{\left| \xi \right|}^{ - 2}}|\widehat g(\xi ){|^2}\cos (2\left| \xi \right|t)d\xi } . \hfill \\ \end{gathered}

Noting

\displaystyle {\left| \xi \right|^{ - 2}}|\widehat g(\xi ){|^2} \in {L^1}({\mathbb{R}^3})

we have

\displaystyle\mathop {\lim }\limits_{t \to \infty } \frac{1}{{2{{(2\pi )}^3}}}\int_{{\mathbb{R}^3}} {{{\left| \xi \right|}^{ - 2}}|\widehat g (\xi ){|^2}\cos (2\left| \xi \right|t)d\xi } = 0.

We then get

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_{{\mathbb{R}^3}} {u{{(x,t)}^2}dx} = \underbrace {\frac{1}{{2{{(2\pi )}^3}}}\int_{{\mathbb{R}^3}} {{{\left| \xi \right|}^{ - 2}}|\widehat g (\xi )|^2d\xi } }_c.

Note that this constant c is positive if g \not\equiv 0.


Heat equations: The L^2-norm estimates

Filed under: Nghiên Cứu Khoa Học, PDEs — Tags: — Ngô Quốc Anh @ 11:00

In the last topic we considered a pointwise estimate for the homogeneous heat equation. The conclusion was the following: if the initial data is L^2-bounded, then the solution u decays like t^{-\frac{1}{4}}. Today, we consider another phenomenon. Assume \Omega \subset \mathbb R^n is an open set with smooth boundary and suppose u \in C^\infty(\Omega \times [0, \infty)) is a solution of

\displaystyle u_t - \Delta u = f

with

\displaystyle u=0

on the boundary \partial \Omega \times [0, \infty). If

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_\Omega {f{{(x,t)}^2}dx} = 0

then

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_\Omega  {u{{(x,t)}^2}dx} = 0.

Proof. The method used here is very standard when we deal with energy estimates or when we want to estimate the L^2-norm of the solution.

Multiplying the equation by u and integrating over \Omega, we get

\displaystyle \frac{d}{{dt}}\left( {\frac{1}{2}\int_\Omega {u{{(x,t)}^2}dx} } \right) + \int_\Omega {{{\left| {\nabla u(x,t)} \right|}^2}dx} = \int_\Omega {f(x,t)u(x,t)dx}.

The Poincaré inequality gives

\displaystyle\int_\Omega {u{{(x,t)}^2}dx} \leqslant C\int_\Omega {{{\left| {\nabla u(x,t)} \right|}^2}dx}

where C>0 is a constant. For any \varepsilon>0, the Young inequality yields

\displaystyle\left| {\int_\Omega {f(x,t)u(x,t)dx} } \right| \leqslant \frac{1}{2}\left( {\varepsilon \int_\Omega {u{{(x,t)}^2}dx} + \frac{1}{\varepsilon }\int_\Omega {f{{(x,t)}^2}dx} } \right).

Taking \varepsilon=\frac{1}{C} we obtain

\displaystyle\begin{gathered} \frac{d}{{dt}}\left( {\frac{1}{2}\int_\Omega {u{{(x,t)}^2}dx} } \right) + \frac{1}{C}\int_\Omega {u{{(x,t)}^2}dx} \hfill \\ \qquad\leqslant \frac{d}{{dt}}\left( {\frac{1}{2}\int_\Omega {u{{(x,t)}^2}dx} } \right) + \int_\Omega {{{\left| {\nabla u(x,t)} \right|}^2}dx} \hfill \\ \qquad\leqslant \left| {\int_\Omega {f(x,t)u(x,t)dx} } \right| \hfill \\ \qquad\leqslant \frac{1}{2}\left( {\frac{1}{C}\int_\Omega {u{{(x,t)}^2}dx} + C\int_\Omega {f{{(x,t)}^2}dx} } \right) \hfill \\ \end{gathered}

which implies

\displaystyle\frac{d}{{dt}}\left( {\int_\Omega {u{{(x,t)}^2}dx} } \right) + \frac{1}{C}\int_\Omega {u{{(x,t)}^2}dx} \leqslant C\int_\Omega {f{{(x,t)}^2}dx}.

Solving this differential inequality, we have

Theorem (L^2 estimates).

\displaystyle\int_\Omega {u{{(x,t)}^2}dx} \leqslant {e^{ - \frac{t}{C}}}\int_\Omega {u{{(x,0)}^2}dx} + C\int_0^t {{e^{ - \frac{{t - \tau }}{C}}}\left( {\int_\Omega {f{{(x,\tau )}^2}dx} } \right)d\tau } .

It is not difficult to verify that

\displaystyle\mathop {\lim }\limits_{t \to \infty } \int_0^t {{e^{ - \frac{{t - \tau }}{C}}}\left( {\int_\Omega {f{{(x,\tau )}^2}dx} } \right)d\tau } = 0.

This completes the proof.

Note that in the above proof we used a result similar to the Gronwall inequality. Precisely, the statement is as follows.

Lemma. If the function \psi(x,t) satisfies

\displaystyle\frac{d}{{dt}}\psi (x,t) + \alpha (t)\psi (x,t) \leqslant \beta (t)

we then have

\displaystyle\psi (x,t) \leqslant {e^{ - \int_0^t {\alpha (\tau )} d\tau }}\psi (x,0) + \int_0^t {{e^{ - \int_\tau ^t {\alpha (s)} ds}}\beta (\tau )d\tau }.

The proof of this statement is quite simple. We first write

\displaystyle\varphi (x,t) = {e^{\int_0^t {\alpha (\tau )} d\tau }}\psi (x,t).

Clearly

\displaystyle \frac{d}{{dt}}\varphi (x,t) = {e^{\int_0^t {\alpha (\tau )} d\tau }}\left( {\frac{d}{{dt}}\psi (x,t) + \alpha (t)\psi (x,t)} \right) \leqslant \beta (t){e^{\int_0^t {\alpha (\tau )} d\tau }}

which gives, after integrating with respect to t,

\displaystyle \varphi (x,t) \leqslant \varphi (x,0) + \int_0^t {\beta (\tau ){e^{\int_0^\tau {\alpha (s)} ds}}d\tau } .

Thus

\displaystyle {e^{\int_0^t {\alpha (\tau )} d\tau }}\psi (x,t) \leqslant \psi (x,0) + \int_0^t {\beta (\tau ){e^{\int_0^\tau {\alpha (s)} ds}}d\tau } .

Hence

\displaystyle \psi (x,t) \leqslant {e^{ - \int_0^t {\alpha (\tau )} d\tau }}\psi (x,0) + \int_0^t {\beta (\tau ){e^{ - \int_\tau ^t {\alpha (s)} ds}}d\tau }.

The proof follows.
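As a quick numerical sanity check of the lemma (an illustration only; the choices of \alpha, \beta, \psi(0) and T below are arbitrary), we can integrate the equality case \frac{d}{dt}\psi + \alpha(t)\psi = \beta(t) with a Runge–Kutta stepper and compare with the formula from the lemma, where it should hold with equality:

```python
import math

alpha = lambda t: 1.0 + math.sin(t)     # arbitrary test data
beta  = lambda t: math.exp(-t)

def rk4(t0, psi0, t1, n=2000):
    """Integrate psi' = beta(t) - alpha(t)*psi (the equality case) with RK4."""
    h = (t1 - t0) / n
    F = lambda t, p: beta(t) - alpha(t) * p
    t, psi = t0, psi0
    for _ in range(n):
        k1 = F(t, psi)
        k2 = F(t + h/2, psi + h*k1/2)
        k3 = F(t + h/2, psi + h*k2/2)
        k4 = F(t + h, psi + h*k3)
        psi += h * (k1 + 2*k2 + 2*k3 + k4) / 6
        t += h
    return psi

def lemma_bound(t, psi0, n=4000):
    """e^{-A(t)} psi0 + int_0^t e^{-(A(t)-A(s))} beta(s) ds, where A' = alpha."""
    h = t / n
    A = [0.0]                            # cumulative trapezoid rule for A(s)
    for i in range(n):
        s = i * h
        A.append(A[-1] + h * (alpha(s) + alpha(s + h)) / 2)
    integ = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        integ += w * math.exp(-(A[n] - A[i])) * beta(i * h)
    return math.exp(-A[n]) * psi0 + h * integ

# in the equality case the lemma holds with equality
assert abs(rk4(0.0, 2.0, 3.0) - lemma_bound(3.0, 2.0)) < 1e-4
```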

February 26, 2010

Wave equations: The first pointwise estimates

Filed under: Nghiên Cứu Khoa Học, PDEs — Tags: — Ngô Quốc Anh @ 19:15

Following the topic where we discussed a pointwise estimate for solutions of a class of heat equations, we are now interested in the case of wave equations.

Consider the wave equation in \mathbb R^3

\displaystyle u_{tt}-\Delta u =0, \quad x\in \mathbb R^3, t>0

together with the following initial data

\displaystyle u(x,0)=0, \quad u_t(x,0)=g(x).

We assume g \in C_0^\infty(\mathbb R^3). We will prove the following estimate

\displaystyle |u(x,t)| \leq \frac{C}{t}, \quad t>0

for some constant C depending only on the given data.

Proof. The assumption on g implies the existence of two positive constants R and M such that

\displaystyle {\rm supp}(g) \subset B_R = \{ x \in \mathbb R^3:|x|<R\}

and

\displaystyle |g(x)| \leq M, \quad \forall x \in \mathbb R^3.

By the Poisson formula (better known as the Kirchhoff formula) in 3D, the solution to the problem is given by

\displaystyle u(x,t)=\frac{1}{4\pi t} \int_{|y-x|=t} {g(y)dS_y}.

It is not difficult to verify that the area of the intersection \{ y \in \mathbb R^3:|y-x|=t\} \cap B_R is at most the area of \partial B_R. Therefore, we obtain

\displaystyle |u(x,t)| \leq\frac{1}{4\pi t} M 4\pi R^2 = \frac{MR^2}{t}, \quad t>0.
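Let us also check the bound numerically. The sketch below (an illustration, not part of the proof) evaluates the Kirchhoff formula by Monte Carlo sampling on the sphere |y-x|=t, for the hypothetical datum g(y)=(1-|y|^2)^2 on the unit ball, which has M=R=1:

```python
import math, random

def g(y):
    """Smooth bump with supp(g) in the unit ball: g = (1 - |y|^2)^2 there."""
    r2 = y[0]**2 + y[1]**2 + y[2]**2
    return (1.0 - r2)**2 if r2 < 1.0 else 0.0

def kirchhoff(x, t, n=200000, seed=0):
    """u(x,t) = (1/(4 pi t)) * (surface integral of g over |y - x| = t),
    which equals t times the spherical average of g; Monte Carlo estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.uniform(-1.0, 1.0)                 # uniform direction on S^2
        th = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        y = (x[0] + t*s*math.cos(th), x[1] + t*s*math.sin(th), x[2] + t*z)
        total += g(y)
    return t * total / n

M, R = 1.0, 1.0                                    # sup|g| and support radius
for t in (2.0, 4.0, 8.0):
    x = (t, 0.0, 0.0)       # chosen so the sphere |y - x| = t meets supp(g)
    u = kirchhoff(x, t)
    assert 0.0 <= u <= M * R**2 / t                # the claimed MR^2/t bound
```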

Heat equations: The first pointwise estimates

Filed under: Các Bài Tập Nhỏ, PDEs — Ngô Quốc Anh @ 17:08

I am going to discuss a series of results involving pointwise estimates of heat and wave equations. Let us start with the following  simple problem

Consider the Cauchy problem for the heat operator in \mathbb R

\displaystyle u_t = u_{xx} \quad x\in \mathbb R, t>0

with the initial condition

\displaystyle u(x,0)=f(x), \quad x \in \mathbb R.

We assume f is bounded, continuous, and satisfies

\displaystyle\int_{ - \infty }^{ + \infty } {{{\left| {f(x)} \right|}^2}dx} < \infty.

We will show that there exists a constant C such that

\displaystyle \left| {u(x,t)} \right| \leqslant \frac{C}{{{t^{\frac{1}{4}}}}}

for all x \in \mathbb R and t>0.

Proof. The solution to the problem is given by the Poisson formula

\displaystyle u(x,t) = \frac{1}{{\sqrt {4\pi t} }}\int_{ - \infty }^{ + \infty } {f(y){e^{ - \frac{{{{(x - y)}^2}}}{{4t}}}}dy} .

By the Cauchy–Schwarz inequality, we have

\displaystyle\left| {u(x,t)} \right| \leqslant \frac{1}{{\sqrt {4\pi t} }}{\left( {\int_{ - \infty }^{ + \infty } {{{\left| {f(y)} \right|}^2}dy} } \right)^{\frac{1}{2}}}{\left( {\int_{ - \infty }^{ + \infty } {{e^{ - \frac{{{{(x - y)}^2}}}{{2t}}}}dy} } \right)^{\frac{1}{2}}}.

Set y=x+\sqrt{2t}\eta in the second integral of the above inequality. Then we immediately get

\displaystyle\int_{ - \infty }^{ + \infty } {{e^{ - \frac{{{{(x - y)}^2}}}{{2t}}}}dy} = \int_{ - \infty }^{ + \infty } {{e^{ - {\eta ^2}}}\sqrt {2t}\, d\eta } = \sqrt {2t} \int_{ - \infty }^{ + \infty } {{e^{ - {\eta ^2}}}d\eta } .

Thus

\displaystyle\left| {u(x,t)} \right| \leqslant \frac{1}{{\sqrt {4\pi } {t^{\frac{1}{2}}}}}{\left( {\int_{ - \infty }^{ + \infty } {{{\left| {f(y)} \right|}^2}dy} } \right)^{\frac{1}{2}}}{\left( {\int_{ - \infty }^{ + \infty } {{e^{ - {\eta ^2}}}d\eta } } \right)^{\frac{1}{2}}}{\left( {2t} \right)^{\frac{1}{4}}} = \frac{C}{{{t^{\frac{1}{4}}}}}

for some constant C.
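Let us verify the decay numerically. The sketch below (an illustration only) computes the heat-kernel convolution for the Gaussian datum f(y)=e^{-y^2} by the trapezoid rule; with the kernel (4\pi t)^{-1/2}e^{-(x-y)^2/(4t)} for u_t=u_{xx}, the argument above produces for this particular f the explicit constant C=\frac{1}{2}:

```python
import math

def heat_solution(x, t, f, lim=12.0, n=4000):
    """u(x,t) as the convolution of f with the 1D heat kernel
    (4*pi*t)^(-1/2) * exp(-(x-y)^2/(4t)), via the trapezoid rule."""
    h = 2.0 * lim / n
    total = 0.0
    for i in range(n + 1):
        y = -lim + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-(x - y)**2 / (4.0 * t)) * f(y)
    return total * h / math.sqrt(4.0 * math.pi * t)

f = lambda y: math.exp(-y * y)     # initial data; int f^2 = sqrt(pi/2)
C = 0.5                            # the constant the proof gives for this f
for t in (0.5, 1.0, 4.0, 16.0):
    u0t = heat_solution(0.0, t, f)
    # exact answer for Gaussian data: u(0,t) = 1/sqrt(1+4t)
    assert abs(u0t - 1.0 / math.sqrt(1.0 + 4.0 * t)) < 1e-6
    assert u0t <= C / t**0.25      # the claimed t^(-1/4) decay
```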

Note that the above pointwise estimate shows that the bound on u decays in time t. This is reasonable because, for example, the heat equation predicts that if a hot body is placed in a box of cold water, the temperature of the body will decrease, and eventually (after infinite time, and in the absence of external heat sources) the temperature in the box will equalize.

February 25, 2010

Double Kelvin transform being the identity map


In this topic, we proved a very interesting property involving the Laplacian of the Kelvin transform of a function. Recall that, for a given function u, its Kelvin transform is defined to be

\displaystyle {u^\sharp }(x) = \frac{1}{{{{\left| x \right|}^{n - 2}}}}u\left( {\frac{x}{{{{\left| x \right|}^2}}}} \right).

We then have

\displaystyle\Delta {u^\sharp }(x) = |{x^\sharp }{|^{n + 2}}\Delta u\left( {{x^\sharp }} \right)

where the inversion point x^\sharp of x is defined to be

\displaystyle {x^\sharp } = \frac{x}{{\left| x \right|^2}}.

Therefore, the Kelvin transform can be defined to be

\displaystyle {u^\sharp }({x^\sharp }) = \frac{1}{{{{\left| {{x^\sharp }} \right|}^{n - 2}}}}u\left( x \right).

We also have another formula

\displaystyle\Delta {u^\sharp }({x^\sharp }) = |{({x^\sharp })^\sharp }{|^{n + 2}}\Delta u\left( {{{({x^\sharp })}^\sharp }} \right) = {\left| x \right|^{n + 2}}\Delta u(x).

The right-hand side of the above identity involves \Delta u(x); we can rewrite it in terms of (\Delta u)^\sharp. Actually, we have the following

\displaystyle {(\Delta u)^\sharp }({x^\sharp }) = \frac{1}{{{{\left| {{x^\sharp }} \right|}^{n -2}}}}\Delta u(x)

which gives

\displaystyle\Delta {u^\sharp }({x^\sharp }) = {\left| x \right|^{n + 2}}\Delta u(x) = {\left| x \right|^{n + 2}}{\left| {{x^\sharp }} \right|^{n - 2}}{(\Delta u)^\sharp }({x^\sharp }) =|x|^4 {(\Delta u)^\sharp }({x^\sharp }).

Today, we study a more general form of the Kelvin transform. The above definition of the Kelvin transform is with respect to the origin and the unit ball in \mathbb R^n. We are now interested in the case when the Kelvin transform is defined with respect to a fixed ball. The following result is adapted from a paper due to M.C. Leung published in Math. Ann. in 2003.

Define the reflection on the sphere with center at \xi and radius a>0 by

\displaystyle {R_{\xi ,a}}(x) = \xi + {a^2}\frac{{x - \xi }}{{{{\left| {x - \xi } \right|}^2}}}, \quad x \ne \xi .

It is straightforward to check that

\displaystyle {R_{\xi ,a}}({R_{\xi ,a}}(x)) = x,\quad \forall x \ne \xi .

We observe that

\displaystyle \xi + {a^2}\frac{{x - \xi }}{{{{\left| {x - \xi } \right|}^2}}} \ne \xi , \quad \forall x \ne \xi .

The  Kelvin transform with center at \xi and radius a>0 of a function u is given by

\displaystyle u_{\xi ,a}^\sharp (x) = \frac{{{a^{n - 2}}}}{{{{\left| {x - \xi } \right|}^{n - 2}}}}u\left( {\xi + {a^2}\frac{{x - \xi }}{{{{\left| {x - \xi } \right|}^2}}}} \right), \quad \forall x \ne \xi .

We remark that if \xi and a are fixed, and if u(y)\gg 1, then u_{\xi ,a}^\sharp (x) \gg 1 as well, where y=R_{\xi, a}(x). In addition, R_{\xi, a} sends a set of small diameter not too close to \xi to a set of small diameter. We now verify that the double Kelvin transform is the identity map. We have

\displaystyle\begin{gathered} (u_{\xi ,a}^\sharp )_{\xi ,a}^\sharp (x) = \frac{{{a^{n - 2}}}}{{{{\left| {x - \xi } \right|}^{n - 2}}}}u_{\xi ,a}^\sharp \left( {\xi + {a^2}\frac{{x - \xi }}{{{{\left| {x - \xi } \right|}^2}}}} \right) \hfill \\ \qquad\qquad= \frac{{{a^{n - 2}}}}{{{{\left| {x - \xi } \right|}^{n - 2}}}}\frac{{{a^{n - 2}}}}{{\frac{{{a^{2(n - 2)}}}}{{{{\left| {x - \xi } \right|}^{n - 2}}}}}}u\left( {\xi + {a^2}\frac{{{a^2}\frac{{x - \xi }}{{{{\left| {x - \xi } \right|}^2}}}}}{{\frac{{{a^4}}}{{{{\left| {x - \xi } \right|}^2}}}}}} \right) \hfill \\ \qquad\qquad= u(x), \quad\forall x \ne \xi . \hfill \\ \end{gathered}
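This identity can also be checked numerically. The sketch below (an illustration; the dimension, center, radius and test function are arbitrary choices) applies the Kelvin transform twice and compares with the original function at random points:

```python
import math, random

n = 5                                   # dimension of R^n (arbitrary, n > 2)

def kelvin(u, xi, a):
    """Kelvin transform with center xi and radius a of a function u on R^n."""
    def u_sharp(x):
        d2 = sum((x[i] - xi[i])**2 for i in range(n))      # |x - xi|^2
        y = tuple(xi[i] + a * a * (x[i] - xi[i]) / d2 for i in range(n))
        return (a**(n - 2) / d2**((n - 2) / 2)) * u(y)     # d2^{(n-2)/2} = |x-xi|^{n-2}
    return u_sharp

u = lambda x: math.exp(-sum(t * t for t in x))  # an arbitrary test function
xi, a = (1.0, -2.0, 0.5, 0.0, 3.0), 1.7
uu = kelvin(kelvin(u, xi, a), xi, a)            # the double Kelvin transform

rng = random.Random(1)
for _ in range(20):
    x = tuple(rng.uniform(-2.0, 2.0) for _ in range(n))
    assert abs(uu(x) - u(x)) < 1e-9             # identity map, up to rounding
```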

February 24, 2010

FEM for heat equations in 2D

Filed under: Giải tích 9 (MA5265) — Ngô Quốc Anh @ 19:01

In this topic, we introduced the finite element method via the Poisson equation in 1D. Note that the FEM has some advantages over the FDM; for example, the mesh is not necessarily uniform. Today, we discuss how to apply this FE method to some evolution equations. What we are going to do is to study the heat equation in 2D.

In the literature, the simplest model of the heat equations in 2D over a domain \Omega is given as follows

\displaystyle \frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x_1^2} - \frac{\partial^2 u}{\partial x_2^2} = f(t,x_1,x_2)

where f is a given function. For simplicity, we write u_t - \Delta u = f(t,x). Unlike the heat equation in 1D, we cannot illustrate the solution u in a single picture for several values of t. The point is that u has 3 variables, so a full picture of u would require a plot in 4D, which is impossible.

Due to the presence of t, we normally need to impose an initial condition, i.e., the solution u at t=0. One way to solve this equation numerically is to use, for example, the backward Euler method. We denote by dt the time step; if we already know the solution u at t=t_0, then we can solve for u at t=t_0+dt. The backward Euler method applied to this model equation gives

\displaystyle\frac{{u({t_0} + dt,x) - u({t_0},x)}}{{dt}} - \Delta u({t_0} + dt,x) = f({t_0} + dt,x).

Usually, for the sake of clarity, by n we mean that we are working at the n-th step; therefore, at the n-th step the time t is given by t_n=n\,dt, and the solution u is denoted by u^n(x), which is actually u(t_n, x). The time-discretized equation is then given by

\displaystyle\frac{{{u^n}(x) - {u^{n - 1}}(x)}}{{dt}} - \Delta {u^n}(x) = f({t_n},x).

We assume T is a triangulation of \Omega; that means we divide the domain \Omega into small triangles. Vertices of the triangles, usually called nodes, are the points at which we want to approximate u. We denote by V_h the finite-dimensional subspace of H^1(\Omega) with a basis \{\varphi_1,...,\varphi_M\}, one basis function per node, such that every u_h \in V_h is piecewise linear in the sense that u_h is linear on each triangle of T.

The above picture shows a case where \Omega is a ball. The idea of the FEM is to approximate the solution u by some u_h \in V_h such that u(x)=u_h(x) at every node x. Let us talk about the basis \{\varphi_1,...,\varphi_M\}. The function \varphi_i associated with the i-th node is chosen so that \varphi_i({\rm node}_i)=1 and \varphi_i({\rm node}_j)=0 for every j \ne i.

We are now interested in finding weak solutions to the heat equations. In this situation, one tries to find a function u_h \in V_h which satisfies the following equation

\displaystyle\frac{1}{{dt}}\int_\Omega {u_h^n(x)v(x)dx} - \frac{1}{{dt}}\int_\Omega {u_h^{n - 1}(x)v(x)dx} + \int_\Omega {\nabla u_h^n(x)\nabla v(x)dx} = \int_\Omega {f({t_n},x)v(x)dx}

for every v\in V_h.

Assume

\displaystyle u_h^i(x) = \sum\limits_{k = 1}^M {\xi _k^i{\varphi _k}(x)}

we then have

\displaystyle\begin{gathered} \sum\limits_{k = 1}^M {\xi _k^n\left( {\int_\Omega {{\varphi _k}(x)v(x)dx} + dt\int_\Omega {\nabla {\varphi _k}(x)\nabla v(x)dx} } \right)} \hfill \\ \qquad\qquad= \sum\limits_{k = 1}^M {\xi _k^{n - 1}\int_\Omega {{\varphi _k}(x)v(x)dx} } + dt\int_\Omega {f({t_n},x)v(x)dx} . \hfill \\ \end{gathered}

If we apply the above equation for v(x) being \varphi_j(x), we then have

\displaystyle\begin{gathered} \sum\limits_{k = 1}^M {\xi _k^n\left( {\int_\Omega {{\varphi _k}(x){\varphi _j}(x)dx} + dt\int_\Omega {\nabla {\varphi _k}(x)\nabla {\varphi _j}(x)dx} } \right)} \hfill \\ \qquad\qquad= \sum\limits_{k = 1}^M {\xi _k^{n - 1}\int_\Omega {{\varphi _k}(x){\varphi _j}(x)dx} } + dt\int_\Omega {f({t_n},x){\varphi _j}(x)dx} , \qquad j = \overline {1,M} . \hfill \\ \end{gathered}

In terms of linear algebra, the above equations can be rewritten as the linear system (B+dt\,A) \xi^n = B\xi^{n-1}+b where

\displaystyle B = \left( {\begin{array}{*{20}{c}} {\int_\Omega {{\varphi _1}(x){\varphi _1}(x)dx} } & \cdots & \cdots & {\int_\Omega {{\varphi _1}(x){\varphi _M}(x)dx} } \\ \vdots & \ddots & {} & \vdots \\ \vdots & {} & \ddots & \vdots \\ {\int_\Omega {{\varphi _M}(x){\varphi _1}(x)dx} } & \cdots & \cdots & {\int_\Omega {{\varphi _M}(x){\varphi _M}(x)dx} } \\ \end{array} } \right)

and

\displaystyle A = \left( {\begin{array}{*{20}{c}} {\int_\Omega  {{\nabla\varphi _1}(x){\nabla\varphi _1}(x)dx} } & \cdots & \cdots &  {\int_\Omega {{\nabla\varphi _1}(x){\nabla\varphi _M}(x)dx} } \\ \vdots & \ddots  & {} & \vdots \\ \vdots & {} & \ddots & \vdots \\  {\int_\Omega {{\nabla\varphi _M}(x){\nabla\varphi _1}(x)dx} } & \cdots &  \cdots & {\int_\Omega {{\nabla\varphi _M}(x){\nabla\varphi _M}(x)dx} } \\  \end{array} } \right)

and

\displaystyle b = dt\left( {\begin{array}{*{20}{c}} {\int_\Omega {f({t_n},x){\varphi _1}(x)dx} } \\ \vdots \\ \vdots \\ {\int_\Omega {f({t_n},x){\varphi _M}(x)dx} } \\ \end{array} } \right).

The rest of our task is to solve this system of linear equations.
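To make the assembly concrete, here is a minimal sketch of one step of the scheme (B + dt\,A)\xi^n = B\xi^{n-1} + b, with the mass matrix B and stiffness matrix A as above (an illustration only: a uniform triangulation of the unit square, dense matrices, f = 0, and no boundary condition imposed):

```python
import numpy as np

# P1 elements on a uniform triangulation of the unit square (illustration only).
m = 8                              # m x m squares, each split into two triangles
h = 1.0 / m
nodes = [(i * h, j * h) for j in range(m + 1) for i in range(m + 1)]
idx = lambda i, j: j * (m + 1) + i
tris = []
for j in range(m):
    for i in range(m):
        a, b, c, d = idx(i, j), idx(i + 1, j), idx(i, j + 1), idx(i + 1, j + 1)
        tris += [(a, b, d), (a, d, c)]

M = len(nodes)
A = np.zeros((M, M))               # stiffness matrix: int grad(phi_k).grad(phi_j)
B = np.zeros((M, M))               # mass matrix:      int phi_k phi_j
for (p, q, r) in tris:
    (x1, y1), (x2, y2), (x3, y3) = nodes[p], nodes[q], nodes[r]
    det = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    area = abs(det) / 2
    # gradients of the three barycentric (hat) functions on this triangle
    g = np.array([[y2 - y3, x3 - x2], [y3 - y1, x1 - x3], [y1 - y2, x2 - x1]]) / det
    loc = [p, q, r]
    for a in range(3):
        for b in range(3):
            A[loc[a], loc[b]] += area * (g[a] @ g[b])
            B[loc[a], loc[b]] += area / 6 if a == b else area / 12

# one backward Euler step (B + dt*A) xi^n = B xi^{n-1}, i.e. f = 0 here
dt = 0.01
xi0 = np.array([np.sin(np.pi * x) * np.sin(np.pi * y) for (x, y) in nodes])
xi1 = np.linalg.solve(B + dt * A, B @ xi0)

assert abs(B.sum() - 1.0) < 1e-12   # sum_k phi_k = 1, so entries of B sum to |Omega|
assert abs(A.sum()) < 1e-10         # and the entries of A sum to 0
assert xi1 @ B @ xi1 <= xi0 @ B @ xi0   # discrete L^2-norm does not grow
```

The first two assertions exploit the partition of unity \sum_k \varphi_k = 1; the last one checks that the discrete L^2-norm \xi^T B \xi does not increase, which backward Euler guarantees since A is positive semi-definite.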

Source: http://en.wikipedia.org/wiki/Finite_element_method and a book due to Claes Johnson.

February 22, 2010

The Poincaré inequality: W^{1,p} vs. W_0^{1,p}

Filed under: Giải Tích 6 (MA5205), Giải tích 8 (MA5206), Nghiên Cứu Khoa Học, PDEs — Tags: — Ngô Quốc Anh @ 1:50

In mathematics, the Poincaré inequality is a result in the theory of Sobolev spaces, named after the French mathematician Henri Poincaré. The inequality allows one to obtain bounds on a function using bounds on its derivatives and the geometry of its domain of definition. Such bounds are of great importance in the modern, direct methods of the calculus of variations. A very closely related result is the Friedrichs’ inequality.

This topic will cover two versions of the Poincaré inequality, one for W^{1,p}(\Omega) spaces and the other for W_0^{1,p}(\Omega) spaces.

The classical Poincaré inequality for W^{1,p}(\Omega) spaces. Assume that 1\leq p \leq \infty and that \Omega is a bounded open subset of the n-dimensional Euclidean space \mathbb R^n with a Lipschitz boundary (i.e., \Omega is an open, bounded Lipschitz domain). Then there exists a constant C, depending only on \Omega and p, such that for every function u in the Sobolev space W^{1,p}(\Omega),

\displaystyle \| u - u_{\Omega} \|_{L^{p} (\Omega)} \leq C \| \nabla u \|_{L^{p} (\Omega)},

where

\displaystyle u_{\Omega} = \frac{1}{|\Omega|} \int_{\Omega} u(y) \, \mathrm{d} y

is the average value of u over \Omega, with |\Omega| standing for the Lebesgue measure of the domain \Omega.

Proof. We argue by contradiction. Were the stated estimate false, there would exist for each integer k = 1,... a function u_k \in W^{1,p}(\Omega) satisfying

\displaystyle \| u_k - (u_k)_{\Omega} \|_{L^{p} (\Omega)} \geq k \|  \nabla u_k \|_{L^{p} (\Omega)}.

We renormalize by defining

\displaystyle {v_k} = \frac{{{u_k} - {{({u_k})}_\Omega }}}{{{{\left\| {{u_k} - {{({u_k})}_\Omega }} \right\|}_{{L^p}(\Omega )}}}}, \quad k \geqslant 1.

Then

\displaystyle {({v_k})_\Omega } = 0, \quad {\left\| {{v_k}} \right\|_{{L^p}(\Omega )}} = 1

and therefore

\displaystyle\|  \nabla v_k \|_{L^{p} (\Omega)} \leqslant \frac{1}{k}.

In particular the functions \{v_k\}_{k\geq 1} are bounded in W^{1,p}(\Omega).

By means of the Rellich–Kondrachov theorem, there exist a subsequence {\{ {v_{{k_j}}}\} _{j \geqslant 1}} \subset {\{ {v_k}\} _{k \geqslant 1}} and a function v \in L^p(\Omega) such that

\displaystyle v_{k_j} \to v in L^p(\Omega).

Passing to the limit, one easily gets

\displaystyle v_\Omega = 0, \quad {\left\| {{v}}  \right\|_{{L^p}(\Omega )}} = 1.

On the other hand, for each i=\overline{1,n} and \varphi \in C_0^\infty(\Omega),

\displaystyle\int_\Omega {v{\varphi _{{x_i}}}dx} = \mathop {\lim }\limits_{{k_j} \to \infty } \int_\Omega {{v_{{k_j}}}{\varphi _{{x_i}}}dx} = - \mathop {\lim }\limits_{{k_j} \to \infty } \int_\Omega {{v_{{k_j},{x_i}}}\varphi dx} = 0.

Consequently, v\in W^{1,p}(\Omega) with \nabla v=0 a.e. Thus v is constant, since \Omega is connected. Since v_\Omega=0, we conclude v \equiv 0. This contradicts \|v\|_{L^p(\Omega)}=1.

The Poincaré inequality for W_0^{1,2}(\Omega) spaces. Assume that \Omega is a bounded open subset of the n-dimensional Euclidean space \mathbb R^n with a Lipschitz boundary (i.e., \Omega is an open, bounded Lipschitz domain). Then there exists a constant C, depending only on \Omega such that for every function u in the Sobolev space W_0^{1,2}(\Omega),

\displaystyle \| u \|_{L^2(\Omega)} \leq C \| \nabla u   \|_{L^2(\Omega)}.

Proof. Assume \Omega can be enclosed in a cube

\displaystyle Q=\{ x \in \mathbb R^n: |x_i| \leqslant a, 1\leqslant i \leqslant n\}.

Then for any x \in Q, we have

\displaystyle\begin{gathered} {u^2}(x) = {\left( {\int_{ - a}^{{x_1}} {{u_{{x_1}}}(t,{x_2},...,{x_n})dt} } \right)^2} \hfill \\ \qquad\leqslant ({x_1} + a)\int_{ - a}^{{x_1}} {{{({u_{{x_1}}})}^2}dt} \hfill \\ \qquad\leqslant 2a\int_{ - a}^a {{{({u_{{x_1}}})}^2}dt} . \hfill \\ \end{gathered}.

Thus

\displaystyle\int_{ - a}^a {{u^2}(x)dx} \leqslant 4{a^2}\int_{ - a}^a {{{({u_{{x_1}}})}^2}dt}.

Integration over x_2,...,x_n from -a to a gives the result.
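The one-dimensional step of this argument is easy to illustrate numerically (the function u below, vanishing at x=\pm a, and the value of a are arbitrary choices):

```python
import math

a = 1.5                                          # half-width of the cube in 1D
u  = lambda x: math.cos(math.pi * x / (2 * a))   # vanishes at x = -a and x = a
du = lambda x: -math.pi / (2 * a) * math.sin(math.pi * x / (2 * a))

def trap(f, lo, hi, n=2000):
    """Composite trapezoid rule."""
    h = (hi - lo) / n
    return h * (sum(f(lo + i * h) for i in range(1, n)) + (f(lo) + f(hi)) / 2)

lhs = trap(lambda x: u(x)**2, -a, a)
rhs = 4 * a * a * trap(lambda x: du(x)**2, -a, a)
assert lhs <= rhs                                # int u^2 <= 4a^2 int (u')^2
assert abs(lhs - a) < 1e-6                       # exact value here: int u^2 = a
```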

The Poincaré inequality for W_0^{1,p}(\Omega) spaces. Assume that 1\leq p<n and that \Omega is a bounded open subset of the n-dimensional Euclidean space \mathbb R^n with a Lipschitz boundary (i.e., \Omega is an open, bounded Lipschitz domain). Then there exists a constant C, depending only on \Omega and p, such that for every function u in the Sobolev space W_0^{1,p}(\Omega),

\displaystyle \| u \|_{L^{p^\star} (\Omega)} \leq C \| \nabla u  \|_{L^{p} (\Omega)},

where p^\star is defined to be \frac{np}{n-p}.

Proof. The proof of this version is essentially the same as the proof in the W^{1,p}(\Omega) case.

Remark. The condition u =0 on the boundary of \Omega is important; otherwise, constant functions would not satisfy the Poincaré inequality. In order to avoid this restriction, the average of u is subtracted, as in the classical Poincaré inequality for the W^{1,p}(\Omega) case. Sometimes, the Poincaré inequality for W_0^{1,p}(\Omega) spaces is called the Sobolev inequality.

February 19, 2010

Numerical Quadrature Over Triangles

Filed under: Giải tích 9 (MA5265) — Ngô Quốc Anh @ 21:17

In engineering analysis it is often necessary to numerically evaluate surface integrals over a particular geometric shape. The triangle is one of the most widely used shapes used to evaluate surface integrals arising in techniques such as the Finite Element Method (FEM) and the Method of Moments (MoM) in electromagnetics.

Numerical quadrature over triangles is typically done by sampling the function being integrated at discrete points on the triangle, multiplying each sample by a specially chosen quadrature weight, and summing these up, i.e.

\displaystyle I = \frac{1}{A} \iint_S F(r_i) ds \approx \sum_i w_i F(r_i)

where F(r_i) is the function sampled at quadrature point r_i on the triangle, and w_i is the quadrature weight for that point. The quadrature operation results in a quantity that is normalized by the triangle’s area A, hence the \frac{1}{A} coefficient.

I will present a set of quadrature points and weights useful for performing typical quadratures. These quadrature points are given in what are called barycentric coordinates, also commonly referred to as area coordinates. Given a triangle defined by the three vertices v_1, v_2 and v_3, any point r inside that triangle can be written in terms of the barycentric coordinates \beta_1, \beta_2 and \beta_3 as

\displaystyle r = \beta_1 v_1 + \beta_2 v_2 + \beta_3 v_3.

Given a set of barycentric coordinates (quadrature points), each location r_i on the triangle may be found and the integrated function sampled. The weights to be presented are what are called Gauss-Legendre quadrature weights, the derivation of which I will not show here. Read numerical integration for a more in-depth discussion on this.

Below are tables of sampling points and weights. The points are chosen to lie in the interior of the triangle. There exist quadrature formulas that have points on the triangle edges; however, they are not suitable for the analysis that I do, so I don't have them on hand.

One-point quadrature (center of triangle):

i      \beta_1       \beta_2       \beta_3       w_i
1      0.33333333    0.33333333    0.33333333    1.00000000

Three-point quadrature (midpoint of edges):

i      \beta_1       \beta_2       \beta_3       w_i
1      0.50000000    0.00000000    0.50000000    0.33333333
2      0.00000000    0.50000000    0.50000000    0.33333333
3      0.50000000    0.50000000    0.00000000    0.33333333

Four-point quadrature:

i      \beta_1       \beta_2       \beta_3       w_i
1      0.33333333    0.33333333    0.33333333    0.28125000
2      0.73333333    0.13333333    0.13333333    0.26041666
3      0.13333333    0.73333333    0.13333333    0.26041666
4      0.13333333    0.13333333    0.73333333    0.26041666

Seven-point quadrature:

i      \beta_1       \beta_2       \beta_3       w_i
1      0.33333333    0.33333333    0.33333333    0.22500000
2      0.05971587    0.47014206    0.47014206    0.13239415
3      0.47014206    0.05971587    0.47014206    0.13239415
4      0.47014206    0.47014206    0.05971587    0.13239415
5      0.79742699    0.10128651    0.10128651    0.12593918
6      0.10128651    0.79742699    0.10128651    0.12593918
7      0.10128651    0.10128651    0.79742699    0.12593918
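As a check, the sketch below applies the seven-point rule on the reference triangle with vertices (0,0), (1,0), (0,1). Rows 2–4 use 0.05971587 for the small barycentric coordinate (the standard tabulated value, so that each row sums to 1). The rule then reproduces \iint 1 = \frac{1}{2}, \iint x = \frac{1}{6} and \iint x^2 = \frac{1}{12} to the accuracy of the tabulated digits:

```python
# Seven-point rule on the reference triangle (0,0), (1,0), (0,1), area A = 1/2.
pts = [
    (0.33333333, 0.33333333, 0.33333333, 0.22500000),
    (0.05971587, 0.47014206, 0.47014206, 0.13239415),
    (0.47014206, 0.05971587, 0.47014206, 0.13239415),
    (0.47014206, 0.47014206, 0.05971587, 0.13239415),
    (0.79742699, 0.10128651, 0.10128651, 0.12593918),
    (0.10128651, 0.79742699, 0.10128651, 0.12593918),
    (0.10128651, 0.10128651, 0.79742699, 0.12593918),
]
v1, v2, v3 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
A = 0.5

def integrate(F):
    """Surface integral of F over the triangle: A * sum_i w_i F(r_i)."""
    total = 0.0
    for b1, b2, b3, w in pts:
        x = b1 * v1[0] + b2 * v2[0] + b3 * v3[0]   # r = b1 v1 + b2 v2 + b3 v3
        y = b1 * v1[1] + b2 * v2[1] + b3 * v3[1]
        total += w * F(x, y)
    return A * total

assert abs(integrate(lambda x, y: 1.0) - 0.5)          < 1e-5   # the area
assert abs(integrate(lambda x, y: x)     - 1.0 / 6.0)  < 1e-5
assert abs(integrate(lambda x, y: x * x) - 1.0 / 12.0) < 1e-5
```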

Source: http://everything2.com/title/Numerical+Quadrature+Over+Triangles

February 16, 2010

ODE: The method of undetermined coefficients

Filed under: PDEs — Ngô Quốc Anh @ 12:59

Happy Lunar New  Year!

This method is more limited in scope; it applies only to the special case of

\displaystyle y'+p(t)y=g(t)

where p(t) is a constant and g(t) has some special form. The advantage of the method is that it does not require any integrations and is therefore quick to use. The homogeneous equation

\displaystyle y'+\lambda y=0

has the solution

\displaystyle y_h=Ce^{-\lambda t}.

To solve the inhomogeneous equation

\displaystyle y'+\lambda y=g(t),

it suffices to find one particular solution y_p(t). If y_p(t) is any particular solution, then the general solution is

\displaystyle y(t)=y_p(t)+Ce^{-\lambda t}.

The idea behind the method of undetermined coefficients is to look for y_p(t) which is of a form like that of g(t). This is possible only for special functions g(t), but these special cases arise quite frequently in applications.

Case 1. We start with the case where g(t) is an exponential

\displaystyle g(t)=A e^{\alpha t}.

We look for y(t) in a similar form

\displaystyle y(t)=a e^{\alpha t}.

This leads to

\displaystyle y'=a\alpha e^{\alpha t}, \quad y'+\lambda y=(\alpha+\lambda)ae^{\alpha t}.

So the differential equation becomes

\displaystyle (\alpha+\lambda)ae^{\alpha t} = Ae^{\alpha t}.

We can solve this to find

\displaystyle a=\frac{A}{\alpha+\lambda}.

This leads to the particular solution

\displaystyle {y_p}(t) = \frac{A}{{\alpha + \lambda }}{e^{\alpha t}},

and the general solution

\displaystyle y(t) = \frac{A}{{\alpha +\lambda }}{e^{\alpha t}} + C{e^{ - \lambda t}}.

Case 2. If y is a polynomial of degree n, then y' is a polynomial of degree n-1. If g is a polynomial, we can therefore look for polynomial solutions. Consider

\displaystyle y'+2y=t^2.

The right hand side is a polynomial of degree 2, so we look for a solution in the same form y=at^2+bt+c. This leads to y'=2at+b, and

\displaystyle y'+2y=2at^2+(2a+2b)t+b+2c=t^2.

To satisfy this, we want to set

\displaystyle 2a=1, \quad 2a+2b=0, \quad b+2c=0.

This leads to a=\frac{1}{2}, b=-\frac{1}{2}, c=\frac{1}{4}. So a particular solution is

\displaystyle y_p=\frac{t^2}{2}-\frac{t}{2}+\frac{1}{4}.

The general solution is

\displaystyle y=\frac{t^2}{2}-\frac{t}{2}+\frac{1}{4}+Ce^{-2t}.
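We can verify both the particular and the general solution numerically (an illustration; the sample values of t and C are arbitrary):

```python
import math

def y_p(t):  return t * t / 2 - t / 2 + 0.25          # the particular solution
def y(t, C): return y_p(t) + C * math.exp(-2 * t)     # the general solution

def residual(t, C, h=1e-6):
    """y'(t) + 2 y(t) - t^2, with y' computed by a central difference."""
    dy = (y(t + h, C) - y(t - h, C)) / (2 * h)
    return dy + 2 * y(t, C) - t * t

for t in (0.0, 0.7, 2.0):
    for C in (0.0, 1.3, -5.0):
        assert abs(residual(t, C)) < 1e-6             # the ODE is satisfied
```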

We note that the solution

\displaystyle y(t) = \frac{A}{{\alpha +\lambda }}{e^{\alpha t}} + C{e^{ - \lambda t}}

breaks down if \alpha=-\lambda, since it would involve a division by zero. More generally, if the equation reads

\displaystyle y'+\lambda y=g(t),

and g(t)=e^{\alpha t}P_n(t), with P_n(t) an nth degree polynomial, then we can find a particular solution

\displaystyle y_p(t)=e^{\alpha t} Q_n(t),

where Q_n(t) is some other nth degree polynomial as long as \alpha \ne -\lambda. If, on the other hand, \alpha=-\lambda, we have to modify the procedure. The modification is simply to include an extra factor t in the solution. That is, instead of setting

\displaystyle y_p(t)=e^{\alpha t} Q_n(t),

you set

\displaystyle y_p(t)=t e^{\alpha t} Q_n(t).

Source: http://www.math.vt.edu/people/renardym/class_home/firstorder/node2.html

February 13, 2010

Linear First-Order Equations: Variable Coefficients

Filed under: PDEs — Ngô Quốc Anh @ 0:47

Following this topic, where we discussed the advection equation, we can extend the preceding notions to more complicated problems. First consider the linear initial value problem

\displaystyle u_t+c(x,t) u_x=0, \quad x \in \mathbb R, t>0

and

\displaystyle u(x,0)=u_0(x), \quad x\in \mathbb R

where c=c(x,t) is a given continuous function. The left side of the PDE is a total derivative along the curves in the xt-plane defined by the differential equation

\displaystyle \frac{dx}{dt}=c(x,t).

Along these curves

\displaystyle\frac{{du}}{{dt}} = {u_t} + {u_x}\frac{{dx}}{{dt}} = {u_t} + c(x,t){u_x} = 0

or, in other words, u is constant. Therefore

\displaystyle u={\rm const.} on \displaystyle \frac{dx}{dt}=c(x,t).

Again the PDE reduces to an ordinary differential equation (which was integrated to a constant) along a special family of curves, the characteristics defined by

\displaystyle \frac{dx}{dt}=c(x,t).

The function c=c(x,t) gives the speed of these characteristic curves, which varies in spacetime.

Example. Consider the initial value problem

\displaystyle u_t -xt u_x=0, \quad x \in \mathbb R, t>0

and

\displaystyle u(x,0)=u_0(x), \quad x\in \mathbb R.

The characteristics are defined by the equation

\displaystyle \frac{dx}{dt}=-xt

which, on quadrature, gives

\displaystyle x = \xi {e^{ - \frac{{{t^2}}}{2}}}

where \xi is a constant. On these curves the PDE becomes

\displaystyle {u_t} - xt{u_x} = {u_t} - \frac{{dx}}{{dt}}{u_x} = \frac{{du}}{{dt}} = 0

or

\displaystyle u={\rm const.} on \displaystyle x = \xi {e^{ - \frac{{{t^2}}}{2}}}.

From the constancy of u along the characteristics, we have

\displaystyle u(x,t) = {u_0}\left( {x{e^{\frac{{{t^2}}}{2}}}} \right).
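The formula can be verified numerically by checking the PDE residual with finite differences (an illustration; the sample points are arbitrary), here with the initial data u_0(x)=\sin\left(\frac{\pi}{2}x\right) used below:

```python
import math

u0 = lambda s: math.sin(math.pi * s / 2)        # the initial data
u  = lambda x, t: u0(x * math.exp(t * t / 2))   # the claimed solution

def residual(x, t, h=1e-5):
    """u_t - x*t*u_x at (x,t), with derivatives by central differences."""
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    ux = (u(x + h, t) - u(x - h, t)) / (2 * h)
    return ut - x * t * ux

for (x, t) in [(0.3, 0.5), (-0.8, 1.0), (0.1, 2.0)]:
    assert abs(residual(x, t)) < 1e-6            # the PDE is satisfied
assert abs(u(0.25, 0.0) - u0(0.25)) < 1e-15      # the initial condition
```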

This method, called the method of characteristics, can be extended to nonhomogeneous initial value problems. Let us take the example u_0(x)=\sin\left(\frac{\pi}{2}x\right). We have the following pictures.

The above picture shows what the characteristic curves look like.

This picture shows the shape of the solution in spacetime.

When nonlinear terms are introduced into PDEs, the situation changes dramatically from the linear case discussed here and here. We will cover this interesting material later on.
