# Ngô Quốc Anh

## February 9, 2014

### Monotonicity of trace inequalities involving Hermitian matrices and weights

Filed under: Nghiên Cứu Khoa Học — Ngô Quốc Anh @ 23:21

This note aims to prove a very interesting property concerning the monotonicity of trace inequalities of square matrices.

Before going further, let us denote by $M_n$ the set (or the algebra) of $n \times n$ complex square matrices. Then, by $M_n^{\rm h}$ we mean the set of Hermitian matrices in $M_n$. We also denote by $M_n^+$ the set of positive semi-definite matrices in $M_n^{\rm h}$. In other words, there holds

$\displaystyle M_n^+ \subset M_n^{\rm h} \subset M_n.$

As usual, the notation $A \leq B$ means $B-A \in M_n^+$ for any $A, B \in M_n^{\rm h}$.

The following inequality is basically due to Hoa-Tikhonov [here].

Theorem 1. Let $n\geq 2$ and let a function $f :\mathbb R^+ \to \mathbb R$ be Borel measurable. The inequality

$\displaystyle \text{trace}(Af(A)) \leq \text{trace}(Af(B))$

holds for all $A,B \in M_n^+$ with $A \leq B$ if and only if the function $g(x)=xf(x)$ is convex on $\mathbb R^+$.

The proof of the above theorem is rather simple but elegant. The idea is to transform the condition $0 \leq A \leq B$ into the relation $A^\frac{1}{2} = U B^\frac{1}{2}$ for some $U \in M_n$ with $\|U\| \leq 1$. Then the theorem follows immediately from the Jensen trace inequality for contractions.

It is also interesting to note that the super-additivity property, i.e.

$\text{trace}(f(A)) + \text{trace}(f(B)) \leq \text{trace}(f(A+B)) \quad \forall A,B \in M_n^+$

is equivalent to the convexity of the function $f$.
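As a quick numerical sanity check (by no means a proof), one can test both Theorem 1 and the super-additivity property with numpy; the helper functions below and the sample choices $f(x)=x$ (so that $g(x)=xf(x)=x^2$ is convex) and $f(x)=x^2$ are ours, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_psd(n):
    # a random positive semi-definite matrix C C^*
    C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return C @ C.conj().T

def apply_f(M, f):
    # f(M) for a Hermitian matrix M, via the spectral decomposition
    w, V = np.linalg.eigh(M)
    return (V * f(w)) @ V.conj().T

for _ in range(50):
    A = random_psd(4)
    B = A + random_psd(4)          # so that 0 <= A <= B
    f = lambda x: x                # g(x) = x f(x) = x^2 is convex
    lhs = np.trace(A @ apply_f(A, f)).real
    rhs = np.trace(A @ apply_f(B, f)).real
    assert lhs <= rhs + 1e-8       # trace(A f(A)) <= trace(A f(B))

    f2 = lambda x: x**2            # a convex function, for super-additivity
    s = np.trace(apply_f(A, f2) + apply_f(B, f2)).real
    t = np.trace(apply_f(A + B, f2)).real
    assert s <= t + 1e-8           # trace f(A) + trace f(B) <= trace f(A+B)
```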

## September 30, 2013

### The Lichnerowicz equation under some variable changes

Filed under: Linh Tinh, Nghiên Cứu Khoa Học, PDEs — Tags: — Ngô Quốc Anh @ 4:54

Let us consider the so-called Lichnerowicz equation

$-\Delta_g u + hu = fu^{2^\star-1}+au^{-2^\star-1} \quad u>0$

on $(M,g)$, a Riemannian manifold of dimension $n \geq 3$. Here $h$, $f$, and $a$ are smooth functions with $a \geq 0$.

• We first use the following change of variables

$\displaystyle v=\log u, \quad u=e^v.$

Clearly,

$\displaystyle\Delta v = \frac{\Delta u}{u} - \frac{|\nabla u|^2}{u^2}$

and

$\displaystyle |\nabla v|^2 = \frac{|\nabla u|^2}{u^2}.$

Therefore, we can write

$\displaystyle -\Delta v =-\frac{\Delta u}{u} +|\nabla v|^2.$

Using this rule, we can rewrite the equation as follows

$\displaystyle \boxed{-\Delta v = -h+fu^{2^\star-2}+au^{-2^\star-2}+|\nabla v|^2=-h+fe^{(2^\star-2)v}+ae^{-(2^\star+2)v}+|\nabla v|^2. }$
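The differential identities behind this change of variables can be confirmed symbolically; here is a one-dimensional sympy check (illustrative only, with $u = e^v$):

```python
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')(x)
u = sp.exp(v)   # the change of variables u = e^v

# Delta v = Delta u / u - |grad u|^2 / u^2  (in one dimension)
lhs = sp.diff(v, x, 2)
rhs = sp.diff(u, x, 2)/u - sp.diff(u, x)**2/u**2
assert sp.simplify(lhs - rhs) == 0

# hence -Delta v = -Delta u / u + |grad v|^2
assert sp.simplify(-sp.diff(v, x, 2) - (-sp.diff(u, x, 2)/u + sp.diff(v, x)**2)) == 0
```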

## September 15, 2013

### Some integral identities on manifolds with boundary

In this note, I summarize several useful integral identities on Riemannian manifolds with boundary.

1. Suppose that $f$ is a function and $X$ is a vector field (equivalently, via the metric, a $1$-form), then

$\displaystyle\boxed{\int_M {f\text{div}Xd{v_g}} = - \int_M {\left\langle {\nabla f,X} \right\rangle_g d{v_g}} + \int_{\partial M} {f\left\langle {X,\nu } \right\rangle_g d{\sigma _g}}.}$

To prove this, we write everything in local coordinates as follows

$\begin{array}{lcl} \displaystyle\int_M {f \text{div} Xd{v_g}} &=& \displaystyle\int_M {f{\nabla _i}{X^i}d{v_g}} \hfill \\ &=& \displaystyle - \int_M {{\nabla _i}f{X^i}d{v_g}} + \int_{\partial M} {f\left\langle {X,\nu } \right\rangle_g d{\sigma _g}} \hfill \\ &=& \displaystyle - \int_M {\left\langle {\nabla f,X} \right\rangle_g d{v_g}} + \int_{\partial M} {f\left\langle {X,\nu } \right\rangle_g d{\sigma _g}}\end{array}$

as claimed.
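On a flat domain this identity is just integration by parts, so it can be checked symbolically; below is a sympy verification on the unit square, where the particular $f$ and $X$ are our own sample choices.

```python
import sympy as sp

x, y = sp.symbols('x y')
# a sample function and vector field on the unit square (our own choice)
f = x*y + 1
X1, X2 = x**2*y, x + y**2
divX = sp.diff(X1, x) + sp.diff(X2, y)

lhs = sp.integrate(f*divX, (x, 0, 1), (y, 0, 1))
interior = -sp.integrate(sp.diff(f, x)*X1 + sp.diff(f, y)*X2, (x, 0, 1), (y, 0, 1))
# boundary term: the four edges of the square with outward unit normal nu
bdry = (sp.integrate((f*X1).subs(x, 1), (y, 0, 1))     # x = 1, nu = ( 1, 0)
        - sp.integrate((f*X1).subs(x, 0), (y, 0, 1))   # x = 0, nu = (-1, 0)
        + sp.integrate((f*X2).subs(y, 1), (x, 0, 1))   # y = 1, nu = ( 0, 1)
        - sp.integrate((f*X2).subs(y, 0), (x, 0, 1)))  # y = 0, nu = ( 0,-1)
assert sp.simplify(lhs - interior - bdry) == 0
```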

2. Using the previous identity, we can prove the following

$\displaystyle \boxed{\int_M {{{\left\langle {X,\nabla (\text{div} X)} \right\rangle }_g}d{v_g}} = - \int_M {|\text{div} X|_g^2d{v_g}} + \int_{\partial M} {\text{div} X{{\left\langle {X,\nu } \right\rangle }_g}d{\sigma _g}}}$

where $X$ is again a vector field on $M$. To prove this, we simply apply the previous identity with $f$ replaced by $\text{div}(X)$ to get the desired result.

## August 7, 2012

### Almost-Schur lemma

Filed under: Nghiên Cứu Khoa Học, Riemannian geometry — Tags: — Ngô Quốc Anh @ 16:23

Following this note, today we talk about an almost-Schur lemma recently obtained by De Lellis and Topping, see here. If we denote by $\mathop {{\text{Ric}}}\limits^ \circ$ the traceless Ricci tensor, the main theorem of the paper is the following

Theorem. For any integer $n \geqslant 3$, if $(M, g)$ is a closed Riemannian manifold of dimension $n$ with nonnegative Ricci curvature, then

$\displaystyle \int_M {{{(R - \overline R )}^2}} \leqslant \frac{{4n(n - 1)}}{{{{(n - 2)}^2}}}\int_M {| \mathop {{\text{Ric}}}\limits^ \circ {|^2}}$

where $\overline R$ is the average value of the scalar curvature $R$ over $M$. Moreover equality holds if and only if $(M, g)$ is Einstein.

For a proof of the theorem, recall that the contracted second Bianchi identity tells us that

$\displaystyle\delta \text{Ric} + \frac{1}{2}dR = 0$

where

$\displaystyle {(\delta \text{Ric})_j} = - {\nabla _i}{R_{ij}}.$

Since $\mathop {{\text{Ric}}}\limits^ \circ = \text{Ric} - \frac{R}{n}g$ and $\delta \left( \frac{R}{n}g \right) = -\frac{1}{n}dR$, it follows that

$\displaystyle\delta\mathop{\text{Ric}}\limits^\circ = - \frac{{n - 2}}{{2n}}dR.$

## April 4, 2011

### An iteration by Stampacchia

Filed under: Nghiên Cứu Khoa Học — Tags: — Ngô Quốc Anh @ 3:09

Recently, my friend CR showed me an iteration due to Stampacchia, who proposed it in a paper [here] entitled Équations elliptiques du second ordre à coefficients discontinus, published in Séminaire Jean Leray in 1963-1964.

Suppose $\varphi : [k_0, \infty) \to \mathbb R$ is a non-negative, non-increasing function satisfying

$\displaystyle \varphi (h) \leqslant \frac{c}{(h-k)^\alpha}\big(\varphi(k)\big)^\beta$

for any $h>k \geqslant k_0$ where $c, \alpha, \beta$ are positive given constants.

Then

• If $\beta >1$, it holds

$\varphi (k_0+d)=0$

where

$\displaystyle d^\alpha=c\big(\varphi(k_0)\big)^{\beta-1}2^\frac{\alpha\beta}{\beta-1}.$

• If $\beta=1$, one has

$\displaystyle \varphi (h)\leqslant e e^{-\eta (h-k_0)}\varphi (k_0)$

where

$\displaystyle \eta=(ec)^{-\frac{1}{\alpha}}.$

• If $\beta<1$ and $k_0>0$, then

$\displaystyle \varphi (h) \leqslant {2^{\frac{\mu }{{1 - \beta }}}}\left\{ {1 + {c^{\frac{1}{{\beta - 1}}}}{{(2{k_0})}^\mu }\varphi ({k_0})} \right\}{h^{ - \mu }} = \frac{{{2^{\frac{\mu }{{1 - \beta }}}}}}{{{c^{\frac{1}{{1 - \beta }}}}}}\left\{ {{c^{\frac{1}{{1 - \beta }}}} + {{(2{k_0})}^\mu }\varphi ({k_0})} \right\}{h^{ - \mu }}$

where

$\displaystyle \mu=\frac{\alpha}{1-\beta}.$
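To see where the constant $d$ in the case $\beta > 1$ comes from, one can run the standard induction along the levels $k_i = k_0 + d - d\,2^{-i}$ numerically. The constants below are arbitrary sample values; the recursive bound reproduces the closed form $\varphi(k_0)\,2^{-i\mu}$ with $\mu = \frac{\alpha}{\beta-1}$, which tends to $0$ and forces $\varphi(k_0+d)=0$.

```python
import math

c, alpha, beta = 2.0, 1.5, 2.0    # arbitrary positive constants, beta > 1
phi0 = 3.0                        # an assumed value of phi(k0)
d = (c * phi0**(beta - 1) * 2**(alpha*beta/(beta - 1)))**(1/alpha)
mu = alpha/(beta - 1)

b = phi0                          # b_i bounds phi(k_i) along k_i = k0 + d - d/2^i
for i in range(30):
    # one step of the recursive inequality, with h - k = d / 2^(i+1)
    b = c / (d / 2**(i + 1))**alpha * b**beta
    # it matches the closed form phi0 * 2^{-(i+1) mu} exactly
    assert math.isclose(b, phi0 * 2**(-(i + 1)*mu), rel_tol=1e-6)

assert b < 1e-12                  # the bound collapses, so phi(k0 + d) = 0
```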

## January 11, 2011

### An example of sequence of blow-up solutions with finite limiting mass

Filed under: Nghiên Cứu Khoa Học, PDEs — Ngô Quốc Anh @ 14:35

In this note, we recall an example adapted from an elegant paper due to Y.Y. Li and I. Shafrir published in the Indiana Univ. Math. J. in 1994 [here].

Let us consider the asymptotic behavior of sequences of solutions of

$-\Delta u_n=V_n(x)e^{u_n}$

on a bounded domain $\Omega \subset \mathbb R^2$ with $V_n$ a non-negative continuous function. For each solution $u_n$, we call

$\displaystyle \alpha_n := \int_{B_R}V_ne^{u_n}dx$

the mass of $u_n$ (over a ball $B_R$). The term limiting mass refers to the limit $\lim_{n \to \infty} \alpha_n$.

For simplicity, we assume $V_n \equiv 1$. Given $m$, we are going to construct a sequence of solutions $\{u_n\}$ which blows up exactly at $m$ points, say at $a_1,...,a_m \in D$, where $D$ is the unit disc of $\mathbb C$. Our equation reads as

$-\Delta u=e^{u}$

in $D$. Using the Liouville formula for solutions of the above equation, we get

$\displaystyle u(z) = \log \frac{{8|f'(z){|^2}}}{{{{(1 + |f(z){|^2})}^2}}}$

with $f$ a holomorphic function such that $f'(z) \ne 0$.
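For the simplest choice $f(z) = z$, the Liouville formula gives $u = \log\frac{8}{(1+|z|^2)^2}$, and one can confirm $-\Delta u = e^u$ by finite differences (a quick numerical check of the formula, not part of the original argument):

```python
import math

def u(x, y):
    # Liouville solution with f(z) = z: u = log( 8 |f'|^2 / (1 + |f|^2)^2 )
    return math.log(8.0 / (1.0 + x*x + y*y)**2)

def laplacian(x, y, h=1e-4):
    # standard 5-point finite-difference approximation of Delta u
    return (u(x+h, y) + u(x-h, y) + u(x, y+h) + u(x, y-h) - 4*u(x, y)) / h**2

# -Delta u = e^u at a few sample points of the disc
for (x, y) in [(0.0, 0.0), (0.3, -0.2), (-0.5, 0.4)]:
    assert abs(-laplacian(x, y) - math.exp(u(x, y))) < 1e-5
```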

## June 5, 2010

### Why is the conformal method useful in studying the Einstein equations?

Filed under: Nghiên Cứu Khoa Học, PDEs, Riemannian geometry — Tags: — Ngô Quốc Anh @ 19:20

I presume you have some notions about general relativity, especially the Einstein equations

${\rm Eins}_{\alpha\beta}=T_{\alpha\beta}$.

As these equations are essentially hyperbolic for a suitable metric, it is natural to study the Cauchy problem for them. Under the Gauss and Codazzi conditions, we obtain two constraints, called the Hamiltonian and momentum constraints. The Cauchy problem then amounts to determining the solvability of these constraints in the variables $K$, the extrinsic curvature, and $g$, the spatial metric. Interestingly, the conformal method says that we can start with an arbitrary metric and recast the constraint equations into a form which is more amenable to analysis by splitting the Cauchy data. In this method, we try to solve for $\gamma$ within the conformal class represented by the initial metric. So, in general, the conformal factor is chosen so that we eventually arrive at the simplest possible model.

This idea is given via the following theorem.

Theorem. Let $\mathcal D =(\gamma, \sigma, \tau,\psi,\pi)$ be a conformal initial data set for the Einstein-scalar field constraint equations on $\Sigma$. If

$\displaystyle \widetilde \gamma =\theta^\frac{4}{n-2}\gamma$

for a smooth positive function $\theta$, then we define the corresponding conformally transformed initial data set by

$\displaystyle\widetilde{\mathcal D} =(\widetilde\gamma, \widetilde \sigma, \widetilde \tau,\widetilde\psi,\widetilde \pi)=(\theta^\frac{4}{n-2}\gamma, \theta^{-2}\sigma, \tau,\psi,\theta^\frac{-2n}{n-2}\pi)$.

Let $W$ be the solution to the conformal form of the momentum constraint equation w.r.t. the conformal initial data set $\mathcal D$ and let $\widetilde W$ be the solution to the conformal form of the momentum constraint equation w.r.t. the conformal initial data set $\widetilde{\mathcal D}$ (we assume both exist). Then $\varphi$ is a solution to the Einstein scalar field Lichnerowicz equation for the conformal data $\mathcal D$ with $W$

$\displaystyle \Delta_\gamma \varphi - \mathcal R_{\gamma, \psi}\varphi +\mathcal A_{\gamma, W, \pi}\varphi^{-\frac{3n-2}{n-2}}-\mathcal B_{\tau, \psi}\varphi^\frac{n+2}{n-2}=0$

if and only if $\theta^{-1}\varphi$ is a solution to the Einstein scalar field Lichnerowicz equation for the conformal data $\widetilde{\mathcal D}$ with $\widetilde W$

$\displaystyle \Delta_{\widetilde\gamma} (\theta^{-1}\varphi) - \mathcal R_{\widetilde\gamma, \widetilde\psi}(\theta^{-1}\varphi) +\mathcal A_{\widetilde\gamma, \widetilde W, \widetilde\pi}(\theta^{-1}\varphi)^{-\frac{3n-2}{n-2}}-\mathcal B_{\widetilde\tau, \widetilde\psi}(\theta^{-1}\varphi)^\frac{n+2}{n-2}=0$.

We refer the reader to a paper due to Yvonne Choquet-Bruhat et al. [here], published in Class. Quantum Grav. in 2007, for details. We take this theorem from that paper; note, however, that no proof is given there.


## May 1, 2010

### A useful identity in a book due to L. Ahlfors

Filed under: Các Bài Tập Nhỏ, Giải Tích 5, Nghiên Cứu Khoa Học — Tags: — Ngô Quốc Anh @ 3:12

Let $\mathbf{x},\mathbf{y}$ be points in $\mathbb R^n$. If we denote by $\mathbf{x}^\sharp$ the reflection point of $\mathbf{x}$ with respect to the unit ball, i.e.

$\displaystyle \mathbf{x}^\sharp = \frac{\mathbf{x}}{|\mathbf{x}|^2}$

we then have the following well-known identity

$\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp } - \mathbf{y}} \right| = |\mathbf{y}|\left| {{\mathbf{y}^\sharp } - \mathbf{x}} \right|$.

The proof of the above identity comes from the fact that

$\displaystyle |\mathbf{x}|\left| {\frac{\mathbf{x}}{{|\mathbf{x}{|^2}}} - \mathbf{y}} \right| = \sqrt {1 + |\mathbf{x}{|^2}|\mathbf{y}|^2 - 2\mathbf{x} \cdot \mathbf{y}} = |\mathbf{y}|\left| {\frac{\mathbf{y}}{{|\mathbf{y}|^2}} - \mathbf{x}} \right|$.

Indeed, by squaring both sides of

$\displaystyle |\mathbf{x}|\left| {\frac{\mathbf{x}}{{|\mathbf{x}{|^2}}} - \mathbf{y}} \right| = \sqrt {1 + |\mathbf{x}|^2|\mathbf{y}|^2 - 2\mathbf{x} \cdot \mathbf{y}}$

we arrive at

$\displaystyle |\mathbf{x}|^2\left( {\frac{{|\mathbf{x}|^2}}{{|\mathbf{x}|^4}} - 2\frac{{\mathbf{x} \cdot \mathbf{y}}}{{|\mathbf{x}|^2}} + |\mathbf{y}|^2} \right) = 1 + |\mathbf{x}|^2|\mathbf{y}|^2 - 2\mathbf{x} \cdot \mathbf{y}$

which is obviously true. Similarly, the last identity also holds. If we replace $\mathbf{y}$ by $-\mathbf{y}$ we also have

$\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp }+ \mathbf{y}} \right| = |\mathbf{y}|\left| {{\mathbf{y}^\sharp } + \mathbf{x}} \right|$.

Generally, if we consider the reflection point of $\mathbf{x}$ over a ball $B_r(0)$, i.e.

$\displaystyle \mathbf{x}^\sharp = \frac{r^2\mathbf{x}}{|\mathbf{x}|^2}$

we still have the fact

$\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp } - \mathbf{y}} \right| = |\mathbf{y}|\left| {{\mathbf{y}^\sharp } - \mathbf{x}} \right|$.

Indeed, one gets

$\displaystyle |\mathbf{x}|\left| {\frac{{{r^2}\mathbf{x}}}{{|\mathbf{x}{|^2}}} - \mathbf{y}} \right| = {r^2}|\mathbf{x}|\left| {\frac{\mathbf{x}}{{|\mathbf{x}{|^2}}} - \frac{\mathbf{y}}{{{r^2}}}} \right| = {r^2}\left| {\frac{\mathbf{y}}{{{r^2}}}} \right|\left| {\frac{{\frac{\mathbf{y}}{{{r^2}}}}}{{{{\left| {\frac{\mathbf{y}}{{{r^2}}}} \right|}^2}}} - \mathbf{x}} \right| = \left| \mathbf{y} \right|\left| {\frac{{{r^2}\mathbf{y}}}{{|\mathbf{y}{|^2}}} - \mathbf{x}} \right|$.

Similarly,

$\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp } + \mathbf{y}} \right| = |\mathbf{y}|\left| {{\mathbf{y}^\sharp } + \mathbf{x}} \right|$.
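All of these identities are easy to confirm numerically; here is a quick random test with numpy (our own sketch, covering the $r$-ball version and the sign variant):

```python
import numpy as np

rng = np.random.default_rng(1)

def reflect(x, r=1.0):
    # reflection (inversion) of x in the sphere of radius r centered at 0
    return r*r * x / np.dot(x, x)

for _ in range(100):
    x, y = rng.standard_normal(3), rng.standard_normal(3)
    r = rng.uniform(0.5, 2.0)
    # |x| |x# - y| = |y| |y# - x|
    lhs = np.linalg.norm(x) * np.linalg.norm(reflect(x, r) - y)
    rhs = np.linalg.norm(y) * np.linalg.norm(reflect(y, r) - x)
    assert np.isclose(lhs, rhs, rtol=1e-8)
    # the same with y replaced by -y
    lhs2 = np.linalg.norm(x) * np.linalg.norm(reflect(x, r) + y)
    rhs2 = np.linalg.norm(y) * np.linalg.norm(reflect(y, r) + x)
    assert np.isclose(lhs2, rhs2, rtol=1e-8)
```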

Such an identity is very useful. For example, in $\mathbb R^n$ ($n\geqslant 3$) the following holds

$\displaystyle\iint\limits_{|{\mathbf{x}}| = r} {\frac{{d{\sigma _{\mathbf{x}}}}}{{{{\left| {{\mathbf{x}} - {\mathbf{y}}} \right|}^{n - 2}}}}} = \min \left\{ {\frac{1}{{|{\mathbf{y}}|^{n - 2}}},\frac{1}{{{r^{n - 2}}}}} \right\}$

where $d\sigma_{\mathbf{x}}$ denotes the normalized surface measure on the sphere $\{|\mathbf{x}| = r\}$.

This type of formula has been considered before when $n=3$ here. For a general case, Lieb and Loss introduced another method in their book published by AMS in 2001. Here we introduce a completely new proof. At first, if $|\mathbf{y}|>r$ by the potential theory, one easily gets

$\displaystyle\iint\limits_{|{\mathbf{x}}| = r} {\frac{{d{\sigma _{\mathbf{x}}}}}{{{{\left| {{\mathbf{x}} - {\mathbf{y}}} \right|}^{n - 2}}}}} = \frac{1}{{|{\mathbf{y}}|^{n - 2}}}$.

If $|\mathbf{y}| < r$, one needs to make use of the reflection point of $\mathbf{y}$ and the above identity to go back to the first case. The point here is that $|\mathbf{y}^\sharp|>r$. The integral is obviously continuous as a function of $\mathbf{y}$. The above argument is due to professor X.X.W.
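Interpreting $d\sigma_{\mathbf{x}}$ as the normalized surface measure, both cases of the formula can be illustrated by a Monte Carlo computation in dimension $n=3$ (a rough numerical illustration only; the sample points are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, N = 3, 1.0, 200_000
y = np.array([2.0, 0.0, 0.0])   # a point outside the sphere |x| = r

# uniformly distributed points on the sphere of radius r
X = rng.standard_normal((N, n))
X = r * X / np.linalg.norm(X, axis=1, keepdims=True)

# average of |x - y|^{2-n}: equals 1/|y|^{n-2} when |y| > r ...
avg = np.mean(1.0 / np.linalg.norm(X - y, axis=1)**(n - 2))
assert abs(avg - 1.0/np.linalg.norm(y)**(n - 2)) < 1e-2

# ... and 1/r^{n-2} when |y| < r
y_in = np.array([0.3, 0.1, 0.0])
avg_in = np.mean(1.0 / np.linalg.norm(X - y_in, axis=1)**(n - 2))
assert abs(avg_in - 1.0/r**(n - 2)) < 1e-2
```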

## April 9, 2010

### Kelvin transform: Laplacian

Filed under: Các Bài Tập Nhỏ, Linh Tinh, Nghiên Cứu Khoa Học, PDEs — Tags: — Ngô Quốc Anh @ 0:01

For each point $x \ne 0$, write $x=(x_1,...,x_n)$ and let

$\displaystyle \xi = {x^\sharp } = \left( {\frac {{{x_1}}} {{{{\left| x \right|}^2}}},...,\frac {{{x_n}}} {{{{\left| x \right|}^2}}}} \right)$

be the inversion of $x$ with respect to the unit sphere. We have the following identities

$\displaystyle\frac {{\partial {\xi _j}}} {{\partial {x_k}}} = \frac {1} {{{{\left| x \right|}^2}}}\left( {{\delta _{jk}} - 2\frac {{{x_j}{x_k}}} {{{{\left| x \right|}^2}}}} \right)$

and

$\displaystyle\sum\limits_{l = 1}^n {\frac {{\partial {\xi _l}}} {{\partial {x_j}}}\frac {{\partial {\xi _l}}} {{\partial {x_k}}}} = \frac {1} {{{{\left| x \right|}^4}}}{\delta _{jk}}$.

Thus,

$\displaystyle\sum\limits_{l = 1}^n {\frac {{\partial {x_l}}} {{\partial {\xi _j}}}\frac {{\partial {x_l}}} {{\partial {\xi _k}}}} = \frac {1} {{{{\left| \xi \right|}^4}}}{\delta _{jk}}$.

Next, thinking of $\xi$ as a system of orthogonal curvilinear coordinates for $x$, we deduce that the metric tensor of the Euclidean space in these curvilinear coordinates is

$\displaystyle {g_{j,k}}\left( {{\xi _j},{\xi _k}} \right) = \sum\limits_{l = 1}^n {\frac {{\partial {x_l}}} {{\partial {\xi _j}}}\frac {{\partial {x_l}}} {{\partial {\xi _k}}}} = \frac {1} {{{{\left| \xi \right|}^4}}}{\delta _{jk}}$.

This implies that the so-called Lamé coefficients are

$\displaystyle {h_j} = \sqrt {{g_{j,j}}\left( {{\xi _j},{\xi _j}} \right)} = \frac {1} {{{{\left| \xi \right|}^2}}}$.
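The two Jacobian identities above can be verified numerically; in the sketch below, J is the analytic formula and Jnum a central-difference approximation (the dimension and sample point are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                      # an arbitrary dimension
x = rng.standard_normal(n)
r2 = np.dot(x, x)          # |x|^2

# analytic formula: d xi_j / d x_k = (delta_jk - 2 x_j x_k / |x|^2) / |x|^2
J = (np.eye(n) - 2*np.outer(x, x)/r2) / r2

# central-difference Jacobian of the inversion xi = x / |x|^2
h = 1e-6
Jnum = np.empty((n, n))
for k in range(n):
    e = np.zeros(n)
    e[k] = h
    xp, xm = x + e, x - e
    Jnum[:, k] = (xp/np.dot(xp, xp) - xm/np.dot(xm, xm)) / (2*h)
assert np.allclose(J, Jnum, atol=1e-6)

# sum_l (d xi_l / d x_j)(d xi_l / d x_k) = delta_jk / |x|^4
assert np.allclose(J.T @ J, np.eye(n)/r2**2)
```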

## March 14, 2010

### Classification of a system of n first-order PDEs

Filed under: Nghiên Cứu Khoa Học, PDEs — Ngô Quốc Anh @ 11:18

The classification of a system of $n$ first-order PDEs is based on whether there are $n$ directions along which the PDEs reduce to $n$ ODEs. To be more precise, assume that we are given a system of $n$ equations in $n$ unknowns $u_1, u_2,...,u_n$ which we write in matrix form as

$\displaystyle \mathbf{u}_t + A(x,t,\mathbf{u})\mathbf{u}_x = \mathbf{b}(x,t,\mathbf{u})$,

where $\mathbf{u}=(u_1,...,u_n)^t$, $\mathbf{b}=(b_1,...,b_n)^t$, and $A=(a_{ij}(x,t,\mathbf{u}))$ is an $n \times n$ matrix.

Now we ask whether there is a family of curves along which the PDEs reduce to a system of ODEs, that is, in which the directional derivative of each $u_i$ occurs in the same direction. We consider a vector $\gamma = (\gamma_1,...,\gamma_n)^t$, to be determined later, and multiply the system on the left by the row vector $\gamma^t$. Then

$\displaystyle \mathbf{\gamma}^t\mathbf{u}_t + \mathbf{\gamma}^tA(x,t,\mathbf{u})\mathbf{u}_x = \mathbf{\gamma}^t\mathbf{b}(x,t,\mathbf{u})$.

We want the above system to have the form of a linear combination of total derivatives of the $u_i$ in the same direction $\lambda$, that is, we want our system to have the form

$\displaystyle \mathbf{m}^t \left( {{{\mathbf{u}}_t} + \lambda {{\mathbf{u}}_x}} \right) = \mathbf{\gamma}^t{\mathbf{b}}$

for some $\mathbf{m}$. Consequently, we require

$\displaystyle \mathbf{m}=\gamma, \quad \mathbf{m}^t\lambda=\gamma^tA$

or

$\displaystyle \gamma^t A=\lambda \gamma^t$.

This means that $\lambda$ is an eigenvalue of $A$ and $\gamma^t$ is a corresponding left eigenvector. Note that $\lambda$ as well as $\gamma$ can depend on $x$, $t$, and $\mathbf{u}$. So, if $(\lambda, \gamma^t)$ is an eigenpair, then

$\displaystyle \gamma^t \frac{d\mathbf{u}}{dt}=\gamma^t\mathbf{b}$

along

$\displaystyle \frac{dx}{dt}=\lambda(x,t,\mathbf{u})$

and the system of PDEs is reduced to a single ODE along the family of curves, called characteristics, defined by $\frac{dx}{dt}=\lambda$. The eigenvalue $\lambda$ is called the characteristic direction. Because there are $n$ unknowns, it would appear that $n$ ODEs are required; if $A$ has $n$ distinct real eigenvalues, there are indeed $n$ ODEs, each holding along a characteristic direction defined by an eigenvalue. In this case we say that the system is hyperbolic.

Definition. The quasilinear system

$\displaystyle \mathbf{u}_t + A(x,t,\mathbf{u})\mathbf{u}_x = \mathbf{b}(x,t,\mathbf{u})$

is hyperbolic if $A$ has $n$ real eigenvalues and $n$ linearly independent left eigenvectors. If, moreover, the eigenvalues are distinct, the system is called strictly hyperbolic.

The system is called elliptic if $A$ has no real eigenvalues, and it is parabolic if $A$ has $n$ real eigenvalues but fewer than $n$ independent left eigenvectors.

No exhaustive classification is made in the case that $A$ has both real and complex eigenvalues. Note that if the matrix $A$ has $n$ distinct real eigenvalues, then it automatically has $n$ independent left eigenvectors, because eigenvectors corresponding to distinct eigenvalues are linearly independent.

More general systems of the form

$\displaystyle B(x,t,\mathbf{u})\mathbf{u}_t + A(x,t,\mathbf{u})\mathbf{u}_x = \mathbf{b}(x,t,\mathbf{u})$

can be considered as well. We refer the reader to the book entitled “An introduction to nonlinear partial differential equations” by J.D. Logan.

We are now in a position to see why a single first-order quasilinear PDE is hyperbolic. The coefficient matrix for the equation

$\displaystyle u_t + c(x,t,u)u_x=b(x,t,u)$

is just the real scalar function $c(x,t,u)$, which has the single eigenvalue $c(x,t,u)$ with corresponding eigenvector $1$, a constant function. Along the direction $\frac{dx}{dt}=c(x,t,u)$, the PDE reduces to the ODE $\frac{du}{dt}=b(x,t,u)$. We refer the reader to the topic on characteristic curves, where we consider equations with constant coefficients and with variable coefficients.

We place here three more examples.

Example 1 (The shallow-water equations). The following system

$\displaystyle\begin{gathered} {h_t} + u{h_x} + h{u_x} = 0, \hfill \\ {u_t} + u{u_x} + g{h_x} = 0, \hfill \\ \end{gathered}$

is strictly hyperbolic whenever $h > 0$.

Example 2. The following system

$\displaystyle\begin{gathered} {u_t} - {v_x} = 0, \hfill \\ {v_t} - c{u_x} = 0, \hfill \\ \end{gathered}$

is elliptic if $c<0$ and is hyperbolic if $c>0$.
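The classifications in Examples 1 and 2 amount to eigenvalue computations for the coefficient matrices $\begin{pmatrix} u & h \\ g & u\end{pmatrix}$ and $\begin{pmatrix} 0 & -1 \\ -c & 0\end{pmatrix}$, which can be confirmed numerically (the sample state values below are our own):

```python
import numpy as np

# Example 1: shallow-water system, coefficient matrix [[u, h], [g, u]]
g, u, h = 9.81, 2.0, 1.5          # sample state with h > 0
A = np.array([[u, h],
              [g, u]])
lam = np.sort(np.linalg.eigvals(A).real)
# eigenvalues are u -/+ sqrt(g h): real and distinct, so strictly hyperbolic
assert np.allclose(lam, [u - np.sqrt(g*h), u + np.sqrt(g*h)])
assert lam[0] < lam[1]

# Example 2: [[0, -1], [-c, 0]] has eigenvalues -/+ sqrt(c),
# real for c > 0 (hyperbolic) and purely imaginary for c < 0 (elliptic)
for c, real_expected in [(1.0, True), (-1.0, False)]:
    ev = np.linalg.eigvals(np.array([[0.0, -1.0], [-c, 0.0]]))
    assert np.all(np.abs(ev.imag) < 1e-12) == real_expected
```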

Example 3 (The diffusion equation). The following equation

$\displaystyle u_t=u_{xx}$

may be written as the first-order system

$\displaystyle\begin{gathered}u_t-v_x=0, \hfill \\u_x-v = 0, \hfill \\ \end{gathered}$

and thus is parabolic.
