Ngô Quốc Anh

November 30, 2009

A property of the essentially bounded function 2

Filed under: Các Bài Tập Nhỏ, Giải Tích 6 (MA5205) — Ngô Quốc Anh @ 22:52

This topic is a companion to the following topic. Here we consider the case when E is the whole space, i.e. E = \mathbb R^n, and we also insert a weight function g into the definition of a_n. To be precise, we have

Question. Suppose g>0 on \mathbb R^n and g \in L^1(\mathbb R^n) in the Lebesgue sense. Let f \in L^\infty(\mathbb R^n) be such that \| f\|_\infty > 0. Define

\displaystyle {a_n} = \int_{\mathbb R^n} {{{\left| f \right|}^ng}}

for n=1,2,3,... Show that

\displaystyle\mathop {\lim }\limits_{n \to \infty } \frac{{{a_{n + 1}}}} {{{a_n}}} = {\left\| f \right\|_\infty }.

Solution. For any \alpha with 0<\alpha < \|f\|_\infty, let

\displaystyle {E_\alpha } = \left\{ {x \in E: \left| f\left( x \right) \right| \geqslant \alpha } \right\}


\displaystyle {F_\alpha } = E\backslash {E_\alpha }

then |E_\alpha|>0 by the definition of the essential supremum. Since g>0, it follows that \int_{E_\alpha}g>0, and this integral is finite because g \in L^1(\mathbb R^n). For any k \in \mathbb N (k can be zero), note that

\displaystyle\int_{{E_\alpha }} {{{\left| f \right|}^n}g} \geqslant {\alpha ^n}\int_{{E_\alpha }} g


\displaystyle\int_{{F_\alpha }} {{{\left| f \right|}^{n + k}}g} \leqslant \left\| f \right\|_\infty ^k\int_{{F_\alpha }} {{{\left| f \right|}^n}g}.


\displaystyle\frac{{\int_{{F_\alpha }} {{{\left| f \right|}^{n + k}}g} }}{{\int_{{E_\alpha }} {{{\left| f \right|}^n}g} }} \leqslant \frac{{\left\| f \right\|_\infty ^k\int_{{F_\alpha }} {{{\left| f \right|}^n}g} }}{{{\alpha ^n}\int_{{E_\alpha }} g }} = \frac{{\left\| f \right\|_\infty ^k}}{{\int_{{E_\alpha }} g }}\int_{{F_\alpha }} {{{\left| {\frac{f}{\alpha }} \right|}^n}g}.

By the Dominated Convergence Theorem, one gets

\displaystyle 0 \leqslant \mathop {\lim }\limits_{n \to \infty } \left( {\frac{{\int_{{F_\alpha }} {{{\left| f \right|}^{n + k}}g} }}{{\int_{{E_\alpha }} {{{\left| f \right|}^n}g} }}} \right) \leqslant \mathop {\lim }\limits_{n \to \infty } \left( {\frac{{\left\| f \right\|_\infty ^k}}{{\int_{{E_\alpha }} g }}\int_{{F_\alpha }} {{{\left| {\frac{f}{\alpha }} \right|}^n}g} } \right) = 0.


\displaystyle\begin{gathered}\mathop {\lim \inf }\limits_{n \to \infty } \left( {\frac{{\int\limits_E {{{\left| f \right|}^{n + 1}}g} }}{{\int\limits_E {{{\left| f \right|}^n}g} }}} \right) \geqslant \mathop {\lim \inf }\limits_{n \to \infty } \left( {\frac{{\int\limits_{{F_\alpha }} {{{\left| f \right|}^{n + 1}}g} + \int\limits_{{E_\alpha }} {{{\left| f \right|}^{n + 1}}g} }}{{\int\limits_{{E_\alpha }} {{{\left| f \right|}^n}g} + \int\limits_{{F_\alpha }} {{{\left| f \right|}^n}g} }}} \right) \hfill \\\qquad \geqslant \mathop {\lim \inf }\limits_{n \to \infty } \left( {\frac{{\int\limits_{{F_\alpha }} {{{\left| f \right|}^{n + 1}}g} + \alpha \int\limits_{{E_\alpha }} {{{\left| f \right|}^n}g} }}{{\int\limits_{{E_\alpha }} {{{\left| f \right|}^n}g} + \int\limits_{{F_\alpha }} {{{\left| f \right|}^n}g} }}} \right) \hfill \\\qquad = \mathop {\lim \inf }\limits_{n \to \infty } \left( {\frac{{\int\limits_{{F_\alpha }} {{{\left| f \right|}^{n + 1}}g} }}{{\int\limits_{{E_\alpha }} {{{\left| f \right|}^n}g} }} + \alpha } \right)/\left( {1 + \frac{{\int\limits_{{F_\alpha }} {{{\left| f \right|}^n}g} }}{{\int\limits_{{E_\alpha }} {{{\left| f \right|}^n}g} }}} \right) \hfill \\ \qquad = \alpha . \hfill \\ \end{gathered}

On the other hand, since {\left| f \right|^{n + 1}} \leqslant {\left\| f \right\|_\infty }{\left| f \right|^n} almost everywhere, we always have a_{n+1}/a_n \leqslant \|f\|_\infty. Letting \alpha \nearrow {\left\| f \right\|_\infty } in the lower bound above, we get that

\displaystyle\mathop{\lim }\limits_{n\to\infty }\left({\frac{{\int_{E}{{{\left| f\right|}^{n+1}g}}}}{{\int_{E}{{{\left| f\right|}^{n}g}}}}}\right) ={\left\| f\right\|_\infty }.

As an application, if we put a_0 = 1, then from

\displaystyle {a_{n + 1}} = \frac{{{a_1}}} {{{a_0}}} \cdot \frac{{{a_2}}} {{{a_1}}} \cdots\frac{{{a_{n + 1}}}} {{{a_n}}}

we deduce, using the standard fact that \lim_{n \to \infty} a_{n+1}/a_n = L implies \lim_{n \to \infty} \sqrt[n]{a_n} = L, that

\displaystyle\mathop{\lim }\limits_{n\to\infty }\sqrt[n]{{{a_{n}}}}=\mathop{\lim }\limits_{n\to\infty }\frac{{{a_{n+1}}}}{{{a_{n}}}}={\left\| f\right\|_\infty }.

In other words,

\displaystyle\mathop{\lim }\limits_{n\to\infty }{\left({\int_{E}{{{\left| f\right|}^{n}g}}}\right)^{\frac{1}{n}}}={\left\| f\right\|_\infty }.
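As a quick numerical sanity check (not part of the proof), one can approximate a_n for a concrete one-dimensional choice of f and g; the functions f(x) = 2/(1+x^2), with \|f\|_\infty = 2, and g(x) = e^{-x^2} below are our own choices.

```python
import numpy as np

# Sanity check in dimension one (our own choice of f and g):
# f(x) = 2/(1+x^2) has essential sup 2; g(x) = exp(-x^2) is positive and integrable.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
f = 2.0 / (1.0 + x**2)
g = np.exp(-x**2)

def a(n):
    """Riemann-sum approximation of a_n = int |f|^n g."""
    return np.sum(f**n * g) * dx

n = 400
ratio = a(n + 1) / a(n)   # should approach ||f||_inf = 2 from below
root = a(n) ** (1.0 / n)  # should also approach 2, more slowly

print(ratio, root)
```

Both the ratio a_{n+1}/a_n and the root \sqrt[n]{a_n} approach 2, the latter noticeably more slowly, which is consistent with the telescoping argument above.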

November 26, 2009

R-G: Scalar curvature

Filed under: Riemannian geometry — Ngô Quốc Anh @ 20:51

In Riemannian geometry, the scalar curvature (or Ricci scalar) is the simplest curvature invariant of a Riemannian manifold. To each point on a Riemannian manifold, it assigns a single real number determined by the intrinsic geometry of the manifold near that point. Specifically, the scalar curvature represents the amount by which the volume of a geodesic ball in a curved Riemannian manifold deviates from that of the standard ball in Euclidean space. In two dimensions, the scalar curvature is twice the Gaussian curvature, and completely characterizes the curvature of a surface. In more than two dimensions, however, the curvature of Riemannian manifolds involves more than one functionally independent quantity.

In general relativity, the scalar curvature is the Lagrangian density for the Einstein–Hilbert action. The Euler–Lagrange equations for this Lagrangian under variations in the metric constitute the vacuum Einstein field equations, and the stationary metrics are known as Einstein metrics. The scalar curvature is defined as the trace of the Ricci tensor, and it can be characterized as a multiple of the average of the sectional curvatures at a point. Unlike the Ricci tensor and sectional curvature, however, global results involving only the scalar curvature are extremely subtle and difficult. One of the few is the positive mass theorem of Richard Schoen, Shing-Tung Yau and Edward Witten. Another is the Yamabe problem, which seeks extremal metrics in a given conformal class for which the scalar curvature is constant.

Definition. The scalar curvature is the function S defined as the trace of the Ricci tensor.

Since the Ricci tensor is a (0,2)-tensor field, in local coordinates one has

S = {\rm Trace}( {\rm Ric}) = g^{jk}R_{jk}.

Theorem (Contracted Bianchi Identity). The covariant derivatives of the Ricci and scalar curvatures satisfy the following identity

\displaystyle {\rm div} {\rm Ric} = \frac{1}{2} \nabla S.

Example 1. We again work on the two-dimensional spherical surface of radius R whose metric is

\displaystyle \left( {{g_{ij}}} \right) = {R^2}\left( {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & {{{\sin }^2}\theta } \\ \end{array} } \right)

as in the previous topic. Then

\displaystyle S = {g^{jk}}{R_{jk}} = {g^{11}}{R_{11}} + {g^{22}}{R_{22}} = \frac{2}{{{R^2}}}.

Example 2. We now work with the two-dimensional space-like “upper hyperboloid” of the Minkowski space whose metric is

\displaystyle {\left( {ds} \right)^2} = \frac{{{R^2}}}{{{r^2} + {R^2}}}{\left( {dr} \right)^2} + {r^2}{\left( {d\phi } \right)^2}

that is

\displaystyle \left( {{g_{ij}}} \right) = \left( {\begin{array}{*{20}{c}} {\frac{{{R^2}}}{{{r^2} + {R^2}}}} & 0 \\ 0 & {{r^2}} \\ \end{array} } \right),\left( {{g^{ij}}} \right) = \left( {\begin{array}{*{20}{c}} {\frac{{{r^2} + {R^2}}}{{{r^2}}}} & 0 \\ 0 & {\frac{1}{{{r^2}}}} \\ \end{array} } \right).


\displaystyle {R_{11}} = - \frac{1}{{{r^2} + {R^2}}},{R_{12}} = {R_{21}} = 0,{R_{22}} = - \frac{{{r^2}}}{{{R^2}}}


\displaystyle S = - \frac{2}{{{R^2}}}.
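Both examples can be verified symbolically. The sketch below uses sympy together with the standard coordinate formulas for the Christoffel symbols and the Ricci tensor; the helper `scalar_curvature` is our own name and the code is only meant as a sanity check.

```python
import sympy as sp

def scalar_curvature(g, coords):
    """Scalar curvature S = g^{jk} R_{jk} of a metric g in the given coordinates,
    using the standard coordinate formulas for Gamma^k_{ij} and the Ricci tensor."""
    n = len(coords)
    ginv = g.inv()
    # Christoffel symbols Gamma[k][i][j] = Gamma^k_{ij}
    Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, i], coords[j])
                                 + sp.diff(g[l, j], coords[i])
                                 - sp.diff(g[i, j], coords[l])) for l in range(n)) / 2
               for j in range(n)] for i in range(n)] for k in range(n)]
    # Ricci tensor R_{ik} = d_j Gamma^j_{ik} - d_k Gamma^j_{ij}
    #                      + Gamma^j_{jm} Gamma^m_{ik} - Gamma^j_{km} Gamma^m_{ij}
    Ric = sp.zeros(n, n)
    for i in range(n):
        for k in range(n):
            Ric[i, k] = sp.simplify(sum(
                sp.diff(Gamma[j][i][k], coords[j]) - sp.diff(Gamma[j][i][j], coords[k])
                + sum(Gamma[j][j][m] * Gamma[m][i][k]
                      - Gamma[j][k][m] * Gamma[m][i][j] for m in range(n))
                for j in range(n)))
    # Trace with the inverse metric
    return sp.simplify(sum(ginv[i, k] * Ric[i, k] for i in range(n) for k in range(n)))

R, r, theta, phi = sp.symbols('R r theta phi', positive=True)

# Example 1: round sphere of radius R; expect S = 2/R^2
g_sphere = sp.diag(R**2, R**2 * sp.sin(theta)**2)
S_sphere = scalar_curvature(g_sphere, [theta, phi])

# Example 2: space-like upper hyperboloid; expect S = -2/R^2
g_hyp = sp.diag(R**2 / (r**2 + R**2), r**2)
S_hyp = scalar_curvature(g_hyp, [r, phi])
```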

R-G: Ricci curvature

Filed under: Riemannian geometry — Ngô Quốc Anh @ 19:56

In differential geometry, the Ricci curvature tensor, named after Gregorio Ricci-Curbastro, represents the amount by which the volume element of a geodesic ball in a curved Riemannian manifold deviates from that of the standard ball in Euclidean space. As such, it provides one way of measuring the degree to which the geometry determined by a given Riemannian metric might differ from that of ordinary Euclidean n-space. More generally, the Ricci tensor is defined on any pseudo-Riemannian manifold. Like the metric itself, the Ricci tensor is a symmetric bilinear form on the tangent space of the manifold.

The Ricci curvature is broadly applicable to modern Riemannian geometry and general relativity theory. In connection with the latter, it is, up to an overall trace term, the portion of the Einstein field equation representing the geometry of spacetime, the other significant portion of which comes from the presence of matter and energy. In connection with the former, lower bounds on the Ricci tensor on a Riemannian manifold allow one to extract global geometric and topological information by comparison (cf. comparison theorem) with the geometry of a constant curvature space form. If the Ricci tensor satisfies the vacuum Einstein equation, then the manifold is an Einstein manifold; such manifolds have been extensively studied (cf. Besse 1987). In this connection, the Ricci flow equation governs the evolution of a given metric to an Einstein metric; the precise manner in which this occurs ultimately leads to the solution of the Poincaré conjecture.

Definition. The Ricci curvature (or Ricci tensor) is a (0,2)-tensor field, denoted by \rm Ric, that is {\rm Ric} : TM \times TM \to \mathbb R, defined by

{\rm Ric}(X,Y) = {\rm Trace}( x \to R(x, X)Y).

In local coordinates, \rm Ric is of the form

{\rm Ric} = R_{ij} dx^i \otimes dx^j.

We assume that \frac{\partial}{\partial x^i}, where i=1,2,...,n, form a basis for T_pM; then

\displaystyle R\left( {\frac{\partial }{{\partial {x^i}}},X} \right)Y = {X^j}{Y^k}R\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^j}}}} \right)\frac{\partial }{{\partial {x^k}}} = {X^j}{Y^k}R_{kij}^l\frac{\partial }{{\partial {x^l}}}.


\displaystyle {\rm Trace}\left( {x \mapsto R\left( {x,X} \right)Y} \right) = {X^j}{Y^k}R_{kij}^i.

In other words,

\displaystyle {\rm Ric}\left( {\frac{\partial }{{\partial {x^j}}},\frac{\partial }{{\partial {x^k}}}} \right) = {R_{ab}}d{x^a} \otimes d{x^b}\left( {\frac{\partial }{{\partial {x^j}}},\frac{\partial }{{\partial {x^k}}}} \right) = {R_{jk}} = R_{kij}^i.

To be exact, one should read

\displaystyle {R_{jk}} = \sum\limits_i {R_{jik}^i},

which agrees with R_{kij}^i above by the symmetry of the Ricci tensor.

A simple calculation shows us that

\displaystyle {R_{jk}} = R_{jik}^i = {g^{il}}{g_{lm}}R_{jik}^m = {g^{il}}{R_{jlik}}.

Thus, the Ricci tensor can be thought of as a trace of the curvature tensor R_{jlik} with respect to the metric.

Example. For the two-dimensional spherical surface of radius R whose metric is

\displaystyle{\left( {ds} \right)^2} = {R^2}\left[ {{{\left( {d\theta } \right)}^2} + {{\sin }^2}\theta {{\left( {d\phi } \right)}^2}} \right]

we have

\displaystyle \left( {{g_{ij}}} \right) = {R^2}\left( {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & {{{\sin }^2}\theta } \\ \end{array} } \right), \qquad \left( {{g^{ij}}} \right) = \frac{1}{{{R^2}}}\left( {\begin{array}{*{20}{c}} 1 & 0 \\ 0 & {\frac{1}{{{{\sin }^2}\theta }}} \\ \end{array} } \right).


\displaystyle\begin{gathered} {R_{11}} = R_{111}^1 + R_{121}^2 = {g^{22}}{R_{2121}} = {g^{22}}{R_{1212}} = 1, \hfill \\ {R_{12}} = R_{112}^1 + R_{122}^2 = 0, \hfill \\ {R_{22}} = R_{212}^1 + R_{222}^2 = {g^{11}}{R_{1212}} = {\sin ^2}\theta , \hfill \\ {R_{21}} = {R_{12}} = 0. \hfill \\ \end{gathered}
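The computation above can be reproduced symbolically. The following sketch (using sympy; all names are ours) builds the components R^l_{ijk} from the Christoffel symbols of the sphere metric and contracts them as R_{jk} = \sum_i R^i_{jik}.

```python
import sympy as sp

R, theta, phi = sp.symbols('R theta phi', positive=True)
coords = [theta, phi]
g = sp.diag(R**2, R**2 * sp.sin(theta)**2)   # round sphere of radius R
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, i], coords[j]) + sp.diff(g[l, j], coords[i])
               - sp.diff(g[i, j], coords[l])) for l in range(n)) / 2
           for j in range(n)] for i in range(n)] for k in range(n)]

# R^l_{cab}, defined by R(d_a, d_b) d_c = R^l_{cab} d_l
def Rup(l, c, a, b):
    return (sp.diff(Gamma[l][b][c], coords[a]) - sp.diff(Gamma[l][a][c], coords[b])
            + sum(Gamma[l][a][m] * Gamma[m][b][c] - Gamma[l][b][m] * Gamma[m][a][c]
                  for m in range(n)))

# Ricci tensor via the contraction R_{jk} = sum_i R^i_{jik}
Ric = sp.Matrix(n, n, lambda j, k: sp.simplify(sum(Rup(i, j, i, k) for i in range(n))))
```

The result is {\rm Ric} = {\rm diag}(1, \sin^2\theta), matching the components displayed above.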

R-G: Sectional curvature

Filed under: Riemannian geometry — Ngô Quốc Anh @ 14:40

In Riemannian geometry, the sectional curvature is one of the ways to describe the curvature of Riemannian manifolds.

Definition. The sectional curvature of the plane spanned by the (linearly independent) tangent vectors X, Y \in T_xM of the Riemannian manifold M is

\displaystyle K\left( {X,Y} \right) = \frac{{\left\langle {R\left( {X,Y} \right)Y,X} \right\rangle }}{{\left\langle {X,X} \right\rangle \left\langle {Y,Y} \right\rangle - {{\left\langle {X,Y} \right\rangle }^2}}}.

In local coordinates, if

\displaystyle X = {X^i}\frac{\partial }{{\partial {x^i}}}, \quad Y = {Y^j}\frac{\partial }{{\partial {x^j}}}

we then have

\displaystyle R\left( {X,Y} \right)Y = {X^i}{Y^j}{Y^k}R\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^j}}}} \right)\frac{\partial }{{\partial {x^k}}} = {X^i}{Y^j}{Y^k}R_{kij}^l\frac{\partial }{{\partial {x^l}}}

which implies

\displaystyle\begin{gathered} \left\langle {R\left( {X,Y} \right)Y,X} \right\rangle = {X^i}{Y^j}{Y^k}R_{kij}^l\left\langle {\frac{\partial }{{\partial {x^l}}},{X^m}\frac{\partial }{{\partial {x^m}}}} \right\rangle \hfill \\ \qquad= {X^i}{Y^j}{X^m}{Y^k}R_{kij}^l{g_{lm}} \hfill \\ \qquad= {R_{mkij}}{X^i}{Y^j}{X^m}{Y^k} \hfill \\ \qquad = {R_{ijmk}}{X^i}{Y^j}{X^m}{Y^k}. \hfill \\ \end{gathered}


\displaystyle\begin{gathered} \left\langle {X,X} \right\rangle \left\langle {Y,Y} \right\rangle - {\left\langle {X,Y} \right\rangle ^2} = {X^i}{X^m}{g_{im}}{Y^j}{Y^k}{g_{jk}} - {\left( {{X^\alpha }{Y^\beta }{g_{\alpha \beta }}} \right)^2} \hfill \\ \qquad= {X^i}{X^m}{g_{im}}{Y^j}{Y^k}{g_{jk}} - {X^\alpha }{Y^\beta }{g_{\alpha \beta }}{X^\gamma }{Y^\delta }{g_{\gamma \delta }} \hfill \\ \qquad= \left( {{g_{im}}{g_{jk}} - {g_{ij}}{g_{mk}}} \right){X^i}{X^m}{Y^j}{Y^k}. \hfill \\\end{gathered}


\displaystyle K\left( {X,Y} \right) = \frac{{{R_{ijmk}}{X^i}{Y^j}{X^m}{Y^k}}}{{\left( {{g_{im}}{g_{jk}} - {g_{ij}}{g_{mk}}} \right){X^i}{X^m}{Y^j}{Y^k}}}.

To be exact, without using the Einstein summation convention, one reads the above identity as follows

\displaystyle K\left( {X,Y} \right) = \frac{{\sum\limits_{ijmk} {{R_{ijmk}}{X^i}{Y^j}{X^m}{Y^k}} }}{{\sum\limits_{ijmk} {\left( {{g_{im}}{g_{jk}} - {g_{ij}}{g_{mk}}} \right){X^i}{X^m}{Y^j}{Y^k}} }}.

We refer the reader to this topic for examples. In addition, if we choose

\displaystyle {g_{ij}} = {\left( {\displaystyle\frac{2}{{1 - {{\left| y \right|}^2}}}} \right)^2}{\delta _{ij}}

then the sectional curvature of g is -1.
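This last claim can be checked quickly. For a conformal metric g = e^{2u}\delta_{ij} in two dimensions, the Gaussian curvature is given by the classical formula K = -e^{-2u}\Delta u (a standard fact we take for granted here), and in two dimensions the sectional curvature is exactly the Gaussian curvature. A sympy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Conformal factor: g_ij = e^{2u} delta_ij with e^u = 2/(1 - |y|^2), written in
# coordinates (x, y) on the unit disk x^2 + y^2 < 1.
u = sp.log(2 / (1 - x**2 - y**2))

# Gaussian curvature of a conformal metric: K = -e^{-2u} (u_xx + u_yy)
K = sp.simplify(-sp.exp(-2*u) * (sp.diff(u, x, 2) + sp.diff(u, y, 2)))
```

The output simplifies to the constant -1, as claimed.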

R-G: Hessian and Laplacian

Filed under: Riemannian geometry — Ngô Quốc Anh @ 1:17

For a given smooth function f on a manifold M, the gradient of f is given by

\displaystyle \nabla f = g^{kj} \dfrac{\partial f}{\partial x^j} \frac{\partial}{\partial x^k}.

Note that the gradient of f is itself a vector field on M. Thus, for each X \in TM, it is reasonable to talk about \nabla_X \nabla f.

Definition 1. The Hessian of f, denoted by {\rm Hess} f, is defined as the symmetric (0,2)-tensor

{\rm Hess} f (X,Y)=g(\nabla_X \nabla f, Y).

We also denote by f_{ij} the quantity

\displaystyle {\rm Hess} f \left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\right).


\displaystyle\begin{gathered} {f_{ij}} = g\left( {{\nabla _{\frac{\partial }{{\partial {x^i}}}}}\nabla f,\frac{\partial }{{\partial {x^j}}}} \right) = g\left( {{\nabla _{\frac{\partial }{{\partial {x^i}}}}}\left( {{g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}\frac{\partial }{{\partial {x^k}}}} \right),\frac{\partial }{{\partial {x^j}}}} \right) \hfill \\ \quad\; = g\left( {\frac{\partial }{{\partial {x^i}}}\left( {{g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^k}}} + {g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}{\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^j}}}} \right) \hfill \\ \quad\; = \frac{\partial }{{\partial {x^i}}}\left( {{g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}} \right)g\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^j}}}} \right) + {g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}g\left( {{\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^j}}}} \right) \hfill \\\quad\; = \frac{\partial }{{\partial {x^i}}}\left( {{g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}} \right){g_{kj}} + {g^{kl}}\frac{{\partial f}}{{\partial {x^l}}} \left[ \frac{1}{2}\left( {-\frac{{\partial {g_{ki}}}}{{\partial {x^j}}} + \frac{{\partial {g_{ij}}}}{{\partial {x^k}}} + \frac{{\partial {g_{kj}}}}{{\partial {x^i}}}} \right)\right]. \hfill\end{gathered}

Note that

\displaystyle \frac{\partial }{{\partial {x^i}}}\left( {{g^{kl}}\frac{{\partial f}}{{\partial {x^l}}}} \right){g_{kj}} = \frac{{\partial {g^{kl}}}}{{\partial {x^i}}}\frac{{\partial f}}{{\partial {x^l}}}{g_{kj}} + {g^{kl}}\frac{\partial }{{\partial {x^i}}}\left( {\frac{{\partial f}}{{\partial {x^l}}}} \right){g_{kj}} = \frac{{\partial {g^{kl}}}}{{\partial {x^i}}}\frac{{\partial f}}{{\partial {x^l}}}{g_{kj}} + \frac{{{\partial ^2}f}}{{\partial {x^i}\partial {x^j}}}.

Since 0=\frac{\partial}{\partial x^i}(g^{kl}g_{kj}) (as g^{kl}g_{kj} = \delta^l_j is constant), we obtain

\displaystyle\frac{{\partial {g^{kl}}}}{{\partial {x^i}}}\frac{{\partial f}}{{\partial {x^l}}}{g_{kj}} = - \frac{{\partial {g_{kj}}}}{{\partial {x^i}}}\frac{{\partial f}}{{\partial {x^l}}}{g^{kl}}

which implies

\displaystyle f_{ij} =\frac{{{\partial ^2}f}}{{\partial {x^i}\partial {x^j}}} - \Gamma _{ij}^m\frac{{\partial f}}{{\partial {x^m}}}.

Definition 2. Laplacian of f, denoted by \Delta f, is defined as the trace of {\rm Hess} f.

Since {\rm Hess} f is a (0,2)-tensor, in local coordinates one has

\displaystyle \Delta f = {g^{ij}}{f_{ij}}.
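As a sanity check of \Delta f = g^{ij}f_{ij}, with f_{ij} as computed above, the sketch below (sympy; the flat metric in polar coordinates (r, t) is our own choice of example) recovers the familiar expression f_{rr} + r^{-1}f_r + r^{-2}f_{tt}.

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
coords = [r, t]
g = sp.diag(1, r**2)          # flat metric in polar coordinates (r, t)
ginv = g.inv()
f = sp.Function('f')(r, t)
n = 2

# Christoffel symbols Gamma[m][i][j] = Gamma^m_{ij}
Gamma = [[[sum(ginv[m, l] * (sp.diff(g[l, i], coords[j]) + sp.diff(g[l, j], coords[i])
               - sp.diff(g[i, j], coords[l])) for l in range(n)) / 2
           for j in range(n)] for i in range(n)] for m in range(n)]

# f_ij = d^2 f/dx^i dx^j - Gamma^m_{ij} df/dx^m, then Delta f = g^{ij} f_ij
def fij(i, j):
    return sp.diff(f, coords[i], coords[j]) - sum(
        Gamma[m][i][j] * sp.diff(f, coords[m]) for m in range(n))

lap = sp.simplify(sum(ginv[i, j] * fij(i, j) for i in range(n) for j in range(n)))

# The classical Laplacian in polar coordinates, for comparison
expected = sp.diff(f, r, 2) + sp.diff(f, r)/r + sp.diff(f, t, 2)/r**2
```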

It is clear that \nabla X is a (1,1)-tensor field. To see this fact, one can assume X=X^i \frac{\partial}{\partial x^i}; then from

\displaystyle {\nabla _{\frac{\partial }{{\partial {x^j}}}}}\left( {{X^i}\frac{\partial }{{\partial {x^i}}}} \right) = \frac{{\partial {X^i}}}{{\partial {x^j}}}\frac{\partial }{{\partial {x^i}}} + {X^i}\Gamma _{ji}^l\frac{\partial }{{\partial {x^l}}}

one has

\displaystyle\nabla X = \left[ {\frac{{\partial {X^i}}}{{\partial {x^j}}}\frac{\partial }{{\partial {x^i}}} + {X^i}\Gamma _{ji}^l\frac{\partial }{{\partial {x^l}}}} \right] \otimes d{x^j}

since {\nabla _Y}X = \left\langle {Y,\nabla X} \right\rangle, which is exactly a (1,1)-tensor. We can then define the divergence of a vector field X as follows.

Definition 3. The divergence of a vector field X is given by

\displaystyle {\rm div} X = {\rm Trace}(\nabla X).

In coordinates, this is

\displaystyle {\rm div} X = dx^i \left( \nabla_{\frac{\partial}{\partial x^i}} X\right)

and with respect to an orthonormal basis

\displaystyle {\rm div} X =g\left( {{\nabla _{\frac{\partial }{{\partial {x^i}}}}}X,\frac{\partial }{{\partial {x^i}}}} \right).

Thus \Delta f = {\rm Trace}(\nabla(\nabla f)) = {\rm div}(\nabla f).

NOTICE: To avoid any confusion, from now on we denote the gradient of f by {\rm grad}f instead of \nabla f. This is because \nabla f is the covariant derivative of f, which is a (0,1)-tensor (a 1-form) rather than a vector field, as mentioned in this entry.

November 17, 2009

R-G: Bianchi identities

Filed under: Riemannian geometry — Ngô Quốc Anh @ 0:22

Recall that R_{ikl}^j is defined to be

\displaystyle R_{ikl}^j = \left\langle {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},d{x^j}} \right\rangle .


\displaystyle R_{ijkl}=g_{hj} R_{ikl}^h.

The way to understand R_{ijkl} is to look at the following 4-covariant tensor

R(X,Y,Z,T) = g(R(X,Y)Z, T).

As can be seen, the components of R(X,Y,Z,T) are R_{ijkl}.

We first obtain the following result.

Theorem 1. The curvature tensor R_{ijkl}  satisfies the following property

{R_{ijkl}} = - {R_{ijlk}} = - {R_{jikl}} .


The proof relies on the definition of the 4-covariant tensor above. To be precise, one has

\displaystyle g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^j}}}} \right) = g\left( {R_{ikl}^h\frac{\partial }{{\partial {x^h}}},\frac{\partial }{{\partial {x^j}}}} \right) = {g_{hj}}R_{ikl}^h = {R_{ijkl}}


\displaystyle g\left( {R\left( {\frac{\partial }{{\partial {x^l}}},\frac{\partial }{{\partial {x^k}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^j}}}} \right) = {R_{ijlk}}.


\displaystyle R\left( {\frac{\partial }{{\partial {x^l}}},\frac{\partial }{{\partial {x^k}}}} \right)\frac{\partial }{{\partial {x^i}}} = - R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}}

then {R_{ijkl}} = - {R_{ijlk}}. The displayed identity comes from the definition of the curvature tensor and the fact that

\displaystyle\left[ {\frac{\partial }{{\partial {x^m}}},\frac{\partial }{{\partial {x^n}}}} \right] = 0.

Similarly, for the latter case, one can argue as follows

\displaystyle \begin{gathered} g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) \hfill \\ \qquad\qquad= g\left( {{\nabla _{\frac{\partial }{{\partial {x^k}}}}}{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) - g\left( {{\nabla _{\frac{\partial }{{\partial {x^l}}}}}{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) - g\left( {{\nabla _{\left[ {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right]}}\frac{\partial }{\partial x^i},\frac{\partial }{{\partial {x^i}}}} \right) \hfill \\ \end{gathered}.

We now use the fact that \nabla is a metric connection. Indeed,

\displaystyle \begin{gathered} \;\;\; g\left( {{\nabla _{\frac{\partial }{{\partial {x^k}}}}}{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) = \frac{\partial }{{\partial {x^k}}}g\left( {{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) - g\left( {{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}},{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}}} \right) \hfill \\ - g\left( {{\nabla _{\frac{\partial }{{\partial {x^l}}}}}{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) = - \frac{\partial }{{\partial {x^l}}}g\left( {{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) + g\left( {{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}},{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}}} \right) \hfill \\ - g\left( {{\nabla _{\left[ {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right]}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) = - \left[ {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right]g\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) + g\left( {\frac{\partial }{{\partial {x^i}}},{\nabla _{\left[ {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right]}}\frac{\partial }{{\partial {x^i}}}} \right). \hfill \\\end{gathered}


\displaystyle \begin{gathered} g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) \hfill \\ \qquad\qquad= \frac{\partial }{{\partial {x^k}}}g\left( {{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) - \frac{\partial }{{\partial {x^l}}}g\left( {{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) - \frac{1}{2}\left[ {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right]g\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) = 0. \hfill \\\end{gathered}


\displaystyle g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^i}}}} \right) = 0.

The above identity also holds if we replace \frac{\partial}{\partial x^i} by a vector field X. Thus

\displaystyle g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\left( {\frac{\partial }{{\partial {x^i}}} + \frac{\partial }{{\partial {x^j}}}} \right),\frac{\partial }{{\partial {x^i}}} + \frac{\partial }{{\partial {x^j}}}} \right) = 0

which implies

\displaystyle g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^j}}}} \right) = - g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^j}}},\frac{\partial }{{\partial {x^i}}}} \right).

Therefore, {R_{ijkl}} = - {R_{jikl}} .

Corollary 1. R(X,Y)Z=-R(Y,X)Z and \left\langle {R\left( {X,Y} \right)Z,W} \right\rangle = - \left\langle {R\left( {X,Y} \right)W,Z} \right\rangle.

Theorem 2 (the first Bianchi identity). The curvature tensor R_{ijkl}  satisfies the following property

{R_{ijkl}} + {R_{iklj}} + {R_{iljk}} = 0.

Proof. Since

\displaystyle R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}} = {\nabla _{\frac{\partial }{{\partial {x^k}}}}}{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}} - {\nabla _{\frac{\partial }{{\partial {x^l}}}}}{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}} - \underbrace {{\nabla _{\left[ {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right]}}\frac{\partial }{{\partial {x^i}}}}_0


\displaystyle R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}} = {\nabla _{\frac{\partial }{{\partial {x^k}}}}}{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}} - {\nabla _{\frac{\partial }{{\partial {x^l}}}}}{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}}.


\displaystyle \begin{gathered} R\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^k}}}} \right)\frac{\partial }{{\partial {x^l}}} = {\nabla _{\frac{\partial }{{\partial {x^i}}}}}{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^l}}} - {\nabla _{\frac{\partial }{{\partial {x^k}}}}}{\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^l}}}, \hfill \\ R\left( {\frac{\partial }{{\partial {x^l}}},\frac{\partial }{{\partial {x^i}}}} \right)\frac{\partial }{{\partial {x^k}}} = {\nabla _{\frac{\partial }{{\partial {x^l}}}}}{\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^k}}} - {\nabla _{\frac{\partial }{{\partial {x^i}}}}}{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^k}}}. \hfill \\ \end{gathered}.

Since \nabla is torsion free, one gets

\displaystyle {\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}} = {\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^l}}}, \quad {\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}} = {\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^k}}}, \quad {\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^l}}} = {\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^k}}}.

As a consequence,

\displaystyle R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}} + R\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^k}}}} \right)\frac{\partial }{{\partial {x^l}}} + R\left( {\frac{\partial }{{\partial {x^l}}},\frac{\partial }{{\partial {x^i}}}} \right)\frac{\partial }{{\partial {x^k}}} = 0.


\displaystyle g\left( {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^j}}}} \right) + g\left( {R\left( {\frac{\partial }{{\partial {x^i}}},\frac{\partial }{{\partial {x^k}}}} \right)\frac{\partial }{{\partial {x^l}}},\frac{\partial }{{\partial {x^j}}}} \right) + g\left( {R\left( {\frac{\partial }{{\partial {x^l}}},\frac{\partial }{{\partial {x^i}}}} \right)\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^j}}}} \right) = 0

which implies

\displaystyle {R_{ijkl}} + {R_{ljik}} + {R_{kjli}} = 0.

Interchanging the roles of i and j, we then obtain

\displaystyle \underbrace {{R_{ijkl}}}_{{R_{jikl}}} + \underbrace {{R_{ljik}}}_{{R_{lijk}}} + \underbrace {{R_{kjli}}}_{{R_{kilj}}} = 0

which implies, by using Theorem 1,

\displaystyle - {R_{ijkl}} - {R_{iljk}} - {R_{iklj}} = 0.

Corollary 2. R\left( {X,Y} \right)Z + R\left( {Z,X} \right)Y + R\left( {Y,Z} \right)X = 0.

Corollary 3. It follows from the proof of Theorem 2, by pairing both sides with dx^m, that

\displaystyle R_{ikl}^m + R_{lik}^m + R_{kli}^m = 0.

Theorem 3. The curvature tensor R_{ijkl} satisfies the following property

{R_{ijkl}} = {R_{klij}}.
Proof. By the first Bianchi identity,

\displaystyle \begin{gathered} {R_{ijkl}} + {R_{iljk}} + {R_{iklj}} = 0, \hfill \\ {R_{jikl}} + {R_{jlik}} + {R_{jkli}} = 0, \hfill \\ \end{gathered}

which implies

\displaystyle 2{R_{ijkl}} + {R_{iljk}} - {R_{jlik}} + {R_{iklj}} - {R_{jkli}} = 0.


\displaystyle 2{R_{ijkl}} + {R_{iljk}} + {R_{ikjl}} + {R_{iklj}} + {R_{lijk}} = 0.

Similarly, by changing i \to k, j \to l, k \to i and l \to j one gets

\displaystyle 2{R_{klij}} + {R_{kjli}} + {R_{kilj}} + {R_{kijl}} + {R_{jkli}} = 0.

Hence R_{ijkl}=R_{klij} by using Theorem 1.
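Theorems 1-3 and the first Bianchi identity can be spot-checked symbolically on the round sphere metric used in earlier entries. The sketch below (sympy; all names are ours, and the lowering convention follows R_{ijkl} = g_{hj}R^h_{ikl} above) verifies every component identity in dimension two.

```python
import sympy as sp
from itertools import product

R, theta, phi = sp.symbols('R theta phi', positive=True)
coords = [theta, phi]
g = sp.diag(R**2, R**2 * sp.sin(theta)**2)   # round sphere of radius R
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, i], coords[j]) + sp.diff(g[l, j], coords[i])
               - sp.diff(g[i, j], coords[l])) for l in range(n)) / 2
           for j in range(n)] for i in range(n)] for k in range(n)]

# R^h_{ikl}, defined by R(d_k, d_l) d_i = R^h_{ikl} d_h
def Rup(h, i, k, l):
    return (sp.diff(Gamma[h][l][i], coords[k]) - sp.diff(Gamma[h][k][i], coords[l])
            + sum(Gamma[h][k][m] * Gamma[m][l][i] - Gamma[h][l][m] * Gamma[m][k][i]
                  for m in range(n)))

# Lowered curvature tensor R_{ijkl} = g_{hj} R^h_{ikl}
Rlow = {idx: sp.simplify(sum(g[h, idx[1]] * Rup(h, idx[0], idx[2], idx[3])
                             for h in range(n)))
        for idx in product(range(n), repeat=4)}

# Theorem 1: antisymmetry in each index pair
ok_antisym = all(sp.simplify(Rlow[i, j, k, l] + Rlow[i, j, l, k]) == 0
                 and sp.simplify(Rlow[i, j, k, l] + Rlow[j, i, k, l]) == 0
                 for i, j, k, l in product(range(n), repeat=4))
# Theorem 3: pair symmetry
ok_pair = all(sp.simplify(Rlow[i, j, k, l] - Rlow[k, l, i, j]) == 0
              for i, j, k, l in product(range(n), repeat=4))
# Theorem 2: first Bianchi identity
ok_bianchi = all(sp.simplify(Rlow[i, j, k, l] + Rlow[i, k, l, j] + Rlow[i, l, j, k]) == 0
                 for i, j, k, l in product(range(n), repeat=4))
```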

Theorem 4 (the second Bianchi identity). The curvature tensor R_{ijkl}  satisfies the following property

{R_{ijkl,h}} + {R_{ijlh,k}} + {R_{ijhk,l}} = 0.

Proof. One can use normal coordinates in order to simplify the calculation. Indeed, normal coordinates tell us that at a given point

g_{ij}=\delta_{ij} and g_{ij,k}=\Gamma_{ij}^k=0

for all i, j, k. Thus,

\displaystyle\begin{gathered} R_{ikl}^h\frac{\partial }{{\partial {x^h}}} \;= R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}} = {\nabla _{\frac{\partial }{{\partial {x^k}}}}}{\nabla _{\frac{\partial }{{\partial {x^l}}}}}\frac{\partial }{{\partial {x^i}}} - {\nabla _{\frac{\partial }{{\partial {x^l}}}}}{\nabla _{\frac{\partial }{{\partial {x^k}}}}}\frac{\partial }{{\partial {x^i}}} \hfill \\ \qquad\qquad = {\nabla _{\frac{\partial }{{\partial {x^k}}}}}\left( {\Gamma _{li}^m\frac{\partial }{{\partial {x^m}}}} \right) - {\nabla _{\frac{\partial }{{\partial {x^l}}}}}\left( {\Gamma _{ki}^n\frac{\partial }{{\partial {x^n}}}} \right) = \frac{{\partial \Gamma _{li}^m}}{{\partial {x^k}}}\frac{\partial }{{\partial {x^m}}} - \frac{{\partial \Gamma _{ki}^n}}{{\partial {x^l}}}\frac{\partial }{{\partial {x^n}}} \hfill \\\qquad\qquad = \left( {\frac{{\partial \Gamma _{li}^m}}{{\partial {x^k}}} - \frac{{\partial \Gamma _{ki}^m}}{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^m}}} \hfill \\ \end{gathered}

which implies

\displaystyle \begin{gathered}R_{ikl}^h \;= \frac{{\partial \Gamma _{li}^h}}{{\partial {x^k}}} - \frac{{\partial \Gamma _{ki}^h}}{{\partial {x^l}}} \hfill \\ \qquad= \frac{\partial }{{\partial {x^k}}}\left( {\frac{1}{2}{g^{hm}}\left( {{g_{ml,i}} + {g_{mi,l}} - {g_{li,m}}} \right)} \right) - \frac{\partial }{{\partial {x^l}}}\left( {\frac{1}{2}{g^{hn}}\left( {{g_{nk,i}} + {g_{ni,k}} - {g_{ki,n}}} \right)} \right) \hfill \\ \qquad= \frac{1}{2}{g^{hm}}\left( {{g_{ml,ik}} + {g_{mi,lk}} - {g_{mk,il}} - {g_{mi,kl}} - {g_{li,mk}} + {g_{ki,ml}}} \right) \hfill \\ \qquad = \frac{1}{2}{g^{hm}}\left( {{g_{ml,ik}} - {g_{mk,il}} - {g_{li,mk}} + {g_{ki,ml}}} \right). \hfill \\ \end{gathered}


\displaystyle {R_{ijkl}} = {g_{jh}}R_{ikl}^h = \frac{1}{2}\left( {{g_{jl,ik}} - {g_{jk,il}} - {g_{li,jk}} + {g_{ki,jl}}} \right)

which implies

\displaystyle{R_{ijkl,h}} = \frac{1}{2}\left( {{g_{jl,ikh}} - {g_{jk,ilh}} - {g_{li,jkh}} + {g_{ki,jlh}}} \right).

Similarly, we can write down R_{ijlh,k} and R_{ijhk,l}; at the chosen point these covariant derivatives coincide with partial derivatives since the Christoffel symbols vanish there. Summing up, we get the desired result.

November 16, 2009

R-G: Levi-Civita connection

Filed under: Riemannian geometry — Ngô Quốc Anh @ 2:39

Suppose M is a differentiable manifold of dimension n.

Connection on vector bundles

Definition 1. A connection on a vector bundle E is a map

D : \Gamma(E) \to \Gamma(T^\star(M) \otimes E)

which satisfies the following conditions

  • For any s_1, s_2 \in \Gamma(E), D(s_1+s_2)=Ds_1 + Ds_2.
  • For s \in \Gamma(E) and any \alpha \in C^\infty(M), D(\alpha s)=d\alpha \otimes s + \alpha Ds.

If X is a tangent vector field on M (i.e. a section of the tangent bundle TM) one can define a covariant derivative along X, denoted by D_X, as follows

{D_X}s = \left\langle {X,Ds} \right\rangle

where \left\langle \cdot, \cdot \right\rangle represents the pairing between TM and T^\star M.

Locally, a connection is given by a set of differential 1-forms. Suppose U is a coordinate neighborhood of M with local coordinates x^i, 1 \leq i \leq n. Choose q smooth sections s_\alpha of E on U, where q is the rank of E, such that they are linearly independent at every point. Such a set of q sections is called a local frame field of E on U. It is obvious that at every point P \in U

\displaystyle \{ dx^i \otimes s_\alpha, 1 \leq i \leq n, 1 \leq \alpha \leq q\}

forms a basis for the tensor space T_P^\star(M) \otimes E_P. Because Ds_\alpha is a local section of T^\star(M) \otimes E on U, we can write

\displaystyle D{s_\alpha } = \Gamma _{\alpha i}^\beta d{x^i} \otimes {s_\beta }

where \Gamma_{\alpha i}^\beta are smooth functions on U. Denote \omega _\alpha ^\beta = \Gamma _{\alpha i}^\beta d{x^i} then D{s_\alpha } = \omega _\alpha ^\beta \otimes {s_\beta }.

Definition 2 (curvature operator). Suppose X, Y are two arbitrary smooth tangent vector fields on the manifold M. Then

\displaystyle R(X, Y) = D_XD_Y - D_YD_X - D_{[X,Y]}

is the curvature operator of the connection D.

Obviously, R(X,Y) has the following properties

  • R(X,Y)=-R(Y,X),
  • R(fX, Y)=f \cdot R(X,Y),
  • R(X,Y)(fs)=f \cdot (R(X,Y)s),

where X, Y \in \Gamma(TM), f \in C^\infty(M) and s \in \Gamma(E).

Connection on tangent bundles (affine connections)

A tangent bundle TM is an n-dimensional vector bundle determined intrinsically by the differentiable structure of an n-dimensional smooth manifold M. A connection on TM is called an affine connection on M. An affine connection is usually denoted by \nabla.

Definition 3 (curvature tensor). The curvature tensor is a (1,3)-tensor defined by

R(X,Y)Z = D_XD_YZ - D_YD_XZ - D_{[X,Y]}Z.

In local coordinates, the curvature tensor is given by

R = R_{ikl}^j\dfrac{\partial }{{\partial {x^j}}} \otimes d{x^i} \otimes d{x^k} \otimes d{x^l}.

A simple calculation shows us that

\displaystyle R_{ikl}^j = \left\langle {R\left( {\frac{\partial }{{\partial {x^k}}},\frac{\partial }{{\partial {x^l}}}} \right)\frac{\partial }{{\partial {x^i}}},d{x^j}} \right\rangle .

Definition 4 (torsion tensor). The torsion tensor is a (1,2)-tensor defined by

T(X,Y) = D_XY - D_YX - [X,Y].

In local coordinates, the torsion tensor is given by

\displaystyle T = T_{ij}^k\frac{\partial }{{\partial {x^k}}}\otimes d{x^i} \otimes d{x^j}.

A simple calculation shows us that

\displaystyle T_{ij}^k = \Gamma _{ji}^k - \Gamma _{ij}^k.

Definition 5 (torsion free). If the torsion tensor of an affine connection D is zero, then the connection is said to be torsion free.

When M is a Riemannian manifold with metric g then we have the following definition

Definition 6 (Levi-Civita connection). An affine connection \nabla is called a Levi-Civita connection if:

  • It preserves the metric, i.e., for any vector fields X, Y, Z we have

    X(g(Y,Z))=g(\nabla_X Y,Z) + g(Y, \nabla_X Z)

    where X(g(Y,Z)) denotes the derivative of the function g(Y,Z) along the vector field X.

  • It is torsion free.

The first condition above is called metric connection condition. Thus, the Levi-Civita connection is the torsion free metric connection, i.e., the torsion free connection on the tangent bundle (an affine connection) preserving a given Riemannian metric.

There is a theorem in the literature (the fundamental theorem of Riemannian geometry) saying that the Levi-Civita connection exists and is unique, and that it is given by the following identity, known as the Koszul formula:

\displaystyle g({\nabla _X}Y,W) = \frac{1}{2}\left( {X(g(Y,W)) + Y(g(X,W)) - W(g(X,Y)) + g([X,Y],W) + g([W,X],Y) - g([Y,W],X)} \right).
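Specializing this identity to coordinate vector fields X=\frac{\partial}{\partial x^i}, Y=\frac{\partial}{\partial x^j}, W=\frac{\partial}{\partial x^l}, whose Lie brackets all vanish, gives

```latex
g\left( \nabla_{\frac{\partial}{\partial x^i}} \frac{\partial}{\partial x^j},
        \frac{\partial}{\partial x^l} \right)
= \frac{1}{2}\left( g_{jl,i} + g_{il,j} - g_{ij,l} \right),
```

and contracting with the inverse metric g^{kl} recovers the coordinate formula for \Gamma_{ij}^k stated below.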

In local coordinates, the Levi-Civita connection \nabla is given by

\displaystyle {\nabla _{\frac{\partial }{{\partial {x^i}}}}}\frac{\partial }{{\partial {x^j}}} = \Gamma _{ij}^k\frac{\partial }{{\partial {x^k}}}

where \Gamma _{ij}^k are called Christoffel symbols which are determined by

\displaystyle \Gamma _{ij}^k = \frac{1}{2}{g^{kl}}\left( {{g_{il,j}} + {g_{jl,i}} - {g_{ij,l}}} \right)

where {g_{,m}} = \frac{{\partial g}}{{\partial {x^m}}}.
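As a quick numerical sanity check of this formula (an illustrative sketch of mine, not from the post), one can compute the Christoffel symbols of the round unit 2-sphere, whose metric in coordinates (x^1, x^2) = (\theta, \varphi) is g = {\rm diag}(1, \sin^2\theta), and compare against the classical values.

```python
import math

# Metric of the round unit 2-sphere, coordinates (x^1, x^2) = (theta, phi).
def metric(x):
    th, _ = x
    return [[1.0, 0.0], [0.0, math.sin(th) ** 2]]

def metric_inv(x):
    th, _ = x
    return [[1.0, 0.0], [0.0, 1.0 / math.sin(th) ** 2]]

def dg(x, m, h=1e-6):
    """g_{ij,m}: central-difference partial derivative of the metric."""
    xp, xm = list(x), list(x)
    xp[m] += h
    xm[m] -= h
    gp, gm = metric(xp), metric(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)]
            for i in range(2)]

def christoffel(x, k, i, j):
    """Gamma^k_{ij} = (1/2) g^{kl} (g_{il,j} + g_{jl,i} - g_{ij,l})."""
    d = [dg(x, m) for m in range(2)]  # d[m][i][j] = g_{ij,m}
    ginv = metric_inv(x)
    return 0.5 * sum(ginv[k][l] * (d[j][i][l] + d[i][j][l] - d[l][i][j])
                     for l in range(2))

th = 0.7
x = (th, 0.3)
# Classical values on the sphere: Gamma^theta_{phi phi} = -sin(th) cos(th)
# and Gamma^phi_{theta phi} = cot(th).
print(christoffel(x, 0, 1, 1), -math.sin(th) * math.cos(th))
print(christoffel(x, 1, 0, 1), math.cos(th) / math.sin(th))
```

The finite-difference values agree with the closed-form ones to the accuracy of the difference scheme.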

R-G: Tangent space, gradient

Filed under: Riemannian geometry — Ngô Quốc Anh @ 0:20

Let’s start with a differentiable manifold M of dimension n. Throughout this topic, we denote by P a point on M and (M,\varphi) its local chart (at P). A point P is determined by \varphi(P) hence it is often identified with \varphi(P). We usually denote by \varphi(P)=\{ x^i\} \in \mathbb R^n the local coordinates of P.

Definition 1. A tangent vector at P is a map X : f \mapsto X(f) \in \mathbb R defined on the set of the differentiable functions in a neighborhood of P, where X satisfies the following conditions

  • X is linear, that is to say: if \lambda, \mu \in \mathbb R, then X(\lambda f + \mu g)=\lambda X(f) + \mu X(g).
  • X(f)=0 if f is flat at P, i.e. d(f \circ \varphi^{-1})=0 at \varphi(P).
  • X(fg)=f(P)X(g)+g(P)X(f).

Definition 2. The tangent space T_P(M) at P is the set of tangent vectors at P.

From Definition 1, let us show that the tangent space of Definition 2 has a natural vector space structure of dimension n. We set

(X+Y)(f) = X(f)+Y(f) and (\lambda X)(f)=\lambda X(f).

With this sum and this product, T_P(M) is a vector space. Let us now exhibit a basis. It is natural to consider the tangent vector \dfrac{\partial}{\partial x^i} at P. Precisely,

Definition 3. The tangent vector \dfrac{\partial}{\partial x^i} at P is defined to be

\displaystyle\frac{\partial }{{\partial {x^i}}}\left( f \right) = \left( {\frac{\partial }{{\partial {x^i}}}\left( {f \circ {\varphi ^{ - 1}}} \right)} \right){\bigg|_{\varphi (P)}}.

The vectors \dfrac{\partial}{\partial x^i} are linearly independent and they form a basis for T_P(M). We usually call \dfrac{\partial f}{\partial x^i} the directional derivative of f in the direction x^i. For an arbitrary vector X, one can define the directional derivative of f in the direction X as follows

\displaystyle{\partial _X}(f) = X(f) = {X^i}\frac{{\partial f}}{{\partial {x^i}}}

where X^i denotes the i-th component of X in this coordinate chart.

We now assume further that (M, g) is a Riemannian manifold where g is its metric. We are now in a position to define the gradient of a smooth function.

Definition 4. For any smooth function f on a Riemannian manifold (M, g), the gradient of f is the vector field \nabla f such that for any vector field X,

\displaystyle g(\nabla f,X) = {\partial _X}f,   i.e.   \displaystyle {g_P}({(\nabla f)_P},{X_P}) = ({\partial _X}f)(P)

where g_P(\cdot, \cdot) denotes the inner product of tangent vectors at P defined by the metric g.

We now express the local form of the gradient at P. By definition 4, one has in the local coordinates

\displaystyle {g_P}({(\nabla f)_P},{X_P}) = {X^i}\frac{{\partial f}}{{\partial {x^i}}}.

If we assume \displaystyle {(\nabla f)_P} = {Y^i}\frac{\partial }{{\partial {x^i}}} we then have

\displaystyle {g_P}({(\nabla f)_P},{X_P}) = {g_P}\left( {{Y^i}\frac{\partial }{{\partial {x^i}}},{X^j}\frac{\partial }{{\partial {x^j}}}} \right) = {Y^i}{X^j}{g_{ij}}.


Hence

\displaystyle {Y^i}{X^j}{g_{ij}} = {X^i}\frac{{\partial f}}{{\partial {x^i}}}

and, since X is arbitrary, multiplying both sides by the inverse matrix (g^{ij}) yields

\displaystyle {Y^i} = {g^{ij}}\frac{{\partial f}}{{\partial {x^j}}}.


Therefore,

\displaystyle {(\nabla f)_P} = {g^{ij}}\frac{{\partial f}}{{\partial {x^j}}}\frac{\partial }{{\partial {x^i}}}.

We end this topic by computing |\nabla f|. Roughly speaking, at a point P, since \nabla f is a vector, |\nabla f| is nothing but its magnitude. To be exact, one defines

\displaystyle \left| {\nabla f} \right| = \sqrt {{g_{ij}}{Y^i}{Y^j}} = \sqrt {{g_{ij}}\left( {{g^{im}}\frac{{\partial f}}{{\partial {x^m}}}} \right)\left( {{g^{jn}}\frac{{\partial f}}{{\partial {x^n}}}} \right)} = \sqrt {{g^{mn}}\frac{{\partial f}}{{\partial {x^m}}}\frac{{\partial f}}{{\partial {x^n}}}} .

In Riemannian geometry, the lower index means differentiation and the upper index means component, therefore, we usually use f_k to denote the quantity \frac{\partial f}{\partial x^k}. With this notation, \left| {\nabla f} \right|=\sqrt{g^{mn}f_mf_n}.
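The formula |\nabla f|^2 = g^{mn}f_mf_n does not depend on the choice of coordinates. As an illustrative check of mine (not from the post), take f(x, y) = x^2 + y on the plane, where the Euclidean answer is |\nabla f|^2 = 4x^2 + 1, and recompute it in polar coordinates (r, \theta), whose metric is g = {\rm diag}(1, r^2), so g^{-1} = {\rm diag}(1, 1/r^2).

```python
import math

def grad_norm_sq_polar(r, theta, h=1e-6):
    """|grad f|^2 = g^{mn} f_m f_n in polar coordinates, f(x,y) = x^2 + y."""
    def f(r, theta):
        x, y = r * math.cos(theta), r * math.sin(theta)
        return x ** 2 + y
    f_1 = (f(r + h, theta) - f(r - h, theta)) / (2 * h)  # f_r
    f_2 = (f(r, theta + h) - f(r, theta - h)) / (2 * h)  # f_theta
    return f_1 ** 2 + f_2 ** 2 / r ** 2                  # g^{mn} f_m f_n

r, theta = 1.3, 0.4
x = r * math.cos(theta)
print(grad_norm_sq_polar(r, theta), 4 * x ** 2 + 1)  # the two values agree
```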


November 11, 2009

A non-existence result for positive solutions to the Lichnerowicz equation in R^N

Filed under: Nghiên Cứu Khoa Học, PDEs — Ngô Quốc Anh @ 22:36

In this topic, adapted from a paper due to Li Ma and Xingwang Xu published in Comptes Rendus Mathematique we shall give a non-existence result concerning the following Lichnerowicz equation in \mathbb R^N

\Delta u + R(x) u + A(x) u^{-p-1} + B(x) u^{p-1}=0, u>0 on \mathbb R^N

where R(x) \geq 0, A(x) \geq 0, and B(x) are given smooth functions of x \in \mathbb R^N. To be precise, we obtain the following

Theorem. Suppose A:=A(x) \geq 0, B := B(x) \geq 0, and R(x) \geq 0. Let p>1 and \beta = \frac{p+1}{2p}. Assume that

\displaystyle \int_0^{ + \infty } {\left( {\int_{B\left( {0,r} \right)} {{A^{1 - \beta }}{B^\beta }dx} } \right){r^{1 - N}}dr} = +\infty .

Then there exists no positive solution to the above Lichnerowicz equation.
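For instance (an illustrative special case of mine, not stated in the paper): if A(x) \geq c and B(x) \geq c for some constant c>0, then A^{1-\beta}B^\beta \geq c, so with V_N the volume of the unit ball,

```latex
\int_{B(0,r)} A^{1-\beta} B^{\beta}\,dx \;\geq\; c\,|B(0,r)| = c\,V_N r^N
\quad\Longrightarrow\quad
\int_0^{+\infty} \left( \int_{B(0,r)} A^{1-\beta}B^{\beta}\,dx \right) r^{1-N}\,dr
\;\geq\; c\,V_N \int_0^{+\infty} r\,dr = +\infty,
```

and the hypothesis of the theorem holds automatically.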

Let us denote the integral

\displaystyle\frac{1}{{{\omega _n}{r^{N - 1}}}}\int_{\partial B\left( {0,r} \right)} {f\left( x \right)dS_x}

by \overline f. We call \overline f the average of f on the sphere S(0,r) of radius r, or the sphere mean of f around the origin.

Proof. Note that a simple calculation shows us that

\displaystyle\frac{1}{{{\omega _n}{r^{N - 1}}}}\int_{\partial B\left( {0,r} \right)} {f\left( x \right)d{S_x}} = \frac{1}{{{\omega _n}}}\int_{\partial B\left( {0,1} \right)} {f\left( {rx} \right)d{S_x}} .


Thus

\displaystyle {\overline u ^\prime }= \frac{d}{{dr}}\overline u = \frac{1}{{{\omega _n}}}\int_{\partial B\left( {0,1} \right)} {\frac{d}{{dr}}u\left( {xr} \right)d{S_x}} =\frac{1}{{{\omega _n}}}\int_{\partial B\left( {0,1} \right)} {\sum\limits_{i = 1}^N {{x_i}{u_{{x_i}}}\left( {xr} \right)} d{S_x}}.

Since on the sphere S(0,1) the position vector x=(x_1,...,x_N) is also the outward unit normal, we have

\displaystyle\frac{1}{{{\omega _n}}}\int_{\partial B\left( {0,1} \right)} {\sum\limits_{i = 1}^N {{x_i}{u_{{x_i}}}\left( {xr} \right)} d{S_x}} = \frac{1}{{{\omega _n}}}\int_{\partial B\left( {0,1} \right)} {\nabla u\left( {xr} \right) \cdot {n_x}d{S_x}}.

Thus by the divergence theorem, one gets

\displaystyle \frac{1}{{{\omega _n}}}\int_{\partial B\left( {0,1} \right)} {\nabla u\left( {xr} \right)\cdot {n_x}d{S_x}} = \frac{r}{{{\omega _n}}}\int_{B\left( {0,1} \right)} {\Delta u\left( {xr} \right)dx} = \frac{1}{{{\omega _n}{r^{N-1}}}}\int_{B\left( {0,r} \right)} {\Delta udx}.


Hence

\displaystyle{\overline u ^\prime } = \frac{1}{{{\omega _n}{r^{N - 1}}}}\int_{B\left( {0,r} \right)} {\Delta udx}.

Differentiating once more yields

\displaystyle{\overline u ^\prime }^\prime = \frac{d}{{dr}}{\overline u ^\prime } = - \underbrace {\frac{{N - 1}}{{{\omega _n}{r^N}}}\int_{B\left( {0,r} \right)} {\Delta udx} }_{\frac{{N - 1}}{r}\overline u'} + \frac{1}{{{\omega _n}{r^{N - 1}}}}\frac{d}{{dr}}\left( {\int_{B\left( {0,r} \right)} {\Delta udx} } \right).


Since

\displaystyle\frac{d}{{dr}}\left( {\int_{B\left( {0,r} \right)} {\Delta udx} } \right) =\int_{\partial B\left( {0,r} \right)} {\Delta ud{S_x}},

we get

\displaystyle{\overline u ^\prime }^\prime = - \frac{{N - 1}}{r}\overline u' + \frac{1}{{{\omega _n}{r^{N - 1}}}}\int_{\partial B\left( {0,r} \right)} {\Delta ud{S_x}} = - \frac{{N - 1}}{r}\overline u' + \overline {\Delta u}.

That is,

\displaystyle\overline {\Delta u} = {\overline u ^\prime }^\prime + \frac{{N - 1}}{r}\overline u'.
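As a quick numerical sanity check of this averaging identity (illustrative, not part of the proof): for u(x)=|x|^2 in \mathbb R^N we have \Delta u = 2N, while the sphere mean is \overline u(r)=r^2, so \overline u'' + \frac{N-1}{r}\overline u' = 2 + 2(N-1) = 2N.

```python
def radial_operator(ubar, r, N, h=1e-5):
    """Compute ubar'' + (N-1)/r * ubar' by central differences."""
    d1 = (ubar(r + h) - ubar(r - h)) / (2 * h)
    d2 = (ubar(r + h) - 2 * ubar(r) + ubar(r - h)) / h ** 2
    return d2 + (N - 1) / r * d1

# For u(x) = |x|^2 the sphere mean is ubar(r) = r^2 and Delta u = 2N.
for N in (2, 3, 7):
    print(N, radial_operator(lambda s: s ** 2, 1.5, N))  # ~ 2N
```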

Therefore, taking this average operation we have

\displaystyle - {\overline u ^\prime }^\prime - \frac{{N - 1}}{r}\overline u' = \overline {R(x)u} + \overline {A(x){u^{ - p - 1}} + B(x){u^{p - 1}}} .

For each fixed x\in \mathbb R^N, since p > 1 we have

\displaystyle\begin{gathered} A{u^{ - p - 1}} + B{u^{p - 1}} = \frac{{2p}}{{p - 1}}\frac{{p - 1}}{{2p}}A{u^{ - p - 1}} + \frac{{2p}}{{p + 1}}\frac{{p + 1}}{{2p}}B{u^{p - 1}} \\\qquad\quad\geq \frac{{p - 1}}{{2p}}A{u^{ - p - 1}} + \frac{{p + 1}}{{2p}}B{u^{p - 1}}. \\ \end{gathered}

By the general Cauchy inequality (weighted AM-GM, the two exponents summing to 1), one gets

\displaystyle\frac{{p - 1}}{{2p}}A{u^{ - p - 1}} + \frac{{p + 1}}{{2p}}B{u^{p - 1}} \geqslant {\left( {A{u^{ - p - 1}}} \right)^{\frac{{p - 1}}{{2p}}}}{\left( {B{u^{p - 1}}} \right)^{\frac{{p + 1}}{{2p}}}} = {A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}.
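The inequality used here is the weighted AM-GM inequality \lambda a + \mu b \geq a^\lambda b^\mu for \lambda, \mu \geq 0 with \lambda + \mu = 1, applied with \lambda = \frac{p-1}{2p} and \mu = \frac{p+1}{2p}. A quick numerical spot check (illustrative only):

```python
import random

# Spot check of the weighted AM-GM inequality
#   lam*a + mu*b >= a**lam * b**mu,  lam + mu = 1, lam, mu >= 0,
# with lam = (p-1)/(2p), mu = (p+1)/(2p), as used above.
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.01, 10.0)
    b = random.uniform(0.01, 10.0)
    p = random.uniform(1.01, 5.0)
    lam, mu = (p - 1) / (2 * p), (p + 1) / (2 * p)
    assert lam * a + mu * b >= a ** lam * b ** mu - 1e-12
print("weighted AM-GM verified on 1000 random samples")
```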


Hence, taking the sphere mean,

\displaystyle\overline {A{u^{ - p - 1}} + B{u^{p - 1}}} \geq \overline {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}}.

It turns out that

\displaystyle - {\left( r^{N - 1}\overline u ' \right)^\prime } \geq {r^{N - 1}}\left( {\overline {R(x)u}+ \overline {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}} } \right),

which implies, using that {r^{N - 1}}\overline u ' \to 0 as r \to 0, that

\displaystyle - {r^{N - 1}}\overline u ' \geq\frac{1}{\omega _n}\int_{B\left( {0,r} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}dx} + \frac{1}{\omega _n}\int_{B\left( {0,r} \right)} {R(x)udx} \geq \frac{1}{\omega _n}\int_{B\left( {0,r} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}dx}

after an integration over [0, r]. This is because, by definition of the sphere mean,

\displaystyle\begin{gathered}{r^{N - 1}}\left( {\overline {R(x)u} + \overline {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}} } \right) = {r^{N - 1}}\left( {\frac{1}{{{\omega _n}{r^{N - 1}}}}\int_{\partial B\left( {0,r} \right)} {R(x)ud{S_x}} + \frac{1}{{{\omega _n}{r^{N - 1}}}}\int_{\partial B\left( {0,r} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}d{S_x}} } \right) \\\qquad\quad\;\;\;= \frac{1}{{{\omega _n}}}\left( {\int_{\partial B\left( {0,r} \right)} {R(x)ud{S_x}} + \int_{\partial B\left( {0,r} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}d{S_x}} } \right) \\\qquad\qquad\qquad\qquad\;\;\;= \frac{1}{{{\omega _n}}}\frac{d}{{dr}}\left[ {\int_0^r {\left( {\int_{\partial B\left( {0,s} \right)} {R(x)ud{S_x}} + \int_{\partial B\left( {0,s} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}d{S_x}} } \right)ds} } \right] .\\\end{gathered}

Dividing both sides by r^{N-1}, integrating the resulting inequality over [0, r_0], and using \overline u ({r_0}) > 0 (as u > 0), we have

\displaystyle \overline u (0) \geq \overline u (0) - \overline u ({r_0}) \geq \frac{1}{{{\omega _n}}}\int_0^{{r_0}} {\left( {{r^{1 - N}}\int_{B\left( {0,r} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}dx} } \right)dr}.

Sending r_0 \to \infty we have

\displaystyle\overline u (0) \geq \frac{1}{{{\omega _n}}}\int_0^{ + \infty } {\left( {{r^{1 - N}}\int_{B\left( {0,r} \right)} {{A^{\frac{{p - 1}}{{2p}}}}{B^{\frac{{p + 1}}{{2p}}}}dx} } \right)dr},

which contradicts our assumption, since the right-hand side is infinite while \overline u (0) is finite. The proof is complete.

November 10, 2009

A trivial identity of probability measures

Filed under: Các Bài Tập Nhỏ, Giải Tích 6 (MA5205) — Ngô Quốc Anh @ 14:36

Let us consider a probability space (X,\mathcal B,\mu), i.e., (X,\mathcal B,\mu) is a measure space with \mu(X)=1. We assume further that A, B \in \mathcal B are such that \mu(A)=\mu(B)=1. Then we conclude that \mu(A \cap B)=1.

Indeed, since A \subset A \cup B \subset X, monotonicity gives 1=\mu(A) \leq \mu(A\cup B) \leq \mu(X)=1, so \mu(A\cup B)=1. We write A \cup B as the disjoint union

A\cup B = \left( A\backslash B \right) \cup \left( A \cap B \right) \cup \left( B\backslash A \right).

We then see that \mu(A\backslash B)=0 since A\backslash B \subset X\backslash B and \mu(X\backslash B)=\mu(X)-\mu(B)=0. Similarly, \mu(B\backslash A)=0. Hence, \mu(A \cap B)=\mu(A\cup B)=1.
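A finite illustration of mine (not from the post): a probability measure on X = \{1,\dots,6\} concentrated on \{1,2\}; any two sets of full measure intersect in a set of full measure.

```python
from fractions import Fraction

# Probability measure on X = {1,...,6} putting all mass on {1, 2}.
mu = {1: Fraction(1, 2), 2: Fraction(1, 2), 3: 0, 4: 0, 5: 0, 6: 0}

def measure(S):
    return sum(mu[x] for x in S)

A = {1, 2, 3}      # mu(A) = 1
B = {1, 2, 4, 5}   # mu(B) = 1
print(measure(A), measure(B), measure(A & B))  # 1 1 1
```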
