Ngô Quốc Anh

September 15, 2013

Some integral identities on manifolds with boundary


In this note, I summarize several useful integral identities on Riemannian manifolds with boundary.

  1. Suppose that f is a function and X is a vector field on M; then

    \displaystyle\boxed{\int_M {f\text{div}Xd{v_g}} = - \int_M {\left\langle {\nabla f,X} \right\rangle_g d{v_g}} + \int_{\partial M} {f\left\langle {X,\nu } \right\rangle_g d{\sigma _g}}.}

    To prove this, we write everything in local coordinates as follows

    \begin{array}{lcl} \displaystyle\int_M {f \text{div} Xd{v_g}} &=& \displaystyle\int_M {f{\nabla _i}{X^i}d{v_g}} \hfill \\ &=& \displaystyle - \int_M {{\nabla _i}f{X^i}d{v_g}} + \int_{\partial M} {f\left\langle {X,\nu } \right\rangle_g d{\sigma _g}} \hfill \\ &=& \displaystyle - \int_M {\left\langle {\nabla f,X} \right\rangle_g d{v_g}} + \int_{\partial M} {f\left\langle {X,\nu } \right\rangle_g d{\sigma _g}}\end{array}

    as claimed.

  2. Using the previous identity, we can prove the following

    \displaystyle \boxed{\int_M {{{\left\langle {X,\nabla (\text{div} X)} \right\rangle }_g}d{v_g}} = - \int_M {|\text{div} X|_g^2d{v_g}} + \int_{\partial M} {\text{div} X{{\left\langle {X,\nu } \right\rangle }_g}d{\sigma _g}}}

    where X is again a vector field on M. To prove this, we simply apply the previous identity with f replaced by \text{div}(X) to get the desired result.
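Neither identity is special to curved space, so as a quick sanity check (my own, not from the note) one can verify the first identity numerically on the flat unit square with the hypothetical choices f = xy^2 and X = (x^2, x + y):

```python
# Numerical sanity check of the identity
#   int_M f div X dv = - int_M <grad f, X> dv + int_{dM} f <X, nu> ds
# on the flat unit square [0,1]^2 with f = x*y^2 and X = (x^2, x + y),
# using midpoint-rule quadrature. This only illustrates the Euclidean
# case; it is not a proof on a general manifold.

def f(x, y): return x * y * y
def X(x, y): return (x * x, x + y)          # the vector field
def divX(x, y): return 2 * x + 1            # d/dx(x^2) + d/dy(x + y)
def gradf(x, y): return (y * y, 2 * x * y)  # (df/dx, df/dy)

n = 400
h = 1.0 / n
pts = [(i + 0.5) * h for i in range(n)]

lhs = sum(f(x, y) * divX(x, y) for x in pts for y in pts) * h * h
grad_term = sum(sum(a * b for a, b in zip(gradf(x, y), X(x, y)))
                for x in pts for y in pts) * h * h
# Boundary term: outward normals are (1,0), (-1,0), (0,1), (0,-1)
# on the edges x = 1, x = 0, y = 1, y = 0 respectively.
bdry = (sum(f(1, y) * X(1, y)[0] for y in pts) * h
        - sum(f(0, y) * X(0, y)[0] for y in pts) * h
        + sum(f(x, 1) * X(x, 1)[1] for x in pts) * h
        - sum(f(x, 0) * X(x, 0)[1] for x in pts) * h)

assert abs(lhs - (-grad_term + bdry)) < 1e-3  # both sides equal 7/18
```

Replacing f by div X in the same script checks the second identity as well.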

April 25, 2013

The Cauchy formula for repeated integration

Filed under: Các Bài Tập Nhỏ, Giải Tích 2 — Ngô Quốc Anh @ 23:41

The Cauchy formula for repeated integration, named after Augustin Louis Cauchy, allows one to compress n antidifferentiations of a function into a single integral.

Let f be a continuous function on the real line. Then the n-th repeated integral of f based at a,

\displaystyle f^{(-n)}(x) = \int_a^x \int_a^{\sigma_1} \cdots \int_a^{\sigma_{n-1}} f(\sigma_{n}) \, \mathrm{d}\sigma_{n} \cdots \, \mathrm{d}\sigma_2 \, \mathrm{d}\sigma_1,

is given by the single integral

\displaystyle f^{(-n)}(x) = \frac{1}{(n-1)!} \int_a^x\left(x-t\right)^{n-1} f(t)\,\mathrm{d}t.

A proof is given by induction. Since f is continuous, the base case n = 1 follows from the fundamental theorem of calculus

\displaystyle\frac{\mathrm{d}}{\mathrm{d}x} f^{(-1)}(x) = \frac{\mathrm{d}}{\mathrm{d}x}\int_a^x f(t)\,\mathrm{d}t = f(x);

where

\displaystyle f^{(-1)}(a) = \int_a^a f(t)\,\mathrm{d}t = 0.
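As a quick numerical illustration (my own check, with the hypothetical choices f = cos, a = 0 and n = 3, for which three antidifferentiations give x - sin x):

```python
import math

# Cauchy's formula with f = cos, a = 0, n = 3: the three-fold repeated
# integral of cos based at 0 is x - sin(x), and the single integral
#   (1/2!) * int_0^x (x - t)^2 cos(t) dt
# must agree with it. Midpoint-rule quadrature.
def cauchy(x, n=3, m=100000):
    h = x / m
    s = sum((x - (i + 0.5) * h) ** (n - 1) * math.cos((i + 0.5) * h)
            for i in range(m))
    return s * h / math.factorial(n - 1)

x = 1.0
assert abs(cauchy(x) - (x - math.sin(x))) < 1e-8
```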


September 20, 2012

Poly-Laplacian of rotationally symmetric functions in R^3

Filed under: Các Bài Tập Nhỏ, PDEs — Ngô Quốc Anh @ 5:24

In \mathbb R^3, it is known that for any rotationally symmetric function f, i.e. f depends only on the radius r, the following holds

\displaystyle \Delta f= f'' + \frac{2}{r} f' = \frac{1}{r^2}(r^2 f')' .

By a simple calculation, it is easy to see that

\displaystyle\begin{gathered} {\Delta ^2}f = \Delta \left( {f'' + \frac{2}{r}f'} \right) \hfill \\ \qquad= {\left( {f'' + \frac{2}{r}f'} \right)^{\prime\prime}} + \frac{2}{r}{\left( {f'' + \frac{2}{r}f'} \right)^\prime } \hfill \\ \qquad= {f^{(4)}} + {\left( { - \frac{2}{{{r^2}}}f' + \frac{2}{r}f''} \right)^\prime } + \frac{2}{r}{f^{(3)}} - \frac{4}{{{r^3}}}f' + \frac{4}{{{r^2}}}f'' \hfill \\ \qquad= {f^{(4)}} + \left( {\frac{4}{{{r^3}}}f' - \frac{2}{{{r^2}}}f'' - \frac{2}{{{r^2}}}f'' + \frac{2}{r}{f^{(3)}}} \right) + \frac{2}{r}{f^{(3)}} - \frac{4}{{{r^3}}}f' + \frac{4}{{{r^2}}}f'' \hfill \\ \qquad= {f^{(4)}} + \frac{4}{r}{f^{(3)}}. \hfill \\ \end{gathered}

In other words, there holds

\displaystyle {\Delta ^2}f = \frac{1}{{{r^4}}}({r^4}{f^{(3)}})'.
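Both formulas are easy to sanity-check on radial monomials f(r) = r^k, whose derivatives are exact; the following script (my own check) iterates the radial Laplacian f'' + (2/r)f' and compares against f'''' + (4/r)f''':

```python
# Check Delta^2 f = f'''' + (4/r) f''' in R^3 for radial monomials
# f(r) = r^k, using Delta r^m = m(m+1) r^(m-2). Monomial derivatives
# are exact, so both sides must agree to rounding error.

def deriv_coeff(k, m):
    """m-th derivative of r^k is c * r^(k-m); return c."""
    c = 1.0
    for j in range(m):
        c *= (k - j)
    return c

def check(k, r):
    # First Laplacian: k(k+1) r^(k-2); second: apply Delta to r^(k-2).
    lap2 = k * (k + 1) * (k - 2) * (k - 1)
    lhs = lap2 * r ** (k - 4)
    rhs = (deriv_coeff(k, 4) * r ** (k - 4)
           + 4 / r * deriv_coeff(k, 3) * r ** (k - 3))
    return lhs, rhs

for k in (4, 5, 7):
    for r in (0.5, 1.3, 2.0):
        lhs, rhs = check(k, r)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```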


April 1, 2011

Several interesting limits from a paper by Chang-Qing-Yang


Recently, I have learnt from my friend, ZJ, the following result

Assume that F:\mathbb R \to \mathbb R is absolutely integrable. Then

\displaystyle\begin{gathered} \mathop {\lim }\limits_{t \to \pm \infty } {e^{2t}}\int_t^{ + \infty } {F(x){e^{ - 2x}}dx} = 0, \hfill \\ \mathop {\lim }\limits_{t \to \pm \infty } {e^{ - 2t}}\int_{ - \infty }^t {F(x){e^{2x}}dx} = 0. \hfill \\ \end{gathered}

The result seems reasonable in light of the following observation; consider, for example, the first identity as t \to +\infty. Then the factor

\displaystyle\int_t^{ + \infty } {F(x){e^{ - 2x}}dx}

decays faster than the exponential factor \exp (2t) grows. This is plausible, though of course it needs a proof; the key point is that the integrand contains the term \exp (-2x), which is a good term since x \geqslant t. So here is the trick used to solve such a problem.
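A quick numerical experiment (my own, with the hypothetical absolutely integrable choice F(x) = 1/(1+x^2)) illustrates the first limit; folding e^{2t} into the integrand keeps the weight e^{2(t-x)} \leqslant 1 and avoids overflow:

```python
import math

# Illustrate lim_{t -> +-inf} e^{2t} int_t^inf F(x) e^{-2x} dx = 0 with
# the absolutely integrable F(x) = 1/(1+x^2). Writing the quantity as
# int_t^inf F(x) e^{2(t-x)} dx keeps the weight <= 1 (no overflow) and
# makes it visibly bounded by int_t^inf |F|. The weight decays like
# e^{-2(x-t)}, so truncating the integral at t + 30 is harmless.
def g(t, width=30.0, m=30000):
    h = width / m
    s = 0.0
    for i in range(m):
        x = t + (i + 0.5) * h
        s += math.exp(2 * (t - x)) / (1 + x * x)
    return s * h

assert g(40.0) < g(10.0) < g(0.0)   # shrinks as t -> +inf
assert g(-40.0) < 1e-3              # and as t -> -inf
```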


January 8, 2011

A funny limit involving sine function

Filed under: Các Bài Tập Nhỏ, Giải Tích 1 — Ngô Quốc Anh @ 2:32

Today, I have been asked to calculate the following limit

\displaystyle \mathop {\lim }\limits_{n \to + \infty } \sin (\sin \overbrace {(...(}^n\sin x)...))

for each fixed x \in [0,2\pi]. Without loss of generality, we can assume x \in (-\frac{\pi}{2}, \frac{\pi}{2}), since we may replace x by \sin (\sin x) if necessary.

There are three possible cases

Case 1. x \in (0, \frac{\pi}{2}). In this case, it is well known that the function \frac{\sin x}{x} is monotone decreasing, since

\displaystyle {\left( {\frac{{\sin x}}{x}} \right)^\prime } = \frac{{x\cos x - \sin x}}{{{x^2}}} = \frac{{\cos x}}{{{x^2}}}\left( {x - \tan x} \right) \leqslant 0

on (0, \frac{\pi}{2}). Consequently, since \frac{\sin x}{x} \to 1 as x \to 0, it holds that 0 < \sin x < x for every x \in (0, \frac{\pi}{2}).
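Before working through the cases, iterating sin numerically (my own experiment) already suggests the answer: the iterates decrease to 0, and quite slowly (the classical asymptotic is sin_n x ~ sqrt(3/n)):

```python
import math

# Iterate x_{k+1} = sin(x_k) starting from x = 1. On (0, pi/2) we have
# 0 < sin x < x, so the sequence decreases and is bounded below; its
# limit L satisfies L = sin L, hence L = 0. The decay is slow: the
# classical asymptotic for the n-th iterate is sqrt(3/n).
x = 1.0
values = []
for k in range(10000):
    x = math.sin(x)
    values.append(x)

assert all(values[i] > values[i + 1] > 0 for i in range(len(values) - 1))
assert abs(values[-1] - math.sqrt(3 / 10000)) < 0.01
```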


November 2, 2010

Jacobi’s formula for the differential of the determinant of matrices

Filed under: Các Bài Tập Nhỏ — Ngô Quốc Anh @ 15:34

In matrix calculus, Jacobi’s formula expresses the differential of the determinant of a matrix A in terms of the adjugate of A and the differential of A. The formula is

\displaystyle d\, \mbox{det} (A) = \mbox{tr} (\mbox{adj}(A) \, dA).

It is named after the mathematician C.G.J. Jacobi.

We first prove a preliminary lemma.

Lemma. For any pair of square matrices A and B of the same dimension n, we have

\displaystyle\sum_i \sum_j A_{ij} B_{ij} = \mbox{tr} (A^\top B).

Proof. The product AB of the pair of matrices has components

\displaystyle (AB)_{jk} = \sum_i A_{ji} B_{ik}.

Replacing the matrix A by its transpose A^\top is equivalent to permuting the indices of its components

\displaystyle (A^\top B)_{jk} = \sum_i A_{ij} B_{ik}.

The result follows by taking the trace of both sides

\displaystyle \mbox{tr} (A^\top B) = \sum_j (A^\top B)_{jj} = \sum_j \sum_i A_{ij} B_{ij} = \sum_i \sum_j A_{ij} B_{ij}.

Theorem. It holds

\displaystyle d \, \mbox{det} (A) = \mbox{tr} (\mbox{adj}(A) \, dA).
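Before the proof, here is a finite-difference sanity check (my own, with hypothetical numerical matrices A and E): the directional derivative of det at A in the direction E should equal tr(adj(A)E).

```python
# Finite-difference check of Jacobi's formula d det(A) = tr(adj(A) dA)
# for a sample 3x3 matrix A and direction E:
#   (det(A + eps*E) - det(A - eps*E)) / (2*eps) ~ tr(adj(A) E).

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def adj3(M):
    # adjugate = transpose of the cofactor matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return [[e * i - f * h, c * h - b * i, b * f - c * e],
            [f * g - d * i, a * i - c * g, c * d - a * f],
            [d * h - e * g, b * g - a * h, a * e - b * d]]

def tr_prod(P, Q):
    # tr(P Q) = sum_{i,j} P[i][j] * Q[j][i]
    return sum(P[i][j] * Q[j][i] for i in range(3) for j in range(3))

A = [[2.0, 1.0, 0.0], [0.5, 3.0, 1.0], [1.0, 0.0, 4.0]]
E = [[1.0, -1.0, 2.0], [0.0, 1.0, 0.5], [3.0, 1.0, -2.0]]
eps = 1e-6
Ap = [[A[i][j] + eps * E[i][j] for j in range(3)] for i in range(3)]
Am = [[A[i][j] - eps * E[i][j] for j in range(3)] for i in range(3)]
fd = (det3(Ap) - det3(Am)) / (2 * eps)

assert abs(fd - tr_prod(adj3(A), E)) < 1e-6
```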


October 18, 2010

1/infinity = 0 is equivalent to 1/0=infinity?

Filed under: Các Bài Tập Nhỏ, Linh Tinh — Ngô Quốc Anh @ 12:05

It is now time to discuss something funny. I just learned from the GR class this morning a proof of the following statement

\displaystyle \frac{1}{\infty}=0 \quad \Longleftrightarrow \quad \frac{1}{0}=\infty.

Okay, let us start with the left-hand side. By a 90-degree counter-clockwise rotation of both sides of

\displaystyle \frac{1}{\infty}=0

we get

\displaystyle -18=0.

Now adding both sides by 8 we arrive at

\displaystyle -10=8.

Again, rotating both sides 90 degrees clockwise, we arrive at

\displaystyle \frac{1}{0}=\infty.

The reverse case can be treated similarly.

September 22, 2010

An identity of differentiation involving the Kelvin transform

Filed under: Các Bài Tập Nhỏ, Giải Tích 1 — Tags: — Ngô Quốc Anh @ 15:47

This short note is to prove the following

\displaystyle {\nabla _x}\left( {u\left( {\frac{x}{{{{\left| x \right|}^2}}}} \right)} \right) \cdot x = - {\nabla _y}\left( {u\left( y \right)} \right) \cdot y

where x and y are connected by

\displaystyle y = \frac{x}{{{{\left| x \right|}^2}}} \in {\mathbb{R}^2}.

The proof is straightforward as follows.

  • Calculation of \frac{\partial}{\partial x_1}.

We see that

\displaystyle\begin{gathered} \frac{\partial }{{\partial {x_1}}}\left( {u\left( {\frac{x}{{{{\left| x \right|}^2}}}} \right)} \right){x_1} = \frac{\partial }{{\partial {y_1}}}\left( {u\left( y \right)} \right)\frac{\partial }{{\partial {x_1}}}\left( {\frac{{{x_1}}}{{{{\left| x \right|}^2}}}} \right){x_1} + \frac{\partial }{{\partial {y_2}}}\left( {u\left( y \right)} \right)\frac{\partial }{{\partial {x_1}}}\left( {\frac{{{x_2}}}{{{{\left| x \right|}^2}}}} \right){x_1} \hfill \\ \qquad\qquad\qquad= \frac{\partial }{{\partial {y_1}}}\left( {u\left( y \right)} \right)\left( {\frac{1}{{{{\left| x \right|}^2}}} - \frac{{2x_1^2}}{{{{\left| x \right|}^4}}}} \right){x_1} + \frac{\partial }{{\partial {y_2}}}\left( {u\left( y \right)} \right)\left( { - \frac{{2{x_1}{x_2}}}{{{{\left| x \right|}^4}}}} \right){x_1}. \hfill \\ \end{gathered}

  • Calculation of \frac{\partial}{\partial x_2}.

Similarly, we get

\displaystyle\begin{gathered} \frac{\partial }{{\partial {x_2}}}\left( {u\left( {\frac{x}{{{{\left| x \right|}^2}}}} \right)} \right){x_2} = \frac{\partial }{{\partial {y_1}}}\left( {u\left( y \right)} \right)\frac{\partial }{{\partial {x_2}}}\left( {\frac{{{x_1}}}{{{{\left| x \right|}^2}}}} \right){x_2} + \frac{\partial }{{\partial {y_2}}}\left( {u\left( y \right)} \right)\frac{\partial }{{\partial {x_2}}}\left( {\frac{{{x_2}}}{{{{\left| x \right|}^2}}}} \right){x_2} \hfill \\ \qquad\qquad\qquad= \frac{\partial }{{\partial {y_1}}}\left( {u\left( y \right)} \right)\left( { - \frac{{2{x_1}{x_2}}}{{{{\left| x \right|}^4}}}} \right){x_2} + \frac{\partial }{{\partial {y_2}}}\left( {u\left( y \right)} \right)\left( {\frac{1}{{{{\left| x \right|}^2}}} - \frac{{2x_2^2}}{{{{\left| x \right|}^4}}}} \right){x_2}. \hfill \\ \end{gathered}
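The identity is easy to test numerically; the following check (my own, with the hypothetical sample u(y_1, y_2) = y_1^2 y_2 + y_2) compares central differences of x \mapsto u(x/|x|^2) against the exact gradient of u:

```python
# Check grad_x(u(x/|x|^2)) . x = -grad_y(u(y)) . y in R^2 for a
# sample smooth u, using central finite differences on the left-hand
# side and the exact gradient of u on the right-hand side.

def u(y1, y2):
    return y1 * y1 * y2 + y2

def invert(x1, x2):  # the inversion y = x / |x|^2
    r2 = x1 * x1 + x2 * x2
    return (x1 / r2, x2 / r2)

def g(x1, x2):  # u composed with the inversion
    return u(*invert(x1, x2))

x1, x2 = 0.7, -0.4
h = 1e-6
lhs = ((g(x1 + h, x2) - g(x1 - h, x2)) / (2 * h) * x1
       + (g(x1, x2 + h) - g(x1, x2 - h)) / (2 * h) * x2)

y1, y2 = invert(x1, x2)
rhs = -((2 * y1 * y2) * y1 + (y1 * y1 + 1) * y2)  # grad u = (2*y1*y2, y1^2 + 1)

assert abs(lhs - rhs) < 1e-6
```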


July 19, 2010

On the determinant of a matrix

Filed under: Các Bài Tập Nhỏ — Ngô Quốc Anh @ 20:10

Several days ago, I placed a question on MathLinks asking the relation between \det A and \det(A-\lambda I). The point is how to evaluate

\displaystyle\det\begin{bmatrix}1+|x|^{2}-2x_{1}^{2}&-x_{1}x_{2}&\cdots&-x_{1}x_{n}\\  -x_{1}x_{2}&1+|x|^{2}-2x_{2}^{2}&\cdots&-x_{2}x_{n}\\ \vdots  &\vdots &\ddots &\vdots\\  -x_{n}x_{1}&-x_{n}x_{2}&\cdots&1+|x|^{2}-2x_{n}^{2}\end{bmatrix}.

Interestingly, K.M. showed me a new way to attack such a problem, although for a version slightly different from the original one. He proved

\displaystyle\det\begin{bmatrix}1+|x|^{2}-x_{1}^{2}&-x_{1}x_{2}&\cdots&-x_{1}x_{n}\\ -x_{1}x_{2}&1+|x|^{2}-x_{2}^{2}&\cdots&-x_{2}x_{n}\\ \vdots &\vdots &\ddots &\vdots\\ -x_{n}x_{1}&-x_{n}x_{2}&\cdots&1+|x|^{2}-x_{n}^{2}\end{bmatrix}=\left(1+|x|^{2}\right)^{n-1}.

Let us discuss the proof of this modified problem.

Let

x=\begin{bmatrix}x_1\\\vdots\\x_n\end{bmatrix}

and let

A=xx^T.

The determinant we are trying to compute is

\displaystyle \det\left((1+|x|^2)I-A\right),

which is the characteristic polynomial of A evaluated at 1+|x|^2.

Now, A is certainly diagonalizable (which doesn’t even matter, but it makes it easier to think about), and we know its eigenvalues. Why do we know its eigenvalues? Because A is a matrix of rank 1, hence nullity n-1, hence n-1 of its n eigenvalues are zero. What is the other eigenvalue? It’s the same as the sum of the eigenvalues, which is the trace of A, which is |x|^2. Put that information together, and we have that the characteristic polynomial of A is

\det(\lambda I-A)=\left(\lambda-|x|^2\right)\lambda^{n-1}=\lambda^n-|x|^2\lambda^{n-1}.

Substitute 1+|x|^2 for \lambda to get the result quoted.
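For n = 3 the claimed value is (1+|x|^2)^2, which a short script (my own check, with a hand-rolled 3x3 determinant and a hypothetical sample vector) confirms:

```python
# Check det((1 + |x|^2) I - x x^T) = (1 + |x|^2)^(n-1) for n = 3.

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

x = [0.3, -1.2, 0.8]
s = 1 + sum(t * t for t in x)  # s = 1 + |x|^2
M = [[s * (i == j) - x[i] * x[j] for j in range(3)] for i in range(3)]

assert abs(det3(M) - s ** (3 - 1)) < 1e-9
```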

May 1, 2010

A useful identity in a book due to L. Ahlfors

Filed under: Các Bài Tập Nhỏ, Giải Tích 5, Nghiên Cứu Khoa Học — Tags: — Ngô Quốc Anh @ 3:12

Let \mathbf{x},\mathbf{y} be points in \mathbb R^n. If we denote by \mathbf{x}^\sharp the reflection point of \mathbf{x} with respect to the unit ball, i.e.

\displaystyle \mathbf{x}^\sharp = \frac{\mathbf{x}}{|\mathbf{x}|^2}

we then have the following well-known identity

\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp } - \mathbf{y}} \right| = |\mathbf{y}|\left| {{\mathbf{y}^\sharp } - \mathbf{x}} \right|.

The proof of the above identity comes from the fact that

\displaystyle |\mathbf{x}|\left| {\frac{\mathbf{x}}{{|\mathbf{x}{|^2}}} - \mathbf{y}} \right| = \sqrt {1 + |\mathbf{x}{|^2}|\mathbf{y}|^2 - 2\mathbf{x} \cdot \mathbf{y}} = |\mathbf{y}|\left| {\frac{\mathbf{y}}{{|\mathbf{y}|^2}} - \mathbf{x}} \right|.

Indeed, by squaring both sides of

\displaystyle |\mathbf{x}|\left| {\frac{\mathbf{x}}{{|\mathbf{x}{|^2}}} - \mathbf{y}} \right| = \sqrt  {1 + |\mathbf{x}|^2|\mathbf{y}|^2 - 2\mathbf{x} \cdot \mathbf{y}}

we arrive at

\displaystyle |\mathbf{x}|^2\left( {\frac{{|\mathbf{x}|^2}}{{|\mathbf{x}|^4}} - 2\frac{{\mathbf{x} \cdot \mathbf{y}}}{{|\mathbf{x}|^2}} + |\mathbf{y}|^2} \right) = 1 + |\mathbf{x}|^2|\mathbf{y}|^2 - 2\mathbf{x} \cdot \mathbf{y}

which is obviously true. The second equality in the line above is proved similarly. If we replace \mathbf{y} by -\mathbf{y}, we also have

\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp }+ \mathbf{y}} \right| = |\mathbf{y}|\left|  {{\mathbf{y}^\sharp } + \mathbf{x}} \right|.

Generally, if we consider the reflection point of \mathbf{x} over a ball B_r(0), i.e.

\displaystyle \mathbf{x}^\sharp = \frac{r^2\mathbf{x}}{|\mathbf{x}|^2}

we still have the fact

\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp } - \mathbf{y}} \right| = |\mathbf{y}|\left|  {{\mathbf{y}^\sharp } - \mathbf{x}} \right|.

Indeed, one gets

\displaystyle |\mathbf{x}|\left| {\frac{{{r^2}\mathbf{x}}}{{|\mathbf{x}{|^2}}} - \mathbf{y}} \right| = {r^2}|\mathbf{x}|\left| {\frac{\mathbf{x}}{{|\mathbf{x}{|^2}}} - \frac{\mathbf{y}}{{{r^2}}}} \right| = {r^2}\left| {\frac{\mathbf{y}}{{{r^2}}}} \right|\left| {\frac{{\frac{\mathbf{y}}{{{r^2}}}}}{{{{\left| {\frac{\mathbf{y}}{{{r^2}}}} \right|}^2}}} - \mathbf{x}} \right| = \left| \mathbf{y} \right|\left| {\frac{{{r^2}\mathbf{y}}}{{|\mathbf{y}{|^2}}} - \mathbf{x}} \right|.

Similarly,

\displaystyle |\mathbf{x}|\left| {{\mathbf{x}^\sharp } + \mathbf{y}} \right| = |\mathbf{y}|\left| {{\mathbf{y}^\sharp } + \mathbf{x}} \right|.
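Both the unit-sphere identity and its radius-r version are easy to confirm numerically; here is a quick check of my own with arbitrary sample points in R^3:

```python
import math

# Check |x| |x_sharp - y| = |y| |y_sharp - x| for the reflection across
# the sphere of radius r, where x_sharp = r^2 x / |x|^2, at a few
# sample points in R^3.

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def reflect(v, r):
    s = r * r / sum(t * t for t in v)
    return [s * t for t in v]

def sides(x, y, r):
    xs, ys = reflect(x, r), reflect(y, r)
    lhs = norm(x) * norm([a - b for a, b in zip(xs, y)])
    rhs = norm(y) * norm([a - b for a, b in zip(ys, x)])
    return lhs, rhs

for r in (1.0, 2.5):
    for x, y in [([0.3, 1.1, -0.7], [2.0, -0.5, 0.4]),
                 ([1.5, 0.2, 0.9], [-0.6, -1.3, 2.2])]:
        lhs, rhs = sides(x, y, r)
        assert abs(lhs - rhs) < 1e-12 * max(1.0, lhs)
```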

Such an identity is very useful. For example, in \mathbb R^n (n\geqslant 3) the following holds, where d\sigma_{\mathbf{x}} denotes the normalized surface measure on the sphere |\mathbf{x}| = r:

\displaystyle\iint\limits_{|{\mathbf{x}}| = r} {\frac{{d{\sigma _{\mathbf{x}}}}}{{{{\left| {{\mathbf{x}} - {\mathbf{y}}} \right|}^{n - 2}}}}} = \min \left\{ {\frac{1}{{|{\mathbf{y}}|^{n - 2}}},\frac{1}{r^{n - 2}}} \right\}.

This type of formula has been considered before in the case n=3 here. For the general case, Lieb and Loss introduced another method in their book published by the AMS in 2001. Here we introduce a completely new proof. First, if |\mathbf{y}|>r, then by potential theory one easily gets

\displaystyle\iint\limits_{|{\mathbf{x}}| = r} {\frac{{d{\sigma _{\mathbf{x}}}}}{{{{\left| {{\mathbf{x}} - {\mathbf{y}}} \right|}^{n - 2}}}}} = \frac{1}{{|{\mathbf{y}}|^{n - 2}}}.

If |\mathbf{y}|<r, one makes use of the reflection point of \mathbf{y} and the above identity to reduce to the first case; the point here is that |\mathbf{y}^\sharp|>r. The integral is obviously continuous as a function of \mathbf{y}. The above argument is due to Professor X.X.W.
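For n = 3 (so n - 2 = 1), and with d\sigma understood as the normalized surface measure (my assumption above, which the min on the right-hand side requires), the formula can be confirmed numerically: placing \mathbf{y} on the z-axis at distance d, the spherical average reduces to a single integral in the polar angle.

```python
import math

# For n = 3 the formula says: the average of 1/|x - y| over the sphere
# |x| = r (normalized surface measure) equals min(1/|y|, 1/r). Placing
# y on the z-axis at distance d and using rotational symmetry, the
# average reduces to
#   (1/2) * int_0^pi sin(t) / sqrt(r^2 + d^2 - 2 r d cos(t)) dt.
def sphere_average(r, d, m=100000):
    h = math.pi / m
    s = 0.0
    for i in range(m):
        t = (i + 0.5) * h
        s += math.sin(t) / math.sqrt(r * r + d * d - 2 * r * d * math.cos(t))
    return 0.5 * s * h

r = 2.0
assert abs(sphere_average(r, 5.0) - 1 / 5.0) < 1e-6   # |y| > r: get 1/|y|
assert abs(sphere_average(r, 0.7) - 1 / r) < 1e-6     # |y| < r: get 1/r
```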

