Ngô Quốc Anh

July 31, 2009

A property of essentially bounded functions

Let E be a subset of \mathbb R^n with |E| < \infty in Lebesgue sense. Suppose f \in L^\infty(E) and \| f\|_\infty > 0. Set

\displaystyle {a_n} = \int_E {{{\left| f \right|}^n}}

for n=1,2,3,... Show that

\displaystyle\mathop {\lim }\limits_{n \to \infty } \frac{{{a_{n + 1}}}} {{{a_n}}} = {\left\| f \right\|_\infty }.

Solution. For any \alpha with 0<\alpha < \|f\|_\infty, let

\displaystyle{E_\alpha } = \left\{ {x \in E:\left| {f\left( x \right)} \right| \geqslant \alpha } \right\}, \qquad {F_\alpha } = E\backslash {E_\alpha },

then |E_\alpha|>0 by the definition of the essential supremum. On F_\alpha we have |f| < \alpha, so for any k \in \mathbb N, by the Dominated Convergence Theorem,

\displaystyle\mathop{\lim }\limits_{n\to\infty }\left({\dfrac{{\int_{{F_\alpha }}{{{\left| f\right|}^{n+k}}}}}{{\int_{{E_\alpha }}{{{\left| f\right|}^{n}}}}}}\right)\leqslant\underbrace{\mathop{\lim }\limits_{n\to\infty }\frac{1}{{\left|{{E_\alpha }}\right|}}\int_{{F_\alpha }}{{{\left|{\frac{f}{\alpha }}\right|}^{n}}\left\| f\right\|_\infty^{k}}}_{0}.


Consequently, since |f| \geqslant \alpha on E_\alpha while the F_\alpha-terms are negligible by the previous limit (applied with k=1 and k=0),

\displaystyle\mathop{\lim\inf }\limits_{n\to\infty }\left({\frac{{\int_{E}{{{\left| f\right|}^{n+1}}}}}{{\int_{E}{{{\left| f\right|}^{n}}}}}}\right)\geqslant\mathop{\lim\inf }\limits_{n\to\infty }\left({\frac{{\alpha\int_{{E_\alpha }}{{{\left| f\right|}^{n}}}+\int_{{F_\alpha }}{{{\left| f\right|}^{n+1}}}}}{{\int_{{E_\alpha }}{{{\left| f\right|}^{n}}}+\int_{{F_\alpha }}{{{\left| f\right|}^{n}}}}}}\right) =\alpha .

On the other hand, |f| \leqslant {\left\| f \right\|_\infty } a.e. gives {a_{n + 1}} \leqslant {\left\| f \right\|_\infty }{a_n}, so the corresponding \limsup is at most {\left\| f \right\|_\infty }. Letting \alpha \nearrow {\left\| f \right\|_\infty } in the lower bound, we get that

\displaystyle\mathop{\lim }\limits_{n\to\infty }\left({\frac{{\int_{E}{{{\left| f\right|}^{n+1}}}}}{{\int_{E}{{{\left| f\right|}^{n}}}}}}\right) ={\left\| f\right\|_\infty }.

As an application, if we put a_0 = 1, then from

\displaystyle {a_{n + 1}} = \frac{{{a_1}}} {{{a_0}}} \cdot \frac{{{a_2}}} {{{a_1}}} \cdots\frac{{{a_{n + 1}}}} {{{a_n}}}

we deduce, using the standard fact that convergence of the ratios a_{n+1}/a_n forces convergence of the roots \sqrt[n]{a_n} to the same limit, that

\displaystyle\mathop{\lim }\limits_{n\to\infty }\sqrt[n]{{{a_{n}}}}=\mathop{\lim }\limits_{n\to\infty }\frac{{{a_{n+1}}}}{{{a_{n}}}}={\left\| f\right\|_\infty }.

In other words,

\displaystyle\mathop{\lim }\limits_{n\to\infty }{\left({\int_{E}{{{\left| f\right|}^{n}}}}\right)^{\frac{1}{n}}}={\left\| f\right\|_\infty }.
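As a quick numerical sanity check (a sketch, not part of the proof): take E = [0,1] and f(x)=x, so that \|f\|_\infty = 1 and a_n = 1/(n+1); both the ratio and the n-th root approach 1.

```python
import numpy as np
from scipy.integrate import quad

def a(n):
    # a_n = \int_E |f|^n with E = [0, 1] and f(x) = x, so ||f||_inf = 1
    return quad(lambda t: t ** n, 0.0, 1.0)[0]

ratio = a(501) / a(500)       # exact value 501/502, tends to 1 = ||f||_inf
root = a(500) ** (1 / 500)    # n-th root also tends to ||f||_inf
print(ratio, root)
```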

Picard’s Theorem + Hadamard’s Theorem = ?

Question. Let f be an entire non-constant function that satisfies the functional equation

\displaystyle f(1 - z) = 1 - f(z)

for all z \in \mathbb C. Show that f(\mathbb C) = \mathbb C.

Solution. Assume, for contradiction, that f is not surjective. Since f is entire and non-constant, Picard's Little Theorem guarantees that f omits at most one value, so we may assume f misses some a \in \mathbb C. By Hadamard's Factorization Theorem (note that this step tacitly assumes f has finite order; see the corrigendum below),

\displaystyle f(z)-a = e^{p(z)}

for some polynomial p. Therefore,

\displaystyle f(z) = a +e^{p(z)}

for all z \in \mathbb C. From the fact that

\displaystyle f(1-z)=1-f(z)

we get

\displaystyle \underbrace{a+{e^{p\left({1-z}\right)}}}_{f\left({1-z}\right)}=\underbrace{1-\left({a+{e^{p\left( z\right)}}}\right)}_{1-f\left( z\right)}

which yields

\displaystyle {e^{p\left( z \right)}} = 2a - 1 + {e^{p\left( {1 - z} \right)}}.

Putting z=0 and z=1, we obtain

\displaystyle {e^{p\left( 0 \right)}} = 2a - 1 + {e^{p\left( 1 \right)}}, \quad {e^{p\left( 1 \right)}} = 2a - 1 + {e^{p\left( 0 \right)}}.


Substituting the second identity into the first gives

\displaystyle {e^{p\left( 0 \right)}} = 2\left( {2a - 1} \right) + {e^{p\left( 0 \right)}}

which implies a=\frac{1}{2}. On the other hand, putting z=\frac{1}{2} in the identity f(1-z)=1-f(z), we deduce that

\displaystyle f\left( {\frac{1} {2}} \right) = \underbrace {\frac{1} {2}}_a

contradicting the assumption that f misses the value a.

Note: I think I should post some applications of Hadamard's Theorem.

Corrigendum to the previous proof. Assume, for contradiction, that f is not surjective; by Picard's Little Theorem, f then takes every value except some single a. Putting z = \frac{1}{2} in the functional equation gives f(1/2) = 1/2, so a \ne \frac{1}{2}, and hence 1-a \ne a. Therefore f takes the value 1-a, say f(z_0) = 1-a; but then f(1-z_0) = 1 - f(z_0) = a, so f takes the value a after all, a contradiction. The proof is complete.

I thank Xu Wei Biao for pointing out a mistake in the previous solution.
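For what it is worth, the functional equation does have many non-constant entire solutions: writing f(z) = \frac12 + g(z - \frac12), the equation f(1-z) = 1-f(z) holds exactly when g is odd. A quick numerical check with the sample choice g(w) = w^3 (my own illustrative example, not from the original post):

```python
import numpy as np

# f(z) = 1/2 + (z - 1/2)**3 is entire, non-constant, and satisfies
# f(1 - z) = 1 - f(z), because g(w) = w**3 is odd.
def f(z):
    return 0.5 + (z - 0.5) ** 3

rng = np.random.default_rng(1)
z = rng.standard_normal(100) + 1j * rng.standard_normal(100)

# the functional equation holds at random complex points
assert np.allclose(f(1 - z), 1 - f(z))

# the fixed point used in the corrigendum: f(1/2) = 1/2
print(f(0.5))
```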

July 30, 2009

A couple of complex integrals involving exp(itx) for a real parameter t

In this post I will consider a couple of examples of complex contour integrals, with respect to the variable x, involving the factor e^{itx}, where t is a real parameter.

Problem 1. Evaluate the integral

\displaystyle I\left( t \right) = \int\limits_{ - \infty }^\infty {\frac{{{e^{itx}}}} {{{{\left( {x + i} \right)}^2}}}dx}

where -\infty < t<\infty.

Solution. Let

\displaystyle {f_t}(z) = \frac{{{e^{itz}}}}{{{{(z + i)}^2}}}

and consider first the case t>0. Then, since |e^{itz}| = e^{-t\operatorname{Im}z} \leqslant 1 whenever \operatorname{Im} z \geqslant 0, |f_t(z)| is bounded in the closed upper half-plane by \frac{1}{|z+i|^2}.
For R>1 let

\displaystyle C_R=\Gamma_R \cup [-R, R],

where \Gamma_R is the semicircle of radius R in the upper half-plane, centered at the origin, joining R to -R; the contour C_R is oriented counterclockwise.
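The post breaks off here. For reference, the residue argument it sets up would give (this closed form is my own completion, not the author's text): for t > 0 the contour in the upper half-plane contains no pole, since the only pole z = -i lies below the real axis, so I(t) = 0; for t < 0 one closes in the lower half-plane and picks up the double pole at z = -i, giving I(t) = 2\pi t e^{t}. A numerical check consistent with this:

```python
import numpy as np
from scipy.integrate import quad

def I(t):
    """Numerically evaluate the integral of e^{itx}/(x+i)^2 over the real line."""
    f = lambda x: np.exp(1j * t * x) / (x + 1j) ** 2
    re, _ = quad(lambda x: f(x).real, -np.inf, np.inf, limit=400)
    im, _ = quad(lambda x: f(x).imag, -np.inf, np.inf, limit=400)
    return re + 1j * im

print(I(1.0))    # expect ~0
print(I(-1.0))   # expect 2*pi*(-1)*e^{-1} ~ -2.3116
```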



July 26, 2009

On the positive definite property of the Schur complement

Filed under: Các Bài Tập Nhỏ (small exercises), Linh Tinh (miscellany), Nghiên Cứu Khoa Học (research) — Ngô Quốc Anh @ 0:48

The following question was proposed in the NUS Q.E. in 2009: Let A \in \mathbb R^{n \times n}, B \in \mathbb R^{n \times m} and C \in \mathbb R^{m \times m} be given matrices, with A and C symmetric. Consider the matrices

\displaystyle H = \left( {\begin{array}{*{20}{c}} A&B \\ {{B^T}}&C \end{array}} \right), \qquad S = C - {B^T}{A^{ - 1}}B.

Show that H is positive definite if and only if A and S are positive definite.

In the literature, the matrix S is called the Schur complement of A in H, and it is usually denoted by H|A. Similarly, H|C denotes the Schur complement A - B{C^{-1}}{B^T} with respect to the block C. The letter H in this notation refers to the full matrix: by H|A we mean the Schur complement of H with respect to the block A.

Throughout this entry, by A >0 (resp. A \geq 0) we mean that A is positive definite (resp. positive semi-definite). In order to solve the above problem, one needs the following matrix identity, the Aitken block-diagonalization formula,

\displaystyle\left( {\begin{array}{*{20}{c}} I&0 \\ { - {B^T}{A^{ - 1}}}&I \end{array}} \right)\left( {\begin{array}{*{20}{c}} A&B \\ {{B^T}}&C \end{array}} \right)\left( {\begin{array}{*{20}{c}} I&{ - {A^{ - 1}}B} \\ 0&I \end{array}} \right) = \left( {\begin{array}{*{20}{c}} A&0 \\ 0&{H|A} \end{array}} \right).

Now we assume A >0 and H|A >0. Then the following property

\displaystyle \left( {\begin{array}{*{20}{c}} A & 0 \\ 0 & {C - {B^T}{A^{ - 1}}B} \\ \end{array} } \right) > 0

holds true. Indeed, the left-hand side is a symmetric (n+m)\times(n+m) matrix, and for every vector \left( {\begin{array}{*{20}{c}} x \\ y \end{array}} \right) \in \mathbb R^{n+m} with x \in \mathbb R^n and y \in \mathbb R^m, one has

\displaystyle\left( {\begin{array}{*{20}{c}} {{x^T}}&{{y^T}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} A&0 \\ 0&{H|A} \end{array}} \right)\left( {\begin{array}{*{20}{c}} x \\ y \end{array}} \right) = {x^T}Ax + {y^T}(H|A)y.

If this vector is nonzero, then at least one of x and y is nonzero, so that

\displaystyle {x^T}Ax + {y^T}\left( {C - {B^T}{A^{ - 1}}B} \right)y > 0

which proves the positive definiteness of

\displaystyle \left( {\begin{array}{*{20}{c}} A & 0 \\ 0 & {C - {B^T}{A^{ - 1}}B} \\ \end{array} } \right).

Now, since the Aitken formula above exhibits this block-diagonal matrix as a congruence transform of H by invertible matrices, and congruence preserves positive definiteness, we conclude that H>0.

Conversely, suppose H>0. For every nonzero x \in \mathbb R^n, one has

\displaystyle \left( {\begin{array}{*{20}{c}} {{x^T}}&0 \end{array}} \right)H\left( {\begin{array}{*{20}{c}} x \\ 0 \end{array}} \right) = \left( {\begin{array}{*{20}{c}} {{x^T}}&0 \end{array}} \right)\left( {\begin{array}{*{20}{c}} A&B \\ {{B^T}}&C \end{array}} \right)\left( {\begin{array}{*{20}{c}} x \\ 0 \end{array}} \right) = {x^T}Ax > 0.

This, together with the fact that A is symmetric, implies A >0. As a consequence, A^{-1} exists, so S is well-defined.

Now, with the help of the matrix identity, one gets

\displaystyle\left( {\begin{array}{*{20}{c}} 0&{{y^T}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} I&0 \\ { - {B^T}{A^{ - 1}}}&I \end{array}} \right)\left( {\begin{array}{*{20}{c}} A&B \\ {{B^T}}&C \end{array}} \right)\left( {\begin{array}{*{20}{c}} I&{ - {A^{ - 1}}B} \\ 0&I \end{array}} \right)\left( {\begin{array}{*{20}{c}} 0 \\ y \end{array}} \right) = \left( {\begin{array}{*{20}{c}} 0&{{y^T}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} A&0 \\ 0&{H|A} \end{array}} \right)\left( {\begin{array}{*{20}{c}} 0 \\ y \end{array}} \right) = {y^T}\left( {H|A} \right)y.

Note that the left-hand side of the above identity is nothing but

\displaystyle\underbrace {\left( {\begin{array}{*{20}{c}} 0&{{y^T}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} I&0 \\ { - {B^T}{A^{ - 1}}}&I \end{array}} \right)}_D\left( {\begin{array}{*{20}{c}} A&B \\ {{B^T}}&C \end{array}} \right){D^T}

which is positive whenever y \ne 0, since H>0 and the vector D^T is then nonzero. Hence, for every y \ne 0,

\displaystyle {{y^T}\left( {C - {B^T}{A^{ - 1}}B} \right)y} > 0,

which proves that S>0. The proof is complete.
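A quick numerical illustration of the equivalence on random matrices (a sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3

def is_pd(M):
    # a symmetric matrix is positive definite iff all its eigenvalues are positive
    return bool(np.all(np.linalg.eigvalsh(M) > 1e-10))

for trial in range(200):
    M = rng.standard_normal((n + m, n + m))
    if trial % 2 == 0:
        H = (M + M.T) / 2                    # random symmetric (usually indefinite)
    else:
        H = M @ M.T + 0.1 * np.eye(n + m)    # guaranteed positive definite
    A, B, C = H[:n, :n], H[:n, n:], H[n:, n:]
    if is_pd(A):
        S = C - B.T @ np.linalg.inv(A) @ B   # Schur complement H|A
        assert is_pd(H) == is_pd(S)          # given A > 0:  H > 0  <=>  S > 0
    else:
        assert not is_pd(H)                  # H > 0 would force A > 0
print("Schur complement criterion verified on 200 random matrices")
```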

If I have time, I will provide another proof using Sylvester's Law of Inertia. For more on the Schur complement, I refer you to the book The Schur Complement and Its Applications, edited by Fuzhen Zhang.

July 24, 2009

Linear shooting method

In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to the solution of an initial value problem.

For a boundary value problem of a second-order ordinary differential equation, the method is stated as follows. Let

y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y(t_1) = y_1

be the boundary value problem. Let y(t, a) denote the solution of the initial value problem

y''(t) = f(t, y(t), y'(t)), \quad y(t_0) = y_0, \quad y'(t_0) = a

Define the function F(a) as the difference between y(t_1, a) and the specified boundary value y_1:

F(a) = y(t_1, a) - y_1.

If the boundary value problem has a solution, then F has a root, and that root is just the value of y'(t_0) which yields a solution y(t) of the boundary value problem. The usual methods for finding roots may be employed here, such as the bisection method or Newton’s method.
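The procedure can be sketched in a few lines. The example below is my own (assumed) test case: the BVP y'' = -y, y(0)=0, y(\pi/2)=1, whose exact solution is y = \sin t with initial slope y'(0) = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# BVP: y'' = -y, y(0) = 0, y(pi/2) = 1; exact solution y = sin(t), y'(0) = 1
t0, t1, y0, y1 = 0.0, np.pi / 2, 0.0, 1.0

def F(a):
    """F(a) = y(t1; a) - y1, where y solves the IVP with y'(t0) = a."""
    sol = solve_ivp(lambda t, u: [u[1], -u[0]], (t0, t1), [y0, a],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - y1

# root-finding on F (here Brent's method; bisection would also work)
a_star = brentq(F, 0.0, 2.0)
print(a_star)   # ~1.0, the correct initial slope
```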

Linear shooting method

The boundary value problem is linear if f has the form

f(t, y(t), y'(t))=p(t)y'(t)+q(t)y(t)+r(t).

In this case, provided y_{(2)}(t_1) \ne 0, the solution to the boundary value problem is given by

y(t) = y_{(1)}(t)+\frac{y_1-y_{(1)}(t_1)}{y_{(2)}(t_1)}y_{(2)}(t)

where y_{(1)}(t) is the solution to the initial value problem

y''(t) = p(t)y'(t)+q(t)y(t)+r(t),\quad y(t_0) = y_0, \quad y'(t_0) = 0,

and y_{(2)}(t) is the solution to the initial value problem

y''(t) = p(t)y'(t)+q(t)y(t),\quad y(t_0) = 0, \quad y'(t_0) = 1.

The precise condition under which this result holds is that y_{(2)}(t_1) \ne 0, as explained below.


Note that y_{(2)} is not identically zero, since y_{(2)}'(t_0) = 1. If y_{(2)}(t_1) = 0, however, the formula above is unavailable: every function y_{(1)} + Cy_{(2)} then satisfies the left boundary condition and takes the same value y_{(1)}(t_1) at t_1, so the boundary value problem has no solution when y_{(1)}(t_1) \ne y_1 and infinitely many when y_{(1)}(t_1) = y_1. When y_{(2)}(t_1) \ne 0, the formula

y(t) = y_{(1)}(t)+\frac{y_1-y_{(1)}(t_1)}{y_{(2)}(t_1)}y_{(2)}(t)

comes from the fact that we need to find y(t) as a combination of y_{(1)}(t) and y_{(2)}(t). In this manner, y(t) should be of the form

y(t) = y_{(1)}(t) + Cy_{(2)}(t) for all t.

At t=t_0,

{y_0} = y({t_0}) = \underbrace {{y_{(1)}}({t_0})}_{{y_0}} + C\underbrace {{y_{(2)}}({t_0})}_{0},

which holds automatically, since y_{(2)}(t_0)=0 by construction.
At t=t_1,

{y_1} = y({t_1}) = {y_{(1)}}({t_1}) + C{y_{(2)}}({t_1})

which implies that

C = \frac{{{y_1} - {y_{(1)}}({t_1})}} {{{y_{(2)}}({t_1})}}.


Therefore,

y(t) = {y_{(1)}}(t) + \frac{{{y_1} - {y_{(1)}}({t_1})}} {{{y_{(2)}}({t_1})}}{y_{(2)}}(t).
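The derivation above translates directly into code. The example below is an assumed test case of mine: p = 0, q = -1, r = 0, i.e. y'' = -y with y(0)=0, y(\pi/2)=1, for which y_{(1)} \equiv 0, y_{(2)} = \sin t, and C = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear BVP: y'' = p(t) y' + q(t) y + r(t), with p = 0, q = -1, r = 0,
# boundary conditions y(0) = 0, y(pi/2) = 1; exact solution y = sin(t)
p = lambda t: 0.0
q = lambda t: -1.0
r = lambda t: 0.0
t0, t1, y0, y1 = 0.0, np.pi / 2, 0.0, 1.0
ts = np.linspace(t0, t1, 201)

def ivp(rhs, u0):
    return solve_ivp(rhs, (t0, t1), u0, t_eval=ts, rtol=1e-10, atol=1e-12).y[0]

# y_(1): full equation with y(t0) = y0, y'(t0) = 0
yA = ivp(lambda t, u: [u[1], p(t) * u[1] + q(t) * u[0] + r(t)], [y0, 0.0])
# y_(2): homogeneous equation with y(t0) = 0, y'(t0) = 1
yB = ivp(lambda t, u: [u[1], p(t) * u[1] + q(t) * u[0]], [0.0, 1.0])

C = (y1 - yA[-1]) / yB[-1]   # requires y_(2)(t1) != 0
y = yA + C * yB
print(y[0], y[-1])           # boundary values ~0 and ~1
```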


The following boundary value problem is given by Stoer and Bulirsch.

w''(t) = \frac{3}{2} w^2, \quad w(0) = 4, \quad w(1) = 1.

The initial value problem

w''(t) = \frac{3}{2} w^2, \quad w(0) = 4, \quad w'(0) = s

was solved for s = -1, -2, -3, ..., -100, and F(s) = w(1,s) -1 plotted in the first figure. Inspecting the plot of F, we see that there are roots near -8 and -36. Some trajectories of w(t,s) are shown in the second figure.

Solutions of the initial value problem were computed by using the LSODE algorithm, as implemented in the mathematics package GNU Octave. Stoer and Bulirsch state that there are two solutions, which can be found by algebraic methods. These correspond to the initial conditions w'(0) = -8 and w'(0) = -35.9 (approximately).
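The root near -8 is in fact exact: w(t) = 4/(1+t)^2 solves the problem with w'(0) = -8. A check of F(s) = w(1,s) - 1 near the reported roots, using scipy's `solve_ivp` rather than the LSODE setup described above:

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(s):
    """F(s) = w(1; s) - 1 for the IVP w'' = (3/2) w^2, w(0) = 4, w'(0) = s."""
    sol = solve_ivp(lambda t, u: [u[1], 1.5 * u[0] ** 2], (0.0, 1.0),
                    [4.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0

# w(t) = 4/(1+t)^2 is an exact solution with w'(0) = -8
print(F(-8.0))    # ~0
print(F(-35.9))   # near 0 (second root reported by Stoer and Bulirsch)
```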

The first figure shows the function F(s) = w(1,s)-1.

The second figure shows the trajectories w(t,s) for s = w'(0) equal to -7, -8, -10, -36, and -40 (red, green, blue, cyan, and magenta, respectively); the point (1,1) is marked with a red diamond.


July 20, 2009

A generalization of Morera's Theorem

In complex analysis, a branch of mathematics, Morera’s theorem, named after Giacinto Morera, gives an important criterion for proving that a function is holomorphic.

Morera’s theorem states that if f is a continuous, complex-valued function defined on an open set D in the complex plane, satisfying

\displaystyle\oint_C f(z) dz = 0

for every triangle C in D, then f must be holomorphic on D.

The assumption of Morera's theorem is equivalent to f having an antiderivative on D. The converse of the theorem is not true in general: a holomorphic function need not possess an antiderivative on its domain, unless one imposes additional assumptions. For instance, Cauchy's integral theorem states that the line integral of a holomorphic function along a closed curve is zero, provided that the domain of the function is simply connected.

Now we state and prove the following generalization of Morera's theorem: Suppose that f is continuous on \mathbb C, and

\displaystyle\int_C f(z) dz = 0

for every circle C. Prove that f is holomorphic on \mathbb C.
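As an illustration of why circle integrals detect holomorphy (a numerical sketch, not part of any proof): a holomorphic function such as e^z integrates to zero over every circle, while the continuous but non-holomorphic function \bar z gives \oint_C \bar z\,dz = 2\pi i r^2 over a circle of radius r.

```python
import numpy as np

def circle_integral(f, center, r, N=20000):
    """Numerically integrate f over the circle |z - center| = r, counterclockwise."""
    th = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    z = center + r * np.exp(1j * th)
    dz = 1j * r * np.exp(1j * th) * (2.0 * np.pi / N)   # z'(theta) d(theta)
    return np.sum(f(z) * dz)

print(circle_integral(np.exp, 0.3 + 0.2j, 1.5))    # ~0 (holomorphic)
print(circle_integral(np.conj, 0.3 + 0.2j, 1.5))   # ~2*pi*i*r^2 (not holomorphic)
```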



July 18, 2009

Schwarz Reflection Principle and several applications, 2

I want to continue the topic “Schwarz Reflection Principle and several applications”. Today we discuss the following.

Question. If an entire function g satisfies |g(z)|=1 whenever |z|=1, then there exist a non-negative integer n and a constant c with |c| = 1 such that g(z) = cz^n.

Solution. Suppose g has a zero of order n \ge 0 at the origin, and let a_1,a_2,\dots,a_m be the other zeros of g in the unit disk, repeated according to multiplicity. This set of zeros must be finite: a limit point in the open disk would force g \equiv 0, while a limit point on the boundary would contradict |g| = 1 there.


Define the finite Blaschke product

\displaystyle B(z) = z^n\prod_{j = 1}^m\frac {z - a_j}{1 - \overline{a_j}z}.

Note that |B(z)| = 1 for all |z| = 1. Let f(z) = \frac {g(z)}{B(z)}. Then f is holomorphic on a neighborhood of the closed unit disk, has no zeros in the unit disk, and satisfies |f(z)| = 1 for |z| = 1.

By the maximum modulus principle, |f(z)| and \frac {1}{|f(z)|} both attain their maxima over the closed unit disk on the boundary, from which we conclude that |f(z)| = 1 for all |z|\le 1. But then f(z) = c must be constant, with |c| = 1.

So we have that

\displaystyle g(z) = cB(z) = cz^n\prod_{j = 1}^m\frac {z - a_j}{1 - \overline{a_j}z}.

But that expression has poles at the points z = \frac1{\overline{a_j}}, whereas g is entire. The only way g can be entire is for there to be no such a_j, i.e. m = 0. Hence we conclude that g(z) = cz^n for some non-negative integer n.
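A quick numerical check of the key property |B(z)| = 1 on the unit circle, for sample (illustrative, arbitrarily chosen) zeros a_j:

```python
import numpy as np

def blaschke(z, n, a):
    """B(z) = z^n * prod_j (z - a_j) / (1 - conj(a_j) z)."""
    B = z ** n
    for aj in a:
        B = B * (z - aj) / (1 - np.conj(aj) * z)
    return B

a = [0.5, -0.3 + 0.4j, 0.1j]   # sample zeros inside the unit disk
z = np.exp(1j * np.linspace(0, 2 * np.pi, 1000))   # points on the unit circle
print(np.max(np.abs(np.abs(blaschke(z, 2, a)) - 1.0)))   # ~0: |B| = 1 on |z| = 1
```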

Here is an even further simplification.

Dividing g by z^{k}, where k \ge 0 is the order of vanishing of g at the origin (which does not change the property |g| = 1 on |z| = 1), we may assume g(0)\ne 0. Now consider

\displaystyle \frac1{g(1/z)} \quad \text{ and } \quad \overline{g(\bar z)}.

Both are holomorphic in a neighborhood of the unit circle (g has no zeros there, since |g| = 1 on |z| = 1), and they coincide on the circle: for |z| = 1 one has \bar z = 1/z and |g(1/z)| = 1, so \overline{g(\bar z)} = \overline{g(1/z)} = 1/g(1/z). By the identity theorem, the entire function \overline{g(\bar z)} agrees with 1/g(1/z) wherever the latter is defined. Hence

\displaystyle \overline{g(\bar z)}\to \frac1{g(0)} \quad \text{ as } \quad z\to\infty,

so g is bounded and, being entire, constant by Liouville's theorem; undoing the division by z^k recovers g(z) = cz^k.

I will provide another proof, based mainly on the Schwarz Reflection Principle. However, to this end, an extension of Morera's Theorem to toy contours must first be introduced.

July 17, 2009

3 improper integral problems involving sin x/x via residues

Problem 1. Compute

\displaystyle\int\limits_{ - \infty }^\infty {\frac{{\sin x}} {x}dx}

via complex variable methods.

Problem 2. Compute

\displaystyle\int\limits_{ - \infty }^\infty {\frac{{\sin^2 x}} {x^2}dx}

via complex variable methods.

Problem 3. Compute

\displaystyle\int\limits_{ - \infty }^\infty {\frac{{\sin^3 x}} {x^3}dx}

via complex variable methods.


July 16, 2009

A beautiful inequality regarding complex variables

The following inequality

\displaystyle \left|{\frac{{{z_{1}}-{z_{2}}}}{{1-\overline{{z_{1}}}{z_{2}}}}}\right|\geqslant\frac{{\left|{{z_{1}}}\right|-\left|{{z_{2}}}\right|}}{{1-\left|{{z_{1}}}\right|\left|{{z_{2}}}\right|}},\quad\forall{z_{1}},{z_{2}}\in D\left({0,1}\right)

holds true. To prove it, we argue as follows: a direct computation gives

\displaystyle {\left|{\frac{{{z_{1}}-{z_{2}}}}{{1-\overline{{z_{1}}}{z_{2}}}}}\right|^{2}}= 1-\frac{{\left({1-{{\left|{{z_{1}}}\right|}^{2}}}\right)\left({1-{{\left|{{z_{2}}}\right|}^{2}}}\right)}}{{{{\left|{1-\overline{{z_{1}}}{z_{2}}}\right|}^{2}}}}


Since \left|{1-\overline{{z_{1}}}{z_{2}}}\right|\geqslant 1-\left|{{z_{1}}}\right|\left|{{z_{2}}}\right| > 0 by the triangle inequality,

\displaystyle 1-\frac{{\left({1-{{\left|{{z_{1}}}\right|}^{2}}}\right)\left({1-{{\left|{{z_{2}}}\right|}^{2}}}\right)}}{{{{\left|{1-\overline{{z_{1}}}{z_{2}}}\right|}^{2}}}}\geqslant 1-\frac{{\left({1-{{\left|{{z_{1}}}\right|}^{2}}}\right)\left({1-{{\left|{{z_{2}}}\right|}^{2}}}\right)}}{{{{\left|{1-\left|{{z_{1}}}\right|\left|{{z_{2}}}\right|}\right|}^{2}}}}.


Finally, another direct computation shows that

\displaystyle 1-\frac{{\left({1-{{\left|{{z_{1}}}\right|}^{2}}}\right)\left({1-{{\left|{{z_{2}}}\right|}^{2}}}\right)}}{{{{\left|{1-\left|{{z_{1}}}\right|\left|{{z_{2}}}\right|}\right|}^{2}}}}=\frac{{{{\left({\left|{{z_{1}}}\right|-\left|{{z_{2}}}\right|}\right)}^{2}}}}{{{{\left|{1-\left|{{z_{1}}}\right|\left|{{z_{2}}}\right|}\right|}^{2}}}}.

Combining the three displays and taking square roots proves the inequality.



As an application we can prove the following theorem of Lindelöf: if f is holomorphic and bounded by 1 in D(0, 1), then

\displaystyle\left|{f\left( z\right)}\right|\leqslant\frac{{\left|{f\left( 0\right)}\right|+\left| z\right|}}{{1+\left|{f\left( 0\right)}\right|\left| z\right|}},\quad\forall z\in D\left({0,1}\right).

Indeed, applying the inequality above with z_1 = f(z) and z_2 = f(0), combining it with the Schwarz–Pick lemma \left| \frac{f(z)-f(0)}{1-\overline{f(0)}f(z)} \right| \leqslant |z|, and rearranging gives the stated bound.
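A numerical spot-check of the basic inequality at random pairs of points in the unit disk (a sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_disk(k):
    # uniformly distributed points in the open unit disk, kept away from |z| = 1
    r = np.sqrt(rng.uniform(0, 1, k)) * 0.999
    th = rng.uniform(0, 2 * np.pi, k)
    return r * np.exp(1j * th)

z1, z2 = random_disk(10000), random_disk(10000)

lhs = np.abs((z1 - z2) / (1 - np.conj(z1) * z2))
rhs = (np.abs(z1) - np.abs(z2)) / (1 - np.abs(z1) * np.abs(z2))
print(np.min(lhs - rhs))   # nonnegative: lhs >= rhs at every sampled pair
```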

