Ngô Quốc Anh

October 30, 2009

A characteristic of essentially bounded functions


In this topic, we prove the following statement

Statement: Let (X,\mathcal B, m) be a probability space. Let h \in L^2(m). Then h is essentially bounded iff h \cdot f \in L^2(m) for all f \in L^2(m).

Proof. If h is essentially bounded, say |h| \leq c almost everywhere, then

\displaystyle\int_X {{{\left| {h \cdot f} \right|}^2}dm} \leq {c^2}\int_X {{{\left| f \right|}^2}dm} < +\infty

for all f \in L^2(m). Conversely, we suppose h is such that h \cdot f \in L^2(m) whenever f \in L^2(m). Let

\displaystyle X_n = \{ x \in X : n-1 \leq |h(x)| < n\}, \quad n = 1, 2, \ldots

Then \{X_n\}_1^\infty partitions X. Let

\displaystyle f\left( x \right) =\sum\limits_{n = 1}^\infty{\frac{1}{{n\sqrt {m\left( {{X_n}} \right)} }}{\chi_{{X_n}}}\left( x \right)} ,

where it is understood that the n-th term is omitted if m(X_n)=0. Then

\displaystyle\int_X {{{\left| f \right|}^2}dm}=\int_X {{{\left({\sum\limits_{n = 1}^\infty {\frac{1}{{n\sqrt {m\left( {{X_n}}\right)} }}{\chi _{{X_n}}}\left( x \right)} } \right)}^2}dm}\leq \sum\limits_{n = 1}^\infty{\frac{1}{{{n^2}}}}<\infty

which implies f \in L^2(m); note that the cross terms in the square vanish because the sets X_n are pairwise disjoint. Since

\displaystyle\int_X {{{\left| {hf} \right|}^2}dm}=\sum\limits_{n \in F} {\int_{{X_n}} {{{\left| {hf} \right|}^2}dm}}\geq\sum\limits_{n \in F}{{{\left( {\frac{{n - 1}}{n}}\right)}^2}}

where F = \left\{ {n:m\left( {{X_n}} \right) \ne 0} \right\}; indeed, on X_n one has |h| \geq n-1 and |f| = \frac{1}{n\sqrt{m(X_n)}}. Since h \cdot f \in L^2(m), the sum on the right-hand side is finite; its terms tend to 1, so F must be finite, and therefore h is essentially bounded.
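
A quick numerical illustration of the converse direction (the concrete choices here are ours and are not part of the proof): on X=(0,1] with Lebesgue measure, which is a probability space, the function h(x)=x^{-1/4} belongs to L^2(m) but is not essentially bounded, and already f=h \in L^2(m) gives h \cdot f \notin L^2(m). A short check with scipy:

from scipy.integrate import quad

# X = (0, 1] with Lebesgue measure; h(x) = x^(-1/4) is in L^2 but unbounded
h_squared  = lambda x: x ** (-0.5)     # |h|^2 = x^(-1/2), integrable near 0
hf_squared = lambda x: x ** (-1.0)     # with f = h, |h*f|^2 = x^(-1), not integrable near 0

print(quad(h_squared, 0.0, 1.0)[0])            # finite, equal to 2
for eps in [1e-3, 1e-6, 1e-9]:
    print(eps, quad(hf_squared, eps, 1.0)[0])  # equals log(1/eps), blows up as eps -> 0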

October 28, 2009

The weak and weak* topologies: A few words

The weak and weak* topologies are the weakest topologies in which certain linear functionals are continuous.

We start with a normed linear space X. The dual space of X, denoted by X', is the collection of all continuous linear functionals, i.e., the set of all mappings \ell : X \to \mathbb R satisfying

\ell(ax)=a \ell (x), \ell(x+y)=\ell(x)+\ell(y)

and

\displaystyle\lim_{n \to \infty} \ell(x_n) = \ell(x) when \displaystyle\lim_{n \to \infty} \|x_n - x\|=0.

Definition 1. In X, the strong topology is the norm topology; that is, a set U \subset X is said to be open if and only if for each x_0 \in U, there exists \varepsilon>0 such that \{ x \in X: \|x-x_0\|<\varepsilon\} \subset U.

Claim 1. Bounded linear functionals are continuous in the strong topology.

Proof. We first recall that a linear functional \ell is said to be bounded if there is a positive number c such that |\ell (x)| \leq c\|x\| for all x \in X.

Now we assume \ell is not bounded; then for each n one can find x_n such that |\ell(x_n)| > n \|x_n\|. Clearly, x_n can be replaced by any multiple of x_n; if we normalize x_n so that

\displaystyle \|x_n\|=\frac{1}{\sqrt{n}}

then x_n \to 0 while |\ell (x_n)| > \sqrt n \to \infty. This shows that the lack of boundedness implies the lack of continuity.

Now we assume \ell is bounded. For arbitrary x_n and x, one gets

|\ell(x_n)-\ell (x)| = |\ell (x_n-x)| \leq c\|x_n-x\|;

this shows that boundedness implies continuity.

Definition 2. In X, the weak topology is the weakest topology in which all bounded linear functionals are continuous.

The open sets in the weak topology are unions of finite intersections of sets of the form

\{ x : a< \ell(x) < b\}.

Clearly, in an infinite-dimensional space the intersection of a finite number of sets of the above form is unbounded. This shows that every nonempty set that is open in the weak topology is unbounded. In particular, the balls

\{ x : \|x\|<R\}

which are open in the strong topology, are not open in the weak topology.
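
To see this failure concretely, consider X=\ell^2 (an illustrative choice, not part of the discussion above): the unit vectors e_n converge weakly to 0, since \ell(e_n) \to 0 for every bounded linear functional \ell (each such \ell is given by pairing with a square-summable sequence), while \|e_n\|=1 for all n. Hence every weak neighbourhood of 0 contains points of norm 1, so the ball \{x : \|x\|<1\} cannot be weakly open. A small numerical sketch, truncating \ell^2 to its first N coordinates:

import numpy as np

N = 5000                                  # truncate l^2 to its first N coordinates
y = 1.0 / np.arange(1, N + 1)             # a fixed square-summable sequence, representing a functional

for n in [1, 10, 100, 1000]:
    e_n = np.zeros(N)
    e_n[n - 1] = 1.0
    # the pairing <y, e_n> = 1/n tends to 0, while the norm of e_n stays equal to 1
    print(n, np.dot(y, e_n), np.linalg.norm(e_n))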

Definition 3. In X' the dual space of X, the weak* topology is the crudest topology in which all linear functionals

x: X' \to \mathbb R, x(\ell) := \ell(x)

with x \in X, are continuous.

If X' is nonreflexive, the weak* topology is genuinely coarser than the weak topology, as will be clear from the following theorem due to Alaoglu, combined with the theorem that ends this post.

Theorem (Alaoglu). The closed unit ball in X' is compact in the weak* topology.

We end this topic with the following theorem

Theorem. The closed unit ball in X is compact in the weak topology if and only if X is reflexive.

October 17, 2009

The Brezis-Lieb lemma and several applications

What we did in an earlier topic was just the L^p version of the Brezis-Lieb lemma. In this topic, we will discuss a generalization of that lemma.

Roughly speaking, what we are going to prove is the following:  If j : \mathbb C \to \mathbb C is a continuous function such that j(0) = 0, then, when f_n \to f a.e. and

\displaystyle\int |j(f_n(x))| d\mu(x) \leq C < \infty

we claim that

\displaystyle\lim\limits_{n \to \infty} \int \left[ j(f_n) - j(f_n - f)\right] = \int j(f)

under suitable conditions on j and/or \{f_n\}.

To be exact, we assume in addition that j satisfies the following hypothesis:

For every sufficiently small \varepsilon>0, there exist two continuous, nonnegative functions \varphi_\varepsilon and \psi_\varepsilon such that

\displaystyle |j(a+b)-j(a)| \leq \varepsilon \varphi_\varepsilon(a) + \psi_\varepsilon(b)

for all a, b \in \mathbb C.

Theorem. Let j satisfy the above hypothesis and let f_n = f+g_n be a sequence of measurable functions from \Omega to \mathbb C such that

  1. g_n \to 0 a.e.
  2. j(f) \in L^1.
  3. \displaystyle\int \varphi_\varepsilon(g_n(x))d\mu(x) \leq C < \infty for some constant C, independent of \varepsilon and n.
  4. \displaystyle\int \psi_\varepsilon(f(x)) d\mu(x) < \infty for all \varepsilon >0.

Then, as n \to \infty,

\displaystyle\lim\limits_{n \to \infty} \int \left| j(f+g_n) - j(g_n) - j(f) \right| d\mu =0.

Proof. Fix \varepsilon >0 and let

\displaystyle W_{\varepsilon, n} (x) = \Big[ \big|j(f_n(x)) -j(g_n(x)) - j(f(x))\big| - \varepsilon \varphi_\varepsilon (g_n(x))\Big]_+,

where [a]_+ = \max\{a,0\}. As n \to \infty, W_{\varepsilon, n} (x) \to 0 a.e. On the other hand,

\displaystyle \big| j(f_n) - j(g_n) - j(f)\big| \leq |j(f_n) - j(g_n)| + |j(f)| \leq \varepsilon \varphi_\varepsilon(g_n) + \psi_\varepsilon(f) + |j(f)|.

Therefore, W_{\varepsilon, n} \leq \psi_\varepsilon(f) + |j(f)| \in L^1. By the Lebesgue Dominated Convergence theorem, \displaystyle\int W_{\varepsilon, n} d\mu \to 0 as n \to \infty. However,

\displaystyle |j(f_n) - j(g_n) - j(f)| \leq W_{\varepsilon, n} +\varepsilon \varphi_\varepsilon(g_n)

and thus

\displaystyle I_n \equiv \int \big| j(f_n) - j(g_n) - j(f) \big| d\mu\leq \int \big[ W_{\varepsilon, n} + \varepsilon \varphi_\varepsilon(g_n)\big] d\mu .

Consequently, \limsup_{n \to \infty} I_n \leq \varepsilon C. Now let \varepsilon \to 0.

Applications.

  • The simplest example is when we choose j(x)=|x|^p where 0< p<\infty; for 0<p\leq 1 the hypothesis above holds with \varphi_\varepsilon=0 and \psi_\varepsilon(b)=|b|^p, while for p>1 it follows from the elementary inequality \big||a+b|^p-|a|^p\big| \leq \varepsilon |a|^p + C_\varepsilon |b|^p (a numerical illustration of the resulting identity is given at the end of this post). In this situation, one has

\displaystyle \int \Big(|f_n|^p - |f_n - f|^p - |f|^p \Big) d\mu \to 0.

  • We now assume u_n \rightharpoonup u in W^{1, 2} (say on a bounded domain \Omega \subset \mathbb R^n, n \geq 3, so that the Rellich-Kondrachov theorem applies). As a consequence and up to a subsequence, u_n \to u in L^\alpha for every 1<\alpha<2^\star := \frac{2n}{n-2} and u_n \to u a.e. Therefore, for a fixed q \in (2, 2^\star), the fact that u_n \to u in L^q implies, by the Brezis-Lieb lemma, that

    \displaystyle u_n^{q-1} \to u^{q-1} in L^\frac{q}{q-1}.

    This is because \{u_n^{q-1}\}_n \subset L^\frac{q}{q-1} is bounded, u_n^{q-1} \to u^{q-1} a.e. and

\displaystyle\mathop {\lim }\limits_{n \to \infty } \int {\Big( {\underbrace {{{\left| {u_n^{q - 1}} \right|}^{\frac{q}{{q - 1}}}}}_{{{\left| {{u_n}} \right|}^q}} - {{\left| {u_n^{q - 1} - {u^{q - 1}}} \right|}^{\frac{q}{{q - 1}}}}} \Big)d\mu }=\int {\underbrace {{{\left| {{u^{q - 1}}} \right|}^{\frac{q}{{q - 1}}}}}_{{{\left| u \right|}^q}}d\mu }.

    The fact that u_n \to u strongly in L^q implies that \lim_{n\to \infty} \int |u_n|^q d\mu = \int |u|^q d\mu. Therefore,

\displaystyle \mathop {\lim }\limits_{n \to \infty }\int {{{\left| {u_n^{q - 1} - {u^{q - 1}}} \right|}^{\frac{q}{{q - 1}}}}d\mu } = 0.

    As a consequence, by Hölder's inequality (the first factor tends to 0 in L^{\frac{q}{q-1}} while u_n - u stays bounded in L^q), one has the following result

    \displaystyle \mathop {\lim }\limits_{n \to \infty } \int {\left( {u_n^{q - 1} - {u^{q - 1}}} \right)\left( {{u_n} - u} \right)d\mu } = 0.
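
As promised in the first application above, here is a small numerical sketch of the identity \int \big(|f_n|^p - |f_n-f|^p - |f|^p \big) d\mu \to 0 (the profiles f and g_n below are chosen only for illustration): take f(x)=e^{-x^2} and g_n(x)=e^{-(x-n)^2} on the real line, p=3, and f_n=f+g_n, so that g_n \to 0 pointwise while its mass escapes to infinity.

import numpy as np

p = 3.0
x = np.linspace(-10.0, 60.0, 200001)      # a grid large enough to contain both bumps
dx = x[1] - x[0]
f = np.exp(-x ** 2)                       # the fixed profile f
for n in [1, 2, 4, 8, 16]:
    g_n = np.exp(-(x - n) ** 2)           # a bump travelling to infinity, g_n -> 0 pointwise
    f_n = f + g_n
    integrand = np.abs(f_n) ** p - np.abs(g_n) ** p - np.abs(f) ** p
    print(n, (integrand * dx).sum())      # tends to 0 as the two bumps separate

The printed values decrease to 0 because the integrand is supported essentially where f and g_n overlap, and this overlap disappears as n grows.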

October 13, 2009

Strong convergence in L^p implies convergence a.e.

This topic is to show how to prove the following statement:

if \{u_n\}_n converges strongly to some u in L^p(\Omega), then up to a subsequence, \{u_n\}_n converges almost everywhere to u in \Omega.

The proof relies on the so-called Tchebyshev inequality. To this end, we first observe that the strong convergence of \{u_n\}_n to u in L^p(\Omega) means

\displaystyle\lim\limits_{n \to \infty } \int_\Omega {{{\left| {{u_n} - u} \right|}^p}dx} = 0.

We now apply the Tchebyshev inequality: for each \varepsilon>0 one has

\displaystyle {\rm meas}\left\{ {x:\left| {{u_n}(x) - u(x)} \right| >\varepsilon } \right\} \leqslant \frac{1}{{{\varepsilon ^p}}}\int_{\left\{ {x:\left| {{u_n}(x) - u(x)} \right| > \varepsilon } \right\}} {{{\left| {{u_n} - u} \right|}^p}dx} .

The right hand side of the above inequality can be dominated by

\displaystyle\frac{1}{{{\varepsilon ^p}}}\int_\Omega {{{\left| {{u_n} - u} \right|}^p}dx}

which implies that

\displaystyle 0 \leqslant \mathop {\lim }\limits_{n \to \infty } {\rm meas}\left\{ {x:\left| {{u_n}(x) - u(x)} \right| > \varepsilon } \right\} \leqslant \mathop {\lim }\limits_{n \to \infty } \left( {\frac{1} {{{\varepsilon ^p}}}\int_\Omega {{{\left| {{u_n} - u} \right|}^p}dx} } \right) = 0.

Thus u_n converges to u in measure. Choosing a subsequence n_k such that {\rm meas}\{x : |u_{n_k}(x) - u(x)| > 1/k\} \leq 2^{-k} and applying the Borel-Cantelli lemma, one concludes that u_{n_k} converges to u almost everywhere.
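
The passage to a subsequence cannot be removed in general. A standard counterexample (recalled here as an illustration; it is not part of the post) is the "typewriter" sequence on [0,1]: for n = 2^k + j with 0 \leq j < 2^k, let u_n be the indicator function of [j2^{-k}, (j+1)2^{-k}). Then \int_0^1 |u_n|^p dx = 2^{-k} \to 0, so u_n \to 0 in L^p, yet at every x \in [0,1) the values u_n(x) equal 1 infinitely often, so u_n(x) converges at no such point; the subsequence u_{2^k} = \chi_{[0,2^{-k})} does converge to 0 almost everywhere. A short sketch:

import numpy as np

def u(n, x):
    # write n = 2**k + j with 0 <= j < 2**k; u_n is the indicator of [j/2^k, (j+1)/2^k)
    k = int(np.floor(np.log2(n)))
    j = n - 2 ** k
    return ((j / 2 ** k <= x) & (x < (j + 1) / 2 ** k)).astype(float)

x = np.linspace(0.0, 1.0, 200001)
for n in [3, 10, 100, 1000]:
    print(n, np.mean(u(n, x) ** 2))        # approximates the squared L^2 norm, roughly 2^(-k) -> 0

x0 = np.array([0.3])
print([int(u(n, x0)[0]) for n in range(4, 32)])   # the value at x0 = 0.3 keeps returning to 1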

October 11, 2009

An example of the sectional curvature of the sphere

On \mathbb S^n, written in stereographic coordinates y \in \mathbb R^n, we consider the metric

{g_{ij}} = {\left( {\displaystyle\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}{\delta _{ij}}.

We now find the sectional curvature of g.

Since

{g_{ij}} = \displaystyle{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}{\delta _{ij}}

then

{g^{ij}} = \displaystyle{\left( {\frac{{1 + {{\left| y \right|}^2}}}{2}} \right)^2}{\delta _{ij}}.

Now we need to calculate

\Gamma _{ij}^k = \displaystyle\frac{1}{2}{g^{kl}}\left( {\frac{{\partial {g_{il}}}}{{\partial {y_j}}} + \frac{{\partial {g_{lj}}}}{{\partial {y_i}}} - \frac{{\partial {g_{ij}}}}{{\partial {y_l}}}} \right).

Clearly,

\displaystyle\frac{{\partial {g_{il}}}}{{\partial {y_j}}} = \frac{\partial }{{\partial {y_j}}}\left( {{{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)}^2}{\delta _{il}}} \right) = 2\frac{2}{{1 + {{\left| y \right|}^2}}}\frac{{ - 2}}{{{{\left( {1 + {{\left| y \right|}^2}} \right)}^2}}}\left( {2{y_j}} \right){\delta _{il}} = - 2{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^3}{y_j}{\delta _{il}}.

Similarly, one has the following

\displaystyle\frac{{\partial {g_{lj}}}}{{\partial {y_i}}} = - 2{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^3}{y_i}{\delta _{lj}}, \quad \frac{{\partial {g_{ij}}}}{{\partial {y_l}}} = - 2{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^3}{y_l}{\delta _{ij}}.

Therefore,

\displaystyle \Gamma _{ij}^k= \frac{{ - 2}}{{1 + {{\left| y \right|}^2}}}\left( {{y_j}{\delta _{ik}} + {y_i}{\delta _{kj}} - {y_k}{\delta _{ij}}} \right).

Next we need to calculate coefficients R^m_{lij} of the curvature tensor. To this purpose, we use

\displaystyle R_{lij}^m = \frac{{\partial \Gamma _{lj}^m}}{{\partial {y_i}}} - \frac{{\partial \Gamma _{li}^m}}{{\partial {y_j}}} + \Gamma _{in}^m\Gamma _{jl}^n - \Gamma _{jn}^m\Gamma _{il}^n.

We have

\displaystyle\frac{{\partial \Gamma _{lj}^m}}{{\partial {y_i}}} = \frac{\partial }{{\partial {y_i}}}\left( {\frac{{ - 2}}{{1 + {{\left| y \right|}^2}}}\left( {{y_j}{\delta _{ml}} + {y_l}{\delta _{mj}} - {y_m}{\delta _{lj}}} \right)} \right)

which yields

\displaystyle\frac{{\partial \Gamma _{lj}^m}}{{\partial {y_i}}} ={\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}\left( {{y_i}{y_j}{\delta _{ml}} + {y_i}{y_l}{\delta _{mj}} - {y_i}{y_m}{\delta _{lj}}} \right) + \frac{{ - 2}}{{1 + {{\left| y \right|}^2}}}\left( {{\delta _{ij}}{\delta _{ml}} + {\delta _{il}}{\delta _{mj}} - {\delta _{im}}{\delta _{lj}}} \right).

Similarly, one has

\displaystyle\frac{{\partial \Gamma _{li}^m}}{{\partial {y_j}}} = {\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}\left( {{y_i}{y_j}{\delta _{ml}} + {y_j}{y_l}{\delta _{mi}} - {y_j}{y_m}{\delta _{li}}} \right) + \frac{{ - 2}}{{1 + {{\left| y \right|}^2}}}\left( {{\delta _{ji}}{\delta _{ml}} + {\delta _{jl}}{\delta _{mi}} - {\delta _{jm}}{\delta _{li}}} \right).

Therefore,

\displaystyle\frac{{\partial \Gamma _{lj}^m}}{{\partial {y_i}}} - \frac{{\partial \Gamma _{li}^m}}{{\partial {y_j}}} = {\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}\left( {{y_i}{y_l}{\delta _{mj}} - {y_j}{y_l}{\delta _{mi}} - {y_i}{y_m}{\delta _{lj}} + {y_j}{y_m}{\delta _{li}}} \right) - \frac{4}{{1 + {{\left| y \right|}^2}}}\left( {{\delta _{il}}{\delta _{mj}} - {\delta _{im}}{\delta _{lj}}} \right).

On the other hand,

\displaystyle\Gamma _{in}^m\Gamma _{jl}^n - \Gamma _{jn}^m\Gamma _{il}^n = {\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}\left\{ \begin{gathered}  {y_i}{y_j}{\delta _{ml}} + {y_i}{y_l}{\delta _{mj}} - {y_i}{y_m}{\delta _{jl}} + \hfill \\  {y_l}{y_j}{\delta _{im}} + {y_j}{y_l}{\delta _{im}} - y_n^2{\delta _{jl}}{\delta _{im}} - \hfill \\ {y_m}{y_j}{\delta _{il}} - {y_m}{y_l}{\delta _{ij}} + {y_m}{y_i}{\delta _{jl}} \hfill \\ \hfill \\ - {y_i}{y_j}{\delta _{ml}} - {y_j}{y_l}{\delta _{mi}} + {y_j}{y_m}{\delta _{il}} - \hfill \\ {y_l}{y_i}{\delta _{jm}} - {y_i}{y_l}{\delta _{jm}} + y_n^2{\delta _{il}}{\delta _{jm}} + \hfill \\ {y_m}{y_i}{\delta _{jl}} + {y_m}{y_l}{\delta _{ij}} - {y_m}{y_j}{\delta _{il}} \end{gathered} \right\}

which yields

\displaystyle\Gamma _{in}^m\Gamma _{jl}^n - \Gamma _{jn}^m\Gamma _{il}^n = {\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}\left( {{y_j}{y_l}{\delta _{im}} - {y_i}{y_l}{\delta _{jm}} + {y_i}{y_m}{\delta _{lj}} - {y_j}{y_m}{\delta _{li}} + {{\left| y \right|}^2}\left( {{\delta _{il}}{\delta _{jm}} - {\delta _{jl}}{\delta _{im}}} \right)} \right).

Thus, adding the two expressions above, the terms involving products of coordinates cancel in pairs, and we are left with

\displaystyle R_{lij}^m = - \frac{4}{{1 + {{\left| y \right|}^2}}}\left( {{\delta _{il}}{\delta _{mj}} - {\delta _{im}}{\delta _{lj}}} \right) + {\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}{\left| y \right|^2}\left( {{\delta _{il}}{\delta _{jm}} - {\delta _{jl}}{\delta _{im}}} \right) = {\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)^2}\left( {{\delta _{im}}{\delta _{lj}} - {\delta _{il}}{\delta _{mj}}} \right).

Finally, for i \ne j, one obtains

\displaystyle K\left( {{e_i},{e_j}} \right) = \frac{{\left\langle {R\left( {{e_i},{e_j}} \right){e_j},{e_i}} \right\rangle }}{{\left\langle {{e_i},{e_i}} \right\rangle \left\langle {{e_j},{e_j}} \right\rangle - {{\left\langle {{e_i},{e_j}} \right\rangle }^2}}} = \frac{{R_{jij}^m\left\langle {{e_m},{e_i}} \right\rangle }}{{{{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)}^4}}} = \frac{{R_{jij}^m{{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)}^2}{\delta _{mi}}}}{{{{\left( {\frac{2}{{1 + {{\left| y \right|}^2}}}} \right)}^4}}} = 1.
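
For n=2 (a choice made only to keep the symbolic computation small) the whole calculation can be verified with sympy; the script below implements exactly the formulas for \Gamma^k_{ij} and R^m_{lij} used above and prints the sectional curvature 1.

import sympy as sp

y1, y2 = sp.symbols('y1 y2', real=True)
y = [y1, y2]
c = 2 / (1 + y1 ** 2 + y2 ** 2)
g = sp.Matrix([[c ** 2, 0], [0, c ** 2]])       # g_ij = (2/(1+|y|^2))^2 delta_ij
ginv = g.inv()
dim = 2

def Gamma(k, i, j):
    # Christoffel symbols Gamma^k_ij = (1/2) g^{kl} (d_j g_il + d_i g_lj - d_l g_ij)
    return sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[i, l], y[j]) + sp.diff(g[l, j], y[i]) - sp.diff(g[i, j], y[l]))
        for l in range(dim))

def R(m, l, i, j):
    # R^m_lij = d_i Gamma^m_lj - d_j Gamma^m_li + Gamma^m_ip Gamma^p_jl - Gamma^m_jp Gamma^p_il
    val = sp.diff(Gamma(m, l, j), y[i]) - sp.diff(Gamma(m, l, i), y[j])
    val += sum(Gamma(m, i, p) * Gamma(p, j, l) - Gamma(m, j, p) * Gamma(p, i, l) for p in range(dim))
    return sp.simplify(val)

# K(e_1, e_2) = <R(e_1, e_2) e_2, e_1> / (<e_1, e_1><e_2, e_2> - <e_1, e_2>^2)
numerator = sum(g[0, m] * R(m, 1, 0, 1) for m in range(dim))
K = sp.simplify(numerator / (g[0, 0] * g[1, 1] - g[0, 1] ** 2))
print(K)                                        # prints 1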
