# MA1506 Mathematics II Tutorials

## Tutorial 1: First order differential equations

The aim of this tutorial was to introduce some elements of differential equations, such as definitions, types of equations, related physical problems, etc. The slides used in the class can be downloaded from here.

As the main objective of this tutorial is ODEs, I would like to mention some basic ideas about them. Let us take an example from the lecture notes.

Example. Solve $y'=(1+y^2)e^x$.

It is not hard to see that this ODE is of separable form and one can easily solve it using integration. After several steps, one gets that the general solution of the above ODE is

$\displaystyle y=\tan (e^x-c).$

As noticed in the lecture notes, this ODE has infinitely many solutions, as you can see from the presence of the constant $c$ in the above formula. In fact, such an ODE describes a whole family of flows, with each flow corresponding to a particular constant $c$. In addition, each particular constant $c$ can be determined using a so-called “initial value”: roughly speaking, you prescribe the value of $y$ at a particular point $x$. Since there is a unique flow passing through this “initial value”, the corresponding value of $c$ can be found.

To support my explanation, I have used Maple to plot the following picture (called a phase portrait).

If you are interested in Maple, you can read this, and the code that I have used is

with(DEtools):
NLC := diff(y(x),x) = (1+y(x)^2)*exp(x);
ivs := [y(0)=-10,y(0)=0,y(0)=10];
DEplot(NLC, y(x), x=-10..10, ivs, arrows=medium, linecolor=magenta, color=blue);

As you can see, I have used three initial values: $y(0)=-10$, $y(0)=0$, and $y(0)=10$. In the picture above, the three pink curves correspond to these three “initial values”; to be exact, they are the curves $y=\tan (e^x-c)$ with $c=1-\arctan y(0)$, that is, $c=1-\arctan(-10)$, $c=1$, and $c=1-\arctan (10)$ respectively. Besides, we also have blue arrows in the above picture. These arrows represent the other flows associated with other initial values. As such, we may think of the general solution of an ODE as representing a collection of flows.
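If you want to check this numerically yourself, here is a small Python sketch (my own check, not part of the course materials) verifying that $y=\tan (e^x-c)$ with $c=1-\arctan y_0$ satisfies both the initial condition $y(0)=y_0$ and the ODE $y'=(1+y^2)e^x$:

```python
import math

def solution(x, y0):
    # general solution y = tan(e^x - c); choose c so that y(0) = y0,
    # i.e. tan(1 - c) = y0, hence c = 1 - arctan(y0)
    c = 1 - math.atan(y0)
    return math.tan(math.exp(x) - c)

def check(y0, h=1e-6):
    # initial condition y(0) = y0
    assert abs(solution(0.0, y0) - y0) < 1e-9
    # ODE y' = (1 + y^2) e^x at x = 0, via a central finite difference
    deriv = (solution(h, y0) - solution(-h, y0)) / (2 * h)
    assert abs(deriv - (1 + y0 * y0)) < 1e-3

for y0 in (-10.0, 0.0, 10.0):
    check(y0)
print("solution checks passed")
```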

Q1. This question is standard since the equations are of separable form; to solve them, you just integrate. However, there was one thing that I left out during the class concerning part (d). In order to prove the statement that either $y \equiv 0$ or $y\ne 0$ at all points, we have the following phase portrait associated with the ODE

$(1+y)y'+(1-2x)y^2=0.$

Three pink curves correspond to the three initial values $y(0)=-100$, $y(0)=0$, and $y(0)=100$. As you can see, there is a unique curve with $y(0)=0$. In order to prove the statement, you need one extra ingredient: roughly speaking, any solution with a particular initial value is a continuous curve. This is not true in general, but in the case of our equation above it is okay.

We now prove by contradiction. Without loss of generality, we may assume that $y(x_0)=0$ but that $y$ is not identically zero to the right of $x_0$, i.e., the solution $y$ is nonzero at some points close to $x_0$. By the continuity of the solution (and the intermediate value theorem), we may choose $x_1 \in (x_0,x_0+1)$ to be the first point with $|y(x_1)|=\varepsilon \in (0,1)$; in particular, $|y(x)|<\varepsilon$ for any $x \in (x_0, x_1)$.

By going back to the ODE, there holds

$\displaystyle |y'(x)| = \left| {\frac{{2x - 1}}{{1 + y}}{y^2}} \right| \leqslant c({x_0}){\varepsilon ^2}, \quad \forall x \in ({x_0},{x_1}) \subset ({x_0},{x_0} + 1)$

where the constant $c$ depends only on $x_0$. Therefore, by the fundamental theorem of calculus, there holds

$\displaystyle \varepsilon=|y({x_1})| = \left| {\int_{{x_0}}^{{x_1}} {y'(x)dx} + \underbrace {y({x_0})}_0} \right| \leqslant c({x_0}){\varepsilon ^2}({x_1} - {x_0}) \leqslant c({x_0}){\varepsilon ^2}.$

Since $\varepsilon>0$ can be made as small as we wish by taking $x_1$ close to $x_0$, the inequality $\varepsilon \leqslant c(x_0)\varepsilon^2$, i.e., $1 \leqslant c(x_0)\varepsilon$, gives a contradiction. In other words, $y$ vanishes in a sufficiently small neighborhood of any of its zeros. By shifting the point $x_0$ and repeating the argument, $y$ vanishes at all points.

Q2. The only subtle point in this question is why we need a negative sign in the following ODE

$\displaystyle\frac{{dT}}{{dt}} = - k(T - {T_{\rm env}})$

where $k>0$ is a constant. This is because $T$ is decreasing in time and $T-T_{\rm env}>0$, as the initial temperature of the ball is greater than that of the environment. If you are still not sure, replace $-k$ by $k$; eventually you arrive at

$\displaystyle \frac{125}{225}=e^\frac{k}{2},$

which immediately implies that $k<0$.
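To make the sign argument concrete, here is a one-line Python computation (my own check, using the numbers from the displayed equation):

```python
import math

# Solving 125/225 = e^(k/2) for k, as in the displayed equation:
k = 2 * math.log(125 / 225)
print(k)
# k comes out negative, so writing dT/dt = k(T - T_env) with k > 0 is
# inconsistent; the correct model is dT/dt = -k(T - T_env) with k > 0.
assert k < 0
```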

Q3. As in the previous question, we still need a negative sign in either

$\displaystyle\frac{{dV}}{{dt}} = - bA$

or

$\displaystyle\frac{{dV}}{{dt}} = - bA^2$

since $V$ is decreasing in time and $A$ is positive.

Q4. I think the slides and the answer paper are clear enough. The only thing that you might worry about is that eventually you arrive at

$\displaystyle r = Re^\frac{\theta}{\tan \psi}$

which is not well-defined at $\psi = \frac{\pi}{2}$. This corresponds to the case, as in the picture from Wikipedia, when $V_{||}=0$. Fortunately, if this is the case, it is immediate to see that the moth simply flies around the candle in circles.

Q5. This is a standard question, so I omit it. Notice that in part (b) we can also use $v=x+y+1$, but later on the corresponding integral will cause you some trouble.

## Tutorial 2: First order differential equations (cont’)

The aim of this tutorial was to consider first order differential equations of general form. The slides used in the class can be downloaded from here.

Q1. This question is standard and I don’t want to spend much time on it. However, it seems that some of you may have trouble with the following integrals

$\displaystyle\int {{x^{ - n}}{e^x}dx} ,\quad \int {{x^m}{e^x}dx}$

where $m,n$ are positive integers. Notice that by the change of variable $x:=-x$, we can easily rewrite these integrals as

$\displaystyle\int {{x^{ - n}}{e^{-x}}dx} ,\quad \int {{x^m}{e^{-x}}dx}.$

For the sake of simplicity, we only consider the versions of these integrals with $e^x$. Regarding the first integral, we use integration by parts:

$\displaystyle\begin{gathered} \int {{x^{ - n}}{e^x}dx} = - \frac{1}{{n - 1}}\int {{e^x}d({x^{ - (n - 1)}})} \hfill \\ \qquad\qquad= - \frac{1}{{n - 1}}{x^{ - (n - 1)}}{e^x} + \frac{1}{{n - 1}}\int {{x^{ - (n - 1)}}{e^x}dx} . \hfill \\\end{gathered}$

As you can see, we have just raised $x^{-n}$ to $x^{-(n-1)}$ by combining $x^{-n}$ and $dx$. Repeating this procedure raises the exponent by one each time. A word of caution: the recursion stops at $\int x^{-1}e^xdx$, which is not an elementary function (it is the exponential integral), so for this integral the reduction is all we can do in elementary terms.

For the second integral, we use the same trick. However, since the term $x^m$ has a positive exponent, we need to lower $m$. To this end, instead of combining $x^m$ with $dx$, we combine $e^x$ with $dx$:

$\displaystyle\int {{x^m}{e^x}dx} = \int {{x^m}d({e^x})} = {x^m}{e^x} - m\int {{x^{m - 1}}{e^x}dx} .$
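Repeating this reduction down to $m=0$ ends at $\int e^x dx$, which is easily calculated. To double-check, here is a short Python sketch (my own, not part of the tutorial): unrolling the reduction gives the closed form $e^x\sum_{k=0}^{m}(-1)^k \frac{m!}{(m-k)!}x^{m-k}$, which we compare against a numerical integral.

```python
import math

def antiderivative(m, x):
    # F(x) = e^x * sum_{k=0}^{m} (-1)^k * m!/(m-k)! * x^(m-k),
    # obtained by repeatedly applying
    # integral x^m e^x dx = x^m e^x - m * integral x^(m-1) e^x dx
    s = sum((-1) ** k * math.factorial(m) // math.factorial(m - k) * x ** (m - k)
            for k in range(m + 1))
    return math.exp(x) * s

def numeric_integral(m, a, b, n=100000):
    # simple midpoint rule for the integral of x^m e^x over [a, b]
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) ** m * math.exp(a + (i + 0.5) * h)
               for i in range(n)) * h

m, a, b = 3, 0.0, 2.0
exact = antiderivative(m, b) - antiderivative(m, a)
approx = numeric_integral(m, a, b)
assert abs(exact - approx) < 1e-6
print("reduction formula verified")
```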

Q2. As I have already mentioned during the class, this question considers the integral form of ODEs, which can easily be reduced to the usual form by differentiating. Within this question, we have already touched a second order ODE, which basically needs two conditions in order to single out a particular solution. We shall consider these ODEs in the next tutorial.

Keep in mind that for a second order ODE, the general solution depends on two parameters. As such, in order to find those parameters, we need two conditions: one involving $y(x_0)$ and the other involving $\frac{dy}{dx}(x_0)$.

Q3, Q4, Q5. Please follow the slides used during the class.

If you are interested in MATLAB, you may have heard of a program called MuPAD. In fact, I have used MuPAD to plot all the diagrams shown in the slides. Here is a sample that I used to create a diagram in Q3.

f1:= plot::Function2d(2-2*exp(-1/3*x), LegendText = "2-2*exp(-1/3*x)", x = -10/10 .. 57/10, Color = RGB::Blue):
f2:= plot::Function2d(2-2*exp(1/3*x), LegendText = "2-2*exp(1/3*x)", x = -10/10 .. 57/10, Color = RGB::Red):
plot(f1, f2, LegendVisible = TRUE, Scaling = Constrained)

In this blog, we have talked about MuPAD several times; you can find those posts here.

## Tutorial 3: Second order differential equations

The aim of this tutorial was to consider second order differential equations of general form. The slides used in the class can be downloaded from here.

Q4. I think I should mention two things concerning this question: the sign of $r'$ and the way we calculate the required time $t$.

As you can see from the slides, it is necessary to find the precise sign of $r'$. Physically, due to gravity (since we have the presence of a massive body, namely the Sun), the Earth has so-called gravitational potential energy. As the Earth falls toward the Sun, this potential energy decreases. According to the conservation of energy, the lost potential energy is transformed into kinetic energy, and that kinetic energy speeds up the Earth. In other words, the speed of falling increases over time. Since the distance $r$ is measured from the Sun, the function $r$ itself is decreasing (the Earth is approaching the Sun). And since the speed of falling is increasing, the function $r'$ is also decreasing in time. As such, we have to choose the negative sign for $r'(t)$.

Concerning the formula for $t$ after solving the separable ODE, as you can see, we eventually arrive at

$\displaystyle t=- \frac{{{R^{\frac{3}{2}}}}}{{\sqrt {2GM} }}\int {\frac{{dx}}{{\sqrt {\frac{1}{x} - 1} }}}, \quad x=\frac{r}{R}.$

The point is: what does this formula mean? In fact, the above formula gives the time $t$ as a function of the distance $r$. To see this more precisely, we play another game. Suppose you have two points $A$ and $B$, and some body is moving from $A$ to $B$. We assume further that the arrival time at each point is a function of its distance to some fixed point, say $C$; that is to say, $t=t(r)$, where $r$ is measured from $C$. If

$\displaystyle t_1=t(r(A)), \quad t_2=t(r(B))$

are the moments at which the body passes $A$ and $B$ respectively, then it is clear that $t_2-t_1$ is the travel time from $A$ to $B$. In other words, that time is nothing but

$\displaystyle t(r(B))-t(r(A)).$

Let us go back to our problem. If we denote by $f$ the right hand side of the formula for $t$, i.e.,

$t=f(x)$

we then immediately see that the travel time is

$f(2/3)-f(1).$

But thanks to the fundamental theorem of calculus, there holds

$\displaystyle f(2/3)-f(1)=-\frac{{{R^{\frac{3}{2}}}}}{{\sqrt {2GM} }}\int_{1}^\frac{2}{3} {\frac{{dx}}{{\sqrt {\frac{1}{x} - 1} }}}.$

And this is what we need. Notice that we have used the following

$\displaystyle \int \frac{dx}{\sqrt{\frac{1}{x}-1}}=\int \frac{\sqrt x}{\sqrt{1-x}}dx = \arcsin \sqrt{x} - \sqrt{x(1-x)} + C.$
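Since this antiderivative is easy to get wrong, here is a quick Python check (mine, not from the tutorial) that the claimed antiderivative really differentiates back to $\sqrt{x}/\sqrt{1-x}$:

```python
import math

def F(x):
    # claimed antiderivative: arcsin(sqrt(x)) - sqrt(x(1-x))
    return math.asin(math.sqrt(x)) - math.sqrt(x * (1 - x))

def integrand(x):
    return math.sqrt(x) / math.sqrt(1 - x)

# check F'(x) == integrand(x) at a few points via a central finite difference
for x in (0.2, 0.5, 2 / 3, 0.9):
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-5
print("antiderivative verified")
```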

Q1, Q3: I think we don’t have any trouble with these two questions.

Q2: I have used a slightly different approach for finding a particular solution using the method of undetermined coefficients.

## Tutorial 4: The harmonic oscillator

The aim of this tutorial was to consider the harmonic oscillator. The slides used in the class can be downloaded from here. I want to emphasize something about the stability of equilibrium points of ODEs. First of all, any second order ODE $\ddot x = f(x)$ can be transformed into a system of first order ODEs as shown below

$\begin{gathered} {{\dot x}_1} = {x_2}, \hfill \\ {{\dot x}_2} = f({x_1}). \hfill \\ \end{gathered}$

In this way, it suffices to study the stability of equilibrium points for the following system

$\begin{gathered} {{\dot x}_1} = f_1(x_1,x_2), \hfill \\ {{\dot x}_2} = f_2(x_1, x_2), \hfill \\ \end{gathered}$

where $f_1$ and $f_2$ are given functions. Assume that $(\overline x_1, \overline x_2)$ is an equilibrium point of the above system; in other words, there holds

$f_1(\overline x_1, \overline x_2)=f_2(\overline x_1, \overline x_2)=0.$

As we have discussed in the class, in order to study the stability of equilibrium points, it is important to consider its linearized system, that is,

$\displaystyle\left( {\begin{array}{*{20}{c}} {{{\dot x}_1}} \\ {{{\dot x}_2}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} {\frac{\partial }{{\partial {x_1}}}{f_1}({{\overline x }_1},{{\overline x }_2})}&{\frac{\partial }{{\partial {x_2}}}{f_1}({{\overline x }_1},{{\overline x }_2})} \\ {\frac{\partial }{{\partial {x_1}}}{f_2}({{\overline x }_1},{{\overline x }_2})}&{\frac{\partial }{{\partial {x_2}}}{f_2}({{\overline x }_1},{{\overline x }_2})} \end{array}} \right)\left( {\begin{array}{*{20}{c}} {{x_1}} \\ {{x_2}} \end{array}} \right).$

In terms of vectors, the above system is usually written as

$\displaystyle \dot{\mathbf x}=D\mathbf f(\overline x_1, \overline x_2) \mathbf x$

where $\mathbf x = (x_1, x_2)$ and $\mathbf f = (f_1, f_2)$.

Hyperbolic equilibrium point.

An equilibrium point $(\overline x_1, \overline x_2)$ is called a hyperbolic equilibrium point of the system if none of the eigenvalues of the matrix

$\displaystyle\left( {\begin{array}{*{20}{c}} {\frac{\partial }{{\partial {x_1}}}{f_1}({{\overline x }_1},{{\overline x }_2})}&{\frac{\partial }{{\partial {x_2}}}{f_1}({{\overline x }_1},{{\overline x }_2})} \\ {\frac{\partial }{{\partial {x_1}}}{f_2}({{\overline x }_1},{{\overline x }_2})}&{\frac{\partial }{{\partial {x_2}}}{f_2}({{\overline x }_1},{{\overline x }_2})} \end{array}} \right)$

has zero real part.

Sink equilibrium point. This is the case when all of the eigenvalues of the matrix have negative real part.

Source equilibrium point. This is the case when all of the eigenvalues of the matrix have positive real part.

Saddle equilibrium point. This is the case when the matrix has at least one eigenvalue with a positive real part and at least one with a negative real part.

Example. Let us classify all of the equilibrium points of the nonlinear system

$\displaystyle\begin{gathered} {{\dot x}_1} = x_1^2 - x_2^2 - 1, \hfill \\ {{\dot x}_2} = 2{x_2}. \hfill \\ \end{gathered}$

We obviously have two equilibrium points $(1,0)^T$ and $(-1,0)^T$. In terms of matrix, we have

$\displaystyle Df(1,0) = \left( {\begin{array}{*{20}{c}} 2&0 \\ 0&2 \end{array}} \right),Df( - 1,0) = \left( {\begin{array}{*{20}{c}} { - 2}&0 \\ 0&2 \end{array}} \right).$

Thus, $(1,0)$ is a source and $(-1,0)$ is a saddle. To illustrate this, the following is the phase portrait.
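This classification can be reproduced with a short computation; the following Python sketch (my own illustration) computes the eigenvalues of a $2\times 2$ Jacobian via the quadratic formula and classifies the two equilibria above:

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    # eigenvalues are the roots of t^2 - (a+d) t + (ad - bc) = 0
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def classify(a, b, c, d):
    l1, l2 = eigenvalues_2x2(a, b, c, d)
    if l1.real > 0 and l2.real > 0:
        return "source"
    if l1.real < 0 and l2.real < 0:
        return "sink"
    if l1.real * l2.real < 0:
        return "saddle"
    return "non-hyperbolic"

# Jacobian of (x1^2 - x2^2 - 1, 2 x2) is [[2 x1, -2 x2], [0, 2]]
print(classify(2, 0, 0, 2))    # at (1, 0): prints "source"
print(classify(-2, 0, 0, 2))   # at (-1, 0): prints "saddle"
```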

Obviously, for the transformed system for $\ddot x = f(x)$, its linearized system is nothing but

$\displaystyle\left( {\begin{array}{*{20}{c}} {{{\dot x}_1}} \\ {{{\dot x}_2}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} 0&1 \\ {f'({\overline x_1})}&0 \end{array}} \right)\left( {\begin{array}{*{20}{c}} {{x_1}} \\ {{x_2}} \end{array}} \right).$

Therefore, if $f'(\overline x_1)=0$, the equilibrium point $(\overline x_1, \overline x_2)$ cannot be hyperbolic. (In fact, the eigenvalues of this matrix are $\pm\sqrt{f'(\overline x_1)}$, so the equilibrium point is hyperbolic precisely when $f'(\overline x_1)>0$.)

Non-hyperbolic equilibrium point.

The stability of nonhyperbolic equilibrium points is typically more difficult to determine. A method due to Lyapunov, which is very useful for deciding the stability of nonhyperbolic equilibrium points, is presented here.

A Lyapunov candidate function. A continuous scalar function $V:\mathbb{R}^2 \to \mathbb{R}$ is called a Lyapunov candidate function at $(\overline x_1, \overline x_2)$ if it is locally positive definite there. That is,

$V(\overline x_1, \overline x_2) = 0, \quad V(x_1, x_2)>0 \quad \forall (x_1,x_2) \ne (\overline x_1, \overline x_2).$

Once we have the function $V$ in hand, we can easily calculate the so-called time derivative of $V$, that is,

$\displaystyle\dot{V}(x_1, x_2) = \frac{\partial V}{\partial \mathbf x}\cdot \frac{d\mathbf x}{dt} = \nabla V \cdot \dot{\mathbf x}, \quad \text{where } \mathbf x = (x_1, x_2).$

For example, with $V(\mathbf x)=x_1^4+x_2^4$ one can see that $\dot V(\mathbf x)=4x_1^3\dot x_1+4x_2^3\dot x_2$.

Stable equilibrium. If the time derivative of the Lyapunov-candidate-function is locally negative semidefinite in a neighborhood $U$ of $(\overline x_1, \overline x_2)$, that is,

$\dot V(x_1, x_2) \leqslant 0 \quad \forall U \ni (x_1,x_2) \ne (\overline x_1, \overline x_2),$

then $(\overline x_1, \overline x_2)$ is proven to be a stable equilibrium point.

Locally asymptotically stable equilibrium. If the time derivative of the Lyapunov-candidate-function is locally negative definite in a neighborhood $U$ of $(\overline x_1, \overline x_2)$, that is,

$\dot V(x_1, x_2) < 0 \quad \forall U \ni (x_1,x_2) \ne (\overline x_1, \overline x_2),$

then $(\overline x_1, \overline x_2)$ is proven to be locally asymptotically stable.

Globally asymptotically stable equilibrium. If the time derivative of the Lyapunov-candidate-function is globally negative definite in a neighborhood of $(\overline x_1, \overline x_2)$, that is,

$\dot V(x_1, x_2) < 0 \quad \forall (x_1,x_2) \ne (\overline x_1, \overline x_2),$

then $(\overline x_1, \overline x_2)$ is proven to be globally asymptotically stable.

Unstable equilibrium. If the time derivative of the Lyapunov-candidate-function is locally positive definite in a neighborhood $U$ of $(\overline x_1, \overline x_2)$, that is,

$\dot V(x_1, x_2) > 0 \quad \forall U \ni (x_1,x_2) \ne (\overline x_1, \overline x_2),$

then $(\overline x_1, \overline x_2)$ is proven to be an unstable equilibrium point.

Notice that the difference between asymptotic stability and plain stability is the following: for stability, the solution merely stays in a neighborhood of the equilibrium point, while asymptotic stability additionally requires that the solution converge to the equilibrium point.

Example. Let us go back to the ODE $\ddot x = f(x)$, where the continuous function $f$ satisfies $xf(x)<0$ for any $x \ne 0$. In this case, the following Lyapunov function

$\displaystyle V(\mathbf x) = \frac{1}{2}x_2^2 - \int_0^{{x_1}} {f(s)ds}$

is well-known. For example, if $f(x)=-x^3$ then

$\displaystyle V(\mathbf x) = \frac{1}{2}x_2^2 + \frac{1}{4}x_1^4.$

Obviously, $\dot V(\mathbf x)=x_2 \dot x_2 - f(x_1)\dot x_1 = x_2 f(x_1) - f(x_1) x_2 = 0$. Thus, the origin is stable.
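To see the conservation of $V$ in action, here is a small Python simulation (my own sketch) integrating $\dot x_1 = x_2$, $\dot x_2 = -x_1^3$ with a classical RK4 scheme and checking that $V$ stays constant along the trajectory:

```python
def f(state):
    # the system x1' = x2, x2' = f(x1) with f(x) = -x^3
    x1, x2 = state
    return (x2, -x1 ** 3)

def rk4_step(state, h):
    # one classical Runge-Kutta 4 step
    def shift(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = f(state)
    k2 = f(shift(state, k1, h / 2))
    k3 = f(shift(state, k2, h / 2))
    k4 = f(shift(state, k3, h))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def V(state):
    # Lyapunov function V = x2^2 / 2 + x1^4 / 4
    x1, x2 = state
    return 0.5 * x2 ** 2 + 0.25 * x1 ** 4

state = (1.0, 0.0)
v0 = V(state)                      # V at the initial point
for _ in range(10000):             # integrate up to t = 10
    state = rk4_step(state, 1e-3)
assert abs(V(state) - v0) < 1e-8   # V is (numerically) conserved
print("V is conserved along the trajectory")
```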

Example. For the system

$\displaystyle\begin{gathered} {{\dot x}_1} = {x_2} - x_1^2 + 2, \hfill \\ {{\dot x}_2} = 2x_1^2 - 2{x_1}{x_2}. \hfill \\ \end{gathered}$

the following is the phase portrait

## Tutorial 5: The Malthusian and logistic growth models

The aim of this tutorial was to consider the Malthusian and logistic growth models. The slides used in the class can be downloaded from here.

There is one thing that I should mention here: the No-Crossing Principle. In fact, we have already used it once in Tutorial 1 when we dealt with the following ODE

$\displaystyle (1+y)y'+(1-2x)y^2=0.$

As you may see, $y \equiv 0$ is a trivial solution. Thus, by the No-Crossing Principle, any solution that is not identically zero can never touch zero; it is non-zero everywhere.

## Tutorial 6: The Malthusian and logistic growth models (cont’)

The aim of this tutorial was to continue considering some growth models, such as the harvesting model, the logistic model, etc. The slides used in the class can be downloaded from here.

In Q1, to help you understand better: when $E > 141$, the general solution of the following IVP

$\displaystyle \frac{d}{dt}P(t)=(B-sP(t))P(t)-E$

with $P(0)=200$, $B=\frac{3}{2}$ and $s=\frac{3}{752}$ is nothing but

$\displaystyle P(t)=188 - \frac{{4\sqrt {141} }}{3}\tan \left( {\frac{{\sqrt {141} \sqrt {E - 141} }}{{188}}t - {{\tan }^{ - 1}}\frac{9}{{\sqrt {141} \sqrt {E - 141} }}} \right)\sqrt {E - 141}.$

As you may see, this function decreases very fast; in fact, the population is driven to zero in finite time.
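If you want to convince yourself that this rather complicated formula is correct, here is a Python sketch (my own check; the value $E=150$ is a hypothetical choice of $E>141$) verifying the initial condition and the ODE by a finite difference:

```python
import math

B, s = 1.5, 3 / 752          # the coefficients B and s from Q1
E = 150.0                    # a hypothetical choice with E > 141

def P(t):
    # the closed-form solution displayed above
    r = math.sqrt(141) * math.sqrt(E - 141)
    return 188 - (4 * math.sqrt(141) / 3) * math.sqrt(E - 141) \
               * math.tan(r * t / 188 - math.atan(9 / r))

# initial condition P(0) = 200
assert abs(P(0.0) - 200.0) < 1e-9
# the ODE dP/dt = (B - sP)P - E, checked by a central finite difference
t, h = 0.2, 1e-6
lhs = (P(t + h) - P(t - h)) / (2 * h)
rhs = (B - s * P(t)) * P(t) - E
assert abs(lhs - rhs) < 1e-3
print("closed-form solution verified")
```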

In Q5, when we talk about the number of molecules of $H_2$ being destroyed by the reaction inside the small piece of the tube, we need to consider the volume of that piece. Although we could treat that piece as a cylinder, which gives the volume $A(x) \delta x$ since $\delta x$ is small, let us consider the general situation.

In the literature, the volume of that piece is nothing but

$\displaystyle \delta V = \int_0^{\delta x} {A(x + s)ds}.$

In other words, if you know the area of its cross-section, you simply integrate to get the volume. As such, the term $-2rA(x)\delta x$ is replaced by

$\displaystyle - 2r\int_0^{\delta x} {A(x + s)ds}$

and after dividing by $A(x)u\delta x$, we are forced to find

$\displaystyle\mathop {\lim }\limits_{\delta x \to 0} \frac{{ - 2r\int_0^{\delta x} {A(x + s)ds} }}{{A(x)u\delta x}}.$

Thanks to L’Hopital’s rule, this is easy, as shown below

$\displaystyle\mathop {\lim }\limits_{\delta x \to 0} \frac{{ - 2r\int_0^{\delta x} {A(x + s)ds} }}{{A(x)u\delta x}} = \frac{{ - 2r}}{{A(x)u}}\mathop {\lim }\limits_{\delta x \to 0} \frac{{\int_0^{\delta x} {A(x + s)ds} }}{{\delta x}} = \frac{{ - 2r}}{u}.$
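Here is a quick numerical illustration of this limit (my own sketch; the cross-section $A(x)=1+x^2$ and the values of $r$, $u$, $x$ are made-up choices):

```python
# hypothetical cross-section A(x) = 1 + x^2 and made-up constants r, u, x
r, u, x = 0.3, 2.0, 1.0

def A(t):
    return 1 + t * t

def I(dx):
    # exact value of the integral of A over [x, x + dx]
    return (x + dx) + (x + dx) ** 3 / 3 - (x + x ** 3 / 3)

def ratio(dx):
    # the quotient whose limit we take as dx -> 0
    return -2 * r * I(dx) / (A(x) * u * dx)

for dx in (1e-1, 1e-2, 1e-3):
    print(ratio(dx))               # approaches -2r/u = -0.3
assert abs(ratio(1e-6) - (-2 * r / u)) < 1e-6
```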

Another important point in this tutorial is understanding phase portraits. As I mentioned in the class when we were talking about Q5, it is important to find the equilibrium points, to identify whether they are stable or unstable, to determine the behavior of the population, etc.

## Tutorial 7: The Laplace transformation

The aim of this tutorial was to introduce basic notions about the Laplace transformation. The slides used in the class can be downloaded from here. I only want to emphasize that in Q5, since the force is exerted suddenly, we need a Dirac delta function. Therefore, the force will be proportional to $\delta (t-T)$, that is,

$\mathbf F = f(t)\delta(t-T).$

Although the coefficient function $f(t)$ is unknown, after taking the Laplace transform we only need information about $f$ at $t=T$. Indeed, by using the integral equation

$\displaystyle P=\int_0^{+\infty} f(t)\delta(t-T)dt$

one can see that $f(T)=P$. Let us consider the corresponding ODE

$\displaystyle \ddot x=-\frac{\rho A g}{M}x-\frac{f}{M}\delta(t-T)$.

By taking the Laplace transform of both sides we get that

$\displaystyle\begin{gathered} {s^2}X(s) - sX(0) - x'(0) = - \frac{{\rho Ag}}{M}X(s) - \mathcal L\left( {\frac{f}{M}\delta (t - T)} \right) \hfill \\\qquad\qquad\qquad\qquad\qquad= - \frac{{\rho Ag}}{M}X(s) - \frac{{f(T)}}{M}{e^{ - Ts}} \hfill \\ \qquad\qquad\qquad\qquad\qquad= - \frac{{\rho Ag}}{M}X(s) - \frac{P}{M}{e^{ - Ts}}. \hfill \\ \end{gathered}$

Thus, as one can see, $f(T)$ plays the central role in our argument. Since the value of $f$ at any $t\ne T$ can be arbitrary, one may simply take $f$ to be the constant function $P$.
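For concreteness, assuming zero initial conditions $x(0)=\dot x(0)=0$ and writing $\omega^2 = \rho Ag/M$, inverting the transform gives $x(t) = -\frac{P}{M\omega}\sin(\omega(t-T))u(t-T)$. The following Python sketch (my own check, with made-up numerical values) verifies that this function satisfies the ODE away from $t=T$ and that the impulse produces a velocity jump of $-P/M$ at $t=T$:

```python
import math

M, rho_A_g, P, T = 2.0, 8.0, 3.0, 1.0   # hypothetical values of M, rho*A*g, P, T
w = math.sqrt(rho_A_g / M)              # natural frequency, w^2 = rho*A*g/M

def x(t):
    # inverse Laplace transform of -(P/M) e^{-Ts} / (s^2 + w^2), zero initial data
    if t < T:
        return 0.0
    return -(P / (M * w)) * math.sin(w * (t - T))

h = 1e-5
# away from t = T the solution satisfies x'' = -(rho*A*g/M) x
t = 2.0
xpp = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
assert abs(xpp + (rho_A_g / M) * x(t)) < 1e-4
# the Dirac impulse shows up as a velocity jump of -P/M at t = T
jump = (x(T + h) - x(T)) / h - (x(T) - x(T - h)) / h
assert abs(jump - (-P / M)) < 1e-3
print("impulse response verified")
```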

In Q6, suppose that we don’t want to assume that the wave hits instantaneously: we want to model the situation by assuming that the momentum $P$ is imparted to the ship over a short but nonzero period of time $\tau$ starting at $t=T$. In this context, instead of using the Dirac delta function, the force will be proportional to $u(t-T)-u(t-T-\tau)$ which is just a cut-off function as mentioned during the class. Consequently, one can write

$\mathbf F = f(t)\big( u(t-T)-u(t-T-\tau) \big).$

Notice that although we don’t know anything about $f$, since $\tau$ is quite small we can assume that $f$ is constant on $[T, T+\tau]$, i.e., the change of $f$ is negligible within this interval of time. Therefore, by integrating

$\displaystyle P=\int_0^{+\infty} f(t)\big( u(t-T)-u(t-T-\tau) \big)dt$

one easily gets that $\displaystyle f(t)=\frac{P}{\tau}$ for all $t \in [T, T+\tau]$. In particular, our ODE is nothing but

$\displaystyle \ddot x=-\frac{\rho A g}{M}x-\frac{f}{M}\big( u(t-T)-u(t-T-\tau) \big)$.

To take the Laplace transform of both sides, we only need to consider the extra term $\frac{f}{M}\big( u(t-T)-u(t-T-\tau) \big)$. In fact, one gets by definition that

$\displaystyle\begin{gathered} \mathcal{L}\left( {\frac{{f(t)}}{M}(u(t - T) - u(t - T - \tau ))} \right) \hfill \\ \qquad= \frac{1}{M}\int_0^{ + \infty } {{e^{ - st}}f(t)(u(t - T) - u(t - T - \tau ))dt} \hfill \\ \qquad= \frac{1}{M}\int_T^{T + \tau } {{e^{ - st}}f(t)dt} \hfill \\ \qquad= \frac{{f(T)}}{M}\int_T^{T + \tau } {{e^{ - st}}dt} \hfill \\ \qquad= \frac{{f(T)}}{M}\frac{{{e^{ - Ts}} - {e^{ - (T + \tau )s}}}}{s}. \hfill \\ \end{gathered}$

Again, as you can see from the previous argument, the values of $f$ for $t<T$ and $t>T+\tau$ play no role, as you eventually integrate only over $[T,T+\tau]$. Thus, we can assume $f$ is nothing but $\frac{P}{\tau}$ for all $t$. This is the reason why I was able to write the formula for the force with $f$ replaced by $\frac{P}{\tau}$, as shown in the slides.
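To double-check the computation, the following Python sketch (mine; the values of $s$, $T$, $P$, $M$ are hypothetical) compares a numerical Laplace transform of the pulse with the closed formula, and confirms that as $\tau \to 0$ it recovers the answer $\frac{P}{M}e^{-Ts}$ of Q5:

```python
import math

s, T, P, M = 0.7, 1.0, 3.0, 2.0   # hypothetical values

def pulse_transform(tau, n=100000):
    # midpoint rule for (P/(M tau)) * integral_T^{T+tau} e^{-s t} dt
    h = tau / n
    total = sum(math.exp(-s * (T + (i + 0.5) * h)) for i in range(n)) * h
    return (P / (M * tau)) * total

def closed_form(tau):
    # the formula obtained above, (f(T)/M)(e^{-Ts} - e^{-(T+tau)s})/s,
    # with f(T) = P/tau
    return (P / (M * tau)) * (math.exp(-T * s) - math.exp(-(T + tau) * s)) / s

assert abs(pulse_transform(0.5) - closed_form(0.5)) < 1e-9
# as tau -> 0, the transform tends to (P/M) e^{-Ts}, the Dirac answer of Q5
assert abs(closed_form(1e-6) - (P / M) * math.exp(-T * s)) < 1e-6
print("pulse transform verified")
```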

## Tutorial 8: The theory of matrices

The aim of this tutorial was to introduce basic notions about linear transformations, matrices, rotation and shearing. The slides used in the class can be downloaded from here. I would like to highlight that throughout the tutorial we have touched on several matrix decompositions. Let me take this chance to introduce more decompositions; for those who are interested, please click this.

As you can see, we have used the Jordan normal decomposition in the last question. We shall come back to this in the next tutorial when we deal with the role of the eigenvalue $1$ in Markov processes.

## Tutorial 9: The theory of matrices (cont’)

The aim of this tutorial was to introduce basic notions about Markov chains, eigenvalues and eigenvectors. The slides used in the class can be downloaded from here. I just want to emphasize that if $M$ is a Markov matrix then any power $M^p$ is also a Markov matrix. This is what yields the contradiction in the last part of Q5.

Indeed, suppose $M$ is an $n \times n$ matrix given by

$\displaystyle M = \left( {\begin{array}{*{20}{c}} {{p_{11}}}&{{p_{12}}}& \cdots &{{p_{1n}}} \\ {{p_{21}}}&{{p_{22}}}& \ddots &{{p_{2n}}} \\ \vdots & \ddots & \ddots & \vdots \\ {{p_{n1}}}&{{p_{n2}}}& \cdots &{{p_{nn}}} \end{array}} \right).$

We shall prove this by induction. Assuming $M^p$ is a Markov matrix, we need to prove that $M^{p+1}$ is also a Markov matrix. To simplify our calculation, we denote

$\displaystyle M^p = \left( {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}} \\ {{a_{21}}}&{{a_{22}}}& \ddots &{{a_{2n}}} \\ \vdots & \ddots & \ddots & \vdots \\ {{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}} \end{array}} \right)$

and

$\displaystyle M^{p+1} = \left( {\begin{array}{*{20}{c}} {{b_{11}}}&{{b_{12}}}& \cdots &{{b_{1n}}} \\ {{b_{21}}}&{{b_{22}}}& \ddots &{{b_{2n}}} \\ \vdots & \ddots & \ddots & \vdots \\ {{b_{n1}}}&{{b_{n2}}}& \cdots &{{b_{nn}}} \end{array}} \right).$

In order to prove that $M^{p+1}$ is also a Markov matrix, it suffices to show that

$\displaystyle\sum\limits_{i = 1}^n {{b_{ik}}} = 1$

for each fixed $k = \overline{1,n}$. To do this, we calculate $b_{ik}$. In fact, since

$\displaystyle\left( {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}} \\ {{a_{21}}}&{{a_{22}}}& \ddots &{{a_{2n}}} \\ \vdots & \ddots & \ddots & \vdots \\ {{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}} \end{array}} \right)\left( {\begin{array}{*{20}{c}} {{p_{11}}}&{{p_{12}}}& \cdots &{{p_{1n}}} \\ {{p_{21}}}&{{p_{22}}}& \ddots &{{p_{2n}}} \\ \vdots & \ddots & \ddots & \vdots \\ {{p_{n1}}}&{{p_{n2}}}& \cdots &{{p_{nn}}} \end{array}} \right) = \left( {\begin{array}{*{20}{c}} {{b_{11}}}&{{b_{12}}}& \cdots &{{b_{1n}}} \\ {{b_{21}}}&{{b_{22}}}& \ddots &{{b_{2n}}} \\ \vdots & \ddots & \ddots & \vdots \\ {{b_{n1}}}&{{b_{n2}}}& \cdots &{{b_{nn}}} \end{array}} \right)$

we find that

$\displaystyle\sum\limits_{i = 1}^n {{b_{ik}}} = \sum\limits_{i = 1}^n {\left( {\sum\limits_{l = 1}^n {{a_{il}}{p_{lk}}} } \right)}.$

We now switch the order of the double sum appearing in the preceding identity, and we get

$\displaystyle\begin{gathered} \sum\limits_{i = 1}^n {{b_{ik}}} = \sum\limits_{i = 1}^n {\left( {\sum\limits_{l = 1}^n {{a_{il}}{p_{lk}}} } \right)} = \sum\limits_{l = 1}^n {\left( {\sum\limits_{i = 1}^n {{a_{il}}{p_{lk}}} } \right)} \hfill \\ \qquad\quad= \sum\limits_{l = 1}^n {{p_{lk}}\underbrace {\left( {\sum\limits_{i = 1}^n {{a_{il}}} } \right)}_1} = \sum\limits_{l = 1}^n {{p_{lk}}} = 1. \hfill \\ \end{gathered}$

Thus by induction, the matrix $M^{p+1}$ is Markov.
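The induction above can be tested numerically; the following Python sketch (my own, with a made-up $3\times 3$ column-stochastic matrix) checks that the column sums of several powers of $M$ are all equal to $1$:

```python
def matmul(A, B):
    # plain matrix multiplication for square matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][l] * B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

# a made-up 3x3 Markov matrix: every column sums to 1
M = [[0.5, 0.2, 0.3],
     [0.3, 0.7, 0.1],
     [0.2, 0.1, 0.6]]

Mp = M
for p in range(2, 6):              # check M^2, ..., M^5
    Mp = matmul(Mp, M)
    for k in range(3):
        col_sum = sum(Mp[i][k] for i in range(3))
        assert abs(col_sum - 1.0) < 1e-12
print("every computed power of M is again a Markov matrix")
```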

## Tutorial 10: Systems of first order ODEs

The aim of this tutorial was to introduce some notions about systems of first order ODEs. The slides used in the class can be downloaded from here. A typical example is the following

$\displaystyle\begin{gathered} \dot x(t) = ax(t) + by(t), \hfill \\ \dot y(t) = cx(t) + dy(t), \hfill \\ \end{gathered}$

for $t>0$ with some initial condition $x(0)$ and $y(0)$. In terms of matrices, we can rewrite the above system as the following

$\displaystyle\begin{pmatrix}\dot x \\ \dot y \end{pmatrix}=\begin{pmatrix} a & b \\ c & d\end{pmatrix}\begin{pmatrix} x \\ y\end{pmatrix}.$

I just realized that there is no systematic approach for solving the above system in the lecture notes. I propose here an approach for the case where the coefficient matrix has two distinct eigenvalues, say $\lambda_1$ and $\lambda_2$.

By the Jordan decomposition, if we denote by $\mathbf v_1$ and $\mathbf v_2$ two eigenvectors associated with the eigenvalues $\lambda_1$ and $\lambda_2$ respectively, we have the following

$\displaystyle\begin{pmatrix} a & b \\ c & d\end{pmatrix}=\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}.$

Using this and multiplying both sides of the system by $\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}$ on the left, we get that

$\displaystyle\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}\begin{pmatrix}\dot x \\ \dot y\end{pmatrix}=\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}\begin{pmatrix} a & b \\ c & d\end{pmatrix}\begin{pmatrix} x \\ y\end{pmatrix}=\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}\begin{pmatrix} x \\ y\end{pmatrix}.$

Therefore, if we think that

$\displaystyle \begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}\begin{pmatrix} x \\ y\end{pmatrix} = \begin{pmatrix} u \\ v\end{pmatrix}$

is a new variable, we then get the following decoupled system for $u$ and $v$:

$\displaystyle\begin{pmatrix}\dot u \\ \dot v \end{pmatrix}=\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}\begin{pmatrix} u \\ v\end{pmatrix}.$

Since the preceding system is decoupled, we can solve for $u$ and $v$ separately. To get $x$ and $y$, we reverse the change of variables:

$\displaystyle\begin{pmatrix} x \\ y\end{pmatrix} = \begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}\begin{pmatrix} u \\ v\end{pmatrix}.$

A careful analysis shows that the solution $(x,y)$ is of the following form

$\displaystyle\begin{pmatrix} x(t) \\ y(t)\end{pmatrix}=\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}\begin{pmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t}\end{pmatrix}\begin{pmatrix} \mathbf v_1 & \mathbf v_2\end{pmatrix}^{-1}\displaystyle\begin{pmatrix} x(0) \\ y(0)\end{pmatrix}.$
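To illustrate the whole procedure, the following Python sketch (my own example; the matrix $A=\begin{pmatrix}0&1\\-2&-3\end{pmatrix}$ with eigenvalues $-1,-2$ is a made-up choice) builds the solution from the eigen-decomposition and verifies that it satisfies the system:

```python
import math

# made-up system: x' = y, y' = -2x - 3y, i.e. A = [[0, 1], [-2, -3]],
# with eigenvalues -1, -2 and eigenvectors v1 = (1, -1), v2 = (1, -2)
l1, l2 = -1.0, -2.0
V = [[1.0, 1.0], [-1.0, -2.0]]          # columns are v1, v2
Vinv = [[2.0, 1.0], [-1.0, -1.0]]       # inverse of V (det V = -1)

def solve(x0, y0, t):
    # decoupled coordinates: (u0, v0) = V^{-1} (x0, y0), evolved separately
    u = (Vinv[0][0] * x0 + Vinv[0][1] * y0) * math.exp(l1 * t)
    v = (Vinv[1][0] * x0 + Vinv[1][1] * y0) * math.exp(l2 * t)
    # back to the original variables: (x, y) = V (u, v)
    return V[0][0] * u + V[0][1] * v, V[1][0] * u + V[1][1] * v

x0, y0, t, h = 1.0, 0.0, 0.7, 1e-6
x, y = solve(x0, y0, t)
dx = (solve(x0, y0, t + h)[0] - solve(x0, y0, t - h)[0]) / (2 * h)
dy = (solve(x0, y0, t + h)[1] - solve(x0, y0, t - h)[1]) / (2 * h)
assert abs(dx - y) < 1e-6 and abs(dy - (-2 * x - 3 * y)) < 1e-6
print("eigen-decoupled solution satisfies the system")
```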

In the general case, the following is the so-called fundamental fact for the initial value problem

Let $A$ be an $n \times n$ matrix. For each $\mathbf x_0 \in \mathbb R^n$, the initial value problem

$\dot{\mathbf x}=A\mathbf x, \quad \mathbf x(0)=\mathbf x_0,$

has a unique solution for all $t \in \mathbb R$, which is given by

$\mathbf x(t)=e^{A t} \mathbf x_0.$

Now we turn to the case when $A$ is a $2 \times 2$ matrix. When we bring $A$ into canonical form, we may face one of the following three cases: either

$\displaystyle A= P\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}P^{-1},$

$\displaystyle A= P\begin{pmatrix} \lambda & 1 \\ 0 & \lambda\end{pmatrix}P^{-1},$

or

$\displaystyle A= P\begin{pmatrix} a & -b \\ b & a \end{pmatrix}P^{-1}.$

Respectively, up to the change of variables $\mathbf x \mapsto P^{-1}\mathbf x$, we have one of the following: either

$\displaystyle \mathbf x(t)=\begin{pmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{pmatrix}\mathbf x_0,$

$\displaystyle \mathbf x(t)=e^{\lambda t}\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}\mathbf x_0,$

or

$\displaystyle \mathbf x(t)=e^{at} \begin{pmatrix} \cos (bt) & -\sin(bt) \\ \sin(bt) & \cos(bt) \end{pmatrix}\mathbf x_0.$

We now list the various phase portraits that result from these solutions, grouped according to their topological type, with a finer classification of sources and sinks into various types of unstable and stable nodes and foci:

Case 1. Suppose that

$\displaystyle A= P\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}P^{-1},$

with $\lambda_1<0<\lambda_2$.

In this case, we have a saddle at the origin. The only difference between $\lambda_1<0<\lambda_2$ and $\lambda_1>0>\lambda_2$ is that the arrow will be opposite.

Case 2. Suppose that either

$\displaystyle A= P\begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2\end{pmatrix}P^{-1},$

with $\lambda_1 \leqslant\lambda_2<0$ or

$\displaystyle A= P\begin{pmatrix} \lambda & 1 \\ 0 & \lambda\end{pmatrix}P^{-1},$

with $\lambda<0$.

In this case, we have a stable node at the origin.

Case 3. Suppose that

$\displaystyle A= P\begin{pmatrix} a & -b \\ b & a \end{pmatrix}P^{-1},$

with $a<0$.

In this case, we have a stable focus at the origin, i.e., a spiral sink.

Case 4. Suppose that

$\displaystyle A= P\begin{pmatrix} 0 & -b \\ b & 0 \end{pmatrix}P^{-1}.$

In this case, we only have a center at the origin.

If you have time, please read this note. Now it’s time to enjoy some music. Following is my favorite song, Les Rois Du Monde (Kings of the World); please enjoy.

## Tutorial 11: Partial Differential Equations

The aim of this tutorial was to introduce the notion of partial differential equations. The method used is separation of variables. The slides used in the class can be downloaded from here.
