© Springer Nature Switzerland AG 2018
Michael Oberguggenberger and Alexander Ostermann: Analysis for Computer Scientists, Undergraduate Topics in Computer Science. https://doi.org/10.1007/978-3-319-91155-7_20

20. Systems of Differential Equations

Michael Oberguggenberger (Corresponding author) and Alexander Ostermann
University of Innsbruck, Innsbruck, Austria

Systems of differential equations, often called differentiable dynamical systems, play a vital role in modelling time-dependent processes in mechanics, meteorology, biology, medicine, economics and other sciences. We limit ourselves to two-dimensional systems, whose solutions (trajectories) can be represented graphically as curves in the plane. The first section introduces linear systems, which, as will be shown, can be solved analytically. Many applications, however, require nonlinear systems, whose solutions in general cannot be given explicitly. There it is of primary interest to understand the qualitative behaviour of solutions. In the second section of this chapter we touch upon the rich qualitative theory of dynamical systems. The third section is devoted to analysing the mathematical pendulum in various ways. Numerical methods will be discussed in Chap. 21.

20.1 Systems of Linear Differential Equations

We start with the description of various situations which lead to systems of differential equations. In Chap. 19 Malthus’ population model was presented, where the rate of change of a population x(t) was assumed proportional to the existing population:
$$\begin{aligned} \dot{x}(t) = ax(t). \end{aligned}$$
The presence of a second population y(t) could result in a decrease or increase of the rate of change of x(t). Conversely, the population x(t) could also affect the rate of change of y(t). This results in a coupled system of equations
$$\begin{aligned} \dot{x}(t)= & {} ax(t) + by (t),\\ \dot{y} (t)= & {} cx(t) + dy(t), \end{aligned}$$
with positive or negative coefficients b and c, which describe the interaction of the populations. This is the general form of a linear system of differential equations in two unknowns, written for short as
$$\begin{aligned} \dot{x}= & {} ax + by,\\ \dot{y}= & {} cx + dy. \end{aligned}$$
Refined models are obtained if one takes into account, for instance, the dependence of the growth rate on the food supply. For one species this results in an equation of the form
$$ \dot{x} = (v-n)x, $$
where v denotes the available food supply and n a threshold value: the population increases if the available quantity of food is larger than n, and decreases otherwise. In the case of a predator–prey relationship between species x and species y, in which y is the food for x, the relative rates of change are not constant. A common assumption is that each of these rates contains a term that depends linearly on the other species. Under this assumption, one obtains the nonlinear system
$$\begin{aligned} \dot{x}= & {} (ay - n)x,\\ \dot{y}= & {} (d - cx)y. \end{aligned}$$
This is the famous predator–prey model of Lotka and Volterra (for a detailed derivation we refer to [13, Chap. 12.2]).
The general form of a system of nonlinear differential equations is
$$\begin{aligned} \dot{x}= & {} f(x,y),\\ \dot{y}= & {} g(x, y). \end{aligned}$$
Geometrically this can be interpreted in the following way. The right-hand side defines a vector field
$$ (x,y) \mapsto \begin{bmatrix} f(x,y)\\ g(x, y) \end{bmatrix} $$
on $$\mathbb R^2$$; the left-hand side is the velocity vector of a plane curve
$$ t \mapsto \begin{bmatrix} x(t)\\ y(t) \end{bmatrix}. $$
The solutions are thus plane curves whose velocity vectors are given by the vector field.

Example 20.1

(Rotation of the plane)   The vector field
$$ (x, y) \mapsto \begin{bmatrix} -y\\ x \end{bmatrix} $$
is perpendicular to the corresponding position vectors $$[x, y]^\mathsf{T}$$, see Fig. 20.1. The solutions of the system of differential equations
$$\begin{aligned} \dot{x}= & {} -y,\\ \dot{y}= & {} x \end{aligned}$$
are the circles (Fig. 20.1)
$$\begin{aligned} x(t)= & {} R \cos t,\\ y(t)= & {} R \sin t, \end{aligned}$$
where the radius R is given by the initial values, for instance, $$x(0)=R$$ and $$y(0) = 0$$.
Fig. 20.1  Vector field and solution curves
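As a quick numerical cross-check one can integrate this system with scipy and verify that the radius stays constant along the computed trajectory. The following sketch is illustrative only; the radius R = 2 and the tolerances are arbitrary choices.

import numpy as np
from scipy.integrate import solve_ivp

def rotation(t, u):
    # right-hand side of the system x' = -y, y' = x
    x, y = u
    return [-y, x]

R = 2.0                                  # radius, fixed by the initial value (R, 0)
sol = solve_ivp(rotation, (0.0, 2 * np.pi), [R, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 2 * np.pi, 5)
x, y = sol.sol(t)
print(np.sqrt(x**2 + y**2))              # stays close to R = 2 for all t
print(x[-1], y[-1])                      # after one period the solution returns to (R, 0)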

Remark 20.2

The geometrical, two-dimensional representation is made possible by the fact that the right-hand side of the system does not depend on time t explicitly. Such systems are called autonomous. A representation which includes the time axis (as in Chap. 19) would require a three-dimensional plot with a three-dimensional direction field
$$ (x,y,t) \mapsto \begin{bmatrix} f(x,y)\\ g(x, y)\\ 1 \end{bmatrix}. $$
The solutions are represented as spatial curves
$$ t \mapsto \begin{bmatrix} x(t)\\ y(t)\\ t \end{bmatrix}, $$
see the space-time diagram in Fig. 20.2.
Fig. 20.2  Direction field and space-time diagram for $$\dot{x} = -y, \dot{y} = x$$

Example 20.3

Another type of example which demonstrates the meaning of the vector field and the solution curves is obtained from the flow of ideal fluids. For example,
$$\begin{aligned} \dot{x}= & {} 1 - \frac{x^2 - y^2}{(x^2 + y^2)^2},\\ \dot{y}= & {} \frac{-2xy}{(x^2 + y^2)^2} \end{aligned}$$
Fig. 20.3  Plane potential flow around a cylinder

describes a plane, stationary potential flow around the cylinder $$x^2 + y^2 \le 1$$ (Fig. 20.3). The right-hand side describes the flow velocity at the point (x, y). The solution curves follow the streamlines
$$ y \Big (1 - \frac{1}{x^2 + y^2}\Big ) = C. $$
Here C denotes a constant. This can be checked by differentiating the above relation with respect to t and replacing $$\dot{x}$$ and $$\dot{y}$$ by the right-hand sides of the differential equations.
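The same check can be carried out symbolically, for instance with sympy. The following sketch differentiates the streamline function along the vector field; the variable names are chosen freely.

import sympy as sp

x, y = sp.symbols('x y', real=True)
r2 = x**2 + y**2

# right-hand side of the system and the streamline function
f = 1 - (x**2 - y**2) / r2**2
g = -2*x*y / r2**2
H = y * (1 - 1 / r2)

# chain rule: d/dt H(x(t), y(t)) = H_x * xdot + H_y * ydot
dHdt = sp.diff(H, x) * f + sp.diff(H, y) * g
print(sp.simplify(dHdt))                 # 0, so H is constant along the streamlines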

Experiment 20.4

Using the applet Dynamical systems in the plane, study the vector field and the solution curves of the system of differential equations from Examples 20.1 and 20.3. In a similar way, study the systems of differential equations
$$ \begin{array}{ccl}\dot{x} &{} = &{} y,\\ \dot{y} &{}=&{}-x,\end{array}\qquad \begin{array}{ccc}\dot{x} &{} = &{} y,\\ \dot{y} &{}=&{}x,\end{array}\qquad \begin{array}{ccc}\dot{x} &{} = &{} -y,\\ \dot{y} &{}=&{}-x,\end{array}\qquad \begin{array}{ccc}\dot{x} &{} = &{} x,\\ \dot{y} &{}=&{}x,\end{array}\qquad \begin{array}{ccc}\dot{x} &{} = &{} y,\\ \dot{y} &{}=&{}y\end{array} $$
and try to understand the behaviour of the solution curves.
Before turning to the solution theory of planar linear systems of differential equations, it is useful to introduce a couple of notions that serve to describe the qualitative behaviour of solution curves. The system of differential equations
$$\begin{aligned} \dot{x}(t)= & {} f\big (x(t),y(t)\big ),\\ \dot{y}(t)= & {} g\big (x(t), y(t)\big ) \end{aligned}$$
together with prescribed values at $$t = 0$$
$$ x(0) = x_0,\quad y(0) = y_0, $$
is again called an initial value problem . In this chapter we assume the functions f and g to be at least continuous. By a solution curve or a trajectory we mean a continuously differentiable curve $$t\mapsto [x(t)\ y(t)]^\mathsf{T}$$ whose components fulfil the system of differential equations.

For the case of a single differential equation the notion of an equilibrium point was introduced in Definition 19.21. For systems of differential equations one has an analogous notion.

Definition 20.5

(Equilibrium point)   A point $$(x^*, y^*)$$ is called an equilibrium point (or equilibrium) of the system of differential equations if $$f(x^*, y^*) = 0$$ and $$g(x^*, y^*) = 0$$.

The name comes from the fact that a solution with initial value $$x_0 = x^*$$, $$y_0 = y^*$$ remains at $$(x^*, y^*)$$ for all times; in other words, if $$(x^*, y^*)$$ is an equilibrium point, then $$x(t)= x^*$$, $$y(t)= y^*$$ is a solution to the system of differential equations since both the left- and right-hand sides will be zero.

From Chap. 19 we know that solutions of differential equations do not have to exist for large times. However, if solutions with initial values in a neighbourhood of an equilibrium point exist for all times then the following notions are meaningful.

Definition 20.6

Let $$(x^*, y^*)$$ be an equilibrium point. If there is a neighbourhood U of $$(x^*, y^*)$$ so that all trajectories with initial values $$(x_0,y_0)$$ in U converge to the equilibrium point $$(x^*, y^*)$$ as $$t\rightarrow \infty $$, then this equilibrium is called asymptotically stable. If for every neighbourhood V of $$(x^*, y^*)$$ there is a neighbourhood W of $$(x^*, y^*)$$ so that all trajectories with initial values $$(x_0,y_0)$$ in W stay entirely in V, then the equilibrium $$(x^*, y^*)$$ is called stable. An equilibrium point which is not stable is called unstable.

In short, stability means that trajectories that start close to the equilibrium point remain close to it; asymptotic stability means that the trajectories are attracted by the equilibrium point. In the case of an unstable equilibrium point there are trajectories that move away from it; in linear systems these trajectories are unbounded, and in the nonlinear case they can also converge to another equilibrium or a periodic solution (for instance, see the discussion of the mathematical pendulum in Sect. 20.3 or [13]).

In the following we determine the solution to the initial value problem
$$\begin{aligned} \dot{x}= & {} ax + by, \qquad x(0) = x_0,\\ \dot{y}= & {} cx + dy, \qquad y(0) = y_0. \end{aligned}$$
This is a two-dimensional system of first-order linear differential equations. For this purpose we first discuss the three basic types of such systems and then show how arbitrary systems can be transformed to a system of basic type.
We denote the coefficient matrix by
$$ \mathbf {A}= \begin{bmatrix} a&b\\ c&d \end{bmatrix}. $$
The decisive question is whether $$\mathbf {A}$$ is similar to a matrix of type I, II or III, as described in Appendix B.2. A matrix of type I has real eigenvalues and is similar to a diagonal matrix. A matrix of type II has a double real eigenvalue; its canonical form, however, contains a nilpotent part. The case of two complex conjugate eigenvalues is finally covered by type III.
Type I—real eigenvalues, diagonalisable matrix. In this case the standard form of the system is
$$\begin{aligned} \dot{x}= & {} \alpha x, \qquad x(0) = x_0,\\ \dot{y}= & {} \beta y, \qquad y(0) = y_0. \end{aligned}$$
We know from Example 19.7 that the solutions are given by
$$ x(t) = x_0 \mathrm{e}^{\alpha t}, \quad y(t) = y_0 \mathrm{e}^{\beta t} $$
and in particular exist for all times $$t\in \mathbb R$$. Obviously $$(x^{*}, y^{*}) = (0,0)$$ is an equilibrium point. If $$\alpha < 0$$ and $$\beta < 0$$, then all solution curves approach the equilibrium (0, 0) as $$t \rightarrow \infty $$; this equilibrium is asymptotically stable. If $$\alpha \ge 0$$, $$\beta \ge 0$$ (not both equal to zero), then there are solution curves that leave every neighbourhood of (0, 0), and the equilibrium is unstable. Similarly, instability is present in the case $$\alpha > 0$$, $$\beta < 0$$ (or vice versa). One calls such an equilibrium a saddle point.
If $$\alpha \ne 0$$ and $$ x_0 \ne 0$$, then one can solve for t and represent the solution curves as graphs of functions:
$$ \mathrm{e}^t = \Big (\frac{x}{x_0}\Big )^{1/\alpha }, \qquad y = y_0 \Big (\frac{x}{x_0}\Big )^{\beta /\alpha }. $$

Example 20.7

The three systems
$$ \begin{array}{ccl}\dot{x} &{} = &{} x,\\ \dot{y} &{} = &{} 2y,\end{array}\qquad \begin{array}{ccl}\dot{x} &{} = &{} -x,\\ \dot{y} &{} = &{} -2y,\end{array}\qquad \begin{array}{ccl}\dot{x} &{} = &{} x,\\ \dot{y} &{} = &{} -2y\end{array} $$
have the solutions
$$ \begin{array}{ccl} x(t) &{} = &{} x_0\mathrm{e}^t, \\ y(t) &{} = &{} y_0\mathrm{e}^{2t}, \end{array}\qquad \begin{array}{ccl} x(t) &{} = &{} x_0\mathrm{e}^{-t}, \\ y(t) &{} = &{} y_0\mathrm{e}^{-2t}, \end{array}\qquad \begin{array}{ccl} x(t) &{} = &{} x_0\mathrm{e}^{t}, \\ y(t) &{} = &{} y_0\mathrm{e}^{-2t}, \end{array} $$
respectively. The vector fields and some solutions are shown in Figs. 20.4, 20.5 and 20.6. One recognises that all coordinate half axes are solution curves.
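Pictures like Fig. 20.6 can be reproduced with a few lines of matplotlib. The following sketch treats the third system (the saddle point); the plotting window and the chosen initial values are arbitrary.

import numpy as np
import matplotlib.pyplot as plt

# vector field of the third system: x' = x, y' = -2y (saddle point at the origin)
X, Y = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
plt.quiver(X, Y, X, -2 * Y, angles='xy')

# a few solution curves x(t) = x0*exp(t), y(t) = y0*exp(-2t)
t = np.linspace(-2, 2, 400)
for x0, y0 in [(0.1, 1), (-0.1, 1), (0.1, -1), (-0.1, -1)]:
    plt.plot(x0 * np.exp(t), y0 * np.exp(-2 * t))

plt.xlim(-2, 2); plt.ylim(-2, 2)
plt.show()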
Fig. 20.4  Real eigenvalues, unstable equilibrium

Fig. 20.5  Real eigenvalues, asymptotically stable equilibrium

Fig. 20.6  Real eigenvalues, saddle point

Fig. 20.7  Double real eigenvalue, matrix not diagonalisable

Type II—double real eigenvalue, not diagonalisable. The case of a double real eigenvalue $$\alpha = \beta $$ is a special case of type I, if the coefficient matrix is diagonalisable. There is, however, the particular situation of a double eigenvalue and a nilpotent part. Then the standard form of the system is
$$ \begin{array}{ccl}\dot{x} &{} = &{} \alpha x + y,\\ \dot{y} &{} = &{} \alpha y,\end{array}\quad \begin{array}{ccl}x(0) &{} = &{} x_0,\\ y(0) &{} = &{} y_0.\end{array} $$
We compute the solution component
$$ y(t) = y_0\mathrm{e}^{\alpha t}, $$
substitute it into the first equation
$$ \dot{x}(t) = \alpha x(t) + y_0 \mathrm{e}^{\alpha t}, \quad x(0) = x_0 $$
and apply the variation of constants formula from Chap. 19:
$$ x(t) = \mathrm{e}^{\alpha t} \Big (x_0 + \int ^t_0 \mathrm{e}^{-\alpha s} y_0 \mathrm{e}^{\alpha s}\,{\mathrm{d}}s\Big ) = \mathrm{e}^{\alpha t} \big (x_0 + t y_0\big ). $$
The vector fields and some solution curves for the case $$\alpha = -1$$ are depicted in Fig. 20.7.
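The formula can be verified symbolically, for instance with sympy; a small sketch with symbolic $$\alpha $$, $$x_0$$, $$y_0$$:

import sympy as sp

t, alpha, x0, y0 = sp.symbols('t alpha x0 y0')

# claimed solution of the type II system
x = sp.exp(alpha * t) * (x0 + t * y0)
y = y0 * sp.exp(alpha * t)

print(sp.simplify(sp.diff(x, t) - (alpha * x + y)))   # 0
print(sp.simplify(sp.diff(y, t) - alpha * y))         # 0
print(x.subs(t, 0), y.subs(t, 0))                     # x0 and y0, as required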
Type III—complex conjugate eigenvalues. In this case the standard form of the system is
$$\begin{aligned} \dot{x}= & {} \alpha x - \beta y, \qquad x(0) = x_0,\\ \dot{y}= & {} \beta x + \alpha y, \qquad y(0) = y_0. \end{aligned}$$
By introducing the complex variable z, the complex coefficient $$\gamma $$ and the complex initial value $$z_0$$ as
$$ z = x + \mathrm{i}y, \quad \gamma = \alpha + \mathrm{i}\beta , \quad z_0 = x_0 + \mathrm{i}y_0, $$
we see that the above system represents the real and the imaginary parts of the equation
$$ (\dot{x} + \mathrm{i}\dot{y}) = (\alpha + \mathrm{i}\beta )(x + \mathrm{i}y), \quad x(0) + \mathrm{i}y(0) = x_0 + \mathrm{i}y_0. $$
From the complex formulation
$$ \dot{z} = \gamma z, \quad z(0) = z_0, $$
the solutions can be derived immediately:
$$ z(t) = z_0 \mathrm{e}^{\gamma t}. $$
Splitting the left- and right-hand sides into real and imaginary parts, one obtains
$$\begin{aligned} x(t) + \mathrm{i}y(t)= & {} (x_0 + \mathrm{i}y_0) \mathrm{e}^{(\alpha + \mathrm{i}\beta ) t}\\= & {} (x_0 + \mathrm{i}y_0) \mathrm{e}^{\alpha t} (\cos \beta t + \mathrm{i}\sin \beta t). \end{aligned}$$
From that we get (see Sect. 4.2)
$$\begin{aligned} x(t)= & {} x_0 \mathrm{e}^{\alpha t} \cos \beta t - y_0 \mathrm{e}^{\alpha t} \sin \beta t,\\ y(t)= & {} x_0 \mathrm{e}^{\alpha t} \sin \beta t + y_0 \mathrm{e}^{\alpha t} \cos \beta t. \end{aligned}$$
The point $$(x^{*}, y^{*}) = (0,0)$$ is again an equilibrium point. In the case $$\alpha < 0$$ it is asymptotically stable; for $$\alpha > 0$$ it is unstable; for $$\alpha = 0$$ it is stable but not asymptotically stable. Indeed the solution curves are circles and hence bounded, but are not attracted by the origin as $$t\rightarrow \infty $$.

Example 20.8

The vector fields and solution curves for the two systems
$$ \begin{array}{ccl}\dot{x} &{} = &{} \frac{1}{10}x - y, \\ \dot{y} &{} = &{} x + \frac{1}{10}y,\end{array}\qquad \begin{array}{ccl}\dot{x} &{} = &{} - \frac{1}{10}x - y, \\ \dot{y} &{} = &{} x - \frac{1}{10}y\end{array} $$
Fig. 20.8  Complex eigenvalues, unstable

Fig. 20.9  Complex eigenvalues, asymptotically stable

are given in Figs. 20.8 and 20.9. For the stable case $$\dot{x} = -y$$, $$\dot{y} = x$$ we refer to Fig. 20.1.
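As a cross-check, the explicit type III formula can be compared with a numerical integration of the first (unstable) system. This is only a sketch; the integration interval and tolerances are arbitrary.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 0.1, 1.0                   # first system of Example 20.8
x0, y0 = 1.0, 0.0

sol = solve_ivp(lambda t, u: [alpha * u[0] - beta * u[1],
                              beta * u[0] + alpha * u[1]],
                (0.0, 10.0), [x0, y0], rtol=1e-10, atol=1e-12)

t = sol.t
x_exact = np.exp(alpha * t) * (x0 * np.cos(beta * t) - y0 * np.sin(beta * t))
y_exact = np.exp(alpha * t) * (x0 * np.sin(beta * t) + y0 * np.cos(beta * t))
print(np.max(np.abs(sol.y[0] - x_exact)))   # both maximal differences are tiny,
print(np.max(np.abs(sol.y[1] - y_exact)))   # of the order of the tolerances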

General solution of a linear system of differential equations. The similarity transformation from Appendix B allows us to solve arbitrary linear systems of differential equations by reduction to the three standard cases.

Proposition 20.9

For an arbitrary $$(2\times 2)$$-matrix $$\mathbf {A}$$, the initial value problem
$$ \begin{bmatrix} \dot{x}(t)\\ \dot{y}(t) \end{bmatrix} = \mathbf {A}\begin{bmatrix} x(t)\\ y(t) \end{bmatrix}, \quad \begin{bmatrix} x(0)\\ y(0) \end{bmatrix} = \begin{bmatrix} x_0\\ y_0 \end{bmatrix} $$
has a unique solution that exists for all times $$t\in \mathbb R$$. This solution can be computed explicitly by transformation to one of the types I, II or III.

Proof

According to Appendix B.2 there is an invertible matrix $$\mathbf {T}$$ such that
$$ \mathbf {T}^{-1}\mathbf {A}\mathbf {T}= \mathbf {B}, $$
where $$\mathbf {B}$$ belongs to one of the standard types I, II, III. We set
$$ \begin{bmatrix} u\\ v \end{bmatrix} = \mathbf {T}^{-1}\begin{bmatrix} x\\ y \end{bmatrix} $$
and obtain the transformed system
$$ \begin{bmatrix} \dot{u}\\ \dot{v} \end{bmatrix} = \mathbf {T}^{-1} \begin{bmatrix} \dot{x}\\ \dot{y} \end{bmatrix} = \mathbf {T}^{-1} \mathbf {A}\begin{bmatrix} x\\ y \end{bmatrix} = \mathbf {T}^{-1} \mathbf {A}\mathbf {T}\begin{bmatrix} u\\ v \end{bmatrix} = \mathbf {B}\begin{bmatrix} u\\ v \end{bmatrix}, \quad \begin{bmatrix} u(0)\\ v(0) \end{bmatrix} = \mathbf {T}^{-1} \begin{bmatrix} x_0\\ y_0 \end{bmatrix}. $$
We solve this system of differential equations depending on its type, as explained above. Each of these systems in standard form has a unique solution which exists for all times. The reverse transformation
$$ \begin{bmatrix} x\\ y \end{bmatrix} = \mathbf {T}\begin{bmatrix} u\\ v \end{bmatrix} $$
yields the solution of the original system.    $$\square $$

Thus, modulo a linear transformation, the types I, II, III actually comprise all cases that can occur.

Example 20.10

We study the solution curves of the system
$$\begin{aligned} \dot{x}= & {} x + 2y,\\ \dot{y}= & {} 2x + y. \end{aligned}$$
The corresponding coefficient matrix
$$ \mathbf {A}= \begin{bmatrix} 1&2\\ 2&1 \end{bmatrix} $$
has the eigenvalues $$\lambda _1 = 3$$ and $$\lambda _2 = -1$$ with respective eigenvectors $$\mathbf {e}_1 = [1\ 1]^\mathsf{T}$$ and $$\mathbf {e}_2 = [-1\ 1]^\mathsf{T}$$. It is of type I, and the origin is a saddle point. The vector field and some solutions can be seen in Fig. 20.10.
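The eigenvalues, the eigenvectors and the similarity transformation used in Proposition 20.9 can be checked numerically; a short numpy sketch:

import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

lam, T = np.linalg.eig(A)                # columns of T are eigenvectors of A
print(lam)                               # eigenvalues 3 and -1 (possibly in another order)
print(T)                                 # normalised multiples of [1, 1]^T and [-1, 1]^T
print(np.linalg.inv(T) @ A @ T)          # approximately diag(3, -1), i.e. type I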
Fig. 20.10  Example 20.10

Remark 20.11

The proof of Proposition 20.9 shows the structure of the general solution of a linear system of differential equations. Assume, for example, that the roots $$\lambda _1, \lambda _2$$ of the characteristic polynomial of the coefficient matrix are real and distinct, so the system is of type I. The general solution in transformed coordinates is given by
$$ u(t) = C_1\mathrm{e}^{\lambda _1t},\quad v(t) = C_2\mathrm{e}^{\lambda _2t}. $$
If we denote the columns of the transformation matrix by $$\mathbf {t}_1, \mathbf {t}_2$$, then the solution in the original coordinates is
$$ \begin{bmatrix} x(t)\\ y(t) \end{bmatrix} = \mathbf {t}_{1}u(t)+ \mathbf {t}_{2}v(t) = \begin{bmatrix} t_{11}C_1\mathrm{e}^{\lambda _1t} + t_{12}C_2\mathrm{e}^{\lambda _2t} \\ t_{21}C_1\mathrm{e}^{\lambda _1t} + t_{22}C_2\mathrm{e}^{\lambda _2t} \end{bmatrix}. $$
Every component is a particular linear combination of the transformed solutions u(t), v(t). In the case of complex conjugate roots $$\mu \pm \mathrm{i}\nu $$ (type III) the components of the general solution are particular linear combinations of the functions $$\mathrm{e}^{\mu t}\cos \nu t$$ and $$\mathrm{e}^{\mu t}\sin \nu t$$. In the case of a double root $$\alpha $$ (type II), the components are given as linear combinations of the functions $$\mathrm{e}^{\alpha t}$$ and $$t\mathrm{e}^{\alpha t}$$.
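For instance, for the system of Example 20.10 one can take $$\mathbf {t}_1 = [1\ 1]^\mathsf{T}$$ and $$\mathbf {t}_2 = [-1\ 1]^\mathsf{T}$$, the eigenvectors computed there, and the general solution reads
$$ \begin{bmatrix} x(t)\\ y(t) \end{bmatrix} = C_1\mathrm{e}^{3t}\begin{bmatrix} 1\\ 1 \end{bmatrix} + C_2\mathrm{e}^{-t}\begin{bmatrix} -1\\ 1 \end{bmatrix} = \begin{bmatrix} C_1\mathrm{e}^{3t} - C_2\mathrm{e}^{-t}\\ C_1\mathrm{e}^{3t} + C_2\mathrm{e}^{-t} \end{bmatrix}, $$
with the constants $$C_1, C_2$$ determined by the initial values.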

20.2 Systems of Nonlinear Differential Equations

In contrast to linear systems of differential equations, the solutions to nonlinear systems can generally not be expressed by explicit formulas. Apart from numerical methods (Chap. 21) the qualitative theory is of interest. It describes the behaviour of solutions without knowing them explicitly. In this section we will demonstrate this with the help of an example from population dynamics.

The Lotka–Volterra model. In Sect. 20.1 the predator–prey model of Lotka and Volterra was introduced. In order to simplify the presentation, we set all coefficients equal to one. Thus the system becomes
$$\begin{aligned} \dot{x}= & {} x (y-1),\\ \dot{y}= & {} y (1-x). \end{aligned}$$
The equilibrium points are $$(x^{*}, y^{*}) = (1,1)$$ and $$(x^{**}, y^{**}) = (0,0)$$. Obviously, the coordinate half axes are solution curves given by
$$\begin{aligned} \begin{array}{lrlclc} x(t) &{}=&{}x_0\mathrm{e}^{-t}, \qquad &{}&{} x(t)= 0,\\ y(t) &{}=&{} 0, \qquad &{}&{} y(t)= y_0\mathrm{e}^t. \end{array} \end{aligned}$$
The equilibrium (0, 0) is thus a saddle point (unstable); we will later analyse the type of equilibrium (1, 1). In the following we will only consider the first quadrant $$x \ge 0$$, $$y \ge 0$$, which is relevant in biological models. Along the straight line $$x = 1$$ the vector field is horizontal, and along the straight line $$y = 1$$ it is vertical. It looks as if the solution curves rotate about the equilibrium point (1, 1), see Fig. 20.11.
In order to verify this conjecture we search for a function H(x, y) which is constant along the solution curves:
$$ H (x(t), y(t)) = C. $$
Such a function is called a first integral, invariant or conserved quantity of the system of differential equations. Consequently, we have
$$ \frac{\,{\mathrm{d}}}{\,{\mathrm{d}}t} H (x(t), y(t)) = 0 $$
or by the chain rule for functions in two variables (Proposition 15.16)
$$ \frac{\partial H}{\partial x} \dot{x} + \frac{\partial H}{\partial y} \dot{y} = 0. $$
With the ansatz
$$ H(x, y) = F(x) + G(y), $$
Fig. 20.11  Vector field of the Lotka–Volterra model

we should have
$$ F'(x)\dot{x} + G'(y)\dot{y} = 0. $$
Inserting the differential equations we obtain
$$ F'(x) x(y-1) + G'(y) y(1-x) = 0, $$
and a separation of the variables yields
$$ \frac{xF'(x)}{x-1} = \frac{yG'(y)}{y-1}. $$
Since the variables x and y are independent of each other, this is only possible if both sides are constant:
$$ \frac{x F'(x)}{x-1} = C, \quad \frac{y G'(y)}{y-1} = C. $$
It follows that
$$ F'(x) = C\left( 1 - \frac{1}{x}\right) , \quad G'(y) = C\left( 1 - \frac{1}{y}\right) $$
and thus
$$ H(x, y) = C(x- \log x + y - \log y) + D. $$
This function has a global minimum at $$(x^{*}, y^{*}) = (1,1)$$ (for $$C > 0$$), as can also be seen in Fig. 20.12.
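Indeed, taking C = 1 and D = 0 and inserting the differential equations into the chain rule confirms directly that H is constant along solutions:
$$ \frac{\,{\mathrm{d}}}{\,{\mathrm{d}}t} H(x,y) = \Big (1 - \frac{1}{x}\Big )\dot{x} + \Big (1 - \frac{1}{y}\Big )\dot{y} = \frac{x-1}{x}\, x(y-1) + \frac{y-1}{y}\, y(1-x) = (x-1)(y-1) - (x-1)(y-1) = 0. $$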
Fig. 20.12  First integral and level sets

The solution curves of the Lotka–Volterra system lie on the level sets
$$ x - \log x + y - \log y = \text {const}. $$
These level sets are obviously closed curves. The question arises whether the solution curves are also closed, and the solutions thus are periodic. In the following proposition we will answer this question affirmatively. Periodic, closed solution curves are called periodic orbits.

Proposition 20.12

For initial values $$x_0>0$$, $$y_0>0$$ the solution curves of the Lotka–Volterra system are periodic orbits and $$(x^{*}, y^{*}) = (1,1)$$ is a stable equilibrium point.

Outline of proof The proof of the fact that the solution
$$ t \mapsto \begin{bmatrix} x(t)\\ y(t) \end{bmatrix}, \quad \begin{bmatrix} x(0)\\ y(0) \end{bmatrix} = \begin{bmatrix} x_0\\ y_0 \end{bmatrix} $$
exists (and is unique) for all initial values $$x_0 \ge 0$$, $$y_0 \ge 0$$ and all times $$t\in \mathbb R$$ requires methods that go beyond the scope of this book. The interested reader is referred to [13, Chap. 8]. In order to prove periodicity, we take initial values $$(x_0,y_0) \ne (1,1)$$ and show that the corresponding solution curves return to the initial value after finite time $$\tau > 0$$. For that we split the first quadrant $$x>0$$, $$y>0$$ into four regions
$$ \begin{array}{ll} Q_1 : x> 1, y> 1;&{}\qquad Q_2 : x< 1, y> 1;\\ Q_3 : x< 1, y< 1;&{}\qquad Q_4 : x > 1, y < 1 \end{array} $$
and show that every solution curve moves (clockwise) through all four regions in finite time. For instance, consider the case $$(x_0,y_0)\in Q_3$$, so $$0<x_0<1$$, $$0<y_0<1$$. We want to show that the solution curve reaches the region $$Q_2$$ in finite time; i.e. y(t) assumes the value one. From the differential equations it follows that
$$ \dot{x} = x(y-1) < 0,\qquad \dot{y} = y(1-x) > 0 $$
in region $$Q_3$$ and thus
$$ x(t) < x_0,\quad y(t)> y_0, \quad \dot{y}(t) > y_0(1-x_0), $$
as long as (x(t), y(t)) stays in region $$Q_3$$. If y(t) were less than one for all times $$t > 0$$, then the following inequalities would hold:
$$ 1> y(t) = y_0 + \int _0^t\dot{y}(s)\,{\mathrm{d}}s > y_0 + \int _0^ty_0(1-x_0)\,{\mathrm{d}}s = y_0 + t y_0(1-x_0). $$
However, the latter expression diverges to infinity as $$t\rightarrow \infty $$, a contradiction. Consequently, y(t) has to reach the value 1 and thus the region $$Q_2$$ in finite time. Likewise one reasons for the other regions. Moreover, since H is a conserved quantity, the solution curve stays on the closed level curve of H through $$(x_0,y_0)$$; having passed through all four regions, it must return to its starting point. Thus there exists a time $$\tau > 0$$ such that $$(x(\tau ), y(\tau )) = (x_0,y_0)$$.
From that the periodicity of the orbit follows. Since the system of differential equations is autonomous, $$t\mapsto (x(t+\tau ), y(t+\tau ))$$ is a solution as well. As just shown, both solutions have the same initial value at $$t=0$$. The uniqueness of the solution of initial value problems implies that the two solutions are identical, so
$$ x(t) = x(t+\tau ),\quad y(t) = y(t+\tau ) $$
is fulfilled for all times $$t\in \mathbb R$$. However, this proves that the solution $$t\mapsto (x(t), y(t))$$ is periodic with period $$\tau $$.

All solution curves in the open first quadrant, with the exception of the equilibrium, are thus periodic orbits. Solution curves that start close to $$(x^{*}, y^{*}) = (1,1)$$ stay close to it, see Fig. 20.12. The point (1, 1) is thus a stable equilibrium.    $$\square $$

Figure 20.13 shows some solution curves. The populations of predator and prey thus increase and decrease periodically, out of phase with one another. For further population models we refer to [6].
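This behaviour is easy to observe numerically. The following sketch integrates the system with scipy and monitors the first integral; the initial values and the integration interval are arbitrary choices.

import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, u):
    x, y = u
    return [x * (y - 1.0), y * (1.0 - x)]

x0, y0 = 2.0, 1.0                        # an initial value in the first quadrant
sol = solve_ivp(lotka_volterra, (0.0, 20.0), [x0, y0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, 20.0, 2001)
x, y = sol.sol(t)
H = x - np.log(x) + y - np.log(y)        # first integral
print(H.max() - H.min())                 # close to zero: H is conserved
print(x.min(), x.max(), y.min(), y.max())  # both populations oscillate in a bounded range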
Fig. 20.13  Solution curves of the Lotka–Volterra model

20.3 The Pendulum Equation

As a second example of a nonlinear system we consider the mathematical pendulum . It models an object of mass m that is attached to the origin with a (massless) cord of length l and moves under the gravitational force $$-m\mathrm{g}$$, see Fig. 20.14. The variable x(t) denotes the angle of deflection from the vertical direction, measured in counterclockwise direction. The tangential acceleration of the object is equal to $$l\ddot{x}(t)$$, and the tangential component of the gravitational force is $$-m\mathrm{g}\sin x(t)$$. According to Newton’s law, force = mass $$\times $$ acceleration, we have
$$ -m\mathrm{g}\sin x = ml\ddot{x} $$
or
$$ ml\ddot{x} + m\mathrm{g}\sin x = 0. $$
This is a second-order nonlinear differential equation. We will later reduce it to a first-order system, but for a start, we wish to derive a conserved quantity.
Fig. 20.14  Derivation of the pendulum equation

Conservation of energy. Multiplying the pendulum equation by $$l\dot{x}$$ gives
$$ ml^2\dot{x}\ddot{x} + m\mathrm{g}l \dot{x} \sin x = 0. $$
We identify $$\dot{x}\ddot{x}$$ as the derivative of $$\frac{1}{2}\dot{x}^2$$ and $$\dot{x} \sin x$$ as the derivative of $$1- \cos x$$ and arrive at a conserved quantity, which we denote by $$H(x,\dot{x})$$:
$$ \frac{\,{\mathrm{d}}}{\,{\mathrm{d}}t}H(x,\dot{x}) = \frac{\,{\mathrm{d}}}{\,{\mathrm{d}}t}\left( \frac{1}{2} m l^2 \dot{x}^2 + mgl\big (1-\cos x\big )\right) = 0; $$
that is, $$H(x(t),\dot{x}(t))$$ is constant when x(t) is a solution of the pendulum equation.
Recall from mechanics that the kinetic energy of the moving mass is given by
$$ T(\dot{x}) = \frac{1}{2} m l^2 \dot{x}^2. $$
The potential energy is defined as the work required to move the mass from its height $$-l$$ at rest to position $$-l\cos x$$, that is
$$ U(x) = \int _{-l}^{-l\cos x} mg\,\,{\mathrm{d}}\xi = mgl\big (1-\cos x\big ). $$
Thus the conserved quantity is identified as the total energy
$$ H(x,\dot{x}) = T(\dot{x}) + U(x), $$
in accordance with the well-known mechanical principle of conservation of total energy.
Note that the linearisation
$$ \sin x = x + {{\mathcal O}}(x^3) \approx x $$
for small angles x leads to the approximation
$$ ml\ddot{x} + m\mathrm{g}x = 0. $$
For convenience, we will cancel m in the equation and set $$\mathrm{g}/l =1$$. Then the pendulum equation reads
$$ \ddot{x} = -\sin x, $$
with the conserved quantity
$$ H (x,\dot{x}) = \frac{1}{2} \dot{x}^2 + 1- \cos x, $$
while the linearised pendulum equation reads
$$ \ddot{x} = - x. $$
Reduction to a first-order system. Every explicit second-order differential equation $$\ddot{x} = f(x,\dot{x})$$ can be reduced to a first-order system by introducing the new variable $$y = \dot{x}$$, resulting in the system
$$ \begin{array}{ccl} \dot{x} &{} = &{} y, \\ \dot{y} &{} = &{} f(x, y). \end{array} $$
Applying this procedure to the pendulum equation and adjoining initial data leads to the system
$$ \begin{array}{cclcccl} \dot{x} &{} = &{} y, &{}\qquad \quad &{}x(0) &{}=&{} x_0,\\ \dot{y} &{} = &{} - \sin x, &{}\qquad \quad &{} y(0) &{}= &{}y_0 \end{array} $$
for the mathematical pendulum. Here x denotes the angle of deflection and y the angular velocity of the object.
The linearised pendulum equation can be written as the system
$$ \begin{array}{cclcccl} \dot{x} &{} = &{} y, &{}\quad \qquad &{}x(0) &{}=&{} x_0,\\ \dot{y} &{} = &{} - x, &{}\quad \qquad &{} y(0) &{}= &{}y_0. \end{array} $$
Apart from the change in sign this system of differential equations coincides with that of Example 20.1. It is a system of type III; hence its solution is given by
$$\begin{aligned} x(t)= & {} x_0 \cos t + y_0 \sin t,\\ y(t)= & {} - x_0 \sin t + y_0 \cos t. \end{aligned}$$
The first line exhibits the solution to the second-order linearised equation $$\ddot{x} = -x$$ with initial data $$x(0) = x_0$$, $$\dot{x}(0) = y_0$$. The same result can be obtained directly by the methods of Sect. 19.6.
Solution trajectories of the nonlinear pendulum. In the coordinates (x, y), the total energy reads
$$ H(x, y) = \frac{1}{2} {y^2} + 1 - \cos x. $$
As was shown above, it is a conserved quantity; hence solution curves for prescribed initial values $$(x_0,y_0)$$ lie on the level sets $$H(x, y)=C$$; i.e.
$$\begin{aligned} \frac{1}{2}{y^2} + 1 - \cos x = \frac{1}{2} {y_0^2} + 1 - \cos x_0,\\ y = \pm \sqrt{y^2_0 - 2 \cos x_0 + 2 \cos x}. \end{aligned}$$
Figure 20.15 shows some solution curves. There are unstable equilibria at $$y = 0$$, $$x = \ldots , -3\pi , -\pi , \pi , 3\pi , \ldots $$ which are connected by limit curves. One of the two limit curves passes through $$x_0 = 0$$, $$y_0 = 2$$. The solution with these initial values lies on the limit curve and approaches the equilibrium $$(\pi , 0)$$ as $$t\rightarrow \infty $$, and $$(-\pi , 0)$$ as $$t\rightarrow -\infty $$. Initial values that lie between these limit curves (for instance the values $$x_0 = 0$$, $$|y_0| < 2$$) give rise to periodic solutions of small amplitude (less than $$\pi $$). The solutions outside represent large oscillations where the pendulum loops. We remark that effects of friction are not taken into account in this model.
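The distinction between small oscillations and looping can be reproduced numerically. The following sketch integrates the pendulum system for initial values just inside and just outside the limit curve; the integration interval is an arbitrary choice.

import numpy as np
from scipy.integrate import solve_ivp

def pendulum(t, u):
    x, y = u
    return [y, -np.sin(x)]

for y0 in (1.9, 2.1):                    # just inside / outside the limit curve y0 = 2
    sol = solve_ivp(pendulum, (0.0, 50.0), [0.0, y0],
                    rtol=1e-10, atol=1e-12)
    x = sol.y[0]
    # |x| stays below pi for y0 = 1.9; for y0 = 2.1 the angle keeps growing (looping)
    print(y0, x.min(), x.max())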
Fig. 20.15  Solution curves, mathematical pendulum

Power series solutions. The method of power series for solving differential equations was introduced in Chap. 19. We have seen that the linearised pendulum equation $$\ddot{x} = -x$$ can be solved explicitly by the methods of Sects. 19.6 and 20.1. The nonlinear pendulum equation can also be solved explicitly with the aid of certain higher transcendental functions, the Jacobian elliptic functions. Nevertheless, it is of interest to analyse the solutions of these equations by means of power series, especially since such series can be readily obtained in maple.

Example 20.13

(Power series for the linearised pendulum)   As an example, we solve the initial value problem
$$ \ddot{x} = -x,\qquad x(0) = a,\quad \dot{x}(0) = 0 $$
by means of the power series ansatz
$$ x(t) = \sum _{n=0}^\infty c_nt^n = c_0 + c_1t + c_2t^2 + c_3t^3 + c_4t^4 + \cdots $$
We have
$$\begin{aligned} \dot{x}(t)= & {} \sum _{n=1}^\infty nc_nt^{n-1} = c_1 + 2c_2t + 3c_3t^2 + 4c_4t^3 + \cdots \\ \ddot{x}(t)= & {} \sum _{n=2}^\infty n(n-1)c_nt^{n-2} = 2c_2 + 6c_3t+ 12c_4t^2 + \cdots \end{aligned}$$
We know that $$c_0 = a$$ and $$c_1 = 0$$. Equating $$\ddot{x}(t)$$ with $$-x(t)$$ gives, up to second degree,
$$ 2c_2 + 6c_3t+ 12c_4t^2 + \cdots = -a - c_2t^2 - \cdots $$
thus
$$ c_2 = -\frac{a}{2},\quad c_3 = 0,\quad c_4 = -\frac{c_2}{12} = \frac{a}{24},\ \ldots $$
The power series expansion starts with
$$ x(t) = a\Big ( 1 - \frac{1}{2} t^2 + \frac{1}{24}t^4 \mp \ldots \Big ) $$
and seemingly coincides with the Taylor series of the known solution $$x(t) = a\cos t$$.

Example 20.14

(Power series for the nonlinear pendulum)   We turn to the initial value problem for the nonlinear pendulum equation
$$ \ddot{x} = -\sin x,\qquad x(0) = a,\quad \dot{x}(0) = 0, $$
making the same power series ansatz as in Example 20.13. Developing the sine function into its Taylor series, inserting the lowest order terms of the power series of x(t) and noting that $$c_0 = a$$, $$c_1 = 0$$ yields
$$\begin{aligned}&-\sin x(t) = -\Big (x(t) -\frac{1}{3!}x(t)^3 + \frac{1}{5!}x(t)^5 + \dots \Big )\\&\qquad \,\, = -\big (a + c_2t^2 + \cdots \big ) + \frac{1}{3!}\big (a + c_2 t^2 + \ldots \big )^3 - \frac{1}{5!}\big (a + c_2 t^2 + \cdots \big )^5\\&\qquad \,\,= -\big (a + c_2t^2 + \cdots \big ) + \frac{1}{6}\big (a^3 + 3a^2c_2 t^2 + \cdots \big )\\&\qquad \quad - \frac{1}{120}\big (a^5 + 5a^4c_2 t^2 + \cdots \big ), \end{aligned}$$
where we have used the binomial formulas. Equating the last line with
$$ \ddot{x}(t) = 2c_2 + 6c_3 t + 12c_4t^2 + \cdots $$
shows that
$$\begin{aligned} 2c_2= & {} -a + \frac{1}{6} a^3 - \frac{1}{120}a^5 \pm \ldots \\ 6c_3= & {} 0\\ 12c_4= & {} c_2\Big (-1 + \frac{3}{6}a^2 - \frac{5}{120}a^4 \pm \ldots \Big ) \end{aligned}$$
which suggests that
$$ c_2 = - \frac{1}{2}\sin a,\quad c_4 = \frac{1}{24}\sin a \cos a. $$
Collecting terms and factoring a out finally results in the expansion
$$ x(t) = a\left( 1 - \frac{1}{2}\frac{\sin a}{a}\, t^2 + \frac{1}{24}\frac{\sin a\cos a}{a}\, t^4 \pm \ldots \right) . $$
The expansion can be checked by means of maple, for instance by computing a series solution of the initial value problem with dsolve.
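Alternatively, the coefficients can be verified with sympy, using the same power series ansatz as above (a sketch; the truncation order is chosen just large enough to determine $$c_2$$ and $$c_4$$):

import sympy as sp

t, a, c2, c4 = sp.symbols('t a c2 c4')

# ansatz with x(0) = a, x'(0) = 0; higher-order terms are not needed here
x = a + c2 * t**2 + c4 * t**4

# expand the residual of x'' + sin(x) in powers of t and make it vanish
residual = sp.expand(sp.series(sp.diff(x, t, 2) + sp.sin(x), t, 0, 4).removeO())
sol = sp.solve([residual.coeff(t, 0), residual.coeff(t, 2)], [c2, c4])
print(sp.simplify(sol[c2]))              # -sin(a)/2
print(sp.simplify(sol[c4]))              # sin(a)*cos(a)/24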
If the initial deflection $$x_0 = a$$ is sufficiently small, then
$$ \frac{\sin a}{a} \approx 1,\qquad \cos a \approx 1, $$
see Proposition 6.10, and so the solution x(t) is close to the solution $$a\cos t$$ of the linearised pendulum equation, as expected.

20.4 Exercises

1.
The space-time diagram of a two-dimensional system of differential equations (Remark 20.2) can be obtained by introducing time as a third variable $$z(t) = t$$ and passing to the three-dimensional system
$$ \begin{bmatrix} \dot{x}\\ \dot{y}\\ \dot{z} \end{bmatrix} = \begin{bmatrix} f(x,y)\\ g(x, y)\\ 1 \end{bmatrix}. $$
Use this observation to visualise the systems from Examples 20.1 and 20.3. Study the time-dependent solution curves with the applet Dynamical systems in space.
2.
Compute the general solutions of the following three systems of differential equations by transformation to standard form:
$$ \begin{array}{ccl}\dot{x} &{}= &{} \frac{3}{5}x - \frac{4}{5}y, \\ \dot{y} &{}=&{} -\frac{4}{5}x - \frac{3}{5}y,\end{array}\qquad \begin{array}{ccl}\dot{x} &{}=&{} -3y,\\ \dot{y} &{}=&{} x,\end{array} \qquad \begin{array}{ccl}\dot{x} &{}=&{} \frac{7}{4}x - \frac{5}{4}y, \\ \dot{y} &{}=&{} \frac{5}{4}x + \frac{1}{4}y.\end{array} $$
Visualise the solution curves with the applet Dynamical systems in the plane.
3.

Small, undamped oscillations of an object of mass m attached to a spring are described by the differential equation $$m\ddot{x} + kx = 0$$. Here, $$x = x(t)$$ denotes the displacement from the position of rest and k is the spring stiffness. Introduce the variable $$y = \dot{x}$$ and rewrite the second-order differential equation as a linear system of differential equations. Find the general solution.

4.
A company deposits its profits in an account with continuous interest rate $$a\%$$. The balance is denoted by x(t). Simultaneously the amount y(t) is withdrawn continuously from the account, where the rate of withdrawal is equal to $$b\%$$ of the account balance. With $$r = a/100$$, $$s = b/100$$ this leads to the linear system of differential equations
$$ \begin{array}{rcl} \dot{x}(t) &{}= &{}r(x(t) - y(t)),\\ \dot{y}(t) &{}= &{} sx(t). \end{array} $$
Find the solution (x(t), y(t)) for the initial values $$x(0) = 1$$, $$y(0) = 0$$ and analyse how big s can be in comparison with r so that the account balance x(t) is increasing for all times without oscillations.
5.
A national economy has two sectors (for instance industry and agriculture) with the production volumes $$x_1(t)$$, $$x_2(t)$$ at time t. If one assumes that the investments are proportional to the respective growth rates, then the classical model of Leontief [24, Chap. 9.5] states
$$ \begin{array}{l} x_1(t) = a_{11}x_1(t) + a_{12}x_2(t) + b_1\dot{x}_1(t) + c_1(t),\\ x_2(t) = a_{21}x_1(t) + a_{22}x_2(t) + b_2\dot{x}_2(t) + c_2(t). \end{array} $$
Here $$a_{ij}$$ denotes the required amount of goods from sector i to produce one unit of goods in sector j. Further $$b_i\dot{x}_i(t)$$ are the investments, and $$c_i(t)$$ is the consumption in sector i. Under the simplifying assumptions $$a_{11} = a_{22} = 0$$, $$a_{12} = a_{21} = a$$ $$(0< a < 1)$$, $$b_1 = b_2 = 1$$, $$c_1(t) = c_2(t) = 0$$ (no consumption) one obtains the system of differential equations
$$ \begin{array}{rcl} \dot{x}_1(t) &{}= &{}x_1(t) - ax_2(t),\\ \dot{x}_2(t) &{}= &{} -ax_1(t) + x_2(t). \end{array} $$
Find the general solution and discuss the result.
6.

Use the applet Dynamical systems in the plane to analyse the solution curves of the differential equations of the mathematical pendulum and translate the mathematical results to statements about the mechanical behaviour.

7.

Derive the conserved quantity $$H(x, y) = \frac{1}{2} {y^2} + 1 - \cos x$$ of the pendulum equation by means of the ansatz $$H(x, y) = F(x) + G(y)$$ as for the Lotka–Volterra system.

8.
Using maple, find the power series solution to the nonlinear pendulum equation $$\ddot{x} = - \sin x$$ with initial data
$$ x(0) = a,\quad \dot{x}(0) = 0\qquad \text{ and }\qquad x(0) = 0,\quad \dot{x}(0) = b. $$
Check by how much its coefficients differ from those of the power series solution of the corresponding linearised pendulum equation $$\ddot{x} = - x$$ for various values of a, b between 0 and 1.
9.
The differential equation $$m\ddot{x}(t) + kx(t) + 2cx^3(t) = 0$$ describes a nonlinear mass–spring system, where x(t) is the displacement of the mass m, k is the stiffness of the spring and the term $$cx^3$$ models nonlinear effects ($$c>0$$: hardening, $$c<0$$: softening).
(a)
Show that
$$ H(x,\dot{x}) = \frac{1}{2}\left( m\dot{x}^2 + kx^2 + cx^4\right) $$
is a conserved quantity.
(b)

Assume that $$m = 1$$, $$k = 1$$ and $$x(0) = 0$$, $$\dot{x}(0) = 1$$. Reduce the second-order equation to a first-order system. Making use of the conserved quantity, plot the solution curves for the values of $$c=0$$, $$c = -0.2$$, $$c=0.2$$ and $$c = 5$$.

Hint. A typical maple command is

with(plots, implicitplot); c:=5;


10.

Using maple, find the power series solution to the nonlinear differential equation $$\ddot{x}(t) + x(t) + 2cx^3(t) = 0$$ with initial data $$x(0) = a$$, $$\dot{x}(0) = b$$. Compare it to the solution with $$c=0$$.