Lecture 3 - Hyperbolic Equilibria and Periodic Orbits

Published

January 15, 2026

Based on notes created by Sam Coogan and Murat Arcak. Licensed under a “Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License”

Review

Last class, we discussed phase portraits of planar linear systems. Let’s consider an example.

Example

Let’s consider our pendulum linearized at the upright angle \dot x = \begin{bmatrix} 0 & 1 \\ \frac{g}{l} & -\frac{k}{m} \end{bmatrix} x

Let’s specifically take g = 9.8, k = 0, m = 1, and l = 1. The eigenvalues of the system are then \lambda_1 \approx 3.13 and \lambda_2 \approx -3.13, so from Lecture 2 we know the equilibrium is a saddle point. We can further illustrate this with the phase portrait below.
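For reference, these eigenvalues follow directly from the characteristic polynomial of the linearization with the chosen parameters:

\det(\lambda I - A) = \lambda^2 - \frac{g}{l} = 0 \quad \Longrightarrow \quad \lambda_{1,2} = \pm\sqrt{\frac{g}{l}} = \pm\sqrt{9.8} \approx \pm 3.13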

Show code
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[0, 1], [9.8, 0]])

x1 = np.linspace(-3, 3, 20)
x2 = np.linspace(-9.8, 9.8, 20)
X1, X2 = np.meshgrid(x1, x2)

# Compute derivatives
U = A[0,0]*X1 + A[0,1]*X2
V = A[1,0]*X1 + A[1,1]*X2

# Streamplot
fig, ax = plt.subplots(figsize=(6, 4))

ax.streamplot(X1, X2, U, V, color='blue', density=1.2, arrowsize=1)
ax.set_title("Original Coordinates")
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.grid(True)
ax.axhline(0, color='black', linewidth=0.8)
ax.axvline(0, color='black', linewidth=0.8)
# ax.set_aspect('equal')
ax.set_xlim([-3, 3])
ax.set_ylim([-9.8, 9.8])

plt.tight_layout()
plt.show()

Original Coordinates Phase Portrait

We can transform this into Jordan form. Since the eigenvalues are real and distinct, the Jordan form matrix is diagonal:

J = \begin{bmatrix} 3.13 & 0 \\ 0 & -3.13 \end{bmatrix}

with corresponding eigenvectors1: v_1 = \begin{bmatrix} 1 \\ 3.13 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 1 \\ -3.13 \end{bmatrix}

1 Each eigenvector satisfies A v_i = \lambda_i v_i. For A = \begin{bmatrix} 0 & 1 \\ 9.8 & 0 \end{bmatrix}, the first row forces the second component of v_i to equal \lambda_i times the first, so taking the first component to be 1 yields v_i = (1, \lambda_i).

Thus the transformation into Jordan form is given by J = P^{-1}AP with the matrix: P = \begin{bmatrix} v_1 & v_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 3.13 & -3.13 \end{bmatrix}

We can recalculate J as a sanity check (see the sketch below). Finally, we transform coordinates via z = P^{-1} x, which gives the decoupled dynamics \dot{z}_1 = \lambda_1 z_1, \quad \dot{z}_2 = \lambda_2 z_2
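Here is a minimal sketch of that sanity check; it assumes the matrix P above and uses the exact eigenvalue \sqrt{9.8} \approx 3.13 rather than the rounded value.

Show code
import numpy as np

A = np.array([[0, 1], [9.8, 0]])
lam = np.sqrt(9.8)  # exact eigenvalue, approximately 3.13

# Eigenvector matrix P = [v1 v2] with v_i = (1, lambda_i)
P = np.array([[1.0, 1.0], [lam, -lam]])

# P^{-1} A P should recover the diagonal Jordan form J = diag(3.13, -3.13)
J = np.linalg.inv(P) @ A @ P
print(np.round(J, 3))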

This transformed system yields the phase portrait shown below.

Show code
import numpy as np
import matplotlib.pyplot as plt

# Jordan eigenvalues
lambda1 = 3.13
lambda2 = -3.13

# Regular grid in z-coordinates
z1 = np.linspace(-3, 3, 20)
z2 = np.linspace(-3, 3, 20)
Z1, Z2 = np.meshgrid(z1, z2)

# Derivatives in z-coordinates
U = lambda1 * Z1
V = lambda2 * Z2

# Streamplot
fig, ax = plt.subplots(figsize=(6, 4))
ax.streamplot(Z1, Z2, U, V, color='blue', density=1.2, arrowsize=1)
ax.set_title("Jordan Form Coordinates")
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.grid(True)
ax.axhline(0, color='black', linewidth=0.8)
ax.axvline(0, color='black', linewidth=0.8)
ax.set_xlim([-3, 3])
ax.set_ylim([-3, 3])
plt.tight_layout()
plt.show()

Jordan Form Coordinates Phase Portrait

Phase Portraits of Nonlinear Systems Near Hyperbolic Equilibria

Definition: Hyperbolic Equilibrium
An equilibrium is hyperbolic if its linearization has no eigenvalues on the imaginary axis.

Phase portraits of nonlinear systems near hyperbolic equilibria are qualitatively similar to the phase portraits of their linearization. According to the Hartman-Grobman Theorem (defined below) a continuous deformation maps one phase portrait to the other2.

2 A common example of a continuous deformation is the Mercator projection

Continuous Deformation h(x)

Theorem: Hartman-Grobman Theorem. If x^* is a hyperbolic equilibrium of \dot{x} = f(x), x\in\R^n, then there exists a homeomorphism3 z = h(x) defined in a neighborhood of x^* that maps trajectories of \dot{x} = f(x) to those of \dot{z} = Az where A \triangleq \frac{\partial f}{\partial x}|_{x=x^*}

3 a continuous map with a continuous inverse. This can be conceptualized as a continuous map without tearing or folding
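For a concrete illustration, consider the upright pendulum from the Review with g = 9.8, l = 1, k = 0. Measuring the angle x_1 from the upright position, the nonlinear dynamics are \dot{x}_1 = x_2, \dot{x}_2 = 9.8 \sin x_1, whose linearization at the origin is exactly the saddle above. A minimal sketch comparing the two phase portraits near the hyperbolic equilibrium:

Show code
import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(-1.5, 1.5, 30)
x2 = np.linspace(-5, 5, 30)
X1, X2 = np.meshgrid(x1, x2)

# Nonlinear pendulum about the upright position (angle measured from upright)
U_nl, V_nl = X2, 9.8 * np.sin(X1)
# Linearization at the origin
U_lin, V_lin = X2, 9.8 * X1

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].streamplot(X1, X2, U_nl, V_nl, color='blue', density=1.2, arrowsize=1)
axes[0].set_title("Nonlinear pendulum")
axes[1].streamplot(X1, X2, U_lin, V_lin, color='orange', density=1.2, arrowsize=1)
axes[1].set_title("Linearization")
for ax in axes:
    ax.set_xlabel('$x_1$')
    ax.set_ylabel('$x_2$')
    ax.grid(True)
plt.tight_layout()
plt.show()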

Important: the hyperbolicity condition can’t be removed:

Example

Consider the system

\begin{align*} \dot{x}_1 &= -x_2 + a x_1(x_1^2 + x_2^2) \\ \dot{x}_2 &= x_1 + a x_2 (x_1^2 + x_2^2) \end{align*}

This system can also be written in polar form as \begin{align*} \dot{r} &= ar^3 \\ \dot{\theta} &= 1 \end{align*}
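For completeness, the radial equation follows from differentiating r^2 = x_1^2 + x_2^2 along trajectories:

\begin{align*} r\dot{r} &= x_1\dot{x}_1 + x_2\dot{x}_2 = x_1\left(-x_2 + a x_1 r^2\right) + x_2\left(x_1 + a x_2 r^2\right) = a r^4, \end{align*}

so \dot{r} = a r^3; a similar computation with \theta = \arctan(x_2/x_1) gives \dot{\theta} = 1.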

First, we can observe that the equilibrium point is x^* = (0,0). Linearizing the system about this equilibrium yields \dot{x} = Ax with

A = \frac{\partial f}{\partial x}\Big|_{x = x^*} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}

However, this linearized system suggests that the equilibrium is a center (the eigenvalues \pm j are purely imaginary, so the equilibrium is not hyperbolic). From the polar form, the origin of the nonlinear system is actually an unstable focus for a > 0 (and a stable focus for a < 0), and there is no continuous deformation that maps a center onto a focus.

Show code
import numpy as np
import matplotlib.pyplot as plt

# Parameters
a = 0.2  # Try different values for a

# Grid
x1 = np.linspace(-2, 2, 20)
x2 = np.linspace(-2, 2, 20)
X1, X2 = np.meshgrid(x1, x2)

# Linearized system derivatives
U_lin = -X2
V_lin = X1

# Nonlinear system derivatives
r2 = X1**2 + X2**2
U_nonlin = -X2 + a * X1 * r2
V_nonlin = X1 + a * X2 * r2

# Plot side-by-side
fig, axes = plt.subplots(1, 2, figsize=(6, 3))

# Linearized system
axes[0].streamplot(X1, X2, U_lin, V_lin, color='blue', density=1.2, arrowsize=1)
axes[0].set_title("Linearized System")
axes[0].set_xlabel('$x_1$')
axes[0].set_ylabel('$x_2$')
axes[0].axhline(0, color='black', linewidth=0.8)
axes[0].axvline(0, color='black', linewidth=0.8)
axes[0].set_xlim([-2, 2])
axes[0].set_ylim([-2, 2])
axes[0].grid(True)

# Nonlinear system
axes[1].streamplot(X1, X2, U_nonlin, V_nonlin, color='orange', density=1.2, arrowsize=1)
axes[1].set_title("Nonlinear System")
axes[1].set_xlabel('$x_1$')
axes[1].set_ylabel('$x_2$')
axes[1].axhline(0, color='black', linewidth=0.8)
axes[1].axvline(0, color='black', linewidth=0.8)
axes[1].set_xlim([-2, 2])
axes[1].set_ylim([-2, 2])
axes[1].grid(True)

plt.tight_layout()
plt.show()

Example demonstrating the necessity of the hyperbolicity condition

Periodic Orbits in the Plane

Theorem: Bendixson’s Theorem. For a time-invariant planar system

\dot{x}_1 = f_1(x_1, x_2), \quad \dot{x}_2 = f_2(x_1, x_2)

if the divergence4 \nabla \cdot f(x) = \frac{\partial f_1}{\partial x_1} + \frac{\partial f_2}{\partial x_2} is not identically zero and does not change sign in a simply connected region D, then there are no periodic orbits lying entirely in D.

4 \begin{align*} \text{div}(f(x)) &= \nabla \cdot f(x) \\ &= \left( \frac{\partial}{\partial x_1}, \cdots, \frac{\partial}{\partial x_n} \right) \cdot (f_1, \cdots, f_n) \\ &= \sum_{i=1}^n \frac{\partial f_i}{\partial x_i}.\end{align*} Note: the form \nabla \cdot f(x) is sometimes considered an abuse of notation since we should not apply an operator (\nabla) through multiplication.

5 Illustration of divergence and region S

Proof5: By contradiction. Suppose a periodic orbit J lies in D. Let S denote the region enclosed by J and n(x) the normal vector to J at x. Then f(x) \cdot n(x) = 0 for all x \in J. By the Divergence Theorem:

\underbrace{\int_J f(x)\cdot n(x)\, dl}_{=0} = \underbrace{\iint_S \nabla \cdot f(x)\, dx}_{\neq 0}.

The right-hand side is nonzero because \nabla \cdot f is not identically zero and does not change sign on S \subset D, contradicting the left-hand side being zero. Hence no such periodic orbit exists.

Example

\begin{align*} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= -\delta x_2 + x_1 - x_1^3 + x_1^2x_2, ~\delta > 0 \end{align*}

Calculating the divergence gives: \nabla \cdot f(x) = \frac{\partial f_1}{\partial x_1} + \frac{\partial f_2}{\partial x_2} = x_1^2 - \delta

This quantity is not identically zero and does not change sign within each of the regions x_1 \leq -\sqrt{\delta}, -\sqrt{\delta} \leq x_1 \leq \sqrt{\delta}, and x_1 \geq \sqrt{\delta}, so no periodic orbit can lie entirely within any one of these vertical strips.

Regions of possible periodic orbits based on Bendixson’s Theorem
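A minimal sketch that reproduces a figure like the one above (assuming, for illustration, \delta = 1): it overlays the vector field with the lines x_1 = \pm\sqrt{\delta}, where the divergence changes sign and which any periodic orbit would have to cross.

Show code
import numpy as np
import matplotlib.pyplot as plt

delta = 1.0  # illustrative value

x1 = np.linspace(-2.5, 2.5, 30)
x2 = np.linspace(-2.5, 2.5, 30)
X1, X2 = np.meshgrid(x1, x2)

# System dynamics
U = X2
V = -delta * X2 + X1 - X1**3 + X1**2 * X2

fig, ax = plt.subplots(figsize=(6, 4))
ax.streamplot(X1, X2, U, V, color='blue', density=1.2, arrowsize=1)
# Lines where the divergence x1^2 - delta changes sign
ax.axvline(-np.sqrt(delta), color='red', linestyle='--', linewidth=1)
ax.axvline(np.sqrt(delta), color='red', linestyle='--', linewidth=1)
ax.set_title("Vector field and sign changes of the divergence")
ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.grid(True)
plt.tight_layout()
plt.show()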

Invariant Sets

Notation: \varphi(t, x_0) denotes a trajectory of \dot{x} = f(x) with initial condition x(0) = x_0.

Definition: A set M \subset \R^n is positively (negatively) invariant if, for each x_0 \in M, \varphi(t, x_0) \in M for all t \geq 0 (t \leq 0).

While the definition is straightforward, it does not seem easy to verify directly. However, it becomes surprisingly easy to check using Nagumo’s Theorem (Nagumo 1942).

Nagumo, Mitio. 1942. “Über die Lage der Integralkurven gewöhnlicher Differentialgleichungen.” Proceedings of the Physico-Mathematical Society of Japan 24: 551–59.

If f(x) \cdot n(x) \leq 0 at every point on the boundary of M, where n(x) is the outward normal, then M is positively invariant

Example 1

A predator-prey model (the Lotka-Volterra equations)

\begin{align*} \dot{x} &= (a - by)x \qquad \text{Prey (exponential growth when $y = 0$)} \\ \dot{y} &= (cx - d)y \qquad \text{Predator (exponential decay when $x = 0$)} \end{align*} with a, b, c, d > 0.

The nonnegative quadrant is positively invariant:

\begin{align*} \text{(x-axis:) }& \begin{bmatrix} ax \\ 0 \end{bmatrix} \cdot \begin{bmatrix} 0 \\ -1 \end{bmatrix} = 0 \\ \text{(y-axis:) }& \begin{bmatrix} 0 \\ -dy \end{bmatrix} \cdot \begin{bmatrix} -1 \\ 0 \end{bmatrix} = 0 \end{align*}
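A quick symbolic sketch of this boundary check (using sympy; the symbols mirror the model above):

Show code
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', positive=True)
f = sp.Matrix([(a - b * y) * x, (c * x - d) * y])

# On the x-axis (y = 0) the outward normal of the nonnegative quadrant is (0, -1)
print(f.subs(y, 0).dot(sp.Matrix([0, -1])))   # 0
# On the y-axis (x = 0) the outward normal is (-1, 0)
print(f.subs(x, 0).dot(sp.Matrix([-1, 0])))   # 0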

Show code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

# Parameters
a = 0.1   # Prey birth rate
b = 0.02  # Predation rate
c = 0.01  # Predator reproduction rate
d = 0.1   # Predator death rate

def predator_prey_model(X, t, a, b, c, d):
    prey, predator = X
    dprey_dt = (a - b * predator) * prey
    dpredator_dt = (c * prey - d) * predator
    return [dprey_dt, dpredator_dt]

# Time vector and initial conditions
t = np.linspace(0, 200, 10000)
X0 = [40, 9]  # 40 prey, 9 predators

# Solve ODE
solution = odeint(predator_prey_model, X0, t, args=(a, b, c, d))
prey, predator = solution.T

# Plot phase diagram
plt.figure()
plt.plot(prey, predator, 'k', label='Trajectory')
plt.xlabel('Prey Population')
plt.ylabel('Predator Population')
plt.title('Phase Diagram of Predator-Prey Model')
plt.grid()

# Plot the vector field (prey on the horizontal axis, predator on the vertical axis)
PREY, PRED = np.mgrid[0:50:20j, 0:30:20j]
U = (a - b * PRED) * PREY   # prey rate of change
V = (c * PREY - d) * PRED   # predator rate of change
N = np.sqrt(U**2 + V**2)
U2 = np.divide(U, N, out=np.zeros_like(U), where=N!=0)  # normalize arrow lengths
V2 = np.divide(V, N, out=np.zeros_like(V), where=N!=0)
plt.quiver(PREY, PRED, U2, V2, color='grey', alpha=0.6)

# Plot equilibrium point
prey_eq = d / c
predator_eq = a / b
plt.plot(prey_eq, predator_eq, 'ro', label='Equilibrium')
plt.text(prey_eq + 1, predator_eq + 1, r'$\left(\frac{d}{c}, \frac{a}{b}\right)$', fontsize=16, ha='left')
plt.plot(0, 0, 'ro')

plt.legend()
plt.show()

Example 2

\begin{align*} \dot{x}_1 &= x_1 + x_2 - x_1(x_1^2 + x_2^2) \\ \dot{x}_2 &= -2 x_1 + x_2 - x_2(x_1^2 + x_2^2) \end{align*}

Show that B_r \triangleq \{ x \mid x_1^2 + x_2^2 \leq r^2 \} is positively invariant for sufficiently large r. On the boundary of B_r the outward normal direction is n(x) = (x_1, x_2), so we compute:

\begin{align*} f(x) \cdot n(x) &= \begin{bmatrix} x_1 + x_2 - x_1(x_1^2 + x_2^2) \\ -2x_1 + x_2 - x_2(x_1^2 + x_2^2) \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \\ &= {\color{orange} x_1^2} + {\color{blue} x_1 x_2} - x_1^2(x_1^2 + x_2^2) {\color{blue} - 2x_1x_2} + {\color{orange}x_2^2} - x_2^2(x_1^2 + x_2^2) \\ &= {\color{blue} - x_1 x_2} + {\color{orange} (x_1^2 + x_2^2)} - (x_1^2 + x_2^2)^2 \end{align*}

Next, we can use the inequality6

6 This is a special case of the Cauchy-Schwarz inequality: |\langle a, b \rangle | \le \|a\| \|b\| with a = (x_1, x_2) and b = (x_2, x_1): \begin{align*} |x_1x_2 + x_2x_1| &\leq \sqrt{(x_1^2 + x_2^2)(x_1^2 + x_2^2)} \\ |2 x_1 x_2 | &\leq x_1^2 + x_2^2 \end{align*}

\begin{align*} |2x_1x_2| \le x_1^2 + x_2^2, \end{align*}

to bound f(x) \cdot n(x) on the boundary of B_r, where x_1^2 + x_2^2 = r^2: \begin{align*} f(x) \cdot n(x) &\leq \frac{1}{2}(x_1^2 + x_2^2) + (x_1^2 + x_2^2) - (x_1^2 + x_2^2)^2 \\ &= \frac{3}{2}r^2 - r^4 \end{align*}

Therefore, f(x) \cdot n(x) \leq \frac{3}{2}r^2 - r^4 \leq 0 on the boundary whenever r^2 \geq \frac{3}{2}, so by Nagumo’s Theorem the disk B_r is positively invariant for every r \geq \sqrt{3/2}.
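A small numerical sketch of this condition: sample the boundary circle at radius r = \sqrt{3/2} and confirm that f(x) \cdot n(x) never becomes positive there.

Show code
import numpy as np

def f(x1, x2):
    return (x1 + x2 - x1 * (x1**2 + x2**2),
            -2 * x1 + x2 - x2 * (x1**2 + x2**2))

r = np.sqrt(1.5)                          # smallest radius with r^2 >= 3/2
theta = np.linspace(0, 2 * np.pi, 1000)
x1, x2 = r * np.cos(theta), r * np.sin(theta)

f1, f2 = f(x1, x2)
dots = f1 * x1 + f2 * x2                  # f(x) . n(x) on the boundary
print(dots.max())                         # approximately 0 and nonpositive, consistent with Nagumo's condition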