Lecture 2 - Essentially Nonlinear Phenomena

Published

January 15, 2026

Based on notes created by Sam Coogan and Murat Arcak. Licensed under a “Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License”

Review

Last class we ended with the pendulum example. Note that we assumed a frictional force resisting the motion, proportional to the speed of the mass: F_{ext} = -kv = -k \ell \dot{\theta}.

Using the Lagrangian \mathcal{L}(\theta, \dot \theta) = \frac{1}{2}m \ell^2 \dot \theta^2 + m g \ell \cos \theta, we derived the equations of motion:

\begin{aligned} \frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot \theta}\right) - \frac{\partial \mathcal{L}}{\partial \theta} &= \tau_{ext} \\ \frac{d}{dt}\left( m \ell^2 \dot{\theta} \right) + mg \ell \sin \theta &= -k \ell^2 \dot{\theta} \\ m \ell^2 \ddot{\theta} + mg \ell \sin \theta &= -k \ell^2 \dot{\theta} \\ \ddot{\theta} + \frac{g}{\ell} \sin \theta &= -\frac{k}{m} \dot{\theta} \\ \ddot{\theta} &= -\frac{k}{m} \dot{\theta} - \frac{g}{\ell} \sin \theta \end{aligned}

From these equations of motion, we can derive a state-space representation of the system by defining the state variables x_1 = \theta and x_2 = \dot{\theta}:

\begin{aligned} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= -\frac{k}{m} x_2 - \frac{g}{\ell} \sin x_1 \end{aligned}

or in vector form, \dot{x} = f(x):

\dot{x} = \begin{bmatrix} x_2 \\ -\frac{k}{m} x_2 - \frac{g}{\ell} \sin x_1 \end{bmatrix}
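As a quick numerical check, here is a minimal simulation of this state-space model; the parameter values (m, \ell, k, g) and the initial condition are example choices, not values from the notes.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Example parameters (assumed for illustration)
m, ell, k, g = 1.0, 1.0, 0.5, 9.81

def pendulum(t, x):
    # x[0] = theta, x[1] = theta_dot
    return [x[1], -(k/m) * x[1] - (g/ell) * np.sin(x[0])]

# Start near the upright (unstable) equilibrium; the damped trajectory
# swings down and settles at the downward equilibrium.
sol = solve_ivp(pendulum, (0, 20), [np.pi - 0.1, 0.0],
                t_eval=np.linspace(0, 20, 1000))

plt.plot(sol.t, sol.y[0], label=r'$\theta(t)$')
plt.plot(sol.t, sol.y[1], label=r'$\dot\theta(t)$')
plt.xlabel(r'$t$')
plt.legend()
plt.show()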

We can identify the equilibrium points of the system by finding the points x^* such that f(x^*) = 0:

x^* \in \{ (0,0), (\pi, 0) \}

Finally, we can determine the stability of the equilibrium points by linearizing our system around each of the equilibrium points:

A = \frac{\partial f}{\partial x}\Big|_{x^*} \triangleq \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} \\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2}\end{bmatrix}\Bigg|_{x^*}

\frac{\partial f}{\partial x} = \begin{bmatrix} 0 & 1 \\ -\frac{g}{\ell} \cos x_1 & -\frac{k}{m} \end{bmatrix}

Linearization at x^* = (0,0):

Plugging in x=(0,0) for our Jacobian, we get the linearization: A = \begin{bmatrix} 0 & 1 \\ -\frac{g}{\ell} & -\frac{k}{m} \end{bmatrix}

The characteristic equation, obtained by solving \det(\lambda I - A) = 0, is \lambda^2 + \frac{k}{m}\lambda + \frac{g}{\ell} = 0,

which gives us the eigenvalues \lambda = -\frac{k}{2m} \pm \sqrt{\left( \frac{k}{2m} \right)^2 - \frac{g}{\ell}}. Since \frac{k}{m} > 0 and \frac{g}{\ell} > 0, both eigenvalues have negative real parts: the square root is either imaginary or strictly smaller than \frac{k}{2m} (i.e., Re(\lambda_i(A)) < 0). This implies that the equilibrium point at (0,0) is stable.

Linearization at x^* = (\pi,0):

Repeating the above but with x=(\pi,0) gives us the linearization: A = \begin{bmatrix} 0 & 1 \\ \frac{g}{\ell} & -\frac{k}{m} \end{bmatrix}

This gives us the eigenvalues \lambda = -\frac{k}{2m} \pm \sqrt{\left( \frac{k}{2m} \right)^2 + \frac{g}{\ell}}

Since the square root now exceeds \frac{k}{2m}, one eigenvalue is always positive and one is always negative (i.e., Re(\lambda_1(A)) < 0 and Re(\lambda_2(A)) > 0). This implies that the equilibrium point at (\pi,0) is unstable. We will learn later in the lecture that it is specifically characterized as a “saddle”.
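As a numerical sanity check, we can evaluate the Jacobian at both equilibria and compute its eigenvalues; the parameter values below are example choices, not values from the notes.

import numpy as np

# Example parameters (assumed for illustration)
m, ell, k, g = 1.0, 1.0, 0.5, 9.81

# Jacobian of the pendulum vector field evaluated at an equilibrium (theta*, 0)
def A_at(theta_star):
    return np.array([[0.0, 1.0],
                     [-(g/ell) * np.cos(theta_star), -(k/m)]])

for theta_star in (0.0, np.pi):
    eigs = np.linalg.eigvals(A_at(theta_star))
    print(f"theta* = {theta_star:.2f}: eigenvalues = {eigs}")
# Expected: both real parts negative at theta* = 0 (stable),
# one positive and one negative eigenvalue at theta* = pi (saddle).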

Takeaway:

This example demonstrates a situation in which linearization is a valid approach towards analyzing a nonlinear system. As stated in Khalil, “whenever possible, we should make use of linearization to learn as much as we can about the behavior of a nonlinear system”. However, linearization has two basic limitations:

  • Linearization is only valid in a neighborhood of the equilibrium point (“local approximation”).
  • Nonlinear system dynamics are much richer than the dynamics of a linear system.

The following are phenomena that can only take place in the presence of nonlinearities and cannot be captured by linear models.

Essentially Nonlinear Phenomena

1. Finite Escape Time

Example: \dot x = x^2

\begin{aligned} \frac{d x}{dt } &= x^2 \\ \frac{1}{x^2}\,dx &= dt \\ -\frac{1}{x} &= t + C \\ x(t) &= -\frac{1}{t + C} \\ x(0) = x_0 \;\Rightarrow\; C &= -\frac{1}{x_0} \\ x(t) &= \frac{1}{\frac{1}{x_0} - t} \\ \Rightarrow \quad t_{\textrm{escape}} &= \frac{1}{x_0} \end{aligned}

For x_0 > 0, the state goes to infinity in the finite time t_{\textrm{escape}} = \frac{1}{x_0}.

Show code
import numpy as np
import matplotlib.pyplot as plt

x0 = 2
t_escape = 1/x0

t = np.linspace(0, t_escape-0.01, 100)
x = 1 / (1/x0 - t)

plt.figure(figsize=(3, 2.5))
plt.plot(t, x, linewidth=2)
plt.axvline(x=t_escape, linestyle='--', alpha=0.7, label='$t_{escape}$')
plt.xlabel(r'$t$')
plt.ylabel(r'$x$')
plt.xlim(0, t_escape + 0.2)
plt.ylim(0, 50)
plt.tight_layout()
plt.yticks([0, 10, 20, 30, 40, 50, x0])
plt.xticks([0, t_escape])
plt.legend()

plt.show()

For linear systems, x(t) \to \infty cannot happen in finite time: solutions of \dot x = Ax are sums of (complex) exponentials, which remain finite on every finite time interval.
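For contrast, here is a small sketch comparing the solution of \dot x = x^2 with that of the linear system \dot x = x from the same initial condition x_0 = 2:

import numpy as np

x0 = 2.0
t_escape = 1 / x0

for t in [0.4, 0.45, 0.49, 0.499]:
    x_nonlinear = 1 / (1/x0 - t)      # solution of xdot = x^2
    x_linear = x0 * np.exp(t)         # solution of xdot = x
    print(f"t = {t:.3f}: nonlinear x = {x_nonlinear:8.1f},  linear x = {x_linear:.2f}")
# The nonlinear solution grows without bound as t -> 1/x0 = 0.5,
# while the linear solution is finite for every finite t.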

2. Multiple Isolated Equilibria

  • Linear systems: either unique equilibrium or a continuum

  • Pendulum: two isolated equilibria (one stable, one unstable)

  • “Multi-stable” systems: two or more stable equilibria

Example: Bistable switch

\begin{array}{ll} \dot x_1 = -ax_1 + x_2 & \quad x_1: \text{ concentration of protein} \\ \dot x_2 = \frac{x_1^2}{1 + x_1^2} -b x_2 & \quad x_2: \text{ concentration of mRNA} \end{array}

a>0, b>0 are constants. State space: \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0}.

This model describes a positive feedback where the protein encoded by a gene stimulates more transcription via the term \frac{x_1^2}{1 + x_1^2}.

There is a single equilibrium at the origin when ab>0.5. If ab<0.5, the line x_2 = a x_1 (where \dot{x}_1=0) intersects the sigmoidal curve x_2 = \frac{x_1^2}{b(1+x_1^2)} (where \dot{x}_2=0) at two other points, giving rise to a total of three equilibria:

Show code
import numpy as np
import matplotlib.pyplot as plt

a = 0.5
b = 0.9

# Grid
x1 = np.linspace(0, 2, 200)
x2 = np.linspace(0, 1, 200)
X1, X2 = np.meshgrid(x1, x2)

U = -a*X1 + X2
V = (X1**2)/(1 + X1**2) - b*X2

# Equilibria
disc = np.sqrt(1 - 4*a**2*b**2)
x1_minus = (1 - disc)/(2*a*b)
x1_plus  = (1 + disc)/(2*a*b)

eq_stable_low  = (0, 0)
eq_saddle      = (x1_minus, a*x1_minus)
eq_stable_high = (x1_plus,  a*x1_plus)

# Plot streamlines
plt.streamplot(
    X1, X2, U, V,
    density=1.2,
    linewidth=0.8,
    arrowsize=0.7,
    color='black'
)

# Nullclines
x = np.linspace(0, 2.2, 400)
plt.plot(x, a*x, color='orange', lw=1.5, label=r'$\dot x_1=0$')
plt.plot(x, x**2/(b*(1+x**2)), color='blue', lw=1.5, label=r'$\dot x_2=0$')

# Equilibria
plt.plot(*eq_stable_low,  'ko', ms=6)
plt.plot(*eq_saddle,      'ko', ms=6, mfc='white')
plt.plot(*eq_stable_high, 'ko', ms=6)

# Axes
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.xlim(0, 2)
plt.ylim(0, 1)
plt.legend(frameon=False)
plt.tight_layout()
plt.show()
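The threshold ab = 0.5 can be seen directly from the nullclines: setting \dot x_1 = \dot x_2 = 0 and substituting x_2 = a x_1 into the second equation gives

\begin{aligned} a x_1 &= \frac{x_1^2}{b(1 + x_1^2)} \\ \Rightarrow \quad x_1 \left( ab\, x_1^2 - x_1 + ab \right) &= 0 \\ \Rightarrow \quad x_1 = 0 \quad \text{or} \quad x_1 &= \frac{1 \pm \sqrt{1 - 4a^2 b^2}}{2ab} \end{aligned}

so the two nonzero intersections (used as x1_minus and x1_plus in the code above) are real exactly when 1 - 4a^2b^2 \ge 0, i.e., ab \le 0.5.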

Intuitive Interpretation:

This system admits up to three equilibrium points, corresponding to distinct gene expression states.

The equilibrium at (x_1, x_2) = (0, 0) represents a gene-off state, in which neither protein nor mRNA is present. Small perturbations away from this state decay back to zero: insufficient protein leads to insufficient transcription, and the system relaxes to silence. This equilibrium is therefore stable.

A second equilibrium appears at an intermediate protein and mRNA level. This equilibrium is unstable (a saddle) and acts as a threshold between gene-off and gene-on behavior. Initial conditions on one side of this point decay toward the off state, while those on the other side are amplified toward sustained expression. Biologically, this corresponds to a critical protein concentration required to maintain transcriptional activity.

The third equilibrium corresponds to a gene-on state, with high protein and mRNA concentrations. Here, positive feedback between protein abundance and transcription balances degradation, leading to persistent expression. Small perturbations around this point decay, making it a stable high-expression state.

Together, these equilibria give rise to bistability: the system can settle into either a low-expression or high-expression state depending on initial conditions, with the unstable equilibrium defining the boundary between their basins of attraction.

3. Limit Cycles

Linear oscillators exhibit a continuum of periodic orbits; e.g., every circle centered at the origin is a periodic orbit for \dot{x}=Ax where

A = \begin{bmatrix}0 & -\beta \\ \beta & 0\end{bmatrix} \quad (\lambda_{1, 2} = \mp j\beta).

In contrast, a limit cycle is an isolated periodic orbit and can occur only in nonlinear systems.

Harmonic oscillator vs. limit cycle

v_C(t): the voltage across the capacitor (representing stored electric energy)

i_L(t): the current through the inductor (representing stored magnetic energy).

C: capacitance of the capacitor

L: inductance of the inductor
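For comparison, if the nonlinear element is removed, the lossless LC circuit (with the same sign conventions as the Van der Pol equations below) is the linear harmonic oscillator

\begin{aligned} C \dot v_C &= -i_L \\ L \dot i_L &= v_C \end{aligned}

whose trajectories are closed curves around the origin: a continuum of periodic orbits with frequency 1/\sqrt{LC}.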

Van der Pol circuit

Example: Van der Pol oscillator (models a system with self-sustained oscillations)

\begin{aligned} C \dot v_C &= -i_L + v_C - v_C^3 \\ L \dot i_L &= v_C \end{aligned}

Show code
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 4))

x = np.linspace(-2, 2, 200)
y = np.linspace(-2, 2, 200)
X, Y = np.meshgrid(x, y)

# Van der Pol Oscillator vector field
C = 1.0
L = 1.0
U = (1/C) * (-Y + X - X**3)
V = (1/L) * X

ax.streamplot(X, Y, U, V, color='C1', density=1.2)
ax.set_title("Phase Portrait")
ax.set_xlabel(r"$v_c$")
ax.set_ylabel(r"$i_L$")
ax.set_aspect('equal')
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)

plt.tight_layout()
plt.show()

Van der Pol Oscillator Phase Portrait (nonlinear, isolated limit cycle).
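To see the isolated nature of this periodic orbit, here is a minimal time-domain sketch with C = L = 1: one trajectory starting near the origin and one starting far outside both converge to the same closed orbit. The initial conditions are example choices.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

C, L = 1.0, 1.0

def vdp(t, x):
    vC, iL = x
    return [(-iL + vC - vC**3) / C, vC / L]

t_eval = np.linspace(0, 40, 4000)

# One trajectory starting near the origin, one starting far outside
for x0, color in [([0.05, 0.05], 'C0'), ([2.5, 2.5], 'C3')]:
    sol = solve_ivp(vdp, (0, 40), x0, t_eval=t_eval)
    plt.plot(sol.y[0], sol.y[1], color=color, lw=1,
             label=f'$x_0 = {tuple(x0)}$')

plt.xlabel(r'$v_C$')
plt.ylabel(r'$i_L$')
plt.legend()
plt.axis('equal')
plt.show()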

4. Chaos

Irregular oscillations, never exactly repeating.

Example: Lorenz system (derived by Ed Lorenz in 1963 as a simplified model of convection rolls in the atmosphere):

\begin{aligned} \dot x &= \sigma(y - x) \\ \dot y &= rx - y - xz \\ \dot z &= xy - bz \end{aligned}

Chaotic behavior with \sigma = 10, b = 8/3, r = 28:

Show code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Lorenz system parameters
sigma = 10.0
b = 8/3
r = 28.0

def lorenz(t, X):
  x, y, z = X
  dx = sigma * (y - x)
  dy = r * x - y - x * z
  dz = x * y - b * z
  return [dx, dy, dz]

# Initial condition and time span
X0 = [1.0, 1.0, 1.0]
t_span = (0, 30)
t_eval = np.linspace(*t_span, 5000)

sol = solve_ivp(lorenz, t_span, X0, t_eval=t_eval)

# Create figure and axes explicitly
fig, ax2d = plt.subplots(figsize=(5, 4))

# 2D projection (x vs z)
ax2d.plot(sol.y[0], sol.y[2], color='C1', lw=1)
ax2d.set_xlabel('$x$')
ax2d.set_ylabel('$z$')

plt.tight_layout()
plt.show()

Lorenz system: 2D projection.
Show code
import numpy as np
import plotly.graph_objects as go
from scipy.integrate import solve_ivp

# Lorenz system
sigma, b, r = 10.0, 8/3, 28.0

def lorenz(t, X):
    x, y, z = X
    return [
        sigma * (y - x),
        r * x - y - x * z,
        x * y - b * z
    ]

# Solve system
sol = solve_ivp(
    lorenz,
    (0, 30),
    [1, 1, 1],
    t_eval=np.linspace(0, 30, 2000)
)

# Create animated frames
frames = [
    go.Frame(
        data=[go.Scatter3d(
            x=sol.y[0][:k],
            y=sol.y[1][:k],
            z=sol.y[2][:k],
            mode="lines",
            line=dict(color="royalblue", width=4)
        )]
    )
    for k in range(10, len(sol.y[0]), 20)
]

fig = go.Figure(
    data=frames[0].data,
    frames=frames
)

fig.update_layout(
    scene=dict(
        xaxis=dict(title='x', range=[-30, 30]),
        yaxis=dict(title='y', range=[-30, 30]),
        zaxis=dict(title='z', range=[0, 50])
    ),
    updatemenus=[{
        "type": "buttons",
        "buttons": [{
            "label": "Play",
            "method": "animate",
            "args": [None, {"frame": {"duration": 30}, "fromcurrent": True}]
        }]
    }]
)

fig

Key observations:

  • For continuous-time, time-invariant systems, n \ge 3 state variables are required for chaos.
    • For n = 1: x(t) is monotone in t, so no oscillations are possible.
    • For n = 2: The Poincaré-Bendixson Theorem (to be studied in Lecture 4) guarantees regular (non-chaotic) behavior.
    • Poincaré-Bendixson does not apply to time-varying systems, and n \ge 2 is enough for chaos (e.g., the periodically forced Van der Pol oscillator can exhibit chaos).
    • For discrete-time systems, n = 1 is enough for chaos (we will see an example in Lecture 6).
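A hallmark of chaos is sensitive dependence on initial conditions. The sketch below integrates the Lorenz system (same parameters as above) from two initial conditions differing by 10^{-8} and plots the distance between the trajectories; the tolerances and time span are example choices.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

sigma, b, r = 10.0, 8/3, 28.0

def lorenz(t, X):
    x, y, z = X
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

t_eval = np.linspace(0, 30, 6000)
sol1 = solve_ivp(lorenz, (0, 30), [1.0, 1.0, 1.0],
                 t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol2 = solve_ivp(lorenz, (0, 30), [1.0 + 1e-8, 1.0, 1.0],
                 t_eval=t_eval, rtol=1e-9, atol=1e-9)

# Euclidean distance between the two trajectories over time
dist = np.linalg.norm(sol1.y - sol2.y, axis=0)

plt.semilogy(t_eval, dist)
plt.xlabel(r'$t$')
plt.ylabel(r'$\|X_1(t) - X_2(t)\|$')
plt.show()
# The separation grows roughly exponentially before saturating
# at the size of the attractor.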

5. Multiple Modes of Behavior

Hybrid systems exhibit both continuous and discrete dynamics. Examples include a bouncing ball (sketched below) or a legged robot.

Hybrid system
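As a minimal sketch of hybrid dynamics, consider a bouncing ball: free fall between impacts (continuous dynamics) and a velocity reset at each impact (discrete dynamics). The coefficient of restitution e and the initial height are example parameters.

import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

g = 9.81   # gravity
e = 0.8    # assumed coefficient of restitution

def freefall(t, x):
    # x = [height, velocity]; continuous dynamics between impacts
    return [x[1], -g]

def hit_ground(t, x):
    return x[0]
hit_ground.terminal = True
hit_ground.direction = -1

t, x = 0.0, [1.0, 0.0]
T, H = [], []
for _ in range(8):                       # simulate 8 bounces
    sol = solve_ivp(freefall, (t, t + 10), x, events=hit_ground, max_step=0.01)
    T.append(sol.t)
    H.append(sol.y[0])
    t = sol.t[-1]
    x = [0.0, -e * sol.y[1, -1]]         # discrete reset: reverse and damp velocity

plt.plot(np.concatenate(T), np.concatenate(H), lw=1)
plt.xlabel(r'$t$')
plt.ylabel('height')
plt.show()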

Planar (Second Order) Dynamical Systems

Chapter 2 in both Sastry and Khalil

\begin{aligned} \dot x_1 &= f_1(x_1, x_2) \\ \dot x_2 &= f_2(x_1, x_2) \end{aligned}

Second-order autonomous systems are convenient to study because solution trajectories (i.e., x(t) = (x_1(t), x_2(t))) can be represented as curves in the plane. This “plane” is usually called the phase plane, and f(x) defines a vector field on it.

The family of all trajectories (solution curves) is called the phase portrait of the system. For example, recall that the phase portrait of the pendulum system was:

Pendulum phase portrait
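The sketch below reproduces this phase portrait from the state-space model; the parameter values are example choices.

import numpy as np
import matplotlib.pyplot as plt

# Example parameters (assumed for illustration)
m, ell, k, g = 1.0, 1.0, 0.5, 9.81

x1 = np.linspace(-2*np.pi, 2*np.pi, 200)
x2 = np.linspace(-6, 6, 200)
X1, X2 = np.meshgrid(x1, x2)

U = X2
V = -(k/m) * X2 - (g/ell) * np.sin(X1)

plt.streamplot(X1, X2, U, V, density=1.2, linewidth=0.8, color='black')
# Mark the equilibria at (n*pi, 0)
eq = np.arange(-2, 3) * np.pi
plt.plot(eq, np.zeros_like(eq), 'ko', ms=5)
plt.xlabel(r'$x_1 = \theta$')
plt.ylabel(r'$x_2 = \dot\theta$')
plt.tight_layout()
plt.show()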

Phase Portraits of Linear Systems: \dot x = A x

Depending on the eigenvalues of A, the real Jordan form J = T^{-1} A T has one of three forms:

Distinct real eigenvalues

T^{-1}A T = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}

In z = T^{-1} x coordinates:

\dot z_1 = \lambda_1 z_1, \quad \dot z_2 = \lambda_2 z_2

The equilibrium is called a node when \lambda_1 and \lambda_2 have the same sign (a stable node when both are negative, an unstable node when both are positive). It is called a saddle point when \lambda_1 and \lambda_2 have opposite signs.

Complex eigenvalues: \lambda_{1, 2} = \alpha \mp j\beta

T^{-1}A T = \begin{bmatrix} \alpha & -\beta \\ \beta & \alpha \end{bmatrix}

\begin{aligned} \dot z_1 &= \alpha z_1 - \beta z_2 \\ \dot z_2 &= \alpha z_2 + \beta z_1 \end{aligned} \quad \rightarrow \quad \text{polar coordinates} \quad \rightarrow \quad \begin{aligned} \dot r &= \alpha r \\ \dot \theta &= \beta \end{aligned}

The phase portraits above assume \beta>0 so that the direction of rotation is counter-clockwise: \dot{\theta}=\beta>0.
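The sketch below draws the canonical phase portraits in z-coordinates for example eigenvalue choices: a stable node, a saddle, and the complex-eigenvalue case with \alpha < 0, \beta > 0.

import numpy as np
import matplotlib.pyplot as plt

# Example Jordan-form matrices (eigenvalues assumed for illustration)
cases = {
    'stable node ($\\lambda_1=-1,\\ \\lambda_2=-3$)': np.array([[-1.0, 0.0], [0.0, -3.0]]),
    'saddle ($\\lambda_1=-1,\\ \\lambda_2=+1$)':       np.array([[-1.0, 0.0], [0.0,  1.0]]),
    'complex ($\\alpha=-0.3,\\ \\beta=2$)':            np.array([[-0.3, -2.0], [2.0, -0.3]]),
}

z1 = np.linspace(-2, 2, 200)
z2 = np.linspace(-2, 2, 200)
Z1, Z2 = np.meshgrid(z1, z2)

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (title, A) in zip(axes, cases.items()):
    # Linear vector field z_dot = A z on the grid
    U = A[0, 0]*Z1 + A[0, 1]*Z2
    V = A[1, 0]*Z1 + A[1, 1]*Z2
    ax.streamplot(Z1, Z2, U, V, density=1.0, linewidth=0.8, color='black')
    ax.set_title(title)
    ax.set_xlabel(r'$z_1$')
    ax.set_ylabel(r'$z_2$')
    ax.set_aspect('equal')

plt.tight_layout()
plt.show()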