Lecture 10 - Lyapunov’s Indirect Method and Regions of Attraction
Published
February 12, 2026
Based on notes created by Sam Coogan and Murat Arcak. Licensed under a “Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License”
Additional Reading
Khalil, Chapter 4.3-4.7, 8.2
Overview
Further tools for studying systems based on their linearization
Define region of attraction
Obtain Lyapunov estimates of the region of attraction
In the last lecture, we introduced the Lyapunov equation1 and showed that it can be used to test whether a matrix A is Hurwitz: pick a positive definite matrix Q and solve the Lyapunov equation for P. If the Lyapunov equation has a positive definite solution, we conclude that A is Hurwitz.
1 PA + A^TP = -Q
For linear systems, this method has no computational advantage over calculating the eigenvalues of A.
However, for nonlinear systems, we can establish that V = x^TPx (derived using the linearized system) is locally a Lyapunov function for the nonlinear system.
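As a quick illustration (our addition, not part of the original notes), this test is easy to run numerically. The sketch below uses scipy.linalg.solve_continuous_lyapunov, which solves AX + XA^T = Q, on an arbitrarily chosen matrix A:
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary test matrix (chosen for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a @ X + X @ a.T = q,
# so a = A.T, q = -Q gives the solution of A.T @ P + P @ A = -Q
P = solve_continuous_lyapunov(A.T, -Q)

# A is Hurwitz iff the solution P is (symmetric) positive definite
print(P)
print("A is Hurwitz:", bool(np.all(np.linalg.eigvalsh(P) > 0)))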
Lyapunov’s Indirect Method
Let’s go back to our nonlinear system:
\dot x = f(x), \quad f(0) = 0,
where f: D \to \R^n is C^1 and D \subset \R^n is a neighborhood of the equilibrium point x=0.
We can rewrite the nonlinear system in the form \dot{x} = Ax + g(x) by applying the mean value theorem to each component f_i:
\begin{align*}
f_i(x) - f_i(0) &= \frac{\partial f_i}{\partial x}(z_i)\, x
\end{align*}
with z_i being a point on the line segment connecting 0 and x. This can be rearranged to get our desired form:
\begin{align*}
f_i(x) &= \cancel{f_i(0)} + \frac{\partial f_i}{\partial x}(z_i)x \\
&= \frac{\partial f_i}{\partial x}(0)x + \underbrace{\left[\frac{\partial f_i}{\partial x}(z_i) - \frac{\partial f_i}{\partial x}(0) \right]x}_{g_i(x)} \\
f(x) &= A x + g(x)
\end{align*}
The function g_i(x) satisfies:
\begin{align*}
|g_i(x)| \leq \left\| \frac{\partial f_i}{\partial x}(z_i) - \frac{\partial f_i}{\partial x}(0) \right\| \|x\|
\end{align*}
By continuity of [\partial f / \partial x] (note that z_i \to 0 as x \to 0), we see that
\begin{align*}
\frac{\|g(x)\|}{\|x \|} \to 0 \quad \text{as} \quad \|x\| \to 0
\end{align*}
This suggests that in a small neighborhood of the origin we can approximate the nonlinear system by its linearization about the origin.
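As a quick numerical sanity check (our addition, using the damped pendulum f(x) = (x_2, -\sin x_1 - x_2) as an illustrative choice), we can form g(x) = f(x) - Ax and watch the ratio \|g(x)\| / \|x\| vanish as \|x\| \to 0:
import numpy as np

# Illustrative system: damped pendulum, f(x) = (x2, -sin(x1) - x2)
def f(x):
    return np.array([x[1], -np.sin(x[0]) - x[1]])

# Jacobian of f at the origin
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

d = np.array([1.0, 1.0]) / np.sqrt(2)  # fixed direction
for norm_x in [1.0, 0.1, 0.01, 0.001]:
    x = norm_x * d
    g = f(x) - A @ x  # remainder after subtracting the linearization
    print(f"||x|| = {norm_x:<6}  ||g(x)||/||x|| = {np.linalg.norm(g)/norm_x:.2e}")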
We can formalize this conclusion in a theorem that follows the same logic we have used throughout the course (i.e., linear stability theory and the Hartman-Grobman theorem). However, only now, with Lyapunov's stability theorem in hand, can we formalize and prove it. Because we draw conclusions about the nonlinear system indirectly, through its linearization, this approach is known as Lyapunov's indirect method.
Theorem. Let x = 0 be an equilibrium point for the nonlinear system \dot{x} = f(x), where f: D \to \R^n is C^1 and D \subset \R^n is a neighborhood of the origin. Let
A = \left. \frac{\partial f(x)}{\partial x} \right|_{x=0}
Then,
If \Re\{\lambda_i(A)\} < 0 for all eigenvalues of A, then the origin is asymptotically stable for the nonlinear system.
If \Re\{\lambda_i(A)\} > 0 for some eigenvalue of A, then the origin is unstable for the nonlinear system.
Note: We can conclude only local asymptotic stability from this linearization, and the test is inconclusive if A has eigenvalues on the imaginary axis (and none with positive real part).
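For instance (our example, not from the notes), applying the theorem to the damped pendulum from the sketch above only requires the eigenvalues of its Jacobian at the origin:
import numpy as np

# Jacobian at the origin of the damped pendulum f(x) = (x2, -sin(x1) - x2)
A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

eigs = np.linalg.eigvals(A)
print("eigenvalues:", eigs)  # -0.5 +/- 0.866j
if np.all(eigs.real < 0):
    print("origin is locally asymptotically stable")
elif np.any(eigs.real > 0):
    print("origin is unstable")
else:
    print("test is inconclusive (eigenvalues on the imaginary axis)")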
Proof (of the first claim). Since A is Hurwitz, for any Q = Q^T > 0 we can find P = P^T > 0 such that A^TP + PA = -Q. Use V(x) = x^T P x as a Lyapunov function candidate for the nonlinear system \dot x = Ax + g(x):
\begin{align*}
\dot V(x) &= x^TP \dot{x} + \dot{x}^T P x \\
&= x^TP(Ax + g(x)) + (Ax + g(x))^TPx \\
&= x^T(PA+ A^T P)x + 2x^TPg(x) \\
&= -x^TQx + 2x^TPg(x) \\
\end{align*}
The second term is (in general) indefinite, but since we know that \|g(x)\| / \|x\| \to 0 as \|x\| \to 0, we can make it arbitrarily small relative to the first term in a ball around the origin. This is mathematically stated as: for any \gamma > 0, there exists r > 0 such that
\|g(x)\| < \gamma \|x\|, \quad \forall \|x\| < r
We can use this to bound our previous expression for \dot{V}(x): \begin{align*}
\dot{V}(x) &= -x^TQx + 2x^TPg(x) \\
&< -x^TQx + 2\gamma \|P\| \|x\|^2, \quad \forall \|x\| < r
\end{align*}
This can be further bounded by observing the bounds on x^TQx:
\lambda_{\min}(Q) \|x\|^2 \le x^T Q x \le \lambda_{\max}(Q)\|x\|^2
Finally, since this gives \dot{V}(x) < -\left(\lambda_{\min}(Q) - 2\gamma\|P\|\right)\|x\|^2 for all \|x\| < r, we can choose \gamma < \frac{\lambda_{\min}(Q)}{2\|P\|} so that \dot{V}(x) is negative definite in a ball of radius r(\gamma) around the origin. We can then appeal to Lyapunov’s Stability Theorem (previous lecture) to conclude (local) asymptotic stability.
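To make the constants in the proof concrete, here is a small numerical check (our addition) computing the threshold \lambda_{\min}(Q)/(2\|P\|) for the damped pendulum example above with Q = I:
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)  # solves A.T @ P + P @ A = -Q

# For symmetric P, the induced 2-norm is the largest eigenvalue
lam_min_Q = np.min(np.linalg.eigvalsh(Q))
norm_P = np.linalg.norm(P, 2)
print(f"gamma must satisfy gamma < {lam_min_Q / (2 * norm_P):.4f}")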
Region of Attraction
The region of attraction of the origin is the set of initial states whose trajectories converge to it:
R_A = \{x : \phi(t, x) \to 0 \text{ as } t \to \infty\}
The region of attraction “quantifies” local asymptotic stability; global asymptotic stability corresponds to R_A = \R^n.
If x = 0 is asymptotically stable, then its region of attraction is an open, connected, invariant set. Moreover, the boundary is formed by trajectories.
Example: van der Pol system in reverse time:
\begin{split}
\dot x_1 &= -x_2 \\
\dot x_2 &= x_1 - x_2 + x_2^3
\end{split}
The boundary is the (unstable) limit cycle. Trajectories starting within the limit cycle2 converge to the origin.
2 A limit cycle is an isolated periodic orbit.
Show code
import numpy as np
import matplotlib.pyplot as plt

# Define the system
def vdp_reverse(t, x):
    x1, x2 = x
    dx1 = -x2
    dx2 = x1 - x2 + x2**3
    return np.array([dx1, dx2])

# Create grid for vector field
x1 = np.linspace(-3, 3, 20)
x2 = np.linspace(-3, 3, 20)
X1, X2 = np.meshgrid(x1, x2)
U = -X2
V = X1 - X2 + X2**3

# Plot
fig, ax = plt.subplots(figsize=(6, 4))
ax.streamplot(X1, X2, U, V, color='gray', density=1.5, linewidth=1)

# Plot trajectories
from scipy.integrate import solve_ivp
max_time = 100
times = np.linspace(0, max_time, 1000)

# Generate a grid of initial conditions
initial_conditions = []
for r in np.linspace(0.3, 2.5, 5):
    for theta in np.linspace(0, 2*np.pi, 12, endpoint=False):
        x0 = r * np.array([np.cos(theta), np.sin(theta)])
        initial_conditions.append(x0)

# Integrate trajectories from all initial conditions
for x0 in initial_conditions:
    sol = solve_ivp(vdp_reverse, (0, max_time), x0, t_eval=times, dense_output=True)
    ax.plot(sol.y[0], sol.y[1], 'b-', linewidth=0.8, alpha=0.7)

ax.set_xlabel('$x_1$')
ax.set_ylabel('$x_2$')
ax.set_title('Phase Portrait: van der Pol (Reverse Time)')
ax.grid(True, alpha=0.3)
ax.set_xlim(-3, 3)
ax.set_ylim(-3, 3)
plt.show()
Estimating the Region of Attraction with a Lyapunov Function
Suppose V is positive definite and \dot V(x) < 0 on D \setminus \{0\}. Any bounded sublevel set \{V(x) \leq c\} contained in D is invariant, and trajectories starting in it converge to the origin. Therefore, we can use the largest such sublevel set of V that fits into D as an (under)approximation of the region of attraction.
This estimate depends on the choice of Lyapunov function. A simple (but often conservative) choice is V(x) = x^TPx, where P is obtained from the linearization as above.
Example
Consider again the van der Pol system in reverse time (a slightly different form this time): \begin{align*}
\dot x_1 &= -x_2 \\
\dot x_2 &= x_1 + (x_1^2 - 1)x_2
\end{align*}
Show code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Define the Van der Pol oscillator (reverse time)
def van_der_pol_reverse(t, x, mu):
    dxdt = [-x[1], x[0] + mu*(x[0]**2 - 1) * x[1]]
    return dxdt

# Parameters
mu = 1.0
t_span = (0, 20)
dt = 0.01

# Initial conditions
initial_conditions = [[0, 2], [0, -2], [2, 0], [-2, 0]]
initial_conditions += [[2.1, 0], [-2.1, 0], [0, 2.1], [0, -2.1]]

# Create vector field
x1 = np.linspace(-4, 4, 20)
x2 = np.linspace(-4, 4, 20)
X1, X2 = np.meshgrid(x1, x2)
U = -X2
V = X1 + (X1**2 - 1) * X2

plt.figure(figsize=(8, 8))
plt.streamplot(X1, X2, U, V, color='gray', density=1.5, linewidth=1)

for ic in initial_conditions:
    sol = solve_ivp(van_der_pol_reverse, t_span, ic,
                    t_eval=np.arange(t_span[0], t_span[1], dt),
                    args=(mu,), dense_output=True)
    plt.plot(sol.y[0], sol.y[1], 'b-', linewidth=1.5)
    num_arrows = 5
    arrow_indices = np.linspace(0, len(sol.y[0]) - 2, num_arrows, dtype=int)
    for idx in arrow_indices:
        plt.arrow(sol.y[0][idx], sol.y[1][idx],
                  sol.y[0][idx+1] - sol.y[0][idx],
                  sol.y[1][idx+1] - sol.y[1][idx],
                  head_width=0.15, head_length=0.15, fc='b', ec='b')

plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title('Van der Pol Oscillator Trajectories for Different Initial Conditions')
plt.grid()
plt.xlim([-4, 4])
plt.ylim([-4, 4])
plt.show()
We will leverage the Lyapunov equation to estimate the region of attraction.
First, we linearize the system about the origin: \begin{align*}
A = \left. \frac{\partial f(x)}{\partial x} \right|_{x=0} = \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix}
\end{align*}
We can observe that A is Hurwitz, with eigenvalues \lambda = -0.5 \pm 0.866i. We can then find a P such that A^TP + PA = -Q where Q is positive definite.
Selecting Q = I, the unique solution for P (the solution to the Lyapunov equation PA + A^TP = -I) is: \begin{align*}
P = \begin{bmatrix} 1.5 & -0.5 \\ -0.5 & 1 \end{bmatrix}
\end{align*}
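We can quickly verify this solution numerically (a sanity check using scipy):
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, -1.0],
              [1.0, -1.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))  # solves A.T @ P + P @ A = -I

print(P)  # [[ 1.5 -0.5], [-0.5  1. ]]
print("solves Lyapunov eqn:", np.allclose(P @ A + A.T @ P, -np.eye(2)))
print("P > 0:", bool(np.all(np.linalg.eigvalsh(P) > 0)))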
Thus, the quadratic function V(x) = x^TPx is a Lyapunov function for the system in a certain neighborhood of the origin.
Since we want to estimate the region of attraction, we need to find the largest sublevel set \Omega_c = \{x : V(x) \leq c\} that fits into the region where \dot{V} is negative definite. Computing \dot{V} along trajectories of the nonlinear system,
\begin{align*}
\dot{V}(x) &= -\|x\|_2^2 - \left(x_1^3 x_2 - 2x_1^2 x_2^2\right) \\
&\leq -\|x\|_2^2 + \frac{\sqrt{5}}{2}\|x\|_2^4,
\end{align*}
where we used |x_1^3 x_2 - 2x_1^2 x_2^2| = |x_1|\,|x_1 x_2|\,|x_1 - 2x_2| \leq \|x\|_2 \cdot \tfrac{1}{2}\|x\|_2^2 \cdot \sqrt{5}\|x\|_2. Thus, we can conclude that \dot{V}(x) is negative definite within the ball B_r around the origin with r^2 = \frac{2}{\sqrt{5}} = 0.8944.
Finally, we can find a level set within this open ball (\Omega_c \subset B_{r}(0)) by choosing: \begin{align*}
c < \min_{\|x\|_2 = r} V(x) = \lambda_{\min}(P)r^2
\end{align*} which gives us: \begin{align*}
c < \lambda_{\min}(P)\, r^2 = 0.691 \times 0.8944 \approx 0.618
\end{align*}
The set \Omega_c with c = 0.6171 < 0.618 is then an underapproximation of the region of attraction for the origin.
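The numbers above follow from a short computation (reproduced here for completeness):
import numpy as np

P = np.array([[1.5, -0.5],
              [-0.5, 1.0]])

r2 = 2 / np.sqrt(5)                        # dV/dt < 0 on the ball ||x||_2^2 < r^2
lam_min_P = np.min(np.linalg.eigvalsh(P))  # smallest eigenvalue of P
print(f"r^2 = {r2:.4f}, lambda_min(P) = {lam_min_P:.4f}, bound = {lam_min_P * r2:.4f}")
# r^2 = 0.8944, lambda_min(P) = 0.6910, bound = 0.6180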
Show code
x = np.linspace(-4, 4, 400)
y = np.linspace(-4, 4, 400)
X, Y = np.meshgrid(x, y)
U = -Y
V = mu * (X**2 - 1) * Y + X  # mu = 1.0 from the previous cell

# Define the function for dotV (computed above)
def dotV(X, Y):
    return -(X**2 + Y**2) - (X**3 * Y - (2 * X**2 * Y**2))

# V(x) = x^T P x written out in components
def Lyap(X, Y):
    P = np.array([[1.5, -0.5], [-0.5, 1]])
    return P[0, 0] * X**2 + 2 * P[0, 1] * X * Y + P[1, 1] * Y**2

# Calculate dotV and V values on the grid
dotV_values = dotV(X, Y)
Lyap_values = Lyap(X, Y)

# Plot the contour where dotV = 0 and the level set V = 0.6171
plt.streamplot(X, Y, U, V, color='b')
plt.contour(X, Y, dotV_values, levels=[0], colors='k')
plt.contour(X, Y, Lyap_values, levels=[0.6171], colors='r')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Contours of $\dot{V}(x) = 0$ with $V(x) = 0.6171$')
plt.grid()
plt.show()
A less conservative estimate can be obtained by plotting contours of \dot{V}(x) = 0 and V(x) = c for increasing values of c until we find the largest level set that fits into the domain \{\dot{V}(x) < 0\}.
Show code
def is_within_contour(level_set_value, dotV_values, Lyap_values):
    # Check if the sublevel set lies inside the region where dotV < 0
    return np.all(dotV_values[Lyap_values <= level_set_value] < 0)

# Initial level set value
level_set_value = 0.6171
increment = 0.01

# Iteratively grow the level set value while it remains valid
while is_within_contour(level_set_value, dotV_values, Lyap_values):
    level_set_value += increment

# The largest level set value that stays within the region
largest_level_set_value = level_set_value - increment
print(f"Largest level set value: {largest_level_set_value}")

# Plot the result
plt.streamplot(X, Y, U, V, color='b')
plt.contour(X, Y, dotV_values, levels=[0], colors='k')
plt.contour(X, Y, Lyap_values, levels=[largest_level_set_value], colors='r')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.title(r'Largest Level Set within $\{\dot{V}(x) < 0\}$ with $V(x) = $'
          + str(round(largest_level_set_value, 2)))
plt.grid()
plt.show()