Lecture 8 - Introduction to Lyapunov Stability Theory
Published
February 5, 2026
Based on notes created by Sam Coogan and Murat Arcak. Licensed under a “Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License”
Additional Reading
Khalil, Chapter 4.5
Sastry, Chapter 5
Overview
Define Lyapunov stability notions
Lyapunov stability theorems
Lyapunov Stability Theory
Consider a time-invariant dynamical system \dot{x} = f(x), \quad f(0) = 0
Lyapunov (1857–1918)
If the equilibrium of interest is x^* \neq 0, we can always shift it to the origin through the transformation \tilde{x} = x - x^*, yielding \dot{\tilde{x}} = f(\tilde{x} + x^*) \triangleq \tilde{f}(\tilde{x}), which satisfies \tilde{f}(0) = 0. We may therefore assume the equilibrium is at x = 0 without loss of generality.
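For example, the scalar system \dot{x} = -(x - 1) has equilibrium x^* = 1; in the shifted coordinate \tilde{x} = x - 1 it becomes \dot{\tilde{x}} = -\tilde{x}, whose equilibrium is the origin.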
The equilibrium x = 0 is stable if for each \epsilon > 0, there exists a \delta > 0 such that
\underbrace{\|x(0)\| < \delta}_{\textrm{If you start within some }\delta} \implies \underbrace{\|x(t)\| < \epsilon}_{\textrm{You stay within }\epsilon}, \quad \forall t \geq 0
It is unstable if it is not stable.
It is asymptotically stable if stable and x(t) \to 0 for all x(0) in a neighborhood of x = 0.
It is globally asymptotically stable if stable and x(t) \to 0 for all x(0) \in \mathbb{R}^n.
Note: x(t) \to 0 does not necessarily imply stability. One can construct an example where trajectories converge to the origin but only after a large detour that violates the stability definition.
This example is a homoclinic orbit. The equilibrium at the origin is unstable, even though all trajectories starting near the origin converge to the origin.
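To build intuition for these definitions, here is a small Python sketch; the two example systems (an undamped and a damped oscillator) are illustrative choices, not taken from the notes. It simulates trajectories and reports the largest excursion \|x(t)\| and the final \|x(T)\|: both systems keep small initial conditions close to the origin (stability), but only the damped one also drives trajectories to the origin (asymptotic stability).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative systems (not from the lecture):
# undamped oscillator -> stable but not asymptotically stable
# damped oscillator   -> asymptotically stable
def undamped(t, x):
    return [x[1], -x[0]]

def damped(t, x):
    return [x[1], -x[0] - 0.5 * x[1]]

def excursion(f, x0, T=50.0):
    """Return (max ||x(t)||, ||x(T)||) along the trajectory from x0."""
    sol = solve_ivp(f, (0.0, T), x0, max_step=0.01)
    norms = np.linalg.norm(sol.y, axis=0)
    return norms.max(), norms[-1]

for name, f in [("undamped", undamped), ("damped", damped)]:
    peak, final = excursion(f, [0.1, 0.0])
    # Both stay close to the origin (stability); only the damped
    # system also satisfies x(t) -> 0 (asymptotic stability).
    print(f"{name}: max ||x(t)|| = {peak:.3f}, ||x(T)|| = {final:.3f}")
```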
Theorem. Let D be an open, connected subset of \mathbb{R}^n containing the origin. If there exists a C^1 function V: D \to \mathbb{R} such that
V(0) = 0, \quad \textrm{ and } V(x) > 0 \quad \forall x \in D \setminus \{0\}
\textrm{ (positive definite)}
and
\dot{V}(x) := \nabla V(x)^T f(x) \leq 0 \quad \forall x \in D \textrm{ (negative semi-definite)}
then x = 0 is stable.
If \dot{V}(x) < 0 for all x \in D \setminus \{0\} (negative definite), then x = 0 is asymptotically stable.
If, in addition, D = \mathbb{R}^n and V(x) \to \infty \textrm{ as } \|x\| \to \infty \textrm{ (radially unbounded)}
then x = 0 is globally asymptotically stable.
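The hypotheses of the theorem can be checked symbolically for a given candidate V. The following SymPy sketch does so for a hypothetical system and the candidate V(x) = x_1^2 + x_2^2; both the system and the candidate are illustrative choices, not examples from the notes.

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)

# Hypothetical system (illustrative only)
f = sp.Matrix([-x1 + x2, -x1 - x2**3])

# Candidate Lyapunov function: V(0) = 0, V(x) > 0 elsewhere, radially unbounded
V = x1**2 + x2**2
gradV = sp.Matrix([sp.diff(V, x1), sp.diff(V, x2)])

# Vdot(x) = grad V(x)^T f(x)
Vdot = sp.simplify((gradV.T * f)[0, 0])
print("Vdot =", Vdot)  # expect -2*x1**2 - 2*x2**4

# Negative definiteness away from the origin: -Vdot is a sum of even powers,
# so it is positive whenever (x1, x2) != 0, and the theorem gives global
# asymptotic stability for this illustrative system.
print("-Vdot =", sp.simplify(-Vdot))
```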
Proof. (Sketch) The sets \Omega_c \triangleq \{ x \mid V(x) \leq c \} for constants c are called level sets of V and are positively invariant because \dot{V}(x) = \nabla V(x)^T f(x) \leq 0.
Stability: choose a level set inside the ball of radius \epsilon and a ball of radius \delta inside this level set. Trajectories starting in \mathcal{B}_{\delta} cannot leave \mathcal{B}_{\epsilon} since they remain inside the level set.
Asymptotic Stability: since V(x(t)) is decreasing and bounded below by 0, we conclude
V(x(t)) \to c \geq 0
We will show c = 0 (i.e., x(t) \to 0) by contradiction. Suppose c \neq 0. Then the trajectory remains in the set \{x \mid c \leq V(x) \leq V(x_0)\}, which excludes a neighborhood of the origin. Define
\gamma \triangleq -\max_{c \leq V(x) \leq V(x_0)} \dot{V}(x),
where the maximum exists because it is evaluated over a bounded1 set, and \gamma is positive because \dot{V}(x) < 0 away from x=0.
1 By positive definiteness of V, the level sets \{x\mid V(x) \leq \textrm{constant}\} are bounded when the constant is sufficiently small. Since we are proving local asymptotic stability, we can assume x_0 is close enough to the origin that V(x_0) is sufficiently small.
Then,
V(x(t)) = V(x_0) + \int_0^t \dot{V}(x(s))\, ds \leq V(x_0) - \gamma t,
which implies V(x(t)) < 0 for t > \frac{V(x_0)}{\gamma}, a contradiction because V \geq 0. Therefore, c=0, which implies x(t) \to 0.
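To see the contradiction argument in action numerically, the sketch below reuses the hypothetical system from the previous code block, estimates \gamma on a grid over the set \{x \mid c \leq V(x) \leq V(x_0)\}, and checks that V(x(t)) \leq V(x_0) - \gamma t holds for as long as the trajectory stays outside the level set \{V \leq c\}; the system, the value of c, and the grid are all illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Same hypothetical system and V as in the earlier SymPy sketch (illustrative only).
f = lambda t, x: [-x[0] + x[1], -x[0] - x[1] ** 3]
V = lambda x1, x2: x1**2 + x2**2
Vdot = lambda x1, x2: -2 * x1**2 - 2 * x2**4

x0 = np.array([1.0, 1.0])
c = 0.1  # illustrative lower level for the contradiction argument

# gamma = -max{ Vdot(x) : c <= V(x) <= V(x0) }, estimated on a grid
g1, g2 = np.meshgrid(np.linspace(-2, 2, 801), np.linspace(-2, 2, 801))
mask = (V(g1, g2) >= c) & (V(g1, g2) <= V(*x0))
gamma = -Vdot(g1, g2)[mask].max()

sol = solve_ivp(f, (0.0, 10.0), x0, max_step=0.01)
Vt = V(sol.y[0], sol.y[1])
outside = Vt >= c  # the bound only applies while the trajectory is outside {V <= c}
bound = V(*x0) - gamma * sol.t
print("estimated gamma:", gamma)
print("V(x(t)) <= V(x0) - gamma*t wherever it applies:",
      bool(np.all(Vt[outside] <= bound[outside] + 1e-9)))
```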
Global Asymptotic Stability: Why do we need radial unboundedness?
Example:
V(x) = \frac{x_1^2}{1+x_1^2} + x_2^2
Setting x_2 = 0 and letting x_1 \to \infty gives V(x) \to 1, so V is not radially unbounded. Consequently, \Omega_c is not a bounded set for c \geq 1, and x_1(t) may grow unbounded even while V(x(t)) is decreasing.
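A quick way to see the problem is to plot the level sets of this V: for c < 1 they are bounded, but for c \geq 1 the sets \Omega_c become unbounded strips in the x_1 direction. A minimal matplotlib sketch (grid ranges and contour levels are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# V(x) = x1^2 / (1 + x1^2) + x2^2 from the lecture; grid and levels are arbitrary.
x1 = np.linspace(-20, 20, 400)
x2 = np.linspace(-2, 2, 400)
X1, X2 = np.meshgrid(x1, x2)
V = X1**2 / (1 + X1**2) + X2**2

# Level sets {V <= c}: bounded for small c, but for c >= 1 the contours no longer close.
cs = plt.contour(X1, X2, V, levels=[0.25, 0.5, 0.9, 1.0, 1.5])
plt.clabel(cs, inline=True)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.title("Level sets of a positive definite but not radially unbounded V")
plt.show()
```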
Example: Consider the scalar system \dot{x} = -g(x), \quad x \in \mathbb{R}, ~xg(x) > 0 ~\forall x \neq 0
V(x) = \frac{1}{2}x^2 is positive definite and radially unbounded.
\dot{V}(x) = x \dot{x} = -x g(x) < 0 ~\forall x \neq 0 (negative definite). Therefore, x=0 is globally asymptotically stable.
If xg(x) > 0 only in some domain (-b,c) \setminus \{0\}, then we can only conclude local asymptotic stability of x=0 on the domain D = (-b,c).
If there are other equilibria where g(x) = 0 then we can also only conclude local asymptotic stability of x=0.
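As a numerical sanity check of this argument, one can pick a particular g satisfying the sign condition, say the hypothetical choice g(x) = x^3, simulate \dot{x} = -g(x), and verify that V(x) = \frac{1}{2}x^2 decreases to zero along the trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical choice of g; any g with x*g(x) > 0 for all x != 0 would do.
g = lambda x: x**3

sol = solve_ivp(lambda t, x: [-g(x[0])], (0.0, 20.0), [2.0], max_step=0.01)
V = 0.5 * sol.y[0] ** 2

# V should be (numerically) non-increasing along the trajectory and approach 0.
print("largest increase of V between steps:", np.diff(V).max())
print(f"V(x(0)) = {V[0]:.4f}, V(x(T)) = {V[-1]:.6f}")
```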
Example: Consider the nonlinear system2
\begin{align*}
\dot{x}_1 &= x_2 \\
\dot{x}_2 &= -a x_2 - g(x_1), \quad a \geq 0, xg(x) > 0 ~\forall x \in (-b,c) \setminus \{0\}
\end{align*}
2 The pendulum is a special case of this system with g(x) = \sin(x), which has equilibrium points at x_1 = 0 and x_1 = \pm \pi (with x_2 = 0). Therefore, we know that global asymptotic stability is not possible for this system, and we will need to restrict the domain to x_1 \in (-\pi, \pi).
The choice V(x) = \frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 doesn’t work because \dot{V}(x) is sign indefinite (you can try this for yourself).
Instead we will consider the function
V(x) = \int_0^{x_1} g(y) dy + \frac{1}{2} x_2^2.
This function is positive definite on D = \{x \mid x_1 \in (-b,c)\}. Checking its derivative along trajectories:
\dot{V}(x) = g(x_1)\dot{x}_1 + x_2 \dot{x}_2 = g(x_1) x_2 + x_2\big(-a x_2 - g(x_1)\big) = -a x_2^2 \leq 0.
Therefore, we can conclude stability of our equilibrium within D.
If a = 0, the origin is not asymptotically stable, since
\dot{V}(x) = 0 \quad \forall x \implies V(x(t)) = V(x_0) \quad \forall t \geq 0,
so trajectories starting with V(x_0) > 0 cannot converge to the origin.
If a > 0, we will in fact be able to conclude asymptotic stability, but we will need the LaSalle-Krasovskii Invariance Principle, which is covered in the next lecture. This is because \dot{V}(x) = -a x_2^2 is only negative semi-definite: it vanishes whenever x_2 = 0, not just at the origin, so we cannot conclude that \dot{V}(x) < 0 ~\forall x \in D \setminus \{0\}.
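As a final numerical check, the sketch below simulates the pendulum case g(x) = \sin(x) from the footnote (the damping value a = 0.5 is an illustrative choice) and confirms that V is constant along trajectories when a = 0 and decreases toward zero when a > 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

def pendulum(a):
    # Pendulum case from the footnote: g(x) = sin(x); a is the damping coefficient.
    return lambda t, x: [x[1], -a * x[1] - np.sin(x[0])]

def V(x1, x2):
    # V(x) = int_0^{x1} sin(y) dy + x2^2/2 = 1 - cos(x1) + x2^2/2
    return 1.0 - np.cos(x1) + 0.5 * x2**2

x0 = [1.0, 0.0]  # starts inside the domain x1 in (-pi, pi)
for a in (0.0, 0.5):  # a = 0.5 is an illustrative damping value
    sol = solve_ivp(pendulum(a), (0.0, 30.0), x0, max_step=0.01)
    Vt = V(sol.y[0], sol.y[1])
    # Expect V roughly constant for a = 0 and decreasing toward 0 for a > 0.
    print(f"a = {a}: V(x(0)) = {Vt[0]:.4f}, V(x(T)) = {Vt[-1]:.4f}")
```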