Lecture 21 - Convex Optimization Problems

Published

April 2, 2026

Based on notes created by Sam Coogan and Murat Arcak. Licensed under a “Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License”

Motivating Examples

Before developing the theory, we look at two problems from nonlinear control that are naturally cast as convex optimization problems.

Minimum-Effort Stabilization via CLF

Given the system \(\dot{x} = f(x) + g(x)u\) and a Control Lyapunov Function (CLF) \(V(x)\), we can synthesize a minimum-effort controller by solving

\[k(x) = \arg\min_{u} \;\|u\|^2 \quad \text{subject to} \quad \frac{\partial V}{\partial x}(f(x) + g(x)u) \leq -\varepsilon V(x)\]

where \(\varepsilon > 0\) is user-chosen. We use \(\leq -\varepsilon V(x)\) rather than a strict inequality because strict inequality constraints are not tractable in optimization. This is a quadratic program (QP): the cost is quadratic and the CLF constraint is affine in \(u\).

Finding Polynomial Lyapunov Functions

Given \(\dot{x} = f(x)\), we can search for a Lyapunov function by solving

\[c^* = \arg\min_{c} \; 0 \quad \text{s.t.} \quad V(x) \geq \varepsilon_1 A(x) \;\forall x, \quad \frac{\partial V}{\partial x}f(x) \leq -\varepsilon_2 V(x) \;\forall x\]

e.g., for \(x \in \mathbb{R}^2\), \(V(x) = c_1 x_1^4 + c_2 x_1^3 x_2 + \cdots + c_n x_2^4\), so that \(V\) is linear in the coefficient vector \(c\) and both constraints are affine in \(c\) for each fixed \(x\). Here, \(A(x)\) is a positive definite function and the zero cost means this is a feasibility problem. The “\(\forall x\)” quantifier introduces infinitely many constraints, which are handled in practice via sum-of-squares (SOS) or SDP relaxations.
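Before invoking an SOS solver, it is wise to sanity-check a candidate numerically. The sketch below uses an assumed toy system \(\dot{x} = -x\) and candidate \(V(x) = \|x\|^2\) with \(A(x) = \|x\|^2\) (all chosen purely for illustration) and checks both constraints on a finite grid — a heuristic spot check, not a certificate:

```python
import numpy as np

# Heuristic grid check (not an SOS certificate) of the two Lyapunov
# constraints for the assumed system xdot = -x and candidate V(x) = ||x||^2,
# with A(x) = ||x||^2.
eps1, eps2 = 1.0, 0.5

def f(x):
    return -x                      # toy dynamics

def V(x):
    return x @ x                   # candidate Lyapunov function

def Vdot(x):
    return 2 * x @ f(x)            # dV/dx f(x) along trajectories

grid = np.linspace(-2, 2, 41)
ok = True
for x1 in grid:
    for x2 in grid:
        x = np.array([x1, x2])
        ok &= V(x) >= eps1 * (x @ x) - 1e-9          # V >= eps1 * A
        ok &= Vdot(x) <= -eps2 * V(x) + 1e-9         # Vdot <= -eps2 * V
print(ok)   # True on this grid
```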

Optimization Problems

We consider problems of the form

\[\begin{aligned} \text{minimize}_{x} \quad & f_0(x) \\ \text{subject to} \quad & f_i(x) \leq 0, \quad i = 1, \ldots, m \end{aligned}\]

where \(x \in \mathbb{R}^n\) is the optimization variable, \(f_0\) is the objective function, and the \(f_i\) are constraint functions.

To maximize \(\tilde{f}_0(x)\), set \(f_0 = -\tilde{f}_0\). Equality constraints \(h(x) = 0\) are encoded as the pair \(h(x) \leq 0\) and \(-h(x) \leq 0\).

The optimal value is the smallest value of \(f_0\) on the feasible set; a point achieving it is an optimal point.

Convex Functions and Sets

Definition: Convex Function

\(f : \mathbb{R}^n \to \mathbb{R}\) is convex if for all \(x, y\) and all \(0 \leq \theta \leq 1\):

\[f(\theta x + (1-\theta)y) \leq \theta f(x) + (1-\theta)f(y)\]

Interactive — drag the sliders to explore \(f(\theta x + (1-\theta)y) \leq \theta f(x) + (1-\theta)f(y)\): the red dot (function value at the blend) always lies at or below the green diamond (chord value).

Show code
Plotly = require("plotly.js-dist@2")

viewof x_val = Inputs.range([-2, 1], {value: -1.5, step: 0.05,
  label: html`<b>x</b> &mdash; position of p&#x2081;`})
viewof y_val = Inputs.range([-2, 1], {value: 0.0, step: 0.05,
  label: html`<b>y</b> &mdash; position of p&#x2082;`})
viewof theta_val = Inputs.range([0, 1], {value: 0.5, step: 0.05,
  label: html`<b>&theta;</b> &mdash; blend (0&nbsp;=&nbsp;p&#x2082;, 1&nbsp;=&nbsp;p&#x2081;)`})

{
  const f = t => t * t + 0.2;
  const N = 200;
  const ts = Array.from({length: N}, (_, i) => -2.1 + i * 3.2 / (N - 1));
  const ys = ts.map(f);

  const fp1 = f(x_val);
  const fp2 = f(y_val);
  const bx     = theta_val * x_val + (1 - theta_val) * y_val;
  const z_surf  = f(bx);
  const z_chord = theta_val * fp1 + (1 - theta_val) * fp2;
  const gap     = (z_chord - z_surf).toFixed(4);

  const data = [
    {
      type: 'scatter', x: ts, y: ys, mode: 'lines',
      line: {color: 'steelblue', width: 2.5},
      name: 'f(t) = t\u00b2 + 0.2'
    },
    {
      type: 'scatter',
      x: [x_val, y_val], y: [fp1, fp2],
      mode: 'lines+markers',
      line: {color: 'black', width: 1.5},
      marker: {color: 'black', size: 8},
      name: 'Chord'
    },
    {
      type: 'scatter', x: [bx], y: [z_surf], mode: 'markers',
      marker: {color: 'red', size: 11, symbol: 'circle'},
      name: `f(\u03b8p\u2081+(1-\u03b8)p\u2082) = ${z_surf.toFixed(3)}`
    },
    {
      type: 'scatter', x: [bx], y: [z_chord], mode: 'markers',
      marker: {color: 'green', size: 11, symbol: 'diamond-open',
               line: {width: 3, color: 'green'}},
      name: `\u03b8f(p\u2081)+(1-\u03b8)f(p\u2082) = ${z_chord.toFixed(3)}`
    },
    {
      type: 'scatter', x: [bx, bx], y: [z_surf, z_chord], mode: 'lines',
      line: {color: 'purple', width: 2, dash: 'dot'},
      name: `gap = ${gap}`
    }
  ];

  const layout = {
    title: 'Convex function f(t) = t\u00b2 + 0.2: chord lies above graph',
    xaxis: {title: 't', range: [-2.2, 1.2],
            zeroline: true, zerolinewidth: 0.8, zerolinecolor: '#888'},
    yaxis: {title: 'f(t)', range: [-0.15, 2.6]},
    legend: {x: 0.01, y: 0.99, bgcolor: 'rgba(255,255,255,0.85)'},
    margin: {l: 50, r: 20, t: 45, b: 40},
    height: 370,
    annotations: [
      {x: x_val, y: fp1, text: 'p\u2081', showarrow: true,
       arrowhead: 2, ax: 25, ay: -35, font: {size: 13}},
      {x: y_val, y: fp2, text: 'p\u2082', showarrow: true,
       arrowhead: 2, ax: -25, ay: -35, font: {size: 13}}
    ]
  };

  const div = document.createElement('div');
  Plotly.newPlot(div, data, layout, {responsive: true, displayModeBar: false});
  return div;
}

First- and second-order tests. When \(f\) is once differentiable, \(f\) is convex iff

\[f(y) \geq f(x) + \nabla f(x)^\top (y - x) \quad \text{for all } x, y\]

When \(f\) is twice differentiable, \(f\) is convex iff \(\nabla^2 f(x) \succeq 0\) for all \(x\).

Key convexity facts (the following will be used throughout the course):

  1. Linear functions are convex: \(f(\theta x + (1-\theta)y) = \theta f(x) + (1-\theta)f(y)\) (equality holds).
  2. Quadratic functions: \(f(x) = \frac{1}{2}x^\top Px + q^\top x + r\) is convex iff \(P \succeq 0\), since \(\nabla^2 f = P\).
  3. Norms are convex: \(\|\theta x + (1-\theta)y\| \leq \theta\|x\| + (1-\theta)\|y\|\) by the triangle inequality.
  4. Affine composition: if \(f\) is convex then \(g(x) = f(Ax+b)\) is convex for any \(A, b\).
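As a quick numerical illustration of fact 2 (with randomly generated placeholder data), the chord inequality \(f(\theta x + (1-\theta)y) \leq \theta f(x) + (1-\theta)f(y)\) can be checked directly for a PSD quadratic and seen to fail for an indefinite one:

```python
import numpy as np

rng = np.random.default_rng(0)

def chord_gap(P, q, x, y, theta):
    """theta*f(x) + (1-theta)*f(y) - f(theta*x + (1-theta)*y)
    for f(z) = 0.5 z^T P z + q^T z; convexity means the gap is >= 0."""
    f = lambda z: 0.5 * z @ P @ z + q @ z
    return theta * f(x) + (1 - theta) * f(y) - f(theta * x + (1 - theta) * y)

n = 3
M = rng.standard_normal((n, n))
P_psd = M @ M.T                      # M M^T is always PSD
P_indef = np.diag([1.0, -1.0, 0.0])  # indefinite, so f is not convex
q = rng.standard_normal(n)
x, y = rng.standard_normal(n), rng.standard_normal(n)

print(chord_gap(P_psd, q, x, y, 0.3) >= -1e-9)    # True: PSD => convex
e2 = np.array([0.0, 1.0, 0.0])                    # negative-curvature direction
print(chord_gap(P_indef, q, e2, -e2, 0.5) < 0)    # True: convexity violated
```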

Definition: Convex Set

A set \(C\) is convex if \(x_1, x_2 \in C\) implies \(\theta x_1 + (1-\theta)x_2 \in C\) for all \(0 \leq \theta \leq 1\).

Show code
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

fig, ax = plt.subplots(figsize=(3, 3))
triangle = Polygon([(0, 0), (1, 3), (3, 1)], closed=True,
                   facecolor='#c5d9f0', edgecolor='#3b5f8a', linewidth=1.5)
ax.add_patch(triangle)
ax.set_xlim(-0.5, 4)
ax.set_ylim(-0.5, 4)
ax.set_aspect('equal')
ax.axis('off')
ax.set_title('Convex set', fontsize=12)
plt.tight_layout()
plt.show()

Examples of Convex Sets:

  1. Probability simplex: \(\{x \in \mathbb{R}^n : x \geq 0,\, \mathbf{1}^\top x = 1\}\) is convex.
  2. PSD matrices: the set of symmetric positive semidefinite matrices is convex, since \(x^\top(\theta_1 X_1 + \theta_2 X_2)x = \theta_1 \underbrace{x^\top X_1 x}_{\geq 0} + \theta_2 \underbrace{x^\top X_2 x}_{\geq 0} \geq 0\).
  3. Sublevel sets: any \(\alpha\)-sublevel set \(C_\alpha = \{x : f(x) \leq \alpha\}\) of a convex function is convex. (The converse does not hold.)

Convex Optimization

The optimization problem above is convex if \(f_0\) and all \(f_i\)’s are convex, in which case the feasible set is also convex. Equality constraints are permitted only if affine (of the form \(Ax + b = 0\)), since requiring both \(f_i\) and \(-f_i\) to be convex forces \(f_i\) to be affine.

Example: Least Squares is a Convex QP

\(\text{minimize}_x \|Ax - b\|_2^2\): the norm is convex, squaring preserves convexity, and composition with \(x \mapsto Ax - b\) (affine) preserves convexity. Expanding gives \(\|Ax-b\|_2^2 = x^\top A^\top Ax - 2b^\top Ax + b^\top b\) with \(A^\top A \succeq 0\), confirming it is a QP. Closed-form solution: \(x^* = (A^\top A)^{-1}A^\top b\) when \(A\) has full column rank (so that \(A^\top A \succ 0\)).
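A small numerical check (with random placeholder data \(A, b\)) that the normal-equations formula agrees with a generic least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))    # tall matrix; full column rank w.p. 1
b = rng.standard_normal(20)

# Closed form via the normal equations A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Generic least-squares solver (QR/SVD based, numerically preferable)
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))   # True
```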

Theorem: Optimality Condition

For a convex optimization problem, a feasible point \(x\) is optimal if and only if

\[\nabla f_0(x)^\top(y - x) \geq 0 \quad \text{for all feasible } y\]

Proof.

(if) By convexity, \(f_0(y) \geq f_0(x) + \nabla f_0(x)^\top(y-x) \geq f_0(x)\) for all feasible \(y\).

(only if) Suppose \(x\) is optimal but \(\nabla f_0(x)^\top(y-x) < 0\) for some feasible \(y\). Then \(z_\theta = \theta y + (1-\theta)x\) is feasible (convex feasible set), and \(\frac{d}{d\theta}f_0(z_\theta)|_{\theta=0} = \nabla f_0(x)^\top(y-x) < 0\), so \(f_0(z_\theta) < f_0(x)\) for small \(\theta\), contradicting optimality. \(\square\)

For unconstrained problems the condition reduces to \(\nabla f_0(x) = 0\). For \(\text{minimize}_x \frac{1}{2}x^\top Px + q^\top x + r\) with \(P \succeq 0\), this gives \(Px + q = 0\), with three cases:

  1. No solution if \(q \notin \mathrm{Range}(P)\).
  2. Unique solution \(x^* = -P^{-1}q\) if \(P \succ 0\).
  3. Affine solution set \(\{x^* + y : y \in \mathrm{Null}(P)\}\) if \(P\) is singular but \(q \in \mathrm{Range}(P)\).
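The three cases can be illustrated numerically; the matrices below are small hypothetical examples chosen to exercise each case (for case 1, the range test shows \(Px = -q\) has no solution, so the objective is unbounded below):

```python
import numpy as np

# Case 2: P > 0 gives the unique minimizer x* = -P^{-1} q.
P_pd = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([2.0, 1.0])
x_star = -np.linalg.solve(P_pd, q)
print(x_star)   # [-1. -1.]

# Cases 1 and 3: singular P; solvability of Px = -q depends on Range(P).
P_sing = np.array([[1.0, 0.0], [0.0, 0.0]])
q_in = np.array([1.0, 0.0])    # q in Range(P): affine set of minimizers
q_out = np.array([0.0, 1.0])   # q not in Range(P): unbounded below

for qq in (q_in, q_out):
    x_ls, *_ = np.linalg.lstsq(P_sing, -qq, rcond=None)
    residual = np.linalg.norm(P_sing @ x_ls + qq)
    print(residual < 1e-10)    # True for q_in, False for q_out
```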

Important Classes of Convex Optimization Problems

Linear Programs (LP)

\[\text{minimize}_{x}\; c^\top x \quad \text{s.t.} \quad a_i^\top x \leq b_i, ~i =1, \dots,m\]

LPs can be solved very efficiently; when the feasible set is compact, an optimal point is attained at a vertex of the feasible region.
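A minimal sketch using `scipy.optimize.linprog` (assuming SciPy is available; the toy data here are hypothetical) illustrating that the optimum lands on a vertex of the feasible polygon:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize x1 + 2*x2 (i.e., minimize its negative) over the unit box
# intersected with x1 + x2 <= 1.5.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([1.5])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
print(res.x)   # optimum at the vertex (0.5, 1.0)
```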

We can apply this type of optimization to control of the unicycle model. Recall that this model was: \[ \dot{x} = \begin{bmatrix} \cos x_3 & 0 \\ \sin x_3 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} \] with \(x_1\) and \(x_2\) the position, \(x_3\) the heading, \(u_1\) as the forward velocity, and \(u_2\) the angular velocity. We can synthesize a CLF-based controller by solving an LP at each state:

To synthesize a CLF candidate, consider \(V(x) = \frac{1}{2}(x_1^2 + x_2^2)\), which is positive definite, and \[ L_g V = \nabla V^\top g(x) = \begin{bmatrix} x_1 & x_2 & 0 \end{bmatrix} \begin{bmatrix} \cos x_3 & 0\\ \sin x_3 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} x_1\cos x_3 + x_2\sin x_3 & 0 \end{bmatrix} \] This means \(u_2\) never appears in the CLF constraint; the constraint depends only on \(u_1\). We can get around this by designing a separate heading control law. Note, however, that we lose control authority when \(x_1\cos x_3 + x_2\sin x_3 = 0\), i.e., when the unicycle's heading is perpendicular to the line connecting it to the origin.

With this CLF, we can synthesize a minimum-effort controller by solving the following LP at each state:

\[\text{minimize}_{u}\; c^\top u \quad \text{s.t.} \quad L_f V(x) + L_g V(x)\, u \leq -\varepsilon V(x)\] together with box bounds on \(u\) (without them the LP can be unbounded), where, for example, \(c^\top = [1, 0]\) if we only care to minimize forward velocity, or \(c^\top = [0, 1]\) if we only care to minimize angular velocity.

With the implementation of the LP:

import numpy as np
import cvxpy as cp

def wrap_to_pi(angle):
    return (angle + np.pi) % (2 * np.pi) - np.pi

def clf_lp_controller(x, eps=1.0, v_max=1.0, w_max=2.0, k_theta=2.0):
    """
    CLF-based controller for the unicycle using an LP.

    V(x) = 1/2 (x1^2 + x2^2)
    L_f V = 0
    L_g V = [x1 cos(x3) + x2 sin(x3), 0]

    Since u2 does not appear in the CLF constraint, we choose u2 from
    a separate heading feedback law and solve an LP only to find u1.
    """
    x1, x2, x3 = x
    V = 0.5 * (x1**2 + x2**2)
    a = x1 * np.cos(x3) + x2 * np.sin(x3)

    # Heading controller: point toward the origin
    theta_des = np.arctan2(-x2, -x1)
    u2_nom = k_theta * wrap_to_pi(theta_des - x3)
    u2_nom = np.clip(u2_nom, -w_max, w_max)

    # LP variable: the control input u
    u = cp.Variable(2)
    c = np.array([1.0, 0.0])
    objective = cp.Minimize(c @ u)

    constraints = [
        a * u[0] <= -eps * V,     # CLF constraint
        u[1] == u2_nom,             # separate heading law
        u[0] <= v_max,
        u[0] >= -v_max,
        u[1] <= w_max,
        u[1] >= -w_max,
    ]

    prob = cp.Problem(objective, constraints)
    prob.solve(solver=cp.CLARABEL)

    if u.value is None:
        return np.array([0.0, u2_nom])

    return np.array(u.value).flatten()

and a simulation of the system:

Show code
import numpy as np
import matplotlib.pyplot as plt

# Simulate the unicycle with the CLF-LP controller
dt = 0.05
T = 12.0
N = int(T / dt)

x = np.array([3.0, 2.0, -np.pi / 2])   # [x1, x2, x3]
traj = np.zeros((N + 1, 3))
u_hist = np.zeros((N, 2))
traj[0] = x

for k in range(N):
    u = clf_lp_controller(x, eps=0.8, v_max=1.5, w_max=2.5, k_theta=2.0)
    u_hist[k] = u

    # Unicycle dynamics
    xdot = np.array([
        np.cos(x[2]) * u[0],
        np.sin(x[2]) * u[0],
        u[1],
    ])
    x = x + dt * xdot
    traj[k + 1] = x

# Plot trajectory
plt.figure(figsize=(6, 6))
plt.plot(traj[:, 0], traj[:, 1], linewidth=2, label="trajectory")
plt.plot(0, 0, "ro", markersize=8, label="goal")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.axis("equal")
plt.grid(True)
plt.legend()
plt.title("Unicycle CLF-LP Simulation")
plt.show()

# Plot controls
t = np.arange(N) * dt
fig, ax = plt.subplots(2, 1, figsize=(7, 5), sharex=True)
ax[0].plot(t, u_hist[:, 0], linewidth=2)
ax[0].set_ylabel(r"$u_1$")
ax[0].grid(True)

ax[1].plot(t, u_hist[:, 1], linewidth=2)
ax[1].set_ylabel(r"$u_2$")
ax[1].set_xlabel("time (s)")
ax[1].grid(True)

plt.tight_layout()
plt.show()

Quadratic Programs (QP)

\[\text{minimize}_{x} \; \frac{1}{2} x^\top P x + q^\top x + r \quad \text{s.t.} \quad a_i^\top x \leq b_i, ~i = 1, \dots, m, \qquad P \succeq 0\]

The minimum-effort CLF controller is a QP (quadratic cost, affine constraint). All LPs are QPs with \(P = 0\).

For our unicycle example, the benefit of a QP over an LP is that the QP naturally minimizes control energy rather than just control effort:

\[\text{minimize}_{u \in \mathbb{R}^2} \; u^\top u \quad \text{s.t.} \quad L_g V(x)\cdot u \leq -L_f V(x) - \varepsilon V(x)\]

Unlike the LP, which pushes \(u_1\) to a bound whenever the constraints allow, the QP returns the smallest-magnitude control satisfying the CLF constraint. The CVXPY formulation is shown below for comparison.

import numpy as np
import cvxpy as cp

def clf_qp_controller(x, eps=1.0, v_max=1.0, w_max=2.0, k_theta=2.0):
    """
    Minimum-energy CLF controller for the unicycle:
        minimize_u    ||u||_2^2
        subject to    L_g V(x) u <= -L_f V(x) - eps V(x)

    with V(x) = 1/2 (x1^2 + x2^2).
    """
    x1, x2, x3 = x
    V = 0.5 * (x1**2 + x2**2)
    a = x1 * np.cos(x3) + x2 * np.sin(x3)

    # Heading controller: point toward the origin
    theta_des = np.arctan2(-x2, -x1)
    u2_nom = k_theta * wrap_to_pi(theta_des - x3)
    u2_nom = np.clip(u2_nom, -w_max, w_max)

    # QP variable: the control input u
    u = cp.Variable(2)
    objective = cp.Minimize(cp.quad_form(u, np.eye(2)))

    constraints = [
        a * u[0] <= -eps * V,     # CLF constraint
        u[1] == u2_nom,             # separate heading law
        u[0] <= v_max,
        u[0] >= -v_max,
        u[1] <= w_max,
        u[1] >= -w_max,
    ]

    prob = cp.Problem(objective, constraints)
    prob.solve(solver=cp.CLARABEL)

    if u.value is None:
        return np.array([0.0, u2_nom])

    return np.array(u.value).flatten()
Show code
import numpy as np
import matplotlib.pyplot as plt

# Simulate the unicycle with the CLF-QP controller
dt = 0.05
T = 12.0
N = int(T / dt)

x = np.array([3.0, 2.0, -np.pi / 2])   # initial state [x1, x2, x3]
traj = np.zeros((N + 1, 3))
u_hist = np.zeros((N, 2))
traj[0] = x

for k in range(N):
    u = clf_qp_controller(x, eps=0.8, v_max=1.5, w_max=2.5, k_theta=2.0)
    u_hist[k] = u

    # Unicycle dynamics
    xdot = np.array([
        np.cos(x[2]) * u[0],
        np.sin(x[2]) * u[0],
        u[1],
    ])
    x = x + dt * xdot
    traj[k + 1] = x

# Plot state trajectory in the plane
plt.figure(figsize=(6, 6))
plt.plot(traj[:, 0], traj[:, 1], linewidth=2, label="trajectory")
plt.plot(0, 0, "ro", markersize=8, label="goal")
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.axis("equal")
plt.grid(True)
plt.legend()
plt.title("Unicycle CLF-QP Simulation")
plt.show()

# Plot controls over time
t = np.arange(N) * dt
fig, ax = plt.subplots(2, 1, figsize=(7, 5), sharex=True)

ax[0].plot(t, u_hist[:, 0], linewidth=2)
ax[0].set_ylabel(r"$u_1$")
ax[0].grid(True)

ax[1].plot(t, u_hist[:, 1], linewidth=2)
ax[1].set_ylabel(r"$u_2$")
ax[1].set_xlabel("time (s)")
ax[1].grid(True)

plt.tight_layout()
plt.show()

Quadratically Constrained Quadratic Programs (QCQP)

\[\text{minimize}_{x} \; \tfrac{1}{2}x^\top P_0 x + q_0^\top x + r_0 \quad \text{s.t.} \quad \tfrac{1}{2}x^\top P_i x + q_i^\top x + r_i \leq 0, \quad i=1,\dots,m, \quad P_i \succeq 0 \]

All QPs are QCQPs.

This formulation allows us to add a constraint on the instantaneous control energy, \(\|u\|_2^2 \leq T_{\max}\), sometimes called an actuator thermal limit (sustained large torques overheat the motor).

Second-Order Cone Programs (SOCP)

\[\text{minimize}_{x} \; f^\top x \quad \text{s.t.} \quad \|A_i x + b_i \|_2 \leq c_i^\top x + d_i, \quad i = 1,\dots, m\]

An SOCP augments the LP with second-order cone constraints. Taking \(A_i = 0\) reduces constraint \(i\) to an affine inequality, so every LP is an SOCP; the constraint \(\|u\|_2 \leq \sqrt{T_{\max}}\) encodes the QCQP energy limit, so SOCPs subsume QCQPs as well. For example, \(\|u\|_2 \leq u_{\max}\) is a hard torque saturation (peak motor rating).

The general SOCP form allows the bound to depend linearly on \(u\): \(\|A_i u + b_i\|_2 \leq c_i^\top u + d_i\). A friction cone \(|u_{\mathrm{lat}}| \leq \mu\, u_{\mathrm{norm}}\) has this form and is genuinely SOCP (not LP or QCQP).

Linear Matrix Inequalities and Semidefinite Programs

Instead of scalar inequalities (\(\leq\)) in constraints, we can allow matrix inequalities (\(\preceq\)).

SDP: First Form

\[\text{minimize}_{x} \; c^\top x \quad \text{s.t.} \quad x_1 F_1 + \cdots + x_n F_n + G \preceq 0\]

where \(F_1,\ldots,F_n, G\) are symmetric matrices. This constraint is called a linear matrix inequality (LMI), and the problem is a semidefinite program (SDP). When all \(F_i, G\) are scalars, the LMI reduces to an affine inequality (LP).

The LMI constraint leads to a convex feasible set. For any two feasible \(x\) and \(\hat{x}\):

\[\begin{aligned} &(\theta x_1 + (1-\theta)\hat{x}_1)F_1 + \cdots + (\theta x_n + (1-\theta)\hat{x}_n)F_n + G \\ &\quad= \theta(x_1F_1+\cdots+x_nF_n+G) + (1-\theta)(\hat{x}_1F_1+\cdots+\hat{x}_nF_n+G) \preceq 0 \end{aligned}\]

Multiple LMIs can be combined into one via block diagonalization:

\[x_1 \begin{bmatrix} F_1 & 0 \\ 0 & \hat{F}_1 \end{bmatrix} + \cdots + x_n \begin{bmatrix} F_n & 0 \\ 0 & \hat{F}_n \end{bmatrix} + \begin{bmatrix} G & 0 \\ 0 & \hat{G} \end{bmatrix} \preceq 0\]

SDP: Second Form

\[\text{minimize}_{X} \; \mathrm{trace}(CX) \quad \text{s.t.} \quad \mathrm{trace}(A_i X) = b_i, \; i = 1,\ldots,m, \quad X \succeq 0\]

The two forms are equivalent. Matrix variables appearing linearly in semidefinite constraints are permitted.

LMI Examples

Example: Lyapunov Inequality

The map \(\mathcal{L}(X) = A^\top X + XA\) is linear in \(X\) (verify: \(\mathcal{L}(aX_1 + bX_2) = a\mathcal{L}(X_1) + b\mathcal{L}(X_2)\)). We know \(A\) is Hurwitz iff there exists \(X \succ 0\) with \(\mathcal{L}(X) \prec 0\). Thus \(\mathcal{L}(X) \preceq -\varepsilon I\) is an LMI constraint in the variable \(X\).
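This can be checked numerically with SciPy's Lyapunov-equation solver; the matrix \(A\) below is an assumed example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# A is Hurwitz (eigenvalues -1 and -2), so A^T X + X A = -I has a unique
# solution X, which must be positive definite.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
X = solve_continuous_lyapunov(A.T, -np.eye(2))   # solves A^T X + X A = -I

print(np.allclose(A.T @ X + X @ A, -np.eye(2)))   # True
print(np.all(np.linalg.eigvalsh(X) > 0))          # True: X > 0
```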

Example: Common Lyapunov Function for Switched Systems

Consider \(\dot{x} = A(t)x\) where \(A(t) \in \{A_1,\ldots,A_m\}\). Even if all \(A_i\) are Hurwitz, stability of the switched system is not guaranteed. We can search for a common Lyapunov function \(V(x) = x^\top Px\) via the SDP:

\[\text{minimize}_{P} \; \mathrm{trace}(P) \quad \text{s.t.} \quad PA_i + A_i^\top P \preceq -\varepsilon I \;\; \forall i, \quad P \succeq I\]

SDP for CLF Design (Running Example)

For the pendulum running example, we used the LQR Riccati equation to obtain \(P\) above. Here we show how the same (or better) \(P\) can be found via SDP, jointly designing both the CLF matrix \(P\) and a stabilizing linear feedback gain \(K \in \mathbb{R}^{1\times 2}\).

The condition \((A + BK)^\top P + P(A + BK) + \varepsilon P \preceq 0\) is not jointly linear in \((P, K)\) because of the products \(K^\top P\) and \(PK\). We linearize it by pre- and post-multiplying by \(Q = P^{-1}\) (a congruence transformation, which preserves the sign of the matrix inequality) and substituting \(Y = KQ\):

\[Q A^\top + A Q + Y^\top B^\top + B Y + \varepsilon Q \preceq 0, \quad Q \succ 0\]

This is an LMI in \((Q, Y)\) — linear in both variables. We recover \(P = Q^{-1}\) and \(K = Y Q^{-1}\). Under this design, the controller \(u = K e\) satisfies \(\dot V \leq -\varepsilon V\) without any online QP — the optimization is done once offline.

Solving Convex Optimization Problems in Practice

As a student, you have free access to GitHub Copilot, which can help you try out new coding languages faster by providing help with syntax.

Analytic solutions rarely exist, but modern solvers are fast and reliable enough that numerical solutions can be treated as readily available.

  • General-purpose: CVX (MATLAB), CVXPY (Python), CVXOPT, YALMIP.
  • Specialized: e.g., MATLAB’s quadprog for QPs.

Example: CVXPY — Constrained Least Squares

Show code
import cvxpy as cp

# problem data A, b, C, d and dimension n are assumed given
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(A @ x - b, 2)),
                  [C @ x <= d])
prob.solve()

Example: CVXPY — CLF-QP Controller

Show code
import cvxpy as cp

u    = cp.Variable(m)
Lf_V = ...   # partial V/partial x @ f(x)
Lg_V = ...   # partial V/partial x @ g(x)
A_x  = ...   # A(x)

prob = cp.Problem(cp.Minimize(cp.sum_squares(u)),
                  [Lf_V + Lg_V @ u <= -eps * A_x])
prob.solve()
u_opt = u.value

The following demonstrates the minimum-effort CLF-QP controller for a 1D nonlinear system, showing how the controller shape changes with the CLF:

Show code
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt

# Simple 1D system: xdot = -x^3 + u, V(x) = x^2
# CLF condition: dV/dx * (f(x) + g(x)*u) <= -eps*V(x)
# => 2x*(-x^3 + u) <= -eps*x^2

eps = 1.0
x_vals = np.linspace(-2, 2, 100)
u_clf = []

for x_val in x_vals:
    u = cp.Variable()
    Lf_V = 2 * x_val * (-x_val**3)
    Lg_V = 2 * x_val
    V_x  = x_val**2

    prob = cp.Problem(cp.Minimize(cp.sum_squares(u)),
                      [Lf_V + Lg_V * u <= -eps * V_x])
    prob.solve(solver=cp.CLARABEL, verbose=False)
    u_clf.append(u.value if u.value is not None else np.nan)

fig, ax = plt.subplots(figsize=(7, 4))
ax.plot(x_vals, u_clf, 'steelblue', linewidth=2, label='CLF-QP minimum effort $u^*(x)$')
ax.axhline(0, color='k', linewidth=0.5)
ax.set_xlabel('$x$', fontsize=13)
ax.set_ylabel('$u^*(x)$', fontsize=13)
ax.set_title('Minimum-Effort CLF-QP Controller\n$\\dot{x} = -x^3 + u$, $V(x) = x^2$', fontsize=11)
ax.legend(fontsize=11)
ax.grid(True, alpha=0.3)
plt.tight_layout()
plt.show()