
In a previous post of mine, I talked about programming with CVXOPT. In this one, we’ll introduce cone programming. So first things first: a second-order cone program (SOCP) is a convex optimization problem of the following form

(1)   \begin{equation*}\begin{aligned}& \underset{x}{\text{minimize}}& & f^T x \\& \text{subject to}& & \lVert A_i x + b_i \rVert_2 \leq c_i^T x + d_i,\quad i = 1,\dots,m \\& & & Fx = g\end{aligned}\end{equation*}

where the matrices A_i \in \mathbb{R}^{n_i \times n} and the vectors b_i \in \mathbb{R}^{n_i \times 1}, while c_i \in \mathbb{R}^{n \times 1}, f \in \mathbb{R}^{n \times 1}, and d_i \in \mathbb{R}. Finally, F \in \mathbb{R}^{p \times n} and g \in \mathbb{R}^{p \times 1}.

SOCPs are easily solved with CVXOPT via the socp solver found in its solvers module. Please refer to the video for a step-by-step tutorial.
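Here is a minimal sketch, adapted from the SOCP example in the CVXOPT user guide (the numbers are arbitrary demo data, not tied to anything above):

```python
from cvxopt import matrix, solvers

# CVXOPT packs each cone constraint ||A_i x + b_i||_2 <= c_i^T x + d_i
# as Gq[i] = -[c_i^T; A_i] and hq[i] = [d_i; b_i], so that the slack
# hq[i] - Gq[i]*x lies in the second-order cone.
c = matrix([-2., 1., 5.])
G = [matrix([[12., 13., 12.], [6., -3., -12.], [-5., -5., 6.]])]
G += [matrix([[3., 3., -1., 1.], [-6., -6., -9., 19.], [10., -2., -2., -3.]])]
h = [matrix([-12., -3., -2.]), matrix([27., 0., 3., -42.])]

sol = solvers.socp(c, Gq=G, hq=h)  # solve min c^T x s.t. two cone constraints
print(sol['x'])
```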

On the other hand, Quadratically Constrained Quadratic Programs (a.k.a. QCQPs) are a bit trickier. QCQPs are optimization problems in which both the objective function and the constraints are quadratic, i.e.

(2)   \begin{equation*}\begin{aligned}& \underset{x}{\text{minimize}}& &  x^\mathrm{T} P_0 x + q_0^\mathrm{T} x \\& \text{subject to}& & x^\mathrm{T} P_i x + q_i^\mathrm{T} x + r_i \leq 0,\quad i = 1,\dots,m \\& & & Ax = b\end{aligned}\end{equation*}

where the matrices P_0, \ldots, P_m are given. The QCQP is convex if P_0, \ldots, P_m are all positive semidefinite. Note that a QCQP boils down to a QP (Quadratic Program) when P_1, \ldots, P_m are zero matrices; if, in addition, P_0 = 0, the problem reduces to an LP (Linear Program).
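If you want to check convexity in code, one simple approach (a helper of my own, not part of CVXOPT) is to look at the smallest eigenvalue of each symmetrized P_i:

```python
import numpy as np

# A QCQP is convex when every P_i is positive semidefinite, which we can
# test via the smallest eigenvalue of the symmetric part (only the
# symmetric part matters for the quadratic form), up to a tolerance.
def is_psd(P, tol=1e-9):
    P = np.asarray(P, dtype=float)
    sym = 0.5 * (P + P.T)
    return np.linalg.eigvalsh(sym).min() >= -tol

print(is_psd([[2., 0.], [0., 1.]]))   # True  -> convex term
print(is_psd([[1., 0.], [0., -1.]]))  # False -> non-convex term
```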

That’s enough background to get you going with QCQPs. It turns out that CVXOPT does not have a dedicated QCQP solver. On the bright side, we can cast a QCQP as an SOCP, enabling us to solve QCQPs via the SOCP solver of CVXOPT. How do we do that? Well, minimizing the cost of the QCQP, i.e. minimizing x^\mathrm{T} P_0 x + q_0^\mathrm{T} x, is the same as minimizing an upper bound \gamma on it:

(3)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma  \\& \text{subject to}& &  x^\mathrm{T} P_i x + q_i^\mathrm{T} x + r_i \leq 0,\quad i = 1,\dots,m \\& & & Ax = b \\& & &  x^\mathrm{T} P_0 x + q_0^\mathrm{T} x \leq \gamma \end{aligned}\end{equation*}



We will now make use of an important theorem about positive semidefinite matrices:

Theorem 1:  Let A be a positive semidefinite matrix in \mathbb{C}^{n \times n}. Then there is exactly one positive semidefinite matrix B such that A = B^H B.

Actually, the above theorem is very intuitive: it’s the matrix analogue of something you learned back in high school, namely that every non-negative number a admits a square root b = \sqrt{a}. Making use of this matrix square-root decomposition, we write down the square-root decompositions of all the matrices P_0 \ldots P_m as

(4)   \begin{equation*}P_i = A_i^T A_i, \qquad i = 0 \ldots m\end{equation*}
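In code, one way to compute such a factor is via NumPy’s symmetric eigendecomposition (scipy.linalg.sqrtm is an off-the-shelf alternative); this is a sketch, with the helper name psd_sqrt my own:

```python
import numpy as np

# Theorem 1 in code: for a PSD matrix P, the eigendecomposition
# P = V diag(w) V^T yields the PSD square root A = V diag(sqrt(w)) V^T,
# which satisfies A^T A = A A = P.
def psd_sqrt(P):
    w, V = np.linalg.eigh(0.5 * (P + P.T))  # symmetrize, then eigendecompose
    w = np.clip(w, 0.0, None)               # clip tiny negative round-off
    return V @ np.diag(np.sqrt(w)) @ V.T

P = np.array([[4., 2.], [2., 3.]])
A = psd_sqrt(P)
print(np.allclose(A.T @ A, P))  # True
```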


Substituting (4) into (3), we get

(5)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma \\& \text{subject to}& &  x^\mathrm{T} A_i^T A_i x + q_i^\mathrm{T} x + r_i \leq 0,\quad i = 1,\dots,m \\& & & Ax = b \\& & &  x^\mathrm{T} A_0^T A_0 x + q_0^\mathrm{T} x \leq \gamma\end{aligned}\end{equation*}



We shall also decompose the vectors q_i (this is possible whenever q_i lies in the range of A_i^T; for instance, whenever P_i is positive definite) as

(6)   \begin{equation*}q_i = 2 A_i^T b_i, \qquad i = 0 \ldots m\end{equation*}
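Numerically, each b_i can be recovered from q_i with a least-squares solve (the helper name b_from_q is mine):

```python
import numpy as np

# Equation (6) determines each b_i from q_i by solving A_i^T b_i = q_i / 2.
# A least-squares solve covers the general, possibly singular case; the
# solution is exact when q_i lies in the range of A_i^T.
def b_from_q(A_i, q_i):
    b_i, *_ = np.linalg.lstsq(A_i.T, 0.5 * np.asarray(q_i, float), rcond=None)
    return b_i

A_i = np.array([[2., 0.], [1., 1.]])   # any factor of P_i works here
q_i = np.array([2., 4.])
b_i = b_from_q(A_i, q_i)
print(np.allclose(2 * A_i.T @ b_i, q_i))  # True
```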



Substituting (6) into (5), we get

(7)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma \\& \text{subject to}& &  x^\mathrm{T} A_i^T A_i x + 2x^T A_i^T b_i + r_i \leq 0,\quad i = 1,\dots,m \\& & & Ax = b \\& & &  x^\mathrm{T} A_0^T A_0 x + 2 x^T A_0^T b_0 \leq \gamma\end{aligned}\end{equation*}



Now let us add and subtract the term b_i^T b_i in each of the first m constraints:

(8)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma \\& \text{subject to}& &  x^\mathrm{T} A_i^T A_i x + 2x^T A_i^T b_i +b_i^T b_i - b_i^T b_i +r_i \leq 0,\quad i = 1,\dots,m \\& & & Ax = b \\& & &  x^\mathrm{T} A_0^T A_0 x + 2 x^T A_0^T b_0 \leq \gamma\end{aligned}\end{equation*}



Similarly, we add and subtract the term b_0^T b_0 in the last constraint:

(9)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma \\& \text{subject to}& &  x^\mathrm{T} A_i^T A_i x + 2x^T A_i^T b_i + b_i^T b_i - b_i^T b_i + r_i \leq 0,\quad i = 1,\dots,m \\& & & Ax = b \\& & &  x^\mathrm{T} A_0^T A_0 x + 2 x^T A_0^T b_0 + b_0^T b_0 - b_0^T b_0 \leq \gamma\end{aligned}\end{equation*}



Why did we do that? Well, notice that

(10)   \begin{equation*}\Vert Ax + b \Vert^2 = x^T A^T A x + 2x^T A^T b +b^T b\end{equation*}
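A quick numerical sanity check of this identity on random data:

```python
import numpy as np

# Verify identity (10): ||Ax + b||^2 = x^T A^T A x + 2 x^T A^T b + b^T b
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x = rng.standard_normal(3)

lhs = np.linalg.norm(A @ x + b) ** 2
rhs = x @ A.T @ A @ x + 2 * x @ A.T @ b + b @ b
print(np.isclose(lhs, rhs))  # True
```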



This means that the problem becomes

(11)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma \\& \text{subject to}& &  \Vert A_ix + b_i \Vert^2  \leq b_i^T b_i - r_i ,\quad i = 1,\dots,m \\& & & Ax = b \\& & & \Vert A_0x + b_0 \Vert^2  \leq b_0^T b_0 + \gamma\end{aligned}\end{equation*}


The problem in equation (11) is really close to that of (1). The only thing that is a bit “annoying” is the \gamma on the right-hand side of the last constraint, which keeps it from being a cone constraint with a constant offset. To bypass this problem, we bound \gamma by a user-defined parameter T, replacing it with T in that constraint and adding \gamma \leq T:

(12)   \begin{equation*}\begin{aligned}& \underset{x,\gamma}{\text{minimize}}& & \gamma \\& \text{subject to}& & \Vert A_ix + b_i \Vert^2 \leq b_i^T b_i - r_i ,\quad i = 1,\dots,m \\& & & Ax = b \\& & & \Vert A_0x + b_0 \Vert^2 \leq b_0^T b_0 + T \\& & & \gamma \leq T\end{aligned}\end{equation*}



Taking square roots of the quadratic constraints (assuming their constant right-hand sides are nonnegative), the above problem is an SOCP:

(13)   \begin{equation*}\begin{aligned}& \underset{\bar{x}}{\text{minimize}}& & f^T \bar{x} \\& \text{subject to}& & \lVert \bar{A}_i \bar{x} + b_i \rVert_2 \leq c_i^T \bar{x} + d_i,\quad i = 0,\dots,m+1 \\& & & F\bar{x} = g\end{aligned}\end{equation*}



where

(14)   \begin{align*}\bar{x} &=\begin{bmatrix}\gamma & x\end{bmatrix}^T \\f &= \begin{bmatrix}1 & 0 & \ldots & 0\end{bmatrix}^T \\\bar{A}_i &=\begin{bmatrix}\mathbf{0} & A_i\end{bmatrix}, \qquad i = 0 \ldots m \\\bar{A}_{m+1} &= \begin{bmatrix}1 & 0  & \ldots & 0 \\0 & 0 & \ldots & 0 \\\vdots &\vdots & \vdots & \vdots \\0 & 0 & \ldots & 0 \end{bmatrix} \\b_{m+1} &= \mathbf{0} \\c_i &= \mathbf{0}, \qquad i = 0 \ldots m+1 \\d_0 &= \sqrt{b_0^T b_0 + T} \\ d_i &= \sqrt{b_i^T b_i - r_i}, \qquad i = 1 \ldots m  \\d_{m+1} &= T \\F &= A \\g &= b\end{align*}
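Putting everything together, here is a sketch of how one might assemble the data in (14) and call solvers.socp. The helper names (solve_qcqp_as_socp, psd_sqrt) are mine, not CVXOPT’s, and the code assumes the conditions flagged along the way: every P_i is PSD, each q_i lies in the range of A_i^T, and the quantities under the square roots are nonnegative.

```python
import numpy as np
from cvxopt import matrix, solvers

def solve_qcqp_as_socp(P, q, r, Aeq, beq, T):
    """Solve the QCQP (2) for a given bound T via the SOCP (13)-(14).
    P and q are lists indexed 0..m, with entry 0 the objective;
    r[i] is the constant of constraint i (r[0] is ignored)."""
    n = len(q[0])

    def psd_sqrt(M):  # P_i = A_i^T A_i via a symmetric eigendecomposition
        w, V = np.linalg.eigh(0.5 * (M + M.T))
        return V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T

    Gq, hq = [], []
    for i in range(len(P)):
        A_i = psd_sqrt(np.asarray(P[i], float))
        b_i, *_ = np.linalg.lstsq(A_i.T, 0.5 * np.asarray(q[i], float),
                                  rcond=None)               # q_i = 2 A_i^T b_i
        d_i = np.sqrt(b_i @ b_i + (T if i == 0 else -r[i])) # offsets of (14)
        Abar = np.hstack([np.zeros((n, 1)), A_i])           # zero column for gamma
        # CVXOPT packs ||A x + b|| <= c^T x + d as Gq = -[c^T; A], hq = [d; b]
        Gq.append(matrix(-np.vstack([np.zeros((1, n + 1)), Abar])))
        hq.append(matrix(np.concatenate([[d_i], b_i])))

    # cone m+1 of (14): |gamma| <= T keeps the epigraph variable bounded
    Gq.append(matrix(-np.vstack([np.zeros((1, n + 1)), np.eye(1, n + 1)])))
    hq.append(matrix([float(T), 0.0]))

    f = matrix(np.concatenate([[1.0], np.zeros(n)]))        # minimize gamma
    F = matrix(np.hstack([np.zeros((len(beq), 1)),
                          np.asarray(Aeq, float)]))         # [0 A][gamma; x] = b
    g = matrix(np.asarray(beq, float))

    sol = solvers.socp(f, Gq=Gq, hq=hq, A=F, b=g)
    return np.array(sol['x']).ravel()[1:]                   # drop gamma, return x
```

Since \gamma only enters the SOCP through the last cone, which reads \lvert \gamma \rvert \leq T, the solver effectively checks whether the QCQP constraints admit a point with objective value at most T; sweeping or bisecting over T then homes in on the true optimum.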



So, in short, by choosing the above variables, we can go from a QCQP to an SOCP and hence solve the former with CVXOPT. The lecture uses a Jupyter notebook in my favorite browser, Google Chrome.
