Perturbation & Sensitivity Analysis

In this lecture, we talk about perturbation and sensitivity analysis. But what does that mean? Well, consider our good old optimization problem, which looks like this

(1)   \begin{equation*}\begin{aligned}& \underset{x}{\text{minimize}}& & f_0(x) \\& \text{subject to}& & f_i(x) \leq 0, \; i = 1, \ldots, m \\& & & h_i(x) = 0, \; i = 1, \ldots, p\end{aligned}\end{equation*}


One way to tell how the above problem reacts to perturbation is to actually perturb it and check how its optimal value behaves as a function of the perturbation parameters. To this end, consider the following perturbed problem

(2)   \begin{equation*}\begin{aligned}& \underset{x}{\text{minimize}}& & f_0(x) \\& \text{subject to}& & f_i(x) \leq u_i, \; i = 1, \ldots, m \\& & & h_i(x) = v_i, \; i = 1, \ldots, p\end{aligned}\end{equation*}



What did we just do? Instead of comparing the constraint functions to zero, we "perturb" the zero right-hand sides. If u_i > 0, we say that the i^{th} inequality constraint is relaxed by an amount u_i, and if u_i < 0, it is tightened by an amount |u_i|. For the equality constraints, a nonzero v_i simply shifts the i^{th} constraint to h_i(x) = v_i; since it remains an equality, there is no real notion of relaxing or tightening it, only of moving it. Also, note that problem (2) "boils down" to problem (1) when the perturbation parameters u_i and v_i are all set to zero, which makes sense, right? Going back to our main concern, namely checking how the optimal value behaves as a function of the perturbation parameters, we have to quantify the effect of the perturbation on p^*, the optimal value of problem (1). To this end, let us define the optimal value of problem (2) as

(3)   \begin{equation*} \begin{split} p^*(u,v)=\inf\lbrace f_0(x) \;\vert\; x \in \mathcal{D},\; f_i(x) &\leq u_i, \; i = 1, \ldots, m; \\ h_i(x) &= v_i , \; i = 1, \ldots, p \rbrace \end{split}\end{equation*}



The function p^*(u,v) tells us how the optimal value of problem (2) varies as a function of the perturbation parameters

(4)   \begin{align*} u &= \begin{bmatrix} u_1 & \ldots & u_m \end{bmatrix}^T \\ v &= \begin{bmatrix} v_1 & \ldots & v_p \end{bmatrix}^T \end{align*}
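To make this concrete, here is a minimal sketch using a toy problem of my own (not an example from the lecture): minimize x^2 subject to the single inequality constraint 1 - x \leq u, so that m = 1 and p = 0. Its value function p^*(u) is available in closed form:

```python
# Toy perturbed problem (an illustrative assumption, not from the lecture):
#   minimize   x**2
#   subject to 1 - x <= u      (i.e. x >= 1 - u)
# The unconstrained minimizer x = 0 is feasible only when u >= 1;
# otherwise the optimum sits on the constraint boundary x = 1 - u.

def p_star(u):
    """Optimal value p*(u) of the perturbed toy problem."""
    x_opt = max(1.0 - u, 0.0)
    return x_opt ** 2

print(p_star(0.0))   # unperturbed problem (1): p* = 1.0
print(p_star(0.5))   # relaxed constraint lowers the optimal value: 0.25
print(p_star(-0.5))  # tightened constraint raises it: 2.25
```

As expected, relaxing the constraint (u > 0) can only decrease the optimal value, while tightening it (u < 0) can only increase it, since the feasible set shrinks.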



Note that for the particular case u = 0 and v = 0, we have that p^*(0,0) = p^*. In the lecture, we prove an inequality that shows us how far p^*(u,v) can fall below p^*, which is the following


(5)   \begin{equation*} p^*(u,v) \geq p^* - u^T \lambda^* - v^T \nu^*\end{equation*}



where (\lambda^*,\nu^*) are the optimal dual variables (Lagrange multipliers). The above provides a global view of how far we are from the optimal value of the unperturbed problem, in terms of the optimal dual variables. Now, if we impose extra properties, namely that strong duality holds and that p^*(u,v) is differentiable at (u,v) = (0,0), we can get a feel for what happens locally around (u,v) = (0,0). In the lecture, we also prove that under these two conditions, we have that

(6)   \begin{align*} \lambda_i^* &= - \frac{\partial p^*(0,0)}{\partial u_i} \\\nu_i^* &= - \frac{\partial p^*(0,0)}{\partial v_i} \end{align*}


The above allows us to quantify how active a constraint is at the optimal point x^*: a large \lambda_i^* means that slightly tightening the i^{th} inequality constraint causes a large increase in the optimal value, while a small \lambda_i^* means the constraint can be tightened or relaxed with little effect on p^*.
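As a quick sanity check of both results, consider a toy problem of my own choosing (not from the lecture): minimize x^2 subject to 1 - x \leq u. Its value function is p^*(u) = \max(1-u, 0)^2, strong duality holds, and the optimal multiplier works out to \lambda^* = 2. The sketch below verifies the global bound (5) at several perturbations and recovers \lambda^* from the derivative formula (6) by a finite difference:

```python
def p_star(u):
    # Optimal value of the toy problem: minimize x**2 s.t. 1 - x <= u.
    # The optimum is attained at x = max(1 - u, 0).
    return max(1.0 - u, 0.0) ** 2

p0, lam = p_star(0.0), 2.0   # p*(0) = 1 and the optimal multiplier lambda* = 2

# Global bound (5): p*(u) >= p* - lambda* * u must hold for every u.
for u in [-1.0, -0.3, 0.0, 0.4, 2.0]:
    assert p_star(u) >= p0 - lam * u - 1e-12

# Local sensitivity (6): -dp*/du at u = 0 recovers lambda*.
h = 1e-6
lam_fd = -(p_star(h) - p_star(-h)) / (2 * h)  # central difference
print(abs(lam_fd - lam) < 1e-6)  # True
```

Note that the bound (5) is tight near u = 0 and becomes loose for large relaxations (e.g. at u = 2 the bound is negative while p^*(2) = 0), which is exactly the global-versus-local picture described above.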

Ahmad Bazzi

Ahmad Bazzi is an electrical engineer and YouTuber. He works as a signal processing engineer at CEVA DSP and holds a PhD in Electrical Engineering from EURECOM, as well as a Master's degree (Summa Cum Laude) in Electrical Engineering from SUPELEC, Paris. He has published in well-known IEEE conferences, including a nomination award, and is a co-inventor on several patents. He dedicates time to publishing high-level lectures on mathematics (including convex optimization) and programming, with a focus on machine learning, convex optimization, linear algebra, Python, SymPy, NumPy, Pandas, CVXOPT, MATLAB, and C++ for firmware systems.
