This post deals with the minimization of the log barrier function, that is, the unconstrained problem

\[ \underset{x}{\text{minimize}} \quad \phi(x), \]

where \(\phi\) is defined as

\[ \phi(x) = -\sum_{i=1}^{m} \log\left(b_i - a_i^T x\right), \]

with domain \(\{x \mid a_i^T x < b_i,\ i = 1, \dots, m\}\). We shall attempt an intuitive explanation of the solution of this unconstrained minimization problem.

First off, since our set of inequalities \(a_i^T x \le b_i\) defines a bounded region, the optimal solution \(x^\star\) cannot lie outside it: the function \(\phi\) is simply not defined there. So we have no choice but to force \(x^\star\) to lie inside the region. If \(x\) is close to the hyperplane defined by one of the inequalities, say the \(i\)-th one, i.e. \(a_i^T x = b_i\), then the term \(-\log(b_i - a_i^T x)\) would explode to infinity, and the minimization process fails. The best compromise is to place \(x^\star\) as far as possible from all the hyperplanes \(a_i^T x = b_i\); this point is termed the analytic center of the inequalities \(a_i^T x \le b_i\) for all \(i = 1, \dots, m\). See the above video for a nice illustration.
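As a concrete numerical sketch (my own toy example, not code from the post), the analytic center can be found by running Newton's method directly on \(\phi\). Here the inequalities encode \(0 \le x \le 1\), whose analytic center is the midpoint:

```python
import numpy as np

# Toy inequalities a_i^T x <= b_i encoding 0 <= x <= 1
A = np.array([[-1.0], [1.0]])
b = np.array([0.0, 1.0])

def grad(x):
    s = b - A @ x            # slacks b_i - a_i^T x (positive inside the set)
    return A.T @ (1.0 / s)   # gradient of phi(x) = -sum(log(s))

def hess(x):
    s = b - A @ x
    return A.T @ np.diag(1.0 / s**2) @ A

x = np.array([0.3])          # strictly feasible starting point
for _ in range(20):          # plain Newton steps on the barrier
    x = x - np.linalg.solve(hess(x), grad(x))

print(x[0])  # ~0.5, the analytic center of the interval [0, 1]
```

Each Newton step stays inside the interval here; a production implementation would add a line search to guarantee feasibility.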


The Selenium Python package automates web-driver interaction from Python. Selenium provides a simple API for writing functional tests using WebDriver, and its Python bindings give convenient access to Selenium WebDrivers such as Chrome, Safari, Firefox, Opera, Edge, BlackBerry, and many more. This lecture aims to cover all the related Selenium functionality and acceptance tests. The lecture is outlined as follows:

Outline

00:00 Introduction

02:12 Installing selenium

02:36 Installing chromedriver

03:08 Setting vscode

03:34 Importing selenium

03:44 Open chrome browser

04:43 URL access

07:00 Page interaction

07:25 Find element by name

09:08 Entering text

13:20 Clicking buttons

13:56 Get element by id

15:12 Sleep waiting

17:11 Cookie injection

19:28 History navigation

23:47 Drag and Drop by offset

29:08 Drag and Drop to target

33:12 Explicit and Implicit wait

34:47 Web scraping

41:56 Headless browser

44:10 Disable GPU

44:31 Browser size

45:04 Disable security

45:16 Sandbox

45:45 Insecure content

45:54 Disable WebGL

46:30 Disable popups

46:38 Summary

48:48 Outro

Donations

If possible, any donation is appreciated through PayPal, Patreon, or Bitcoin at 327qhzF7yxQa2CiyL1Vnd63ccuMyAH1Ss9.

Material

> Selenium

> Chromedriver

> VSCode

> Google

Books

Automate the Boring Stuff with Python

“True conversational AI is a voice assistant that can engage in human-like dialogue, capturing context and providing intelligent responses. Such AI models must be massive and highly complex,” writes Sid Sharma in ‘What Is Conversational AI?’. This lecture attempts to demystify conversational AI by covering its components, including, but not limited to, Automatic Speech Recognition, Natural Language Processing & Understanding, Text-to-Speech Synthesis, and Intention Extraction and Identification. We use NVIDIA‘s Jarvis, an application framework for multimodal conversational AI services that delivers real-time performance on GPUs, to perform sophisticated conversational AI tasks. By the end of the lecture, we present a Question/Answering demo powered by NVIDIA‘s Jarvis.

The lecture above shows you how to install Jarvis on your machine. This is done by first installing Docker and CUDA, registering with NGC, and finally setting up Jarvis. You can work with Jarvis from a Jupyter notebook.

Ahmad Bazzi then shows you how to work with the most essential components of Jarvis: ASR (Automatic Speech Recognition), NLP (Natural Language Processing) and Core NLP, and finally TTS (Text-to-Speech) Synthesis.

A very cool Question/Answering Jarvis-based demo is finally presented in the tutorial. It is trained on Wikipedia articles using the wikipedia Python package.

The following lecture talks about the Markowitz portfolio optimization problem in convex optimization. Indeed, many variants of this problem exist, but the classical one looks like this:

\[ \begin{array}{ll} \underset{x}{\text{minimize}} & x^T \Sigma x \\ \text{subject to} & \bar{p}^T x \ge r_{\min} \\ & \mathbf{1}^T x = 1, \quad x \succeq 0 \end{array} \]

where \(x\) is an \(n\)-sized vector containing the amounts of assets to invest in. The vector \(\bar{p}\) is the mean of the relative asset price changes and the matrix \(\Sigma\) is the variance-covariance matrix of the assets. The parameter \(r_{\min}\) is the minimum accepted return.

Learn more about the above problem and its application to the stock market by watching the above lecture.

About

This lecture focuses on the theoretical as well as practical aspects of Support Vector Machines (SVMs). An SVM is a supervised learning model, associated with learning algorithms that analyze data for classification and regression analysis. Developed at AT&T Bell Laboratories by Vapnik and colleagues (Boser et al., 1992; Guyon et al., 1993; Vapnik et al., 1997), it is one of the most robust prediction methods, based on the statistical learning framework or VC theory proposed by Vapnik and Chervonenkis (1974) and Vapnik (1982, 1995).

Outline

00:00:00 Introduction

00:01:11 Support Vector Machines

00:03:55 Supporting Vectors and Hyperplanes

00:07:05 SVM Mathematical Modelling

00:08:58 Hard Margin SVM

00:47:21 Outlier Sensitivity & Linear Separability

00:49:11 Hard Margin SVM on Python

01:13:15 Soft Margin SVM

01:27:09 Soft Margin SVM on Python

01:31:47 Outro
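As a quick hands-on companion to the outline (my own sketch, not the lecture's code), a linear SVM can be fit with scikit-learn on a hypothetical toy dataset; a very large C approximates the hard margin, while a small C gives the soft margin:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical, linearly separable toy data (two classes along a line)
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0],
              [5.0, 5.0], [6.0, 6.0], [7.0, 7.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Large C ~ hard-margin SVM; try C=0.1 for a softer margin
clf = SVC(kernel="linear", C=1e6).fit(X, y)

print(clf.support_vectors_)       # the supporting vectors pin down the margin
print(clf.predict([[3.0, 3.0]]))  # falls on the class-0 side of the hyperplane
```

With this data the separating hyperplane sits midway between the closest points of the two classes, (2, 2) and (5, 5).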

Mathematical optimization is a problem that takes the following form:

(1) \[ \begin{array}{ll} \underset{x}{\text{minimize}} & f_0(x) \\ \text{subject to} & f_i(x) \le b_i, \quad i = 1, \dots, m \end{array} \]

where \(x\) is a vector containing all the variables of the problem,

(2) \[ x = (x_1, x_2, \dots, x_n). \]

The function \(f_0\) is referred to as the cost or the objective function. Moreover, the functions \(f_1, \dots, f_m\) are referred to as constraint functions. In most cases, our goal is to find a (or the) point \(x^\star\) which is feasible (i.e. satisfies \(f_i(x^\star) \le b_i\) for \(i = 1, \dots, m\)) and is minimum. Rigorously stated, \(x^\star\) is optimal if

(3) \[ f_0(z) \ge f_0(x^\star) \quad \text{for every feasible } z. \]
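To make the definition of optimality tangible, here is a toy instance solved numerically (using scipy.optimize, my choice of tool; the post itself contains no code): minimizing \((x-1)^2\) subject to \(x \ge 2\), the optimum sits on the constraint boundary:

```python
from scipy.optimize import minimize

# Toy problem: minimize f0(x) = (x - 1)^2  subject to  x >= 2
# (scipy's "ineq" convention is fun(x) >= 0)
res = minimize(lambda x: (x[0] - 1.0) ** 2,
               x0=[5.0],
               constraints=[{"type": "ineq", "fun": lambda x: x[0] - 2.0}])

print(res.x[0])  # ~2.0: the unconstrained minimum x = 1 is infeasible,
                 # so the optimum lands on the constraint boundary
```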

Well, we can state many applications. In finance and stock analysis, a well-known one is Markowitz portfolio optimization. This problem takes the form

(4) \[ \begin{array}{ll} \underset{x}{\text{minimize}} & x^T \Sigma x \\ \text{subject to} & \bar{p}^T x \ge r_{\min} \\ & \mathbf{1}^T x = 1, \quad x \succeq 0 \end{array} \]

Here \(n\) will reflect the number of assets (or stocks) held over a period of time. For example, let’s say you decide to buy stocks in the period of time between today and 6 months from now. You are interested in the following stocks: CEVA, GOOGL, LVMH and NIO. This means you have decided on 4 assets and hence \(n = 4\). Furthermore, let’s denote by \(x_i\) the amount of asset \(i\) held throughout the period of investment. A long position in asset \(i\) would indicate \(x_i > 0\), and a short position in asset \(i\) would mean \(x_i < 0\). Moreover, \(p_i\) is the change in price of asset \(i\) divided by the initial price (i.e. today’s price). Your return will simply be

(5) \[ r = p^T x = \sum_{i=1}^{n} p_i x_i. \]

Anyone investing (short or long term) would simply want to maximize \(r\). However, no constraints would simply mean that \(x\) is a vector of all-\(\infty\), which is unrealistic. Keeping our feet on the ground, we should understand that a vector of all-\(\infty\) is unachievable, but we can accept a minimum return as

(6) \[ \bar{p}^T x \ge r_{\min}, \]

where \(r_{\min}\) is a minimum return you seek from the investment over your investing period. The above equation will then serve as one of our constraints. Note that the above is a relaxed way of saying “I want maximum return”. To embed risk somewhere, volatility has to be included. A suitable measure of volatility is the variance of the asset prices, which is captured in the covariance matrix \(\Sigma\). The variance of the return would then be the term \(x^T \Sigma x\). Markowitz introduced the problem of minimizing risk subject to a minimum acceptable return:

(7) \[ \begin{array}{ll} \underset{x}{\text{minimize}} & x^T \Sigma x \\ \text{subject to} & \bar{p}^T x \ge r_{\min} \\ & \mathbf{1}^T x = 1, \quad x \succeq 0 \end{array} \]

Note that the constraints \(\mathbf{1}^T x = 1\) and \(x \succeq 0\) together impose a probability-like structure on the vector \(x\). In other words, we are interested in vectors \(x\) that contain proportions. Markowitz portfolio optimization lies in the category of convex optimization problems of type QP (Quadratic Programming). In Electrical Engineering, convex optimization finds application in many communication and electronic manufacturing problems, such as water filling and micro-scale electronic design.
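To see a cousin of problem (7) in action numerically, here is a stripped-down sketch (my own illustrative numbers, not from the lecture): dropping the return and long-only constraints leaves the minimum-variance portfolio, minimize \(x^T \Sigma x\) subject to \(\mathbf{1}^T x = 1\), whose KKT conditions reduce to one linear solve:

```python
import numpy as np

# Hypothetical 3-asset covariance matrix (illustrative numbers only)
Sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.08, 0.01],
                  [0.04, 0.01, 0.12]])
n = Sigma.shape[0]
ones = np.ones(n)

# KKT conditions of: minimize x^T Sigma x  subject to  1^T x = 1
#   2 Sigma x + nu * 1 = 0   and   1^T x = 1   -> one linear system
KKT = np.block([[2.0 * Sigma, ones[:, None]],
                [ones[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.concatenate([np.zeros(n), [1.0]]))
x = sol[:n]

print(x, x.sum())     # portfolio weights, summing to 1
print(x @ Sigma @ x)  # variance, no larger than any single asset's
```

The full problem (7), with the return and nonnegativity constraints, needs a QP solver rather than a single linear solve.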

The above lecture is brought to you by Skillshare. In a previous post of mine, we introduced weak alternatives. As a small reminder, consider the following two sets:

(1) \[ S = \{ x \mid f_i(x) \le 0, \ i = 1, \dots, m, \ \ h_j(x) = 0, \ j = 1, \dots, p \} \]

and

(2) \[ T = \{ (\lambda, \nu) \mid \lambda \succeq 0, \ g(\lambda, \nu) > 0 \}, \]

where

(3) \[ g(\lambda, \nu) = \inf_{x \in \mathcal{D}} \Big( \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{j=1}^{p} \nu_j h_j(x) \Big) \]

is the dual function and \(\mathcal{D}\) is the domain of the problem. Since we did not impose any convexity assumption on the \(f_i\)’s, nor did we assume that the \(h_j\)’s are affine, all we can say about \(S\) and \(T\) is that they form weak alternatives. In other words,

- If \(S\) is feasible, then \(T\) is infeasible.
- If \(T\) is feasible, then \(S\) is infeasible.

In this lecture, we assume the following:

- the \(f_i\)’s are convex,
- the \(h_j\)’s are affine, i.e. they can be written jointly as \(Ax = b\),
- there exists \(\bar{x} \in \operatorname{relint} \mathcal{D}\) such that \(A\bar{x} = b\).

In that case, we write \(S\) as

(4) \[ S = \{ x \mid f_i(x) \le 0, \ i = 1, \dots, m, \ \ Ax = b \}. \]

Thanks to the three conditions above, we can strengthen weak alternatives so that they form strong alternatives. That is to say,

- \(S\) is feasible \(\iff\) \(T\) is infeasible.
- \(T\) is feasible \(\iff\) \(S\) is infeasible.

Indeed, strong alternatives are stronger since (unlike weak alternatives) if we know that one of the sets \(S\) or \(T\) is infeasible, then the other has to be feasible.

In my YouTube lecture, I give two applications relating to linear inequalities and intersection of ellipsoids.
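For the linear-inequality application, a strong-alternative (Farkas-type) certificate can be checked numerically. The hypothetical toy system \(Ax \preceq b\) below encodes \(x \le -1\) and \(x \ge 0\), which is clearly infeasible; the certificate is a \(\lambda \succeq 0\) with \(A^T \lambda = 0\) and \(b^T \lambda < 0\):

```python
import numpy as np

# Toy infeasible system Ax <= b: encodes x <= -1 and -x <= 0 (i.e. x >= 0)
A = np.array([[1.0], [-1.0]])
b = np.array([-1.0, 0.0])

# Candidate certificate: lam >= 0 with A^T lam = 0 and b^T lam < 0
lam = np.array([1.0, 1.0])

print(A.T @ lam)  # [0.]  : the two inequalities cancel when combined...
print(b @ lam)    # -1.0  : ...yet their right-hand sides sum to < 0,
                  # certifying that no x can satisfy Ax <= b
```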

This tutorial is brought to you by DataCamp. The tutorial goes to great lengths to rigorously explain the little bits and pieces of the wonderful Matplotlib. Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python. Matplotlib makes easy things easy and hard things possible.

Contents of this lecture are partitioned as follows:

00:00:00 Introduction

00:00:58 What is MATPLOTLIB ?

00:01:56 Installing MATPLOTLIB

00:02:22 Pyplot

00:04:35 Plot Formatting

00:06:05 Multiple Plotting

00:08:43 Legend

00:11:43 Keyword String Plotting

00:16:34 Categorical Data Plotting

00:17:24 Bar Plot

00:17:52 Scatter Plot

00:18:38 Subplotting

00:20:27 Figure Size Adjustment

00:21:27 Control Line Properties

00:22:43 Multiple Figures & Axes

00:25:43 Text Manipulation

00:28:38 Gridding

00:28:56 Plot Limit

00:30:16 Text Annotation

00:35:38 Logarithmic & Nonlinear Scales

00:37:49 Log Scale

00:38:38 Symmetric Log scale

00:39:25 Logistic Scale

00:40:05 imread & imshow

00:42:10 Image Cropping

00:44:03 Barcodes

00:48:23 Layer Images

00:51:42 Alpha Blending

00:53:35 Fill Curves

00:53:59 Koch Snowflake

01:00:59 Rendering Equations with LaTeX

01:05:48 Polar Curves

01:09:59 Summary
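As a tiny runnable taste of the chapters above (my own example, not DataCamp's code), the snippet below combines subplotting, format strings, legends, grids, and plot limits, using the non-interactive Agg backend so it runs headlessly:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: render without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)

# Two side-by-side axes (00:18:38 Subplotting, 00:20:27 Figure Size)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.plot(x, np.sin(x), "b-", label="sin(x)")  # format string (00:04:35)
ax1.legend()                                  # legend (00:08:43)
ax1.grid(True)                                # gridding (00:28:38)

ax2.scatter(x[::10], np.cos(x[::10]))         # scatter plot (00:17:52)
ax2.set_xlim(0, 2 * np.pi)                    # plot limit (00:28:56)

fig.savefig("demo.png")                       # write the figure to disk
```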

Let us say we are interested in checking whether the system \(S\), hereby defined as

(1) \[ S = \{ x \mid f_i(x) \le 0, \ i = 1, \dots, m, \ \ h_j(x) = 0, \ j = 1, \dots, p \}, \]

is feasible or not. In other words, could we find a vector \(x\) that satisfies the set \(S\)? In many cases, it may turn out to be hard to answer the question by exhaustively searching all possible candidates of \(x\). To crank it up a notch, we formulate an optimization problem that serves us well. To this extent, consider

(2) \[ \begin{array}{ll} \underset{x}{\text{minimize}} & 0 \\ \text{subject to} & f_i(x) \le 0, \quad i = 1, \dots, m \\ & h_j(x) = 0, \quad j = 1, \dots, p \end{array} \]

Yes, that is right. We minimize 0 subject to \(x\) being in \(S\). Well, nothing fancy has been done here. As a matter of fact, if one closely looks at the optimal value, that is,

(3) \[ p^\star = \begin{cases} 0, & S \text{ is feasible} \\ +\infty, & S \text{ is infeasible,} \end{cases} \]

one could realize that the optimal value acts as an indicator function, i.e. it returns \(0\) when \(S\) is feasible, else \(+\infty\). That’s awesome. You just wrote down an optimization problem that tells you whether \(S\) is feasible or not. In other words, you wrote down an optimization problem answering your main question. However, nothing fancy has been done here. All we’re doing is re-writing the problem, and hence nothing could be learned from the optimization problem in equation (2). On the other hand, things become a whole lot more interesting when taking a look at the dual problem. But for that, we need to pass by the Lagrangian function, that is,

(4) \[ L(x, \lambda, \nu) = \sum_{i=1}^{m} \lambda_i f_i(x) + \sum_{j=1}^{p} \nu_j h_j(x). \]

The dual function is the infimum of \(L\) over \(\mathcal{D}\) (the domain of the problem), that is,

(5) \[ g(\lambda, \nu) = \inf_{x \in \mathcal{D}} L(x, \lambda, \nu). \]

Finally, the dual problem would be to

(6) \[ \begin{array}{ll} \underset{\lambda, \nu}{\text{maximize}} & g(\lambda, \nu) \\ \text{subject to} & \lambda \succeq 0. \end{array} \]

As in the primal problem, the optimal value of the dual problem is also an indicator function, this time of another set of inequalities:

(7) \[ d^\star = \begin{cases} +\infty, & T \text{ is feasible} \\ 0, & T \text{ is infeasible,} \end{cases} \]

where

(8) \[ T = \{ (\lambda, \nu) \mid \lambda \succeq 0, \ g(\lambda, \nu) > 0 \}. \]

So now the question is: “How does \(T\) relate to \(S\)?”. Applying weak duality, that is,

(9) \[ d^\star \le p^\star, \]

we can infer two cases.

- Case 1: If \(d^\star = +\infty\), then \(p^\star\) has to be \(+\infty\).
- Case 2: If \(p^\star = 0\), then \(d^\star\) has to be \(0\).

Using equations (7) and (3) along with the two cases, we get the following:

- If \(T\) is feasible, then \(S\) is infeasible.
- If \(S\) is feasible, then \(T\) is infeasible.

This is what weak alternatives are: at most one of the two systems of inequalities is feasible.
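A small numeric illustration (my own toy system, not from the lecture): take the clearly infeasible scalar system \(f_1(x) = x^2 + 1 \le 0\). Its dual function is \(g(\lambda) = \lambda\), so any \(\lambda > 0\) lands in \(T\) and, by the weak alternative, certifies that \(S\) is infeasible:

```python
import numpy as np

lam = 2.0                                  # a candidate dual point, lambda >= 0
xs = np.linspace(-10.0, 10.0, 100001)      # dense grid containing x = 0

# Numerical stand-in for g(lam) = inf_x lam * (x**2 + 1);
# the infimum is attained at x = 0, giving g(lam) = lam
g = np.min(lam * (xs ** 2 + 1.0))

print(g)  # 2.0 > 0, so (lambda = 2) lies in T: S must be infeasible
```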

In this lecture, we talk about Perturbation and Sensitivity Analysis. But what does that mean? Well, consider our good old optimization problem that looks like this:

(1) \[ \begin{array}{ll} \underset{x}{\text{minimize}} & f_0(x) \\ \text{subject to} & f_i(x) \le 0, \quad i = 1, \dots, m \\ & h_j(x) = 0, \quad j = 1, \dots, p \end{array} \]

One way to tell how the above problem reacts to perturbation is to actually perturb it and check how its optimal value behaves with the perturbation parameters. To this end, consider the following perturbed problem:

(2) \[ \begin{array}{ll} \underset{x}{\text{minimize}} & f_0(x) \\ \text{subject to} & f_i(x) \le u_i, \quad i = 1, \dots, m \\ & h_j(x) = v_j, \quad j = 1, \dots, p \end{array} \]

What did we just do? Well, instead of bounding the constraints by zero, we “perturb” the zero boundary. If \(u_i > 0\), we say that the \(i\)-th inequality constraint is relaxed by an amount of \(u_i\). Likewise, a nonzero \(v_j\) shifts the \(j\)-th equality constraint by an amount of \(v_j\). Also, we can see that problem (2) “boils down” to problem (1) when the perturbation parameters \(u_i\) and \(v_j\) are set to zero, which makes sense, right? Now if \(u_i < 0\), you can see that we “tighten” the \(i\)-th inequality constraint. Going back to our main concern, that is, “check how its optimal value behaves with perturbation parameters”, we have to quantify the effect of that perturbation on \(p^\star\), the optimal value of problem (1). To this extent, let us define the optimal value of problem (2) as

(3) \[ p^\star(u, v) = \inf \{ f_0(x) \mid f_i(x) \le u_i, \ i = 1, \dots, m, \ \ h_j(x) = v_j, \ j = 1, \dots, p \}. \]

The function \(p^\star(u, v)\) gives us the optimal value of problem (2) as a function of the perturbation parameters

(4) \[ u = (u_1, \dots, u_m), \qquad v = (v_1, \dots, v_p). \]

Note that for the particular case \(u = 0\) and \(v = 0\), we have \(p^\star(0, 0) = p^\star\). In the lecture, we prove an inequality that shows us how far \(p^\star(u, v)\) is from \(p^\star(0, 0)\), which is the following:

(5) \[ p^\star(u, v) \ge p^\star(0, 0) - {\lambda^\star}^T u - {\nu^\star}^T v, \]

where \(\lambda^\star\) and \(\nu^\star\) are the optimal dual Lagrangian multipliers. The above also provides a global view on how far we are from the optimal unperturbed problem in terms of the optimal dual Lagrangian multipliers. Now, if we impose extra properties on \(p^\star(u, v)\), i.e. differentiability at \((u, v) = (0, 0)\) and strong duality, we can get a feel for what happens locally around \((u, v) = (0, 0)\). In the above lecture, we also prove that, given the previous two conditions (strong duality and differentiability at \((0, 0)\)), we have that

(6) \[ \lambda_i^\star = -\frac{\partial p^\star(0, 0)}{\partial u_i}, \qquad \nu_j^\star = -\frac{\partial p^\star(0, 0)}{\partial v_j}. \]

The above allows us to quantify how active a constraint is at the optimal point \(x^\star\).
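To make the global bound and the local derivative concrete, take a hypothetical scalar problem (my own toy instance): minimize \(x^2\) subject to \(1 - x \le u\). Its perturbed optimal value is \(p^\star(u) = \max(1 - u, 0)^2\), and the optimal multiplier at \(u = 0\) is \(\lambda^\star = 2\), so both properties can be checked directly:

```python
import numpy as np

def p_star(u):
    # minimize x**2 subject to 1 - x <= u, i.e. x >= 1 - u:
    # the optimum is x = max(1 - u, 0), so p*(u) = max(1 - u, 0)**2
    return max(1.0 - u, 0.0) ** 2

lam_star = 2.0  # optimal dual multiplier of the unperturbed problem

# Global lower bound from the lecture: p*(u) >= p*(0) - lam* * u
for u in np.linspace(-0.5, 0.5, 11):
    assert p_star(u) >= p_star(0.0) - lam_star * u - 1e-12

# Locally, the derivative of p* at 0 equals -lam*: finite-difference check
h = 1e-6
slope = (p_star(h) - p_star(-h)) / (2 * h)
print(slope)  # ~ -2.0 = -lam*
```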