The post How do I organize my time with this super app called Notion ! appeared first on Algorithmic Kitchen.

Lots to do but not enough time, right? WRONG!

Notion's got your back.

I have put together the video above explaining how to use this futuristic app from A to Z, with little to no effort required. Make sure you watch the entire video. Notion is the only app you need.


The post Horner’s Method appeared first on Algorithmic Kitchen.

If you love polynomials, you will love this article. We all know how to evaluate a polynomial at a given point, i.e. say you have a polynomial p(x) and you would like to compute p(t). Well, it's plug-and-play: just substitute x = t and evaluate. Simple, right? Right. The problem is that for a polynomial of degree n, the naive approach needs roughly n(n+1)/2 multiplications and n additions. This article introduces you to Horner's Method, named after William George Horner, which evaluates a polynomial in a fast way. It is economical, as Horner's method requires only n additions and n multiplications.

Let p(x) be a polynomial of degree n as follows

(1)   p(x) = a_n x^n + a_{n-1} x^{n-1} + … + a_1 x + a_0

where a_0, a_1, …, a_n are the coefficients of the polynomial p. The objective is simple and straightforward: let us evaluate p at x = t.

Horner defines the following sequence:

(2)   b_n = a_n,   b_k = a_k + t b_{k+1}   for k = n-1, n-2, …, 0

Therefore p(t) = b_0, because note that

(3)   p(x) = a_0 + x( a_1 + x( a_2 + … + x( a_{n-1} + x a_n ) … ) )

and hence, substituting x = t and unrolling the sequence (2) from the inside out,

(4)   b_0 = a_0 + t( a_1 + t( a_2 + … + t( a_{n-1} + t a_n ) … ) ) = p(t)

Next, we present the function HornersMethod.m, which implements this idea. The input n is the degree of the polynomial, x is an (n+1)-vector containing the coefficients a_n, a_{n-1}, …, a_0 (leading coefficient first), and t is the point at which we wish to evaluate p; the value p(t) is returned in the output "out".

```
function [out] = HornersMethod(n,x,t)
% initialize with the leading coefficient
out = x(1);
% apply Horner's rule at point t for the (n+1) coefficients x of a
% polynomial of degree n (leading coefficient first)
for j = 2:n+1
out = out*t + x(j);
end
```

This article demonstrated Horner's method, as well as its MATLAB implementation. Indeed, Horner's method provides efficiency: only O(n) operations are required, versus the naive implementation, which needs O(n^2).
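As a sanity check, here is a short Python sketch (an addition to this post, not part of the original code) of Horner's rule next to naive evaluation; the coefficients are ordered leading-first, matching the MATLAB function above:

```python
def horner(coeffs, t):
    """Evaluate a polynomial at t; coeffs holds a_n, ..., a_0 (leading first)."""
    out = coeffs[0]
    for c in coeffs[1:]:
        out = out * t + c  # one multiplication and one addition per coefficient
    return out

def naive(coeffs, t):
    """Naive evaluation: compute each power of t from scratch."""
    n = len(coeffs) - 1
    return sum(c * t ** (n - k) for k, c in enumerate(coeffs))

# p(x) = 2x^3 - 6x^2 + 2x - 1 evaluated at x = 3
print(horner([2, -6, 2, -1], 3))  # 5
```

Both functions agree, but `horner` performs only one multiplication per coefficient.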

Finally, subscribe to my channel to support us! We are very close to 100K subscribers <3 w/ love. The algorithmic chef – Ahmad Bazzi. Click here to subscribe. Check my other articles here.

PS: I’m on Twitter. I retweet stuff around algorithms, Python, MATLAB and mathematical optimization, mostly convex.


The post NVIDIA TensorRT appeared first on Algorithmic Kitchen.

NVIDIA® TensorRT™ is a deep learning inference SDK with outstanding performance. It provides a deep learning inference optimizer and runtime that deliver low latency and high throughput for deep learning inference applications.

During inference, TensorRT-based apps are up to 40 times faster than CPU-only systems. You may use TensorRT to improve neural network models trained in all major frameworks, calibrate for reduced precision while maintaining high accuracy, and deploy to hyperscale data centers, embedded systems, or automotive product platforms.

TensorRT is based on CUDA®, NVIDIA’s parallel programming model, and allows you to optimize inference using CUDA-X™ libraries, development tools, and technologies for AI, autonomous machines, high-performance computing, and graphics. TensorRT also takes advantage of sparse Tensor Cores on NVIDIA Ampere architecture GPUs, delivering an additional performance increase.

For production deployments of deep learning inference applications such as video streaming, speech recognition, recommendation, fraud detection, text generation, and natural language processing, TensorRT provides INT8 using Quantization Aware Training and Post Training Quantization, as well as FP16 optimizations. Reduced precision inference cuts application latency in half, which is essential for many real-time services, as well as autonomous and embedded applications.

Watch the above lecture to learn more on how to install TensorRT on your machine and get it up and running using docker containers on Ubuntu. The lecture outline is as follows:

00:00 Intro to TensorRT

02:20 Prerequisites

03:20 TensorRT Docker Images

06:27 Jupyter Lab within Docker Containers

07:25 Compile TRT OSS

08:26 HuggingFace GPT-2

13:42 PyTorch on CPU/GPU vs TensorRT on GPU

16:42 Outro



The post Rosenbrock Function Minimization appeared first on Algorithmic Kitchen.

In this article, we present different approaches to minimizing the Rosenbrock function, namely through Newton, damped Newton, and steepest descent.

Rosenbrock's function is given as

(1)   f(x_1, x_2) = (a - x_1)^2 + b (x_2 - x_1^2)^2

where in this article we consider the following example, with a = 1 and b = 100:

(2)   f(x_1, x_2) = 100 (x_2 - x_1^2)^2 + (1 - x_1)^2

Since all the methods we will use are gradient-aware and/or Hessian-aware, we will spend some time computing closed-form formulas for the gradient and the Hessian. Let us now start with the gradient of f, that is

(3)   ∂f/∂x_1 = -400 x_1 (x_2 - x_1^2) - 2 (1 - x_1)

and

(4)   ∂f/∂x_2 = 200 (x_2 - x_1^2)

Also, let's compute the Hessian matrix

(5)   H = [ ∂²f/∂x_1²  ∂²f/∂x_1∂x_2 ; ∂²f/∂x_2∂x_1  ∂²f/∂x_2² ]

Computing the four second derivatives gives

(6)   ∂²f/∂x_1² = -400 (x_2 - x_1^2) + 800 x_1^2 + 2,   ∂²f/∂x_1∂x_2 = ∂²f/∂x_2∂x_1 = -400 x_1,   ∂²f/∂x_2² = 200

Steepest descent works as follows

(7)   x^(n) = x^(n-1) - α ∇f(x^(n-1))

where α > 0 is a given parameter, called the step-size, and x^(n) = (x_1^(n), x_2^(n)) denotes the value of x at iteration n.

Newton is a modified steepest descent working with the Hessian instead, as follows

(8)   x^(n) = x^(n-1) - H^(-1)(x^(n-1)) ∇f(x^(n-1))

In contrast to Newton, the damped Newton method adds a damping factor α as

(9)   x^(n) = x^(n-1) - α H^(-1)(x^(n-1)) ∇f(x^(n-1))

We will implement the three methods above as

```
function[x] = steepdescent(f,df,x0,alpha,Niter)
x(:,1) = x0;
for n = 2:Niter
x1 = x(1,n-1);
x2 = x(2,n-1);
x(:,n) = x(:,n-1) - alpha*df(x1,x2);
end
```

```
function[x] = newtonmethod(f,df,H,x0,Niter)
x(:,1) = x0;
for n = 2:Niter
x1 = x(1,n-1);
x2 = x(2,n-1);
x(:,n) = x(:,n-1) - inv(H(x1,x2))*df(x1,x2);
end
```

```
function[x] = dampednewtonmethod(f,df,H,x0,alpha,Niter)
x(:,1) = x0;
for n = 2:Niter
x1 = x(1,n-1);
x2 = x(2,n-1);
x(:,n) = x(:,n-1) - alpha*inv(H(x1,x2))*df(x1,x2);
end
```

Furthermore, we implement our main script with three different initializers, to test the sensitivity of the final solution to the initialization, as

```
Niter = 3000;
f = @(x1,x2)100*(x2 - x1^2)^2 + (1-x1)^2;
df = @(x1,x2) [-400*x1*(x2 - x1^2) - 2*(1-x1); 200*(x2 - x1^2)];
H = @(x1,x2) [-400*(x2-x1^2) + 800*x1^2 + 2, -400*x1; -400*x1, 200];
% Initials
x0_1 = [-1;1];
x0_2 = [0;1];
x0_3 = [2;1];
%Steepest Descent
[x_steepest_1] = steepdescent(f,df,x0_1,1e-3,Niter);
[x_steepest_2] = steepdescent(f,df,x0_2,1e-3,Niter);
[x_steepest_3] = steepdescent(f,df,x0_3,1e-3,Niter);
% Newton
[x_newton_1] = newtonmethod(f,df,H,x0_1,Niter);
[x_newton_2] = newtonmethod(f,df,H,x0_2,Niter);
[x_newton_3] = newtonmethod(f,df,H,x0_3,Niter);
%Damped Newton
[x_dampednewton_1] = dampednewtonmethod(f,df,H,x0_1,1e-2,Niter);
[x_dampednewton_2] = dampednewtonmethod(f,df,H,x0_2,1e-2,Niter);
[x_dampednewton_3] = dampednewtonmethod(f,df,H,x0_3,1e-2,Niter);
%plot
figure
subplot(3,1,1)
plot(x_steepest_1(1,:),'r','Linewidth',2)
hold on
plot(x_steepest_2(1,:),'g','Linewidth',2)
plot(x_steepest_3(1,:),'b','Linewidth',2)
plot(x_steepest_1(2,:),'--r','Linewidth',2)
plot(x_steepest_2(2,:),'--g','Linewidth',2)
plot(x_steepest_3(2,:),'--b','Linewidth',2)
xlabel('Iteration number n')
ylabel('x')
legend('x1 path initializing x0 = [-1 1]','x1 path initializing x0 = [0 1]','x1 path initializing x0 = [2 1]','x2 path initializing x0 = [-1 1]','x2 path initializing x0 = [0 1]','x2 path initializing x0 = [2 1]')
title('Steepest descent with step size 10^{-3}')
grid on
grid minor
subplot(3,1,2)
plot(x_newton_1(1,:),'r','Linewidth',2)
hold on
plot(x_newton_2(1,:),'g','Linewidth',2)
plot(x_newton_3(1,:),'b','Linewidth',2)
plot(x_newton_1(2,:),'--r','Linewidth',2)
plot(x_newton_2(2,:),'--g','Linewidth',2)
plot(x_newton_3(2,:),'--b','Linewidth',2)
xlabel('Iteration number n')
ylabel('x')
legend('x1 path initializing x0 = [-1 1]','x1 path initializing x0 = [0 1]','x1 path initializing x0 = [2 1]','x2 path initializing x0 = [-1 1]','x2 path initializing x0 = [0 1]','x2 path initializing x0 = [2 1]')
title('Newton method')
grid on
grid minor
subplot(3,1,3)
plot(x_dampednewton_1(1,:),'r','Linewidth',2)
hold on
plot(x_dampednewton_2(1,:),'g','Linewidth',2)
plot(x_dampednewton_3(1,:),'b','Linewidth',2)
plot(x_dampednewton_1(2,:),'--r','Linewidth',2)
plot(x_dampednewton_2(2,:),'--g','Linewidth',2)
plot(x_dampednewton_3(2,:),'--b','Linewidth',2)
xlabel('Iteration number n')
ylabel('x')
legend('x1 path initializing x0 = [-1 1]','x1 path initializing x0 = [0 1]','x1 path initializing x0 = [2 1]','x2 path initializing x0 = [-1 1]','x2 path initializing x0 = [0 1]','x2 path initializing x0 = [2 1]')
title('Newton method with damping factor 10^{-2} ')
grid on
grid minor
```

which gives us the plot below

The figure above shows the convergence of all methods under the different initializations. Newton converges the fastest, damped Newton is second, and steepest descent is the slowest.
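For readers outside MATLAB, the Newton update (8) is easy to reproduce. Below is a minimal Python/NumPy sketch (an addition to this post, not part of the original code) on the Rosenbrock example, starting from the initializer x0 = [-1; 1]:

```python
import numpy as np

def grad(x1, x2):
    # gradient of f(x1,x2) = 100(x2 - x1^2)^2 + (1 - x1)^2, equations (3)-(4)
    return np.array([-400*x1*(x2 - x1**2) - 2*(1 - x1),
                     200*(x2 - x1**2)])

def hess(x1, x2):
    # Hessian entries, equation (6)
    return np.array([[-400*(x2 - x1**2) + 800*x1**2 + 2, -400*x1],
                     [-400*x1, 200.0]])

x = np.array([-1.0, 1.0])
for _ in range(20):
    # Newton step, equation (8); solve H d = grad instead of inverting H
    x = x - np.linalg.solve(hess(*x), grad(*x))

print(np.allclose(x, [1.0, 1.0]))  # True
```

From this initializer, Newton reaches the global minimizer (1, 1) in just a couple of iterations, consistent with the figure.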

In summary, we have seen how the three methods: Steepest Descent, Newton and damped Newton minimize the Rosenbrock function and their convergence behavior as a function of different initializers.



The post how i invert a matrix without inverting it ^^ appeared first on Algorithmic Kitchen.

In this article, we show you how to invert a matrix without going through the burdens of explicit inversion, i.e. without performing the O(N^3) Gauss-Jordan elimination procedure. Herein, a Newton method is adopted for matrix inversion.

The approach is via Newton's method: instead of computing A^(-1) explicitly, we solve F(X) = 0, whose solution is obviously X = A^(-1).

Given an invertible N x N matrix A, define

(1)   F(X) = A X - I

Note that the solution of the above problem is X = A^(-1), since

(2)   F(A^(-1)) = A A^(-1) - I = 0

Using Newton, the updates to solve F(X) = 0 are

(3)   X_n = X_{n-1} - [F'(X_{n-1})]^(-1) F(X_{n-1})

But F'(X) = A, hence

(4)   X_n = X_{n-1} - A^(-1) (A X_{n-1} - I)

But A^(-1) is what we are trying to estimate by the iterations, and hence we replace it by the most recent estimate of A^(-1), namely X_{n-1}:

(5)   X_n = X_{n-1} + X_{n-1} (I - A X_{n-1})

Defining a residual matrix

(6)   R_n = I - A X_n

and an error matrix

(7)   E_n = A^(-1) - X_n

Let us compute R_n using equation (6), plugging in the update (5):

(8)   R_n = I - A X_{n-1} - A X_{n-1} (I - A X_{n-1})

(9)   R_n = (I - A X_{n-1})^2 = R_{n-1}^2

Now

(10)  ||R_n|| <= ||R_{n-1}||^2

so if ||R_0|| < 1, the residual converges to zero quadratically. Also, E_n = A^(-1) R_n, hence ||E_n|| <= ||A^(-1)|| ||R_n||, and the error inherits this quadratic convergence.

So, let's start by building a function called NewtonInverse, which computes the inverse based on equation (5). The initial guess is X_0 = A'/(||A||_1 ||A||_inf), a standard choice that keeps ||R_0|| < 1. This is simply attained as such

```
function[X] = NewtonInverse(A,N,Niter)
%% NewtonInverse: computes inverse of A
%input
%A - matrix
%N - matrix dimension
%Niter - maximum number of iterations
I = eye(N);
%the true inverse to compute the error.
A_trueinverse = inv(A);
%initial guess
X(:,:,1) = A'/(norm(A,1) * norm(A,inf));
%iterations
for n = 2:Niter
X(:,:,n) = X(:,:,n-1) + X(:,:,n-1)*(I - A*X(:,:,n-1));
error(n-1) = norm(X(:,:,n) - A_trueinverse,2);
end
```

For the sake of completeness, we will see how the iterations converge and compare them with an inverse based on MATLAB's LU decomposition (which is not iterative). So we shall add the following block

```
%Using LU decomposition
[L1,U] = lu(A);
A_inverse_LU = inv(U)*inv(L1);
error_LU = norm(A_inverse_LU - A_trueinverse,2);
figure
plot(log10(error),'k','Linewidth',2)
hold on
plot(log10(error_LU*ones(Niter,1)),'--*r','Linewidth',2)
xlabel('n iteration')
ylabel('error (log scale)')
legend('Using Newton','Matlabs LU decomposition')
grid on
grid minor
```

Now our function looks like this

```
function[X] = NewtonInverse(A,N,Niter)
%% NewtonInverse: computes inverse of A
%input
%A - matrix
%N - matrix dimension
%Niter - maximum number of iterations
I = eye(N);
%the true inverse to compute the error.
A_trueinverse = inv(A);
%initial guess
X(:,:,1) = A'/(norm(A,1) * norm(A,inf));
%iterations
for n = 2:Niter
X(:,:,n) = X(:,:,n-1) + X(:,:,n-1)*(I - A*X(:,:,n-1));
error(n-1) = norm(X(:,:,n) - A_trueinverse,2);
end
%Using LU decomposition
[L1,U] = lu(A);
A_inverse_LU = inv(U)*inv(L1);
error_LU = norm(A_inverse_LU - A_trueinverse,2);
figure
plot(log10(error),'k','Linewidth',2)
hold on
plot(log10(error_LU*ones(Niter,1)),'--*r','Linewidth',2)
xlabel('n iteration')
ylabel('error (log scale)')
legend('Using Newton','Matlabs LU decomposition')
grid on
grid minor
```

Our main function will then test the above by simply generating a random matrix and inputting it to the above function as

```
A = 50*randn(3,3);
N = 3;
Niter = 30;
[X] = NewtonInverse(A,N,Niter);
```

We have presented a method of matrix inversion through Newton by iterating over an appropriate cost function. MATLAB code is detailed as well.
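The same Newton (Schulz-type) iteration is easy to sketch in Python/NumPy as well. The snippet below is an illustrative addition, not part of the original MATLAB code, and uses a small symmetric test matrix rather than a random one:

```python
import numpy as np

def newton_inverse(A, niter=50):
    """Newton iteration X <- X + X (I - A X), equation (5), for inverting A."""
    n = A.shape[0]
    I = np.eye(n)
    # initial guess X0 = A' / (||A||_1 ||A||_inf), keeping ||I - A X0|| < 1
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(niter):
        X = X + X @ (I - A @ X)
    return X

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
X = newton_inverse(A)
print(np.allclose(X @ A, np.eye(3)))  # True
```

Thanks to the quadratic convergence shown in (10), 50 iterations are far more than enough here.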



The post Newton Method on matrix nonlinear systems appeared first on Algorithmic Kitchen.

In a previous article of the algorithmic chef (a.k.a. me !), I presented Newton's method. In this one, I'll demonstrate its applicability to systems of nonlinear equations, in matrix form.

Newton's method can also be applied to systems of nonlinear equations. Now, consider m equations in m unknowns x_1, …, x_m:

(1)   f_i(x_1, x_2, …, x_m) = 0,   i = 1, …, m

Alternatively, and in a more compact form, we can say

(2)   F(v) = 0

where

(3)   F(v) = [f_1(v); f_2(v); …; f_m(v)]

(4)   v = [x_1; x_2; …; x_m]

and 0 is an all-zero vector. Newton's method in this case would iterate as follows

(5)   v^(n) = v^(n-1) - D^(-1)(v^(n-1)) F(v^(n-1))

where D is the Jacobian matrix of F, i.e.

(6)   [D(v)]_{ij} = ∂f_i/∂x_j,   i, j = 1, …, m

To avoid confusion of notation, and to align the problem with the statements above, we note

(7)   x_1 = x,   x_2 = y,   x_3 = z

We have the following system (equation (1) for m = 3):

(8)   16x^4 + 16y^4 + z^4 = 16,   x^2 + y^2 + z^2 = 3,   x^3 - y = 0

Let's compute the Jacobian matrix D, which in this case is

(9)   D = [∂f_1/∂x  ∂f_1/∂y  ∂f_1/∂z ; ∂f_2/∂x  ∂f_2/∂y  ∂f_2/∂z ; ∂f_3/∂x  ∂f_3/∂y  ∂f_3/∂z]

The above derivatives are easily computed as follows

(10)  ∂f_1/∂x = 64x^3,  ∂f_1/∂y = 64y^3,  ∂f_1/∂z = 4z^3,  ∂f_2/∂x = 2x,  ∂f_2/∂y = 2y,  ∂f_2/∂z = 2z,  ∂f_3/∂x = 3x^2,  ∂f_3/∂y = -1,  ∂f_3/∂z = 0

Plugging (10) into (9) we get

(11)  D = [64x^3  64y^3  4z^3 ; 2x  2y  2z ; 3x^2  -1  0]

Therefore, the Newton iterations are done as follows (substitute (11) into (5) and use v = [x; y; z]):

(12)  v^(n) = v^(n-1) - D^(-1)(v^(n-1)) F(v^(n-1))

where the initial guess is (this is given in the requirements)

(13)  v^(0) = [1; 1; 1]

It is now easy to iterate; let's compute the values at the first iteration for example. At v^(0) = [1; 1; 1],

(14)  F(v^(0)) = [16 + 16 + 1 - 16; 1 + 1 + 1 - 3; 1 - 1] = [17; 0; 0],   D(v^(0)) = [64  64  4 ; 2  2  2 ; 3  -1  0]

Using (13) and (12), we have

(15)  v^(1) = [1; 1; 1] - D^(-1)(v^(0)) [17; 0; 0] ≈ [0.9292; 0.7875; 1.2833]

and so on. The convergence can be seen in the figure obtained by running the MATLAB code presented in the next section. Note that the reference solution (the "subroutine" curves) is obtained using MATLAB's fsolve function, to compare against the Newton method.

Now, it's time to implement the matrix form of Newton's method, enabling us to solve the nonlinear system. Firstly, let's define the system of equation (8) in a separate MATLAB function, calling it root3d.m

```
function F = root3d(x)
F(1) = 16*x(1)^4 + 16*x(2)^4 + x(3)^4 - 16;
F(2) = x(1)^2 + x(2)^2 + x(3)^2 - 3;
F(3) = x(1)^3 - x(2);
```

Following the above, we will use this function in our main script to get the exact roots of the system (referred to as the subroutine) for comparison. Now, in the main script, we can easily implement our Newton method as such

```
Niter = 1e1;
% values of x,y,z at initial guess
v(:,1) = [1;1;1];
%functions
f =@(x1,x2,x3)[16*x1^4 + 16*x2^4+ x3^4 - 16;x1^2 + x2^2 + x3^2 - 3;x1^3 - x2];
% Newton iterations
for n = 2:Niter
x = v(1,n-1);
y = v(2,n-1);
z = v(3,n-1);
D = [64*x^3 64*y^3 4*z^3;...
2*x 2*y 2*z;...
3*x^2 -1 0];
v(:,n) = v(:,n-1) - inv(D)*f(x,y,z);
end
```

In order to validate our iterative approach, we shall compare it to the solution given by MATLAB's fsolve function as

```
%Matlab subroutine
fun = @root3d;
x0 = [1,1,1];
x_subroutine = fsolve(fun,x0);
```

where in the above we initialized fsolve to all-ones. Next, we plot

```
figure
plot(v(1,:),'r','Linewidth',1)
hold on
plot(v(2,:),'m','Linewidth',1)
plot(v(3,:),'k','Linewidth',1)
plot(x_subroutine(1)*ones(Niter,1),'--*r','Linewidth',2)
plot(x_subroutine(2)*ones(Niter,1),'--*m','Linewidth',2)
plot(x_subroutine(3)*ones(Niter,1),'--*k','Linewidth',2)
xlabel('Iteration number ','Interpreter','latex')
ylabel('x','Interpreter','latex')
legend('x (Newton)','y (Newton)','z (Newton)','x (subroutine)','y (subroutine)','z (subroutine)','Interpreter','latex')
grid on
grid minor
```

Finally, our full main function looks like this

```
Niter = 1e1;
% values of x,y,z at initial guess
v(:,1) = [1;1;1];
%functions
f =@(x1,x2,x3)[16*x1^4 + 16*x2^4+ x3^4 - 16;x1^2 + x2^2 + x3^2 - 3;x1^3 - x2];
% Newton iterations
for n = 2:Niter
x = v(1,n-1);
y = v(2,n-1);
z = v(3,n-1);
D = [64*x^3 64*y^3 4*z^3;...
2*x 2*y 2*z;...
3*x^2 -1 0];
v(:,n) = v(:,n-1) - inv(D)*f(x,y,z);
end
%Matlab subroutine
fun = @root3d;
x0 = [1,1,1];
x_subroutine = fsolve(fun,x0);
figure
plot(v(1,:),'r','Linewidth',1)
hold on
plot(v(2,:),'m','Linewidth',1)
plot(v(3,:),'k','Linewidth',1)
plot(x_subroutine(1)*ones(Niter,1),'--*r','Linewidth',2)
plot(x_subroutine(2)*ones(Niter,1),'--*m','Linewidth',2)
plot(x_subroutine(3)*ones(Niter,1),'--*k','Linewidth',2)
xlabel('Iteration number ','Interpreter','latex')
ylabel('x','Interpreter','latex')
legend('x (Newton)','y (Newton)','z (Newton)','x (subroutine)','y (subroutine)','z (subroutine)','Interpreter','latex')
grid on
grid minor
```

The above plots the iterations given by the Newton method for the given non-linear system in matrix form, so as to get the figure below
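As a cross-check, the same Newton iteration (12) can be sketched in Python/NumPy. The snippet below is an illustrative addition, not part of the original post, and uses numpy.linalg.solve instead of an explicit inverse:

```python
import numpy as np

def F(v):
    # the system of equation (8), moved to the form F(v) = 0
    x, y, z = v
    return np.array([16*x**4 + 16*y**4 + z**4 - 16,
                     x**2 + y**2 + z**2 - 3,
                     x**3 - y])

def jacobian(v):
    # the Jacobian of equation (11)
    x, y, z = v
    return np.array([[64*x**3, 64*y**3, 4*z**3],
                     [2*x, 2*y, 2*z],
                     [3*x**2, -1.0, 0.0]])

v = np.array([1.0, 1.0, 1.0])  # initial guess, equation (13)
for _ in range(20):
    v = v - np.linalg.solve(jacobian(v), F(v))  # Newton step, equation (12)

print(np.linalg.norm(F(v)))  # residual, should be close to zero
```

Starting from the all-ones initializer, the residual drops to machine precision within a handful of iterations, matching the MATLAB experiment.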



The post NeMo Conversational AI Translator appeared first on Algorithmic Kitchen.

This article introduces you to an NVIDIA conversational AI French-to-English translator built with the NeMo conversational AI toolkit. We will present an example of translating from French to English using NVIDIA's juicy NeMo collections, which include:

– ASR: Automatic Speech Recognition

– NLP: Natural Language Processing

– TTS: Text-To-Speech synthesis

Python code is given along the way.

Using our good old pip install friend, we can install NVIDIA NeMo for our conversational AI project, as such

`!python -m pip install git+https://github.com/NVIDIA/NeMo.git@'r1.4.0'#egg=nemo_toolkit[all]`

The modules that we are going to use are the ASR, NLP and TTS collections from NeMo. Those collections will serve as recipes for our conversational AI translator. IPython will be used just for listening to specific pieces of audio.

```
import nemo
import nemo.collections.asr as nemo_asr
import nemo.collections.nlp as nemo_nlp
import nemo.collections.tts as nemo_tts
import IPython
```

We will be using NGC models by NVIDIA, which generously makes models available in both quantity and quality. To make our lives easier, one can check the available models via simple calls such as

`nemo_tts.models.HifiGanModel.list_available_models()`

The above lists the models available within the HiFi-GAN family of NeMo's TTS module; you should be able to get the following response

```
[PretrainedModelInfo(
pretrained_model_name=tts_hifigan,
description=This model is trained on LJSpeech audio sampled at 22050Hz and mel spectrograms generated from Tacotron2, TalkNet, and FastPitch. This model has been tested on generating female English voices with an American accent.,
location=https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_hifigan/versions/1.0.0rc1/files/tts_hifigan.nemo,
class_=<class 'nemo.collections.tts.models.hifigan.HifiGanModel'>
)]
```

CUDA is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs. We load the pretrained models onto the GPU as such

```
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='stt_fr_quartznet15x5').cuda()
nmt_model = nemo_nlp.models.MTEncDecModel.from_pretrained(model_name='nmt_fr_en_transformer12x2').cuda()
spectrogram_generator = nemo_tts.models.FastPitchModel.from_pretrained(model_name='tts_en_fastpitch').cuda()
vocoder = nemo_tts.models.HifiGanModel.from_pretrained(model_name='tts_hifigan').cuda()
```

I will be using the lightbulblanguages website for French audio clips. We can grab any piece of audio with the magical wget, then display the audio we have

```
!wget 'https://www.lightbulblanguages.co.uk/resources/audio/trente.mp3'
audio_sample = 'trente.mp3'
IPython.display.Audio(audio_sample)
```

Running the above, IPython displays an audio player for the clip.

I don’t know about your French knowledge, but the guy is pronouncing numbers from 30 all the way up to 39.

We will now use our ASR model to transcribe audio to text as follows

```
transcribed_text = asr_model.transcribe([audio_sample])
print(transcribed_text)
```

This is what you should be able to see

```
Transcribing: 100%
1/1 [00:01<00:00, 1.32s/it]
['trente trente et un trente deux trente trois trente quatre trente cinq trente six trente sept trente huit trente neuf']
```

We will now use the NMT model to perform French-to-English translation

```
english_text = nmt_model.translate(transcribed_text)
print(english_text)
```

```
['Thirty One Thirty Two Thirty Three Thirty Four Thirty Five Thirty Six Thirty Seven Thirty E']
```

And voilà! Now we have our text translated from French to English.

Now, we shall convert the above text to speech using a 2-step procedure: 1) Text to spectrogram and 2) Spectrogram to audio; which is easily accomplished as

```
parseText = spectrogram_generator.parse(english_text[0])
spectrogram = spectrogram_generator.generate_spectrogram(tokens=parseText)
audio = vocoder.convert_spectrogram_to_audio(spec=spectrogram)
audioOutput = audio.to('cpu').detach().numpy()
```

You can display the audio as

`IPython.display.Audio(audioOutput,rate=22050)`

and we are done.

All in all, we saw how to make use of the NVIDIA NeMo toolkit to build a conversational AI French-to-English translator. You could indeed explore other languages by checking the ones available on NGC, or by training on your own custom data.

Enjoyed this article? Buy me a coffee.


Follow my other courses such as convex optimization.


The post Python monitors my plants using Tuya IoT appeared first on Algorithmic Kitchen.

This is a Python tutorial about monitoring plants. Indeed, it is hard to remember to water plants every week or so, and a reminder can serve us a lot. We're all busy, and forgetting to water our own plants is absolutely ok, lol.

OK, so if you're a visual person, you can ignore this article and watch the tutorial above to follow how to build the plant-monitoring IoT system. If not, feel free to follow this article. Python will monitor our plant thanks to the Tuya API cloud.

So the first thing you need to do is create an account on Tuya using this link. Please check the video for details on how to do that, since the steps are easier to show than to write down; this article will focus on the Python code instead. Please watch the video up to the point where you get your Tuya API key and ID, which define the API credentials and give access to the endpoints.

Now, we’re going to use our good old buddy, “pip” to install the following modules

```
!pip3 install tuya-connector-python
!pip3 install tuya-iot-py-sdk
!pip3 install pycryptodomex
```

As mentioned, after you create your Tuya account you will have access to two important things, the Access ID and Access Key, let’s define them as such

```
ACCESS_ID = "7cvey7v7azt99v7mox7z"
ACCESS_KEY = "79415ad8a4d44a8db80920cb4ed8cd20"
API_ENDPOINT = "https://openapi-weaz.tuyaeu.com"
```

Note that, because I'm being nice to you, I'm sharing my credentials (Access ID and Access Key), but you should never share yours with anyone; otherwise they would be able to control your IoT devices from any part of the world. Also note that, since I'm based in Nice, France, which is in Europe, the endpoint above carries the European region code (tuyaeu).

Now, let’s proceed to connect to our Tuya API cloud via Tuya’s very own TuyaOpenAPI()

```
from tuya_connector import TuyaOpenAPI
openapi = TuyaOpenAPI(API_ENDPOINT,ACCESS_ID,ACCESS_KEY)
openapi.connect()
```

If all goes well, you should be able to see the following JSON response from the API cloud

```
{'result': {'access_token': '22b6aa36107ee30aebd30292dc4f38d7',
'expire_time': 5865,
'refresh_token': '9f76f71089a775bc2d7daf5854b5e740',
'uid': 'bay1632079226744ZEW2'},
'success': True,
't': 1633866895317}
```

Now, just like you define a variable (example: x = 1), we will proceed to define our own devices, namely the RGB LED strip and the humidity/temperature sensor

```
RGB_DEVICE_ID = "68560540807d3a1a9573"
SENSOR_DEVICE_ID = "9be9a7703302248c4esjma"
```

Note that you have to create your devices using Tuya’s platform in order to proceed. Also, I encourage you to watch the video above to know how to exactly do this.

Tuya makes it really easy to get all the functions we have at our disposal using a simple API call. For example, the following call

```
response = openapi.get("/v1.0/iot-03/devices/{}/specification".format(RGB_DEVICE_ID))
print(response)
```

prints the following response

```
{'result': {'category': 'dj',
'functions': [{'code': 'switch_led',
'desc': '{}',
'name': '开关',
'type': 'Boolean',
'values': '{}'},
{'code': 'bright_value',
'desc': '{"min":25,"scale":0,"unit":"","max":255,"step":1}',
'name': '亮度',
'type': 'Integer',
'values': '{"min":25,"scale":0,"unit":"","max":255,"step":1}'},
{'code': 'flash_scene_1',
'desc': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}',
'name': '柔光模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'flash_scene_2',
'desc': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}',
'name': '缤纷模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'flash_scene_3',
'desc': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}',
'name': '炫彩模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'work_mode',
'desc': '{"range":["white","colour"]}',
'name': '工作模式',
'type': 'Enum',
'values': '{"range":["white","colour"]}'},
{'code': 'temp_value',
'desc': '{"min":0,"scale":0,"unit":"","max":255,"step":1}\t',
'name': '色温',
'type': 'Integer',
'values': '{"min":0,"scale":0,"unit":"","max":255,"step":1}\t'},
{'code': 'colour_data',
'desc': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}',
'name': '彩光模式数',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'scene_data',
'desc': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}',
'name': '情景模式数',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'flash_scene_4',
'desc': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}',
'name': '斑斓模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'}],
'status': [{'code': 'bright_value',
'name': '亮度值',
'type': 'Integer',
'values': '{"min":25,"scale":0,"unit":"","max":255,"step":1}'},
{'code': 'colour_data',
'name': '彩光模式数',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'scene_data',
'name': '情景模式数',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'flash_scene_2',
'name': '缤纷模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'switch_led', 'name': '开关', 'type': 'Boolean', 'values': '{}'},
{'code': 'work_mode',
'name': '工作模式',
'type': 'Enum',
'values': '{"range":["white","colour"]}'},
{'code': 'temp_value',
'name': '冷暖值',
'type': 'Integer',
'values': '{"min":0,"scale":0,"unit":"","max":255,"step":1}\t'},
{'code': 'flash_scene_1',
'name': '柔光模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'flash_scene_3',
'name': '炫彩模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'},
{'code': 'flash_scene_4',
'name': '斑斓模式',
'type': 'Json',
'values': '{"h":{"min":1,"scale":0,"unit":"","max":360,"step":1},"s":{"min":1,"scale":0,"unit":"","max":255,"step":1},"v":{"min":1,"scale":0,"unit":"","max":255,"step":1}}'}]},
'success': True,
't': 1633866895551}
```

The way we interpret the above is as follows: For example, the first function is switch_led and it accepts a boolean (either ON/OFF). To discover more functionalities, please refer to the video above.

I will show you a bonus here on how to switch colors of the RGB strip as follows:

```
command = {"commands":[{"code":"colour_data","value":{'h':0,'s':100,'v':40}}]}
openapi.post("/v1.0/iot-03/devices/{}/commands".format(RGB_DEVICE_ID),command)
```

As you can see, all we did is “pack-up” a JSON in a variable called command, where we define the color defined in an hsv basis instead of an rgb one. Feel free to experiment with different colors.

Now that we have the RGB all set up, we will configure it to blink when the plants need watering. But FIRST. How do we know that the plant needs to be watered ? That’s where the humidity sensor comes into play. The humidity sensor is placed on the surface of the soil of the plant. All we need to do now is configure it properly so that we can read its humidity. This is attained as follows:

```
response = openapi.get("/v1.0/iot-03/devices/{}/status".format(SENSOR_DEVICE_ID))
response
```

If done properly, you should receive the following API response

```
{'result': [{'code': 'humidity_current', 'value': 6050},
{'code': 'va_battery', 'value': 100},
{'code': 'temp_current', 'value': 2340}],
'success': True,
't': 1633868016000}
```

which basically reads as follows: Humidity is 60.5%, Battery life is 100% and temperature is 23.4 degrees Celsius. For better parsing, we can do this

```
humidity = response['result'][0]['value'] / 100
battery = response['result'][1]['value']
temperature = response['result'][2]['value'] / 100
print("Humidity = " + str(humidity))
print("Battery = " + str(battery))
print("Temperature = " + str(temperature))
```

which outputs this in our case

```
Humidity = 60.5
Battery = 100
Temperature = 23.4
```
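Indexing `result` by position works, but it silently breaks if the cloud ever reorders the entries; keying by the `code` field is more robust. A small sketch assuming the same response shape as shown above:

```python
# same shape as the API response shown above
response = {'result': [{'code': 'humidity_current', 'value': 6050},
                       {'code': 'va_battery', 'value': 100},
                       {'code': 'temp_current', 'value': 2340}],
            'success': True,
            't': 1633868016000}

# turn the list of {code, value} pairs into a plain dict keyed by code
status = {item['code']: item['value'] for item in response['result']}

humidity = status['humidity_current'] / 100   # 60.5 %
battery = status['va_battery']                # 100 %
temperature = status['temp_current'] / 100    # 23.4 C
```

This way, each reading is looked up by name rather than by its position in the list.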

Now, we are ready to define the main loop to monitor our plants and save our planet! This is where a Python loop could save nature:

```
humidityThreshold = 70   # percent
LedEn = True
sensorReadingTime = 5    # seconds
while True:
    response = openapi.get('/v1.0/iot-03/devices/{}/status'.format(SENSOR_DEVICE_ID))
    humidity = response['result'][0]['value'] / 100
    time.sleep(sensorReadingTime)
    if humidity < humidityThreshold:
        LedEn = False
    else:
        LedEn = True
    commandLEDStrip = {"commands":[{"code":"switch_led","value":LedEn}]}
    openapi.post('/v1.0/iot-03/devices/{}/commands'.format(RGB_DEVICE_ID),commandLEDStrip)
```

So all we did above was set the humidity threshold to 70%: if the humidity sensed by the sensor is below 70%, we turn the RGB strip off as a reminder to water our plants; otherwise it stays on. I really urge you to check the last part of the video for a demo of the above loop.
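Since the whole decision boils down to "LED on iff humidity is at or above the threshold", it can be factored into a tiny pure function, which also makes it easy to test without any hardware. The function name led_enabled is a hypothetical helper, not part of the original video:

```python
HUMIDITY_THRESHOLD = 70  # percent, as in the loop above

def led_enabled(humidity, threshold=HUMIDITY_THRESHOLD):
    # LED off (False) is the reminder to water the plant
    return humidity >= threshold

# dry soil -> LED off, wet soil -> LED on
print(led_enabled(60.5))  # False
print(led_enabled(80.0))  # True
```

The main loop then only needs to read the sensor and post `led_enabled(humidity)` as the switch_led value.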

Subscribe to my channel to support us! We are very close to 100K subscribers <3 w/ love. The algorithmic chef – Ahmad Bazzi. Click here to subscribe. Check my other articles here.

The post Python monitors my plants using Tuya IoT appeared first on Algorithmic Kitchen.

The post Finite Difference Approximation appeared first on Algorithmic Kitchen.

This is not an article, but rather a friendly post reminding the interested reader of the finite difference method. Remember back in school when you learned about derivatives through the following formula

f'(x) = lim_{h -> 0} [f(x + h) - f(x)] / h

Well, the finite difference approximation says that we don't need "to take things too extreme", in the sense that we can drop the limit above and agree on the approximation

f'(x) ≈ [f(x + h) - f(x)] / h

but keeping in mind that h is very small, i.e. small enough (that "small enough" is what controls how good of an approximation we have).
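The quality of this approximation is easy to check numerically. A quick Python sketch (not part of the original post; fd_derivative is a hypothetical helper name):

```python
def fd_derivative(f, x, h=1e-6):
    # forward finite difference: f'(x) ~= (f(x + h) - f(x)) / h
    return (f(x + h) - f(x)) / h

# the derivative of x^2 at x = 3 is exactly 6;
# the approximation error is on the order of h
approx = fd_derivative(lambda t: t**2, 3.0)
print(abs(approx - 6.0))
```

Shrinking h improves the approximation, up to the point where floating-point cancellation in f(x + h) - f(x) starts to dominate.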

Finite differences appear all over the place when dealing with differential equations, one of which is the heat equation. Another application of interest is the Secant method, where one bypasses the knowledge of the derivative of f in Newton's method, thanks to this approximation.

How does the finite difference approximation generate the Secant method? Well, at each iteration of Newton's method, we could struggle with the computation of f'(x_n). Using the above finite difference approximation, we have

f'(x_n) ≈ [f(x_n + h) - f(x_n)] / h

where h is "small enough". Let's define our "small enough" factor as the signed interval length h = x_{n-1} - x_n, so that we get

f'(x_n) ≈ [f(x_{n-1}) - f(x_n)] / (x_{n-1} - x_n),

which reads as the slope of the secant line through the two latest iterates. For more on the Secant method, visit my article here.

Do you like the post? Buy me a coffee using the link below.


The post Secant method: More Approximations with less Information appeared first on Algorithmic Kitchen.

The Secant method and its MATLAB implementation are detailed in this article. We try to connect the dots between Newton's method and the Secant method.

As in my previous posts on the fixed point method and the bisection method, the goal here is to find a root of f, i.e. to solve f(x) = 0.

The Secant method takes a step forward: it applies one more approximation to the update form we already have, in order to avoid requiring the derivative of f.

If you recall from my previous article on Newton's method, the Newton updates go as such:

x_{n+1} = x_n - f(x_n) / f'(x_n).

The Secant method "doesn't like" the fact that Newton needs to know the derivative of f. So, to avoid this, the Secant method applies a finite difference approximation to the derivative term in Newton's method. At each iteration of Newton's method, we could struggle with the computation of f'(x_n). Using the finite difference approximation, we have

f'(x_n) ≈ [f(x_n + h) - f(x_n)] / h

where h is "small enough".

Afterwards, let's define our "small enough" factor as the signed interval length h = x_{n-1} - x_n, so that we get

f'(x_n) ≈ [f(x_{n-1}) - f(x_n)] / (x_{n-1} - x_n),

which reads

f'(x_n) ≈ [f(x_n) - f(x_{n-1})] / (x_n - x_{n-1}).

Now, replacing this in Newton's step, that is

x_{n+1} = x_n - f(x_n) / f'(x_n),

we get

x_{n+1} = x_n - f(x_n) (x_n - x_{n-1}) / [f(x_n) - f(x_{n-1})],

or simply

x_{n+1} = [x_{n-1} f(x_n) - f(x_{n-1}) x_n] / [f(x_n) - f(x_{n-1})],

so that Newton's method now becomes the Secant method.
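The derived update can be sanity-checked in a few lines of Python (a sketch, not from the lecture; secant_step is a hypothetical helper name). Solving x^2 - 2 = 0 should converge to sqrt(2):

```python
def secant_step(f, x_prev, x_curr):
    # x_{n+1} = (x_{n-1} f(x_n) - f(x_{n-1}) x_n) / (f(x_n) - f(x_{n-1}))
    return (x_prev * f(x_curr) - f(x_prev) * x_curr) / (f(x_curr) - f(x_prev))

f = lambda x: x**2 - 2
x_prev, x_curr = 1.0, 2.0   # two starting points
for _ in range(6):
    x_prev, x_curr = x_curr, secant_step(f, x_prev, x_curr)
print(x_curr)  # close to sqrt(2) = 1.41421356...
```

As in the MATLAB code below, a production version should guard against the denominator f(x_n) - f(x_{n-1}) becoming tiny near convergence.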

This is easily achieved in MATLAB. First and foremost, let's write a secant function so that we can utilize it in the main script.

```
function [x,error] = secant(f,x0,Niter)
% Input:
% f - input function
% x0 - current iteration value (call it x_0)
% Niter - number of iterations
% Output:
% x - vector x containing all iteration values x = [x0 x1 x2 .... xNiter]
% error - error vector containing all iteration values error = [|x1 - x0|, |x2 - x1| ... |xNiter - xNiter-1|]
```

Note that x0 in the above is an initial guess.

Now let's write the body of the secant function. Please note that, in the Secant method's expression, we should not allow the denominator to become very small; otherwise, we will run into division-by-zero errors. To avoid this, we test the magnitude of the denominator against a user-predefined tolerance value.

```
function [x,error] = secant(f,x0,Niter)
% Input:
%   f     - input function
%   x0    - initial guess (call it x_0)
%   Niter - number of iterations
% Output:
%   x     - vector containing all iterates x = [x0 x1 x2 ... xNiter]
%   error - error vector: error = [|x1 - x0|, |x2 - x1|, ..., |xNiter - xNiter-1|]
tolerance = 1e-10;
error = [];
x(1) = x0;
% the second starting point is hardcoded to 0 here
denominator = (f(x(1)) - f(0));
if abs(denominator) < tolerance
    disp('Early termination')
else
    x(2) = (0*f(x(1)) - f(0)*x(1))/denominator;
    error(1) = abs(x(1) - x(2));
    for n = 1:(Niter-1)
        denominator = (f(x(n+1)) - f(x(n)));
        if abs(denominator) < tolerance
            disp('Early termination')
            break;
        else
            x(n+2) = (x(n)*f(x(n+1)) - f(x(n))*x(n+1))/denominator;
            error(n+1) = abs(x(n+1) - x(n+2));
        end
    end
end
```

Now, let's proceed to implement the main script for testing. As we did with the Bisection method, we will re-use the same main script as before:

```
Niter = 20;
x0 = 1;   % initial guess (assumed value, re-used from the previous articles)
```

Note that in the above, we have assumed a stopping criterion of a maximum number of iterations instead of a minimum tolerance. Next, we can proceed to define some functions to test, for example:


```
%given functions
f1 = @(x)x^3 - 2*x -5;
f2 = @(x)exp(-x) - x;
f3 = @(x)x*sin(x) -1;
f4 = @(x)x^3 - 3*x^2 +3*x - 1;
```

We can now solve each function using

```
% solve via secant
[x_1_secant,error_1_secant] = secant(f1,x0+1,Niter);
[x_2_secant,error_2_secant] = secant(f2,x0+1,Niter);
[x_3_secant,error_3_secant] = secant(f3,x0+1,Niter);
[x_4_secant,error_4_secant] = secant(f4,x0+50,Niter);
```

For comparison's sake, we will also check whether we converged properly or not. We can either use the error vector output by our secant function, or we can use MATLAB's fsolve function as

```
%solving using a library routine
x_1 = fsolve(f1,1);
x_2 = fsolve(f2,1);
x_3 = fsolve(f3,1);
x_4 = fsolve(f4,1);
```

Right now, we are ready to plot

```
figure
subplot(2,2,1)
plot(x_1_secant,'m','Linewidth',1)
hold on
plot(x_1*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $x^3 - 2x - 5 = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
subplot(2,2,2)
plot(x_2_secant,'m','Linewidth',1)
hold on
plot(x_2*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $e^{-x} - x = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
subplot(2,2,3)
plot(x_3_secant,'m','Linewidth',1)
hold on
plot(x_3*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $x \sin(x) - 1 = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
subplot(2,2,4)
plot(x_4_secant,'m','Linewidth',1)
hold on
plot(x_4*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $x^3 - 3x^2 + 3x - 1 = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
```

Our complete main function looks like this

```
Niter = 20;
x0 = 1;   % initial guess (assumed value, re-used from the previous articles)
%given functions
f1 = @(x)x^3 - 2*x - 5;
f2 = @(x)exp(-x) - x;
f3 = @(x)x*sin(x) - 1;
f4 = @(x)x^3 - 3*x^2 + 3*x - 1;
% solve via secant
[x_1_secant,error_1_secant] = secant(f1,x0+1,Niter);
[x_2_secant,error_2_secant] = secant(f2,x0+1,Niter);
[x_3_secant,error_3_secant] = secant(f3,x0+1,Niter);
[x_4_secant,error_4_secant] = secant(f4,x0+50,Niter);
%solving using a library routine
x_1 = fsolve(f1,1);
x_2 = fsolve(f2,1);
x_3 = fsolve(f3,1);
x_4 = fsolve(f4,1);
figure
subplot(2,2,1)
plot(x_1_secant,'m','Linewidth',1)
hold on
plot(x_1*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $x^3 - 2x - 5 = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
subplot(2,2,2)
plot(x_2_secant,'m','Linewidth',1)
hold on
plot(x_2*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $e^{-x} - x = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
subplot(2,2,3)
plot(x_3_secant,'m','Linewidth',1)
hold on
plot(x_3*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $x \sin(x) - 1 = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
subplot(2,2,4)
plot(x_4_secant,'m','Linewidth',1)
hold on
plot(x_4*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving $x^3 - 3x^2 + 3x - 1 = 0$','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor
```

In this article, we have detailed the Secant method, along with why it looks the way it does. Moreover, we walked through the MATLAB code so as to explain each and every block appearing in our YouTube lecture.

Follow my other courses such as convex optimization.

Buy me a cup of coffee using the donate link below

PS: I'm on Twitter. I retweet content about algorithms, Python, MATLAB, and mathematical optimization, mostly convex.

PPS: We are so close to 100K subscribers on YouTube. It would mean so much if you could share the channel and subscribe to help us keep going.

