
Secant Method: More Approximations with Less Information

Lesson 4: Secant Method – Numerical Analysis and Methods

Secant Method: What you need to know

The secant method and its MATLAB implementation are detailed in this article. We also connect the dots between Newton's method and the secant method.

As in my previous posts on the fixed point method and the bisection method, the goal here is to find a root of

    \[f(x) = 0\]

The secant method takes a step further: it applies an additional approximation to f(x) in order to avoid requiring knowledge of the derivative of f(x).

How Secant Method Works

If you recall from my previous article on Newton's method, the Newton updates read

    \[x_{n+1} = x_{n} - \frac{f(x_n)}{f'(x_n)}\]
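To make the role of the derivative concrete, here is a quick Python sketch of the Newton update; the function names and the test problem f(x) = x^2 - 2 are my own illustrative choices, not from the article:

```python
import math

def newton(f, fprime, x0, n_iter):
    """Run n_iter Newton updates x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / fprime(x)  # each step needs the derivative explicitly
    return x

# root of x^2 - 2 is sqrt(2); note we must supply f'(x) = 2x by hand
root = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0, 6)
```

Notice that the caller must supply f'(x) explicitly; this is exactly the requirement the secant method removes.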

The secant method "doesn't like" the fact that Newton needs to know the derivative of f(x). To avoid this, it replaces the derivative term in Newton's method with a finite difference approximation. How does the finite difference approximation generate the secant method? At each iterate x_n of Newton's method, computing f'(x_n) may be difficult or expensive. Using a forward finite difference, we have that

    \[f'(x_n) \simeq \frac{f(x_n+h) - f(x_n)}{h}\]

where h is “small enough”.
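As a quick numerical illustration (in Python, with my own example f = sin, whose derivative is cos), the forward-difference quotient approaches the true derivative as h shrinks:

```python
import math

def forward_diff(f, x, h):
    """Forward-difference approximation (f(x+h) - f(x)) / h of f'(x)."""
    return (f(x + h) - f(x)) / h

# compare against the exact derivative cos(x) at x = 1 for shrinking h
x = 1.0
errors = [abs(forward_diff(math.sin, x, h) - math.cos(x)) for h in (1e-1, 1e-3, 1e-5)]
```

The error of this approximation is proportional to h, which is why the quotient is a faithful stand-in for f'(x) once h is small.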

Now let's choose the "small enough" factor to be the signed interval length h = x_{n+1} - x_n, so that we get

    \[f'(x_n) \simeq \frac{f(x_n+x_{n+1} - x_n) - f(x_n)}{x_{n+1} - x_n}\]

which reads

    \[f'(x_n) \simeq \frac{f(x_{n+1} ) - f(x_n)}{x_{n+1} - x_n}\]

Note that this difference quotient is precisely the slope of the secant line through the points (x_n, f(x_n)) and (x_{n+1}, f(x_{n+1})) — hence the method's name — so it serves just as well as an approximation of f'(x_{n+1}). Replacing the derivative in Newton's step at x_{n+1}, that is

    \[\Delta_{n+1} = - \frac{f(x_{n+1})}{f'(x_{n+1})}\]

we get

    \[\Delta_{n+1} \simeq - \frac{f(x_{n+1})}{\frac{f(x_{n+1}) - f(x_n)}{x_{n+1} - x_n}}\]

or simply

    \[\Delta_{n+1} \simeq - \frac{f(x_{n+1})(x_{n+1} - x_n)}{f(x_{n+1}) - f(x_n)}\]

so that Newton's update x_{n+2} = x_{n+1} + \Delta_{n+1} becomes the secant method, i.e.

    \[x_{n+2} \simeq x_{n+1} - \frac{f(x_{n+1})(x_{n+1} - x_n)}{f(x_{n+1}) - f(x_n)}\]
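This update can be sketched in a few lines of Python; the helper names and the test problem f(x) = x^3 - 2x - 5 (which also appears as a test function later in this article) are illustrative choices:

```python
def secant_step(x_prev, x_curr, f):
    """One secant update: x_{n+2} = x_{n+1} - f(x_{n+1})(x_{n+1}-x_n)/(f(x_{n+1})-f(x_n))."""
    return x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))

def secant_iterate(f, x0, x1, n_iter):
    """Run up to n_iter secant updates from the two starting points x0, x1."""
    for _ in range(n_iter):
        if f(x1) == f(x0):  # guard against a vanishing denominator near convergence
            break
        x0, x1 = x1, secant_step(x0, x1, f)
    return x1

# root of x^3 - 2x - 5 lies between 2 and 3
root = secant_iterate(lambda x: x**3 - 2 * x - 5, 2.0, 3.0, 10)
```

Note that, unlike Newton's method, the iteration needs two starting points but never evaluates a derivative.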

MATLAB implementation of Secant method

This is easily achieved in MATLAB. First and foremost, let's write a secant function interface so that we can call it from our main.m script

function [x,error] = secant(f,x0,Niter)

% Input:
% f      - input function handle
% x0     - initial guess (call it x_0)
% Niter  - number of iterations
% Output:
% x      - vector containing all iterates, x = [x0 x1 x2 ... xNiter]
% error  - vector of successive differences, error = [|x1 - x0|, |x2 - x1|, ..., |xNiter - xNiter-1|]

Note that x_0 above is the user's initial guess; since the secant iteration needs two starting points, the function will internally use 0 as the second one.

Now let's write the body of the secant function. Note that in the secant update we must not allow the denominator to become very small; otherwise, we will run into division-by-zero errors. To avoid this, we test the magnitude of the denominator against a pre-defined tolerance value.

function [x,error] = secant(f,x0,Niter)

% Input:
% f      - input function handle
% x0     - initial guess (call it x_0)
% Niter  - number of iterations
% Output:
% x      - vector containing all iterates, x = [x0 x1 x2 ... xNiter]
% error  - vector of successive differences, error = [|x1 - x0|, |x2 - x1|, ..., |xNiter - xNiter-1|]
tolerance = 1e-10;   % guard against division by an (almost) zero denominator
error = [];
x(1) = x0;
% the secant iteration needs two starting points; here we pair x0 with 0
denominator = (f(x(1))-f(0));
if abs(denominator) < tolerance
    disp('Early termination')
else
    x(2) = (0*f(x(1)) - f(0)*x(1))/denominator;
    error(1) = abs(x(1) - x(2));
    
    for n = 1:(Niter-1)
        denominator = (f(x(n+1))-f(x(n)));
        if abs(denominator) < tolerance
            disp('Early termination')
            break;
        else
            % secant update, written in its algebraically equivalent "cross" form
            x(n+2) = (x(n)*f(x(n+1)) - f(x(n))*x(n+1))/denominator;
            error(n+1) = abs(x(n+1) - x(n+2));
        end
    end
end
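The function above stops after a fixed number of iterations. A common alternative is to stop once successive iterates agree to within a tolerance; here is a minimal Python sketch of that variant (the name secant_tol and the default values are my own choices, not part of the MATLAB code above):

```python
def secant_tol(f, x0, x1, tol=1e-10, max_iter=100):
    """Secant iteration that stops when successive iterates agree to within tol,
    with the same small-denominator guard as the MATLAB version."""
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if abs(denom) < tol:          # early termination: denominator too small
            break
        x2 = x1 - f(x1) * (x1 - x0) / denom
        if abs(x2 - x1) < tol:        # converged: iterates no longer move
            return x2
        x0, x1 = x1, x2
    return x1

import math
# root of exp(-x) - x, one of the test functions used later in this article
root = secant_tol(lambda x: math.exp(-x) - x, 0.0, 1.0)
```

Both stopping rules are defensible; a fixed iteration count makes run time predictable, while a tolerance ties termination to the accuracy actually achieved.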

Now, let's proceed to implement the main script for testing. As we did with the bisection method, we will reuse the same previous main function

x0 = 0; % initial guess, as in the previous posts
Niter = 20;

Note that in the above, we have used a maximum number of iterations as the stopping criterion instead of a minimum tolerance. Next, we can proceed to defining some functions to test, for example:

%given functions
f1 = @(x)x^3 - 2*x -5; 
f2 = @(x)exp(-x) - x;
f3 = @(x)x*sin(x) -1;
f4 = @(x)x^3 - 3*x^2 +3*x - 1;



We can now solve each function using

% solve via secant
[x_1_secant,error_1_secant] = secant(f1,x0+1,Niter);
[x_2_secant,error_2_secant] = secant(f2,x0+1,Niter);
[x_3_secant,error_3_secant] = secant(f3,x0+1,Niter);
[x_4_secant,error_4_secant] = secant(f4,x0+50,Niter);

For comparison's sake, we will also check whether we converged properly. We can either use the error vector returned by our secant function, or use MATLAB's fsolve function as

%solving using a library routine
x_1 = fsolve(f1,1);
x_2 = fsolve(f2,1);
x_3 = fsolve(f3,1);
x_4 = fsolve(f4,1);
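If fsolve is unavailable (it requires the Optimization Toolbox), one can sanity-check the result against a plain bisection, as covered in the earlier post. A minimal Python sketch on the first test function, with the bracket [2, 3] chosen by hand:

```python
def bisection(f, a, b, n_iter=60):
    """Plain bisection as an independent reference root finder;
    assumes f(a) and f(b) have opposite signs."""
    for _ in range(n_iter):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:  # root lies in the left half-interval
            b = m
        else:                 # root lies in the right half-interval
            a = m
    return (a + b) / 2

# f1(x) = x^3 - 2x - 5 changes sign on [2, 3]
ref = bisection(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
```

Bisection is far slower than the secant method (one bit of accuracy per step), but its guaranteed convergence on a sign-changing bracket makes it a reliable cross-check.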

Now we are ready to plot the results

figure
subplot(2,2,1)
plot(x_1_secant,'m','Linewidth',1)
hold on
plot(x_1*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = x^3 - 2x - 5','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor

subplot(2,2,2)
plot(x_2_secant,'m','Linewidth',1)
hold on
plot(x_2*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = e^{-x} - x','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor


subplot(2,2,3)
plot(x_3_secant,'m','Linewidth',1)
hold on
plot(x_3*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = xsin(x) - 1','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor

subplot(2,2,4)
plot(x_4_secant,'m','Linewidth',1)
hold on
plot(x_4*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = x^3 - 3x^2  + 3x - 1','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor

Our complete main function looks like this

x0 = 0; % initial guess, as in the previous posts
Niter = 20;

%given functions
f1 = @(x)x^3 - 2*x -5; 
f2 = @(x)exp(-x) - x;
f3 = @(x)x*sin(x) -1;
f4 = @(x)x^3 - 3*x^2 +3*x - 1;

% solve via secant
[x_1_secant,error_1_secant] = secant(f1,x0+1,Niter);
[x_2_secant,error_2_secant] = secant(f2,x0+1,Niter);
[x_3_secant,error_3_secant] = secant(f3,x0+1,Niter);
[x_4_secant,error_4_secant] = secant(f4,x0+50,Niter);


%solving using a library routine
x_1 = fsolve(f1,1);
x_2 = fsolve(f2,1);
x_3 = fsolve(f3,1);
x_4 = fsolve(f4,1);

figure
subplot(2,2,1)
plot(x_1_secant,'m','Linewidth',1)
hold on
plot(x_1*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = x^3 - 2x - 5','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor

subplot(2,2,2)
plot(x_2_secant,'m','Linewidth',1)
hold on
plot(x_2*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = e^{-x} - x','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor


subplot(2,2,3)
plot(x_3_secant,'m','Linewidth',1)
hold on
plot(x_3*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = xsin(x) - 1','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor

subplot(2,2,4)
plot(x_4_secant,'m','Linewidth',1)
hold on
plot(x_4*ones(Niter,1),'g--','Linewidth',2)
xlabel('Iteration number n','Interpreter','latex')
ylabel('x','Interpreter','latex')
title('Solving f(x) = x^3 - 3x^2  + 3x - 1','Interpreter','latex')
legend('Secant','Library Routine','Interpreter','latex')
grid on
grid minor

(Figure: secant iterates vs. the fsolve solution for each of the four test functions.)

Summary

In this article, we have derived the secant method and explained why it takes the form it does. Moreover, we walked through the MATLAB code block by block, following our YouTube lecture.

Follow my other courses such as convex optimization.

Buy me a cup of coffee using the donate link below 😊☕️

PS: I’m on twitter. I retweet stuff around algorithms, python, MATLAB and mathematical optimization, mostly convex.

PPS: We are so close to 100K subscribers on YouTube. It would mean so much if you could share and subscribe to help sustain the channel.