
Fixed Point Iteration: Solving “almost” any Equation!

Lesson 1: Fixed Point Method – Numerical Analysis and Methods

Fixed Point Method: What you need to know

In this article, I will clarify how fixed point iteration works by example. This article could be considered as lecture notes for my YouTube lecture above. The one and only thing to keep in mind is that fixed point iteration solves almost any equation (given some conditions, which we state in the fixed point theorem below) of the form

    \[f(x) = 0\]


Firstly, the magical trick here is that no matter how ugly your function looks, we can almost always rewrite the equation f(x) = 0 as x = g(x). For example, x^2 - x - 2 = 0 can be written as x = x^2 - 2.
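To make the trick concrete, here is a tiny Python sanity check (my own sketch, not part of the original lecture), using the function f(x) = x^2 - x - 2 studied throughout this article: every root of f is, by construction, a fixed point of g.

```python
# f(x) = 0 rewritten as x = g(x): a root of f must satisfy g(root) = root.
def f(x):
    return x**2 - x - 2

def g(x):
    return x**2 - 2

roots = [2, -1]
print([(f(r), g(r)) for r in roots])  # [(0, 2), (0, -1)]: f vanishes, g fixes each root
```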

Fixed Point Method in Action

OK, so now what? Well, the fixed point method states that, under some conditions (see the fixed point theorem below), we can converge to a root of f(x) just by iterating as follows

    \[x_{n+1} = g(x_n)\]


Yes: if we start with a point x_0, then evaluate x_1 = g(x_0), then x_2 = g(x_1), and so on, we can eventually converge to x^* where f(x^*)=0.
One of the most crucial factors that affects the speed of convergence (if any) is the choice of g(x).
Indeed, we can rewrite the same equation f(x)=0 using many (sometimes infinitely many) choices of x=g(x). In the next section, I will show four different choices of g(x).

Choice of g(x)

Indeed, for the function

    \[f(x) = x^2 - x - 2 = 0\]


We can manipulate it in multiple ways. For instance, moving x to the other side yields

    \[x = g_1(x) = x^2 - 2\]


as one possible fixed-point iteration method. Another possibility is isolating x^2 as

    \[x^2 = x + 2\]


then taking the square root as

    \[x = g_2(x) = \sqrt{ x + 2 }\]


and so on. Other possibilities are indeed

    \[g_3(x) = 1 + \frac{2}{x}\]


and

    \[g_4(x) = \frac{x^2 + 2}{2x - 1}\]


So, the main idea of fixed point iteration is the following: given an initial guess x_0, iterate

    \[x_k = g(x_{k-1})\]


So, let’s work out g_1(x) by hand. Take x_0 = 4 as the initial guess and let’s find x_1,

    \[x_1 = g_1(x_0) = x_0^2 - 2 = 4^2 - 2 = 16 - 2 = 14\]


At the 2^{nd} iteration, we similarly obtain the value of x_2,

    \[x_2 = g_1(x_1) = x_1^2 - 2 = 14^2 - 2 = 196 - 2 = 194\]


and so on until convergence, if any! The reason we say “if any” is the following: the roots of f(x) = x^2 - x - 2 are 2 and -1, yet at each iteration, fixed point iteration using g_1(x) moves further away from both roots. In the next section, we state conditions that guarantee convergence.
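The hand computation above is easy to reproduce programmatically. Here is a minimal Python sketch (the function name g1 is mine) of the same iteration from x_0 = 4:

```python
# Iterate x_{n+1} = g1(x_n) with g1(x) = x^2 - 2, starting from x0 = 4.
def g1(x):
    return x**2 - 2

x = 4
seq = [x]
for _ in range(3):
    x = g1(x)
    seq.append(x)

print(seq)  # [4, 14, 194, 37634] -- racing away from the roots 2 and -1
```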

Fixed Point Theorem: When do we converge?

The Fixed Point Theorem states the following:

Let g(x) be a fixed point function continuous on [a,b] with g(x) \in [a,b] for all x \in [a,b]. Suppose that the derivative g'(x) also exists on [a,b], and that there is a constant 0 < k < 1 such that \vert g'(x) \vert \le k for all x \in [a,b]. Then for any initial guess x_0 in [a,b], the sequence x_k = g(x_{k-1}) converges to the unique fixed point p in [a,b].
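To see the theorem in action, consider g_2(x) = \sqrt{x+2} on [1,3]: g_2 maps [1,3] into [\sqrt{3}, \sqrt{5}] \subset [1,3], and \vert g_2'(x) \vert \le \frac{1}{2\sqrt{3}} < 1 there, so the theorem guarantees convergence to the fixed point 2 from any start in [1,3]. A quick Python sketch (mine, for illustration):

```python
import math

def g2(x):
    return math.sqrt(x + 2)

# From any starting point in [1, 3], the iterates approach the fixed point x* = 2.
finals = []
for x0 in (1.0, 1.7, 2.5, 3.0):
    x = x0
    for _ in range(40):
        x = g2(x)
    finals.append(x)

print(finals)  # four values, all numerically equal to 2
```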

This is a super powerful theorem stating that wherever you start in [a,b], you will converge. Now, let us differentiate g_1(x) and see what we get.

    \[g_1'(x) = 2x\]


Remember that the roots of f(x) are -1 and 2. Let us focus on x = 2. No matter what interval [a,b] containing 2 you choose (2 \in [a,b]), you cannot ensure that \vert g_1'(x) \vert < 1, simply because \vert g_1'(2) \vert = 4 > 1. Therefore you cannot converge to 2 using fixed point iteration with g_1(x). One might ask, “what if I pick [0, \frac{1}{2}]?” There, \vert g_1'(x) \vert \le 1 does hold, but the theorem still does not apply: g_1 maps [0, \frac{1}{2}] to [-2, -\frac{7}{4}], outside the interval, and in any case [0, \frac{1}{2}] contains no root of f(x), so there is nothing interesting to converge to.
Now, let’s do g_2(x) = \sqrt{x + 2}. Differentiating, we get

    \[g_2'(x) = \frac{1}{2\sqrt{x+2}}\]


Is \vert g_2'(2) \vert < 1? Indeed, \vert g_2'(2) \vert = \frac{1}{4} < 1, so you can converge to 2. At this point, it should be clear how to tackle \vert g_3'(2) \vert and \vert g_4'(2) \vert.
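For reference, here is a small Python sketch (mine) tabulating \vert g_k'(2) \vert for all four choices; only the g_1 iteration violates the condition at the root.

```python
# |g_k'(2)| for each choice of g; convergence to 2 needs this to be < 1 near the root.
derivs = {
    "g1": abs(2 * 2),                                       # g1'(x) = 2x
    "g2": abs(1 / (2 * (2 + 2) ** 0.5)),                    # g2'(x) = 1/(2*sqrt(x+2))
    "g3": abs(-2 / 2**2),                                   # g3'(x) = -2/x^2
    "g4": abs((2 * 2**2 - 2 * 2 - 4) / (2 * 2 - 1) ** 2),   # g4'(x) = (2x^2-2x-4)/(2x-1)^2
}
print(derivs)  # {'g1': 4, 'g2': 0.25, 'g3': 0.5, 'g4': 0.0}
```

Note that \vert g_4'(2) \vert = 0: g_4 is exactly the Newton update x - f(x)/f'(x) for f(x) = x^2 - x - 2, which is why it converges fastest near the root.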

MATLAB implementation of Fixed Point Iteration method

This is easily achieved in MATLAB. First, let’s define the total number of fixed point iterations and a starting point to run our fixed point iteration method:

Niter = 20; % total number of iterations
x = 0;      % starting point

followed by the g(x) functions we would like to test, i.e. g_1(x) = x^2 - 2, g_2(x) = \sqrt{x+2}, g_3(x) = 1 + \frac{2}{x}, and g_4(x) = \frac{x^2 + 2}{2x - 1}

g1 = @(x) x^2 - 2;  
g2 = @(x) sqrt(x+2);
g3 = @(x) 1 + (2/x);
g4 = @(x) (x^2 + 2)/(2*x - 1);

Now that we have the functions, let’s compute the absolute values of the derivatives at the starting point to check whether the fixed point theorem’s condition holds there:

dg1 = abs(2*x);
dg2 = abs(1/(2*sqrt(x+2)));
dg3 = abs(2/x^2);
dg4 = abs( (2*x^2 - 2*x - 4)/((2*x-1)^2) );

At this point of the code, we have the derivatives as well as the starting point, which lets us say something about convergence. Strictly speaking, the theorem requires \vert g'(x) \vert < 1 on a whole interval containing the fixed point, so checking only at x_0 is a quick heuristic rather than a full guarantee; with that caveat, if \vert g_1'(x_0) \vert < 1 we expect convergence, else we do not, hence:

if dg1 < 1
   disp(['Fixed point is GUARANTEED to converge for x = ',num2str(x),' and g1(x)']) 
else
   disp(['Fixed point is NOT GUARANTEED to converge for x = ',num2str(x),' and g1(x)']) 
end

We will replicate the above block for the other g(x) functions as

if dg2 < 1
   disp(['Fixed point is GUARANTEED to converge for x = ',num2str(x),' and g2(x)']) 
else
   disp(['Fixed point is NOT GUARANTEED to converge for x = ',num2str(x),' and g2(x)']) 
end

if dg3 < 1
   disp(['Fixed point is GUARANTEED to converge for x = ',num2str(x),' and g3(x)']) 
else
   disp(['Fixed point is NOT GUARANTEED to converge for x = ',num2str(x),' and g3(x)']) 
end

if dg4 < 1
   disp(['Fixed point is GUARANTEED to converge for x = ',num2str(x),' and g4(x)']) 
else
   disp(['Fixed point is NOT GUARANTEED to converge for x = ',num2str(x),' and g4(x)']) 
end

Good. At this point of the code, we get display messages stating convergence information for each of the four choices of g(x). Now we are ready to run fixed point iteration for each g(x) and see what we get. For example, for g_1(x):

x = 0;
for n = 1:Niter
    f1(n) = x; % you can omit this line, here I am just saving all values of x for plotting, that's all :) ! 
    x = g1(x); % run fixed point iteration
end

We will do the same thing for the other functions we have as follows:

x = 0;
for n = 1:Niter
    f2(n) = x;
    x = g2(x);
end
x = 0;
for n = 1:Niter
    f3(n) = x;
    x = g3(x);
end
x = 0;
for n = 1:Niter
    f4(n) = x;
    x = g4(x);
end

We now have the fixed point iterates for each g_k(x), k = 1,2,3,4, so we are ready to summarise everything in one plot using the following plotting code:

figure
plot(f1,'r','LineWidth',2)
hold on
plot(f2,'m','LineWidth',2)
plot(f3,'k','LineWidth',2)
plot(f4,'b','LineWidth',2)
plot(2*ones(Niter,1),'g--','LineWidth',2)
legend('g_1(x)','g_2(x)','g_3(x)','g_4(x)','True root','Interpreter','latex','FontSize',25)
xlabel('Iteration number n','Interpreter','latex','FontSize',25)
ylabel('x_n','Interpreter','latex','FontSize',25)
grid on
grid minor
Figure: behaviour of x_{n+1} = g(x_n) for different choices of g(x).

Follow my other courses such as convex optimization.


PS: I’m on twitter. I retweet stuff around algorithms, python, MATLAB and mathematical optimization, mostly convex.

PPS: We are so close to 100K subscribers on YouTube. It would mean so much if you could share the channel and subscribe to help us sustain it.