Finding the global optimum of a stalagmite function using a genetic algorithm

Objective: Try to understand the concept of the genetic algorithm by referring to various sources before you attempt this challenge. Look for information about the stalagmite function and ways to optimise it.

  • Write code in MATLAB to optimise the stalagmite function and find the global maximum of the function.
  • Clearly explain the concept of the genetic algorithm in your own words, and also explain the syntax for ga in MATLAB in your report.
  • Make sure that your code gives the same output, even if it is run several times.
  • Plot graphs for all 3 studies and for F maximum vs. number of iterations. Titles and axis labels are a must; legends may be shown if necessary.

 Under references, mention the literature or online material you have used during research.

If you are not able to finish the challenge, upload whatever you have done and explain the code and plots. Marks will be allotted accordingly.

Brief on Genetic Algorithm:

A genetic algorithm (GA) is a method for solving both constrained and unconstrained optimization problems based on a natural selection process that mimics biological evolution. The algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm randomly selects individuals from the current population and uses them as parents to produce the children for the next generation. Over successive generations, the population "evolves" toward an optimal solution.

You can apply the genetic algorithm to solve problems that are not well suited for standard optimization algorithms, including problems in which the objective function is discontinuous, nondifferentiable, stochastic, or highly nonlinear.

The genetic algorithm differs from a classical, derivative-based, optimization algorithm in two main ways, as summarized in the following table.

Classical Algorithm vs. Genetic Algorithm:

  • Classical algorithm: generates a single point at each iteration; the sequence of points approaches an optimal solution. Genetic algorithm: generates a population of points at each iteration; the best point in the population approaches an optimal solution.
  • Classical algorithm: selects the next point in the sequence by a deterministic computation. Genetic algorithm: selects the next population by a computation which uses random number generators.

Role of Genetic Algorithms in Global Optimization:

Evolutionary algorithms are population-based optimization algorithms inspired by the principles of natural evolution. When compared with other optimization algorithms, the advantage of evolutionary algorithms is that they do not make limiting assumptions about the underlying objective functions. The objective functions are treated as black-box functions. Furthermore, the definition of objective functions does not require significant insight into the structure of the design space.


Darwin’s theory of evolution identified the principles of natural selection and survival of the fittest as driving forces behind biological evolution. His theory can be summarized as follows.

Variation
There is variation between individuals in a population.


Competition
Resources are limited. In such an environment, there will be a struggle for survival among individuals.


Offspring
Species have great fertility. They produce more offspring than can grow to adulthood.


Genetics
Organisms pass genetic traits to their offspring.


Natural selection
Those individuals with the most beneficial traits are more likely to survive and reproduce.

Evolutionary algorithms are based on the principles of biological evolution described in Darwin's theory. Genetic Algorithms (GAs) are a class of evolutionary algorithms, also known as population-based metaheuristic optimization algorithms. GA was first conceived by J. H. Holland in the 1970s. GA uses a population of solutions, whose individuals are represented in the form of chromosomes. The individuals in the population go through a process of simulated evolution to obtain the global optimum.

The GA repeatedly modifies a set of solutions or individuals in the course of its entire run. At each iteration, the genetic algorithm selects individuals from the current population to serve as parents based on certain criteria. The parents are then used to create the next generation of individuals, called children. Over successive generations, the population evolves toward an optimal solution or a Pareto frontier, depending on the type of problem and the type of GA being used. The procedure for a genetic algorithm is illustrated in the figure below.

[Figure: Procedure of a Genetic Algorithm]

The terms used in the procedure are explained below:

Encoding


Encoding is a way to represent individual solutions in evolutionary algorithms. Typically, individual solutions are coded as a finite, fixed-length string. Binary numbers are usually used as codes. This string is also known in the literature as a chromosome. For example, the binary number 10001 represents the decimal number 17. The conversion from the binary number to the decimal number is given by 1 × 2^4 + 0 × 2^3 + 0 × 2^2 + 0 × 2^1 + 1 × 2^0 = 16 + 1 = 17.
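As a small illustration (not part of the ga workflow itself), this decoding can be reproduced in a couple of lines of MATLAB; the variable names here are ours:

chromosome = [1 0 0 0 1];                         % the binary string 10001
nbits = numel(chromosome);
value = sum(chromosome .* 2.^(nbits-1:-1:0));     % 1*2^4 + 0*2^3 + 0*2^2 + 0*2^1 + 1*2^0
disp(value)                                       % prints 17 (bin2dec('10001') gives the same)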


Initial population


The algorithm begins by generating a population of individuals in the design space. Prior to population initialization, the designer must choose the number of individuals in each population and the number of bits in the encoding process. Both of these decisions are extremely important in promoting the success of the GA-based optimization. For example, too large a population can lead to undesirable computational expense, while too small a population can lead to premature convergence to a local optimum or suboptimal point. Further description of these features and issues of GAs can be found in the foundational book on GA by Goldberg.


Evaluation
Computation of the objective values for the individual solutions.


Optimization criteria
The stopping criteria of the algorithm. Examples of stopping criteria include the number of generations, the computation time limit, and the function tolerance.
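For reference, these stopping criteria map onto named options of MATLAB's ga solver. A minimal sketch is shown below; the option names (MaxGenerations, MaxTime, FunctionTolerance) are those used in recent MATLAB releases, and the numeric values are placeholders only:

opts = optimoptions('ga', ...
    'MaxGenerations', 200, ...       % cap on the number of generations
    'MaxTime', 60, ...               % computation time limit in seconds
    'FunctionTolerance', 1e-6);      % stop when the improvement in fitness becomes small
% opts is then passed as the last argument of the ga call.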


Fitness assignment
There are several choices of fitness assignment. In a rank-based fitness assignment, the individuals are sorted according to their objective values. It creates an order among the individuals.


Selection
A selection criterion filters out the candidate solutions with poor fitness and retains those with acceptable fitness to enter the reproduction process with a higher probability.


Reproduction
A new generation in the genetic algorithm is created through reproduction from the previous generation. Three mechanisms (elitist, crossover, and mutation) are primarily used to create a new generation.


Elitist
The individuals with the best fitness values in the current generation are guaranteed to survive in the next generation.


Crossover
In this technique, a part of the encoded string of one individual is exchanged with the corresponding string part of another individual. There are many approaches to performing the crossover operation. Suppose there are two individuals, 10101 and 11001. Exchange the first two bits of the two individuals. The offspring of the two individuals are 11101 and 10001.


Mutation
A mutated child solution is generated from a single parent by randomly reversing some bits from 0 to 1, or vice versa. For example, through mutation, the 2nd bit and the 5th bit of 10101 are reversed. The new offspring is 11100.
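The crossover and mutation examples above can be reproduced on bit vectors with a few lines of MATLAB. This is purely illustrative and is not the internal representation that MATLAB's ga uses:

p1 = [1 0 1 0 1];  p2 = [1 1 0 0 1];      % parents 10101 and 11001

% Crossover: exchange the first two bits of the two parents
c1 = [p2(1:2) p1(3:end)];                 % offspring 11101
c2 = [p1(1:2) p2(3:end)];                 % offspring 10001

% Mutation: flip the 2nd and 5th bits of 10101
m = p1;
m([2 5]) = 1 - m([2 5]);                  % offspring 11100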


Best individual
The global optimum that satisfies the termination criteria.

As a worked example (taken from the Messac reference), the following multimodal function is minimised using the MATLAB GA solver:

    minimise  f(x) = x + 10 sin(2x),   subject to  0 <= x <= 10

To set up this problem in MATLAB, using the ga command, three M-files are generated: a main file, an objective function file, and a constraint function file. Note that this file structure is similar to the one used with fmincon.

The main file contains the initializations, bounds, options, and the ga command. The objective function file contains the objective or the fitness function definition. The constraint file contains the nonlinear inequality and equality constraints. The files are reported below.


1. Main file

clear 
clc

% Define linear constraints
A = []; B = []; Aeq = []; Beq = [];
% Define bound constraints
LB = [0]; UB = [10];
% Number of design variables
nvars = 1;
% Optimization using ga
[x, fval] = ga(@GA_Func, nvars, A, B, Aeq, Beq, LB, UB, @GA_cons);
display(x)
display(fval)


2. Objective function file

function [f] = GA_Func(x)
% Fitness function: multimodal test function f(x) = x + 10*sin(2*x)
f = x + 10*sin(2*x);


3. Constraint function file

function [C, Ceq] = GA_cons(x)
% Define inequality constraints
C = [];
% Define equality constraints
Ceq = [];


The global optimum of the objective function obtained by MATLAB is -7.6563. The optimum value of the variable is 2.3312. The result is the same as that obtained using the multiple start method.
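The reported optimum can be sanity-checked without ga, for example by evaluating the objective directly and by running the single-variable minimiser fminbnd near the solution (this check is ours, not part of the book example):

f = @(x) x + 10*sin(2*x);
f(2.3312)                          % approximately -7.6563
[xmin, fmin] = fminbnd(f, 2, 3)    % returns xmin close to 2.3312 and fmin close to -7.6563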


This problem can also be solved through the graphical user interface of the GA solver. Please note that the folder that contains the fitness function and the nonlinear constraint function should be selected as the MATLAB Current Directory. The GA solver is then set up as described below.

The GA solver screen can be opened by typing optimtool('ga') in the Command Window. Alternatively, you can type optimtool in the Command Window and then choose the ga solver option from the top-left drop-down menu. Set @GA_Func as the fitness function. The nonlinear constraint field would take @GA_cons; since there is no nonlinear constraint for this problem, it can also be left blank. Set the number of variables to 1, the lower bound to 0, and the upper bound to 10.

After 5 iterations, the optimization is terminated. The global optimum value of the objective function is -7.6563.

Explaining the syntax of the genetic algorithm function ga in MATLAB:

ga finds the minimum of a function using a genetic algorithm.

Syntax

x = ga(fun,nvars)

x = ga(fun,nvars,A,b)

x = ga(fun,nvars,A,b,Aeq,beq)

x = ga(fun,nvars,A,b,Aeq,beq,lb,ub)

x = ga(fun,nvars,A,b,Aeq,beq,lb,ub,nonlcon)

x = ga(fun,nvars,A,b,Aeq,beq,lb,ub,nonlcon,options)

x = ga(fun,nvars,A,b,[],[],lb,ub,nonlcon,IntCon)

x = ga(fun,nvars,A,b,[],[],lb,ub,nonlcon,IntCon,options)

x = ga(problem)

[x,fval] = ga(___)

[x,fval,exitflag,output] = ga(___)

[x,fval,exitflag,output,population,scores] = ga(___)

Description:

x = ga(fun,nvars) finds a local unconstrained minimum, x, to the objective function, fun. nvars is the dimension (number of design variables) of fun.

x = ga(fun,nvars,A,b) finds a local minimum x to fun, subject to the linear inequalities A*x ≤ b. ga evaluates the matrix product A*x as if x is transposed (A*x').

x = ga(fun,nvars,A,b,Aeq,beq) finds a local minimum x to fun, subject to the linear equalities Aeq*x = beq and A*x ≤ b. (Set A=[] and b=[] if no linear inequalities exist.) ga evaluates the matrix product Aeq*x as if x is transposed (Aeq*x').

x = ga(fun,nvars,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables, x, so that a solution is found in the range lb ≤ x ≤ ub. (Set Aeq=[] and beq=[] if no linear equalities exist.)

x = ga(fun,nvars,A,b,Aeq,beq,lb,ub,nonlcon) subjects the minimization to the constraints defined in nonlcon. The function nonlcon accepts x and returns vectors C and Ceq, representing the nonlinear inequalities and equalities respectively. ga minimizes fun such that C(x) ≤ 0 and Ceq(x) = 0. (Set lb=[] and ub=[] if no bounds exist.)

x = ga(fun,nvars,A,b,Aeq,beq,lb,ub,nonlcon,options) minimizes with the default optimization parameters replaced by values in options. (Set nonlcon=[] if no nonlinear constraints exist.) Create options using optimoptions.

x = ga(fun,nvars,A,b,[],[],lb,ub,nonlcon,IntCon) or x = ga(fun,nvars,A,b,[],[],lb,ub,nonlcon,IntCon,options) requires that the variables listed in IntCon take integer values.

 NOTE: When there are integer constraints, ga does not accept linear or nonlinear equality constraints, only inequality constraints.

 x = ga(problem) finds the minimum for problem, where problem is a structure.

[x,fval] = ga(___), for any previous input arguments, also returns fval, the value of the fitness function at x.

[x,fval,exitflag,output] = ga(___) also returns exitflag, an integer identifying the reason the algorithm terminated, and output, a structure that contains output from each generation and other information about the performance of the algorithm.

[x,fval,exitflag,output,population,scores] = ga(___) also returns a matrix population, whose rows are the final population, and a vector scores, the scores of the final population.
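As a short example of the extended output syntax, assuming the GA_Func file from the earlier example is on the MATLAB path, the following sketch returns the extra diagnostic outputs (the seed value is arbitrary and only there so that repeated runs match):

rng(1);                                               % seed the random number generator for repeatability
[x, fval, exitflag, output] = ga(@GA_Func, 1, [], [], [], [], 0, 10);
exitflag                                              % integer code giving the reason ga stopped
output.generations                                    % number of generations that were run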

Original coding:

The following is our own code for finding the global maximum of the stalagmite function using the genetic algorithm.

 

% code for global optimization 
% using genetic algorithm for global optimum.

clear all
close all  
clc

x = linspace(0,0.6,150);
y = linspace(0,0.6,150);
num_cases = 50;

% Generating a 2-dimensional mesh
[xx, yy] = meshgrid(x, y);

% Evaluating the stalagmite function
for i = 1:length(xx)
    for j=1:length(yy)
        input_vector(1)=xx(i,j);
        input_vector(2)=yy(i,j);
        f(i,j)=stalagmite(input_vector);
    end
end

% study 1 - Statistical behaviour

tic 
for i = 1:num_cases;
    [inputs,fopt1(i)]=ga(@stalagmite,2);
    xopt1(i)=inputs(1);
    yopt1(i)=inputs(2);
end

study1_time=toc

figure(1)
subplot(2,1,1)
hold on
surfc(x, y, -f)
shading interp
plot3(xopt1,yopt1,-fopt1,'marker','o','markersize',5,'markerfacecolor','r')
title('Study 1: unbounded inputs');
xlabel('x'); ylabel('y'); zlabel('F(x,y)');
subplot(2,1,2)
plot(-fopt1);
xlabel('Iterations');
ylabel('Function maximum');
legend('Function maximum')

% Study 2 - Statistical behaviour - with upper and lower bounds

tic 
for i = 1:num_cases;
    [inputs,fopt2(i)]=ga(@stalagmite,2,[],[],[],[],[0;0],[1;1]);
    xopt2(i)=inputs(1);
    yopt2(i)=inputs(2);
end

study2_time=toc

figure(2)
subplot(2,1,1)
hold on
surfc(x, y, -f)
shading interp
plot3(xopt2,yopt2,-fopt2,'marker','o','markersize',5,'markerfacecolor','r')
title('Study 2: bounded inputs');
xlabel('x'); ylabel('y'); zlabel('F(x,y)');
subplot(2,1,2)
plot(-fopt2);
xlabel('Iterations');
ylabel('Function maximum');
legend('Function maximum')

% Study 3 - Statistical behaviour - with bounds and a larger population size
options = optimoptions('ga');
options = optimoptions(options, 'PopulationSize', 170);

tic 
for i = 1:num_cases;
    [inputs,fopt3(i)]=ga(@stalagmite,2,[],[],[],[],[0;0],[1;1],[],[],options);
    xopt3(i)=inputs(1);
    yopt3(i)=inputs(2);
end

study3_time=toc

figure(3)
subplot(2,1,1)
hold on
surfc(x, y, -f)
shading interp
plot3(xopt3,yopt3,-fopt3,'marker','o','markersize',5,'markerfacecolor','r')
title('Study 3: bounded inputs, larger population');
xlabel('x'); ylabel('y'); zlabel('F(x,y)');
subplot(2,1,2)
plot(-fopt3);
xlabel('Iterations');
ylabel('Function maximum');
legend('Function maximum')

 

The program above finds the global maximum of the stalagmite function. It calls a separate user-defined function file, which we named stalagmite, shown below:

 

% Stalagmite function
% Returns the negated value -f(x,y), so minimising it with ga maximises the stalagmite surface

function [output] = stalagmite(input_vector)

x = input_vector(1);
y = input_vector(2);

f1_x = (sin((5.1*pi*x) + 0.5))^6;
f1_y = (sin((5.1*pi*y) + 0.5))^6;

f2_x = exp(((-4)*log(2)*(x-0.0667)^2)/0.64);
f2_y = exp(((-4)*log(2)*(y-0.0667)^2)/0.64);   % corrected: uses y, not x

output = -f1_x*f2_x*f1_y*f2_y;

end

 

The stalagmite function is built from four component functions (two sine terms and two Gaussian terms), which are evaluated separately and then multiplied together. When called from the main program as the objective, this product is what the genetic algorithm evaluates to locate the global optimum of the stalagmite function.

As written, the function returns the negative of the stalagmite surface, so minimising it with ga yields the global maximum of the original function. If the global minimum were wanted instead, the only change would be to remove the minus sign on the output variable.
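A quick way to check this sign convention is to evaluate the function near its peak. The stalagmite surface peaks near x = y = 0.0667, where both Gaussian terms equal 1 and both sine terms are close to 1, so the underlying function value is close to 1 and the returned (negated) value is close to -1. This check is illustrative only:

stalagmite([0.0667 0.0667])    % roughly -1; ga minimises this, so -fval is the global maximum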

Step-by-step explanation:

First, we create arrays of x and y values and mesh them into a grid over the design space; this grid is used to evaluate and plot the stalagmite surface on which the genetic algorithm results will be shown. The following code shows how the x and y data are created.

% code for global optimization 
% using genetic algorithm for global optimum.

clear all
close all  
clc

x = linspace(0,0.6,150);
y = linspace(0,0.6,150);
num_cases = 50;

% Generating a 2-dimensional mesh
[xx, yy] = meshgrid(x, y);

% Evaluating the stalagmite function
for i = 1:length(xx)
    for j=1:length(yy)
        input_vector(1)=xx(i,j);
        input_vector(2)=yy(i,j);
        f(i,j)=stalagmite(input_vector);
    end
end

Inside the nested loops, the stalagmite function defined above is called at every grid point to fill the surface matrix f that is plotted in the studies below.
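For reference, the same surface matrix f can be filled without the nested loops by using arrayfun over the mesh; this is an optional, equivalent alternative and is not used in the script above:

f = arrayfun(@(a, b) stalagmite([a b]), xx, yy);   % evaluates the stalagmite function at every grid point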

Now, to obtain the optimised value from the genetic algorithm, we perform three studies. In the spirit of Charles Darwin's "survival of the fittest", the fittest candidate solutions are carried from generation to generation until a unique, fit global maximum (or minimum) is reached.

Study 1:

In the first study we analyse the statistical behaviour of the GA. A for loop runs ga on the stalagmite function 50 times (num_cases) with no bounds on the inputs. The tic and toc commands record the time taken to process all the runs, subplot is used to display the results in a single window, and plot3 marks the optimum found in each run on the 3-D surface. Because the function returns negative values, negating the results (-fopt1) gives the function maxima, as discussed in the function code above.

% study 1 - Statistical behaviour

tic 
for i = 1:num_cases;
    [inputs,fopt1(i)]=ga(@stalagmite,2);
    xopt1(i)=inputs(1);
    yopt1(i)=inputs(2);
end

study1_time=toc

figure(1)
subplot(2,1,1)
hold on
surfc(x, y, -f)
shading interp
plot3(xopt1,yopt1,-fopt1,'marker','o','markersize',5,'markerfacecolor','r')
title('Study 1: unbounded inputs');
xlabel('x'); ylabel('y'); zlabel('F(x,y)');
subplot(2,1,2)
plot(-fopt1);
xlabel('Iterations');
ylabel('Function maximum');
legend('Function maximum')

Output:

From the first study we can see that the optima found change considerably from run to run and are scattered all over the 3-D plot. Because the inputs are unbounded, the GA samples the design space essentially at random from its initial population, and different runs settle on different local maxima. To obtain fewer, more consistent points, the search needs to be constrained and the population tuned, which is done in the next studies.
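Because ga is stochastic, repeated runs of the script will not give identical numbers. To satisfy the requirement that the code gives the same output every time it is run, one common approach (not included in the script above) is to seed the random number generator once at the top of the script:

rng(0, 'twister');    % fix the seed so that every run of the script reproduces the same GA results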

Study 2:

In study 2 we repeat the statistical study, this time with lower and upper bounds (LB and UB) on the design variables, i.e. with a constraint on the solution space. The bounds restrict the search to the region of interest and improve the optimisation of the design variables. Consequently, the results are better than in the first case: the points are less scattered, and the optima found are closer to the true maximum than in the first study.

% Study 2 - Statistical behaviour - with upper and lower bounds

tic 
for i = 1:num_cases;
    [inputs,fopt2(i)]=ga(@stalagmite,2,[],[],[],[],[0;0],[1;1]);
    xopt2(i)=inputs(1);
    yopt2(i)=inputs(2);
end

study2_time=toc

figure(2)
subplot(2,1,1)
hold on
surfc(x, y, -f)
shading interp
plot3(xopt2,yopt2,-fopt2,'marker','o','markersize',5,'markerfacecolor','r')
title('Study 2: bounded inputs');
xlabel('x'); ylabel('y'); zlabel('F(x,y)');
subplot(2,1,2)
plot(-fopt2);
xlabel('Iterations');
ylabel('Function maximum');
legend('Function maximum')

Output:

Here we can see far fewer scattered points from the random sampling; most runs now land on the global maximum or one of the nearby local maxima. The upper and lower bounds restrict the sampling to the region of interest, which makes it simpler for the runs to converge towards the optimum values.
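If a single best answer is wanted from the 50 bounded runs, it can be extracted from the stored results; this small addition is ours and uses the variables already defined in study 2:

[fbest, idx] = min(fopt2);                          % most negative fitness value = best maximum
fprintf('Global maximum approx. %.4f at (%.4f, %.4f)\n', -fbest, xopt2(idx), yopt2(idx));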

Study 3:

In the third study we pass an additional argument, an options structure, which is used to increase the population size (170 individuals in our case). Getting stuck on a local maximum or minimum is common in the previous studies, but increasing the population size (and, if needed, narrowing the search bounds) reduces the chance of premature convergence, so the global maximum is found much more reliably. Consequently, we obtained a very clean plot with the global maximum point, as can be seen in the result below.

% Study 3 - Statistical behaviour - with bounds and a larger population size
options = optimoptions('ga');
options = optimoptions(options, 'PopulationSize', 170);

tic 
for i = 1:num_cases;
    [inputs,fopt3(i)]=ga(@stalagmite,2,[],[],[],[],[0;0],[1;1],[],[],options);
    xopt3(i)=inputs(1);
    yopt3(i)=inputs(2);
end

study3_time=toc

figure(3)
subplot(2,1,1)
hold on
surfc(x, y, -f)
shading interp
plot3(xopt3,yopt3,-fopt3,'marker','o','markersize',5,'markerfacecolor','r')
title('Study 3: bounded inputs, larger population');
xlabel('x'); ylabel('y'); zlabel('F(x,y)');
subplot(2,1,2)
plot(-fopt3);
xlabel('Iterations');
ylabel('Function maximum');
legend('Function maximum')

Output:

Here we can see that we have finally achieved a single global point for the global maximum. This is how the genetic algorithm behaves as the population of candidate solutions is constrained and enlarged across the three studies.
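To produce the single "F maximum vs. number of iterations" comparison plot asked for in the objective, the three result vectors can be plotted together. This figure is an addition of ours and reuses the variables from the three studies:

figure(4)
plot(1:num_cases, -fopt1, 1:num_cases, -fopt2, 1:num_cases, -fopt3)
xlabel('Iteration (run number)')
ylabel('F maximum')
title('F maximum vs. number of iterations')
legend('Study 1: unbounded', 'Study 2: bounded', 'Study 3: bounded, larger population')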

Error faced and its resolution:

While performing the genetic algorithm runs, we faced only one error: the ga solver is part of the Global Optimization Toolbox, which was not available under the MATLAB trial licence we were using, so the ga calls failed. (The error screenshot is not reproduced here.)

Resolution: obtain a licensed copy of MATLAB that includes the Global Optimization Toolbox; once the toolbox is available, the script runs and produces the plots shown above.

References: 

Achille Messac, Optimization in Practice with MATLAB: For Engineering Students and Professionals.

https://in.mathworks.com/help/gads/what-is-the-genetic-algorithm.html

https://in.mathworks.com/help/gads/ga.html

https://in.mathworks.com/help/gads/example-global-vs-local-minima-with-ga.html

 

 

 

