Comparative Study of Typical Bionic Intelligent Optimization Algorithm


International Journal of Management Science and Engineering Research Volume 2 Issue 1, June 2015 doi: 10.14355/ijmser.2015.0201.04

www.seipub.org/ijmser

Li Cun-bin, Li Shu-ke, Liu Yun-qi*
School of Economics & Management, North China Electric Power University
2 Beinong Road, Huilongguan Town, Changping District, Beijing, 102206, China
*liu_yunqi@163.com

Abstract

Optimization propositions are often characterized by many equations, high-dimensional variables, strong nonlinearity, etc. Intelligent optimization algorithms offer efficient optimization performance and can operate without special information about the problem. Owing to these advantages, intelligent optimization algorithms have achieved remarkable success across a wide range of engineering fields, especially for large-scale optimization problems. In this article, five typical bionic intelligent optimization algorithms are compared and analyzed in detail; the specific computational steps, advantages, and disadvantages of each algorithm are given. Moreover, international research tendencies and application areas of typical bionic intelligent optimization algorithms are also summarized. This work can contribute to the understanding, selection, and application of typical bionic intelligent optimization algorithms.

Keywords

Intelligent Algorithm; Genetic Algorithm; Ant Colony Optimization; Particle Swarm Optimization; Artificial Fish School Algorithm; Bacteria Foraging Optimization

Significance of Intelligent Optimization Algorithms

With the development of society, various optimization techniques are actively applied in national defense, industry, agriculture, finance, energy, communications, transportation, and other fields in order to improve system efficiency, use resources reasonably, and improve economic benefit. Meanwhile, optimization problems in practical applications have become more and more complicated: the resulting propositions involve many equations, high-dimensional variables, and strong nonlinearity, which makes the relevant variables and propositions very difficult to solve. Classical optimization algorithms (COA) fall far short of the requirements in terms of computation speed, convergence, and initial-value sensitivity, which makes them incapable of solving such large-scale optimization problems; it is therefore necessary to seek more powerful optimization tools. Intelligent optimization algorithms (IOA) offer efficient optimization performance and can operate without special information about the problem, so they have gradually received extensive attention and been widely applied in various fields since the 1970s. The classical optimization algorithms, which are based on mathematical programming, encounter difficulties in computation time, whereas intelligent optimization algorithms can obtain the optimal solution (or a suboptimal solution) in a relatively short time. They have therefore received wide application and approval, and have gradually become a research hotspot in the field of modern optimization algorithms.
At present, research on the application and improvement of intelligent optimization algorithms is relatively broad: researchers have put forward many improvements directed at a wide variety of specific problems, which adapts the algorithms to the needs of various fields. These intelligent optimization algorithms have achieved remarkable success in a wide range of engineering fields, especially in solving large-scale combinatorial optimization problems such as the TSP (Traveling Salesman Problem), QAP (Quadratic Assignment Problem), and JSP (Job-shop Scheduling Problem).





General Comparison between COA and IOA

Among the existing intelligent optimization algorithms, the genetic algorithm (GA), which emerged in the 1970s, is undoubtedly the most representative and also the most mature. In recent years, inspired by the evolutionary processes of nature and biological life, researchers have been occupied with bionic algorithms, which has opened a new way of exploring optimization methods. Bionic intelligent algorithms have received close attention from researchers for their intelligent, efficient optimization ability and wide applicability. Typical bionic intelligent optimization algorithms include the particle swarm optimization algorithm, the ant colony algorithm, the artificial fish school algorithm, and the bacteria foraging optimization algorithm, etc. Table 1 shows basic information on some typical bionic intelligent optimization algorithms. Compared with the classical optimization algorithms, bionic intelligent optimization algorithms are a kind of probabilistic stochastic optimization algorithm, whose structure and behavior have several typical characteristics [8, 9]. A contrastive analysis of the intelligent optimization algorithms and the traditional classical optimization algorithms is shown in Table 2.

TABLE 1 PART OF TYPICAL BIONIC INTELLIGENT OPTIMIZATION ALGORITHMS

| Originator | Algorithm name | Abbreviation | Time of emergence | Thought source |
| --- | --- | --- | --- | --- |
| Holland [1], Bagley [2], Goldberg [3] | Genetic Algorithm | GA | 1975 | Darwinian theory and Mendelism |
| Dorigo et al. [4] | Ant Colony Optimization | ACO | 1991 | Pathfinding behavior of ants |
| Eberhart, Kennedy [5] | Particle Swarm Optimization | PSO | 1995 | Preying behavior of bird flocks |
| Li Xiao-lei et al. [6] | Artificial Fish School Algorithm | AFSA | 2002 | Foraging behavior of fish schools |
| Passino [7] | Bacteria Foraging Optimization | BFO | 2002 | Foraging behavior of E. coli in the human intestine |

TABLE 2 CONTRASTIVE ANALYSIS BETWEEN THE CLASSICAL OPTIMIZATION ALGORITHMS AND THE INTELLIGENT OPTIMIZATION ALGORITHMS

| Serial number | Type | Intelligent optimization algorithms | Classical optimization algorithms |
| --- | --- | --- | --- |
| A1 | Difference | A set of feasible solutions acts as the initial value | A single feasible solution acts as the initial value |
| A2 | Difference | Search strategy is structured and randomized | Search strategy is deterministic |
| A3 | Difference | Only use information of the objective function value | Mostly need derivative information |
| A4 | Difference | Not much requirement on function properties | Strict requirements on function properties |
| A5 | Difference | Calculation amount is quite large | Calculation amount is much smaller than that of intelligent optimization algorithms |
| B1 | Commonality | Both are iterative algorithms | Both are iterative algorithms |

The classical optimization algorithms and the intelligent optimization algorithms are both iterative algorithms, but they differ greatly, mainly in the following aspects: 1) Classical optimization algorithms employ a single feasible solution as the initial value of the iteration, while intelligent optimization algorithms employ a set of feasible solutions. 2) The search strategy of classical optimization algorithms is deterministic, while the search strategies of intelligent optimization algorithms are structured and randomized. 3) Classical algorithms mostly need derivative information, while intelligent optimization algorithms only use the objective function value. 4) Classical optimization algorithms have strict requirements on function properties (such as boundedness, monotonicity, parity, periodicity), while intelligent optimization algorithms do not impose many such requirements. 5) The calculation amount of classical optimization algorithms is much less than that of intelligent optimization algorithms. Hence, for relatively large-scale optimization problems of relatively poor function quality, classical optimization algorithms perform badly, while the calculation amount of generic intelligent optimization algorithms is large.





Analysis and Procedure Description of IOA

Genetic Algorithm

The genetic algorithm is a computational model that simulates the natural selection of Darwin's theory of evolution and the genetic mechanism of biological evolution; it searches for the optimal solution by simulating the natural evolutionary process. GA starts with a population that represents a potential solution set of the problem, where a population is composed of a certain number of individuals encoded as genes. After the original population is generated, according to the principle of survival of the fittest, better and better approximate solutions are produced through the evolution of successive generations. In each generation, individuals are selected according to their fitness in the problem domain and recombined by the genetic operators of crossover and mutation; the new population produced represents a new solution set. This process results in an offspring population better adapted to the environment than the previous generation, and the optimal individuals of the last generation, once decoded, can be used as approximately optimal solutions. Through the mutual cooperation and competition of the crossover and mutation operations, the algorithm consequently has a search ability that balances the global and the local. The basic steps of the genetic algorithm [10-12] are as follows:

1) Encoding. Before GA searches, the solution data of the solution space must first be expressed as genotype string-structure data of the genetic space. Different combinations of these string-structure data make up different points (namely chromosomes).

2) Generation of the initial population. Randomly generate N string-structure data; each string-structure datum is called an individual, and the N individuals form a population. The GA begins to evolve using these N string-structure data as the initial points.
3) Fitness evaluation. Fitness reflects the superiority or inferiority of an individual or solution. For different problems, the fitness function is defined differently.

4) Selection operation. The process of selecting superior individuals and eliminating inferior individuals from the population is called selection; the selection operator acts on the population. Its purpose is that the optimal (most adaptable) individuals are inherited directly by the next generation, or produce, through crossover, new individuals that are inherited by future generations. Selection is based on the fitness evaluation of individuals: adaptable individuals have a high probability of contributing one or more offspring to the next generation.

5) Crossover operation. Crossover refers to replacing and recombining parts of the structures of two parent individuals to generate new individuals; the crossover operator acts on the population. The crossover operator plays a key role in the genetic algorithm because of its global search ability. Through crossover, new-generation individuals are obtained that combine the characteristics of their parents, and the search ability of the genetic algorithm improves greatly.

6) Mutation operation. The mutation operator acts on the population: for the selected individuals, the value at a certain position in the string-structure data is changed randomly with a certain probability. The probability of mutation is very low. Mutation is introduced into the genetic algorithm for two purposes: one is to give the genetic algorithm a local random search capability.
When the genetic algorithm, through the crossover operator, is close to the neighborhood of the optimal solution, the local random search ability of the mutation operator can accelerate convergence to the optimal solution. Obviously, in such a case, the mutation probability should take a small value, otherwise the building blocks close to the optimal solution will be damaged by the mutation. The second purpose is to make the genetic algorithm maintain population diversity, which prevents premature convergence; in this case the mutation probability should take a larger value.
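As an illustration of the encoding, selection, crossover, and mutation operators described above, the following is a minimal GA sketch in Python. The 5-bit encoding, the toy objective f(x) = x² on [0, 31], and all parameter values are illustrative assumptions, not taken from this paper.

```python
import random

def decode(bits):
    # Encoding step: a 5-bit chromosome maps to an integer x in [0, 31]
    return int("".join(map(str, bits)), 2)

def fitness(bits):
    # Toy objective: maximize f(x) = x^2 (illustrative only)
    x = decode(bits)
    return x * x

def select(pop):
    # Roulette-wheel selection: probability proportional to fitness
    weights = [fitness(ind) for ind in pop]
    if sum(weights) == 0:
        return [ind[:] for ind in pop]   # degenerate population: keep as is
    return random.choices(pop, weights=weights, k=len(pop))

def crossover(a, b, pc=0.8):
    # One-point crossover with probability pc
    if random.random() < pc:
        point = random.randint(1, len(a) - 1)
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(ind, pm=0.02):
    # Flip each gene with a small probability pm
    return [1 - g if random.random() < pm else g for g in ind]

def genetic_algorithm(n=20, length=5, generations=50):
    # n should be even so crossover pairs cover the whole population
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(n)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        mated = select(pop)
        pop = []
        for i in range(0, n - 1, 2):
            c1, c2 = crossover(mated[i], mated[i + 1])
            pop += [mutate(c1), mutate(c2)]
        best = max(pop + [best], key=fitness)   # elitist bookkeeping
    return decode(best)

random.seed(1)
best_x = genetic_algorithm()   # for f(x) = x^2 on [0, 31] the optimum is x = 31
```

The sketch keeps the best-ever individual alongside the evolving population, a common elitist variant; the termination condition here is simply the maximum generation count of step 7.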





7) Judgment of termination conditions. Through selection, crossover, and mutation on the population P(t), the next-generation population P(t + 1) is obtained. If t = T (where T is the preset maximum number of evolution generations), the individuals of largest fitness obtained in the process of evolution are output as the optimal solution, and the calculation terminates.

Ant Colony Optimization Algorithm

In 1991, M. Dorigo et al. first proposed the ant colony algorithm. Its main characteristic is finding the optimal path through positive feedback and distributed collaboration. It makes full use of the swarm-optimizing characteristic by which a biological ant colony can find the shortest path from nest to food through simple information transmission between individuals. The parallel and distributed features of ant colony foraging behavior make the algorithm particularly suitable for parallel processing. The ant colony optimization algorithm was first used to solve the TSP; it is also used to solve the job-shop scheduling problem, the quadratic assignment problem (QAP), the multidimensional knapsack problem (MDKP), and so on, which shows its superior characteristics when applied to combinatorial optimization problems. Researchers all over the world have carried out careful research and application development on the ant colony algorithm for more than 20 years, applying it to data analysis, robot collaboration, electric power, communications, water conservancy, mining, chemical engineering, construction, transportation, and other fields, including the graph coloring problem, large-scale integrated circuit design, routing and load-balancing problems in communication networks, vehicle scheduling problems, etc.
Among the several fields in which the ant colony algorithm has been applied successfully, one of the most successful is combinatorial optimization. Using the ant colony algorithm to solve the TSP generally requires several steps [10, 13, 14], as shown in Fig. 1.

[Flow chart: Begin → Initializing parameters → Building solution space → Updating the pheromone → The current iterations add 1 → Reaching maximum iterations? — No: loop back to building the solution space; Yes: Outputting the optimal solution → End]

FIG. 1 BASIC STEPS OF ANT COLONY ALGORITHM TO SOLVE TSP PROBLEM
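The loop in Fig. 1, detailed in the numbered steps below, can be sketched as follows for a small TSP instance. The city coordinates and all parameter values (pheromone and heuristic importance factors, volatilization factor, pheromone total) are illustrative assumptions, not values from this paper.

```python
import math
import random

# Toy city coordinates (illustrative assumption)
CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]
N = len(CITIES)
DIST = [[math.dist(a, b) for b in CITIES] for a in CITIES]

def tour_length(tour):
    # Closed-tour length, returning to the start city
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def ant_colony(n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    # alpha: pheromone importance, beta: heuristic importance,
    # rho: volatilization factor, Q: total pheromone released per ant
    tau = [[1.0] * N for _ in range(N)]          # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            start = random.randrange(N)          # ants start at random cities
            tour, unvisited = [start], set(range(N)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # transition weight ~ tau^alpha * (1/distance)^beta
                w = [tau[i][j] ** alpha * (1.0 / DIST[i][j]) ** beta
                     for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.discard(nxt)
            tours.append(tour)
        for i in range(N):                       # pheromone evaporation
            for j in range(N):
                tau[i][j] *= 1 - rho
        for tour in tours:                       # pheromone deposit
            L = tour_length(tour)
            if L < best_len:
                best_tour, best_len = tour, L
            for k in range(N):
                a, b = tour[k], tour[(k + 1) % N]
                tau[a][b] += Q / L
                tau[b][a] += Q / L
    return best_tour, best_len

random.seed(0)
tour, length = ant_colony()
```

Depositing Q/L pheromone on every edge of a tour is the positive-feedback step: shorter tours receive more pheromone and so attract more ants in later iterations.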

1) Initializing parameters: the ant colony scale, the importance factor of the pheromone, the importance factor of the heuristic function, the volatilization factor of the pheromone, the total volume of pheromone released, the maximum iterations, the initial value of the iteration counter, etc.

2) Building the solution space. Put all the ants randomly at different starting points and calculate the next node each ant will visit, continuing this process until all the ants have visited all of the nodes.

3) Updating the pheromone. Compute the path length each ant passed through, record the optimal solutions of the current iteration, and update the pheromone concentration on the connection paths between nodes.

4) Determining whether to terminate. If the current iteration count is greater than the maximum iterations, terminate the calculation and output the optimal solutions; otherwise, clear the path table and return to step 2.

Particle Swarm Optimization Algorithm

The particle swarm optimization algorithm is a random search algorithm based on group collaboration, developed by simulating the foraging behavior of bird flocks. In PSO, each solution to the optimization problem is a bird in the search space, called a "particle". Each particle has a fitness value, determined by the function to be optimized, and a velocity that decides its flying direction and distance. The particles then search the solution space by following the current optimum particles. The system is initialized with a group of random solutions and searches for the optimum by iteration. In each iteration, the particles update themselves by tracking two "extremes". The first is the optimal solution found by





the particles themselves, called the personal best value pBest. The other extreme value is the optimal solution currently found by the whole population, called the global extreme value gBest. Alternatively, instead of the global extreme gBest, only the neighbors of a part of the optimal particles may be used, and the extremum over all the neighbors is the local extremum. After finding the two extrema, each particle updates its velocity and position according to the following two formulas:

V[] = w * V[] + c1 * rand() * (pbest[] − present[]) + c2 * rand() * (gbest[] − present[])   (1)

present[] = present[] + V[]   (2)

where V[] is the particle's velocity, w is the inertia weight, present[] is the current particle's position, pbest[] and gbest[] are as defined above, rand() is a random number in (0, 1), and c1 and c2 are learning factors. Usually c1 = c2 = 2; in most cases, 0 ≤ c1 = c2 ≤ 4.
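Formulas (1) and (2) translate directly into code. The sketch below minimizes a toy sphere function; the objective, the bounds, and the parameter values (w = 0.7, c1 = c2 = 2, Vmax = 1) are illustrative assumptions rather than recommendations from this paper.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=2.0, c2=2.0,
        lo=-5.0, hi=5.0, vmax=1.0):
    # Minimize f over [lo, hi]^dim using update rules (1) and (2)
    X = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # personal best positions
    pbest_val = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g] # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Eq. (1): inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * random.random() * (pbest[i][d] - X[i][d])
                           + c2 * random.random() * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))  # clamp to Vmax
                X[i][d] += V[i][d]               # Eq. (2): move the particle
            val = f(X[i])
            if val < pbest_val[i]:               # update personal best pBest
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:              # update global best gBest
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

random.seed(0)
best, val = pso(lambda x: sum(xi * xi for xi in x))  # sphere function
```

Clamping the velocity to Vmax corresponds to the maximum-velocity parameter discussed below; without it, large c1 and c2 can make the velocities oscillate widely.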

Not many parameters need to be adjusted in PSO; the parameters and their empirical settings are as follows. Number of particles: usually 20 to 40. In fact, for most problems 10 particles are enough to achieve good results, but for difficult or particular categories of problems the particle number can be 100 or 200. Length of the particle: determined by the optimization problem, namely the length of the problem solution. Range of the particle: determined by the optimization problem; each dimension can be given a different range. Vmax: the particles' maximum velocity, which decides the largest distance a particle can move in one loop; it is usually set by the width of the particle range. Learning factors: c1 and c2 are usually equal to 2. Another parameter is the inertia weight w, whose optimal value is often determined by the practical problem: when the inertia weight w is very small, the local search ability of the particle swarm algorithm is emphasized; when the inertia weight is large, the global search ability is emphasized. Termination conditions: maximum iterations or a minimum error criterion, determined by the specific question. The process of a two-dimensional multi-objective search algorithm based on the particle swarm algorithm is shown in Fig. 2.

[Flow chart: Population initialization → Fitness value calculation → Particle optimal updating → Updating of the noninferior set → Attaining dominated solution? — No: updating particle's position and velocity, loop back; Yes: End]

FIG. 2 THE PROCESS OF PARTICLE SWARM OPTIMIZATION

In the population initialization module, the system randomly initializes each particle's position X and velocity V. The fitness value calculation module computes the individual fitness according to the fitness formula. In the optimal-particle updating module, the individual optimal particle is updated according to the new particle position. In the noninferior-set updating module, the Pareto solutions are screened according to the dominance relations between the new particles. In the velocity and position updating module, particle velocity and position are updated according to the individual optimal particle position and the global particle position [10, 15-17].

Artificial Fish School Algorithm

The artificial fish school algorithm, proposed by Li Xiao-lei et al. in 2002 [6] and based on research into animal swarm intelligence behavior, is a new type of bionic optimization algorithm. It realizes optimization by simulating the foraging behavior of a fish school, according to the characteristic that the waters where the most fish gather are the waters richest in nutrients. Behavior description of the artificial fish school algorithm:

1) Foraging behavior: given the current state of an artificial fish, randomly choose another state within its perception scope; if the objective function of the attained state is greater than that of the current state, move one step towards the





state of the new choice; otherwise, rechoose a new state and judge whether it meets the condition. If, after a certain number of attempts, no state meets the condition, move a step randomly.

2) Clustering behavior: the artificial fish explores the number of partners among its current neighbors and calculates the partners' center position, then compares the objective function of the newfound center position with that of the current position. If the objective function of the center position is superior to that of the current position and the center is not very crowded, the fish moves a step from the current position towards the center position; otherwise, it performs the foraging behavior.

3) Following behavior: the artificial fish explores the optimal position among the surrounding neighbor fish. When the objective function value of the optimal position is greater than that of the current position and the position is not very crowded, the fish moves one step towards the optimal neighbor fish; otherwise, it carries out the foraging behavior.

According to the nature of the problem to be solved, the current circumstances of the artificial fish are evaluated so as to choose one kind of behavior. The commonly used assessment method is to choose, among the various behaviors, the one that makes the largest advance towards the optimal direction, i.e., the behavior that makes the next state of the artificial fish optimal. If no behavior makes the next state better than the current state, random behavior is employed. Step description of the artificial fish algorithm [18]:

Step 1: Set the parameters of the fish school, including the scale of the fish school m, the maximum iterations gen, the perception range of the artificial fish Visual, the maximum shift step length Step, the congestion degree factor delta, etc.

Step 2: Within the parameter interval, randomly generate m artificial fish individuals as the initial fish school.

Step 3: Calculate the food concentration function (objective function) of every fish, and put the optimal value onto the bulletin board.

Step 4: Make each artificial fish do the following operations: 1) Calculate the value of the following behavior and the clustering behavior, and employ the behavior selection strategy to select the optimal behavior as the moving direction of the fish; the default behavior is the foraging behavior. 2) Calculate the food concentration function (objective function) of each fish, and compare its optimal value with the value on the bulletin board, so as to always maintain the optimal value on the bulletin board.

Step 5: Determine whether the termination conditions are met. If met, terminate the calculation; otherwise return to Step 4. Termination conditions: the usual method is to determine whether the mean square deviation of several consecutive calculated values is less than the expected error, or simply to check the iteration count directly. In the end, the value on the bulletin board is the optimal value. Fig. 3 shows the steps of the artificial fish school algorithm [10, 19]. Table 3 shows the variable parameters of the artificial fish school algorithm.

TABLE 3 VARIABLE PARAMETERS

| Serial number | Variable name | Meaning |
| --- | --- | --- |
| 1 | N | The scale of the artificial fish school |
| 2 | {Xi} | The individual state position of an artificial fish |
| 3 | Yi = f(Xi) | Food concentration at the current position of the i-th artificial fish; Yi is the objective function |
| 4 | di,j = ‖Xi − Xj‖ | The distance between artificial fish individuals |
| 5 | Visual | The perception distance of the artificial fish |
| 6 | Step | The maximum shift step length of the artificial fish |
| 7 | delta | Crowding degree |
| 8 | try_number | Maximum number of foraging behavior attempts |
| 9 | n | Current foraging behavior attempt count |
| 10 | MAXGEN | Maximum iterations |
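The three behaviors and the bulletin-board loop described above can be sketched as follows, here maximizing a toy function with its peak at the origin. The objective and all parameter values (Visual, Step, delta, try_number) are illustrative assumptions, not values from this paper.

```python
import math
import random

def afsa(f, dim=2, n=20, visual=2.5, step=0.5, delta=0.6,
         try_number=5, maxgen=100, lo=-5.0, hi=5.0):
    # Maximize f; the bulletin board keeps the best state found so far.
    def clip(x):
        return [max(lo, min(hi, v)) for v in x]

    def move_toward(x, target):
        # Move at most one step of length `step` towards `target`
        d = math.dist(x, target)
        if d == 0:
            return x[:]
        return clip([xi + step * random.random() * (ti - xi) / d
                     for xi, ti in zip(x, target)])

    def forage(x):
        # Foraging: try up to try_number random states within Visual
        for _ in range(try_number):
            cand = clip([xi + visual * (2 * random.random() - 1) for xi in x])
            if f(cand) > f(x):
                return move_toward(x, cand)
        # no better state found: move a step randomly
        return clip([xi + step * (2 * random.random() - 1) for xi in x])

    school = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    board = max(school, key=f)                   # bulletin board
    for _ in range(maxgen):
        for i, x in enumerate(school):
            mates = [y for y in school if y is not x and math.dist(x, y) < visual]
            nxt = forage(x)                      # default behavior: foraging
            if mates:
                crowded = len(mates) / n >= delta
                center = [sum(c) / len(mates) for c in zip(*mates)]
                # clustering: move to the center if better and not crowded
                if f(center) > f(x) and not crowded:
                    cand = move_toward(x, center)
                    if f(cand) > f(nxt):
                        nxt = cand
                leader = max(mates, key=f)
                # following: move towards the best neighbor if better and not crowded
                if f(leader) > f(x) and not crowded:
                    cand = move_toward(x, leader)
                    if f(cand) > f(nxt):
                        nxt = cand
            school[i] = nxt
            if f(nxt) > f(board):
                board = nxt[:]                   # update the bulletin board
    return board, f(board)

random.seed(1)
best, val = afsa(lambda x: -sum(xi * xi for xi in x))  # peak at the origin
```

The behavior selection here follows the assessment method in the text: the candidate from clustering or following replaces the default foraging move only when it yields a better next state.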




[Flow chart: Begin → set N, Step, Visual, try_number, delta, MAXGEN, gen = 1 → initialize the fish school within the given range {X1, X2, …, Xn} → i = 1 → Xi performs clustering behavior, attaining (Xnext1, Ynext1); Xi performs following behavior, attaining (Xnext2, Ynext2) → if Ynext1 > Ynext2 then Xi = Xnext1, else Xi = Xnext2 → i = i + 1 → when i ≥ N, gen = gen + 1 → when gen > MAXGEN, determine the optimal solution → End]

FIG. 3 STEP DIAGRAM OF ARTIFICIAL FISH SCHOOL ALGORITHM

Bacteria Foraging Optimization

The bacterial foraging algorithm (BFA, also called the bacteria foraging optimization algorithm, BFO or BFOA), put forward by K. M. Passino in 2002, is based on the food-phagocytosing behavior of E. coli in the human intestine, and is a new kind of bionic algorithm. The algorithm has the swarm-intelligence advantage of parallel search, easily jumps out of local minima, and has become another hot spot in the field of biologically inspired research. In the BFA model, the solutions to the optimization problem correspond to the states of bacteria in the search space, namely the adaptive values of the optimized function. BFA includes three main behaviors: chemotaxis, reproduction, and elimination-dispersal.

1) The behavior by which bacteria aggregate towards a eutrophic area is called chemotaxis. In the process of chemotaxis, bacterial movement patterns include the tumble and the run (swim). A unit step moved in an arbitrary direction is defined as a tumble. When a bacterium completes a tumble, if its adaptive value is improved, it will continue to move several steps along the same direction until the adaptive value no longer improves, or until a predetermined critical number of moves is reached. This process is defined as the run.

2) Once the life cycle ends, namely when the critical number of chemotaxis steps is reached, bacteria reproduce. The bacterial reproduction process follows the natural principle of "survival of the fittest". Taking the cumulative sum of





the adaptive value of the bacteria in the process of chemotaxis as the standard, the poorer half of the bacteria dies and the better half splits into two daughter bacteria. A daughter bacterium inherits the biological characteristics of its parent, having the same position and step length. To simplify the calculation, the total bacterial count can be stipulated to remain unchanged through reproduction.

3) The chemotaxis process ensures the local search ability of the bacteria, and the reproduction process accelerates their search speed; but for complex optimization problems, chemotaxis and reproduction cannot prevent the bacteria from being caught in local minima. BFA introduces the elimination-dispersal process to enhance the algorithm's global optimization ability: when the bacteria have completed a certain number of reproductions, they are dispersed with a certain probability to arbitrary positions in the search space. The general process of the bacterial foraging algorithm for solving an optimization problem [19-21] is shown in Fig. 4.

[Flow chart: Begin → initializing parameters Ned, Nre, Nc, Ped, S, Ns → chemotaxis (turn/tumble randomly; if the position is better, swim up to Ns steps) → reaching chemotaxis times Nc? → reproduction, Sr = S/2 → reaching reproduction times Nre? → elimination-dispersal according to the selected probability → reaching elimination-dispersal times Ned? — Yes: algorithm ends]

FIG. 4 THE FLOW CHART OF BACTERIA FORAGING OPTIMIZATION ALGORITHM.

1) Parameter initialization. Ned is the number of elimination-dispersal events, Nre is the number of reproduction steps, Nc is the number of chemotaxis steps, Ped is the given elementary probability of elimination-dispersal, S is the scale of the bacteria population, and Ns is the number of swim steps.

2) Initialize the bacterial positions. Generate the initial positions and calculate the initial fitness value of the bacteria J(i, j, k, l), where j indexes the chemotaxis operation, k the reproduction operation, and l the elimination-dispersal operation.

3) Conduct the following operations in turn: elimination-dispersal cycle l = 1 : Ned; reproduction cycle k = 1 : Nre; chemotaxis cycle j = 1 : Nc.

4) Perform the bacterial chemotaxis cycle. In the BFO algorithm, the optimization process is composed of the three operations of chemotaxis, reproduction, and elimination-dispersal through three nested loops, of which the innermost is chemotaxis, the middle layer is reproduction, and the outermost layer is elimination-dispersal. After the chemotaxis of a life cycle is completed, reproduction is performed; after the reproductive cycle is completed, elimination-dispersal is performed. Comparatively speaking, the core of the BFO algorithm is the chemotaxis operation, which corresponds to the direction-adjustment strategy in the process of bacterial foraging (such as whether to run, and with what step length); the chemotaxis operator therefore determines the convergence of BFO. If j < Nc, return to the chemotaxis operation.

5) Reproduction cycle. After the chemotaxis cycle is completed, the fitness of each bacterium within the life cycle is accumulated to attain the bacterium's energy; the bacteria are sorted according to their energy, the half with the poorer ability to acquire energy is eliminated, and the half with the stronger ability is reproduced. If k < Nre, return to the reproduction operation.

6) Elimination-dispersal cycle. After the reproduction operator is completed, a random probability is generated and compared with the given elimination-dispersal probability Ped; if it is less than Ped, the elimination-dispersal operation is conducted. If l < Ned, return to the elimination-dispersal cycle; otherwise, the optimization ends.

TABLE 4 THE CONTRAST OF RESPECTIVE CHARACTERISTICS AND DISADVANTAGES FOR TYPICAL BIONIC INTELLIGENT OPTIMIZATION ALGORITHMS

| Algorithm name | Characteristics | Disadvantages |
| --- | --- | --- |
| GA | 1) Suitable for solving discrete problems; 2) equipped with mathematical theory support; 3) essential characteristic: uses genetic operations to solve optimization problems with a non-numeric concept, or ones for which a numerical concept is difficult to form; 4) greater flexibility of the search process. | 1) Problems such as Hamming cliffs; 2) performance is very sensitive to the choice of parameters; 3) cannot solve "needle in the ocean" questions, i.e., questions for which no fitness function exactly represents how good or bad an individual is, so the algorithm's evolution loses guidance. |
| ACO | 1) Suitable for path-searching problems; 2) simple behavior rules: through the pheromone (message transmission) link, the ant individuals work together to search for the optimal path; 3) uses a positive feedback mechanism, one of the most significant characteristics distinguishing it from other bionic optimization algorithms. | 1) Large computational amount, slow convergence speed, prone to stagnation; 2) easily trapped in a local optimal solution; 3) convergence is relatively sensitive to the initial parameter settings. |
| PSO | 1) Suitable for processing value types; 2) simple and convenient, fast solving speed. | 1) Underperforms on discrete optimization problems; 2) easy to fall into local optima; 3) the mathematical basis of the algorithm is relatively weak; deep and universal theoretical analysis is lacking. |
| AFSA | 1) Not only fast tracking of extreme-value points, but also a strong ability to jump out of local extremum points; 2) not sensitive to the selection of initial values and parameters; robust, simple, and easy to implement. | 1) The obtained solutions form the satisfactory solution domain of the system, not the exact solutions of the problem; 2) fast convergence at the initial stage of optimization, slow convergence in the later period; 3) the mathematical basis of the algorithm is relatively weak; deep and universal theoretical analysis is lacking. |
| BFO | 1) Uses only the function value of the target problem, needs no derivative information, has a certain self-adaptive ability to the search space, and has good global optimization ability; 2) multiple bacteria individuals search in parallel, giving high optimization efficiency; 3) outstanding local search ability. | 1) Prone to premature convergence; slow convergence speed; 2) lacks a solid mathematical foundation. |
1) Underperforming in processing the optimization problems of discrete; 1) Be suitable for value types processing; 2) Easy to fall into local optimum; 2) Simple and convenient, fast solving speed. 3) The mathematical basis of algorithm is relatively weak, there is lack of deep and universal theoretical analysis. 1) But obtained solutions is the satisfaction solution domain of 1) Artificial fish algorithm not only has the ability of fast system, not the exact solutions of problem; tracking extreme value point, but also has stronger ability of 2) The algorithm has faster convergence at initial stage of the jumping out of local extremum points; optimization, convergence slow in later period; 2) Algorithm is not sensitive to selection of initial value and 3) The mathematical basis of algorithm is relatively weak, there parameter, robust, simple and easy to be implemented. is lack of deep and universal theoretical analysis. 1) BFO algorithm only uses the function value of the target problem, do not need to use derivative information, and has certain self-adaptive ability to the search space, and has better 1) BFO is easy prematurity, convergence speed slow; global optimization ability; 2) BFO algorithm is the lack of a solid mathematical foundation. 2) Multiple bacteria individuals parallelly search, has high optimization efficiency; 3) BFO algorithm also has outstanding ability of local search.
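To make the nested chemotaxis, reproduction, and elimination-dispersal loops of BFO concrete, the following is a minimal sketch, not the authors' code: the function name `bfo_minimize`, the fixed step size, and the search bounds are our own illustrative choices, and the cell-to-cell swarming term of the full algorithm is omitted for brevity.

```python
import numpy as np

def bfo_minimize(f, dim, S=20, Nc=30, Ns=4, Nre=4, Ned=2, Ped=0.25,
                 step=0.1, lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a bare-bones BFO loop.

    Parameter names follow the step description in the text: S bacteria,
    Nc chemotaxis steps, Ns swim length, Nre reproduction steps,
    Ned elimination-dispersal events, Ped dispersal probability.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, size=(S, dim))      # initial bacteria positions
    best_x, best_j = None, np.inf

    for l in range(Ned):                          # outermost: elimination-dispersal
        for k in range(Nre):                      # middle: reproduction
            health = np.zeros(S)                  # accumulated fitness (energy)
            for j in range(Nc):                   # innermost: chemotaxis
                for i in range(S):
                    J = f(pos[i])
                    # tumble: pick a random unit direction
                    d = rng.normal(size=dim)
                    d /= np.linalg.norm(d)
                    # swim while the move keeps improving, up to Ns steps
                    for _ in range(Ns):
                        cand = np.clip(pos[i] + step * d, lo, hi)
                        Jc = f(cand)
                        if Jc < J:
                            pos[i], J = cand, Jc
                        else:
                            break
                    health[i] += J
                    if J < best_j:
                        best_j, best_x = J, pos[i].copy()
            # reproduction: healthiest half splits, weakest half is discarded
            order = np.argsort(health)
            pos = np.concatenate([pos[order[:S // 2]]] * 2)
        # elimination-dispersal: each bacterium relocates with probability Ped
        for i in range(S):
            if rng.random() < Ped:
                pos[i] = rng.uniform(lo, hi, size=dim)
    return best_x, best_j
```

For example, calling `bfo_minimize` with a sphere function `f(x) = sum(x**2)` and `dim=2` drives the best fitness close to zero, illustrating how the chemotaxis operator does the actual local descent while reproduction and dispersal redistribute the population.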

Comparative Analysis of Typical Bionic Intelligent Optimization Algorithms

The five algorithms are all bionic optimization algorithms that seek the global optimum by simulating the behavior of natural biological groups. They share some common characteristics, but each also has its own features and defects, as shown in Table 4.

Common characteristics:

1) They are intelligent optimization algorithms based on the behavior of natural biological groups.


2) They do not need derivative information or other mathematical properties of the optimization problem, so they remain applicable and effective under different environments and conditions.

3) They are probabilistic global search algorithms. This stochasticity gives them more opportunity to reach the global optimal solution and makes them more flexible; on the other hand, from the mathematical point of view, proving the correctness and reliability of this kind of algorithm is more difficult.

4) They are inherently concurrent, and their parallel-processing efficiency is very high, so they are well suited to the parallel computation of large-scale optimization problems, obtaining large performance gains at small expense.

5) They are self-organizing and learning: in uncertain, complex environments, the individuals of the algorithms can continuously improve their adaptability through learning.

Characteristics and Defects

Each algorithm also has its own characteristics and defects, as shown in Table 4.

The Research Trend of Intelligent Optimization Algorithms

Application practice shows that optimization technology has produced huge economic and social effects in many fields, especially in dealing with large-scale problems, where the benefit is even more significant. The advantages of the intelligent optimization methods lie mainly in three aspects [19]: first, because the algorithms involve only basic mathematical operations, the calculation is relatively simple; second, the demands that data processing places on the CPU and memory are not high; third, most intelligent optimization methods have potential parallel and distributed characteristics. These characteristics provide a technical guarantee for dealing with the large amounts of data stored in databases.
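The common characteristics listed above (black-box fitness, a population of candidates, stochastic variation, fitness-based selection) can be expressed as one generic skeleton that each of the five algorithms instantiates in its own way. The sketch below is illustrative only; the callable names (`vary`, `select`, and so on) are our own and are not taken from any of the algorithms' literature.

```python
import random

def stochastic_population_search(fitness, init_population, vary, select,
                                 generations=100):
    """Generic skeleton shared by GA, ACO, PSO, AFSA, and BFO.

    fitness         -- objective function, used as a black box (no derivatives)
    init_population -- callable returning the initial list of candidates
    vary            -- stochastic operator producing new candidates
    select          -- keeps the next population (may exploit fitness ranking)
    """
    population = init_population()
    best = min(population, key=fitness)
    for _ in range(generations):
        offspring = vary(population)                   # stochastic variation
        population = select(population + offspring, fitness)
        gen_best = min(population, key=fitness)
        if fitness(gen_best) < fitness(best):          # track global best so far
            best = gen_best
    return best

# Example: random-mutation search for the minimum of (x - 3)^2
if __name__ == "__main__":
    random.seed(0)
    best = stochastic_population_search(
        fitness=lambda x: (x - 3) ** 2,
        init_population=lambda: [random.uniform(-10, 10) for _ in range(20)],
        vary=lambda pop: [x + random.gauss(0, 0.5) for x in pop],
        select=lambda pop, fit: sorted(pop, key=fit)[:20],
    )
    # best ends up close to 3
```

In a GA, `vary` would be crossover plus mutation; in PSO, a velocity update; in BFO, the chemotaxis step, which illustrates why these algorithms need only function values and parallelize naturally over the population.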
Therefore, whether from the viewpoint of theoretical study or of applied research, research on intelligent optimization algorithms for overcoming the difficulties that classical algorithms encounter has important academic significance and practical value. Current research on intelligent optimization algorithms presents three trends [22-24]: the first is the improvement and wide application of classic intelligent algorithms, together with in-depth study of their theory; the second is to develop new intelligent tools so as to broaden their application domains and to seek their theoretical basis; the third is to combine classical intelligent algorithms with modern intelligent algorithms to construct hybrid intelligent algorithms.

Conclusion

Bionic intelligent algorithms are an important branch of the artificial intelligence research field. Nature gives human beings a great deal of enlightenment and inspiration, and based on the various biological and physiological phenomena of nature, people have designed many new methods for solving complex optimization problems; these methods can obtain acceptable solutions within limited calculation time. In this article, five kinds of typical bionic intelligent optimization algorithms have been analyzed and compared in detail, and the specific steps, advantages, and disadvantages of each algorithm have been given. This work will contribute to the understanding, selection, and application of typical bionic intelligent optimization algorithms.

ACKNOWLEDGMENT

This research is partially funded by the National Natural Science Foundation of China (71271084), the Fundamental Research Funds for the Central Universities (2014XS55; 2015XS32), and the Project for the Beijing Enterprise-Academics-Research Co-Culture of Post-Graduates.

REFERENCES

[1] Holland J H. Outline for a logical theory of adaptive systems[J]. Journal of the ACM, 1962, 9(3): 297-314.

[2] Bagley J D. The behavior of adaptive systems which employ genetic and correlation algorithms[D]. University of Michigan, 1967, (2): 36-52.

[3] Goldberg D E. Genetic algorithms in search, optimization, and machine learning[M]. Addison-Wesley, 1989, (3): 16-35.

[4] Dorigo M. Optimization, learning and natural algorithms[D]. Politecnico di Milano, Italy, 1992.

[5] Eberhart R C, Kennedy J. A new optimizer using particle swarm theory[C]//Proceedings of the Sixth International Symposium on Micro Machine and Human Science, 1995, 1: 39-43.

[6] Li Xiaolei. A New Intelligent Optimization Method: Artificial Fish School Algorithm[D]. Zhejiang University, 2003.

[7] Passino K M. Biomimicry of bacterial foraging for distributed optimization and control[J]. IEEE Control Systems Magazine, 2002, 22: 52-67.

[8] Shi Guangyan, Dong Jiali. Optimization Method[M]. Beijing: Higher Education Press, 2002.

[9] Zhong Yiwen. The Study on Intelligent Optimization Methods and Their Applications[D]. Zhejiang University, 2005.

[10] Shi Feng, Wang Hui, Yu Lei, et al. MATLAB Intelligent Algorithm Analysis of 30 Cases[M]. Beijing: Beijing University of Aeronautics and Astronautics Press, 2011.

[11] Lei Yingjie, Zhang Shanwen, Li Xuwu, et al. MATLAB Genetic Algorithm Toolbox and Application[M]. Xi'an: Xidian University Press, 2006.

[12] Weile D S, Michielssen E. Genetic algorithm optimization applied to electromagnetics: A review[J]. IEEE Transactions on Antennas and Propagation, 1997, 45(3): 343-353.

[13] Dorigo M, Gambardella L M. Ant colonies for the travelling salesman problem[J]. BioSystems, 1997, 43(2): 73-81.

[14] Li Jingling. The Explanation of the Search Engine Algorithm "Ant Colony Algorithm". China E-business Research Center, 2010. http://www.100ec.cn.

[15] Kennedy J. Particle swarm optimization[C]//Proceedings of the IEEE International Conference on Neural Networks, 1995, 4: 1942-1948.

[16] Shi Y, Eberhart R. A modified particle swarm optimizer[C]//IEEE International Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, 1998: 69-73.

[17] Guangyou Y. A modified particle swarm optimizer algorithm[C]//International Conference on Electronic Measurement & Instruments. IEEE, 2007: 2-675 - 2-679.

[18] Wang Chuang. The Analysis and Improvement of Artificial Fish-Swarm Algorithm[D]. Dalian Maritime University, 2008.

[19] Liu Xiaolong. Modification and Application of Bacteria Foraging Optimization Algorithm[D]. South China University of Technology, 2011.

[20] Hu Jie. Research on the Modified Bacteria Foraging Optimization Algorithm and Its Application[D]. Wuhan University of Technology, 2012.

[21] Chen H, Zhu Y, Hu K. Multi-colony bacteria foraging optimization with cell-to-cell communication for RFID network planning[J]. Applied Soft Computing, 2010, 10(2): 539-547.

[22] Xu X, Chen H. Adaptive computational chemotaxis based on field in bacterial foraging optimization[J]. Soft Computing, 2014, 18(4): 797-807.

[23] Niu B, Wang H, Wang J, et al. Multi-objective bacterial foraging optimization[J]. Neurocomputing, 2013, 116(10): 336-345.

[24] Iacca G, Neri F, Mininno E. Compact bacterial foraging optimization[J]. Lecture Notes in Computer Science, 2012: 84-92.
