Artificial Bee Colony (ABC) Optimization Algorithm for Training Extreme Learning Machine


Chunning Song*1, Liangming Feng2, Xiaofeng Lin3, Heng Li4

College of Electrical Engineering, Guangxi University, Nanning 530004, China

*1 scn703@163.com; 2 fenglming@126.com; 3 1083985558@qq.com; 4 fenglm07@163.com

Abstract

To address the slow convergence speed of the extreme learning machine (ELM) and the random selection of its initial weights and hidden layer thresholds, this paper proposes an extreme learning machine trained with the artificial bee colony optimization algorithm (ABC-ELM). Traditional training algorithms have drawbacks such as dependence on the shape of the error surface, sensitivity to the initial values of the connection weights and parameters, and high computational complexity; evolutionary algorithms are therefore employed to train ELM networks and overcome these issues. In ABC-ELM, the initial weights and threshold parameters of the network are optimized by the artificial bee colony (ABC) algorithm, whose trial vector generation strategies and associated control parameters are self-adapted by learning from previous experience in generating promising solutions. Simulation results demonstrate that ABC-ELM provides better generalization performance and a more compact network architecture.

Keywords

Artificial Bee Colony; Extreme Learning Machine; Weights; Optimization

Introduction

Recently, a new method called the extreme learning machine (ELM) was developed for single-hidden-layer feedforward networks (SLFNs) [1,2] and has become popular for its fast training speed, which comes from using random hidden node parameters and computing the output weights by least squares [3-5]. These features enable ELM to overcome several limitations of gradient-descent-based algorithms, such as getting stuck in local minima and slow convergence. Since in ELM the number of hidden nodes is assigned a priori and the hidden node parameters are randomly chosen and remain unchanged during training, many non-optimal nodes may exist; they play only a minor role in the network output and eventually increase the network complexity. Moreover, Huang et al. also point out that ELM tends to require more hidden nodes than conventional tuning-based algorithms in many cases [2].

During the past several years, artificial bee colony algorithms have been widely used as a global search method for optimizing neural network parameters. The artificial bee colony (ABC) algorithm [6] is an evolutionary computation technique inspired by the foraging behaviour of honey bee swarms; it is an adaptive stochastic algorithm based on swarm search strategies. Karaboga and Basturk compared the performance of the ABC algorithm with those of other well-known modern heuristic algorithms, such as the Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimization (PSO), on unconstrained problems [7]. Due to its simple implementation and small number of parameters, ABC has proved to be a good global optimization algorithm and has attracted more and more attention [8-10].

In this paper, we propose an extreme learning machine based on the artificial bee colony optimization algorithm. The algorithm uses the ABC algorithm to find optimal initial weights and threshold values for the extreme learning machine in the feasible solution space; the weight and threshold selection problem is transformed into the process of bees searching for the best nectar source. To show the efficiency of the proposed algorithm, we compare our method with several related algorithms, including I-ELM [11], PC-ELM [12] and PSO-ELM [13]. Simulations on regression and classification problems show that our method outperforms these related algorithms in general.

Extreme Learning Machine (ELM)

In 2006, Professor Huang proposed a new learning algorithm called the extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines


the output weights of SLFNs [1]. The ELM learning speed can be thousands of times faster than that of traditional feedforward network learning algorithms such as the back-propagation (BP) algorithm, while obtaining better generalization performance. The mathematical model of the extreme learning machine is shown in Fig. 1.

FIG. 1 MODEL OF AN ELM NETWORK HAVING ONE HIDDEN LAYER (input layer with nodes $1, \ldots, n$; hidden layer with nodes $1, \ldots, L$ and parameters $(a_i, b_i)$; output weights $\beta_1, \ldots, \beta_L$; output $o_j$)

Three-Step Learning Model

Given a training set $\psi = \{(x_i, t_i)\}_{i=1}^{N} \subset R^n \times R^m$, a hidden node output function $g(a_i, b_i, x)$, and the number of hidden nodes $L$:

Step one: Randomly generate the hidden layer parameters $(a_i, b_i),\ i = 1, 2, \ldots, L$, where $a_i$ is the input weight and $b_i$ is the threshold of the hidden layer nodes.

Step two: Calculate the hidden layer output matrix $H$,

$$H = \begin{bmatrix} h(x_1) \\ \vdots \\ h(x_N) \end{bmatrix} = \begin{bmatrix} g(a_1, b_1, x_1) & \cdots & g(a_L, b_L, x_1) \\ \vdots & \ddots & \vdots \\ g(a_1, b_1, x_N) & \cdots & g(a_L, b_L, x_N) \end{bmatrix}_{N \times L} \qquad (1)$$

Step three: Calculate the output weight $\beta$,

$$\beta = H^{+} T, \qquad \beta = \begin{bmatrix} \beta_1^{T} \\ \vdots \\ \beta_L^{T} \end{bmatrix}_{L \times m}, \qquad T = \begin{bmatrix} t_1^{T} \\ \vdots \\ t_N^{T} \end{bmatrix}_{N \times m} \qquad (2)$$

where $H^{+}$ is the Moore-Penrose generalized inverse of the hidden layer output matrix $H$ and $T$ is the target output. Through training, ELM can approximate the training samples until the error becomes less than a predefined constant $\varepsilon$:

$$O_j = \sum_{i=1}^{L} \beta_i\, g(a_i, b_i, x_j), \qquad \|O_j - t_j\| \le \varepsilon, \qquad j = 1, \ldots, N \qquad (3)$$
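As an illustration of the three-step procedure above, the sketch below shows a minimal ELM in Python/NumPy; it is not the authors' MATLAB implementation, and the sigmoid activation, data shapes and function names are assumptions made for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, T, L, rng=None):
    """Basic ELM training. X: (N, n) inputs, T: (N, m) targets, L: number of hidden nodes."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[1]
    A = rng.uniform(-1.0, 1.0, size=(L, n))   # Step one: random input weights a_i
    b = rng.uniform(-1.0, 1.0, size=L)        # Step one: random hidden thresholds b_i
    H = sigmoid(X @ A.T + b)                  # Step two: hidden layer output matrix H (N x L)
    beta = np.linalg.pinv(H) @ T              # Step three: output weights via the Moore-Penrose inverse
    return A, b, beta

def elm_predict(X, A, b, beta):
    return sigmoid(X @ A.T + b) @ beta        # network output O = H * beta
```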

The node transfer function g(.) is a nonlinear function such as a Hardlim function, a Sigmoid function, a Gaussian function, etc. Generally, the adaptation can be carried out by minimizing (optimizing) the network error function E :

$$E((a_i, b_i)) = \frac{1}{N} \sum_{j=1}^{N} \sum_{k=1}^{l} (o_k - t_k)^2 \qquad (4)$$

where $(a_i, b_i)$ are the hidden node parameters, $t_k$ is the desired value of the $k$-th output node and $o_k$ is its actual value. The optimization goal is to minimize the objective function by optimizing the hidden node parameters $(a_i, b_i),\ i = 1, 2, \ldots, L$. In evolutionary algorithms, the major idea underlying this synthesis is to interpret the weight matrices of the ELM as individuals, to change the hidden node parameters by means of operations such as crossover and mutation, and to use the error $E((a_i, b_i))$ produced by the ELM as the fitness measure that guides selection. This leads to the following evolutionary training cycle:

1. Formation of the next population of ELMs by means of operators such as crossover and mutation and fitness-oriented selection of the weight matrices.


2. Evaluation of the fitness values of the ELMs.

3. If the desired result is obtained, stop; otherwise go to step 1.

Artificial Bee Colony (ABC) Algorithm

The Artificial Bee Colony (ABC) algorithm was proposed by Karaboga in 2005 for optimizing numerical problems [6]. The algorithm simulates the intelligent foraging behaviour of honey bee swarms. It is a very simple, robust, population-based stochastic optimization algorithm. In the ABC algorithm there are three kinds of artificial bees: employed bees, onlooker bees and scouts. The number of employed bees (or onlooker bees) is the same as the number of solutions. Each solution $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ is a $D$-dimensional vector, where $D$ is the number of optimization parameters. The ABC algorithm consists of three phases: the employed bee phase, the onlooker bee phase and the scout phase. The three phases are executed iteratively until the number of cycles equals the maximum cycle number MCN or the expected error $\varepsilon$ is met.

In the employed bee phase, an employed bee produces a modification of the position in her memory and applies greedy selection: if the nectar amount of the new food source is better than that of her currently associated food source, the employed bee memorizes the new source and abandons the old one; otherwise she keeps the old food source. When all employed bees have completed this process, they share the nectar information of the food sources with the onlookers. An onlooker bee evaluates the nectar information taken from all employed bees, and the same greedy selection rule is used to compare the selected solution with the new solution. An onlooker bee chooses a solution according to a probability value $p_i$ associated with that solution, calculated by the following formulas:

$$\mathrm{fit}_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i > 0 \\[4pt] 1 + |f_i|, & f_i \le 0 \end{cases} \qquad (5)$$

$$p_i = \frac{\mathrm{fit}_i}{\sum_{i=1}^{SN} \mathrm{fit}_i} \qquad (6)$$

where $f_i$ is the objective function value of the optimization problem, $\mathrm{fit}_i$ is the fitness value of solution $i$, which is proportional to the nectar amount of the food source at position $i$, and $SN$ is the number of food sources, which is equal to the number of employed bees. Clearly, under this greedy selection rule a good food source attracts more onlooker bees and gets a bigger chance of evolution, which accelerates the convergence speed of the algorithm.
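As a rough sketch of how Eqs. (5) and (6) can be evaluated (Python/NumPy; the function names are ours, not from the paper):

```python
import numpy as np

def fitness(f):
    """Eq. (5): map objective values f_i (e.g. RMSE) to fitness values fit_i."""
    f = np.asarray(f, dtype=float)
    fit = np.empty_like(f)
    pos = f > 0
    fit[pos] = 1.0 / (1.0 + f[pos])
    fit[~pos] = 1.0 + np.abs(f[~pos])
    return fit

def onlooker_probabilities(f):
    """Eq. (6): probability of each food source being chosen by an onlooker bee."""
    fit = fitness(f)
    return fit / fit.sum()

# Example: the food source with the smallest objective value gets the largest probability.
print(onlooker_probabilities([0.5, 0.1, 0.9]))
```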

In the above two phases, in order to produce a candidate food position from the old one in memory, the ABC uses the following expression:

$$u_{ij} = x_{ij} + \mathrm{rand}(-1, 1) \cdot (x_{ij} - x_{kj}) \qquad (7)$$

where $k \in \{1, 2, \ldots, SN\}$ and $j \in \{1, 2, \ldots, D\}$ are randomly chosen indexes and $k$ is different from $i$. As the difference between the parameters $x_{ij}$ and $x_{kj}$ decreases, the perturbation of the position $x_{ij}$ decreases as well. Thus, as the search approaches the optimum solution in the search space, the step length is adaptively reduced.

A food source whose nectar has been abandoned by the bees is replaced with a new food source by the scouts. In ABC, this is simulated by producing a random position and replacing the abandoned one with it. The predetermined number of cycles after which a source is abandoned is an important control parameter of the ABC algorithm, called the "limit". Assuming that the abandoned source is $x_i$ and $j \in \{1, 2, \ldots, D\}$, the scout discovers a new food source to replace $x_i$. This operation is defined as in (8):

$$x_{ij} = x^{j}_{\min} + \mathrm{rand}(0, 1) \cdot \left(x^{j}_{\max} - x^{j}_{\min}\right) \qquad (8)$$
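A minimal sketch of the neighbour search of Eq. (7) and the scout re-initialization of Eq. (8) (Python/NumPy; the names and population layout are assumptions):

```python
import numpy as np

rng = np.random.default_rng()

def neighbour(X, i):
    """Eq. (7): perturb one randomly chosen dimension j of solution i using a random partner k != i."""
    SN, D = X.shape
    k = rng.choice([s for s in range(SN) if s != i])
    j = rng.integers(D)
    u = X[i].copy()
    u[j] = X[i, j] + rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])
    return u

def scout_reinitialize(x_min, x_max):
    """Eq. (8): replace an abandoned food source with a random position inside the search range."""
    x_min, x_max = np.asarray(x_min), np.asarray(x_max)
    return x_min + rng.uniform(0.0, 1.0, size=x_min.shape) * (x_max - x_min)
```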

After each candidate source position $u_{ij}$ is produced and evaluated by the artificial bee, its performance is compared with that of the old position, and the greedy selection mechanism is again used to choose between the old source and the candidate.


There are three control parameters in ABC: the number of food sources, which equals the number of employed (or onlooker) bees ($SN$); the value of the limit; and the maximum cycle number (MCN). In a robust search process, exploration and exploitation must be carried out together. In the ABC algorithm, the onlookers and employed bees carry out the exploitation process in the search space, while the scouts control the exploration process.

Hybrid ABC-ELM Algorithms

To address the slow convergence speed of the extreme learning machine and the random selection of its initial weights and threshold values, we propose an extreme learning machine based on the artificial bee colony optimization algorithm (ABC-ELM). In this algorithm the ELM is taken as the foundation, the ABC algorithm is introduced to optimize the network input weights and hidden node biases, and the extreme learning machine is used to derive the network output weights. The algorithm combines the robust global search ability of ABC with the strong nonlinear mapping and learning ability of ELM. The main idea of ABC-ELM is to use the error $\mathrm{RMSE}((a_i, b_i))$ produced by the ELM as the fitness measure that guides selection; the optimized network parameters are then taken as the input of the ELM algorithm, and the ELM is trained repeatedly until the total error is less than the expected error $\varepsilon$. Given a set of training data $\{(x_i, t_i)\}_{i=1}^{N} \subset R^n \times R^m$ and $L$ hidden nodes with an activation function $g(\cdot)$, the ABC-ELM algorithm is summarized as follows.

Step 1. Initialization

A set of $SN$ vectors, each containing all the network hidden node parameters, is initialized as the population of the first generation:

$$\theta_{k,G} = \left[ a^{T}_{1,(k,G)}, \ldots, a^{T}_{L,(k,G)}, b_{1,(k,G)}, \ldots, b_{L,(k,G)} \right] \qquad (9)$$

where $a_j$ and $b_j$ ($j = 1, 2, \ldots, L$) are randomly generated, $G$ represents the generation and $k = 1, 2, \ldots, SN$.

Step 2. Calculation of output weights and RMSE

The network output weight matrix and the root mean square error (RMSE) are calculated for each population vector with the following equations, respectively:

$$\beta_{k,G} = H^{+}_{k,G} T \qquad (10)$$

$$\mathrm{RMSE}_{k,G} = \sqrt{\frac{\sum_{j=1}^{N} \left\| \sum_{i=1}^{L} \beta_i\, g\!\left(a_{i,(k,G)}, b_{i,(k,G)}, x_j\right) - t_j \right\|}{m \cdot N}} \qquad (11)$$
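The evaluation of a single population vector in Step 2 can be sketched as follows (Python/NumPy). The decoding of $\theta$ into $(a_i, b_i)$, the sigmoid activation and the reading of the norm in Eq. (11) as a squared error are assumptions made to obtain a runnable example of Eqs. (9)-(11).

```python
import numpy as np

def evaluate_candidate(theta, X, T, L):
    """Decode a population vector theta (Eq. 9) and compute beta (Eq. 10) and RMSE (Eq. 11).
    X: (N, n) training inputs, T: (N, m) targets, L: number of hidden nodes."""
    N, n = X.shape
    m = T.shape[1]
    A = theta[:L * n].reshape(L, n)              # input weights a_1, ..., a_L
    b = theta[L * n:L * n + L]                   # hidden thresholds b_1, ..., b_L
    H = 1.0 / (1.0 + np.exp(-(X @ A.T + b)))     # hidden layer output matrix (N x L)
    beta = np.linalg.pinv(H) @ T                 # Eq. (10): output weights
    err = H @ beta - T                           # network error over the training set
    rmse = np.sqrt(np.sum(err ** 2) / (m * N))   # Eq. (11), with the norm read as a squared error
    return beta, rmse
```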

Step 3. ABC searching

In the first generation, the population vector with the best RMSE is stored as $\theta_{best,1}$, together with $\mathrm{RMSE}_{best,1}$. Similar to [14], for each target vector in the current generation the trial vector generation strategy is applied according to a probability $p_i$. So that even poorer food sources can still serve as reference sources, an artificial onlooker bee chooses a food source depending on the probability calculated by the following expression:

$$p_i = 0.9 \cdot \frac{\mathrm{fit}_i}{\mathrm{MAX\_fit}} + 0.1 \qquad (12)$$

where $\mathrm{MAX\_fit} = \max(\mathrm{fit}_1, \mathrm{fit}_2, \ldots, \mathrm{fit}_{SN})$. All the trial vectors $\theta_{k,G+1}$ generated at the $(G+1)$-th generation are evaluated using equation (13):

$$\theta_{k,G+1} = \begin{cases} \theta_{k,G+1}, & \mathrm{RMSE}_{\theta_{k,G}} - \mathrm{RMSE}_{\theta_{k,G+1}} > 0 \\ \theta_{k,G+1}, & \mathrm{RMSE}_{\theta_{k,G}} - \mathrm{RMSE}_{\theta_{k,G+1}} < 0 \ \text{ and } \ \|\beta_{\theta_{k,G+1}}\| < \|\beta_{\theta_{k,G}}\| \\ \theta_{k,G}, & \text{otherwise} \end{cases} \qquad (13)$$

The norm of the output weight $\beta$ is added as one more criterion for trial vector selection because, as pointed out in [15], neural networks tend to have better generalization performance with smaller weights.
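A sketch of the selection rule of Eq. (13), with the output-weight norm as the secondary criterion (Python/NumPy; the tuple-style interface is an assumption for illustration):

```python
import numpy as np

def select(theta_old, rmse_old, beta_old, theta_new, rmse_new, beta_new):
    """Eq. (13): choose between the target vector and its trial vector."""
    if rmse_old - rmse_new > 0:                    # the trial vector lowers the RMSE
        return theta_new, rmse_new, beta_new
    if rmse_old - rmse_new < 0 and np.linalg.norm(beta_new) < np.linalg.norm(beta_old):
        return theta_new, rmse_new, beta_new       # RMSE not improved, but smaller output weights
    return theta_old, rmse_old, beta_old           # otherwise keep the old vector
```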


The experimental data in the literature [16] show that a poor food source does not by itself stop evolving, but the probability of it evolving into a better bee is very small, while the probability that its retention count stays below the threshold "limit" is very large. In order to strengthen the ABC algorithm, this paper develops a strategy of dynamically adjusting the search space during iterative optimization, using several improvements of the ABC algorithm. In each loop, the worst $l$ ($l = SN/10$) nectar sources are reconfigured, and the search space is reconfigured according to the current population situation. The dynamically adjusted space is set as:

$$\begin{cases} \mathrm{new}x^{j}_{\min} = \min\left(x_{(l+1)j}, x_{(l+2)j}, \ldots, x_{SNj}\right) \\ \mathrm{new}x^{j}_{\max} = \max\left(x_{(l+1)j}, x_{(l+2)j}, \ldots, x_{SNj}\right) \end{cases} \qquad (14)$$

After adjusting $\mathrm{new}S = \{(\mathrm{new}x^{j}_{\min}, \mathrm{new}x^{j}_{\max}) \mid j = 1, 2, \ldots, D\}$, the $l$ food sources are reconfigured according to Eq. (15):

$$\theta^{j}_{i} = \mathrm{new}x^{j}_{\min} + \mathrm{rand}(0, 1) \cdot \left(\mathrm{new}x^{j}_{\max} - \mathrm{new}x^{j}_{\min}\right) \qquad (15)$$

where $i = 1, 2, \ldots, l$ and $j = 1, 2, \ldots, D$.
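A sketch of the dynamic search-space adjustment of Eqs. (14) and (15) (Python/NumPy; sorting the population by RMSE and the function name are assumptions):

```python
import numpy as np

rng = np.random.default_rng()

def reconfigure_worst(population, rmse, l=None):
    """Eqs. (14)-(15): rebuild the worst l food sources inside the bounds spanned by the rest."""
    SN, D = population.shape
    l = l if l is not None else max(1, SN // 10)
    order = np.argsort(rmse)                      # ascending RMSE: best sources first
    kept, worst = order[:SN - l], order[SN - l:]
    new_min = population[kept].min(axis=0)        # Eq. (14): per-dimension lower bounds
    new_max = population[kept].max(axis=0)        # Eq. (14): per-dimension upper bounds
    population[worst] = new_min + rng.uniform(0.0, 1.0, size=(l, D)) * (new_max - new_min)  # Eq. (15)
    return population
```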

Performance Evaluation of ABC-ELM

The performance of ABC-ELM is evaluated on benchmark problems that include 5 regression applications (Concrete, Parkinsons, Friedman, Wine, Airfoil) [12] and 6 classification applications (Segment, Iris, Heart, Waveform, Musk, Statlog) [12]. ABC-ELM is first compared with other popular ELM learning algorithms such as I-ELM, PC-ELM and PSO-ELM. All the simulations were conducted in the MATLAB 2010a environment running on an ordinary PC with a 2.0 GHz CPU. During the simulations, the input attributes of the regression and classification applications are normalized into [-1, 1] while the output attributes are normalized into the range [0, 1]. The input weights and biases are randomly chosen from the range [-1, 1]. In each network selected for each problem, the sigmoid function is employed as the transfer function. Training was stopped when the root mean squared error of the outputs was equal to or less than 0.001 (RMSE ≤ 0.001) or when the maximum generation or cycle number was reached. Since the difficulty of each problem is different, different parameter settings were used for each of them.

ABC settings: the value of "limit" is equal to $SN \cdot D$, where $D$ is the dimension of the problem; the colony size is 50 for all problems.

PSO settings: the particle number is set to 30; position and speed are limited to [-1, 1]; the particle acceleration factors are $c_1 = 2.8$ and $c_2 = 1.3$; the length of the particle is $(n+1) \cdot L$, where $n$ is the number of input attributes and $L$ is the number of ELM hidden nodes.

Evaluation on Regression Problems

In this simulation, the performances of ELM, I-ELM, PC-ELM, PSO-ELM and ABC-ELM with the same numbers of hidden nodes are compared on 5 real-world regression datasets covering various domains from the UCI database. Table 1 lists the specifications of these benchmark problems. For each trial, the training set and testing set are randomly generated from the whole dataset with the partition numbers shown in Table 1. For comparison, we set the ELM parameters as in the literature [12]. The numbers of ELM hidden nodes are also given in Table 1.

TABLE 1 SPECIFICATIONS OF REAL-WORLD REGRESSION DATASETS

Datasets      #Attri   #Train   #Test   (ELM) #Nodes
concrete           9      530     500             20
Parkinsons        22     3000    2875            200
Friedman          11    20768   20000             20
Wine              12      800     799             20
Airfoil            6      753     750             20

Table 2 lists the averaged results of multiple trials for all these algorithms. All the results (RMSE) in this section are obtained over 50 trials for all cases. As seen from Table 2, for all regression cases the testing RMSE of ABC-ELM is generally much smaller than that of the other methods with the same numbers of hidden nodes. This is because, in basic ELM, some of the hidden nodes may play a very minor role in the network output and may eventually increase the network complexity and reduce the generalization efficiency. Furthermore, since ABC performs a global search to find the best hidden node parameters, we can obtain


higher search efficiency while at the same time obtaining a more compact network architecture.

TABLE 2 RESULTS COMPARISONS FOR THE REGRESSION PROBLEMS

Datasets      ELM               I-ELM              PC-ELM             PSO-ELM                      ABC-ELM
              Mean    Times(s)  Mean    Times(s)   Mean    Times(s)   (P,t)   Mean    Times(s)     Mean    Times(s)
concrete      0.0684  0.0305    0.1884  0.3943     0.0501  0.401      (2,10)  0.0487  2.5914       0.0392  7.0357
Parkinsons    0.1947  1.9066    0.1349  4.9827     0.1115  4.1016     (4,10)  0.1064  13.7697      0.0817  51.3485
Friedman      0.0846  6.3181    0.1914  13.525     0.0776  13.9422    (5,20)  0.0772  32.2341      0.0644  90.0382
Wine          0.0865  0.2934    0.1782  0.4633     0.0743  1.7782     (2,10)  0.0736  5.3247       0.0613  13.0376
Airfoil       0.1154  0.2063    0.2044  0.4138     0.1137  1.0392     (4,10)  0.1035  5.0946       0.0798  11.7301

Evaluation on Classification Problems

In this section, the performance of ABC-ELM is evaluated on 6 real-world classification datasets. The specifications of these 6 datasets are shown in Table 3. For each trial, the training set and testing set are randomly generated from the whole dataset with the partition numbers shown in Table 3. The numbers of ELM hidden layer nodes are also given in Table 3. For comparison, we set the ELM parameters as in the literature [12]. All the results in this section are obtained over 50 trials for all cases.

The averaged classification results (testing accuracy) of multiple trials for all 6 real-world datasets are shown in Table 4. With the same numbers of hidden nodes, ABC-ELM achieves the highest testing accuracy on all 6 datasets among the compared algorithms. The hidden node parameters leading to the largest decrease of the residual error are added to the ABC-ELM network.

TABLE 3 SPECIFICATIONS OF REAL-WORLD CLASSIFICATION DATASETS

Datasets                       #Attri   #Train   #Test   Classes   #Nodes
Segment                            19     1500     810         7       20
Iris                                4       80      70         3       10
Heart                              14      152     151         2       20
Waveform                           21     2500    2500         3       20
Musk                              166     3299    3299         2      100
Statlog (Landsat Satellite)        36     1000    1000         6      200

TABLE 4 RESULTS COMPARISONS FOR THE CLASSIFICATION PROBLEMS

Datasets     ELM              I-ELM              PC-ELM             PSO-ELM                       ABC-ELM
             Mean   Times(s)  Mean   Times(s)    Mean   Times(s)    (P,t)    Mean   Times(s)      Mean   Times(s)
Segment      83.46  0.2344    73.19  29.9302     89.34  31.0722     (4,11)   95.07  36.3627       97.33  56.5271
Iris         93.75  0.0469    78.57  6.9872      91.43  5.9126      (5,100)  97.59  16.5071       100    59.2946
Heart        77.13  0.0313    78.77  6.8453      81.23  6.6810      (5,100)  85.43  22.7251       87.42  83.6907
Waveform     84.06  0.1251    73.88  50.1095     84.55  51.7299     (5,100)  87.92  63.4437       89.13  126.1677
Musk         93.94  1.3906    77.35  80.2934     89.59  73.7254     (5,100)  96.68  90.7731       97.85  148.0683
Statlog      83.39  0.0625    73.06  42.326      89.34  31.0735     (5,100)  91.40  68.1303       94.01  107.314

Conclusions

In this paper we have developed a novel learning algorithm, named ABC-ELM, for single-hidden-layer feedforward networks. A performance comparison of ABC-ELM with other learning algorithms has been carried out on benchmark regression and classification problems. The results show that, compared with I-ELM, PC-ELM and PSO-ELM, the proposed ABC-ELM can achieve a faster convergence rate and a much more compact network architecture. Unlike some ELM learning methods, our approach uses the artificial bee colony algorithm to optimize the network hidden node parameters and employs the extreme learning machine to derive the network output weights. The main goal of this paper is not to show that ABC-ELM is the best in terms of testing accuracy or computational time, but to show that it is a very good compromise among the number of hidden nodes, the training time and the testing accuracy/RMSE. Furthermore, this paper shows that the hidden node parameters of SLFNs can be optimized by any optimization algorithm, so the performance of ABC-ELM with other types of optimization methods is worth investigating. We will also try to find a more efficient way to reduce the training time of the proposed ABC-ELM algorithm.

ACKNOWLEDGEMENTS

This research work was supported by the National Natural Science Foundation of China under Grant No. 61364007 and the Guangxi Provincial Natural Science Foundation under Grant No. 2011GXNSFC018017.


REFERENCES

[1] G. B. Huang, Q. Y. Zhu, and C. K. Siew. "Extreme learning machine: A new learning scheme of feedforward neural networks," in Proc. Int. Joint Conf. Neural Netw., vol. 2, Budapest, Hungary, Jul. 2004, pp. 985-990.

[2] G. B. Huang, Q. Y. Zhu, and C. K. Siew. "Extreme learning machine: Theory and applications," Neurocomputing, vol. 70, nos. 1-3, pp. 489-501, 2006.

[3] Jiuwen Cao, Zhiping Lin, Guang-Bin Huang. Self-adaptive evolutionary extreme learning machine. Neural Processing Letters, 2012, 36:285-305.

[4] Huang Guang-Bin, Ding Xiaojian. Optimization method based extreme learning machine for classification. Neurocomputing, 2010, 74(1):155-163.

[5] Iosifidis A, Tefas A, Pitas I. Dynamic action recognition based on dynemes and extreme learning machine. Pattern Recognition Letters, 2013, 34(15):1890-1898.

[6] Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 2007, 39(3):459-471.

[7] Basturk B, Karaboga D. An artificial bee colony (ABC) algorithm for numeric function optimization. In: IEEE Swarm Intelligence Symposium, Indianapolis, Indiana, USA, May 12-14, 2006.

[8] Alok Singh. An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem. Applied Soft Computing, 2008, 9:1-7.

[9] Dervis Karaboga and Bahriye Basturk. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. Springer-Verlag Berlin Heidelberg, pp. 789-798, 2007.

[10] D. Karaboga, B. Basturk. On the performance of artificial bee colony (ABC) algorithm. Applied Soft Computing, 2008, 8:687-697.

[11] Huang Guang-Bin, Li Ming-Bin, Chen Lei, et al. Incremental extreme learning machine with fully complex hidden nodes. Neurocomputing, 2008, 71:576-583.

[12] Yimin Yang, Yaonan Wang, Xiaofang Yuan. Parallel chaos search based incremental extreme learning machine. Neural Processing Letters, 2013, 37:277-301.

[13] Wang Jie, Bi Haoyang. Particle swarm optimization (PSO) search based extreme learning machine. Journal of Zhengzhou University (Natural Science Edition), 2013, 45(1):100-104 (in Chinese).

[14] Qin A-K, Huang V-L, Suganthan P-N. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Transactions on Evolutionary Computation, 2009, 13(2):398-417.

[15] Bartlett P L. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 1998, 44(2):525-536.

[16] Chen Zhuo-Ming, Wang Yun-Xia, Li Wei-Xin, et al. Artificial bee colony algorithm for modular neural network. In: Advances in Neural Networks, ISNN 2013, Springer Berlin Heidelberg, 2013:350-356.

Chunning Song received the M.E. degree in applied mathematics and the M.Eng. degree in computer engineering from Huazhong University of Science & Technology, PR China, in 1992 and 1995, respectively. Since May 1995 he has been working as an Associate Professor at the Automation Institute of Guangxi, College of Electrical Engineering, Guangxi University. He was twice awarded the second prize of the Guangxi Scientific and Technological Progress Award, in 2006 and 2011. His current research interests include machine learning, computational intelligence, neural networks, and power electronic equipment.

Xiaofeng Lin has been a Professor at the College of Electrical Engineering, Guangxi University, Nanning, China, since 1999. From 2008 to 2009 he was a visiting Professor at the University of Illinois at Chicago, USA. His research interests are neural networks, approximate dynamic programming, intelligent control and information processing.

Liangming Feng is currently a master's student at the College of Electrical Engineering, Guangxi University, Nanning. His research interests include neural networks and evolutionary algorithms. He has published a number of papers


in international journals and conferences.

Heng Li is currently a master's student at the College of Electrical Engineering, Guangxi University, Nanning. His research interests include neural networks and optimization algorithms.
