Swarm Optimization Applied to Engine RPM Control


Steve Rogers, Brian Birge
Institute for Scientific Research, srogers@isr.us

Abstract

Optimization is commonly used in control system design and testing. Most optimization approaches are based on simplex or gradient-descent methods, and when the system is complex these approaches are susceptible to being caught in local minima. Particle swarm optimization (PSO) is a subset of evolutionary computation, which also includes genetic algorithms. Evolutionary search techniques have been introduced as a means of finding global minima within a parameter range. PSO has been presented by a number of researchers, with applications in function optimization and neural network training. In this study the PSO theory and equations are detailed. The procedure is applied to an engine RPM control system and results are presented. The optimization procedure is used to minimize cumulative error and to select parameters for a lead-lag plus integral control system. The simulation was coded in Simulink and is shown in the figures.

Keywords: engine control, optimization, particle swarm optimization

Introduction

Control system optimization has long been used to improve performance, fine-tune control parameters, and enforce performance criteria under varying operating conditions. Idle control has long been a concern of engine manufacturers, as much of normal operation occurs at idle. During idle the engine speed must remain constant in the face of numerous disturbances, including air conditioning/heating loads, lighting, battery charging, and power steering. The system is also highly nonlinear with variable time delays. Consequently, it is of value to optimize controller parameters based on reasonable control criteria. Engine idle controllers have traditionally been classical control circuits, such as proportional-integral-derivative, lead, and lead-lag designs. In nearly all cases these controller designs can meet time-based or frequency-based control criteria for single-input single-output systems.

Optimization is a relatively mature field and there are many texts on the classical optimization approaches [4-8]. The classical approaches have difficulty avoiding local minima. Recent approaches modeled after a social metaphor [2] have been investigated in order to improve overall optimization. One of these approaches is called particle swarm optimization [1, 2]. It has been favorably compared to other modern optimization techniques in avoiding local minima [2], in time to completion, and in overall accuracy of the solution. Particle swarm optimization (PSO) is a relatively new addition to the evolutionary methods. It may be considered a form of cellular automata [9]; hence, it may be applied to a variety of applications.

Swarm Optimization Fundamentals

Particle swarm optimization (PSO) has been proposed as a technique to avoid local minima. PSO is a subset of the evolutionary computation methodologies and is considered an adaptive algorithm. A population of individuals (particles or agents), the swarm, adapts by returning towards previously successful regions of the search space. The particles proceed through the search space, evaluating the optimization criterion as they go. Each particle's trajectory is influenced by both the global best solution and that particle's own best solution. The influence of the global best performance permits cooperation within the swarm during the search and provides a memory of previous best performances.
The PSO approach is summarized below [1, 2, 9]. The $i$th particle is represented as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$, where $D$ is the dimensionality of the problem. The velocity of the $i$th particle is $V_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$. The best previous position of the $i$th particle is $P_i = (p_{i1}, p_{i2}, \ldots, p_{iD})$, and the global best previous position found by the swarm is $P_g = (p_{g1}, p_{g2}, \ldots, p_{gD})$. In each iteration the particle velocity is updated for each dimension $k \in \{1, 2, \ldots, D\}$ by

$$v_{ik} = w_k v_{ik} + c_1 \varphi_1 (p_{ik} - x_{ik}) + c_2 \varphi_2 (p_{gk} - x_{ik}).$$

Here $w_k$ is an inertia weight in the range $[0, 1]$; note that 1 is an upper stability limit. The constants $c_1, c_2$ are normally in the range $[2, 4]$, and $\varphi_1, \varphi_2$ are influence weightings for the personal best and the global best, taken from the range $[0, 1]$. Each particle's position is then updated by

$$x_{ik} = x_{ik} + v_{ik}.$$

Convergence of the particle swarm is influenced by the constants, the initial particle positions, and the swarm size. A minimal sketch of this update loop is given below.
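The sketch below is a minimal MATLAB illustration of these update equations; it is not the psovectorized routine from the PSOt toolbox [1] used later in this paper, and the toy objective, swarm size, iteration count, and constants are arbitrary choices made here for the example.

% Minimal PSO sketch of the update equations above (illustration only).
cost = @(x) sum(x.^2, 2);          % toy objective with its minimum at the origin
D  = 5;                            % problem dimensionality
N  = 20;                           % number of particles in the swarm
w  = 0.9;                          % scalar inertia weight (1 is the upper stability limit)
c1 = 2;  c2 = 2;                   % personal-best and global-best constants

X  = rand(N, D);                   % initial particle positions
V  = zeros(N, D);                  % initial particle velocities
P  = X;                            % personal best positions
Pc = cost(P);                      % personal best costs
[gc, gi] = min(Pc);                % global best cost and the particle that owns it
G  = P(gi, :);                     % global best position

for it = 1:200
    phi1 = rand(N, D);             % influence weightings in [0, 1], redrawn each iteration
    phi2 = rand(N, D);
    V = w*V + c1*phi1.*(P - X) + c2*phi2.*(repmat(G, N, 1) - X);   % velocity update
    X = X + V;                     % position update
    cx = cost(X);
    better = cx < Pc;              % particles that improved on their personal best
    P(better, :) = X(better, :);
    Pc(better)   = cx(better);
    [gc, gi] = min(Pc);            % refresh the global best
    G = P(gi, :);
end
disp([gc, G])                      % best cost found and the corresponding position

Updating all particles at once with matrix operations, as above, is the same vectorized style suggested by the name of the toolbox routine psovectorized.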

Engine RPM Model

The engine RPM model is built in Simulink and is part of the set of demonstrations provided with the MathWorks software package. The top-level model is shown in Figure 1 below.

Figure 1. High-level Simulink model from MathWorks.

Note that the model has variable timing, so normal linearization techniques will not apply without modification of the timing block. The Controller block at the left of Figure 1 was modified for optimization. The drag torque block at the lower right of Figure 1 produces the load disturbances for the simulation.

Simulation

The controller strategy is shown in Figure 2 below. The control design is a lead-lag with an integrator. There is also an anti-windup mechanism (note the block with gain = 0.03 at the bottom center of Figure 2) included with the integrator. The anti-windup gain was not part of the optimization. There were five optimization parameters: two in the lead block, two in the lag block, and one in the Ki (integral gain) block.
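One plausible reading of this description and of the eng_opt listing below, stated here only as an illustrative assumption, is that the lead and lag blocks are first-order transfer functions whose zero and pole locations account for four of the parameters, with the integral gain as the fifth (the symbols $z_1$, $p_1$, $z_2$, $p_2$, and $K_i$ are labels introduced here for illustration, not names taken from the model):

$$C_{\mathrm{lead}}(s) = \frac{s + z_1}{s + p_1}, \qquad C_{\mathrm{lag}}(s) = \frac{s + z_2}{s + p_2}, \qquad C_{\mathrm{int}}(s) = \frac{K_i}{s},$$

so that the candidate vector handed to the optimizer collects $(z_1, p_1, z_2, p_2, K_i)$, each constrained to the bounds set in main.m.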

Figure 2. Optimized controller using PSO.

Reference [1] is the source of the PSO MATLAB code. The toolbox may be downloaded freely as a self-extracting zip file from http://www4.ncsu.edu/~bkbirge/PSO/PSOt.exe. The basic MATLAB optimization function is shown below. The function eng_opt is called by the PSO routine psovectorized from the PSO toolbox. The vector xin contains the five optimization parameters.

function out = eng_opt(xin)
% Objective function for the engine RPM controller optimization.
% xin(1), xin(2) set the lead block; xin(3), xin(4) set the lag block;
% xin(5) sets the controller gain block.
lead2 = -xin(1);
lag2  = -xin(2);
lead3 = -xin(3);
lag3  = -xin(4);
kp    = xin(5);
% Write the candidate parameters into the Simulink model.
set_param('enginewc_LC/Controller/lead','Numerator',...
    ['[',num2str(1),' ',num2str(lead2),']'])
set_param('enginewc_LC/Controller/lead','Denominator',...
    ['[',num2str(1),' ',num2str(lag2),']'])
set_param('enginewc_LC/Controller/lag','Numerator',...
    ['[',num2str(1),' ',num2str(lead3),']'])
set_param('enginewc_LC/Controller/lag','Denominator',...
    ['[',num2str(1),' ',num2str(lag3),']'])
set_param('enginewc_LC/Controller/Kp','Gain',...
    num2str(kp))
% Run the simulation; the model is assumed to log the error signal to the
% workspace variable err used below.
sim('enginewc_LC')
% Cost: cumulative absolute error plus a penalty on sample-to-sample
% changes in the error (response to noise).
out = sum(abs(err(:,2)));
out = out + sum(abs(err(:,1)));
[nr nc] = size(err);
noi = abs(err(1:end-1,2) - err(2:end,2))*2.5;
out = out + sum(sum(noi));
save eng_dat xin out, xin          % save candidate and cost; echo xin to the console

The output out contains the performance criterion, which penalizes both setpoint error and the response to noise.
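Spelled out, the criterion assembled at the end of eng_opt is

$$J = \sum_t \big( |e_1(t)| + |e_2(t)| \big) \;+\; 2.5 \sum_t \big| e_2(t+1) - e_2(t) \big|,$$

where $e_1$ and $e_2$ denote the first and second columns of the logged err signal. The final term penalizes rapid sample-to-sample changes in the error, which is how the response to noise enters the criterion.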


The MATLAB script main.m below is the calling function for the PSO engine RPM control application. Note that the particles operate only within the set limits for each of the five optimization parameters, as given by the min/max bounds collected row-wise in Alim.

% main.m
% demo of the psovectorized.m function
% psovectorized.m tries to find the minimum of the eng_opt function
% Brian Birge
% Rev 2.0
% 10/30/03
min1 = 0.0;  max1 = 0.5;
min2 = 0.2;  max2 = 0.85;
min3 = 0.2;  max3 = 2.5;
min4 = 0.5;  max4 = 0.98;
min5 = 0.05; max5 = 0.7;
Alim = [min1 max1; min2 max2; min4 max4; min5 max5; min3 max3];
set(gcf,'Position',[636 33 640 295]);
hold on
axis('equal')
psovectorized('eng_opt',5,Alim,0,...
    [1,2000,40,.5,.25,0.25,0.975,0.6,1500,1e-9,50,1])
%psovectorized('eng_opt',5,Alim,0,...
%    [1,2000,40,.5,.25,0.25,0.975,0.6,1500,1e-9,50,1])
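As a quick, purely illustrative sanity check, the objective can also be evaluated directly on a single candidate vector; the numbers below are arbitrary values chosen inside the Alim bounds, not results from this study.

% Hypothetical spot check of the objective (illustrative values only).
xtest = [0.25 0.5 0.75 0.3 1.0];   % one value per optimization parameter, within Alim
J = eng_opt(xtest)                 % runs the Simulink model once and returns the cost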


Results

Figure 3. Engine RPM simulation results.

Figure 4. Simulation results showing the load variation, throttle command in degrees, and RPM error.


Figures 3 and 4 show the simulation results. Figure 3 shows the RPM setpoint tracking performance. Figure 4 shows the engine torque load in Nm, the throttle command in degrees, and the setpoint tracking error. The results show very satisfactory performance.

Conclusions

An approach to obtaining optimized control system designs while avoiding local minima has been presented. The approach uses particle swarm optimization. Although it requires more knowledge of the system than the classical simplex approaches, it has mechanisms for avoiding local minima. Because it searches broadly for the global minimum, it must investigate many more candidate solutions and therefore takes considerably more time to complete its optimization than other optimization methods. Further areas of investigation include improved and more efficient optimization criteria, applications to other types of control systems, and methods of improving the PSO search.

References

1) Birge, B., "PSOt, A Particle Swarm Optimization Toolbox for MATLAB," IEEE Swarm Intelligence Symposium Proceedings, April 24-26, 2003. See also http://www4.ncsu.edu/~bkbirge/PSO/PSOt.exe.
2) Kennedy, J., Eberhart, R., and Shi, Y., Swarm Intelligence, Academic Press, 2001.
3) Bonabeau, E., Dorigo, M., and Theraulaz, G., Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, 1999, ISBN 0-19-513159-2.
4) Pierre, D., Optimization Theory with Applications, Dover, 1986, ISBN 0-486-65205-X.
5) Siddall, J., Optimal Engineering Design: Principles and Applications, Marcel Dekker, 1982, ISBN 0-8247-1633-7.
6) Papadimitriou, C., and Steiglitz, K., Combinatorial Optimization: Algorithms and Complexity, Dover Publications, 1998, ISBN 0-486-40258-4.
7) Marlow, W., Mathematics for Operations Research, Dover Publications, 1993, ISBN 0-486-67723-0.
8) Hillier, F., and Lieberman, G., Operations Research, Holden-Day, 1974, ISBN 0-8162-3856-1.
9) Voss, M., and Feng, X., "ARMA Model Selection Using Particle Swarm Optimization and AIC Criteria," 2002 IFAC 15th Triennial World Congress, Barcelona, Spain.

