Portfolio Optimization Using Particle Swarm Optimization Steven C. Rogers, Member, EE-Pub, Leon Luxemburg, Matt McMahon, Steven Knudsen Published: March 29, 2005
Abstract Portfolio management is an important constrained optimization topic in finance. A typical approach is to estimate the expected return and the variance, or volatility, of a set of equities; from this information an optimal portfolio is determined using constrained optimization techniques. Classical gradient-based optimization methods are subject to being trapped in local extrema and have difficulty finding the global extremum. Evolutionary optimization algorithms such as genetic algorithms, ant algorithms, and particle swarm optimization are designed to search for global extrema within a given range and are better suited to complex systems. In this paper particle swarm optimization is applied to portfolio optimization.
Article Information
Field of Study—Optimization
Keywords—mean-variance optimization, prediction, estimation, particle swarm optimization, constrained optimization
I. INTRODUCTION
There are few things in life regarded as a better synonym for uncertainty than security prices. Too many factors and unpredictable events influence them: the economic situation, political events, company news, the behavior of buyers and sellers, and technical innovations. This lack of predictability has driven investors toward diversification as a means of risk reduction. However, even diversification cannot protect investors under all circumstances. Thus, the issue of finding the optimal distribution of resources over a given set of securities has gained importance. One approach commonly used for portfolio optimization is a simple one-period [1,8], or step-ahead, model. Also called the mean-variance approach [1], it is one of the most commonly used methods for portfolio management. Since it is a discrete, event-driven approach, as opposed to the more complex continuous-time diffusion models derived from the basic Black-Scholes equation, it has become popular for its simplicity and logic. Investment decisions are made at the beginning of an investment period based on the current performance of the equities. The results of the decision are evaluated at the end of the period, and no modifications are made before then. Thus, the one-period model is a static investment model that is valid for a single data period only. The objective of portfolio optimization is to find a proper balance between risk and return, that is, between portfolio variance (risk) and portfolio mean (return). The optimization mechanism manipulates the fraction of the resources devoted to each equity within the portfolio; each fraction ranges from 0 to 1. The equity alternatives include cash and may be as diverse as desired. An equity not chosen for one period may still be chosen for another.
When selecting a portfolio an investor intends to maximize his return. If this were the only criterion, it would inevitably lead to investing all resources in the single equity with the highest mean return. However, that equity may have a high variance, which carries a high risk. To accommodate risk, the optimization cost function incorporates a risk term representing the variance of each equity within the portfolio. Therefore, at the beginning of each investment period we search for a portfolio vector that balances return against variance, that is, one that maximizes a combination of return and (penalized) variance. Portfolio optimization may be accomplished using evolutionary algorithms such as genetic algorithms, ant algorithms, and particle swarm optimization. Particle swarm optimization (PSO) is a form of evolutionary computation that was inspired initially by flocking birds. It is stochastic in nature, as are genetic algorithms. Unlike genetic algorithms, with their dying and mutating populations, PSO swarms have a fixed number of particles that fly through the problem space. The search for extrema is made possible by each particle remembering its own past best position and the global best position. PSO has been shown to share the advantages of genetic algorithms (global search without becoming locked in local extrema) while avoiding their larger computational burden. PSO is based on the concept that complex behaviors may be derived from a few simple rules, and is explained further in section III.
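The risk-reduction effect of diversification described above can be illustrated numerically. The following Python sketch uses invented return series for two hypothetical equities with similar means; the variance figures are for illustration only, not from the paper's data.

```python
import numpy as np

# Hypothetical daily returns for two equities with similar means but
# independent fluctuations (invented data, for illustration only).
rng = np.random.default_rng(0)
a = 0.001 + 0.02 * rng.standard_normal(1000)
b = 0.001 + 0.02 * rng.standard_normal(1000)

# All resources in one equity vs. an equal-weight mix of both.
single = a
mix = 0.5 * a + 0.5 * b

# For uncorrelated assets, Var(mix) = 0.25*Var(a) + 0.25*Var(b),
# roughly half the single-equity variance, at the same mean return.
print(np.var(single), np.var(mix))
```

The mixed portfolio keeps the mean return while substantially lowering the variance, which is the balance the cost function in section II formalizes.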
II. ONE STEP AHEAD PORTFOLIO OPTIMIZATION
The one step ahead approach [1,8] to portfolio optimization may be represented in standard optimization form by a cost function and a set of constraints. In matrix form the cost function [1,8] is:

J = W'R - W'QW

where J is the scalar cost function, W is the vector of portfolio fractions of the resources expended on each equity, R is the return vector, and Q is the covariance matrix representing the variance or volatility of the equities within the portfolio. W' is the transpose of W. Each return may be computed as:

Ri = (Pi - Pi-1) / Pi-1
where Pi is the current price of each equity and Pi-1 is the previous price. Note that the 1st term of J represents the investment return estimate and the 2nd term represents the risk factor in the form of equity variance. Because R has a price difference in its numerator it is a relatively noisy signal, so a smoothed estimate may be used. Also, since we are forecasting future behavior, a predicted value may be inserted here instead of the current value. The Q matrix is estimated by computing the cross-correlation matrix of the portfolio, i.e., Q = RR'. If there are n elements in the vector R then Q is an n×n matrix. As with the R vector, the Q matrix may be smoothed by a low-pass filter, or Q can be the covariance matrix estimated by a Kalman filter. Therefore, the optimization problem may be written:

maximize J = W'R - W'QW
subject to sum(Wi) = 1 and Wi >= 0 for all i
The 1st condition ensures that the entire portfolio is accounted for and included in the computation. The 2nd condition ensures that no negative resource allocations, which are physically impossible, occur. The problem specified above may be solved by different approaches, including evolutionary algorithms such as genetic algorithms, ant algorithms, or particle swarm optimization. Classical gradient-based algorithms may become trapped by local extrema. Genetic algorithms are less subject to local extrema, as they are designed to explore the entire search space without resorting to derivative calculations. There are several alternative means of selecting R. In one approach we may assume that the previous value is the same as the predicted value. Another approach is to design an adaptive predictor [3] using linear or nonlinear adaptive filter structures. In this study we compare both methods; the predictive method is implemented with an adaptive filter.
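The return and cost computations above can be sketched in Python; the function names and the toy numbers are illustrative, not from the paper's code.

```python
import numpy as np

def returns_from_prices(prices):
    """One-period returns Ri = (Pi - Pi-1) / Pi-1 for a price series."""
    prices = np.asarray(prices, dtype=float)
    return (prices[1:] - prices[:-1]) / prices[:-1]

def mean_variance_cost(w, r, q):
    """J = W'R - W'QW: expected return minus the variance (risk) term."""
    w = np.asarray(w, dtype=float)
    return w @ r - w @ q @ w

# Toy example: two equities, equal weights (sum to 1, nonnegative).
r = np.array([0.01, 0.02])   # return estimates
q = np.outer(r, r)           # Q = RR' cross-correlation estimate
w = np.array([0.5, 0.5])
print(mean_variance_cost(w, r, q))
```

Maximizing this J over the feasible weight vectors is exactly the constrained problem stated above; the PSO solver in section III searches the weight space directly, without gradients.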
The architecture of such predictive filters [6] is shown in Figure 1.
Figure 1 Architecture of an adaptive predictive filter

The Q matrix may be passed through a simple low-pass filter as shown below:

Qs,t = a*Qs,t-1 + (1 - a)*Qt

where a is a scalar filter pole in the range [0, 1]; in practice the pole is ordinarily in the reduced range [0.75, 0.98]. This is a simple approach to solving the problem. The Kalman filter may also be used to estimate the covariance matrix. The Kalman filter equations in Matlab code are presented below. In the code, the covariance matrix is represented by P; P and the estimated state vector x are output at each time period.

A. Matlab code for Kalman filter

function [P,x] = KF(zm,H,A,x,R,Q,P,B,u)
% K is the Kalman filter gain
% zm is the measurement vector
% H is the output measurement matrix
% A is the state transition matrix
% x is the state vector
% B is the control input matrix and u is the control input
% R is the measurement noise weighting matrix
% Q is the state noise weighting matrix
% P is the covariance matrix
x = A*x + B*u;               % project the state ahead
K = P*H'*inv(H*P*H' + R);    % compute the Kalman gain
x = x + K*(zm - H*x);        % update the state estimate
P = P - K*H*P;               % update the error covariance
P = A*P*A' + Q;              % predict the covariance matrix
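The low-pass smoothing of Q is a first-order recursion and can be sketched in a few lines of Python; the pole value used here is an assumption within the range stated above.

```python
import numpy as np

def smooth_q(q_prev, q_new, a=0.9):
    """First-order low-pass filter Qs,t = a*Qs,t-1 + (1-a)*Qt,
    with the scalar pole a in [0, 1] (typically 0.75-0.98)."""
    return a * q_prev + (1.0 - a) * q_new

# Feed the same noisy-free Q repeatedly: the smoothed estimate
# converges toward it at a rate set by the pole a.
q_s = np.zeros((2, 2))
q_t = np.array([[4.0, 1.0], [1.0, 2.0]])
for _ in range(200):
    q_s = smooth_q(q_s, q_t, a=0.9)
print(q_s)
```

A larger pole gives heavier smoothing (slower tracking, less noise); a smaller pole tracks the raw cross-correlation estimate more closely.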
The next section will explain particle swarm optimization.
III. PARTICLE SWARM OPTIMIZATION
The basic principles of particle swarm optimization (PSO) have a very simple explanation. A set of moving particles, known as the swarm, is initially given random positions inside the bounded search space. The particles mimic swarms in nature, such as flocks of birds or insect swarms. Each particle has the following features:
- it has an initial and a current position and velocity
- it knows its own position and the objective function value for that position
- it remembers its own best previous position and objective function value
- it knows the swarm's best previous position (global best) and objective function value
- it moves through the search space based on very simple rules
Note that the behavior of a given particle is a compromise between three possible choices:
- follow its own way
- go toward its best previous position
- go toward the global best previous position
The equations below capture these features [2-5]; each particle trajectory is governed by:

vt+1 = vt + c1 r1 (pi,t - xt) + c2 r2 (pg,t - xt)
xt+1 = xt + vt+1

(r1 and r2 are uniform random numbers drawn on [0, 1] at each step)
where vt is the velocity at time step t, xt is the position at time step t, pi,t is the particle's best previous position, pg,t is the global best previous position, and the ci are confidence coefficients. As the equations show, starting from a random initial position and velocity each particle is attracted both to its own previous best and to the global best, and is forced to 'fly' through the bounded search space until the search is ended. PSO can be used for optimization problems with relatively simple, straightforward constraints and does not rely on or require derivatives or gradients; nevertheless, each particle has a defined position and velocity at each time step. Our experience indicates that PSO quickly converges to the neighborhood of a global extremum while avoiding local extrema, but converges slowly to the extremum itself; a gradient-based optimization technique should therefore be used from that point, and most successful strategies combine PSO with a gradient search. Birge [2] has developed PSOT, a particle swarm optimization toolbox for Matlab, which was used for this study. It may be downloaded from the web.
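The update rules above can be condensed into a minimal PSO in Python. This is an illustrative sketch, not the PSOT toolbox: the inertia weight, coefficients, bounds, and the quadratic test function are all assumptions chosen for the demonstration.

```python
import numpy as np

def pso_minimize(f, dim, lo, hi, n=30, iters=200, c1=1.5, c2=1.5, w=0.72, seed=0):
    """Minimal particle swarm: velocities pull each particle toward its
    own best position (pbest) and the swarm's global best (g)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))        # random initial positions
    v = np.zeros((n, dim))                   # zero initial velocities
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)      # objective at each position
    g = pbest[np.argmin(pval)].copy()        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)           # keep particles in bounds
        val = np.apply_along_axis(f, 1, x)
        better = val < pval                  # update personal bests
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()    # update global best
    return g, pval.min()

# Known minimum at z = 0.3 in every coordinate.
g, best = pso_minimize(lambda z: np.sum((z - 0.3) ** 2), dim=3, lo=0.0, hi=1.0)
```

Note the sketch includes an inertia weight w on the old velocity, a common refinement of the basic update equations given above; it damps the particle motion so the swarm settles rather than oscillating indefinitely.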
IV. RESULTS
Simulations were based on a portfolio of 9 stocks plus cash at a 1% annualized return, giving 10 equities to choose among at each update. Transaction costs and the effects of transaction delays were ignored in this study for two reasons: 1) the difficulty of estimating the costs, and 2) the purpose is to compare algorithms, not complete strategies. Realistic transaction costs and delays will be the subject of future studies. The choices are based on the optimization criteria explained above. The stock data cover a two-year period of end-of-day closing prices, and optimizations were performed at the end of each day. A buy-and-hold strategy produces the following results for each of the portfolio equities (the acronyms are the exchange symbols used to retrieve the data from the internet [7]).
Thus, if an investor had bought equity 7 (mnst) and held it throughout the data period, he would have a ~124.78% return on investment. Four inputs to the PSO optimizer were simulated: 1) raw R with filtered Q, 2) raw R with Kalman filter derived Q, 3) predicted R with filtered Q, and 4) predicted R with Kalman filter derived Q. The plots for each input condition include the optimized portfolio allocation and the estimated R compared with the actual. A six step ahead horizon, in which the optimization looks six time periods ahead using the current one step ahead values, was also tested, plotted, and compared in the attached plots. The results of the PSO optimization for the one step ahead approach are given in the table below.
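The buy-and-hold and optimized-portfolio comparisons rest on simple compounding of the portfolio value from one period to the next. A Python sketch of that bookkeeping, with invented weights and returns:

```python
import numpy as np

def compound_value(v0, weights, returns):
    """Roll a portfolio forward: V_{t+1} = V_t * (1 + W_t' R_t),
    where W_t is the allocation chosen at the start of period t."""
    v = v0
    for w_t, r_t in zip(weights, returns):
        v *= 1.0 + np.dot(w_t, r_t)
    return v

# Two periods, two equities, reallocated each period (hand-picked here;
# in the study the weights come from the PSO optimization).
weights = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
returns = [np.array([0.10, -0.05]), np.array([-0.02, 0.04])]
print(compound_value(100.0, weights, returns))  # 100 * 1.10 * 1.04
```

Buy-and-hold is the special case in which the weight vector never changes; the optimized strategies re-solve for the weights at every update.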
The results for the single step ahead optimization were obtained using the following parameters:
- 13 iterations or epochs
- 40 particles
- 0.5 maximum particle velocity
- 0.25 acceleration constant 1 (c1)
- 0.25 acceleration constant 2 (c2)
The results for the six step ahead optimization were obtained using the following parameters:
- 30 iterations or epochs
- 30 particles
- 0.5 maximum particle velocity
- 0.25 acceleration constant 1 (c1)
- 0.25 acceleration constant 2 (c2)
A basic weakness of the portfolio optimization statement above is that many of the recommended allocations are either 1 or 0.
It is more desirable for a risk-averse investor to have a more distributed portfolio. We may add a constraint that improves portfolio diversity and avoids concentration in a single equity: W'*W = Wk, where W is the portfolio vector and Wk is a target concentration value. Note that W'*W is the sum of the squared allocation fractions, a measure of how concentrated the portfolio is: it equals 1 when all resources are in a single equity and 1/n when the portfolio is spread uniformly over n equities. Constraining it to an intermediate value between these extremes forces the whole portfolio toward a more uniform spread (according to the weighting associated with this constraint). This is left as a future research topic. The Matlab calling routine for the PSOT routine (psovectorized) is given below.

A. Particle swarm optimization matlab code

function wal = pso_port(alloc,roi,P)
% main.m
% Brian Birge
% Rev 2.0
% 10/30/03
global roi_pf P_pf
roi_pf = roi;
P_pf = P;
save covar_mat alloc
save port_params alloc
[nr len] = size(roi);
Alim = [zeros(len,1),ones(len,1)];
set(gcf,'Position',[636 33 640 295]);
hold on
axis('equal')
[wal,walval] = psovectorized('pso_port_opt',len,Alim,1,...
    [2,13,40,.5,.25,0.25,0.975,0.6,1500,1e-90,50,1]);
The optimization cost function is:

function out = pso_port_opt(wal)
global roi_pf P_pf
roi = [roi_pf];
out = roi*wal' - wal*P_pf*wal';
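A common way to keep the swarm's candidate weight vectors feasible is to fold the constraints into the objective as penalty terms. The Python sketch below mirrors the cost above in minimization form; the penalty weight mu is an assumption for illustration and is not taken from the PSOT setup.

```python
import numpy as np

def penalized_cost(w, r, q, mu=100.0):
    """Negated mean-variance objective with quadratic penalties for the
    sum-to-one and nonnegativity constraints (minimization form)."""
    w = np.asarray(w, dtype=float)
    j = w @ r - w @ q @ w                    # J = W'R - W'QW (to maximize)
    pen = (np.sum(w) - 1.0) ** 2             # allocations must sum to 1
    pen += np.sum(np.minimum(w, 0.0) ** 2)   # allocations must be >= 0
    return -j + mu * pen                     # minimize -J plus penalties

r = np.array([0.01, 0.02])
q = np.outer(r, r)
feasible = penalized_cost(np.array([0.5, 0.5]), r, q)    # no penalty
infeasible = penalized_cost(np.array([0.9, 0.3]), r, q)  # weights sum to 1.2
```

Any minimizer that drifts outside the feasible region pays the penalty, so the swarm is steered back toward weight vectors that satisfy both conditions.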
V. CONCLUSIONS AND FUTURE RESEARCH
A few approaches to obtaining inputs for the one step ahead portfolio optimization problem have been compared, and particle swarm optimization has been demonstrated to be suitable for this simplified problem. Four input suites representing the returns (raw calculations and adaptive-filter predictions) and the risks (filtered covariance and Kalman filter derived covariance) were presented. In addition, a six step ahead procedure was developed for comparison. In all cases the optimized solution improved on the buy-and-hold procedure. The one step ahead approach gave better results than the six step ahead procedure; the raw return gave better results than the adaptive prediction, and the Kalman filter covariance estimate was better than the smoothed covariance for the one step ahead process. The results for the six step ahead process were inconclusive regarding prediction and the use of the Kalman filter. It is important to understand that each particle is randomly initialized at the beginning of a run; therefore, the number of iterations (epochs) must be large enough to prevent the random initialization from skewing the end results. Future research will include larger portfolios of more realistic size, along with constraints that enforce diversification. Other forms of evolutionary algorithms (genetic algorithms and ant algorithms) will be investigated. Improvements in particle swarm optimization based on stability theory will be applied to speed of convergence and accuracy, and stochastic optimal control applied to portfolio optimization will be studied.
VI. ABOUT THE AUTHORS
Dr. Steven C. Rogers is with the Institute for Scientific Research, Fairmont, WV, USA 26554, srogers@isr.us; Dr. Leon Luxemburg is with the Institute for Scientific Research, Fairmont, WV, USA 26554, lluxemburg@isr.us; Mr. Matt McMahon is with the Institute for Scientific Research, Fairmont, WV, USA 26554, mmcmahon@isr.us; Mr. Steven Knudsen is with the Institute for Scientific Research, Fairmont, WV, USA 26554, sknudsen@isr.us.
VII. REFERENCES
[1] Korn, R., Option Pricing and Portfolio Optimization, American Mathematical Society, 2001, ISBN 0-8218-2123-7.
[2] Birge, B., 'PSOT, A Particle Swarm Optimization Toolbox for Matlab,' IEEE Swarm Intelligence Symposium Proceedings, April 24-26, 2003.
[3] Eberhart, R., et al., Computational Intelligence PC Tools, Academic Press, Inc., 2003.
[4] Kennedy, J., et al., 'Particle Swarm Optimization,' Proc. IEEE Int'l Conf. on Neural Networks, 1995.
[5] Kennedy, J., et al., Swarm Intelligence, Academic Press, Inc., 2001.
[6] Rogers, S., et al., Adaptive Predictive Trading Systems, http://www.ee-pub.com, ISSN 1554-4982, February 22, 2005.
[7] http://finance.yahoo.com/q/hp?a=07&b=2&c=2002&d=00&e=8&f=2005&g=d&s=ba
[8] Markowitz, H., 'Portfolio Selection,' Journal of Finance 7, 1952, pp. 77-91.
Figure 2 Raw roi with filtered covariance allocations – one step ahead
Figure 3 Raw roi with Kalman filter covariance allocations – one step ahead
Figure 4 Predicted roi with filtered covariance allocations – one step ahead
Figure 5 Predicted roi with Kalman filter covariance allocations – one step ahead
Figure 6 Raw roi with filtered covariance portfolio values – one step ahead
Figure 7 Raw roi with Kalman filter covariance portfolio values – one step ahead
Figure 8 Predicted roi with filtered covariance portfolio values – one step ahead
Figure 9 Predicted roi with Kalman filter covariance portfolio values – one step ahead
Figure 10 Raw roi with filtered covariance allocations – six step ahead
Figure 11 Raw roi with Kalman filter covariance allocations – six step ahead
Figure 12 Predicted roi with filtered covariance allocations – six step ahead
Figure 13 Predicted roi with Kalman filter covariance allocations – six step ahead
Figure 14 Raw roi with filtered covariance portfolio values – six step ahead
Figure 15 Raw roi with Kalman filter covariance portfolio values – six step ahead
Figure 16 Predicted roi with filtered covariance portfolio values – six step ahead
Figure 17 Predicted roi with Kalman filter covariance portfolio values – six step ahead