Comparison of Signal Processing Inspired Trading Systems
Steven C. Rogers, Member, EE-Pub
Published: February 2, 2005
Abstract
Many powerful Digital Signal Processing (DSP) techniques have been developed to support time series analysis. This discipline lends itself very nicely to the development of trading decision-making. Recently, applications of DSP to trading systems have become more of a focus for researchers. The techniques in this paper have application to high-frequency (intraday) trading as well as long-term trades. A study by Ehlers [1] has proposed some interesting concepts, which will be applied and compared in this paper using MATLAB for implementation. These approaches make extensive use of the Hilbert transform and cycle estimation. They are compared using various stock price data.
Article Information
Field of Study: DSP
Keywords: frequency-based analysis, Hilbert transform, cycle period estimation, trading systems, financial trending
I. INTRODUCTION

Digital signal processing has long been used for time series analysis. Since the advent of more powerful computers, it has increasingly been used in real-time applications such as semi-automated trading systems, where signal processing technologies have been useful for signal detection and signal change detection. The development of clear indicators that aid decision-making is a subject of intense interest for trading systems. One of these areas of research is cycle period estimation for improving trading performance. However, there are obstacles to simplistic estimation of cycle periods, including noisy data, multiple sinusoids, and data window issues. These problems have been the subject of research in signal processing and control applications for decades [1-4]. One approach to frequency estimation involves the Hilbert transform [1-5]. The Hilbert transform is a procedure that transforms ordinary price data into complex values (numbers composed of a real and an imaginary part). The real component is called the inphase and the imaginary component is called the quadrature. Once the inphase and quadrature components are calculated, several approaches are available for developing indicators useful in trading systems. In this paper five different methods developed by Ehlers are presented, and MATLAB script files are given in the appendix. In addition, an observer-based approach is presented and compared.
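As a quick illustration of the concept (not the FIR approximation used in the methods below), MATLAB's hilbert function from the Signal Processing Toolbox produces the analytic signal directly; the price vector here is a made-up example:

price = [22.1 22.4 22.0 21.8 22.3 22.9 23.1 22.7];  % hypothetical closing prices
z = hilbert(price);        % complex analytic signal
inphase = real(z);         % inphase (real) component
quadrature = imag(z);      % quadrature (imaginary) component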
II. HILBERT TRANSFORM CYCLE ESTIMATION TECHNIQUES

One useful approach toward cycle estimation is to use a Hilbert transform [5,6] to generate an inphase (real, Re) component and a quadrature (imaginary, Im) component. With the inphase and quadrature components determined, a homodyne discriminator may be constructed to estimate the cycle period. The homodyne discriminator multiplies the current complex measurement by the complex conjugate of the previous measurement. For a single dominant cycle the complex measurement may be written z(n) = A*exp(j*omega*n), so that

z(n)*conj(z(n-1)) = A^2*exp(j*omega)
Thus, the product yields the signal amplitude squared and the angular frequency omega, from which the cycle period 2*pi/omega is easily obtained. The above equation shows the general concept; the actual calculations are performed using the quadrature and inphase components determined from a Hilbert transform. This is the basis of the five approaches developed by Ehlers [1]: 1) Hilbert transform, 2) trendline, 3) zerolag, 4) MAMA, and 5) optimal predictor. They all have in common the use of the real and imaginary components. The use of the Hilbert transform is explained in references [1] to [4].
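A minimal sketch of this calculation in MATLAB (a paraphrase of the idea rather than Ehlers' exact code) is given below, where I and Q are assumed to hold the smoothed inphase and quadrature series and n is the current bar index:

% Homodyne discriminator: real and imaginary parts of z(n)*conj(z(n-1))
re = I(n)*I(n-1) + Q(n)*Q(n-1);
im = Q(n)*I(n-1) - I(n)*Q(n-1);
omega  = atan2(im, re);     % phase advance per bar, in radians
period = 2*pi/omega;        % estimated cycle period, in bars

In practice the raw period estimate is typically smoothed and limited to a plausible range of cycle lengths before use.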
III. IMPLEMENTATION

MATLAB script for the homodyne discriminator method is given below; it is a portion of the complete MATLAB script given in the appendices. The cycle period estimate is used by all of the approaches, and this code is repeated for each of the trading systems. Once the real and imaginary components of the signal are obtained, the magnitude and phase of the signal may be readily calculated.
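For example (a two-line sketch, where inphase and quad are assumed to hold the inphase and quadrature outputs of the Figure 1 code):

amp   = sqrt(inphase.^2 + quad.^2);   % instantaneous amplitude (signal magnitude)
phase = atan2(quad, inphase);         % instantaneous phase, in radians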
Figure 1 MATLAB code for the Hilbert transform and homodyne discriminator

The price data is first passed through a weighted moving average filter for smoothing (b in the code). The filter is weighted toward more recent data to reduce its lag. The smoothed price is then detrended (bdetr in the code) and smoothed again to remove bias and noise. The real and imaginary components are the final outputs of the code fragment. Although higher precision may be obtained with a higher-order filter, smaller filters introduce less lag. The Hilbert transform coefficients were obtained by matching a desired frequency response (a transfer function magnitude of 0 at 0 Hz and approximately 1 near mid-band). Since the five methods are explained elsewhere, this paper will not repeat the details.

An observer approach is now presented which shows promise as the basis of a trading system. Basic observer design has been in use for decades [6]. A signal may be represented in the linear state-space form x = A*xp + w, where x is the current state vector (which may include measured quantities), A is the state transition matrix, xp is the previous state vector, and w is a process noise or disturbance. If y is the measurement of interest, then y = C*x, where C is the output matrix. Since x is not known exactly (only y is measured), x must be estimated. The following equation represents the estimation equation along with a correction based on the known measurement:
xhat = A*xhatp + K*(z - C*xhatp)

where xhat is the current state estimate, xhatp is the previous state estimate, K is the observer gain, and z is the measurement. In our case z is the current price. In order to obtain as smooth an estimate as possible, the state is assumed to consist of value, rate, and acceleration components. Thus,

A = [1 dt 0.5*dt^2; 0 1 dt; 0 0 1]

where dt is the sampling interval, and C = [1 0 0]. The observer gain K may be calculated in a number of ways; MATLAB code (using pole placement) is included at the end of the appendix. Note that to create a 'slow' signal the price estimate is passed through a low-pass filter.
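A condensed sketch of such an observer in MATLAB follows (a simplified variant of the EST3 routine in the appendix). Here price is assumed to be a vector of daily closing prices, and the discrete pole locations and smoothing constant alph are illustrative assumptions, not the values used in the paper. The gain is computed with place from the Control System Toolbox:

% Value/rate/acceleration observer for a price series (sketch)
dt = 1;                                   % one bar per sample (assumption)
A  = [1 dt 0.5*dt^2; 0 1 dt; 0 0 1];      % state transition matrix
C  = [1 0 0];                             % price is the only measurement
K  = place(A', C', [0.6 0.5 0.4])';       % observer gain by pole placement (illustrative poles)
x     = [price(1); 0; 0];                 % initial state estimate
xslow = price(1);                         % low-pass filtered ('slow') estimate
alph  = 0.5;                              % smoothing constant (assumption)
buy_sell = zeros(size(price));
for k = 2:length(price)
    x     = A*x + K*(price(k) - C*x);     % predict and correct with the current price
    xslow = alph*xslow + (1 - alph)*x(1); % slow signal via first-order low-pass filter
    buy_sell(k) = sign(x(1) - xslow);     % +1 buy, -1 sell
end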
IV. RESULTS

The above-described methods were compared against each other based on return as defined by
R(i) = (P(i) - P(i-1))/P(i-1)

where P(i) is the current price and P(i-1) is the previous price. R(i) values were summed over the buy-signal periods. This approach normalizes each price change for comparison purposes. Daily prices for the stocks given in Table 1, covering a period of approximately 2 years, were chosen for evaluation. Table 1 shows the returns for the 6 approaches; each table value is the sum of R(i) over the bars on which the system signaled a buy. Percent return on investment is obtained by multiplying the table values by 100%.
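A minimal sketch of this scoring, assuming P is the vector of daily prices and buy is a logical vector that is true on the bars where the system holds a buy position (both names are hypothetical):

R = diff(P) ./ P(1:end-1);        % per-bar normalized return, R(i) = (P(i)-P(i-1))/P(i-1)
table_value = sum(R(buy(2:end))); % sum of returns accrued during buy periods
pct_return  = 100*table_value;    % percent return on investment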
Table 1 Return sums for the 6 DSP approaches

The approaches rank consistently as: 1) Observer, 2) MAMA, 3) Zerolag, 4) Trendline, 5) Hilbert, and 6) Opt Predict. Observer is slightly ahead of MAMA, and they both consistently outperform the rest. Zerolag and Trendline are close to each other in performance. The following figures show the buy-sell comparisons for some of the stocks.
V. CONCLUSIONS
A number of alternative trading systems have been developed and compared. Also, a simple observer-based approach has been presented. MATLAB code has been provided, as well as simulation results for the various methods. Of the 6 approaches considered, the observer approach and MAMA appear to be superior. They have been examined with stock price data having different characteristics and have performed well in all cases. More testing and evaluation are necessary before definitive performance conclusions can be drawn; however, these results provide a sufficient basis for further investigation.
Figure 2 Raytheon buy-sell comparisons
Figure 3 Boeing buy-sell comparisons
Figure 4 Exxon buy-sell comparisons
Figure 5 Pepsico buy-sell comparisons
Figure 6 Monster buy-sell comparisons
Figure 7 Continental Airlines buy-sell comparisons
Figure 8 Fifth Third Bancorp buy-sell comparisons
VI. ABOUT THE AUTHOR
Steven C. Rogers is with the Institute for Scientific Research, Fairmont, WV, USA 26554, srogers@isr.us
VII. REFERENCES
[1] J.F. Ehlers, Rocket Science for Traders (John Wiley & Sons, 2001), ISBN 0-471-40567-1.
[2] H. Baher, Analog & Digital Signal Processing (John Wiley & Sons, 1990), ISBN 0-471-92342-7.
[3] R.E. Ziemer & W.H. Tranter, Principles of Communications, 4th Edition (John Wiley & Sons, 1995), ISBN 0-471-12496-6.
[4] K. Shenoi, Digital Signal Processing in Telecommunications (Englewood Cliffs, NJ: Prentice-Hall, 1995), ISBN 0-13-096751-3.
[5] S. Rogers, "Comparison of Different Frequency Detector Algorithms for a Generic Adjustable Notch Filter," American Control Conference, 2003, FM03-1.
[6] A. Gelb, Applied Optimal Estimation (MIT Press, 1974), ISBN 0-262-57048-3.
VIII. APPENDIX

The following MATLAB code is the author's interpretation of code given by Ehlers [1] and is included as a study aid only. The complete MATLAB code for the five trade systems is given in its entirety to show how the real and imaginary components interface.

function yout = MLP(in,N,m)
%
% MLP backpropagation learning for a single hidden layer
% W is the output layer weight vector
% V is the hidden layer weight matrix
%
% With N interior nodes the MLP equations are:
%   O = W*tanh(V*I);
% and the two weight updates (gradient step with momentum) are:
%   W = W - mu*err*tanh(V*I)' + bet*dW;
%   V = V - mu*err*(sech(V*I).^2).*W'*I' + bet*dV;
% N is the number of interior nodes
% m is the number of inputs including the bias signal
persistent X
% N = 30;
% m = 10;
init = in(1);
d = in(2);
% Initialize W & V
if init == 1 | isempty(X)
    X.W = zeros(1,N);
    X.dW = X.W;
    X.V = rand(N,m)/10000;
    X.dV = zeros(size(X.V));
    X.in = [1;d*ones(m-1,1)];
    X.predslow = d;
end
marketposition = 0;
alph = 0.5;
mu = -.013;
bet = .018;
G = tanh(X.V*X.in);
out = X.W*G;
err = d - out;
nextW = X.W - mu*err*G' + bet*X.dW;
sec2h = sech(X.V*X.in);
sec2h = sec2h.*sec2h;
nextV = X.V - mu*err*sec2h.*X.W'*X.in' + bet*X.dV;
X.in(2:end) = [d;X.in(2:end-1)];
X.dW = nextW - X.W;
X.dV = nextV - X.V;
X.W = nextW;
X.V = nextV;
% step ahead prediction
G = tanh(X.V*X.in);
pred = X.W*G;
X.predslow = alph*X.predslow + (1 - alph)*pred;
if pred - X.predslow > 0, marketposition = 1; end
if pred - X.predslow < 0, marketposition = -1; end
yout = [out,pred,marketposition*0.9];

function yout = MLP_recurrent(in,N,m)
%
% Recurrent MLP backpropagation learning for a single hidden layer
% W is the output layer weight vector
% V is the hidden layer weight matrix (the hidden outputs are fed back as inputs)
%
% With N interior nodes the MLP equations are:
%   O = W*tanh(V*I);
% and the two weight updates (gradient step with momentum) are:
%   W = W - mu*err*tanh(V*I)' + bet*dW;
%   V = V - mu*err*(sech(V*I).^2).*W'*I' + bet*dV;
% N is the number of interior nodes
% m is the number of inputs including the bias signal
persistent X
% N = 30;
% m = 10;
init = in(1);
d = in(2);
% Initialize W & V
if init == 1 | isempty(X)
    X.W = zeros(1,N);
    X.dW = X.W;
    X.V = rand(N,m+N)/10000;
    X.dV = zeros(size(X.V));
    X.in = [1;d*ones(m-1,1);zeros(N,1)];
    X.predslow = d;
end
marketposition = 0;
alph = 0.5;
mu = -.017;
bet = .06;
G = tanh(X.V*X.in);
out = X.W*G;
err = d - out;
nextW = X.W - mu*err*G' + bet*X.dW;
sec2h = sech(X.V*X.in);
sec2h = sec2h.*sec2h;
nextV = X.V - mu*err*sec2h.*X.W'*X.in' + bet*X.dV;
X.in(2:end) = [d;X.in(2:m-1);G];
X.dW = nextW - X.W;
X.dV = nextV - X.V;
X.W = nextW;
X.V = nextV;
% step ahead prediction
G = tanh(X.V*X.in);
pred = X.W*G;
X.predslow = alph*X.predslow + (1 - alph)*pred;
if pred - X.predslow > 0, marketposition = 1; end
if pred - X.predslow < 0, marketposition = -1; end
yout = [out,pred,marketposition*0.9];

function out = FIRpred(init,in,N)
%
% Adaptive FIR (normalized LMS) one-step-ahead price predictor
mu = 0.01;
alphsig = 0.8;
alph = 0.5;
persistent X
if init==1 | isempty(X)
    X.coef = ones(1,N)/N;
    X.px = in*ones(N,1);
    X.sigma = 2;
    X.yslow = in;
end
out = X.coef*X.px;
err = in - out;
% update variables
X.sigma = alphsig*X.sigma + (1 - alphsig)*in*in;
mu_e = mu*err/X.sigma;
X.coef = X.coef + mu_e*X.px';
X.px = [in;X.px(1:end-1)];
% prediction
pred = X.coef*X.px;
X.yslow = alph*X.yslow + (1 - alph)*pred;
if pred > X.yslow, marketposition = 1; end
if pred <= X.yslow, marketposition = -1; end
out = [out,X.yslow,marketposition*0.9];

function out = IIRpred(init,in,num,den)
%
% Adaptive IIR one-step-ahead price predictor
% num and den are the numbers of numerator and denominator coefficients
mu = 0.02;
alphsig = 0.8;
alph = 0.5;
persistent X
if init==1 | isempty(X)
    X.cnum = ones(1,num)/(num+den);
    X.cden = ones(1,den)/(num+den);
    X.pnum = in*ones(num,1);
    X.pden = in*ones(den,1);
    X.sigma = 2;
    X.yslow = in;
end
out = X.cnum*X.pnum + X.cden*X.pden;
err = in - out;
% update variables
X.sigma = alphsig*X.sigma + (1 - alphsig)*in*in;
mu_e = mu*err/X.sigma;
X.cnum = X.cnum + mu_e*X.pnum';
X.cden = X.cden + mu_e*X.pden';
X.pnum = [in;X.pnum(1:end-1)];
X.pden = [out;X.pden(1:end-1)];
% prediction
pred = X.cnum*X.pnum + X.cden*X.pden;
X.yslow = alph*X.yslow + (1 - alph)*pred;
if pred > X.yslow, marketposition = 1; end
if pred <= X.yslow, marketposition = -1; end
out = [out,X.yslow,marketposition*0.9];

function out = EST4pred(init,close)
%
% from input data estimate rate, acceleration,
% & jerk components to improve estimates
%
persistent X
if init==1
    Ts = 1/21600;
    C = [1 0 0 0];
    abar = [1 Ts .5*Ts*Ts Ts*Ts*Ts/3; 0 1 Ts .5*Ts*Ts; 0 0 1 Ts; 0 0 0 1];
    [L,prec,msg] = place(abar',C',[exp(-5100*Ts),exp(-2500*Ts),exp(-2350*Ts),exp(-2200*Ts)]);
    abar = abar - L'*C;
    bbar = L';
    X.amat = abar;
    X.bmat = bbar;
    X.xvec = close*C';
    X.xslow = close;
end
buy_sell = 0;
alph = 0.55;
X.xvec = X.amat*X.xvec + X.bmat*close;
X.xslow = alph*X.xslow + (1 - alph)*X.xvec(1);
if X.xvec(1) > X.xslow, buy_sell = 1; end
if X.xvec(1) <= X.xslow, buy_sell = -1; end
% update variables
out = [X.xvec(1),X.xslow,buy_sell*.95];

function out = EST3(init,close)
%
% observer with value, rate, and acceleration states
persistent X
if init==1
    Ts = 1/21600;
    abar = [1 Ts .5*Ts*Ts; 0 1 Ts; 0 0 1];
    [L,prec,msg] = place(abar',[1 0 0]',[exp(-5000*Ts),exp(-2450*Ts),exp(-2400*Ts)]);
    abar = abar - L'*[1 0 0];
    bbar = L';
    X.amat = abar;
    X.bmat = bbar;
    X.xvec = close*[1 0 0]';
    X.xslow = close;
end
buy_sell = 0;
alph = .45;
X.xvec = X.amat*X.xvec + X.bmat*close;
X.xslow = (1 - alph)*X.xslow + alph*X.xvec(1);
if X.xvec(1) > X.xslow, buy_sell = 1; end
if X.xvec(1) < X.xslow, buy_sell = -1; end
% update variables
out = [X.xvec(1),buy_sell*.95,X.xslow];
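The functions above maintain their state in persistent variables and are intended to be called once per bar. A hypothetical driver loop might look like the sketch below; the file name prices.csv and the variable prices are assumptions for illustration, not part of the original code:

prices = csvread('prices.csv');           % hypothetical file of daily closing prices
results = zeros(length(prices),3);
for k = 1:length(prices)
    init = (k == 1);                      % re-initialize the persistent state on the first bar
    results(k,:) = EST3(init, prices(k)); % [estimate, buy/sell flag, slow signal]
end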