QMS Advisors
Capital Market Assumptions (CMA) Methodologies
Executive summary

This paper reviews our framework for deriving return, volatility and correlation expectations for sovereign and corporate bonds, equities, alternative investments (hedge funds, private equity, commodities and real estate), and foreign exchange. QMS Advisors' strategic asset allocation process covers 45 markets across seven asset classes, for which our team provides long-term total return forecasts together with volatility and correlation estimates. Our approach consists in obtaining a set of model-derived expectations and then refining those forecasts with numerous qualitative inputs, a process that relies on the contributions of a range of industry experts including economists, portfolio managers and product specialists. Our rigorous quantitative and qualitative review processes ensure that our assumptions rest on sound economic and financial rationales. We further strive to use comparable methodologies and common return drivers across assets to achieve consistency across our expectations (i.e. universal underlying macroeconomic assumptions):

- Consistency with economic theory and practice: a wide array of economic and market factors is combined to derive robust return expectations for each asset class.
- Consistency across business cycles: macroeconomic factors are chosen for their ability to explain returns over multiple economic cycles.
- Consistency across asset classes: expected returns reflect a congruent pricing of risk, measured by the exposure of each asset class to economic and financial factors.
- Capture of dynamic market features: the models reflect the interaction between economic and financial signals and the variation in asset classes' potential returns and risks over time.

We implicitly assume that, as suggested by empirical evidence, most of the key variables used in our models will converge over the long run. Bond yields, GDP growth and dividend growth are therefore expected to converge over longer periods.
For most asset classes we use clearly specified multi-linear regression models to forecast returns, while relying on traditional models for equities and foreign exchange (a dividend discount model and a fair value model, respectively). The object of this exercise is to arrive at five-year return and volatility forecasts for each asset, which are then used as inputs to the final optimization process. To an extent, forecasting returns for a five-
Q.M.S Advisors | Av. De la Gare, 1 CH-1003 | Tel: 078 922 08 77 | e-mail: info@qmsadv.com | website: www.qmsadv.com |
year period is less error-prone than forecasting over a much shorter period, and it also lends itself to a greater reliance on longer-term fundamentals as drivers of future performance. It also implies that incorporating a mean-reverting element into the return forecasts is far less controversial than it would be over a shorter time horizon. Additionally, all our models are supported by cross-checking procedures that aim to rationalize the initial forecast outputs.

To a certain extent, return forecasts should have relatively little impact on forecasts of volatility and covariance. Risk, or volatility, is a measure of the uncertainty around the return rather than a forecast of the return itself. In the shorter term, underlying risk and covariance should be more stable than expected returns.

With regard to volatility forecasts, we compute both historical volatilities and Ornstein-Uhlenbeck estimates for all assets, correcting for auto-correlation where necessary, as suggested in the econometric literature. Historical volatilities are taken as the best proxy for five-year average volatility forecasts for all alternative investments and equity indices. We employ the Ornstein-Uhlenbeck process to reflect the mean reversion of volatility over time. We have found this process to produce more realistic out-of-sample forecasting results than other volatility models such as variants of ARCH or GARCH models. We use Ornstein-Uhlenbeck volatility forecasts for our fixed income indices when volatility clustering leads us to expect a slow return to the long-term average volatility.
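As an illustration, the Ornstein-Uhlenbeck forecast can be sketched via the AR(1) discretization of the process. This is a minimal single-series version (the function names and least-squares calibration are our own illustrative choices, not the exact production procedure): it fits the speed of mean reversion and the long-run volatility level, then decays the current volatility toward that level over the forecast horizon.

```python
import numpy as np

def ou_fit(vol, dt=1.0):
    """Fit an Ornstein-Uhlenbeck process to a volatility series via its
    AR(1) discretization: v[t+1] = a + b*v[t] + eps, with b = 1 - kappa*dt."""
    x, y = vol[:-1], vol[1:]
    b, a = np.polyfit(x, y, 1)          # slope, intercept
    kappa = (1.0 - b) / dt              # speed of mean reversion
    theta = a / (kappa * dt)            # long-run mean volatility
    return kappa, theta

def ou_forecast(v0, kappa, theta, horizon, dt=1.0):
    """Expected volatility after `horizon` steps, decaying toward theta."""
    return theta + (v0 - theta) * (1.0 - kappa * dt) ** horizon
```

Over a five-year horizon the forecast converges to the long-run level theta, which is precisely the slow return to average volatility described above.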
The correlation matrix is estimated using all available data from the time series via the Stambaugh algorithm, combined with the Ledoit-Wolf shrinkage methodology to reduce estimation error in the correlation estimates while taking into account the different correlation patterns between major asset groups such as bonds and equities. We combine a prior matrix with the Stambaugh matrix by calculating a shrinkage factor. The result is a well-behaved correlation matrix with reduced estimation error. The variance-covariance matrix is in turn obtained by combining the correlation matrix with the estimated variances.

Long-term macroeconomic forecasts

Long-term economic forecasts anchored to potential growth are derived from a production function linking the input and output variables of an economy. The short end of the long-term forecasts reflects macroeconomic forecasts for the next 18 months based on a demand-side approach. In the medium-to-longer term, macroeconomic variables such as Gross Domestic Product (GDP) are important for the development of financial variables. Macroeconomic variables can surprise to the up- or downside even in the short term, causing an "announcement" effect on financial markets. In order to identify the factors that drive longer-term return prospects, however, the trend path of a country's economic growth has to be determined. In the longer term, a country's GDP should, on average, grow at its potential growth rate; in this steady-state environment unemployment rates are roughly consistent with price stability, i.e. no substantial inflationary or deflationary developments emerge. An increase in production above this level would result in an increase in inflation; a drop below this level would, ceteris paribus, reduce inflation or even cause deflation.
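The correlation shrinkage step described above can be sketched as follows. This is a deliberately minimal version: it shrinks the plain sample correlation matrix toward a constant-correlation prior with a fixed shrinkage intensity, whereas the actual process uses the Stambaugh estimate and derives the intensity from the data.

```python
import numpy as np

def shrink_correlation(returns, delta=0.3):
    """Shrink the sample correlation matrix toward a constant-correlation
    prior (Ledoit-Wolf style). `delta` is an assumed shrinkage intensity;
    the production methodology estimates it rather than fixing it."""
    sample = np.corrcoef(returns, rowvar=False)
    n = sample.shape[0]
    # Prior: ones on the diagonal, average off-diagonal correlation elsewhere.
    off = sample[~np.eye(n, dtype=bool)]
    prior = np.full((n, n), off.mean())
    np.fill_diagonal(prior, 1.0)
    return delta * prior + (1.0 - delta) * sample
```

The convex combination keeps unit diagonals and pulls noisy off-diagonal estimates toward a common level, which is what produces the "well-behaved" matrix with reduced estimation error.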
The productive potential, which we refer to as potential growth of an economy, depends on the development of the capital stock, the labor force and technological progress. Long-term
developments in the labor force are mainly determined by demographic and socioeconomic trends such as birth rates and migration flows. Technological progress, however, is a very complex phenomenon and therefore very difficult to model. Many different ways to measure potential output have been proposed, ranging from simple statistical measures (e.g. a Hodrick-Prescott filter to estimate trends) to more structural economic models. The results of simple statistical measures and univariate techniques depend to some extent on non-economic decisions such as sample length, estimation parameters and out-of-sample restrictions.

In its growth estimation exercise, QMS Advisors has opted for a production function approach which breaks growth down into three components: capital stock, labor supply and factor productivity. This breakdown enables us to identify the changing contribution of each factor over time. In QMS Advisors' econometric model, the production technology for the total economy is assumed to be a constant-returns-to-scale Cobb-Douglas production function with capital stock, labor supply and factor productivity as input factors. Factor productivity enters the Cobb-Douglas function as labor efficiency, i.e. factor productivity multiplies the labor supply directly, but not the capital stock (Harrod neutrality). The power parameters of the Cobb-Douglas function are calibrated by averaging the respective factor-earning shares over the sample period. The production function can be expressed by the following formula:

Y = (LE × Pop × PR × (1 − NAIRU) × H)^α × C^(1−α)

where LE stands for labor efficiency, Pop for population, PR for the labor force participation rate, NAIRU for the non-accelerating inflation rate of unemployment, H for hours worked and C for the capital stock. In contrast to previous estimates, the OECD now uses a total economy approach for measuring potential.
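The production function above translates directly into code. In this sketch the labor share α is an illustrative placeholder; as noted, the paper calibrates it from factor-earning shares over the sample period.

```python
def potential_output(le, pop, pr, nairu, hours, capital, alpha=0.65):
    """Potential output from the Harrod-neutral Cobb-Douglas function
    Y = (LE * Pop * PR * (1 - NAIRU) * H)^alpha * C^(1 - alpha).
    alpha = 0.65 is an illustrative labor share, not the calibrated value."""
    labor = le * pop * pr * (1.0 - nairu) * hours   # efficiency-weighted labor input
    return labor ** alpha * capital ** (1.0 - alpha)
```

Constant returns to scale is easy to verify: doubling every input doubles output, while doubling only the capital stock raises output by the factor 2^(1−α).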
Previously, the focus was on the business sector, but the prevalent use of chain-linking in national accounts makes it increasingly difficult to calculate business-sector figures. This approach also ensures better comparability with potential growth estimates from other sources, which mainly use a total economy approach as well. In the OECD models, the labor factor is derived from the non-accelerating inflation rate of unemployment (NAIRU), the working-age population, the participation rate and the annual number of hours worked per employee. The NAIRU is estimated with a reduced-form Phillips curve approach, which combines a structural equation relating inflation and unemployment with statistical techniques that constrain its path. While this method has the advantage of providing more timely and robust estimates of the NAIRU than purely structural models, it gives little insight into its determinants and thus makes forecasting more difficult.

The unobserved factor productivity is expressed by the detrended residuals of the Cobb-Douglas function after solving the function for factor productivity. This adds an element of arbitrariness to the estimates, since labor efficiency is not modeled and its future behavior is thus hard to predict. The capital stock is obtained as the sum of yesterday's capital stock after depreciation and today's investment activity. This capital services approach combines both a scrapping rate and an efficiency profile for the capital stock. It places more weight on assets that depreciate more quickly, since their marginal product should be higher to justify their cost.
Information and computing technology equipment, whose productive value diminishes very rapidly, is therefore better captured than in national accounts measures of the capital stock. Since this measure is calculated directly, it is more comparable than national accounts figures, where different calculation methods are used. Capital services measures of the capital stock grow more rapidly than traditional estimates, thereby reducing the importance of labor efficiency and thus making the whole estimate more tractable.

Using the original values of these series can result in undesirable volatility in potential growth, especially owing to variation in the NAIRU, the capital stock and the working-age population. All series are therefore de-trended with an HP filter. For the potential growth projections, the NAIRU and the trends of the other factors are extended exogenously using additional assumptions on a judgmental basis. For example, trend labor participation is calibrated using a method that takes account of underlying demographic changes and cohort effects.

The potential growth estimates serve as the anchor for our long-term forecasts. We therefore assume that over a period of five years the economies reach their potential growth. Given an assumed optimal policy response on the monetary and fiscal side, the economies concerned should be able to grow along their potential growth path. While this explains where the economies are heading in the medium-to-longer term, it does not tell us much about the path to potential growth. In order to determine this adjustment path, we link our long-term growth assumptions with our short-to-medium-term GDP projections, i.e. we combine the structural developments with the cyclical overlay. There are three ways to measure GDP: via the production side, via the expenditure side and via the income side.
The production side sums up the contributions of the various sectors of the economy, while the expenditure side commonly consists of private and government consumption, investment and net exports. The income side shows the distribution of national income between profits and wages. For our purposes, we have decided to focus on the expenditure side, since data are more readily available and the factors that influence the components are more easily identified and, most importantly, interpreted. Interpretation of the data is crucial, as it also allows scenario analysis to be applied. The following identity describes the demand side of the economy:

Y = C + I + G + NX

where Y stands for output, C for private consumption, I for investment, G for government consumption and NX for net exports. The relevance of these components varies from country to country: while private consumption accounts for 70% of GDP in the USA, it is only slightly above 50% in Germany.

Assuming a permanent income hypothesis, private consumption depends mainly on disposable income, which predominantly consists of labor income, and to some extent on private wealth, which comprises both housing wealth and portfolio investments. For all countries concerned, the influence of labor income is far more important than that of the wealth variables. Next to the income/wealth channel, there is also a credit channel of private consumption: the willingness and ability of banks to grant credit to households, and the ability of households to pay its price, e.g. interest, are therefore crucial. Investment depends on the level of capacity utilization, the cost of funds and expectations about future demand. Government expenditure consists mainly of the wages of public
employees and of public investment, while transfers are merely redistribution and do not qualify as consumption. Net exports consist of two factors, imports and exports. Since trade patterns cannot be modified very quickly, net exports are mainly determined in the short term by domestic and foreign demand. A combined analysis of these various impact factors allows us to make predictions for the pattern of real GDP growth in the short-to-medium term. Available data have a "leading" character, i.e. allow a forecast of GDP, only to a limited extent. We therefore conduct such a detailed analysis only for a future period of two years and assume a smooth adjustment process toward the previously defined values of potential growth.

This highlights three inherent key risks of the forecasts presented in this study. First, assumptions about the future trend in key variables, for example productivity growth, may be incorrect. Second, some exogenous shocks cannot be forecast (an oil shock, a financial crisis). Third, the assumption of a return to potential growth goes hand in hand with the assumption of an optimal policy response: if growth weakens in the near term, central banks will likely lower policy rates, and this should bolster growth in the medium term and allow a return to the potential growth path.

We assume that central banks' credibility will keep inflation expectations in check and that central bank targets will be reached in the longer run. Since we also assume that economies are in steady state in the longer run, this implies that inflation converges to the central bank's target in our longer-term projections. In the short term, however, there can be substantial deviations: if the economy is running above potential, inflation tends to rise, while the opposite is the case if resources are underutilized. Volatile food and energy prices are also key determinants of inflation variation over the shorter horizon.
Long-term interest rate forecasts

We employ a fair value model based on long-term macroeconomic factors, complemented by underlying macroeconomic growth, inflation and interest rate policy scenarios, as the anchor point of our interest rate forecasts. In forecasting long-term government bond yields, we employ a framework in which yields are linked to long-term macroeconomic factors. QMS specifies a system of five equations estimating the government bond yields of the four most important government bond markets (USA, Japan, EU + UK and Switzerland). The methodology considers not only domestic factors but also incorporates international interdependencies. We include monetary policy rates as explanatory variables in our system of equations; monetary policy rates reach neutral levels, i.e. neither expansionary nor restrictive, as the economy moves toward steady state in the underlying macroeconomic scenario.

In the model, the fair value of government bond yields is given by a regression on the year-on-year growth rate in global industrial production, the year-on-year inflation rate of the respective country, a weighted sum of year-on-year inflation in the remaining countries, the real effective exchange rate, and domestic and foreign monetary policy rates. Global inflation and global industrial production are incorporated in each equation because bond markets function as global markets, with investors substituting one for another. We also control for multicollinearity in the equations.
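A single equation of the fair-value system can be sketched as a plain OLS of yields on a matrix of macro factors. This is a deliberate simplification (the paper estimates the four markets jointly as a system, and the factor list here is only schematic):

```python
import numpy as np

def fair_value_yield(factors, yields):
    """OLS fair-value equation for one government bond market.
    `factors` is a (T, K) matrix of macro variables (e.g. global IP growth,
    domestic and foreign inflation, real effective FX rate, policy rates);
    returns the coefficients (intercept first) and the fitted fair values."""
    X = np.column_stack([np.ones(len(yields)), factors])
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta, X @ beta
```

The gap between the observed yield and the fitted fair value is then the quantity expected to close as the economy moves toward steady state.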
Foreign exchange (FX)

We base our long-term FX forecasts on long-term equilibrium estimates derived from our models. Although long-term equilibrium estimates can change over time, they are our preferred means of forecasting exchange rates, as the relationships between money market rates and currencies do not appear stable over time. QMS' model is our starting point in forecasting spot rates for the major currencies (i.e. EUR/USD, GBP/USD, USD/CHF, USD/JPY). Our model is rooted in traditional purchasing power parity (PPP) theory, but is augmented by several structural and cyclical factors, which help explain much of the observed deviation from basic PPP in the floating exchange rate period.

Basic PPP theory suggests that an exchange rate (expressed as the foreign currency price of a unit of home currency) should adjust to offset differences in home and foreign price levels. Theoretically, if significant differences between exchange-rate-adjusted home and foreign goods prices were to persist, goods market arbitrage should occur. This arbitrage would eliminate the price differences, driving the exchange rate to mean-revert toward "equilibrium." While the long-run evidence in favor of PPP is increasingly compelling, the observed adjustment back toward theoretical "equilibrium" is, in practice, painfully slow. To address this observation, our framework includes variables covering relative interest rates, productivity and external balances in addition to the basic PPP measure.

Model outline

First, a divergence in trend productivity relative to other countries may lead the exchange rate to deviate from PPP, owing to the Balassa-Samuelson effect and/or strong capital inflows attracted by productivity advantages. Second, the model allows for the effect of interest rate differentials.
An interest rate differential in favor of the home country may be associated with a stronger currency than implied by PPP, as investor capital may be attracted by the higher returns of the home country, leading to an appreciating exchange rate and ultimately equalizing long-term expected returns. Third, our model explicitly includes the external accounts of each country.

The model uses a panel dynamic OLS framework, which generates robust long-run estimates for many countries simultaneously and thus consistently. Our dynamic OLS estimator also helps address potential serial correlation in the residuals introduced by the presence of non-stationary variables in the data set. The long-term equilibrium exchange rates are estimated with the following equation:

r_it = β0_i + β1(p_it − p*_t) + β2(R_it − R*_t) + β3(gdppc_it − gdppc*_t) + β4(BAL_it − BAL*_t) + β5(NII_it − NII*_t) + e_it, where e_it ~ N(0, σ²)

where r is USD per unit of home currency, p is the home price level as measured by the GDP deflator, R is the 10-year bond yield, gdppc is GDP per capita (a proxy for productivity), BAL is the goods and services trade balance as a percentage of GDP, and NII is net investment income as a percentage of GDP. Lower-case letters indicate that the data are in logarithmic terms, and * indicates the "foreign" variable, which in each case here is restricted to the US.
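A static cut of the panel estimator can be sketched with country fixed effects for the β0_i terms. Note the simplifications: the production model is *dynamic* OLS (it adds leads and lags of the differenced regressors to absorb serial correlation), whereas this illustration is a plain pooled regression.

```python
import numpy as np

def fx_equilibrium(X, y, country_ids):
    """Pooled OLS with country fixed effects for the long-run FX equation
    r_it = b0_i + b'(x_it - x*_t). `X` holds the relative-to-US regressors,
    `country_ids` labels each row's country. A static sketch only; the
    paper's estimator is panel *dynamic* OLS."""
    countries = np.unique(country_ids)
    dummies = (country_ids[:, None] == countries[None, :]).astype(float)
    Z = np.hstack([dummies, X])                       # fixed effects + slopes
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[:len(countries)], beta[len(countries):]
```

The common slopes are what enforce cross-country consistency: every currency's equilibrium responds identically to relative prices, yields, productivity and external balances.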
Fixed income indices

Non-credit-related indices (government bond indices) are modeled as a function of a time trend, GDP and CPI indices using an error-correction framework. Credit-related indices are modeled as a function of a time trend, pure credit and government bond total return indices, as well as equity volatility.

Fixed income total return (TR) indices reflect both accruing interest and price returns. While the first component has a deterministic nature and introduces a time trend into the modeling of fixed income TR indices, the price return component is stochastic. The stochastic component can be modeled by assuming a given stochastic data-generating process. The disadvantage of this procedure lies in the fact that forecasts cannot be directly linked to the macro-forecasts (GDP, inflation and interest rates) provided in the macro-forecasting section, and a separate judgmental overlay becomes necessary. An alternative modeling avenue consists in explicitly using the likely long-term (cointegrating) relation existing between the TR indices and certain exogenous variables pre-defined by economic theory. GDP and CPI indices would, for example, be natural candidates for a cointegrating relation with government bond indices. The clear advantage lies in the ability to link the fixed income TR forecasts directly to the macro-forecasts, and we have also found the quality of our regressions to improve with respect to our first set of models.

Government bond indices

Government bond indices (without a credit component) are modeled in an error-correction framework as a function of a deterministic time trend and the GDP and CPI indices related to the respective TR index through a cointegrating relation:

Δy_t = c + β(y_{t−1} − α1·trend − α2·GDP_{t−1} − α3·CPI_{t−1}) + Σ_i γ_i Δy_{t−i}

where y_t represents the total return index at time t (monthly periodicity), trend is a deterministic time trend, and GDP and CPI are the real GDP and consumer price indices.
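The error-correction equation above can be estimated with a two-step, Engle-Granger-style procedure: first the cointegrating regression of the index level on the trend, GDP and CPI, then a regression of the index change on the lagged equilibrium error and one lagged change. This sketch (one lag only, single-equation OLS) is our illustrative reading of the equation, not the exact production estimator.

```python
import numpy as np

def fit_ecm(y, gdp, cpi):
    """Two-step error-correction sketch for a government bond TR index.
    Step 1: cointegrating regression y_t = a0 + a1*trend + a2*GDP + a3*CPI.
    Step 2: dy_t = c + beta*resid_{t-1} + gamma*dy_{t-1}."""
    n = len(y)
    trend = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), trend, gdp, cpi])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef                       # deviation from the long-run relation
    dy = np.diff(y)
    Z = np.column_stack([np.ones(n - 2), resid[1:-1], dy[:-1]])
    ecm, *_ = np.linalg.lstsq(Z, dy[1:], rcond=None)
    return coef, ecm                           # long-run and short-run coefficients
```

A negative coefficient on the lagged residual is the error-correction property: when the index sits above its macro-implied level, subsequent returns are pulled back toward it.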
Only significant variables are kept in the equations. Variations on this base equation consist in modifications to the inclusion of the time trend and are directed at optimizing the quality of the equation and forecasts.

Credit-related indices

We proceed in three steps to forecast the total returns of credit-related indices (split into rate and credit return components). First, we compute constant-maturity US and EUR government bond total return indices for different maturities. These represent the rate-return components that will be included in the credit-related regressions. We also compute credit return components for the major rating classes. Specifically, we derive forecasts for "residual" credit performance from Moody's long-term (average, AAA, AA, A, BAA) bond yield indices by stripping the duration component off the indices and using a time-series approach to forecast the residuals. Second, we model the target indices as a function of our computed rate and credit return components with the aid of univariate OLS regressions, which additionally include a time trend (again to capture deterministically accruing interest) and equity volatility, which we find to be an important determinant of the price returns of credit-related indices. Third, we forecast the target indices by combining steps 1 and 2.
Other indices

For other indices such as inflation-linked and municipal bond indices we utilize our ILB total return forecasts. These are computed as the sum of inflation expectations and real yield-based total returns. For convertible bond indices, we ran multi-linear regressions against market factors and found a robust statistical relationship between convertible bond returns and equity returns as well as US investment grade credit returns. We thus estimate convertible bond returns based on their sensitivities to equity and US corporate bond returns. All forecasts are submitted to a qualitative consistency check across asset classes and against the macroeconomic team's long-term interest rate forecasts.

Equities

We use a two-stage dividend discount model that also allows us to construct scenarios for different states of the world. The revision to our existing methodology brings the construction of the equity CMAs together with our short- and long-term index targets. We have revised our approach to forecasting stock returns and now use the dividend discount model. This has some important advantages, as it is not only intuitive, but also applicable to a wide range of markets and investment styles. It also allows us to incorporate variability in market conditions, earnings growth and earnings levels into our forecasts.

We compute equity returns through a three-step process: we first employ a two-stage dividend discount model to calculate a market-implied equity risk premium (ERP). We then use this estimate with our own earnings forecasts to calculate the projected total return over the next five years. Finally, we use scenario analysis to "stress test" the level of the total return for different states of the world. The concept of the ERP is central to our methodology. Simply put, it is the additional compensation equity holders expect to receive for bearing more risk relative to risk-free assets such as government bonds.
Many valuation models use historical estimates of the ERP; we instead base our forecasts on both top-down and bottom-up valuation metrics to derive a forward-looking ERP. The advantage of such an approach is that it allows the ERP to fluctuate as global macroeconomic conditions change. Moreover, it allows us to gauge the impact of changes in market conditions (approximated by both our Risk Appetite and Equity Market Volatility indices) on the ERP by using regression analysis. The dividend discount model assumes that the value of a stock equals the present value of expected dividends. As a first step, we estimate dividends using market consensus earnings and the historical payout ratio. The two-stage model is represented by the following equations:
Index level = Σ_{n=1}^{3} Dividend_n / (1 + r)^n + Terminal Value / (1 + r)^3

Terminal Value = Dividend_3 × (1 + g) / (r − g)

Cost of equity (r) = rf + β × ERP
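Because the index value is monotonically decreasing in the ERP, the market-implied ERP can be backed out of the two-stage model above by simple root-finding. This sketch (the function name and the bisection bounds are our own illustrative choices) solves for the ERP that makes the model value match the observed index level:

```python
def implied_erp(index_level, dividends, rf, g, beta=1.0):
    """Back out the market-implied ERP from the two-stage DDM by bisection.
    `dividends` holds the three explicit-stage forecasts; the terminal
    value capitalizes Dividend_3 growing at g in perpetuity."""
    def model_value(erp):
        r = rf + beta * erp
        pv = sum(d / (1 + r) ** (n + 1) for n, d in enumerate(dividends))
        tv = dividends[-1] * (1 + g) / (r - g)
        return pv + tv / (1 + r) ** 3
    lo, hi = g - rf + 1e-6, 0.5            # keep r > g so the terminal value is finite
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if model_value(mid) > index_level:
            lo = mid                        # value too high -> discount rate too low
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same function, evaluated forward with stressed ERP and dividend inputs, supports the scenario analysis described in the three-step process.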
"Index level" represents the value of the index on the valuation date, rf is the 10-year risk-free rate, g is the growth rate in perpetuity and β is the systematic risk of the index (assumed to equal 1). The growth rate in perpetuity is estimated using estimates of long-term real GDP growth and the expected inflation rate. The underlying assumption is that dividends will remain constant as a proportion of GDP in the long term. Using the equations above, we solve for the market-implied ERP. We estimate the 12-month forward fair value of the index using the market-implied ERP and QMS earnings forecasts. Finally, using a regression model, we estimate the ERP for +/− 1 standard deviation changes in our risk appetite and volatility indices (note that these two factors tend to move in opposite directions). We combine the new ERP estimates with optimistic/pessimistic earnings scenarios, for which we assume a growth rate of +/− 2.5% per annum relative to the base case during the first three forecast years.

Private equity and hedge funds

We employ a multiple linear regression model to forecast expected returns for private equity, and we have expanded the range of factors considered in our modeling of hedge fund returns. As the benchmark for private equity, we use a weighted composite of the Cambridge Private Equity (70%) and Venture Capital (30%) indices. The addition of venture capital translates into higher return volatility for the composite index, but does not materially influence our return forecast. For hedge funds, we use the HFR hedge fund indices as benchmarks to model hedge fund returns. In both alternative investment classes, a set of factors tends to drive or explain returns; to capture these, we use multiple linear regression models with lagged returns to forecast returns. This methodology also accommodates the fact that these alternative investments tend to be positively correlated with current and lagged stock returns.
The general econometric model setup can be described by:

r_t = β_0 + Σ_i Σ_{k=1}^{K} β_{k,i} x_{k,t−i} + e_t, where e_t ~ N(0, σ²)
where r_t is the return of the corresponding index, the x_{k,t−i} are the K exogenous explanatory variables (at lag i) and the β_{k,i} are the respective coefficients. The error term e_t is normally distributed.

Both the Cambridge Private Equity and Venture Capital indices are quarterly series based on net returns (net of fees and expenses) of a large and representative sample of US buyout and venture capital funds. The HFRI Weighted Composite is an equally-weighted index which accounts for over 2,000 funds listed in the HFR database. It is widely considered one of the better performance benchmarks in the hedge fund industry.

Private equity

In general, the business activity of a private equity fund lies in investing in selected companies with the intention of influencing investment and operating decisions. Target companies are often restructured and resold through IPOs or directly to a new investor.
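The general lagged regression above can be sketched as a single OLS on stacked current and lagged factors. This minimal illustration (one lag, names of our own choosing) is the common template behind both the private equity and hedge fund models described below:

```python
import numpy as np

def lagged_factor_model(r, factors, n_lags=1):
    """OLS of alternative-asset returns on current and lagged market factors,
    r_t = b0 + sum_i sum_k b[k,i] * x[k, t-i] + e_t.
    `factors` is (T, K); returns the intercept and a (n_lags+1, K) array
    whose row i holds the coefficients at lag i."""
    T, K = factors.shape
    rows = [np.concatenate([factors[t - i] for i in range(n_lags + 1)])
            for t in range(n_lags, T)]
    X = np.column_stack([np.ones(T - n_lags), np.array(rows)])
    beta, *_ = np.linalg.lstsq(X, r[n_lags:], rcond=None)
    return beta[0], beta[1:].reshape(n_lags + 1, K)
```

Forecasts then follow by combining the estimated sensitivities with the five-year forecasts for the corresponding market factors, plus a separate (conservative) alpha assumption.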
Besides the value extracted through restructuring, the returns of private equity funds are influenced by the timing of the initial investment and of the divestment; returns are therefore usually positively correlated with current equity prices and negatively correlated with equity market returns over the previous 3–5 years. Accordingly, we incorporate these facts as explanatory variables in our private equity regression model. We use two different models. One-quarter-lagged total returns of the S&P 500 index are included in both versions, while the second one also includes the 5-year-lagged credit spread between the yield-to-maturity (YTM) on US high yield bonds and the YTM on US AAA corporate bonds. This additional variable aims to capture the influence of financing costs and the timing of the initial investment. We have made conservative assumptions about the level of alpha in the private equity industry going forward, reflecting our anticipation of lower top- and bottom-line growth than in the past 15 to 20 years.

Hedge funds

We perform a 24-month rolling regression of monthly hedge fund returns, using the HFR indices as benchmarks, against standard market factors (equities, government bonds, the excess return of corporate bonds over government bonds, commodities), which is a widespread approach in empirical studies. We estimate average market sensitivities of hedge fund returns across the cycle and combine the coefficient values with our 5-year forecasts for the corresponding market factors (equities, bonds, commodities, etc.). Our approach integrates the structural decline in alpha witnessed in the industry over the past decade, yet we believe that market inefficiencies can still be profitably exploited going forward. We nevertheless make a conservative estimate of the amount of available alpha over the next five years to obtain our total return forecast.

Real estate

Our forecasts for the real estate indices are based on error-correction models.
The underlying economic model reflects both the rental and the capital market side of the direct real estate market. The objective is to form a replicable and robust forecasting methodology for expected returns of real estate equity, fund and direct market indices. To guarantee consistency across the estimates, we use the same model-based forecasting approach for all real estate indices; however, we account for the diversity of the property markets and of the investment vehicles by allowing the set of explanatory variables to differ across the indices. For real estate equities we use the regional sub-indices of the GPR 250 Property Securities Index. The GPR 250 Index consists of the 250 most liquid property companies worldwide and uses only the tradable market capitalization of these companies as index weights. The constituent property companies operate in a variety of branches of the real estate universe, from senior housing to warehousing, though most remain focused on the classic business strategy of owning and renting office space or residential properties. US direct real estate market performance is measured by the NCREIF Property Index. Capturing more than 6,000 institutionally owned properties, the NCREIF index is the predominant benchmark for US commercial real estate returns. Yet, as it is an appraisal-based index, the NCREIF series tends to lag the underlying direct market performance slightly.
We calculate our expected total returns on direct real estate indices by looking at both net operating income (e.g. rental income) and prospective capital value growth. The former mainly results from current demand/supply imbalances in the rental market, whereas capital value growth is strongly driven by the attractiveness of direct real estate as an investment destination. Our error correction approach takes into account the stylized fact that expected rental growth and potential capital value growth codetermine each other, thereby enhancing the consistency of our real estate forecasts. Moreover, by linking the long-run performance of real estate investments to the development of the underlying economy, the methodology facilitates scenario analyses based on different macroeconomic forecasts. In our error correction models, we first capture the long-run relationship between real estate returns and economic fundamentals with a cointegration equation using macroeconomic indicators as explanatory variables. The exact set of independent variables depends on the specific index, but qualitatively does not differ much across the series. Nominal GDP is used as a proxy for the strength of the demand side in the rental markets. To model the investment demand for real estate in capital markets, we take long-term government bond interest rates and LIBOR rates, as rising capital market yields should reduce growth in property prices. To account for the effect of inflation, we utilize CPIs as explanatory variables. In a second step, we model the total returns' short-term dynamics with an error correction equation, using the change in total returns as the dependent variable and the deviation of the current return from the long-run cointegration return as one of the explanatory variables.
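A minimal sketch of this two-step estimation, in the spirit of an Engle-Granger procedure: a single simulated macro driver stands in for the actual fundamentals, and the data are purely illustrative.

```python
import numpy as np

def fit_ecm(returns, macro):
    """Two-step error correction estimation (sketch).

    Step 1: regress the cumulative total return level on macro fundamentals
            (long-run cointegration equation).
    Step 2: regress the change in returns on the lagged deviation from the
            long-run relation (error correction equation).
    """
    T = len(returns)
    level = np.cumsum(returns)                       # total return index level
    X1 = np.column_stack([np.ones(T), macro])
    b1, *_ = np.linalg.lstsq(X1, level, rcond=None)  # cointegration fit
    deviation = level - X1 @ b1                      # gap vs long-run level
    dy = np.diff(level)                              # short-run changes
    X2 = np.column_stack([np.ones(T - 1), deviation[:-1]])
    b2, *_ = np.linalg.lstsq(X2, dy, rcond=None)     # error correction fit
    return b1, b2                                    # b2[1] < 0 => mean reversion

rng = np.random.default_rng(1)
macro = np.cumsum(rng.normal(0.01, 0.02, 200))       # simulated nominal GDP (log)
level = 1.5 * macro + rng.normal(0, 0.01, 200)       # cointegrated return index
returns = np.diff(level, prepend=0.0)
b1, b2 = fit_ecm(returns, macro)
```

A negative error-correction coefficient (b2[1]) is what produces the downward adjustment described next: when recent performance sits above the cointegration level, the model pulls the forecast back toward it.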
Therefore, if recent performance was above the estimated cointegration return, our model incorporates a downward adjustment of the total return towards the estimated long-run level in the next period. Adjustments of the forecasts based on qualitative judgments are made in a final step.

Commodities

We forecast commodity index returns using a multiple linear regression model. In addition to the econometric model, we apply a scoring model as a consistency check and for scenario analysis purposes. The objective is to set up a systematic, robust and transparent methodology to derive expected total returns for commodities that are consistent with our projected macroeconomic scenario. We utilize the UBS-CMC Indices as benchmarks: a diversified array of constant maturity commodity indices that offer exposure either to broad commodity markets or to specific sectors (i.e. energy, industrial metals, precious metals, agriculture and livestock). The UBS-CMC indices offer minimal tracking error to underlying commodity prices (spot returns), while minimizing the typical negative roll yield associated with traditional commodity indices by rebalancing the related contracts on a daily basis. We derive our total return estimates using a multiple linear regression model:

tr_t = β_0 + ∑_{k=1}^{n} β_k x_{k,t} + e_t,  where e_t ~ N(0, σ²)
where tr_t is the total return of the UBS-CMCI, x_{k,t} is the k-th explanatory variable and β_k are the respective coefficients; the error term e_t is normally distributed. An econometric model needs to capture all three sources of commodity return (spot returns, roll yield, and return on collateral) in order to produce reliable forecasts. QMS' model employs global industrial production, composite inflation indices, long-term yields and global oil inventories as explanatory variables. Global industrial production and composite inflation indices are used to derive expected spot price movements, as spot returns exhibit significant sensitivities to economic growth rates and to deviations from long-term growth trends. Although the UBS-CMC Indices are designed to neutralize the roll yield associated with commodity contracts, we nonetheless use commodity inventories to capture the expected roll yield. The term structure of commodity futures markets, and therefore the roll yield, largely depends on the economics of storage, for which inventories are a decent proxy: in an environment of falling inventories, commodity markets are usually in backwardation and generate positive roll yields. Additionally, long-term yields are included to capture the return on collateral. The dependent variable, global industrial production and inflation pressure are modeled in terms of year-on-year changes; oil inventories are modeled as deviations from their 5-year average; and long-term yields (YTM) are taken in their original form. Our factors are consistent with economic theory and practice, and their estimated coefficients are all significant, have the "appropriate" sign and are fairly stable over time. The regression has an R-squared of roughly 0.7. The Durbin-Watson statistic is slightly above 1, which indicates the presence of some form of autocorrelation. Nevertheless, we refrain from adding autoregressive terms.
Since the purpose of the model is forecasting, we keep it as parsimonious as possible. Additionally, we use ex-ante forecasts for the explanatory variables as inputs to the model, so that the model forecast is derived directly from the macroeconomic inputs. This approach, although lacking robustness, ensures that the derived expected returns are consistent with our macroeconomic scenario.
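The variable transformations and the regression itself can be sketched as follows. All series are simulated and the coefficient values are arbitrary, chosen only to illustrate the mechanics of the feature construction and the OLS fit.

```python
import numpy as np

def build_features(ip, cpi, oil_inv, ltyield):
    """Transform raw monthly series as described in the text.

    ip, cpi  : year-on-year changes
    oil_inv  : deviation from the trailing 5-year (60-month) average
    ltyield  : used in its original form (YTM level)
    """
    yoy = lambda s: s[12:] / s[:-12] - 1.0           # year-on-year change
    inv_dev = np.array([oil_inv[t] - oil_inv[t - 60:t].mean()
                        for t in range(60, len(oil_inv))])
    n = min(len(yoy(ip)), len(yoy(cpi)), len(inv_dev), len(ltyield) - 60)
    return np.column_stack([yoy(ip)[-n:], yoy(cpi)[-n:],
                            inv_dev[-n:], ltyield[-n:]])

rng = np.random.default_rng(2)
T = 240
ip = 100 * np.exp(np.cumsum(rng.normal(0.002, 0.005, T)))   # industrial production
cpi = 100 * np.exp(np.cumsum(rng.normal(0.002, 0.002, T)))  # composite price index
oil_inv = 50 + np.cumsum(rng.normal(0, 1, T))               # global oil inventories
ltyield = 0.03 + 0.01 * rng.standard_normal(T)              # long-term yields (YTM)

X = build_features(ip, cpi, oil_inv, ltyield)
tr = X @ np.array([2.0, 3.0, -0.01, 0.5]) + rng.normal(0, 0.01, len(X))
X1 = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X1, tr, rcond=None)   # fitted regression coefficients
```

Plugging ex-ante forecasts of the four explanatory series through the same transformations then yields the model's total return forecast.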
Volatility forecasting

We compute historical volatilities and Ornstein-Uhlenbeck model estimates for all assets.

Introduction

1. Definition of volatility: as there are different ways to compute historical volatility, we refer to volatility as the standard deviation of monthly returns of an asset class. All monthly volatilities are annualized.
2. Autocorrelation of returns: when asset returns exhibit autocorrelation, i.e. are statistically dependent on past returns, the volatility as defined above tends to underestimate the true return volatility.
3. Volatility clustering: historically, volatility across financial asset classes tends to cluster and to mean revert over time.

A simple approach to long-term forecasting of volatilities would hence consist in taking long-term historical averages as an approximation of the unconditional mean volatility. In some cases, however, this can lead to an underestimation of volatilities due to the clustering and high autocorrelation of volatility in some asset classes. In those cases, using models that properly capture the dynamics of volatility over time induces lower forecasting errors. Among the various modeling avenues tested, including different variants of GARCH, ARCH, T-ARCH and Ornstein-Uhlenbeck processes, the Ornstein-Uhlenbeck model was found to produce the most convincing out-of-sample forecasting results. In the CMA, we compute both the long-term historical mean corrected for autocorrelation and the Ornstein-Uhlenbeck model estimates for all assets, and apply a judgmental overlay for final forecasts.

The Ornstein-Uhlenbeck process

To incorporate the clustering and mean-reverting features, we model volatility patterns using a mean-reverting stochastic process. In the short run, we allow the volatility of an asset to fluctuate randomly up and down (e.g. in response to shocks such as profit warnings), although in the longer run we expect the volatility to be drawn back to its historical mean.
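Before turning to the O-U equation, the simple historical measures in points 1 and 2 above can be made concrete; the monthly returns here are simulated.

```python
import numpy as np

def annualized_volatility(monthly_returns):
    """Annualized standard deviation of monthly returns (point 1 above)."""
    return np.std(monthly_returns, ddof=1) * np.sqrt(12.0)

def autocorr_lag1(monthly_returns):
    """First-order autocorrelation of returns; a large positive value signals
    that the simple volatility above understates true risk (point 2 above)."""
    r = np.asarray(monthly_returns)
    r0, r1 = r[:-1] - r.mean(), r[1:] - r.mean()
    return np.sum(r0 * r1) / np.sum((r - r.mean()) ** 2)

rng = np.random.default_rng(3)
r = rng.normal(0.005, 0.03, 360)        # simulated i.i.d. monthly returns
vol = annualized_volatility(r)          # close to 0.03 * sqrt(12) by construction
rho = autocorr_lag1(r)                  # close to zero for i.i.d. returns
```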
Mathematically, we capture the process with the Ornstein-Uhlenbeck (O-U) mean-reverting stochastic differential equation:

dx_t = η (x_m − x_t) dt + σ √dt ε_t,  where ε_t ~ N(0, 1)

where η denotes the speed of reversion, x_m the mean value to which the process returns in the long run, and σ the volatility of the diffusion term. Notice that the mean reversion component is governed by the distance between the current value x_t (i.e. current volatility) and the long-term mean x_m, as well as by the mean reversion rate η. If, for example, current volatility is above the mean reversion level, then the mean reversion component will be negative and volatility is thus more likely to fall than to rise in the next period. This process results in a volatility pattern that drifts towards the mean reversion level, at a speed determined by the mean reversion rate. In discrete time, the O-U equation corresponds to a first-order autoregressive AR(1) process, making it possible to estimate the parameters of the continuous time process from discrete time data. Our forecasted volatility path will then be based on the continuous time equation, but the
process parameters will be determined by a regression analysis of the discrete process using the historical time series.

The forecasting procedure

The forecasting procedure goes as follows. First, generate a series of historical volatilities for each asset class. Second, compute historical volatilities adjusted for return autocorrelation, which biases the volatility measurement downward; this is essential for alternative asset classes. Third, estimate the O-U process parameters from the historical volatilities. Then calculate forecasts using the estimated underlying O-U process with current volatility as a starting point. We average our predicted volatilities to obtain an average for the next five years.

Correlation matrix

Computing a covariance/correlation matrix is fraught with several statistical problems, and in this section we outline the key adjustments we make in computing a correlation matrix for the 45 asset classes.

De-smoothing of returns

An appraisal valuation approach is used to measure the performance of several illiquid asset classes such as direct real estate, private equity and, to an extent, hedge funds. Appraisal-based performance measurement artificially smoothes the return pattern of a time series, which leads to autocorrelated returns and an understatement of the true return volatility. We address this by de-smoothing the returns for private equity, hedge funds and direct real estate with the Fisher-Geltner method, which is based on an AR(1) process, to remove the autocorrelation of returns and thereby estimate the "true" volatility of the asset class.

Stambaugh's methodology

The first problem arises from having to compute the correlation matrix from time series of varying length. The simple approach (truncated estimation) would consist in computing the correlation matrix based only on the common history of all assets. This would lead to severe information losses and a conditioning of the matrix by the characteristics of the short-history sample, thus resulting in elevated estimation errors of the parameters. The Stambaugh methodology combines the information available in longer histories with shorter time series by generating estimates of the missing data in a recursive way.

Shrinkage method

After estimating the covariance/correlation matrix, our next step aims at minimizing the estimation errors contained in the matrix. Standard statistical theory assumes that the real-world outcomes we observe in a time series constitute only a sample of an unobservable data-generating process we try to estimate. Since we do not know the true underlying data-generating process and can produce only noisy estimates of the correct volatilities and correlations, the optimization output is usually biased: an ex-ante optimal portfolio construction can turn out to be quite inefficient with the benefit of hindsight. Additionally,
outliers and extreme values in the covariance matrix can lead to the construction of inefficient portfolios. Though extreme correlations can truly reflect the dynamics between two asset classes, they often originate from bad data or sample bias. On the one hand, an asset with a low correlation could be assigned a disproportionate weight; on the other, high correlations could displace other asset classes from the portfolio. The shrinkage method represents a common way of reducing estimation error: all matrix values are "shrunk" toward central values through a linear combination of the sample value and a target value. We choose a shrinkage factor δ that reflects the degree of uncertainty around our parameters:

Σ = δF + (1 − δ)S

According to the above formula, the new correlation matrix Σ is a combination of F, the Stambaugh correlation matrix, with S, the prior matrix. S is the target matrix used to shrink the correlations toward more central values. To compute the target matrix S, we define five broad asset groups that exhibit similar statistical behaviors (Equities, Bonds, Cash, Real Estate, and Commodities). We compute an equally weighted average correlation within each asset group and, in a similar way, average cross-group correlations between the five asset groups. Consequently, the prior matrix S is composed of 25 blocks. These average values are then slightly adjusted, based on qualitative judgment, to better reflect future expectations of co-movements between these asset groups. We then use a Ledoit/Wolf algorithm to compute the shrinkage factor and apply it to the two matrices F and S as described in the equation above.

A brief summary

We compute a correlation matrix, using all the available history of the longer time series, with the Stambaugh algorithm. We then apply the shrinkage method to reduce estimation error in the correlations while taking into account the different correlation patterns between major asset groups such as bonds and equities: we combine a prior matrix with the Stambaugh matrix via an estimated shrinkage factor. The result is a well-behaved correlation matrix with reduced estimation error.
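A stripped-down sketch of the shrinkage combination Σ = δF + (1 − δ)S on a toy three-asset example. The matrices and the fixed δ = 0.5 are illustrative only; in practice the factor is computed with the Ledoit/Wolf algorithm.

```python
import numpy as np

def shrink_correlation(F, S, delta):
    """Linear shrinkage of the estimated correlation matrix F toward the
    prior (target) matrix S: Sigma = delta * F + (1 - delta) * S."""
    Sigma = delta * F + (1.0 - delta) * S
    np.fill_diagonal(Sigma, 1.0)   # preserve the unit diagonal of a correlation matrix
    return Sigma

# Toy example: a noisy estimate F with extreme values, and a smoother block prior S
F = np.array([[ 1.00, 0.95, -0.40],
              [ 0.95, 1.00,  0.10],
              [-0.40, 0.10,  1.00]])
S = np.array([[ 1.00, 0.60,  0.00],
              [ 0.60, 1.00,  0.00],
              [ 0.00, 0.00,  1.00]])
Sigma = shrink_correlation(F, S, delta=0.5)   # extreme entries pulled toward the prior
```

The effect is visible on the extreme entries: 0.95 is pulled down toward the 0.60 block average and −0.40 is pulled up toward 0.00, which is precisely the dampening of outliers motivated above.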