Aenorm 66


This edition:

Exploiting Patterns in the S&P 500 and DJI Indices: How to Beat the Market

vol. 17, dec. '09

And: An Introduction in Prospect Theory, Interview with James Ramsey, Longevity Risk Hedging for Pension Funds




Colophon

Chief editor: Annelies Langelaar
Editorial Board: Annelies Langelaar
Editorial Staff: Erik Beckers, Daniëlla Brals, Lennart Dek, Winnie van Dijk, Chen Yeh, Ron Stoop
Design: United Creations © 2009
Lay-out: Taek Bijman
Cover design: Michael Groen
Circulation: 2,000

A free subscription can be obtained at www.aenorm.eu.

Advertisers: Aon, DNB, Fortis, KPMG, SNS Reaal, Towers Perrin, Watson Wyatt, Zanders. Information about advertising can be obtained from Daan de Bruin at info@vsae.nl.

Insertion of an article does not mean that the opinion of the board of the VSAE, the board of Kraket or the editorial staff is expressed. Nothing from this magazine may be duplicated without permission of the VSAE or Kraket. No rights can be derived from the content of this magazine.

ISSN 1568-2188

Editorial staff addresses:
VSAE
Roetersstraat 11, C6.06
1018 WB Amsterdam
tel. 020-5254134

Bureaucracy

by: Lennart Dek

Like any people, the Dutch have their peculiarities. Besides carrying baskets of tulips, walking around on wooden shoes and eating loads of cheese, we have a predilection for bureaucracy. For instance, the Dutch government is planning to abolish its road taxes and start charging drivers by the kilometre. From 20-a-lot onwards (Dutch governmental projects do not have a great reputation when it comes to making deadlines), all 9 million Dutch drivers will receive a monthly bill. The result: 108 million fully specified invoices per year.

Unfortunately, universities form no exception to this Dutch paperwork fetish, as anyone who has ever tried his hand at the OER (Education and Examination Regulation) will know. However, the OER pales into insignificance compared to the circus that took place at the Faculty of Economics last month. What happened? The visitatiecommissie (assessment committee) paid a visit. In Holland every education program has to renew its accreditation every six years. What appears to be a formality for an established program such as ours is in reality no sinecure. A preparation of several months preceded this visit, which itself lasted only two days. The visit had a major influence on everyday university life: program directors and policy advisors sat nervously together, classes were rescheduled and caterers were hired.

As a member of the Opleidingscommissie (education committee) I was one of the lucky many who got to meet the committee. In order to prepare me for the meeting, which would last no more than half an hour, I received 500 pages worth of reports and SWOT analyses, accompanied by the request to read them carefully. As if there was nothing else demanding my attention, just at a time when the notion of full-time student suddenly seemed to have gained a meaning after years of sounding so ridiculous. However, my self-pity was instantly forgotten when I thought of the poor person who had written all these reports. He could have earned a Nobel prize if he had devoted all these hours to his research instead of compiling another success-rate graph.

Fortunately, the programs proved to comply with all rules and regulations, so the faculty can continue to educate students for at least another six years. Of course it is a good idea to assess yourself every now and then, and of course the government ought to monitor the level of the heavily subsidised university programs. Still, it seems that we have taken these procedures a bit too far. Ultimately, an assessment should serve the education and not the other way around.

As always, Aenorm too involves a lot of paperwork. However, the amount of work involved in composing the magazine is nothing compared to the numerous hours of research preceding the publication. This Aenorm is not only special because of its several nice contributions; it is also the last edition on which current chief editor Annelies and former chief editor Erik co-operate. I would like to thank both for their devotion to Aenorm.

Kraket
De Boelelaan 1105
1081 HV Amsterdam
tel. 020-5986015

www.aenorm.eu

© 2009 VSAE/ AENORM



Exploiting Patterns in the S&P 500 and DJI Indices: How to Beat the Market

by: Marco Folpmers

Whether stock prices follow a random walk has been the central question of the finance discipline. The question is relevant because the random walk assumption is inherent in many valuation formulas and because an unpredictable path is impossible for smart traders to exploit.

Interview with Boswijk by: Annelies Langelaar Peter Boswijk is a professor of Financial Econometrics at the University of Amsterdam. He has obtained an MSc in Economics and a PhD in Economics (cum laude) at the University of Amsterdam. Currently he is the Director of the Amsterdam School of Economics.

An Introduction in Prospect Theory by: Chen Yeh In the analysis of decision-making under uncertainty the expected utility hypothesis, formulated by von Neumann and Morgenstern (1944), has been the dominating framework. However the establishment of expected utility theory as a descriptive model has proved to be less successful and has led to several critiques.

Interview with James Ramsey by: Annelies Langelaar James B. Ramsey obtained his Ph.D. from the University of Wisconsin-Madison in 1968, held his first position at Michigan State, was later Professor and Chair of Economics and Social Statistics at the University of Birmingham, England, 197-1973, and has been Professor at New York University since 1976, serving as Chair from 1978 to 1987. James Ramsey was also a jury member of the Econometric Game 2009.

A Geostatistical Approach for Dynamic Life Tables by: A. Debón, F. Martínez-Ruiz, F. Montes

Dynamic life tables arise as an alternative to the standard life table with the aim of incorporating the evolution of mortality over time. This article presents an alternative approach to classical methods based on geostatistical techniques which exploit the dependence structure existing among the residuals.

Solvency II: The Effect of Longevity Risk on the Risk Margin and Capital Requirement by: Lars Janssen Solvency II will come into force in 2012 and provides new regulations for the insurance business regarding capital requirements. Two major components are risk sensitivity and market valuation.

An Alternative Pricing model for Inflation by: Alexander van Haastrecht and Richard Plat

As a medicine against the current financial crisis, governments and central banks have created an almost unlimited money supply. Hyperinflation scenarios are feared and, as the cure and the disease thus affect consumer spending and prices in opposite directions, large uncertainty currently exists about future inflation.



BSc - Recommended for readers of Bachelor-level
MSc - Recommended for readers of Master-level
PhD - Recommended for readers of PhD-level

Using a Markov Switching Approach for Currency Crises Early Warning Systems by: Elena-Ivona Dumitrescu

We speak of a currency crisis when investors flee a currency en masse out of fear that it might be devalued. This should stimulate economists to improve the efficiency of Early Warning Systems.

Estimation and Simulation of Copulas: With an Application in Scenario Generation by: Bas Tammens

In order to calculate economic capital, economic scenarios for the coming year are needed. For these scenarios the consequences for the balance sheet are calculated, and the economic capital corresponds to a certain quantile of the loss function.

Evaluating Analysts' Performance: Can Investors Benefit from Recommendations? by: Lennart Dek This article assesses the performance of analysts by examining whether investors can profit from their recommendations. It shows that it is possible to obtain abnormal returns by following an investment strategy based on recommendations.

Deregulation of the Casino Market: a Welfare Analysis by: Frank Pardoel Several market forms within the casino market have been proposed. A well balanced analysis is necessary in order to shed some light on the current discussion regarding the optimal way of regulating the casino market in Europe.

Longevity Risk Hedging for Pension Funds by: Peter Steur Increasing longevity has a more worrying impact on those whose business it is to provide for old-age income. This article highlights the concept of longevity risk and the likelihood of success of a financial market for longevity derivatives.

Indexing the Value of Home Contents

by: Loes de Boer

This article is based on an investigation of the possibilities of indexing the value of home contents on a year-on-year basis, carried out in cooperation with the Centre for Insurance Statistics and the Central Bureau of Statistics.

Statement: Regression Analysis Should be Understood as a Descriptive Account by: Daniella Brals and David Hollanders

Puzzle

Facultative


Econometrics

Exploiting Patterns in the S&P 500 and DJI Indices: How to Beat the Market

by: Marco Folpmers

Whether stock prices follow a random walk has been the central question of the finance discipline during the last decades. The question is relevant for a couple of reasons: first, the random walk assumption is inherent in many valuation formulas (especially for derivatives), and secondly, an unpredictable path is impossible for smart traders to exploit. Recently, the consensus between supporters and opponents of the random walk seems to be that the walk is not entirely random: some patterns seem to be present, but it is very hard to exploit these regularities. In one of his famous articles about market anomalies, professor Richard H. Thaler concludes (Thaler, 1987, p. 200): 'A natural question to ask is whether these anomalies imply profitable trading strategies. The question turns out to be difficult to answer. [...] None of the anomalies seem to offer enormous opportunities for private investors (with normal transaction costs).' More recently, in 2002, Thaler again indicated that it is hard to take advantage of mispricings because it might take too long for prices to return to a more sensible level.1

Introduction

In this article, we will show how it is possible to identify local peaks and local troughs in the Standard & Poor's 500 index, how to predict these peaks and troughs, and how to exploit them with the help of a fairly straightforward algorithm. We will compare the performance of the algorithm with a buy-and-hold strategy and demonstrate that the algorithm dramatically outperforms the buy-and-hold strategy.

The great debate: do stock prices follow a random walk?

Proponents of the Efficient Market Hypothesis claim that stock prices follow a random walk and that it should be impossible to predict future movements based on publicly available information. The idea that stock prices are unpredictable and follow a random walk (Geometric Brownian Motion) around their intrinsic values is a fundamental element of the Black-Scholes formula for call and put option valuation and of a series of formulas (collectively referred to as Black's formula) for other types of derivatives such as interest rate derivatives. Whereas some evidence has been found that stock prices may depart from the random walk (e.g. the 'January effect': January stock prices tend to exceed the prices in the other months, see Thaler, 1987), the departures found are difficult to explain. On top of that, they are often dismissed as accidental patterns that can easily be identified with the help of abundant data, without any meaning whatsoever. Proponents of the Efficient Market Hypothesis (especially University of Chicago professor Eugene Fama) have not tired of explaining away apparent departures from the unpredictable random walk. In 1994, Merton Miller ascribed apparent mean reversion in the Standard & Poor's 500 index to 'a statistical illusion' (Miller, 1994). On the other hand, followers of the Behavioral Finance school highlight certain inefficiencies, among which are overreactions to information after which adjustment takes place.2 The phenomenon that new information leads to an extreme reaction that is adjusted later on is consistent with short-term mean-reverting behavior (De Bondt & Thaler, 1987). However, as cited above, Richard Thaler, one of the founders of the Behavioral Finance school, concedes that, even though inefficiencies may be pointed out, it is nearly impossible to profit from them.

Marco Folpmers

Dr. Marco Folpmers FRM works for Capgemini Consulting and leads the Financial Risk Management service line of Capgemini Consulting NL. He can be reached at marco.folpmers@capgemini.com.

1 Richard Thaler and Burton Malkiel debated this in 2002 at Wharton; see http://knowledge.wharton.upenn.edu/article.cfm?articleid=651.




Figure 1. S&P 500 index showing local peaks (star) and local troughs (box). [Figure: S&P 500 time series from August 2007 to July 2009; index level (600-1600) against trading day (0-600).]

Table 1. Maximum model (dependent variable: L_MAX)

Predictor       Beta     Std Error   T        P
Constant        (6.43)   1.24        (5.18)   0.00
GR              37.94    18.92       2.01     0.04
UP              1.07     0.30        3.61     0.00
DST_LST_MIN     0.06     0.04        1.48     0.14
DST_LST_MAX     0.08     0.05        1.54     0.12

Model performance
Pairs         Nr        %
Concordant    2,947     91.5%
Discordant    267       8.3%
Ties          6         0.2%
Total         3,220     100.0%

In this article we will test whether mean reversion is apparent in stock index data and, if so, how it can be exploited. Our analysis departs from previous research since we are not interested in predicting the level of the stock index, but only in predicting the binary attribute of whether or not the price is at a local extreme.

The algorithm: the minimum and maximum model

The underlying idea behind the algorithm is that the index time series can be modeled as an oscillation with unpredictable amplitude but with predictable frequency. Our aim is to identify local peaks and troughs, not the level of these local extremes. The algorithm is applied to index data since index series are less influenced by idiosyncratic factors. First we define a local peak (L_MAXt) in the daily opening prices of the Standard & Poor's 500 Index (It) as an observation that is the maximum of the d preceding and the d following observations, so:

L_MAXt = 1, if It = max(It-d, It-d+1, ..., It+d)
L_MAXt = 0, otherwise

The local trough is defined analogously. We have initialized d to a default value of 6 and applied the definitions of the local peak and trough to a 2-year time series of the S&P 500, running from August 1, 2007 to July 31, 2009. The result is shown in Figure 1, where local peaks are shown with the help of a star and local troughs with the help of a box. Within the period shown, the index reached its (global) maximum value of 1564.98 on October 10, 2007, and its (global) minimum value of 679.28 on March 10, 2009, a decline of 57% in 17 months. We have also split the sample into a development set (the first 250 observations) and a test set (the last 250 observations). The split is depicted with the help of a vertical line.

In order to predict a local extreme, we estimate two models, a maximum model and a minimum model, with the help of the development set. We define the explanatory variables for the maximum model as follows:

• GRt: growth rate of the index, determined as GRt = It / It-d-1;
• UPt: number of successive upward movements at time t;
• DST_LST_MINt: distance of observation t to the most recent minimum before time t;
• DST_LST_MAXt: distance of observation t to the most recent maximum before time t.

Note that we use only data that is contained in the series itself. For the minimum model we use the same explanatory variables with one exception: DOWNt, the number of successive downward movements at time t, is used instead of UPt.
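As an illustration of the labeling step described above, the sketch below is a minimal Python example (not the author's code; the use of pandas, the variable names and the data source are assumptions). It flags an observation as a local peak (trough) when it equals the maximum (minimum) of a centered window of 2d + 1 observations, mirroring the definition of L_MAXt.

```python
import pandas as pd

def label_extremes(prices: pd.Series, d: int = 6) -> pd.DataFrame:
    """Flag local peaks (L_MAX) and local troughs (L_MIN) in a daily price series.

    An observation is a local peak if it equals the maximum of the d preceding
    and d following observations (a centered window of 2d + 1 days), and a local
    trough if it equals the corresponding minimum; the first and last d
    observations cannot be labeled.
    """
    window = 2 * d + 1
    roll_max = prices.rolling(window, center=True).max()
    roll_min = prices.rolling(window, center=True).min()
    return pd.DataFrame({
        "price": prices,
        "L_MAX": (prices == roll_max).astype(int),
        "L_MIN": (prices == roll_min).astype(int),
    })

# Hypothetical usage on the daily opening prices of the S&P 500:
# sp500 = pd.read_csv("sp500_open_aug2007_jul2009.csv", index_col=0).squeeze()
# labels = label_extremes(sp500, d=6)
```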

2 For the information overreaction hypothesis tested as mean reversion, see also De Bondt & Thaler (1989), who test mean reversion in the long run (3-7 years; see also Cutler et al., 1991, for long-term mean reversion). See also Balvers et al. (2000), who conclude that there is strong evidence of mean reversion in the stock index prices of 18 countries (16 OECD countries plus Hong Kong and Singapore) over several years. Our purpose is to describe an algorithm that exploits mean reversion within days. Short-term mean reversion for individual stocks has mainly been tested after an extreme performance.




Figure 2. Performance of the algorithm for the S&P 500 versus buy-and-hold for the test set; left panel: value development, right panel: number of index stocks in the portfolio. [Figure: value (5,000-13,000) and number of shares (-5 to 35) against trading day (0-300); the dotted lines show buy-and-hold.]

We now estimate a logit maximum model with L_MAXt as the dependent variable and the explanatory variables mentioned above as independent variables. The estimation is performed on the development set. The results of the estimation are reported in Table 1. The independent variables GRt and UPt are both significant at the 5% level. The model concordance is high, 91.5%. We can also illustrate this concordance as follows: with the help of the estimated betas we calculate the logit scores as

logit(p_max) = log(p_max / (1 - p_max)) = X_max · b_max

In this equation, p_max is the estimated probability that an observation is a maximum according to the maximum model, X_max is the matrix containing the explanatory variables for the maximum model, preceded by a column containing a one-vector, and b_max is the vector of coefficients of the maximum model reported in Table 1. The estimated probabilities are

p_max = exp(X_max · b_max) / (1 + exp(X_max · b_max))

With the help of a cut-off value equal to 0.1, we identify the observations that are flagged as a local peak (p_max above 0.1). Of course, there is a trade-off in determining the cut-off value. If it is too high, the model will more often fail to identify a local peak, while, on the other hand, if it is too low, it will generate many 'false alarms', observations wrongly flagged as a local peak. A minimum model has been estimated using the development set in an analogous way (only using DOWNt instead of UPt).
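One possible implementation of the feature construction and the logit estimation is sketched below. It follows the variable definitions and the cut-off of 0.1 given above, but the code itself, the use of statsmodels and the helper names are illustrative assumptions rather than the article's implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def days_since(flag: pd.Series) -> pd.Series:
    """Trading days elapsed since the most recent flagged observation strictly before t."""
    pos = pd.Series(np.arange(len(flag), dtype=float), index=flag.index)
    last = pos.where(flag == 1).ffill().shift(1)
    return pos - last

def build_features(prices: pd.Series, labels: pd.DataFrame, d: int = 6) -> pd.DataFrame:
    """GR, UP, DST_LST_MIN and DST_LST_MAX as defined in the article."""
    up = (prices.diff() > 0).astype(int)
    return pd.DataFrame({
        "GR": prices / prices.shift(d + 1),                          # GR_t = I_t / I_{t-d-1}
        "UP": up * (up.groupby((up == 0).cumsum()).cumcount() + 1),  # successive upward moves
        "DST_LST_MIN": days_since(labels["L_MIN"]),
        "DST_LST_MAX": days_since(labels["L_MAX"]),
    })

# Example usage (development set = first 250 usable observations):
# labels = label_extremes(sp500, d=6)                 # from the earlier sketch
# data = pd.concat([labels["L_MAX"], build_features(sp500, labels)], axis=1).dropna()
# dev = data.iloc[:250]
# X = sm.add_constant(dev[["GR", "UP", "DST_LST_MIN", "DST_LST_MAX"]])
# maximum_model = sm.Logit(dev["L_MAX"], X).fit()
# flagged_peak = maximum_model.predict(X) > 0.1       # cut-off used in the article
```

The estimated coefficients can then be compared with Table 1; repeating the same steps with DOWNt (successive downward movements) instead of UPt gives the minimum model.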

The trading strategy

The trading strategy works as follows:

• The initial liquidity balance equals €10,000. For each trading day, a liquidity balance is maintained, as well as the number of index shares in the portfolio and their value at current prices.
• When a local minimum has been identified, the algorithm buys stocks at the current prices for a monetary amount of 10% of the initial liquidity balance, i.e. €1,000. The liquidity balance decreases by €1,000 and the stocks bought are added to the portfolio.
• When a local maximum has been identified, the algorithm sells stocks at the current prices for a monetary amount of 10% of the initial liquidity balance, i.e. €1,000. The liquidity balance increases by €1,000 and the stocks sold are subtracted from the portfolio.
• The entire portfolio is liquidated at the end of the period contained in the test set.

The performance is assessed in terms of one-year outperformance of a buy-and-hold strategy. We have first calibrated the parameters for application of the algorithm within the development set. Thus, we arrived at d = 6 (as stated above) and a conversion rate of 10% of the initial balance at suspected peaks (conversion from stocks to liquidity) and troughs (conversion from liquidity to stocks). A sketch of this trading loop is given below.
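The sketch below is a minimal Python illustration of the rules above, not the author's implementation. Transaction costs are ignored, as in the article, and the handling of a sell signal when fewer shares are held than the trade amount requires is not specified in the article, so it is an assumption here.

```python
import pandas as pd

def run_strategy(prices: pd.Series, buy_signal: pd.Series, sell_signal: pd.Series,
                 initial_cash: float = 10_000.0, conversion: float = 0.10) -> dict:
    """Simulate the trading rules: buy at suspected troughs, sell at suspected peaks."""
    cash, shares = initial_cash, 0.0
    trade_amount = conversion * initial_cash          # 10% of the initial balance, i.e. 1,000
    for t, price in prices.items():
        if buy_signal.get(t, False):                  # suspected local trough: liquidity -> stocks
            cash -= trade_amount
            shares += trade_amount / price
        elif sell_signal.get(t, False):               # suspected local peak: stocks -> liquidity
            sold = min(shares, trade_amount / price)  # assumption: never sell more than is held
            cash += sold * price
            shares -= sold
    cash += shares * prices.iloc[-1]                  # liquidate the portfolio at the end
    return {"final_value": cash,
            "buy_and_hold": initial_cash * prices.iloc[-1] / prices.iloc[0]}

# Example on the test set, with signals from the minimum and maximum models:
# res = run_strategy(test_prices, p_min > 0.1, p_max > 0.1)
# outperformance = res["final_value"] / res["buy_and_hold"] - 1
```

Note that with the returns reported below (+21.0% for the algorithm versus -20.2% for buy-and-hold), this ratio definition of outperformance gives 1.210 / 0.798 - 1 ≈ 52%, consistent with the 51.7% quoted in the next section.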



Generally, a higher conversion rate leads to a more volatile performance of the algorithm. Lower values of d lead to a more active algorithm, i.e. more suspected peaks and troughs and, hence, more trading (conversion of liquidity to stocks or vice versa).

Whether a profit can be made from the algorithm can only be illustrated by applying the algorithm, i.e. the minimum and maximum models estimated with the help of the development set and the parameter settings for d, the cut-off (0.1) and the conversion rate, to the subsequent test set. In Figure 2 we show the relative performance of the algorithm when applied to the test set. From the figure we conclude that the algorithm starts at a loss, but its value is almost always above the value of the buy-and-hold portfolio. The one-year return of the buy-and-hold strategy is -20.2%; the one-year return of the algorithm is 21.0%. The outperformance equals 51.7%.

In order to prove robustness, we have applied the same model to the Dow Jones Industrial Average index (DJI) for the same period (again split into a development set and a test set): for the DJI, the minimum and maximum models have been estimated for the same period used for the S&P 500 index. Subsequently, the outcomes of the models have been applied to the same test period. The situation is very similar to the results shown for the S&P 500 index. The outperformance of the algorithm applied to the test set of the DJI equals 59.3%.

An objection that could be made is that we have only tested the algorithm with the help of one test set. In order to counter this objection, we have performed additional tests: we have split the development and test set not only at the 250th observation, but at every observation in the range [250, 350]. We have plotted the outperformance for all these 101 test sets in Figure 3. We conclude that the algorithm consistently outperforms the buy-and-hold strategy. The average outperformance equals 29.8% and there are only 4 cases in which the buy-and-hold strategy outperforms the algorithm. In all applications we have not quantified the transaction costs of the trading activity of the algorithm. However, we believe that the outperformance is dramatic and that transaction costs would have no significant impact on the results.

Figure 3. Outperformance of the algorithm over buy-and-hold for 101 test sets. [Figure: outperformance (-0.1 to 0.6) against the observation (250-350) used for the split between development set and test set.]

Discussion

In this paper we have referred to the claim that, although underlying patterns may be present in stock price development, it is impossible to profit from these patterns. We have shown with the help of a straightforward algorithm that this claim is untenable.

References

Balvers, R., Y. Wu and E. Gilliland. "Mean reversion across national stock markets and parametric contrarian investment strategies." Journal of Finance, 55.2 (2000).

Bondt, W.F.M. de and R.H. Thaler. "Further evidence on investor overreaction and stock market seasonality." Journal of Finance, 42.3 (1987):557-581.

Bondt, W.F.M. de and R.H. Thaler. "Anomalies: a mean-reverting walk down Wall Street." Journal of Economic Perspectives, 3.1 (1989):189-202.

Cutler, D.M., J.M. Poterba and L.H. Summers. "Speculative dynamics." Review of Economic Studies, 58 (1991):529-546.

Fama, E. "Random Walks In Stock Market Prices." Financial Analysts Journal, 21.5 (1965):55-59.

Folpmers, M. "Making money in a downturn economy: using the overshooting mechanism of stock prices for an investment strategy." Journal of Asset Management, 10.1 (2009):1-8.

Miller, M.H., J. Muthuswamy and R.E. Whaley. "Mean reversion of Standard & Poor's 500 Index basis changes: arbitrage-induced or statistical illusion?" Journal of Finance, 49.2 (1994):479-513.

Poterba, J.M. and L.H. Summers. "Mean reversion in stock prices: evidence and implications." NBER Working Paper Series, w2343 (1989). Available at SSRN: http://ssrn.com/abstract=227278.

Thaler, R.H. "Anomalies: the January effect." Journal of Economic Perspectives, 1.1 (1987):197-201.

Zeira, J. "Informational overshooting, booms and crashes." Journal of Monetary Economics, 43.1 (1999):237-257.




Interview with Peter Boswijk by: Annelies Langelaar

Peter Boswijk

Peter Boswijk is a professor of Financial Econometrics at the Department of Quantitative Economics of the University of Amsterdam. He obtained an MSc in Economics and a PhD in Economics (cum laude) at the University of Amsterdam. Currently Peter Boswijk is the Director of the Amsterdam School of Economics.

Could you briefly describe your career?

I studied Economics at the faculty of Economics (and Econometrics, at the time I graduated) of the University of Amsterdam. My first study choice was not Economics; I hadn't even taken any economics at high school. Originally, I began studying Political Science. However, the lack of a quantitative scientific dimension, particularly in mathematics, led me to change course in favour of Economics. Following my economics studies and having completed some econometrics courses, I became a teaching assistant of Mars Cramer at the former department of Actuarial Sciences and Econometrics of this university. This was the moment I properly became involved in econometrics. In 1988 I became a PhD student (AIO), supervised by Jan Kiviet. After I obtained my PhD in 1992, I worked for a short period as an Assistant Professor (UD) at Tilburg University and then, after receiving a research scholarship, I returned to the UvA and have worked here ever since. I am now a professor of Financial Econometrics, which has increasingly become my research field.

Do you still feel you benefit from having studied economics?

In theory, I do benefit from the fact that I have studied economics. As an econometrician you need to have a broad knowledge both of mathematical and statistical techniques, and of economics. However, my knowledge of economics is largely dated, considering I finished my economics studies more than twenty years ago. I do not benefit from my background in economics every day, because my day-to-day work is more about statistics and mathematical techniques.

Do you think that econometricians have a poor knowledge of economics?

I do think that graduates in econometrics know


a lot about mathematical economics, especially microeconomics. Many econometrics students write their thesis about mathematical economics and their theses demonstrate sufficient knowledge of economics. Econometrics students also know a lot about financial economics. The blind spot is macroeconomics. Within the econometrics program we do not have a well-developed variant in macroeconometrics. If, for example, you graduate in econometrics and go to work for DNB (the Dutch Central Bank), you may realize that you do not have enough macroeconomic knowledge. On the other hand, we only have four years available, so it is difficult to teach the students everything. From the opposite perspective, I think that economists often do not have enough knowledge of econometrics, though this is hard for me to judge in general. Economists often think of econometrics as a support course. We try to teach the students that econometrics is more than just a support course; that you can also specialize in econometrics.

You were affiliated with the University of Aarhus. What did you do there?

When I was an AIO (PhD student) I visited this university for about a month. I think it is always a good idea for a PhD student to visit other universities. You meet other people from the same field, which is good as you can share knowledge and experiences, and build up an international network. During my time at the University of Aarhus I met a few people that I am still in contact with. Years later, I have also taught certain advanced econometrics courses for students in Aarhus.

What kind of courses have you given?

In Aarhus, I have taught a course in the field of asymptotic theory of unit roots and cointegration. Our master students in econometrics learn about cointegration methods in a course like Financial Econometrics. To



understand the properties of these methods, you need techniques from statistics and probability. This is really a PhD-level course, not suitable for MSc students.

You were affiliated with a few other universities. Could you expand on that?

I stayed a few weeks at the University of Oxford, also during my PhD research. Again, the feedback from people there was very useful for the progress of my research. I have also been affiliated with the University of California at San Diego, where I stayed for a winter term to teach a course and do research. In San Diego, like at most other US universities, econometrics is part of a two-year master program in economics, preparing students for their PhD research. I taught an advanced time series econometrics course, and also helped some of the students in that course with their PhD research.

What do you consider the differences between Dutch students and international students?

I do not have any experience with teaching undergraduate students abroad. I have only taught master's and PhD students, which is of course different from the average bachelor student. Regarding the results and grades of students, it is remarkable that for Dutch students it is very common to graduate in more than four years (bachelor and master taken together). In the USA and the UK, but also in most other European countries, this is not the case; a four-year study program really means finishing within four years. Here in The Netherlands students have many part-time jobs and are active in study associations. I don't find that entirely rational, as you will earn more money if you finish your studies earlier and then start to work. But that is the culture here and it differs from country to country. It is not easy to say what the more general differences are. Being a master's student abroad is different from being our general master's student, as most of our Dutch master's students aim for a job in business after they graduate. Usually they do not want to do more research at the university. However, students who graduate in econometrics are well educated and are comparable to international econometrics students. For example, universities in the United States are often interested in Dutch econometrics students as PhD students. So students from abroad are not by definition better, but our bachelor and master system has a different orientation.

What are the differences regarding education?

I am sure that there are differences, but I am not fully aware of them, in particular because I have only taught PhD and master's students. I must say that I found it interesting to see that master's students at a top US university such as San Diego, who do not have a bachelor in econometrics, still have a knowledge of econometrics comparable to Dutch master's students in econometrics. An undergraduate program in the United States is not of a particularly high level, and in particular it is typically


much broader and less specialized than a Dutch bachelor program. However, being a graduate student in the United States is tough and the selection is certainly not easy. If you completed your undergraduate degree at a certain university in the United States, it does not mean you will automatically be able to do your graduate degree at the same university. The quality differs between universities in the United States, so the top institutions can select only the good students. The system in The Netherlands is quite different, for example because the vast majority of bachelor students continue with their master's program at the same university.

You have written a PhD thesis. What was it about?

My PhD thesis was written quite a long time ago; I defended it in 1992. When I started, cointegration methods had just become a popular research field. Cointegration is about econometric methods for multivariate nonstationary time series, containing trends. In my thesis I developed one type of model and derived econometric methods for this class of models. These methods included both estimation methods and testing procedures, and a large part of my research was about deriving asymptotic properties of these procedures and studying how they work in practice, based on empirical applications and Monte Carlo simulations. Once I finished my PhD thesis (and in fact also during my PhD research) I tried, and largely succeeded, to get parts of it published as journal articles. Of course you hope that people will then read it and use at least part of it, especially applied econometricians. Indeed, some of the methods from my PhD thesis have subsequently been used by researchers at the CPB (Central Planning Bureau) and DNB. The transfer from the technical work of developing new methods to their application in practice is typically slow, but it is nice if it happens.

What are your current research projects?

My research has always been in time-series econometrics; over the years it has focussed more on the econometrics of financial time series. I am currently involved with a few different research projects. One of them involves cointegration, where we are trying to develop tests that are suitable in the presence of volatility clustering. In financial markets the level of volatility changes over time, leading to the phenomenon of volatility clustering, where busy periods and quiet periods follow each other. If you work with financial time series, in particular at the daily frequency, this phenomenon is very common. The standard methods for unit roots and cointegration are not fully suitable for this type of data, and over the past few years I have tried to develop methods for that. Another research project involves the development of estimation and testing methods for multivariate volatility models. If you have a portfolio of, for example, ten stocks and you need to know how the variances and covariances are changing over time,


then you need such models. I currently supervise two PhD students. The first, Yang Zu, is working on realized volatility, which pertains to high-frequency data. That is when large amounts of data are recorded within a day and you need to measure the volatility. My other PhD student, Paulius Stakenas (a former MSc Econometrics student), is working on fractional integration and cointegration. He is developing and evaluating methods for that, and as usual with PhD students I try to help him by regularly giving feedback on his work.

What are your plans for the future?

Besides my research plans, I have recently taken up the role of director of the Graduate School of Economics. Within our faculty two graduate schools have recently been established, one in economics and the other in business. In this new role I will be responsible for the economics and quantitative economics master programmes, but also for the research policy in the Amsterdam School of Economics, including policy related to PhD students. As far as the master programmes are concerned, my main ambition is to make them more international. The master programmes in Economics and Econometrics have been taught in English for quite a few years now, whereas the programmes in Actuarial Science and Operations Research recently started with this. In all cases it is important that the percentage of international students increases considerably, to make the programmes truly international. Related to this is my involvement with the Tinbergen Institute, the joint research institute and graduate school of the two Amsterdam universities and the Erasmus University Rotterdam. Besides teaching a Tinbergen Institute course in time-series econometrics together with colleagues from Rotterdam, I have recently become a board member, and I will try to develop new initiatives. In particular, we are currently working on a new PhD track for econometrics and mathematical economics students.



Mathematical Economics specialty

An Introduction in Prospect Theory

by: Chen Yeh

In the analysis of decision-making under uncertainty, the expected utility hypothesis, formulated by von Neumann and Morgenstern (1944), has been the dominating framework. It has in general been accepted as a normative model of choice under risk (how should economic agents make decisions under risk?) and has also proved popular as a descriptive model of economic behaviour (how do people make decisions in real life when faced with uncertainty?). However, the establishment of expected utility theory as a descriptive model has proved to be less successful and has led to several critiques. One of the most cited critiques is Prospect Theory, proposed by Kahneman and Tversky (1979). In this article, their intriguing framework is analyzed at an introductory level.

In this issue of Aenorm we continue our series of articles containing summaries of papers that have been of great importance in economics or have attracted considerable attention, be it in a positive sense or a controversial way. Reading papers from scientific journals can be quite a demanding task for the beginning economist or econometrician. By summarizing the selected articles in an accessible way, Aenorm aims to reach these students in particular and to introduce them to the world of economic academics. For questions or criticism, feel free to contact the Aenorm editorial board at info@vsae.nl.

Introduction

Decision-making (under certainty) by the economic agent has been one of the cornerstones of modern microeconomics. Currently there is a theory that is based on a set of axioms of rational consumer preferences and has been standing firm for quite a while. A logical next step for economic researchers was of course to come up with a theory of decision-making under uncertainty. The famous expected utility hypothesis was presented by von Neumann and Morgenstern in 1944. Although their theory has, from a normative perspective, been considered the leading paradigm in decision-making under uncertainty (or risk), their framework is subject to a few paradoxes and has therefore led to several critiques. Economists (and psychologists) especially criticized the capabilities of the theory as a descriptive model. If people in their daily lives were making decisions as predicted by the expected utility hypothesis, then surely their preferences should also satisfy the axioms of this theory. However, Kahneman and Tversky (1979, henceforth K&T) showed in laboratory experiments that the preferences of economic agents often violate the axioms of expected utility theory. As a result, they presented their own descriptive framework of decision-making under risk: Prospect Theory. The article was published in the prestigious journal Econometrica and has become one of the most cited articles of this scientific magazine.

The axioms of expected utility theory

Decision-making under risk can be viewed as a choice between prospects (gambles or lotteries). A prospect can be seen as a set of n outcomes, each of which has a certain probability. This is denoted mathematically by (x1, p1; x2, p2; ...; xn, pn). Since we are dealing with probabilities, their sum must equal unity, i.e. p1 + p2 + ... + pn = 1. The outcome of each prospect gives the decision-maker or economic agent a certain satisfaction or utility. In expected utility theory, the following three assumptions are made:

1. (Expectation) U(x1, p1; x2, p2; ...; xn, pn) = p1u(x1) + p2u(x2) + ... + pnu(xn). Thus the overall utility of a prospect U is a weighted average of the utility u of its outcomes.1

2. (Integration) Integrating an outcome w into a prospect is acceptable if and only if U(w + x1, p1; w + x2, p2; ...; w + xn, pn) > u(w). The above formula simply means that a decision-maker is willing to bet with w in a lottery (i.e. w is integrated into the lottery) if and only if the expected utility of this



lottery exceeds the utility of the certain outcome w. Thus, according to expected utility theory, people only seem to care about the utility levels they end up with (final states) and do not think in terms of utility gains or losses (differences between states).

3. (Risk aversion) The utility function u is concave. The economic interpretation of concavity is that a person prefers to receive x with certainty over any risky prospect that has an expected value of x; thus a decision-maker in general does not like risk.

Evidence from the laboratory: violations of axioms and effects

According to expected utility theory, the utilities of outcomes are weighted according to their respective probabilities. However, in a series of experiments K&T show that this axiom is systematically violated: people often overweight outcomes that are considered a sure bet relative to outcomes that are merely probable. In their paper, K&T label this effect the certainty effect. In the following pair of problems (Figure 1), subjects were asked to make a choice in each problem between monetary prospects; the asterisk indicates that the preference was significant at the 1 percent level.

Figure 1. Illustration of Allais' paradox (Kahneman and Tversky, 1979)

It is clear that the combination (B,C) was the most popular: individual patterns of choice indicated that a 61 percent majority made this modal choice. However, this contradicts expected utility theory. To see why, note that Problem 2 is obtained by simply eliminating a 66 percent chance of winning 2,400 from both prospects. Thus the choice between A and B is equivalent to the choice between C and D according to the expected utility hypothesis, but subjects seem to be inconsistent in their choices. This paradox has been labeled the Allais paradox of expected utility theory.

In the following set of problems (Figure 2), K&T show how the substitution axiom of utility theory is systematically violated. In the first problem, people tend to choose the prospect where winning is more likely (i.e. B). In the second part, the problem is structurally the same, but the probabilities of winning are now miniscule. However, people now change their choice to C. Thus, in line with the previous results, K&T conclude that people's attitudes towards risk change as a function of the probabilities, a phenomenon that cannot be explained by expected utility theory.

Figure 2. Overweighting of miniscule probabilities (Kahneman and Tversky, 1979)

In the previous pairs of problems, K&T only used positive prospects, i.e. people could only win money. However, K&T demonstrated that people's attitudes towards risk also change when they are faced with negative prospects. Table 1 illustrates that preferences between negative prospects are the exact mirror image of the preferences between positive prospects. K&T termed this the reflection effect. The main implication of these results is that the overweighting of certainty increases the aversion to losses as well as the desirability of gains. To resolve this problem, it was suggested that people prefer prospects that have a high expected value and a low variance. However, Problem 3' in Table 1 seems to contradict this conjecture: losing 3,000 for sure has a higher expected value and a lower variance than losing 4,000 with an 80 percent probability or losing nothing with a 20 percent probability. Still, a highly significant majority of 92 percent chose the latter.

Table 1. Preferences between positive and negative prospects (Kahneman and Tversky, 1979)

Positive prospects
Problem 3 (N = 95):  (4,000, 0.80)  [20]   <   (3,000)         [80]*
Problem 4 (N = 95):  (4,000, 0.20)  [65]*  >   (3,000, 0.25)   [35]
Problem 7 (N = 66):  (3,000, 0.90)  [86]*  >   (6,000, 0.45)   [14]
Problem 8 (N = 66):  (3,000, 0.002) [27]   <   (6,000, 0.001)  [73]*

Negative prospects
Problem 3' (N = 95): (-4,000, 0.80)  [92]*  >   (-3,000)        [8]
Problem 4' (N = 95): (-4,000, 0.20)  [42]   <   (-3,000, 0.25)  [58]
Problem 7' (N = 66): (-3,000, 0.90)  [8]    <   (-6,000, 0.45)  [92]*
Problem 8' (N = 66): (-3,000, 0.002) [70]*  >   (-6,000, 0.001) [30]

(The percentage of subjects choosing each prospect is given in brackets.)
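To make the inconsistency behind the modal (B,C) pattern discussed above explicit, the short sketch below uses the payoffs and probabilities of K&T's original Problems 1 and 2, which appear in Figure 1 of the article but are not reproduced in the text: A = (2,500, 0.33; 2,400, 0.66; 0, 0.01), B = (2,400) with certainty, C = (2,500, 0.33), D = (2,400, 0.34). Assuming u(0) = 0, a grid search over increasing utility values confirms that no expected-utility maximizer can prefer both B over A and C over D; this snippet is purely illustrative and not part of the original article.

```python
import numpy as np

def expected_utility(prospect, u):
    """Expected utility of a prospect given as [(outcome, probability), ...]."""
    return sum(p * u[x] for x, p in prospect)

A = [(2500, 0.33), (2400, 0.66), (0, 0.01)]
B = [(2400, 1.00)]
C = [(2500, 0.33), (0, 0.67)]
D = [(2400, 0.34), (0, 0.66)]

# Scan utility assignments with u(0) = 0 and 0 < u(2400) < u(2500).
consistent = False
for u2500 in np.linspace(0.01, 1.0, 100):
    for u2400 in np.linspace(0.005, u2500, 100, endpoint=False):
        u = {0: 0.0, 2400: u2400, 2500: u2500}
        prefers_B = expected_utility(B, u) > expected_utility(A, u)  # B preferred to A
        prefers_C = expected_utility(C, u) > expected_utility(D, u)  # C preferred to D
        consistent = consistent or (prefers_B and prefers_C)

print(consistent)  # False: B > A reduces to 0.34*u(2400) > 0.33*u(2500),
                   # while C > D requires the opposite inequality.
```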

1 Note the difference between the overall utility U and the utility u of one specific outcome. The difference in the use of capital letters is subtle, but important to distinguish.




In order to simplify choices, people often disregard components that prospects share and focus on those parts that distinguish the prospects. However, this may produce inconsistent preferences, as the following pair of problems shows (Figure 3). In the two-stage version of the game (the third problem above), there is a 0.25 * 0.80 = 0.20 chance to win 4,000 and a 0.25 * 1.00 = 0.25 chance of winning 3,000. Thus this problem is equivalent to the second problem above. However, the results of the two-stage game were very similar to those of the first problem (contrary to the prediction of expected utility theory). It seemed as if people simply ignored the first stage of the game; K&T describe this as the isolation effect.

Figure 3. The isolation effect (Kahneman and Tversky, 1979)

Formalization of the framework: the editing and evaluation phases

The results of their experiments were a sufficient reason for K&T to conclude that expected utility theory should be disregarded as a descriptive model of choice behaviour under risk. They coined their alternative Prospect Theory. This particular theory was originally developed for simple prospects with monetary outcomes and stated probabilities, but can be extended to more scenarios. One important feature of prospect theory is the division of the choice process into an editing phase and an evaluation phase. In the editing phase, agents perform

a preliminary analysis of the offered prospects, which often results in simplifying them. Consequently, in the evaluation phase, the edited prospects are evaluated and the prospect with the highest value is chosen. According to K&T, the function of the editing phase is to reorganize and edit the prospects such that subsequent evaluation and choice become easier for the subject in question. K&T consider the following to be the major operations:

- Coding: people perceive outcomes as gains and losses, rather than final states of welfare. Thus people tend to think in differences rather than final outcomes. Gains and losses, however, have to be defined relative to some situation; K&T call this the reference point.
- Combination: prospects can sometimes be simplified by combining two prospects into one, e.g. winning 200 with 25 percent probability or winning 200 with another 25 percent probability can simply be reformulated as winning 200 with 50 percent probability.
- Segregation: prospects are often divided into riskless and risky components, e.g. the prospect of winning 300 with 80 percent probability or 200 with 20 percent probability is decomposed into a sure gain of 200 and the prospect of winning 100 with 80 percent probability or nothing with 20 percent probability.
- Cancellation: people often disregard those components that are shared by different prospects. This was demonstrated in the isolation effect.

Formalization of the framework: the value and weighting function

In the expected utility hypothesis, outcomes were evaluated with utility functions. K&T suggested the use of value functions instead. Two important features distinguish the value function v from the classical utility function u. First of all, people tend to think in differences in wealth rather than final states of wealth. Second, these changes in wealth are judged relative to the current state (or reference point) of the decision-maker: the worth of winning 1000$, given that the person is a billionaire, is obviously lower than the worth of winning this amount, given that this person is a (poor) college student. Similarly, the difference between losing 100$ and 200$ appears greater than the difference between the loss of 1100$ and 1200$.2 Thus, strictly speaking, the value function has two inputs: the reference point and the magnitude of the change in wealth caused by the outcome of the prospect. The fact that v is a function of the reference point seems to be a valid reason for hypothesizing that v is concave above the reference point and convex below it.3 Figure 4 shows the functional form of a hypothetical value function.

Figure 4. A hypothetical value function v (Kahneman and Tversky, 1979)

The phenomenon of overweighting seems to imply that probabilities alone are an insufficient tool as a decision weight. Therefore, K&T propose that probabilities should be evaluated through a weighting function π(p), which results in proper decision weights. However, they explicitly mention that these decision weights are not probabilities, as "decision weights measure the impact of events on the desirability of events and not merely the perceived likelihood of these events". An example of a weighting function relative to probabilities is given in Figure 5.

Figure 5. A hypothetical weighting function π (Kahneman and Tversky, 1979)

Thus the value of a prospect as a whole is given by

V(x1, p1; x2, p2; ...; xn, pn; r) = π(p1)v(x1; r) + π(p2)v(x2; r) + ... + π(pn)v(xn; r),

where r denotes the reference point. Although the difference from the mathematical representation of expected utility theory is subtle, the results are fundamentally different. Preferences between risky options in prospect theory are thus based on two principles. The first principle concerns the editing activities by the decision-maker that determine how prospects are perceived. The second principle involves the subjective evaluation of gains and losses and the weighting of uncertain outcomes. After several critical reaction papers, Kahneman and Tversky (1992) revised their framework, which resulted in Cumulative Prospect Theory. Their model has been applied in many fields of economics and has proved to be of valuable use. In 2002, Kahneman received the Nobel Prize in Economic Sciences for integrating psychological insights into economics, especially for his development of Cumulative Prospect Theory.
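As a small illustration of the evaluation phase, the sketch below evaluates V for the prospects of Problems 3 and 3' from Table 1, using one common parametric choice for v and π: the power value function and probability weighting function that Tversky and Kahneman estimated in their 1992 follow-up paper (α = β = 0.88, λ = 2.25, γ = 0.61). These functional forms and parameter values do not appear in the text above; they are assumptions used purely for illustration.

```python
ALPHA, BETA, LAM, GAMMA = 0.88, 0.88, 2.25, 0.61  # illustrative 1992 estimates (assumption)

def v(x, r=0.0):
    """Value of outcome x relative to reference point r: concave for gains, convex and steeper for losses."""
    z = x - r
    return z ** ALPHA if z >= 0 else -LAM * (-z) ** BETA

def pi(p):
    """Probability weighting: overweights small probabilities, underweights moderate and large ones."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def prospect_value(prospect, r=0.0):
    """V(x1, p1; ...; xn, pn; r) = sum_i pi(p_i) * v(x_i; r)."""
    return sum(pi(p) * v(x, r) for x, p in prospect)

# Problem 3 and its mirror image Problem 3' from Table 1:
print(prospect_value([(4000, 0.80)]), prospect_value([(3000, 1.00)]))    # sure gain valued higher
print(prospect_value([(-4000, 0.80)]), prospect_value([(-3000, 1.00)]))  # risky loss valued higher
```

With these illustrative parameters the model reproduces the reflection effect of Table 1: the sure gain is preferred on the positive side, while the gamble is preferred on the negative side.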

References

Kahneman, D. and A. Tversky. "Prospect Theory: An Analysis of Decision under Risk." Econometrica, 47.2 (1979):263-291.

Kahneman, D. and A. Tversky. "Advances in Prospect Theory: Cumulative Representation of Uncertainty." Journal of Risk and Uncertainty, 5 (1992):297-323.

2 The loss in the second scenario is relatively smaller for the decision-maker, although the loss in magnitude is the same as in the first scenario.
3 For an intuitive notion of this concave/convex structure, note that it is easier to discriminate between a temperature change from 3º to 6º than between a change from 13º to 16º.



Interview with James Ramsey by: Annelies Langelaar

James Ramsey

James B. Ramsey obtained his Ph.D. from the University of Wisconsin-Madison in 1968 and held his first position at Michigan State. He was later Professor and Chair of Economics and Social Statistics at the University of Birmingham, England, 197-1973, and has been Professor at New York University since 1976, where he served as Chair from 1978 to 1987. He is a Fellow of the ASA, was a Visiting Fellow at the School of Mathematics of the Institute for Advanced Study, Princeton, in 1992-1993, and is an ex-president of the Society for Nonlinear Dynamics and Econometrics. James Ramsey was also a jury member of the Econometric Game 2009.

You worked as a professor and Chair of Econometrics and Social Statistics at the University of Birmingham for two years. What did you do there, and which European professors did you work with during this period?

I worked with a number of younger people, for example Andy Chester. He is now a professor at University College London. He was a lecturer under me when I had the chair of Econometrics in Birmingham, England. For me, working in Birmingham was a return home, because I grew up in England and went to school there. But the Birmingham professorship did not work out; I saw the inflation coming and tried to persuade the university to pay me in real terms rather than nominal. They refused, I resigned. Within half a year my resignation was justified. If I had stayed, my salary would have been halved due to the rampant inflation in England in the early 70's. So my forecast was right. Shortly thereafter my colleagues in Economics, Mathematical Economics and Econometrics left and the Economics Departments were debilitated. So in the end it was all about accurate forecasting. That is life.

You have now been affiliated with New York University for 33 years. What is it about this university that binds you so much to it?

I went to New York University from Stanford. The reason I am still at NYU is interesting. I joined the university in September and by December I was ready to quit. I felt that the appointment was the biggest mistake I had ever made in my life. However, I did not resign, because my children had already moved school three times in five years. In January I was made Chair. Over the next several years I was able to hire absolutely superb faculty. Consequently, I decided I had better stay and turned down a number of offers. Now I am committed to New


York University. I enjoy my working life in New York City. I live a charmed life: I walk to work in five minutes, I have a basement garage, which is unusual in New York, and in about 45 minutes I can be at my yacht club. Most recently we keep our boat in Turkey, after having sailed across the Atlantic from North America in 2003.

Perhaps you are most famous for developing the RESET test, which is a general specification test for the linear regression model. How did you come to the idea of this test?

I felt that there was information contained in the data which could be, but had not been, extracted. I had a hard time in pursuing the thesis. Indeed, a number of famous econometricians told me things like "that cannot be done" and that "by definition it was impossible". Professor Zellner, my thesis advisor, never thought I would be successful. But Irwin Gutman in the Statistics Department encouraged me and sustained my intellectual energy. I would go to the Stat Department every few months and he would say: "yep, you have made some progress." It is interesting that the idea of specification error tests, which was regarded with great skepticism at the time, is now accepted wisdom. Now no one would dream of ignoring the potential presence of specification errors. So the acceptance of specification error tests was a real cultural change in the study of econometrics.

What was the biggest difficulty in proving your idea?

One of the important lessons that I learned is that it takes the profession between ten and twelve years to accept new ways of thinking about Econometrics as standard. Any truly new path of analysis takes a long time to be accepted, and then everyone says: "but, it is obvious!" Consider, for example, the difficulties Rob Engle and Clive Granger had in getting ARCH accepted.


Specification error tests are now an obvious thing to do, but that definitely was not so in the beginning. It is hard to recognize now, with hindsight, the intellectual resistance that new ideas must face in the beginning. Besides your theoretical work, you have also performed a lot of empirical research. Do you often use your own test in your empirical work? Most of my empirical work, with a few exceptions, consists of examples of the use of my theoretical work in econometrics. The theoretical work is always driven by the desire to understand economic relationships more accurately and more robustly. Usually my empirical work illustrates the use and benefits of my developments in econometric technique. There are exceptions. For example, a colleague in Health and Human Services (HHS) asked me to rebut the then-current fad for the "supply induced demand hypothesis," and so I became a "health economist." As another example, a colleague came into my office at Michigan State and asked "What do you know about oil economics?" and I said "nothing". We developed a paper which was ready for publication within two months. The paper was remarkable for its timeliness, but it also demonstrated the use of my theoretical development on the derivation of market demand curves. A similar example involved my early work in oil exploration, which led to two books on the subject. It is of interest to note that one of my colleagues at the time said: "Why are you studying the economics of oil exploration when everyone knows there is little oil to find?" And my answer was: "If you look, you may not find; but if you do not look, you will not find." So that ended that discussion. You teach courses in both econometrics and mathematical economics. At the University of Amsterdam, students can choose between masters in those fields after having completed a single bachelor program. Choosing between the two is difficult for most students. Do you have a preference for either one of them and, if so, why? Well, I am an Econometrician rather than a Mathematical Economist. It is my personal choice, but to separate the two is rather like saying: "I want to run but only have one leg and need two." In Economics both Econometrics and Mathematical Economics are needed. Mathematical Economics is needed in order to formulate in a sensible manner the theoretical structure one is trying to develop. Once the theoretical structure is determined, one can develop the appropriate Econometric procedures. Mathematical Economics provides the input for the theoretical econometric decisions, and the implementation of those procedures provides the actual numbers that fit in the equations. I always tell my students in the Honors Tutorial who have to write a thesis: "Develop the theory first and then we can decide on the appropriate Econometrics."

Do you think that nowadays econometricians are spending too much time on the statistical methods and too little on the economic theory? I would not put it that way. Econometric articles can be divided between theoretical econometrics and empirical econometrics. Empirical econometrics is the application of econometric tools and methodology to solve actual problems. What is done is determined by the theoretical structure. I try to tell my students that choosing the econometric algorithm comes at the very end of the process. The art of theoretical econometrics has become mathematically complex. Unfortunately, too many articles are being published which concentrate on developing nuances and esoteric examples of econometric technique. It would be preferable to encourage the development of genuinely new ways of processing the data; an example might be the application of functional data analysis or the use of wavelets in non-parametric analysis. This may be just a phase we are going through, but we have lost our sense of adventure. We should try to be truly innovative instead of being an "N+1er". What I call an "N+1er" is someone who can be popular and successful, but whose modest contribution is to extend the analysis in an obvious manner, albeit using complex mathematics. The true paradigm shift is rare but deeply exciting. But to be receptive to such a change requires an open mind. When I started my thesis on specification error tests I was too ignorant to know that I could not do it, so I did it. As early as 1973 you wrote a paper on "Mortgages, Interest Rates and Changing in the Housing Stock". 35 years later, this became the exact field in which the Credit Crunch originated. What are your thoughts on today's crisis? I wrote the article you are referring to together with David Sheppard, as you note, 35 years ago, so I do not remember what I said in it. But in reference to the current credit crunch there are a number of elements which coalesced to create a "perfect economic storm." While the major banks and investment houses did not ignore risk, their method of assessing risk was premised on markets near equilibrium. Consequently, the risk implications of models that were applied to markets in strong disequilibrium were misleading, to say the least. Add to that the strong political pressure on banks to lend to those who would not normally be deemed acceptable risks, and the result was that the mortgage market became grossly overextended and normal safe banking practices were eschewed. Finally, various financial instruments utilizing mortgages had recently been developed whose market properties were ill understood. Your readers might be amused to note that on my return to New York I received a request from an analyst for an article on interest rates that I wrote over 30 years ago: so much for the benefits of electronic record keeping.




Actuarial Sciences

A Geostatistical Approach for Dynamic Life Tables by: A. Debón, F. Martínez-Ruiz, F. Montes

Dynamic life tables arise as an alternative to the standard (static) life table, with the aim of incorporating the evolution of mortality over time. The parametric model introduced by Lee and Carter in 1992 for projecting mortality rates in the US is one of the most outstanding and has been widely used since then. Different versions of the model have been developed, but all of them, together with other parametric models, treat the observed mortality rates as independent observations. This is a difficult hypothesis to maintain when looking at the graph of the residuals obtained with any of these methods. This article presents an alternative to the classical methods, based on geostatistical techniques which exploit the dependence structure existing among the residuals.

Introduction The concept of a dynamic life table seeks to jointly analyze mortality data corresponding to a series of consecutive years, allowing the influence of the calendar effect on mortality to be studied. Most of its models adapt traditional laws to the new situation and all of them share a common hypothesis: they consider the observed measures of mortality as independent across ages and over time. As Booth et al. (2002) point out, it is difficult to maintain such a hypothesis when looking at the graph of the residuals obtained after the adjustment with any of these models. Tools from other disciplines can be used to overcome the problem that residual dependency poses. We have turned to Geostatistics, which provides techniques for modelling the dependency structure among a set of neighbouring observations. The covariance function and the variogram are the essential tools which, together with kriging techniques, allow predictions to be made and the associated errors to be calculated (Matheron, 1975; Journel and Huijbregts, 1978). Geostatistical techniques were designed for the analysis of data which are very far from what a dynamic table represents. This distance is only apparent, as a dynamic table can actually be considered as a set of data over a

A. Debón, F. Martínez-Ruiz and F. Montes Ana Debón is assistant professor in statistics at the Centro de Gestión de la Calidad y del Cambio of the Universidad Politécnica de Valencia (Spain). Francisco Martínez-Ruiz was associate professor at the Dpt. d'Estadística i Investigació Operativa of the University of Valencia and is now working in the Servei d'Estadística of the City of Valencia Corporation. Francisco Montes is professor in statistics at the Dpt. d'Estadística i Investigació Operativa of the University of Valencia. All three share the graduation and prediction of mortality measures as a common research field.


rectangular grid, equally spaced both vertically, for age, and horizontally, for year. The diagonals of this grid stand for the cohort determined by the age and the year. On the other hand, the aim of Geostatistics is, as already mentioned, to model the dependence structure among neighbours, which requires defining a neighbourhood relationship as well as a distance. These are straightforward in the case of spatial data, but can also be defined for other kinds of data (Cressie, 1993; Mateu et al., 2004).

Adjustment and prediction of mortality rates

We consider a set of crude mortality rates $\dot{q}_{xt}$, for age $x \in [x_1, x_k]$ and calendar year $t \in [t_1, t_n]$, which we use to produce smoother estimates, $\hat{q}_{xt}$, of the true but unknown mortality probabilities $q_{xt}$. A crude rate at age $x$ and time $t$ is typically based on the corresponding number of deaths recorded, $d_{xt}$, relative to those initially exposed to risk, $E_{xt}$.

Lee-Carter models for logit($q_{xt}$)

Some variations on the original Lee-Carter model can be expressed through a unified model. For the logit of the mortality ratio, which we are going to work with (Debón et al., 2008; Renshaw and Haberman, 2006), the expressions are:

LC: $\ln\!\left(\frac{q_{xt}}{1-q_{xt}}\right) = a_x + b_x k_t + \varepsilon_{xt}$

LC2: $\ln\!\left(\frac{q_{xt}}{1-q_{xt}}\right) = a_x + b_x^1 k_t^1 + b_x^2 k_t^2 + \varepsilon_{xt}$

LC-APC: $\ln\!\left(\frac{q_{xt}}{1-q_{xt}}\right) = a_x + b_x^1 k_t + b_x^2 k_c + \varepsilon_{xt}$
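The article adjusts these models by maximum likelihood with the gnm package of R (see below). Purely to illustrate the structure being fitted, the following R sketch estimates the one-term LC decomposition with the classical SVD approach of Lee and Carter, applied to the logit scale; the toy data and the SVD shortcut are our own simplifications and not the authors' data or estimation method.

# Minimal sketch: SVD-based estimation of a_x, b_x, k_t for logit(q_xt), on made-up rates.
set.seed(1)
ages  <- 0:99
years <- 1980:2003
A <- outer(ages, years, function(x, t) -9 + 0.085 * x - 0.01 * (t - 1980))
q <- plogis(A + rnorm(length(A), 0, 0.05))   # illustrative crude mortality rates

Y  <- qlogis(q)                              # logit(q_xt)
ax <- rowMeans(Y)                            # age profile a_x
S  <- svd(Y - ax)                            # SVD of the centred matrix
bx <- S$u[, 1] / sum(S$u[, 1])               # normalised age sensitivities b_x
kt <- S$d[1] * S$v[, 1] * sum(S$u[, 1])      # period index k_t
fitted_logit <- ax + outer(bx, kt)           # fitted logit(q_xt) under the LC model

The same structure carries over to LC2 and LC-APC by adding a second SVD term or a cohort index, which in the article is done within the ML fit rather than by SVD.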



In this table, LC and LC2 stand for the original model with one and two terms, and LC-APC stands for the Age-Period-Cohort model. All these models will be used to adjust the Spanish mortality data described later on. The adjustment is carried out through maximum likelihood using the gnm library of R.

Geostatistical methods

The mortality data we want to analyze can be considered as a set of spatio-temporal data, with age as the one-dimensional spatial component and year as the temporal one. Following Cressie and Majure (1997), we denote by {Z(x, t), x ∈ D, t ∈ T} the mortality measure at age x and time t. The model can be written as

Z(x, t) = μ(x, t) + δ(x, t)

(1)

where E[Z(x, t)] = μ(x, t) is a deterministic large-scale variation (trend), and δ(·,·) is the stochastic small-scale variation (error), a zero-mean second-order stationary Gaussian process with covariance function

$C(h, u) = \mathrm{Cov}[Z(x + h, t + u), Z(x, t)]$

and variogram

$2\gamma(h, u) = \mathrm{Var}[Z(x + h, t + u) - Z(x, t)] = \mathrm{Var}[\delta(x + h, t + u) - \delta(x, t)],$

both characterizing the spatial and temporal dependence.

Modelling trend

The mortality data in a dynamic life table can be viewed as a two-way table. In this context, the deterministic component in (1), μ(x, t), can be expressed as the sum

$\mu(x, t) = \mu + r_x + c_t,$    (2)

where μ is an overall effect, $r_x$ is a row effect due to age and $c_t$ is a column effect due to year. We use a median polish algorithm (Cressie, 1993) for producing the overall effect, $\hat{\mu}$, the row effects, $\hat{r}_x$, x ∈ D, and the column effects, $\hat{c}_t$, t ∈ T. As in the Lee-Carter model, we can enlarge the trend by introducing a cohort effect,

$\mu(x, t) = \mu + r_x + c_t + d_c,$    (3)

for whose estimation we have adapted the original median-polish algorithm.

Prediction for future years

For all the models described in the previous sections, the prediction beyond the period under observation, for example for the year $t_n + s$, has been carried out by adjusting an ARIMA model to the series of time parameters, whether they be periods or cohorts. With the time series adjusted, the parameters are projected, which, once substituted in the models, provide the prediction of the probabilities of death. Throughout this process the parameters dependent on age stay fixed. Prediction expressions for the distinct models are shown below, where $c^* = t_n + s - x$:

LC: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{a}_x + \hat{b}_x \hat{k}_{t_n+s}$

LC2: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{a}_x + \hat{b}_x^1 \hat{k}_{t_n+s}^1 + \hat{b}_x^2 \hat{k}_{t_n+s}^2$

LC-APC: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{a}_x + \hat{b}_x^1 \hat{k}_{t_n+s} + \hat{b}_x^2 \hat{k}_{c^*}$

MP: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{\mu} + \hat{r}_x + \hat{c}_{t_n+s}$

MP-APC: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{\mu} + \hat{r}_x + \hat{c}_{t_n+s} + \hat{d}_{c^*}$

MP-res: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{\mu} + \hat{r}_x + \hat{c}_{t_n+s} + \hat{p}_{\delta}(x, t_n + s)$

MP-APC-res: $\mathrm{logit}(\hat{q}_{x,t_n+s}) = \hat{\mu} + \hat{r}_x + \hat{c}_{t_n+s} + \hat{d}_{c^*} + \hat{p}_{\delta}(x, t_n + s)$

Notice that the two models based on geostatistical techniques have a second prediction term corresponding to the residuals, namely $\hat{p}_{\delta}(x, t_n + s)$, obtained according to the ordinary kriging approach (Cressie, 1993).

Bootstrap confidence intervals

Non-parametric bootstrap confidence intervals are obtained using deviance residuals. The procedure is the following: starting from the deviance residuals obtained from the original data, a bootstrap sample is drawn, the estimated deaths $\hat{d}_{xt}$ are set, and the "observed" deaths $d_{xt}$ are recovered from the expression of those residuals for the Binomial distribution that we assumed for the deaths, $D_{xt} \sim B(E_{xt}, q_{xt})$:

$\mathrm{rdev}_{xt} = \operatorname{sign}(d_{xt} - \hat{d}_{xt}) \sqrt{2\left[ d_{xt}\log\!\left(\frac{d_{xt}}{\hat{d}_{xt}}\right) + (E_{xt}-d_{xt})\log\!\left(\frac{E_{xt}-d_{xt}}{E_{xt}-\hat{d}_{xt}}\right)\right]},$    (4)

reaching the solution through numerical methods, for which we turn to the uniroot function from the stats library of R. With the new observed deaths, the new crude mortality ratios are obtained, and thereafter a new adjustment of the model provides new estimates of the parameters. The process is repeated for the N bootstrap samples, which in turn provides a sample of size N for the set of model parameters and for the mortality ratio, life expectancy and annuity measurements. The confidence intervals are obtained from the percentiles, IC95 = [p0.025, p0.975].
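As an illustration of the inversion step in (4), the sketch below recovers the implied "observed" deaths from a resampled deviance residual with uniroot, as the authors do; the residual value, fitted deaths and exposure used here are hypothetical.

# Minimal sketch: invert the Binomial deviance residual (4) for d_xt given dhat and E.
rdev <- function(d, dhat, E) {
  sign(d - dhat) * sqrt(2 * (d * log(d / dhat) +
                             (E - d) * log((E - d) / (E - dhat))))
}
invert_deaths <- function(r, dhat, E) {
  uniroot(function(d) rdev(d, dhat, E) - r,
          lower = 1e-8, upper = E - 1e-8)$root
}
invert_deaths(0.8, dhat = 40, E = 1000)   # example: residual 0.8, 40 fitted deaths, 1000 exposed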




Analysis of Spanish mortality data

Data

Mortality data in Spain for the period 1980-2003 and a range of ages from 0 to 99 are used. The crude estimates of $q_{xt}$, necessary for the models under study, have been obtained with the procedure used by the Instituto Nacional de Estadística (INE, Spanish National Institute of Statistics).

Model adjustment: residuals and covariance function

The models described above, three Lee-Carter, two Median-Polish and two geostatistical models, have been adjusted to the data, separately for women and men. The model performance is evaluated with three measures: the Deviance, the Mean Absolute Percentage Error (MAPE) and the Mean Square Error (MSE) (Table 1). The valid covariance function adjusted to the empirical covariance function obtained from the residuals $\hat{\delta}(x,t) = Z(x, t) - \hat{\mu}(x, t)$ is a Gneiting model. Figure 1 shows the empirical (left) and adjusted (right) covariance functions for the MP-res model when applied to men. The same covariance structure has been used for men's and women's residuals. One final comment on model adjustment: there is no goodness-of-fit information for the MP-res and MP-APC-res models, because kriging is an exact interpolator in the absence of measurement error. A cross-validation method can sometimes be used to estimate their goodness of fit.

Figure 1. Empirical (left) and theoretical (right) covariance for the residuals of the MP model for men.

Table 1. Goodness of fit for the different models

            Deviance                MSE                     MAPE
Model       Women      Men         Women      Men          Women   Men
LC          6051.85    14255.57    0.007857   0.009588     6.00    6.98
LC2         3533.13    4598.81     0.004805   0.008588     4.75    4.19
LC-APC      2952.95    4272.75     0.005408   0.007360     4.31    4.12
MP          18153.05   24885.66    0.014191   0.013476     7.81    8.87
MP-APC      5831.11    13317.25    0.005874   0.008418     4.97    5.60

Predictions for years 2004 and 2005

In order to measure the goodness of prediction of each model, the predictions $\hat{q}_{x,2004}$ and $\hat{q}_{x,2005}$ have been compared with the corresponding crude estimates, $\dot{q}_{x,2004}$ and $\dot{q}_{x,2005}$. A summary of these predictions via their residuals is presented in graph form in Figure 2, allowing a comparison of the accuracy of each method's predictions.

Bootstrap confidence intervals for e65t and a65t

Figure 3 shows the confidence intervals for the prediction of the expected remaining lifetimes and annuities at the age of 65, e65t and a65t, for the period 2004-2023, obtained with the different models using the bootstrap techniques described above.

Conclusions

Concerning the model fitting, the first conclusion, common to all models, is that adjustments perform better for women than for men. The model showing the best global results for both sexes and for the goodness-of-fit measures is the LC-APC. The geostatistical models, MP-res and MP-APC-res, show the best global results for both sexes and for both years when predicting mortality rates for 2004 and 2005. The LC2 and LC-APC models also show a good performance, particularly for 2004. The greatest differences between the crude and predicted values for all models are observed at the intermediate ages. Bootstrap confidence intervals obtained with the MP model provide higher values for life expectancy. This is due to the prediction of a reduction in $q_{xt}$ for all ages, while the other models predict increases for the advanced and intermediate age groups. With regard to the width of the intervals, a noticeable fact is their narrowness, a feature in common with other published studies (Lee and Carter, 1992; Lee, 2000; Booth et al., 2002; Koissi et al., 2006), whose authors offer different explanations for it. Finally, as far as the influence of gender and of the model on the intervals is concerned, there does not seem to be a clear effect of either factor when they are considered separately. We could speak, nonetheless, of an interaction between them: for example, the greatest interval width, obtained for the MP model for men, moves to the LC-APC model for women. This comment is as valid for the expected remaining lifetime as it is for the annuities.



Figure 2: Absolute values of prediction residuals for each age for different models

Acknowledgment The research described in this article was financially supported by grants from the MEyC (Ministerio de Educación y Ciencia, Spain, projects MTM2007-62923 and MTM2008-05152).

Figure 3: Bootstrap intervals for e65t and a65t for men and women.

References

Booth, H., J. Maindonald and L. Smith. “Applying Lee-Carter under conditions of variable mortality decline.” Population Studies, 56.3 (2002): 325-336.

Cressie, N. Statistics for Spatial Data, Revised Edition. John Wiley: New York, 1993.

Cressie, N. and J. Majure. “Spatio-temporal statistical modelling of livestock waste in streams.” Journal of Agricultural, Biological, and Environmental Statistics, 2.1 (1997): 24-47.

Debón, A., F. Montes and F. Puig. “Modelling and forecasting mortality in Spain.” European Journal of Operational Research, 189.3 (2008): 624-637.

Journel, A. G. and C. J. Huijbregts. Mining Geostatistics. Academic Press: New York, 1978.

Koissi, M., A. Shapiro and G. Högnäs. “Evaluating and extending the Lee-Carter model for mortality forecasting: confidence interval.” Insurance: Mathematics & Economics, 38.1 (2006): 1-20.

Lee, R. and L. Carter. “Modelling and forecasting U.S. mortality.” Journal of the American Statistical Association, 87.419 (1992): 659-671.

Lee, R. “The Lee-Carter method for forecasting mortality, with various extensions and applications.” North American Actuarial Journal, 4.1 (2000): 80-91.

Mateu, J., F. Montes and M. Plaza. “The 1970 US draft lottery revisited: a spatial analysis.” JRSS Series C (Applied Statistics), 53.1 (2004): 1-11.

Matheron, G. Random Sets and Integral Geometry. Wiley: New York, 1975.

Renshaw, A. and S. Haberman. “A cohort-based extension to the Lee-Carter model for mortality reduction factors.” Insurance: Mathematics & Economics, 38.3 (2006): 556-570.



Actuarial Sciences

Solvency II: The Effect of Longevity Risk on the Risk Margin and Capital Requirement by: Lars Janssen

Solvency II will come into force in 2012 and provides new regulations for the insurance business regarding capital requirements (DNB (VII) 2007). Two major components of Solvency II are risk sensitivity and market valuation (DNB (VII) 2007). This means that an insurance company that is subject to a low amount of risk will have a lower solvency capital requirement than insurance companies exposed to higher amounts of risk, thereby stimulating good risk management and policyholder protection. A direct consequence of the new regulation is that all assets and liabilities are valued at market prices. The framework aims to create a level playing field for all participants, harmonise supervision and improve capital allocation (Bouma 2006). To obtain a more general view of the effects of Solvency II on longevity risk, these effects are calculated for a hypothetical insurance company which insures old-age pensions only. For this hypothetical insurance company the focus will lie on the Technical Provision and the Solvency II capital norm with respect to longevity risk. Special attention will be given to the effect of interest on measuring capital requirements for longevity. Furthermore, a comparison will be made between the Solvency II Technical Provision and a best estimate that is adjusted by way of an age reduction on the mortality table.

Calculation of the Solvency Capital Requirement The Solvency II regime demands that the amount of own funds on top of the Technical Provision is (at least) equal to the Solvency Capital Requirement (SCR). The Technical Provision consists of the Best Estimate (BE) of all future cash flows in terms of interest and mortality on

Lars Janssen Lars L. Janssen is a master student in "Actuarial Sciences" at the UvA (Amsterdam). Furthermore, he is completing his master in "Pharmaceutical Sciences" at the VU (Amsterdam). This article is a summary of his bachelor thesis, which was written under the supervision of drs. R. Bruning.


top of which the Risk Margin, or Market Value Margin (MVM), has to be held. The SCR is set up to protect policyholders against unforeseen losses (Bouma 2006). The level of the SCR is such that it protects policyholders with 99.5% confidence over a one-year period (DNB (IV) 2007). If the amount of own funds is below the SCR level, the supervisor will demand action. Besides the SCR, Solvency II also defines the Minimum Capital Requirement (MCR). The MCR is smaller than the SCR and should be ‘simple, robust, and clear’, as described by the DNB (DNB (IV) 2007). CEIOPS describes the MCR as “(…) a level of capital below which an insurance undertaking’s operations present an unacceptable risk to policyholders. If an undertaking’s available capital falls below the MCR, ultimate supervisory action should be triggered” (CEIOPS 2005). Insurers can calculate the SCR using either a standard formula or an approved internal model. The SCR according to the standard formula has a modular structure which defines capital requirements for all (sub)groups, as is shown in Figure 1 (EC 2008). For each module the capital requirement is calculated and all of these capital requirements are aggregated using a correlation matrix (CEIOPS 2007). The correlation matrix is designed to capture diversification effects between different risk groups.

The Quantitative Model In this paragraph the streamlined old-age pension insurance company is introduced. To measure the effects of Solvency II at the level of the insurance company it is not enough to model the capital requirements for a single policy. Every policy contributes differently to the



Figure 1. (EC (II) 2008)

$SCR_{k,x_i} = {}_kV^{*}_{x_i} - {}_kV^{BE}_{x_i}$

$ {}_kV^{BE}_{x_i} = \frac{1}{{}_kp_{x_i}\, v(0,k)} \left[ \ddot{a}_{x_i}({}^{k}b_{x_i}) - \ddot{a}_{x_i}({}^{k}\pi_{x_i}) \right] $

$ {}_kV^{*}_{x_i} = \frac{1}{{}_kp^{*}_{x_i}\, v(0,k)} \left[ \ddot{a}^{*}_{x_i}({}^{k}b_{x_i}) - \ddot{a}^{*}_{x_i}({}^{k}\pi_{x_i}) \right] $

$ {}_1q^{*}_{x+k} = 0.75\,{}_1q_{x+k,2008}, \quad 0 < k < \omega - x $

$ {}_0q^{*}_{x+k} = {}_0q_{x+k} = 0 \;\; \forall k, \qquad {}_1q^{*}_{\omega} = {}_1q_{\omega} = 1 $

$ MVM_{t,x_i} = CoC\% \cdot v(0,t)^{-1} \sum_{k=t+1}^{\omega-x} v(0,k)\, SCR_{k,x_i} $

$ {}_tV^{SolvII} = \sum_i \left( {}_tV^{BE}_{x_i} + MVM_{t,x_i} \right) $

$ Solv_t^{Total} = \sum_i \left( {}_tV^{BE}_{x_i} + SCR_{t,x_i} + MVM_{t,x_i} \right) $

capital requirement, due to differences in gender, age and the amount of yearly pension that is insured. To draw conclusions at the level of an insurance firm it is necessary to create a representative portfolio of policies and to calculate the capital requirements for this portfolio. The streamlined insurance company presented here consists of 50 persons randomly selected from a database from ‘ASR verzekeringen’. We assume here that the insurance company is only subject to longevity risk. Basic statistics for this portfolio are listed in Table 1. The supposition regarding the longevity risk simplifies the calculation of the SCR (Figure 1) to the subsection associated with longevity risk (Lifelong). Furthermore, to simplify computations, all policyholders are assumed to retire at the age of 65, and all birth dates are rounded to the nearest whole year (resulting in a birth date of January 1st for all policyholders). In order to obtain the Technical Provision, first the Best Estimate (BE) has to be calculated. The BE at time k is calculated using the actuarial definition of a reserve (kVBE) as described by Promislow (2006). The nominal interest rate term structure as prescribed by DNB is used as a measure for future interest, while for future mortality the ‘generatietafels lijfrenten’ (life annuity tables for different generations) by the ‘Verbond van Verzekeraars’ (Alliance of Dutch insurance companies) are used (Verbond van Verzekeraars 2008; DNB 2009).

Box 1: Actuarial formulas. The SCR_{k,x_i} is calculated as the difference between the BE reserve for policy x_i under shocked conditions ({}_kV*_{x_i}) and the reserve for policy x_i under normal conditions ({}_kV^{BE}_{x_i}). The standard actuarial definitions as described by Promislow (2006) are shown for {}_kV*_{x_i} and {}_kV^{BE}_{x_i}. The k-year survival rates for policy x_i are shown as {}_kp_{x_i} under the BE scenario or as {}_kp*_{x_i} under the shocked scenario (asterisks (*) mark scenarios calculated with shocked mortality rates). The superscript in front of a vector (e.g. {}^k b_{x_i}) indicates that the first k elements of the vector associated with policy x_i are set to zero. v(0,k) represents the discount function valuing 1 at time k. The benefit vector for policy x_i, b_{x_i}, consists of nominal benefits paid from the age of 65. The premium for policy x_i, π_{x_i}, is calculated using a fixed mortality table (2008) and a fixed 3% interest rate; the premium vector has elements greater than zero for all years until retirement. The mortality rates are affected by the shock on the table of 2008 (the first year in the generation life-annuity table), as indicated by 2008 in the subscript; the zero and ultimate mortality rates are not subjected to the shock. The MVM_{k,x_i} is calculated by discounting all future SCRs to time k and multiplying them by the Cost of Capital factor (CoC%) to obtain the cost of holding the future SCRs. The CoC% factor is defined as the Cost of Capital for an average firm; under QIS4 this is set to 6%. The CoC% of an average firm is used to protect policyholders against ‘run-off’ risks: in case no insurance company can be found to take over the portfolio, there is enough money to finance the future SCRs (Bundesamt für Privatversicherungen 2006).
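To make the reserve and shock mechanics of Box 1 concrete, the R sketch below values a single hypothetical deferred pension (policyholder aged 47, € 20,000 per year from age 65, a flat 3% discount curve and a made-up Gompertz-type mortality table) and computes the longevity SCR as the difference between the shocked and the best-estimate reserve. Premium offsets are omitted, so this is a simplified illustration of the structure, not the author's model.

# Illustrative BE and shocked reserves for one deferred old-age pension (own assumptions).
qx <- pmin(exp(-10 + 0.1 * (0:120)), 1)   # hypothetical one-year mortality rates, ages 0..120
qx[length(qx)] <- 1                       # ultimate mortality rate equals 1
v  <- function(k) 1.03^(-k)               # flat 3% discount curve

annuity_from_65 <- function(age, q) {     # PV at `age` of 1 per year paid from age 65 if alive
  ages <- age:120
  kpx  <- cumprod(c(1, 1 - q[ages + 1]))[seq_along(ages)]   # survival to each age
  sum((ages >= 65) * kpx * v(ages - age))
}

be_reserve    <- 20000 * annuity_from_65(47, qx)
q_shock       <- 0.75 * qx                # 25% longevity shock on the one-year rates
q_shock[length(q_shock)] <- 1             # ultimate rate left unshocked, as in Box 1
shock_reserve <- 20000 * annuity_from_65(47, q_shock)
c(BE = be_reserve, shocked = shock_reserve, SCR = shock_reserve - be_reserve)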

Table 1.
Gender: 76% male, 24% female
Average age: 47 (standard deviation: 8.9 years)
Average insured pension: € 29,468 (standard deviation: € 11,456)
Average insured pension (to be financed): € 16,673 (standard deviation: € 9,144)

The Market Value Margin which has to be held on top of the Best Estimate is calculated using projections of future SCRs. The BE and MVM together constitute the Technical Provision. The SCR for policy x_i at time k (SCR_{k,x_i}) is calculated as the difference between the reserve (Best Estimate) calculated under normal conditions and the reserve using a mortality shock on all one-year mortality rates; the calculation of the SCR as well as the definition of the shocked mortality rates are displayed in Box 1. The




prescribed downward shock of 25% on future mortality rates is translated here into a flat mortality scenario of 75% of the current mortality rates. This is done because the main aspect of the 25% is mortality improvement, but this is an aspect that is already captured by the BE. The MVM for policy x_i at time k (MVM_{k,x_i}) is defined using expected SCRs under the assumption that interest and mortality develop as assumed (shown in Box 1). Finally, the Technical Provision for the portfolio can be obtained by summing the BE and the MVM over all the policies ( tV^{SolvII} in Box 1). The total amount of funds an insurance company is obliged to hold at time t under the Solvency II regime (referred to as the Solvency II norm) consists of the Technical Provision and the SCR for the entire portfolio (Solv_t^{Total} in Box 1).

Results

The Solvency II norm (Technical Provision + SCR) is equal to 113.66% of the Best Estimate (Table 2), while the Technical Provision is equal to 111.63% of the Best Estimate (Table 3). Consequently, the MVM is 11.63% of the BE and the SCR is 2.03% of the BE. These figures are especially important since they are enforced by the supervisor for this portfolio. Projections of the Solvency II norm and the Technical Provision are shown in Figure 2 and Figure 3. It is important to realise that these projected values will only be realised if interest and mortality develop exactly as predicted at time 0. As shown in Figure 2, the Solvency II norm is expected to increase for the next 20 years, which is in line with the assumption that the average policyholder will work 18 more years before retiring (Table 1). At the single-policy level the reserve is expected to be maximal just before the moment of retirement; apparently this rule can be used as a rule of thumb for the entire portfolio as well. The projections for the Technical Provision are expected to be reasonably stable for the next 40 years, varying between 5% and 15% of the Best Estimate (Figure 3). It is remarkable that the pink line in Figure 3 increases after 20 years, which means that the MVM is relatively higher when more people are retired. The effect of a change in the nominal interest rate term structure of 50 basis points (bp) on the Solvency II norm is shown in Table 4. The table shows that a relatively small change in the term structure has big effects on the Solvency II norm at time 0. A decrease of 50 bp increases the Solvency II norm by 24.88%, while an increase of 50 bp reduces the Solvency II norm by 20.80%.

Figure 2.
Figure 3.

Table 2.
Solvency II norm: € 5,782,141 (113.66% of BE)
Best Estimate: € 5,087,209 (100.00% of BE)

Table 3.
Technical Provision: 111.63% of BE
Best Estimate: 100.00%

Table 4.
Nominal                     Euros         % of Solvency II norm
Solvency II norm            € 5,782,141   100.00%
Solvency II norm, -50 bp    € 7,220,594   124.88%
Solvency II norm, +50 bp    € 4,579,405   79.20%

Table 5.
Technical Provision, shock 0 bp:   111.63% of BE
Technical Provision, shock -50 bp: 112.04% of BE
Technical Provision, shock +50 bp: 111.42% of BE
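The risk margin defined in Box 1 is a cost-of-capital charge on discounted future SCRs. As a minimal numerical sketch of that formula, with a made-up SCR run-off pattern and a flat 3% discount curve (both hypothetical, chosen only to show the mechanics):

# Cost-of-capital risk margin at t = 0: 6% of the discounted projected SCRs.
v    <- function(k) 1.03^(-k)
scr  <- 100000 * exp(-0.05 * (1:40))   # hypothetical projected SCR_k for k = 1..40
mvm0 <- 0.06 * sum(v(1:40) * scr)      # MVM_0 (the v(0,0)^-1 factor equals 1 at t = 0)
mvm0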





Table 6.
                      %(0VBE + MVM0)   %DVSolvII
Solvency II norm      100.00%          100.00%
0 year reduction      89.58%           92.34%
1 year reduction      96.70%           96.94%
2 year reduction      103.69%          101.56%
3 year reduction      110.56%          106.19%

The Technical Provision as a percentage of the Best Estimate (Table 5) shows only small differences. This means that the MVM as a percentage of the BE is not very sensitive to the interest rate. Though the results are not shown, the same trends are observed for the projected values of the Solvency II norm and the projected Technical Provision. Age reduction is a premium principle frequently used by insurance companies. Calculating the best estimate using a reduced age can be seen as a means to approximate the Risk Margin. Here, two different approaches are presented to compare this premium principle with the Risk Margin. The first approach (the results are shown in the second column of Table 6) compares the Technical Provision at time 0 with the reserve calculated with the reduced age. Table 6 shows that for the entire portfolio an average age reduction of one or two years gives results comparable with the Solvency II Risk Margin. A second approach is to compare the discounted values of the projections of both the Solvency II and the age-reduction approach. The results for this approach are shown in the third column of Table 6 as a percentage of the sum of the Discounted Values under Solvency II (DVSolvII). The results for the second approach show that an age reduction of two years is on average the best value for the entire portfolio and for all years. In accordance with the results presented in Table 6, it is concluded that an age reduction of two years compares best to the Risk Margin as calculated under the Solvency II regime.

Conclusions The main conclusion is that, for the portfolio observed, Solvency II requires 113.66% of the Best Estimate to be held as capital, of which 11.63% serves as a risk margin and 2.03% serves as SCR. At the portfolio level, the Solvency II norm is sensitive to upward and downward changes of the term structure. An increase of 50 bp leads to a decline of the Solvency II norm of 20.80%, while a decrease of 50 bp leads to an increase of the Solvency II norm of 24.88%. The risk margin as a percentage of the BE is not very sensitive to the interest rate. Finally, an age reduction of two years is the age reduction which is, for the portfolio observed, most in accordance with the Risk Margin as defined under Solvency II.


Acknowledgements

Special thanks go out to Jenneke G. Meijer, with whom the featured mathematical model was constructed, and to drs. Rob Bruning, who was my supervisor for this thesis.

References

Bouma, S. Risk Management in the Insurance Industry and Solvency II. Capgemini Compliance and Risk Management Centre of Excellence, 2006.

Bundesamt für Privatversicherungen. The Swiss Experience with Market Consistent Technical Provisions - the Cost of Capital Approach. B. f. P. BPV, 2006.

CEIOPS. Answers to the European Commission on the second wave of Calls for Advice in the framework of the Solvency II project, 2005.

CEIOPS (I). QIS3 Calibration of the underwriting risk, market risk and MCR, 2007.

DNB (IV). “Solvency II - Kapitaaleisen.” From http://www.dnb.nl/openboek/extern/id/nl/all/40158354.html, 2007.

DNB (IX). “Nominal interest rate term structure (zero-coupon).” From http://www.statistics.dnb.nl/index.cgi?lang=uk&todo=Rentes, 2009.

DNB (VII). “Solvency II: Op weg naar een nieuw toezichtkader voor verzekeraars.” De Nederlandsche Bank Kwartaalbericht, December (2007): 55-60.

EC (II), E. C. QIS4 - Technical Specification, 2008.

Promislow, S. D. Fundamentals of Actuarial Mathematics. Wiley, 2006.

Verbond van Verzekeraars. Generatietafels Lijfrenten 2008, 2008.


Actuarial Sciences

An Alternative Pricing Model for Inflation by: Alexander van Haastrecht and Richard Plat

As medicine against the current financial crisis, governments and central banks created an almost unlimited money supply and spent huge amounts to stimulate the economies worldwide. Due to the increasing money supply, hyperinflation scenarios are feared, but simultaneously, because of the economic crisis, deflationary spirals are also feared. As the cure and the disease thus affect consumer spending and prices in opposite directions, a large uncertainty currently exists about future inflation. These developments directly affect insurance companies and pension funds, as their liabilities involve inflation features such as inflation-linked indexations. In the UK, products depending on the Limited Price Index (LPI), where the yearly inflation of the price index is restricted between 0% and 5%, have gained significant popularity. In the Netherlands, meanwhile, conditional indexations based upon the Harmonized Index of Consumer Prices (HICP) and inflation-linked Defined Benefit schemes with some form of deflation protection have become common. Over the last decade, the markets for inflation derivatives have witnessed significant growth. Inflation products depending linearly on future price index levels, such as HICP (excluding Tobacco), can be valued marked to market, as a liquid market has emerged. More complex inflation products, such as the inflation-linked obligations embedded in typical Dutch and UK pension schemes, have to be valued marked to model by applying suitable stochastic methods, in line with Solvency II and IFRS 4 Phase 2. In this article we consider a flexible and parsimonious pricing model, which can be used for the pricing and risk management of such inflation-linked products. The considered model is structurally similar to the Jarrow-Yildirim model, but offers several advantages.

Inflation modelling Articles on inflation modelling typically start out with the Fisher equation, which relates the nominal, inflation and real interest rates. In particular, the real return of an investment is given by the relative increase in buying power and not by the nominal return, see Fisher (1930). That is, the Fisher equation defines the 'real' rate as r = n - y, where n represents the continuously compounded nominal interest rate and y the continuously compounded inflation rate. As commented in Jäckel and Bonneton (2008), this definition is of no further consequence for any derivatives modelling; however, its concepts and terminology are useful as they have spread throughout the

modelling literature. The behaviour of inflation is typically modelled by a stochastic differential equation that describes the movement of the Consumer Price Index I(t), for instance the US CPI or the European HICP ex Tobacco. The inflation i(T1,T2) between times T1 and T2 can be derived directly from this price index, i.e. i(T1,T2) = I(T2)/I(T1) - 1, which becomes negative when I(T2) < I(T1), in which case we speak of deflation rather than inflation.

Jarrow-Yildirim model The Jarrow and Yildirim (2003) framework for modelling inflation and real rates is based on a foreign-exchange analogy between the real and the nominal economy. That is, the real rates can be seen as interest rates in the real economy, whereas the nominal rates represent the interest rates in the nominal economy. The inflation index then represents the exchange rate between the nominal and real economies. In short, the stochastic differential equations for the Jarrow and Yildirim (2003) model under the risk-neutral measure are given by:

$dn(t) = [\theta_n(t) - a_n n(t)]\,dt + \sigma_n\,dW_n(t),$

$dr(t) = [\theta_r(t) - \rho_{r,I}\sigma_I\sigma_r - a_r r(t)]\,dt + \sigma_r\,dW_r(t),$

$dI(t) = I(t)[n(t) - r(t)]\,dt + \sigma_I I(t)\,dW_I(t),$

where the nominal and real interest rates are given by

Alexander van Haastrecht and Richard Plat

Alexander van Haastrecht is Actuarial Advisor at Delta Lloyd / Expertise Centrum. Richard Plat is a senior risk manager at Eureko / Achmea Holding. Both authors are PhD candidates at the University of Amsterdam. Alexander's main areas of interest are Market Consistent Valuation, Hedging, ALM and Risk Management of long term financial contracts, and of collective pension contracts in particular. Richard's main areas of interest are Economic Capital, Market Consistent Embedded Value, Pricing, Hedging, ALM, mortality risk and all other issues related to Life & Pension insurance.

Hull and White (1993) models, and the CPI index follows a lognormal distribution. The Jarrow-Yildirim model initially gained significant popularity among quantitative analysts and ALM experts, mainly due to the fact that the model could be developed relatively fast, as similar frameworks were already available as cross-currency Hull-White or hybrid equity Hull-White interest rate models. Its main drawbacks, however, are:
• The calibration and application of the model requires

the specification of correlations between the inflation




index and nominal interest rates, the inflation index and real rates, and the nominal interest rates and real rates, of which only the first is directly observable.
• The model requires volatility specifications for the non-observable real rates, which are even harder to estimate than the inflation rate itself, see, e.g., Belgrade et al. (2004).
In the following section, we will discuss a model which is structurally similar to the Jarrow-Yildirim framework, but does not suffer from the above disadvantages.

An alternative inflation model with stochastic interest rates To overcome the drawbacks of the Jarrow and Yildirim (2003) model, alternative models have been proposed by Dodgson and Kainth (2008), Plat (2008) and Jäckel and Bonneton (2008), which avoid the dependency on unobservable correlations and real rates. Though different parametrizations are used, these authors essentially all model inflation rates directly, in contrast to the somewhat indirect foreign-exchange approach of Jarrow and Yildirim (2003). Furthermore, the instantaneous inflation and nominal interest rates are driven by (correlated) Hull-White processes, hence the log-normality of the price index is preserved. In Jäckel and Bonneton (2008) the following inflation model with stochastic interest rates is formulated for the Consumer Price Index I(t):

$I(t) = F(0,t)\, e^{-B_a(t,T)\,\mathrm{Cov}_0[z(t),X(t)] - \frac{1}{2}\mathrm{Var}_0[X(t)] + X(t)},$

$P(t,T) = \frac{P(0,T)}{P(0,t)}\, e^{-B_a(t,T)\,z(t) + \frac{1}{2}B_a^2(t,T)\,\mathrm{Var}_0[z(t)]},$

with $X(t) = \int_0^t x(u)\,du$ and $B_\mu(t,T) = [1 - e^{-\mu(T-t)}]/\mu$. Under the T-forward measure, which uses the zero-coupon bond price P(t,T) as numeraire, the instantaneous inflation driver x(u) follows a standard Ornstein-Uhlenbeck process with mean reversion strength κ and volatility α. The instantaneous interest rate z(u) is also driven by a correlated Ornstein-Uhlenbeck process. Using the log-normality of the price index, closed-form Black-style option pricing formulas can be derived. Moreover, complex options can be valued using Monte Carlo simulations, which can be done very efficiently due to the fact that the triple z(t), x(t), X(t) follows a trivariate Gaussian distribution. The model is very tractable and for instance allows for simple closed-form calibration and simulation methods¹. Furthermore, it is straightforward to consider extensions with multi-factor inflation and interest drivers, or to translate its dynamics to other measures such as the money-market measure, see e.g. Brigo and Mercurio

(2006). The model setup offers several advantages over the Jarrow-Yildirim framework:
• First and foremost, it avoids the need to translate observable market data into unobservable correlations and real rate volatilities.
• Secondly, by suitably choosing the different drivers' correlation, volatility and mean reversion strength, the model allows for a highly flexible calibration to the inflation curve's desired auto-correlation structures.
• Thirdly, the model with two inflation drivers can in fact be seen as a direct extension of the Jarrow-Yildirim framework. To this end, note that in the limit κ → ∞, whilst keeping α²/κ constant, the Ornstein-Uhlenbeck process driving the inflation becomes a Brownian motion. One can thus choose to let the inflation index be driven by one Brownian motion and one mean-reverting inflation factor, see also Jäckel and Bonneton (2008), in complete analogy to the Jarrow-Yildirim model.
In this sense, the two-factor instantaneous inflation model with stochastic interest rates incorporates the auto-correlation structures of the Jarrow-Yildirim model as a special case, but in general allows for more flexible calibrations and, more importantly, it avoids the need to translate observable market data into unobservable correlations and real rate volatilities.

Auto-correlation structure As inflation rates are logically interconnected, it is important for the pricing of inflation derivatives to consider realistic correlation structures between future inflation rates. To investigate the (auto-)correlation structure in the latter inflation model, we consider the logarithm of the inflation index ratio, Zi = ln(I(Ti) / I(Ti-1)), a quantity which forms the direct input for inflation rates, year-on-year swaps and inflation-linked options. The correlation structure between the Zi's for different model parameters is investigated in Figures 1 and 2. From Figures 1 and 2 we can indeed see that the model allows for believable auto-correlation structures. Moreover, flexibility can be obtained through the mean reversion strength κ, which is used to calibrate the desired auto-correlation structure of the inflation curve. In Figure 1 the correlation between the logarithm of the first-year inflation index ratio Z1 and the future inflation index ratios Zi is plotted for the maturities i = 2,...,21. In Figure 2 the correlation between the logarithms of consecutive inflation index ratios, Zi-1 and Zi, is displayed. From Figure 1 we also see that the auto-correlation between the first-year rate and the other rates monotonically decreases for longer maturities, exactly as we would like it to be. For instance, if κ = 1, the correlation between the first

1 Explicit solutions for the driving model factors, option pricing formulas for vanilla inflation options and auto-correlation structures are available upon request.




Figure 1. Auto-correlation structure between the logarithms of first and future year inflation index ratios for different mean reversion parameters κ.

Figure 2. Auto-correlation structure between the logarithms of consecutive inflation index ratios for different mean reversion parameters κ.

year inflation rate and the second-year rate is equal to 53%, whereas the correlation with the inflation of the fifth year is only 2.5%. Finally, from Figure 2 it follows that year-on-year inflation rates become more correlated for longer maturities, in particular when the reversion strength is small or mean-fleeing. Generally, due to the mean reversion effect of the underlying Ornstein-Uhlenbeck inflation process, the correlation between inflation rates decreases for larger values of κ. Intuitively this can be interpreted by the fact that the inflation driver has a longer memory for lower mean reversion strengths: for high values of κ, movements of the inflation driver x have little impact on future inflation rates, as they are restored to their original level relatively fast by the high mean reversion strength. Movements of the inflation driver x under a low mean reversion level, on the other hand, directly impact future inflation rates, as the state variable remains at that level for a relatively long period. Taking this argument to the limit, one can even show that for κ → ∞, whilst keeping α²/κ constant, the underlying inflation driver converges to an ordinary Brownian motion, i.e. with no memory property at all.
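The qualitative pattern just described can be reproduced with a small Monte Carlo experiment. The R sketch below simulates an Ornstein-Uhlenbeck inflation driver x, builds X(t) and the yearly increments Zi, and reports the correlation of the first-year increment with later years; the interest-rate term and drift corrections are deliberately ignored here, so this is our own simplified illustration of Figures 1 and 2, not the calibrated model.

# Auto-correlation of yearly increments of X(t) for an OU driver with given kappa, alpha.
yoy_corr <- function(kappa, alpha, years = 10, dt = 1/52, npaths = 2000) {
  nsteps <- years / dt
  phi    <- exp(-kappa * dt)
  sd_eps <- alpha * sqrt((1 - phi^2) / (2 * kappa))       # exact OU transition st.dev.
  Z <- matrix(0, npaths, years)
  for (p in 1:npaths) {
    eps <- sd_eps * rnorm(nsteps)
    x   <- c(0, as.numeric(stats::filter(eps, phi, method = "recursive")))  # OU path
    X   <- cumsum((head(x, -1) + tail(x, -1)) / 2) * dt   # X(t) = integral of x
    Z[p, ] <- diff(c(0, X[seq(1/dt, nsteps, by = 1/dt)])) # yearly increments Z_i
  }
  cor(Z)[1, ]   # correlation of the first-year increment with years 1..10
}
round(yoy_corr(kappa = 1, alpha = 0.01), 2)

Raising kappa makes these correlations die out faster, in line with the memory argument above.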

Pricing of inflation-indexed caps and floors To calibrate inflation models, market prices for vanilla year-on-year caps and floors are generally used. It is important that closed-form pricing formulas for these vanilla options are available in a pricing model, as calibrations have to be done fast and accurately. We hence consider the pricing of vanilla caps and floors in more detail. An inflation-indexed cap (floor) is a sum of caplets (floorlets), which in turn can be seen as call (put) options on the inflation rate implied by the inflation (e.g. HICP) index I. The payoff at time T2 of an inflation-indexed caplet with strike k is given by

$\max\!\left(\frac{I(T_2)}{I(T_1)} - 1 - k,\; 0\right).$

For ease of exposition we here assume a unit notional and year fraction, and neglect lags in the inflation index. One can show that I(T2)/I(T1) under the T2-forward measure QT2, which uses the T2-bond price as numeraire, follows a lognormal distribution with mean F and total volatility v. Standard no-arbitrage theory implies that the time-zero value of an inflation-indexed caplet is given by the following version of Black's formula with forward price F, strike K = 1 + k, total volatility v and ω = 1:

$\mathrm{IICplt}(T_1, T_2, K) = P(0,T_2)\left[ F\omega N(\omega d_1) - K\omega N(\omega d_2)\right],$

$d_1 = \frac{\ln(F/K) + v^2/2}{v}, \qquad d_2 = d_1 - v,$

with P(0,T2) the time zero discount factor for time T2 and where N denotes the cumulative normal distribution function. The price of the corresponding floorlet can analogously be obtained by setting ω = –1 in the above formula. As the above formula can be evaluated very efficiently, it can be used in calibrations of the model to the market prices of vanilla year-on-year caps and floors.
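The formula above translates directly into a few lines of R; the forward index ratio F, total volatility v and discount factor used in the example call are placeholder inputs, not calibrated model outputs.

# Black-style year-on-year caplet (omega = 1) or floorlet (omega = -1) price.
ii_optionlet <- function(F, K, v, P0T2, omega = 1) {
  d1 <- (log(F / K) + v^2 / 2) / v
  d2 <- d1 - v
  P0T2 * (omega * F * pnorm(omega * d1) - omega * K * pnorm(omega * d2))
}
ii_optionlet(F = 1.021, K = 1.02, v = 0.015, P0T2 = 0.95)   # example: 2% cap strike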

Calibration Assuming that the nominal interest rate parameters have been calibrated against relevant interest rate option data, there are, depending on the model's parametrization, several possibilities to calibrate the inflation driver(s). For




instance, with one inflation factor, a fixed mean reversion strength and time-dependent volatilities, one can calibrate the volatilities by bootstrapping them to a set of year-on-year inflation options. To obtain appropriate correlation structures and year-on-year rates, the mean reversion strength is typically much higher than encountered in interest rate models. We can also take year-on-year inflation rates and a set of year-on-year inflation floors/caps into account simultaneously. As the year-on-year inflation rates contain information about the inflation curve's auto-correlation structure, we can use these to infer information about the inflation driver's mean reversion structure, see Figures 1 and 2. Hence for each maturity we can calibrate to two inputs, the year-on-year rate and the corresponding cap/floor, by solving for the two unknowns, i.e. the mean reversion and the volatility. The calibrated parameters are plotted in Figure 3.

Figure 3. Inflation model parameters calibrated to year-on-year inflation rates and vanilla 0% year-on-year floors.

Conclusion

In this article we have discussed a flexible and parsimonious inflation modelling framework, structurally similar to the Jarrow-Yildirim model, but which offers several advantages over that approach. First, it avoids the need to translate observable market data into unobservable correlations and real rate volatilities. Not only does this allow the user to hedge inflation derivatives based on market quantities, it also reduces model and parametrization risks. Secondly, by suitably choosing the different drivers' correlation, volatility and mean reversion strength, the model allows for a highly flexible calibration to the inflation curve's desired auto-correlation structures. Finally, one can show that the inflation model discussed here incorporates the Jarrow-Yildirim model as a nested special case, but in general allows for more flexible calibrations, and avoids the need to translate observable market data into unobservable correlations and real rate volatilities.

References

Belgrade, N., E. Benhamou, E. Koehler, and M. Mohammed. “A market model for inflation.” http://ssrn.com/abstract=576081, 2004.

Brigo, D. and F. Mercurio. Interest Rate Models - Theory and Practice. Springer Finance, 2006.

Dodgson, M. and D. Kainth. “Inflation-linked derivatives.” http://www.quarchome.org/inflation/InflationLinkedDerivatives20060908.pdf, 2006.

Fisher, I. The Theory of Interest. The Macmillan Company, 1930.

Hull, J. and A. White. “One factor interest rate models and the valuation of interest rate derivative securities.” Journal of Financial and Quantitative Analysis, 28.2 (1993).

Jäckel, P. and J. Bonneton. Inflation Products and Inflation Models (Encyclopedia of Quantitative Finance). John Wiley and Sons, 2008.

Jarrow, R. and Y. Yildirim. “Pricing treasury inflation protected securities and related derivatives using an HJM model.” Journal of Financial and Quantitative Analysis, 38.2 (2003): 409-430.

Plat, R. “An alternative pricing model for inflation.” 2008.


Econometrics

Using a Markov Switching Approach for Currency Crises Early Warning Systems: an Evaluation Framework by: Elena-Ivona Dumitrescu

Currency crises are phenomena that are increasingly present worldwide as the globalization process intensifies. For instance, emerging markets are suffering from currency crisis effects as a consequence of the recent global financial crisis. This fact should stimulate economists to improve the efficiency of Early Warning Systems (hereafter EWS), so that the authorities may take measures in order to prevent or at least attenuate the effects of these phenomena. Generally speaking, we deal with a currency crisis when investors flee a currency en masse out of fear that it might be devalued, in turn fueling the very devaluation they anticipated (P. Krugman).1 At the same time, an EWS is a mixture of three elements: a definition of a currency crisis, empirical models which return crisis probabilities, and evaluation criteria and comparison tests. Nevertheless, most of the currency crisis literature is focused on the construction of EWS, neglecting the evaluation and the identification of the model that best recognizes crisis and calm periods.

Introduction In this paper we aim at identifying an optimal EWS specification based on the Markov switching framework. For this reason we use both performance assessment criteria and comparison tests. Besides, we scrutinize the role of market expectation variables in the construction of the EWS. Considering several Markov switching specifications for six Latin-American and six South-Asian countries, we find that the Markov switching model with market expectation variables and spread switching outperforms the other specifications, showing that the choice of model specification and macroeconomic indicators may have a positive impact on the efficiency of an EWS.

A Currency Crisis Dating Method Following the results of Lestano and Jacobs (2004), we have decided to use the KLR modified pressure index in order to identify crisis and calm periods:

$KLRm_{n,t} = \frac{\Delta e_{n,t}}{e_{n,t}} - \frac{\sigma_e}{\sigma_r}\frac{\Delta r_{n,t}}{r_{n,t}} + \frac{\sigma_e}{\sigma_i}\Delta i_{n,t},$

where en,t denotes the exchange rate (units of country n currency per US dollar in period t), rn,t represents the foreign reserves of country n in period t, while in,t is This statement is based on reports from Stephen Jen, the chief currency strategist at Morgan Stanley.

1

the interest rate in country n at time t. Meanwhile, the standard deviations σx are the ones of the relative changes in the variables σ X n ,t / X n ,t , where X denotes each variable at a time: the exchange rate, the foreign reserves, and the interest rate, and ΔXn,t = Xn,t – Xn,t-6. For both subsamples the threshold equals two standard deviations above the mean. If the crisis pressure index goes beyond the threshold there is a crisis in period t, and the crisis variable takes the value of one, while in the opposite case it takes the value of 0. Thus, we define yn,t as country n’s crisis dummy variable taking the value of one if there will be a crisis in the following 24 months and taking the value of 0 otherwise:

yn,t

1, if  = 0, 

24 j =1

Crisisn ,t + j > 0

Otherwise

This binary variable is at the basis of the construction

Elena-Ivona Dumitrescu

After obtaining a BSc in Statistics and Economic Forecasting in Romania (2007), Elena went to Orleans University as an Erasmus student. She recently obtained her Master's degree in Econometrics and Applied Statistics, and she is currently a first-year PhD student in Econometrics at Orleans University, France, and Maastricht University, the Netherlands. Her research topic is Early Warning Systems (EWS), with a current focus on currency crisis EWS.




A Markov switching framework

The Markov switching framework does not need a prior identification of crises and imposes fewer distributional assumptions than discrete-choice models. The basic model is defined as follows (Hamilton, 1988):

$$y_t = \mu_{S_t} + x_t \beta_{S_t} + \varepsilon_t,$$

where y_t is the chosen pressure index, x_t is the matrix of macroeconomic indicators, and ε_t is N.i.i.d.(0, σ²_{S_t}). S_t is a latent variable that follows a first-order two-state Markov chain {S_t}_{t=1}^T, where S_t = 0 if there is a crisis and S_t = 1 if not. For the ex-post identification of the two regimes, Abiad (2003) considers the crisis (resp. tranquil) regime to be the one with the higher (resp. lower) volatility. In the rest of the paper we concentrate on the constant transition probability Markov model with switching in regime, in regressors and in variance. The model is estimated by Maximum Likelihood, as described by Hamilton (1988). For each state the conditional mean and the deviation from it are computed. Next, the normal probability density η_t for each regime can be obtained. Given initial values of the parameters (μ0, β0) and of the conditional probability for each regime, ξ0, we can iterate from t = 1 to T on the following equations:

$$\hat{\xi}_{t|t} = \frac{\hat{\xi}_{t|t-1} \odot \eta_t}{\mathbf{1}'\big(\hat{\xi}_{t|t-1} \odot \eta_t\big)}, \qquad (1)$$

$$\hat{\xi}_{t|t-1} = P\,\hat{\xi}_{t-1|t-1}, \qquad (2)$$

where ⊙ denotes element-by-element multiplication and 1 is the unit vector. Equation (1) gives the filtered probabilities Pr(S_t = i | Ω_t) for each state, while equation (2) gives the forecasted probabilities of being in a state in the next period, Pr(S_t = i | Ω_{t-1})¹. The conditional normal density η_t and the filtered probabilities allow us to compute the conditional log likelihood of the observed data:

$$L(\theta) = \sum_{t=1}^{T} \log\Big(\mathbf{1}'\big(\hat{\xi}_{t|t-1} \odot \eta_t\big)\Big).$$

Aiming at foreseeing crises within a certain horizon (24 months in our case), we construct a series of 24-month-ahead forecasts by estimating the probability of observing at least one crisis in the next 24 periods as follows (Arias and Erlandsson, 2005):

$$\Pr(S_{t+1,\ldots,t+24} = 0 \mid \Omega_t) = 1 - \Pr(S_{t+1,\ldots,t+24} = 1 \mid \Omega_t) = 1 - \big\{P_{01}\,P_{11}^{23}\,\Pr(S_t = 0 \mid \Omega_t) + P_{11}^{24}\,\Pr(S_t = 1 \mid \Omega_t)\big\}, \qquad (3)$$

where P01 and P11 are constant transition probabilities.
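The filtering recursion in equations (1)-(2), the likelihood and the 24-month-ahead probability (3) can be written in a few lines. The sketch below is a minimal illustration for a two-state model with a switching mean and variance only (no regressors); the parameter names, the transition-matrix convention and the absence of regressors are illustrative simplifications, not the estimation code used in the paper.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu, sigma, P, xi0):
    """Filtered probabilities and log likelihood of a two-state Markov switching model.

    y: (T,) observations; mu, sigma: (2,) state-dependent mean and std;
    P: (2, 2) transition matrix with P[i, j] = Pr(S_t = j | S_{t-1} = i);
    xi0: (2,) initial state probabilities.
    """
    T = len(y)
    xi_filt = np.zeros((T, 2))
    loglik = 0.0
    xi_pred = xi0
    for t in range(T):
        eta = norm.pdf(y[t], loc=mu, scale=sigma)   # conditional densities per regime
        num = xi_pred * eta                         # equation (1), numerator
        denom = num.sum()
        xi_filt[t] = num / denom                    # filtered probabilities Pr(S_t | Omega_t)
        loglik += np.log(denom)                     # contribution to L(theta)
        xi_pred = P.T @ xi_filt[t]                  # equation (2): forecast for the next period
    return xi_filt, loglik

def crisis_prob_24(xi_t, P01, P11):
    """24-month-ahead crisis probability, equation (3), from filtered probabilities at t."""
    return 1.0 - (P01 * P11**23 * xi_t[0] + P11**24 * xi_t[1])
```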

How can we evaluate and compare different EWS specifications?

Since we estimate several models, a statistical framework is indispensable for analyzing the forecasting performance of an EWS and choosing the optimal specification. In a first step we use performance assessment criteria to compare the crisis probabilities produced by the EWS models with the actual occurrence of crises; in a second step we identify the best model specification² by implementing Clark and West's (2007) comparison test for nested models. We base our analysis on the two most widely used indicators in the currency-crisis EWS literature, the Quadratic Probability Score and the Log Probability Score. The Quadratic Probability Score (QPS) is a mean squared error measure that compares the predicted probabilities of the two states (crisis/non-crisis) with the realized crisis indicator. It is defined as:

$$QPS = \frac{1}{T}\sum_{t=1}^{T} 2\,(P_t - C24_t)^2,$$

where P_t is the estimated probability of crisis at time t and C24_t is the realization of the crisis event at time t. QPS takes values from 0 to 2, with 0 being perfect accuracy. The Log Probability Score (LPS) is a loss function that penalizes large errors more heavily than the QPS:

$$LPS = -\frac{1}{T}\sum_{t=1}^{T}\big[(1 - C24_t)\ln(1 - P_t) + C24_t \ln(P_t)\big].$$

It ranges from 0 to ∞, with LPS = 0 being perfect accuracy. In order to implement Clark and West's (2007) test, let model 1 be the restricted model and model 2 the larger one, which reduces to model 1 when some of its parameters are set to 0. The sample size is T and the k-step-ahead forecasts of the two models are denoted ŷ_{1,t+k} and ŷ_{2,t+k}. The null hypothesis is equal MSPE, while the alternative is that the unrestricted model has a smaller

¹ Often we are interested in forming an inference about the true regime at date t based on observations obtained through a later date T, denoted ξ̂_{t|T}. These are referred to as "smoothed" probabilities, Pr(S_t = i | Ω_T), and they are given by Kim's (1994) algorithm: ξ̂_{t|T} = ξ̂_{t|t} ⊙ {P′[ξ̂_{t+1|T} ⊘ ξ̂_{t+1|t}]}, where ⊘ denotes element-by-element division.
² Moreover, we perform a sensitivity analysis of our models by replacing the filtered probabilities with the smoothed ones in equation (3) and applying the same performance assessment criteria and comparison test to these new results.







Table 1. Evaluation criteria for the three model specifications

               Simple Markov            Markov with Market       Markov with Market Expectation
                                        Expectation Variables    Variables and spread switching
Country        QPS         LPS          QPS         LPS          QPS         LPS
Argentina      1.0728271   1.3797654    1.4496557   2.9443765    1.4924517   3.5576488
Brazil         1.5702479   14.462518    1.2204401   2.1207162    0.7651425   1.0572604
Indonesia      1.510376    3.4354281    0.8663157   0.9784603    0.5673435   0.6076446
Korea          1.7246418   7.9123129    0.525106    0.6478694    0.68247     0.9097558
Malaysia       1.5072766   3.7163905    1.1898159   2.1895865    1.1848449   2.1650958
Mexico         1.7950411   3.2302817    1.5510451   2.1827321    1.6581049   2.5267824
Peru           1.2090568   1.9693428    1.3381246   2.7377585    1.1924465   1.9393026
Philippines    1.0683215   1.730428     1.1426814   2.0406641    1.0805968   1.7563993
Taiwan         1.4599955   3.0608666    1.4676255   3.0851527    1.467622    3.0850802
Thailand       0.8972826   1.2685974    1.217813    1.9414385    1.5113241   2.7105663
Uruguay        1.510376    3.4354281    1.5481259   6.4607424    1.1859688   2.0235655
Venezuela      1.4861916   8.6029436    1.483826    5.2932221    1.4540767   3.3612616

Note: QPS ranges from 0 to 2, 0 being perfect accuracy, while LPS ranges from 0 to ∞, 0 being perfect accuracy.

MSPE than the restricted one, i.e. it performs better. Consequently, we can compute the Clark and West (2007) MSPE-adj. statistic as:

$$MSPE\text{-}adj. = \frac{\sqrt{T}\,\bar{f}}{\sqrt{\hat{V}}},$$

where $\hat{f}_{t+k} = (y_{t+k} - \hat{y}_{1,t+k})^2 - \big[(y_{t+k} - \hat{y}_{2,t+k})^2 - (\hat{y}_{2,t+k} - \hat{y}_{1,t+k})^2\big]$, f̄ is the sample average of f̂_{t+k} and V̂ is the sample variance of f̂_{t+k} − f̄. This one-sided test uses critical values from the standard normal distribution.

Data

In this paper we use monthly data in US dollars for 12 countries³, obtained from the IMF-IFS database or the national banks of the countries through Datastream. Following Lestano et al. (2003) we consider the following economic variables: the one-year growth rate of international reserves, M2 to foreign reserves, the one-year growth of M2 to foreign reserves, the one-year growth of the M2 multiplier, the one-year growth of domestic credit over GDP, the real interest rate, the lending rate over the deposit rate, the one-year growth of real bank deposits, and the one-year growth of industrial production. In addition, we test the capability of the market expectation indicators to explain the occurrence of

currency crises by introducing the yield spread and the stock market price index into the model. We define the term spread as the difference between the money market rate and the long-term government bond rate. Moreover, we reduce the impact of extreme values as in Kumar et al. (2003).

Empirical results

As mentioned above, the aim of this paper is to find the specification that best identifies crisis and calm periods. To this end, we develop three different specifications: a simple Markov model (ms), a Markov model with market expectation variables (mme) and a Markov model with market expectation variables and spread switching (mmess). It turns out that the three models perform similarly, even though the Markov model with market expectation variables and spread switching seems to perform best (see table 1). To be more precise, using Clark and West's test for nested models we find countries such as Indonesia, Korea, Malaysia and Mexico for which the Markov model with market expectation variables beats the simple Markov model, and countries such as Brazil, Indonesia, Peru and Uruguay for which this Markov model with market expectation variables is in turn overtaken by the Markov model with market expectation variables and spread switching (see table 2). At the same time, for countries such as Brazil, Indonesia, Korea, Malaysia and Uruguay, the Markov model with market expectation variables and spread switching is better than the

³ From South America: Argentina, Brazil, Mexico, Peru, Uruguay and Venezuela; from South-East Asia: Indonesia, South Korea, Malaysia, the Philippines, Taiwan and Thailand.
⁴ Similar results have been obtained when using smoothed probabilities instead of filtered ones, confirming the robustness of our results to changes in methodology.
⁵ These results are available upon request.




Table 2. Clark-West (2007) comparison tests for the three specifications

               ms vs. mme                 ms vs. mmess               mme vs. mmess
Country        statistic      p-value     statistic      p-value     statistic      p-value
Argentina     -13.06947775    1.0000     -13.36934818    1.0000      -6.774018499   1.0000
Brazil          9.66941179    0.0000      13.88708056    0.0000       8.083313531   0.0000
Indonesia      27.7988026     0.0000      26.80603391    0.0000      19.00302815    0.0000
Korea          22.34687596    0.0000      23.54917316    0.0000      -0.713119885   0.7621
Malaysia        9.86217511    0.0000       9.971694541   0.0000       4.847530069   0.0000
Mexico         38.71671458    0.0000      23.10545564    0.0000     -26.95338974    1.0000
Peru           -8.196028908   1.0000       1.932649864   0.0266       9.728101998   0.0000
Philippines    -9.610889578   1.0000      -2.870534863   0.9980      11.10216073    0.0000
Taiwan         -3.827343396   0.9999      -3.823517317   0.9999       1.405783953   0.0799
Thailand      -12.70580883    1.0000     -14.16817917    1.0000      -7.762337219   1.0000
Uruguay       -17.90946712    1.0000       7.656744205   0.0000       8.185765825   0.0000
Venezuela      17.10201983    0.0000      17.23018426    0.0000      17.20915399    0.0000

Note: The null hypothesis of the Clark-West test is equal predictive performance of the two models; the alternative is that the unconstrained (larger) model is better. Under the null, the MSPE-adj. statistic follows a standard normal distribution, with a one-sided 5% critical value of 1.645. Statistics exceeding this value are significant at the 5% level.

simple Markov model (see table 2)⁴. Additionally, we notice that for Asian countries such as Indonesia, Korea, the Philippines and Thailand the market expectation variables are significant, highlighting the idea that the Asian currency crisis, unlike the Latin-American one, is a self-fulfilling crisis⁵. To sum up, we can say that the Markov model with market expectation variables seems the best of our specifications, proving that the second generation crisis model is adequate in explaining the occurrence of currency crises, and that market expectation variables should be taken into consideration more often when constructing an EWS.

Conclusion

This paper proposes a statistical framework to identify the optimal EWS model, i.e. the one that best recognizes crisis and calm periods. Considering a Markov switching framework, we use the Quadratic Probability Score and the Log Probability Score as criteria to assess the forecasting performance of the EWS models. Furthermore, we implement the Clark-West (2007) test for nested models to conclude whether one specification outperforms another. We find that the optimal EWS is based on a Markov model with market expectation variables and spread switching, and that this finding is robust to sensitivity analysis. In addition, we have shown that forward-looking variables, such as the yield spread or the stock market price index, help explain the emergence of currency crises. Nevertheless, our results are conditional on methodological choices. Hence, a deeper empirical analysis, including more countries, longer time series, other market expectation variables and/or a Time-Varying Markov Switching framework, seems appropriate. We leave these issues for future research.

References

Arias, Guillaume and Ulf G. Erlandsson. "Improving early warning systems with a Markov Switching model - an application to South-East Asian crises." Working Paper (2005).

Clark, Todd E. and Kenneth D. West. "Approximately normal tests for equal predictive accuracy in nested models." Journal of Econometrics, 138.1 (2007):291-311.

Hamilton, James D. "Rational-Expectations Econometric Analysis of Changes in Regime: An Investigation of the Term Structure of Interest Rates." Journal of Economic Dynamics and Control, 12 (1988):385-423.

Kumar, Mohan, Uma Moorthy and William Perraudin. "Predicting emerging market currency crashes." Journal of Empirical Finance, 10 (2003):427-454.

Lestano and Jan Jacobs. "A comparison of currency crisis dating methods: East Asia 1970-2002." CCSO Working Papers, 200412 (2004), University of Groningen, CCSO Centre for Economic Research.

Jacobs, Jan, Gerard H. Kuper and Lestano. "Indicators of financial crises do work! An early warning system for six Asian countries." (2003) University of Groningen, CCSO Centre for Economic Research.




Econometrics

Estimation and Simulation of Copulas: With an Application in Scenario Generation by: Bas Tammens

Economic capital is used as an indicator for the business to determine the capital needs of a financial institution. In addition, various rating agencies are interested in these calculations, because the probability of default of a financial institution is linked to the quantile of the aggregate loss function on which the economic capital calculations are based. This is important to investors, because in this way they can determine whether the financial institution is making enough return for the risk they are taking as an investment, and financial institutions can be compared. Furthermore, the supervisory authority (De Nederlandsche Bank in the Netherlands) keeps a close watch on these numbers. They want to know how well the financial institutions are doing, especially since the credit crunch and the accompanying credit the government provided to the financial institutions. Last but not least, economic capital is used to determine the performance of the business internally within a financial institution, for example for the Risk Adjusted Return On Capital (RAROC). In order to calculate economic capital, economic scenarios for the coming year are needed. For these economic scenarios the consequences for the balance sheet are calculated and the economic capital corresponds to a certain quantile of the loss function. This article discusses the outcomes of research (for a thesis) conducted at SNS REAAL in cooperation with Zanders. The following subjects are addressed in this article:

• Main question of the research
• Economic capital
• Structure of a multivariate density function
• Making a tailor-made distribution function
• Estimating the correlation matrix in a copula
• Consequences for the economic capital

Economic capital and main question

The economic capital department at SNS REAAL uses a multivariate Student's t-distribution to simulate scenarios. They wanted to know to what extent the current approach fails and how it can be improved. The answer to the first part of this question is that, to use a multivariate t-distribution, all the random variables have to be Student's t-distributed with the same degrees of freedom parameter ν, and the copula, which describes the dependence structure, has to share this parameter as well. This is a stringent assumption, because the variables are stock indices, interest rates and macroeconomic variables. It is not very likely that all those variables are individually distributed with the same degrees of freedom parameter. This can be solved by constructing a tailor-made multivariate distribution function, which is described in the following section.

Tailor-made multivariate distribution function

To obtain a tailor-made multivariate distribution function, it is necessary to fit a univariate distribution function to each of the series separately. In the second step the cumulative distribution functions of these marginal distributions can be used to transform the variables to the k-dimensional unit hypercube. On this space, the dependence structure is described by a copula. For an introduction to copulas, see for example the books of Nelsen (2006) and Joe (2001) or the articles of Romano (2002) and Demarta and McNeil (2005). To understand how to obtain a tailor-made multivariate distribution, the main structure of such a distribution is stated here:

Bas Tammens

Bas Tammens finished his thesis in Econometrics in November 2009 at the University of Amsterdam, under the supervision of Noud van Giersbergen and Cees Diks. He did his research for his thesis at the economic capital division of SNS REAAL. In December he starts as a consultant at Zanders Treasury and Finance Solutions. He is going to work in the field of treasury IT in combination with risk management.




$$f(x_1,\ldots,x_k) = f_1(x_1)\cdots f_k(x_k)\cdot c\big(F_1(x_1),\ldots,F_k(x_k)\big),$$

where the left side of the equation denotes a k-dimensional distribution function defined on the outcome space of the continuous random variables X1,…,Xk. The right side of the equation consists of two parts: the product of k univariate distribution functions f1(),…,fk() for the individual variables (these can all be different) and the copula density function c(), which describes the dependence between the individual variables and is defined on the k-dimensional unit hypercube. To get to this space the cumulative distribution functions of all the individual variables are used. By the Probability Integral Transform, this data is uniformly distributed when the marginal distributions are correctly specified (for a discussion of the consequences of misspecified marginals see, for example, Fantazzini, 2009). This is a necessary condition for the copula to be estimated consistently. To grasp the concept of a copula a little better, think of the situation in which the copula function equals 1: the multivariate distribution function then consists of the product of the marginal distributions, which means that the variables are assumed to be independent. The copula can describe many types of dependence, such as nonlinearity. In this research only linear dependence and tail dependence are discussed. These types of dependence can be described by the Gaussian and the t-copula. These two copulas can be derived from the multivariate normal (Gaussian) and multivariate Student's t-distribution by making use of Sklar's theorem (1959). The parameters in the Gaussian copula are the correlation coefficients between the variables. For a k-dimensional copula, the correlation matrix consists of k(k-1)/2 non-diagonal entries. The correlation matrix determines the linear dependence between all the random variables pairwise. The t-copula consists of a correlation matrix and a degrees of freedom parameter ν, so it has one parameter more than the Gaussian copula. This degrees of freedom parameter describes the tail dependence between two variables:

$$\lambda_U = \lim_{\alpha \uparrow 1} P\big[Y > G^{-1}(\alpha) \mid X > F^{-1}(\alpha)\big].$$

This measure is thus a limiting case of simultaneous extreme events. When the degrees of freedom parameter is higher, the tail dependence between the variables is lower. The Gaussian copula is nested in the t-copula: when the degrees of freedom parameter ν of the t-copula goes to infinity, it is equivalent to the Gaussian copula. To estimate the multivariate distribution function, the marginal distributions and the copula can be estimated separately. This method is known as the Inference For Margins (IFM) method. In this article the focus is on the estimation of the copula; the estimation of the marginals is left out and they are assumed to be specified correctly. The main discussion is on how to estimate the correlation matrix in the copula.

Estimating a correlation matrix in the copula

First the estimation of the correlation matrix of the Gaussian copula is discussed. The correlation matrix of the Gaussian copula should be based on the correlation between the xi's, where the vector of xi's can be written as a function of the uniformly distributed data: x = (Φ⁻¹(u1),…,Φ⁻¹(uk)). So the correlation should be based pairwise on the linear (Pearson) correlation coefficient between Φ⁻¹(ui) and Φ⁻¹(uj), where i does not equal j. This straightforward method captures more information than the method using Kendall's tau, which is used in many applications in the literature; see for example Luo and Shevchenko (2007). Secondly, this approach is also applied to obtain the correlation matrix for the t-copula. Again, the correlation should be based on the xi's. The matrix of xi's in this case can be written as a function of the uniformly distributed data: x = (t⁻¹ν(u1),…,t⁻¹ν(uk)). The correlation matrix should be determined from the bivariate linear correlation coefficients between t⁻¹ν(ui) and t⁻¹ν(uj), where i does not equal j. The problem now is that the degrees of freedom parameter ν is not known a priori. Therefore, an iterative method is proposed to determine the estimate of this parameter together with the correlation matrix, which is partially based on this parameter, and so to construct a consistent estimator for the correlation matrix.

1. ν* is the starting value for the iterative procedure. This initial value could be based on the estimation using Kendall's tau, but because this is a function of the second degree, the starting value does not influence the results; it only affects the estimation time.
2. Define (Zi, Zj) = (t⁻¹(ui), t⁻¹(uj)), where t⁻¹() is the inverse cumulative distribution function of the Student's t-distribution with degrees of freedom parameter ν*.
3. R*ij = ρ(Zi, Zj), where ρ is the linear (Pearson) correlation.
4. Now estimate the degrees of freedom parameter with maximum likelihood to obtain the estimate ν̂.



5. When |ν̂ − ν*| > ε (the stopping criterion), the estimation procedure starts over with ν* = ν̂ as the new starting value. Here ε is a small positive real number that can be chosen arbitrarily; the trade-off in this choice is between estimation speed and accuracy.

This (parametric) method has proved to yield higher values of the likelihood function than the method based on Kendall's tau. This is no surprise, because the latter is a (nonparametric) rank correlation measure: it only takes into account the order of the random variables rather than their distance. When a practitioner is worried about outliers in his data, the Kendall's tau method might provide an acceptable alternative, because it is insensitive to outliers.
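A compact implementation of this iterative procedure could look as follows. It is a sketch under simplifying assumptions: the degrees of freedom parameter is estimated by maximizing the t-copula log-likelihood over a grid rather than by a full maximum likelihood routine, the uniform data is assumed to lie strictly inside the unit hypercube, and the function names are illustrative.

```python
import numpy as np
from scipy import stats

def t_copula_loglik(u, corr, nu):
    """Log-likelihood of a t-copula with correlation matrix `corr` and df `nu`,
    evaluated at uniform data u of shape (T, k)."""
    z = stats.t.ppf(u, df=nu)                                # transform to t-quantiles
    mvt_logpdf = stats.multivariate_t(loc=np.zeros(corr.shape[0]),
                                      shape=corr, df=nu).logpdf(z)
    marg_logpdf = stats.t.logpdf(z, df=nu).sum(axis=1)       # subtract the marginal densities
    return np.sum(mvt_logpdf - marg_logpdf)

def fit_t_copula(u, nu_start=10.0, nu_grid=np.arange(2.5, 60.5, 0.5), tol=0.25):
    """Iterate between the correlation matrix and the df parameter (steps 1-5)."""
    nu = nu_start
    while True:
        z = stats.t.ppf(u, df=nu)                            # step 2
        corr = np.corrcoef(z, rowvar=False)                  # step 3: pairwise Pearson correlation
        ll = [t_copula_loglik(u, corr, g) for g in nu_grid]  # step 4: ML over a coarse grid
        nu_hat = nu_grid[int(np.argmax(ll))]
        if abs(nu_hat - nu) <= tol:                          # step 5: stopping criterion
            return corr, nu_hat
        nu = nu_hat
```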

Consequences for economic capital

In the research it has been observed that, when dealing with small data sets, there is quite some uncertainty in the estimate of the degrees of freedom parameter of the t-copula. This makes it hard to conclude which parameter value would be suitable for simulating scenarios. The (large) uncertainty in this parameter raises the question whether it would not be better to simply estimate the correlation matrix of the Gaussian copula and dismiss the t-copula as an improvement. For this reason a simulation study is performed on the consequences of estimating the parameters of the Gaussian and the t-copula when the data generating process (DGP) is known, in order to see what the consequences are for the estimate of the economic capital. The set-up is as follows. From each DGP in the left column of table 1, a sample of size 141 (the size of the data set under consideration in the research) is simulated. Secondly, the parameters of both the Gaussian and the t-copula are estimated. To obtain the quantile estimates, 50,000 scenarios are generated with the corresponding parameters of the Gaussian and the t-copula. For these scenarios the economic capital is calculated and the quantiles are determined. This is repeated 20 times for every DGP. The results are presented in table 1.

Table 1. Results of a simulation study performed to determine the consequences of estimating the wrong copula.

                                                Economic capital at percentile
DGP               Copula estimated                 90%     95%     99%     99.96%
t-copula, ν=5     t           Mean                 19.7    26.1    45.1    100.0
                              Standard error        0.1     0.3     1.6      8.6
                  Gaussian    Mean                 19.1    24.5    38.0     66.7
                              Standard error        0.1     0.2     0.7      2.1
t-copula, ν=15    t           Mean                 19.6    25.7    42.1     85.7
                              Standard error        0.0     0.1     0.4      1.9
                  Gaussian    Mean                 19.4    25.0    39.6     71.3
                              Standard error        0.0     0.1     0.2      0.5
Gaussian copula   t           Mean                 19.3    24.8    38.8     70.5
                              Standard error        0.1     0.1     0.4      1.1
                  Gaussian    Mean                 19.3    24.9    39.1     70.9
                              Standard error        0.0     0.1     0.2      0.5

From the results presented in table 1 it can be seen that the economic capital differs depending on the known DGP and on whether the Gaussian or the t-copula is estimated. The following remarks can be made:

• The main conclusion is that it matters for the economic capital which copula is specified to model the data. The economic capital is higher when the degrees of freedom parameter is lower while the correlation matrix of the DGP is held constant, as expected.
• When the Gaussian copula is estimated and the DGP is the t-copula with df=5, the economic capital at every percentile lies lower than in the cases of the two other DGPs with an estimated Gaussian copula. This is remarkable, because this data generating process produces more observations in the tails. This leads to the conclusion that the Gaussian copula does not model the data well and, even worse, it underestimates the economic capital in comparison with the DGPs that have supposedly fewer observations in the tails.
• Especially the estimates of the 99% and the 99.96% quantiles reflect the properties that are expected. The estimates of the economic capital at those quantiles are significantly different for the three different models when the correct model is specified. This is a convenient result, because it tells us that it is possible to discriminate between the Gaussian and the t-copula with different values for the degrees of freedom parameter.
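To make the scenario-generation step concrete, the sketch below draws scenarios from a Gaussian or t-copula with a given correlation matrix and reads off the loss quantile used as economic capital. The marginal loss mapping, the correlation matrix and all parameter values are illustrative assumptions; only the copula simulation logic follows the text.

```python
import numpy as np
from scipy import stats

def simulate_copula(corr, n_scen, nu=None, seed=0):
    """Draw uniform scenarios from a Gaussian copula (nu=None) or a t-copula."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n_scen, corr.shape[0])) @ L.T
    if nu is None:
        return stats.norm.cdf(z)                          # Gaussian copula
    w = rng.chisquare(nu, size=(n_scen, 1)) / nu          # chi-square mixing variable
    return stats.t.cdf(z / np.sqrt(w), df=nu)             # t-copula

def economic_capital(u, marginal_loss, q=0.9996):
    """Aggregate scenario losses and return the q-quantile (economic capital)."""
    losses = marginal_loss(u).sum(axis=1)                 # map uniforms to losses per risk factor
    return np.quantile(losses, q)

# illustrative use: two risk factors with standard-normal loss marginals
corr = np.array([[1.0, 0.6], [0.6, 1.0]])
u = simulate_copula(corr, n_scen=50_000, nu=5)
ec = economic_capital(u, lambda u: stats.norm.ppf(u))
```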




Conclusion

The main conclusions that can be drawn are the following:

• A multivariate (Student's t) distribution can be replaced by a tailor-made distribution. This can be done by examining the marginal distributions, which describe the properties of the individual random variables, separately from the copula, which describes the dependence structure.
• Estimation of the correlation matrix can be done in several ways. Many applications in the literature use a method based on Kendall's tau (rank correlation) and an asymptotic equality with the linear correlation. In finite samples, this method proves to provide a lower likelihood in the MLE than the method proposed in this research.
• For the economic capital it matters what the DGP is and which copula is estimated. The Gaussian copula drastically underestimates the economic capital when the DGP is a t-copula with a low value for the degrees of freedom parameter. This is due to the fact that it cannot capture the tail dependence property (simultaneous extreme events). This means that, although the parameter estimate is not very robust, it is still preferable to use the t-copula in the modeling of the data.

References

Demarta, S. and A. McNeil. "The t Copula and Related Copulas." International Statistical Review, 73.1 (2005):111-129.

Fantazzini, D. "The effects of misspecified marginals and copulas on computing the value at risk: A Monte Carlo study." Computational Statistics & Data Analysis, 53.6 (2009):2168-2188.

Joe, H. Multivariate Models and Dependence Concepts. Chapman & Hall/CRC, 2001.

Luo, X. and P.V. Shevchenko. The t Copula with Multiple Parameters of Degrees of Freedom: Bivariate Characteristics and Application to Risk Management. Available at SSRN: http://ssrn.com/abstract=1023491, 2007.

Nelsen, R.B. An Introduction to Copulas. Springer, 2006.

Romano, C. "Applying Copula Function to Risk Management." University of Rome "La Sapienza", working paper, 2002.

Sklar, M. "Fonctions de répartition à n dimensions et leurs marges." Publ. Inst. Statist. Univ. Paris, 8 (1959):229-231.



Econometrics

Evaluating Analysts’ Performance: Can Investors Benefit from Recommendations by: Lennart Dek This article assesses the performance of analysts by examining whether investors can profit from their recommendations. It shows that it is possible to obtain abnormal returns by following an investment strategy based on recommendations. Analysts follow listed firms and release recommendations, which generally vary from buy to sell. However, it is not clear whether investors are able to profit from their advice. According to the efficient-market hypothesis prices reflect all publicly available information. Though this does not rule out the possibility that an immediate price reaction follows the release of a new recommendation, it does imply that no long-term effect can result from this initial shock. Private investors, incapable of acting instantaneously on this information, should therefore not be able to benefit from analysts’ advice. Empirical research on this subject has found evidence of a long-term relationship between the recommendation for a stock and its price, but so far has been unable to prove that it can be profitable to act upon analysts’ advice (Womack, 1996; Jegadeesh et al., 2004). Thus, the existing literature does not provide a conclusive answer to the question of whether investors can benefit from the advice of analysts. It can only be profitable to act upon analysts’ recommendations if a relationship exists between the recommendation for a stock and its price. Therefore, the research begins by attempting to find evidence of such a relationship. Subsequently, more advanced techniques are used to ascertain whether investors can benefit from this hypothesised relationship. Data on recommendations released between January 2006 and February 2009, made available by the website analist.nl, is used to examine the performance of analysts. The recommendations in the dataset come from analysts of several banks and concern stocks that were part of the two major indices of the Amsterdam Stock Exchange, the AEX and AMX, from the third of March 2009 onwards.

Recommendations as signal

Analysts update their recommendations on a regular basis. These updates, which from now on will be called replacements, form a signal to the market. An upgrade (which occurs when a new recommendation is higher than the previous one) and an iteration of a 'buy' recommendation can be interpreted as a positive signal, as the analyst who released it believes that the outlook is more positive than previously suggested or that the stock is still undervalued. A downgrade and an iteration of a 'sell' recommendation can in a similar fashion be regarded as a negative signal. If a relationship between price and recommendation really exists, one would expect the price of a stock to move simultaneously with the replacement of its recommendation. This implies that a positive signal coincides with a price increase, while a negative signal is accompanied by a price decrease. To establish whether this pattern occurs in reality, following Womack (1996) the market-adjusted average three-day return is calculated for all nine possible recommendation replacements. The results of this procedure are presented in table 1 (t-values in parentheses). All returns deviate significantly from zero at the five percent level in the direction of the recommendation replacement. Only the return that coincides with the iteration of a 'hold' recommendation does not differ from zero. However,

Lennart Dek

Lennart Dek obtained his Bachelor degree in econometrics (cum laude) at the University of Amsterdam. This article is a summary of his bachelor thesis, which he wrote under the supervision of dr. Noud van Giersbergen. He is now pursuing his master degree in econometrics at the same university.




Table 1. Market-adjusted average three-day return (%), t-values in parentheses

                                    To recommendation
From recommendation      Buy               Hold              Sell
Buy                      0.30  (2.78)     -2.50  (-4.44)    -2.95  (-10.19)
Hold                     1.44  (6.57)     -0.24  (-1.49)    -2.89  (-6.66)
Sell                     2.33  (3.21)      1.22  (3.90)     -1.57  (-4.61)

this is the only replacement from which no unambiguous signal can be deduced. These results provide evidence that there indeed exists a relationship between analysts' recommendations and the price of a stock. Although a relationship between price and recommendation has been established, so far nothing can be said about the causality of this relationship. It is possible that the replacements directly influence the price, or that analysts are able to accurately predict price movements. However, it could also be the case that replacements often coincide with events that have a major influence on prices, such as earnings announcements. If the results of table 1 are caused by the latter, recommendation replacements would not necessarily have predictive power with regard to price changes, but the two would have a common cause. This would imply that it is not possible to profit from the relationship, as investors are unaware of the exact dates on which recommendation replacements are released.
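A sketch of the three-day market-adjusted return calculation around replacement dates is given below. It assumes a pandas DataFrame of replacements and daily return series for the stocks and the market index, uses a symmetric window around the event day and sums daily returns over the window; the column names and these window choices are illustrative assumptions, not details taken from the paper.

```python
import pandas as pd

def three_day_market_adjusted(events, stock_returns, market_returns):
    """Average market-adjusted return over days t-1, t, t+1 per replacement type.

    events: DataFrame with columns ['ticker', 'date', 'replacement'],
    stock_returns: DataFrame of daily returns (index: date, columns: tickers),
    market_returns: Series of daily index returns on the same index.
    """
    adjusted = stock_returns.sub(market_returns, axis=0)         # market-adjusted daily returns
    out = []
    for _, ev in events.iterrows():
        pos = adjusted.index.get_indexer([ev["date"]])[0]        # event day assumed to be a trading day
        window = adjusted[ev["ticker"]].iloc[pos - 1 : pos + 2]  # three-day window around the event
        out.append({"replacement": ev["replacement"], "ret": window.sum()})
    res = pd.DataFrame(out)
    return res.groupby("replacement")["ret"].agg(["mean", "count"])
```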

Abnormal returns

A more advanced approach is thus required to assess whether investors can profit from recommendations. Following Barber et al. (2001) it is examined whether an investment strategy based on analysts' recommendations yields abnormal returns. Here abnormal returns are defined as returns that cannot be explained by market fluctuation and risk profile. The first step in the approach of Barber et al. is to determine the consensus recommendation of each stock. The consensus is an index that takes all outstanding recommendations into account; the result can be regarded as the average recommendation of all analysts that cover a particular stock. At the beginning of each trading day all stocks are placed into one of three different portfolios based on their consensus recommendation. The buy portfolio contains the most highly recommended stocks, while the sell portfolio consists of stocks that are on average the least preferred by analysts. All stocks in between are placed in the hold portfolio. Each day the value-weighted returns of all three portfolios are calculated in order to compare the performance of the portfolios. A sketch of this portfolio construction is given below.
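This is a minimal sketch of the daily consensus and portfolio assignment, assuming recommendations coded numerically (e.g. 1 = buy, 2 = hold, 3 = sell) in a DataFrame of outstanding recommendations per stock; the cut-off values and column names are illustrative choices, not those of the paper.

```python
import pandas as pd

def daily_consensus(outstanding):
    """Average outstanding recommendation per stock and day.

    outstanding: DataFrame with columns ['date', 'ticker', 'rec'], rec numeric
    (1 = buy ... 3 = sell), one row per outstanding recommendation.
    """
    return outstanding.groupby(["date", "ticker"])["rec"].mean().unstack()

def assign_portfolios(consensus, buy_cut=1.5, sell_cut=2.5):
    """Map each stock-day consensus to 'buy', 'hold' or 'sell'."""
    labels = pd.DataFrame("hold", index=consensus.index, columns=consensus.columns)
    labels = labels.mask(consensus <= buy_cut, "buy").mask(consensus >= sell_cut, "sell")
    return labels

def portfolio_returns(labels, returns, market_cap):
    """Daily value-weighted return of each portfolio."""
    out = {}
    for p in ["buy", "hold", "sell"]:
        w = market_cap.where(labels == p)            # market caps of stocks in portfolio p
        w = w.div(w.sum(axis=1), axis=0)             # value weights per day
        out[p] = (w * returns).sum(axis=1)
    return pd.DataFrame(out)
```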


A higher return of one portfolio relative to another does not necessarily imply that the former has outperformed the latter. It could also be the case that the first portfolio is riskier, so that a higher expected return is required to compensate for this risk. Therefore, comparing portfolios must be done by examining to what extent the realized returns deviate from the ones predicted by their risk profile, i.e. their abnormal returns. To determine whether the calculated returns are abnormal, the Fama-French Three Factor Model is used to correct for risk. Fama and French (1992) argue that the expected return of a portfolio depends upon its market sensitivity and the extent to which it is comprised of stocks of large firms and firms with high book-to-market ratios.

Influence of the credit crunch

The credit crunch has had a large influence on the stock market during part of the sample. It could also be the case that it has influenced analysts, so that one of the implicit assumptions of the Three Factor Model, a constant risk profile, is violated. This would translate itself into a structural break in the parameters of the model. The Quandt-Andrews test indeed indicates that a structural break occurred mid-2008. This break is modelled by adding all explanatory variables multiplied by a dummy variable, which is '0' prior to July 1st 2008 and '1' afterwards. The inclusion of a structural break is an addition to the approach of Barber et al. This research further deviates from Barber et al. by using daily instead of monthly data. Since high-frequency financial data often suffers from clustered volatility, this particular form of heteroskedasticity is modelled by using the GARCH(1,1) model (Bollerslev, 1986). The final model becomes:

Influence credit crunch The credit crunch has had a large influence on the stock market during part of the sample. It could also be the case that it has influenced analysts, so that one of the implicit assumptions of the Three Factor Model, a constant risk profile, is violated. This would translate itself into a structural break in the parameters of the model. The Quandt-Andrews test indeed indicates that a structural break occurred mid-2008. This break is modelled by adding all explanatory variables multiplied by a dummy (δ) variable, which is ‘0’ prior to July 1st 2008 and ‘1’ afterwards. The inclusion of a structural break is an addition to the approach of Barber et al. This research further deviates from Barber et al. by using daily instead of monthly data. Since high frequency financial data often suffers from clustered volatility, this particular form of heteroskedasticity is modelled by using the GARCH(1,1) model (Bollerslev, 1986). The final model becomes: Rtp −Rt f = α1 + β1 ( RtM −Rt f ) + γ1 SMBt + δ1 HMLt + d t BRt + εt BRt ≡ α2 + β2 ( RtM −Rt f ) + γ2 SMBt + δ2 HMLt Rtp represents the return on portfolio p on day t, Rt f M f is the risk free rate, RtM the market return and Rt − Rt the risk premium. BRt corrects for the structural break and εt is the disturbance term that follows a GARCH(1,1) process. β indicates how sensitive the portfolio is to market change: the higher this coefficient, the riskier the portfolio. A γ larger (smaller) than zero implies that the portfolio consists mostly of stocks of large (small) firms. A δ larger (smaller) than zero indicates that the stocks in the portfolio are firms with a high (low) book-to-market value on average. The intercept α should, according to economic theory, be equal to zero. If the estimate of the intercept deviates significantly from this value, the portfolio earns abnormal returns. Running regressions for all three portfolios provides estimates of all intercepts and coefficients, which are reproduced in table 2. Bold printed values again indicate



Table 2. Results based on regression analysis (Fama-French Three Factor Model with structural break and GARCH(1,1) errors). Entries are coefficient (standard error) and p-value.

                       Buy portfolio               Hold portfolio              Sell portfolio
α                       0.0002 (0.0001)  0.018     -0.0001 (0.0001)  0.271     -0.0006 (0.0003)  0.049
Rm − Rf                 1.0724 (0.0097)  0.000      0.8744 (0.0145)  0.000      0.9579 (0.0371)  0.000
SMB                    -0.0077 (0.0039)  0.048     -0.0146 (0.0064)  0.023      0.2365 (0.0137)  0.000
HML                    -0.0040 (0.0061)  0.517     -0.0175 (0.0088)  0.046      0.3019 (0.0164)  0.000
break · constant        0.0006 (0.0007)  0.404     -0.0005 (0.0004)  0.255     -0.0013 (0.0009)  0.175
break · (Rm − Rf)      -0.1313 (0.0318)  0.000      0.1573 (0.0220)  0.000     -0.0766 (0.0490)  0.118
break · SMB            -0.0010 (0.0173)  0.955     -0.0040 (0.0119)  0.733     -0.0854 (0.0255)  0.001
break · HML            -0.0033 (0.0155)  0.830      0.0093 (0.0127)  0.465     -0.2180 (0.0256)  0.000

In table 2, p-values below 0.05 indicate estimates that differ significantly from zero at the five percent level. Of special interest is the estimate of α1: the abnormal return. The results indicate that holding the buy portfolio yields a positive abnormal return, while the sell portfolio earns a negative abnormal return. The results do not provide any evidence that the return of the hold portfolio is abnormal. These findings imply that stocks preferred by analysts outperform those that are not. It is possible to profit from recommendations by buying a portfolio that consists of stocks with the highest consensus recommendation and selling the one with the lowest. However, it is important to note that transaction costs have not been taken into account. The abnormal returns of the buy and sell portfolio are a violation of the efficient-market hypothesis, which states that prices reflect all publicly available information. The dummy variables illustrate the effect of the credit crunch. The intercepts do not differ significantly between the period before the credit crunch and the period during it. However, the estimates of the market sensitivity change significantly for the buy and the hold portfolio. This implies that since the beginning of the crisis the buy portfolio has consisted of stocks that are less risky, while the hold portfolio has become more volatile. A logical explanation for this result is that many volatile and therefore risky stocks have been downgraded from buy to hold as a result of the crisis. The finding that the credit crunch has not affected the intercepts indicates that it has not influenced the performance of the portfolios and thus that of the analysts. The lowered recommendations of volatile stocks show that the credit crunch did affect analysts' preferences.
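As an illustration of how such a regression could be run in practice, the sketch below estimates the three-factor model with break-interacted regressors and GARCH(1,1) errors using the arch package's least-squares mean with exogenous regressors. It is a sketch under stated assumptions: the data layout, the break-date handling and the choice of estimation routine are illustrative, not the paper's exact estimation code.

```python
import pandas as pd
from arch import arch_model

def fit_three_factor_garch(excess_p, factors, break_date="2008-07-01"):
    """Fama-French three-factor regression with a structural break and GARCH(1,1) errors.

    excess_p: Series of daily portfolio excess returns (R_p - R_f),
    factors: DataFrame with columns ['MKT_RF', 'SMB', 'HML'] on the same date index.
    """
    d = (factors.index >= pd.Timestamp(break_date)).astype(float)   # break dummy
    X = factors[["MKT_RF", "SMB", "HML"]].copy()
    for col in ["MKT_RF", "SMB", "HML"]:
        X["break_" + col] = X[col] * d                               # break-interacted regressors
    X["break_const"] = d                                             # break in the intercept
    # least-squares mean with exogenous regressors and a GARCH(1,1) variance
    model = arch_model(excess_p, x=X, mean="LS", vol="GARCH", p=1, q=1)
    res = model.fit(disp="off")
    return res   # res.params contains the intercept, the factor loadings and the break terms
```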


Conclusion

Investors are able to attain abnormal returns by following a strategy in which the most highly recommended stocks are bought and stocks with the lowest consensus recommendation are sold. This means that investors can profit from analysts' advice. The credit crunch did not affect analysts' performance, but it did influence their preferences.

References

Andrews, D.W.K. "Tests for parameter instability and structural change with unknown change point." Econometrica, 61.4 (1993):821-856.

Barber, B., R. Lehavy, M. McNichols and B. Trueman. "Can Investors Profit from the Prophets? Consensus Analyst Recommendations and Stock Returns." The Journal of Finance, 56.2 (2001):531-563.

Bollerslev, T. "Generalized Autoregressive Conditional Heteroskedasticity." Journal of Econometrics, 31.3 (1986):307-327.

Fama, E. and K. French. "The Cross-Section of Expected Stock Returns." The Journal of Finance, 47.2 (1992):427-465.

Womack, K. "Do Brokerage Analysts' Recommendations Have Investment Value?" The Journal of Finance, 51.1 (1996):137-167.




Econometrics

Deregulation of the Casino Market: a Welfare Analysis by: Frank Pardoel

Looking at the European casino market, we observe a diversity of market regulations. Since the European Union announced its objective to implement a uniform policy for regulating the casino markets across its member states, several market forms have been proposed. Each market form is characterized by positive as well as negative effects. In the debate about the appropriate regulation of the casino market, several arguments are provided in favor of deregulation. For example, "Liberalization will lead to lower cost, lower prices and more innovation (...) There is no reason why the three types of beneficial effects of market liberalization would not be present in the gambling industry as well." (Van Damme (2007), p. 10) and "For the most part, the gaming and betting industries within the EU have become mature markets with slow growth or even stagnation." (Eadington (2006), p. 2). There are also arguments in favor of a market regime with government interference, like "Maintaining the state monopoly is necessary in order to deal with gambling addiction efficiently. Casinos under competition will maximize their profits and therefore attract as many consumers as possible, which increases the risk of gambling addiction." (Minister Hirsch Ballin, 2007)¹. There is no obvious optimal market form, because negative and positive effects occur hand-in-hand under the different regulation forms. Therefore a well-balanced analysis is necessary in order to shed some light on the current discussion regarding the optimal way of regulating the casino market in Europe, also described in Van Damme (2007) as a "careful balancing of the pros and cons associated with various policy options" (p. 12). In this study, a two-period consumption model is developed that describes the gambling behavior of consumers in the casino market. Using this model, the optimal prices set under different market regulation forms are then determined. With these characteristics in hand, the welfare measures and the consumer and producer surplus under the several market regimes can be compared. The optimal regulation form is the one that maximizes total welfare in the market, taking the negative side-effects into account.

¹ Source: "BA0670 Uitspraak Raad van State, 2.6.2.1".

Frank Pardoel This article is a summary of his thesis written under supervision of Dr. J. Tuinstra in order to obtain the master degree in Econometrics, track Mathematical Economics. The subject of his thesis is partly chosen because of his former employment at Holland Casino. Currently, Frank is a Retirement and Financial Management Consultant at Hewitt Associates B.V. and studies Actuarial Science and Mathematical Finance at the UvA.


The consumer gambling model

The starting point of our model is the consumer side of the economy. In order to properly specify the consumer side, we make a distinction between regular and problematic gamblers. Regular gamblers are consumers who enjoy gambling. They join the gambling game for the thrill and the ambiance, but always have the power to quit if necessary. Problem gamblers have problems controlling their gambling behavior, which leads to increasing gambling consumption: in order to obtain the same utility as before, more consumption of the same consumption good is needed. Our model includes an intrapersonal dynamic conflict for problem gamblers: they specify a consumption plan for the coming periods but feel the incentive to deviate as soon as they reach a future period, usually due to a lack of self-control, which has a negative effect on their utility. We use habit formation in combination with hyperbolic discounting to model the negative consumer effects of gambling. Habit formation models rational addiction, which implies that consumers are aware of their problematic behavior and that increasing consumption results in a higher utility level. In this way consumers are not harmed by being addicted. In order to include the

negative effects of addiction in the casino market model as well, we have to include irrational addiction, and therefore we will use hyperbolic discounting. The set-up involves a two-period model. The consumer spends the first and second period in the casino and afterwards spends his residual income on primary consumption goods. The consumer specifies his consumption plan prior to the first period and can revise his consumption plan prior to the second period. The total utility function including habit formation and hyperbolic discounting prior to the first period is

$$U(c_{i1}, c_{i2}, y) = \sum_{i=1}^{n} c_{i1} - \frac{1}{2}\Big(\sum_{i=1}^{n} c_{i1}^2 + e\sum_{i=1}^{n}\sum_{j \neq i} c_{i1} c_{j1}\Big) + \beta\delta\Big[\sum_{i=1}^{n}(c_{i2} - \gamma c_{i1}) - \frac{1}{2}\Big(\sum_{i=1}^{n}(c_{i2} - \gamma c_{i1})^2 + e\sum_{i=1}^{n}\sum_{j \neq i}(c_{i2} - \gamma c_{i1})(c_{j2} - \gamma c_{j1})\Big)\Big] + \delta y, \qquad (1)$$

where c_it is the gambling consumption at casino i in period t, for t = 1,2, e ∈ [0,1) indicates the rate of differentiation among the different casinos², β ∈ (0,1) represents the rate of hyperbolic discounting, γ ∈ [0,1] is the habit formation parameter, δ ∈ [0,1] is the discount factor and y is the residual consumption in the final period. Note that utility function (1) is an extended version of the concave utility function u_t(c_t) = c_t − ½c_t². In our model, regular gamblers are characterized by the parameter specification (β,γ) = (1,0), corresponding to the case free of addiction. The parameter specification of problem gamblers depends on their characteristics and can lie anywhere within the parameter ranges. After the first period, the gambler has consumed c_1* = (c_11*,…,c_n1*) units of gambling. Prior to the second period he again faces the opportunity of maximizing his utility, now given his first-period consumption. Moving one period ahead results in a total utility function prior to the second period of the form

$$\hat{U}(\hat{c}_{i2}, \hat{y}) = \sum_{i=1}^{n}(\hat{c}_{i2} - \gamma c_{i1}^*) - \frac{1}{2}\Big(\sum_{i=1}^{n}(\hat{c}_{i2} - \gamma c_{i1}^*)^2 + e\sum_{i=1}^{n}\sum_{j \neq i}(\hat{c}_{i2} - \gamma c_{i1}^*)(\hat{c}_{j2} - \gamma c_{j1}^*)\Big) + \hat{y}. \qquad (2)$$

By making use of the Lagrange method with budget restriction M = Σ_{i=1}^n (p_i1 c_i1 + p_i2 c_i2) + y and a normalized price for the residual good y, we are able to determine the expressions for the optimal consumption bundles of regular and problem gamblers prior to both periods. The 'price' of gambling in the model is defined as one minus the return percentage: if a casino increases its return percentage, the cost of gambling for the consumers will on average be lower. The actual consumption bundles for problem and regular gamblers are given by (c_1*, ĉ_2*, ŷ) = (c_11*,…,c_n1*, ĉ_12*,…,ĉ_n2*, ŷ). For regular gamblers it holds that c_2* = ĉ_2* and y* = ŷ*: the regular gambler does not suffer from an intrapersonal dynamic conflict and thereby has no incentive to deviate from his prespecified consumption plan prior to the second period. Problem gamblers do show deviating behavior, indicated by c_2* ≠ ĉ_2* and y* ≠ ŷ* if y ≠ 0.

n

+ e∑ ∑ (cˆi 2 − γci*1 )(cˆ j 2 − γc*j1 ))] + yˆ i =1 j ≠ i

If the rate of differentiation equals zero, consumers are not willing to substitute one product for the other. In this case we speak of heterogeneous products and the different casinos form separate monopolies. However, when e →1, the different products will become more and more substitutes and therefore an almost homogeneous market occurs. 3 Note that the private monopoly regulation is a special case of competition if n = 1. 2




$$W(p_1, p_2) = \alpha\,U_{pro}(c_{1,pro}^*, \hat{c}_{2,pro}^*, \hat{y}_{pro}^*) + (1-\alpha)\,U_{reg}(c_{1,reg}^*, \hat{c}_{2,reg}^*, \hat{y}_{reg}^*) + (p_1 - k)\big(\alpha c_{1,pro}^* + (1-\alpha) c_{1,reg}^*\big) + (p_2 - k)\big(\alpha \hat{c}_{2,pro}^* + (1-\alpha) \hat{c}_{2,reg}^*\big), \qquad (3)$$

where α indicates the fraction of problem gamblers in the population and the indices reg and pro correspond to regular and problem gamblers respectively. In order to find the optimal price levels for the state monopolist, two first order conditions involving expression (3) have to be solved. This results for the state monopolist in

$$p_t^*(\alpha, \beta, \gamma) = \Phi_t(\alpha, \beta, \gamma)\,k, \qquad (4)$$

where

$$\Phi_1(\alpha, \beta, \gamma) = 1 + \Big(1 - \frac{1 + \alpha\gamma - \alpha^2\gamma^2}{1 + \alpha\gamma^2 + \alpha\beta - \alpha - \alpha^2\gamma^2}\Big)\alpha\gamma, \qquad \Phi_2(\alpha, \beta, \gamma) = \frac{1 + \alpha\gamma^2 - \alpha^2\gamma^2}{1 + \alpha\gamma^2 + \alpha\beta - \alpha - \alpha^2\gamma^2}.$$

The observation that our casino market model is an extension of an ordinary consumption market also follows by looking at the case with no problem gamblers, α = 0, where the price levels simplify to the marginal cost level (p_1*, p_2*) = (k, k). Under the other regulation form, competition with n casinos, the objective of each casino is to maximize its profits given the prices set by the competitors. The owner of a casino does not really care about the welfare of the consumers but only about its own profits. The objective function of casino i under competition therefore reduces, in comparison with (3), to

$$\Pi_i^{(n)}(p_{i1}, p_{i2}) = (p_{i1} - k)\big(\alpha c_{i1,pro}^* + (1-\alpha) c_{i1,reg}^*\big) + (p_{i2} - k)\big(\alpha \hat{c}_{i2,pro}^* + (1-\alpha) \hat{c}_{i2,reg}^*\big), \qquad (5)$$

for i = 1,…,n. The optimal price levels under competition are found by solving the 2n first order conditions corresponding to functions (5). The general expression for the price levels is

$$\begin{pmatrix} p_1^* \\ p_2^* \end{pmatrix} = \Gamma \begin{pmatrix} (1 - \alpha\gamma + \alpha\gamma^2 - \alpha^2\gamma^2)(1 - e) + (1 + \alpha\gamma^2 - \alpha^2\gamma^2)k \\ 1 - e + (1 + \alpha\gamma^2 - \alpha^2\gamma^2)k \end{pmatrix}, \qquad (6)$$

where

$$\Gamma = \frac{1 + (n-2)e}{(1 + \alpha\gamma^2 - \alpha^2\gamma^2)(2 - 3e + en)}.$$

If we again omit problem gamblers and consider a perfectly heterogeneous market with e = 0, the price levels under competition reduce to (p_1*, p_2*) = ((1+k)/2, (1+k)/2). In the other extreme case, where we approach a homogeneous market with e → 1, we find (p_1*, p_2*) = (k, k). Both price levels are familiar from general economic theory.
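A quick numerical check of these limiting cases, using the price formulas (4) and (6) as reconstructed above, could look as follows; treat it as a sanity check on the reconstruction rather than as the paper's own code.

```python
def state_monopoly_prices(alpha, beta, gamma, k):
    """Prices p1*, p2* under the welfare-maximizing state monopoly, equation (4)."""
    denom = 1 + alpha * gamma**2 + alpha * beta - alpha - alpha**2 * gamma**2
    phi1 = 1 + (1 - (1 + alpha * gamma - alpha**2 * gamma**2) / denom) * alpha * gamma
    phi2 = (1 + alpha * gamma**2 - alpha**2 * gamma**2) / denom
    return phi1 * k, phi2 * k

def competition_prices(alpha, gamma, e, n, k):
    """Prices p1*, p2* under competition with n casinos, equation (6)."""
    a = 1 + alpha * gamma**2 - alpha**2 * gamma**2
    gam = (1 + (n - 2) * e) / (a * (2 - 3 * e + e * n))
    p1 = gam * ((1 - alpha * gamma + alpha * gamma**2 - alpha**2 * gamma**2) * (1 - e) + a * k)
    p2 = gam * (1 - e + a * k)
    return p1, p2

k = 0.05
print(state_monopoly_prices(alpha=0.0, beta=1.0, gamma=0.0, k=k))   # -> (k, k)
print(competition_prices(alpha=0.0, gamma=0.0, e=0.0, n=4, k=k))    # -> ((1+k)/2, (1+k)/2)
print(competition_prices(alpha=0.0, gamma=0.0, e=0.999, n=4, k=k))  # -> close to (k, k)
```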

Benchmark

In order to compare the different regulation forms we determine the sufficient level of competition. This measure indicates the number of casinos needed to obtain the same welfare levels for gamblers and/or casinos under a state monopoly and under a competitive market⁴. As observed, the model contains a significant number of parameters, each of which contributes importantly to the model, so an appropriate parameter specification is required. Therefore, we start by specifying a benchmark. The benchmark is specified as (α, β, δ, γ, e, M, k) = (1/5, 7/10, 1, 1/10, 4/5, 100, 1/20)⁵.

⁴ The first intention was to find a general expression of the form n*(α, β, δ, γ, e, M, k); unfortunately, that was not possible.
⁵ The background is provided in our thesis "Deregulation of the Casino Market: A Welfare Analysis" (2009).




Figure 4. Total welfare level of the casino market CS weight 1/2.

Figure 5. Total welfare level of the casino market CS weight 3/4.

Figures 1 and 2 plot, for the benchmark, the utility levels of the regular and problem gamblers respectively on the vertical axis against the number of casinos on the horizontal axis, under the different regulation forms. Figure 3 plots the producer surplus, Π ∈ [0, 1/2], on the vertical axis against the number of casinos, n ∈ [0, 10], on the horizontal axis. Figures 4 and 5 are the most important: they plot the total welfare levels as the weighted average of the consumer and producer surplus, with weights (CS, PS) = (1/2, 1/2) in figure 4 and (CS, PS) = (3/4, 1/4) in figure 5. The second weighting clearly puts more emphasis on consumer welfare.

What do the figures tell us? The intersection points in all figures indicate the sufficient level of competition, where the welfare under competition and under the state monopoly reaches the same value. Figures 1 and 2 focus only on the consumers. In a market with only regular gamblers, the sufficient rate is given by n* = 4, while a market with only problem gamblers needs n* = 5 casinos. One extra casino is needed in order to favour competition: problem gamblers are protected under a state-owned monopoly, which is beneficial for them, so the efficiency gains under competition need to be larger in order to compete with this protection mechanism. Figure 3 shows that the profits made by the private monopolist exceed the profits made by the state-owned monopolist and the total profits of the casinos under competition (n > 1). Figure 4 shows that if producer and consumer surplus are equally weighted, competition is the optimal market form when n ≥ 2. If both components are weighted as in figure 5, competition results in a higher overall welfare level for n ≥ 3.

Conclusions

After analyzing the benchmark, different additional scenarios have been examined, as well as a sensitivity analysis. Some important results are discussed next. If a population consists of a larger fraction of problem gamblers, the sufficient rate of competition will be higher. If gamblers become more problematic, the sufficient rate of competition will also increase. Looking at total welfare, the range will be between two and four casinos; if the government only takes the consumer into account, the range of the sufficient rate of competition will be between four and six. If the gambling service becomes more homogeneous, the sufficient rate of competition will initially increase up to a maximum, after which a decreasing trend is observed. This is caused by the opposite dynamics in consumer and producer surplus. The final observation is that if the efficiency gains due to competition become larger, the sufficient rate of competition is equal to a duopoly: both consumers and producers benefit from increasing efficiency gains, which correspond with lower prices.

Concluding, if the potential entry in a casino market is small, the state monopoly will be the preferred market form. However, if a sufficient number of casinos enter the market, the efficiency gains under competition will have a larger impact on the total welfare than the protection mechanism under a state monopoly. Therefore, competition will be the optimal market form if the number of potential entrants is large enough, which depends on the market characteristics.




Actuarial Sciences

Longevity Risk Hedging for Pension Funds by: Peter Steur More than ever before, the liabilities of pension funds with Defined Benefit (DB) pension systems are sensitive to changes in mortality. Life expectancy has been increasing in almost all the countries of the world. In the Netherlands, life expectancy of 65-year-old males rose from 78 years in 1980 to almost 82 years in 2008 (Statistics Netherlands). Although this fact is a blessing to the world as a whole, increasing longevity also has a more worrying impact on those whose business it is to provide for old-age income. If mortality rates decline at a faster rate than expected, pensions need to be paid longer than expected, increasing the pension liabilities of pension funds and annuity providers. This article highlights the concept of longevity risk and the likelihood of success of a financial market for longevity derivatives.

Financial Testing Framework

In the Netherlands, the Pension Act (Pensioenwet) and the Financial Testing Framework (Financieel Toetsingskader, or FTK) have been in force since 1 January 2007. Under the FTK, the valuation of the pension liabilities is no longer based on a fixed interest rate (of 4%), but on a term structure of interest rates, published monthly by the Dutch Central Bank (De Nederlandsche Bank, or DNB). Fluctuations of the term structure cause volatility in the value of the pension liabilities. Many pension funds use financial derivatives, such as interest rate swaps or swaptions, to (partially) hedge this interest rate risk. Another part of the FTK is that the valuation of the pension liabilities is based on the 'expected outgoing cashflows', which must be determined taking account of a 'foreseeable trend in survival probabilities'. Therefore the Actuarial Association in the Netherlands (Actuarieel Genootschap, or AG) issued the 2005-2050 forecast mortality table in 2007, which contains such a foreseeable trend in survival probabilities. The transition to the 2005-2050 forecast table raised the value of pension liabilities for Dutch pension funds.
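As a stylised illustration of this valuation principle, the sketch below discounts expected benefit payments with a term structure and survival probabilities. The cashflows, the flat 4% example curve and the survival probabilities are invented for the example; they are not DNB or AG figures.

```python
# Illustrative sketch of the FTK valuation idea: liabilities = expected future
# benefit payments, weighted by survival probabilities and discounted with a
# term structure. All inputs below are made up for the example.

def liability_value(benefit, survival_probs, zero_rates):
    """Present value of an annual benefit paid while the member survives.

    survival_probs[t] = probability of surviving to payment date t+1 years ahead
    zero_rates[t]     = zero rate for maturity t+1 years (annual compounding)
    """
    value = 0.0
    for t, (p_surv, r) in enumerate(zip(survival_probs, zero_rates), start=1):
        value += benefit * p_surv / (1 + r) ** t
    return value

# Example: 10,000 per year for 5 years, a flat 4% curve, and declining survival.
survival = [0.99, 0.97, 0.95, 0.92, 0.89]
rates = [0.04] * 5
print(round(liability_value(10_000, survival, rates), 2))
```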

Peter Steur Peter Steur obtained the Master of Actuarial Science degree at the University of Amsterdam in August 2009. He wrote his master thesis at the Purmerend office of Watson Wyatt Worldwide, under the supervision of drs. R. Meijer. The supervisor at the University of Amsterdam was prof. dr. J.B. Kuné. Peter works for Watson Wyatt Worldwide as a junior benefits consultant.


Two years later, in 2009, statistics show us that the foreseeable trend in survival probabilities (captured in the AG 2005-2050 forecast table) turned out to be a considerable underestimation. The Actuarial Association is now considering issuing a revised forecast mortality table, with updated mortality figures and an adjusted forecasting method. If the AG updates the mortality table, the value of pension liabilities for pension funds will rise again, resulting in financial losses for pension funds. Longevity risk is defined as the risk that mortality rates will fall at a faster rate than accounted for in pricing and reserving calculations. It can be seen as the uncertainty surrounding the increases in life expectancy as a result of unanticipated changes in mortality rates. Increasing awareness and concerns about the impact of longevity risk are stimulating the development of financial instruments to allow economic agents to hedge, diversify and take positions in this risk.

Dealing with longevity risk

Pension funds affected by longevity risk can deal with it in several ways:
• Accept the risk as a legitimate business risk;
• Diversify their longevity risks across different products in their portfolio (natural hedge);
• Enter into some form of full or partial reinsurance with a reinsurance company (pension buy-in);
• Arrange for a bulk buyout of their pensions in payment, transferring the responsibility for payment to an insurance company;
• Manage longevity risk using mortality-linked securities, such as traded contracts (longevity bonds, longevity futures) or over-the-counter contracts (mortality swaps, mortality forwards).

The focus of this article is on the last of these: the use of mortality-linked securities to manage longevity risk.





Markets for mortality-linked securities have different stakeholders who might be interested in some position (long or short) on longevity risk. Pension funds with a DB pension plan have a short exposure to longevity, as their pension liabilities rise with longevity. Life insurers providing term insurance have a long exposure to longevity, as their policy liabilities fall with longevity. Many life insurers also provide annuities (with a short longevity exposure), which creates a (partial) natural hedge. Another party in the longevity market is the group of capital market institutions, such as investment banks or hedge funds. Provided that expected returns are reasonable, they might be interested in acquiring an exposure to longevity risk. Furthermore, longevity risk has a low correlation with standard financial market risk factors. Therefore mortality-linked securities could be attractive investments in diversified portfolios. Governments have many potential reasons to be interested in markets for mortality-linked securities, for example to manage their own exposure to longevity risk (General Old Age Pensions, health and social care systems). Further, they might wish to assist financial institutions which are exposed to longevity risk, for example by issuing longevity bonds which can be used to hedge longevity risk. This may reduce the probability that large companies are bankrupted by their pension fund, with the result that society as a whole benefits from a greater stability of the economy.

Longevity products

The first attempt to issue longevity products was in November 2004, when the European Investment Bank (EIB) offered to issue a longevity bond (with a maturity of 25 years), with coupons linked to a survivor index based on 65-year-old males from the national population of England and Wales. Investors had to make an initial payment and received in return an annual mortality-dependent payment (for 25 years). These coupon payments depended on the survivor index: the more people survived, the higher the coupon payment. The manager and designer of the longevity bond issue was BNP Paribas. The target group was UK pension funds, but the longevity bond did not attract sufficient interest to actually be launched and was therefore withdrawn in 2005. There seem to be a number of reasons for the failure of this attempt:
• The specific survivor index covered just a fraction of the average pension plan's exposure to longevity risk. The EIB longevity bond was linked to 65-year-old males from England and Wales, while pension plans also have male members of other cohorts as well as female members. The single benchmark was for most pension funds inadequate to create an effective hedge;


• The structure of the longevity bond did not offer sufficient flexibility. A considerable initial payment was required, which was high relative to the reduction of risk exposure;
• The total issue size of the longevity bond was too small to create a liquid market. Market makers therefore did not welcome the longevity bond, because they believed it would be closely held and they would not make money from it being traded;
• The forecast model used to determine the projected cashflows of the EIB longevity bond was not published. This lack of transparency created a barrier for investors not familiar with longevity risk and mortality projection models.

It became apparent that a flexible and reliable set of mortality indices was needed for contracts to be written on. A more successful attempt was made by JPMorgan in 2007, in conjunction with The Pensions Institute and Watson Wyatt. They launched the LifeMetrics indices (Coughlan et al., 2007a), comprising publicly available mortality data at population level, broken down by age and gender, for different countries (including The Netherlands).

Next to the longevity bond, another example of a longevity product is the mortality forward. This contract involves the exchange, at a given future date, of a realized mortality rate relating to a specified population for a fixed mortality rate (forward rate) determined at the beginning of the contract. In 2007, JPMorgan announced the launch of a mortality forward contract under the name 'q-forward' (part of the LifeMetrics platform). The payout of the q-forward depends on the values of the LifeMetrics index mentioned earlier. On the maturity date, JPMorgan (the fixed rate payer) pays the pension fund (the floating rate payer) an amount related to the pre-agreed fixed mortality rate (forward rate). In return, the pension fund pays JPMorgan an amount related to the reference mortality rate (based on the LifeMetrics index) on the maturity date. The settlement amount is the difference between the fixed amount (depending on the forward rate) and the floating amount (depending on the realized reference rate). If the reference rate at maturity is lower than the fixed forward rate (lower mortality than expected), the settlement amount is positive, so the pension fund receives a payment from JPMorgan, which can be used to offset the increase of its pension liabilities. On the other hand, if the reference rate at maturity is higher than the fixed forward rate (higher mortality than expected), the settlement amount is negative and the pension fund makes a payment to JPMorgan, which will be offset by the decrease in its pension liabilities. Because the hedge provider (JPMorgan in this case) requires a risk premium, the fixed forward rate at the start of the forward contract will be somewhat below the anticipated mortality rate on the maturity date of the contract.



A q-forward contract can be seen as a standardized longevity hedge building block. A portfolio of q-forward contracts with suitably chosen reference ages and maturity dates can be constructed to provide an effective hedge for the longevity risk of a pension fund. Coughlan et al. (2007b) argue that a liquid, hedge-effective market could be built around just eight standardized contracts:
• A specific maturity (e.g., 10 years);
• Split by gender (male and female);
• Four different age groups (50-59, 60-69, 70-79, 80-89).

To determine for which age groups a pension fund will be interested in a q-forward contract, the sensitivity to longevity of the different age groups needs to be analyzed. By analogy with the concept of interest rate duration, Coughlan et al. (2007a) introduce the 'q-duration', which is defined as the change in value of the pension liabilities due to a unit change in mortality rates. The q-duration can be calculated as follows:
1. For each age group, shock the mortality rates by an annual cumulative trend shock of 1 basis point. The mortality rate in projection year k then becomes q_shock(t, x) = q(t, x)·(1 − 0.01%)^k, where q(t, x) is the mortality rate in year t for age x. In this example the AG 2005-2050 forecast table is used.
2. Calculate the percentage change of the pension liabilities due to the shocked mortality rates (ΔL/L) and the change of the average mortality rate for the specific age group (Δq).
3. The q-duration is then equal to (ΔL/L)/Δq: the change of the pension liabilities due to a unit change in mortality rates.

The value of the pension liabilities (L) in this example is calculated per 31 December 2018 (assuming a q-forward contract with a maturity of 10 years and 31 December 2008 as contract date). Repeating this procedure for seven different age groups of an average pension fund, Figure 1 can be obtained. Figure 1 shows that age group 60-69 has the largest exposure to longevity risk, with a q-duration of approximately 8.0. So if the mortality rate in 2018 is 0.1% lower than captured in the AG 2005-2050 forecast table, the value of the liabilities (for this age group) will be approximately 0.8% higher. The age group 70-79 is also quite sensitive to longevity, with a q-duration of around 7. A possible portfolio of q-forwards for this pension fund could be:
• Maturity of 10 years;
• Male and female;
• Age groups 60-69 and 70-79.
A stylised sketch of the q-forward settlement and of this q-duration calculation is given below.
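The sketch below mirrors the two calculations just described: a stylised q-forward settlement seen from the pension fund, and the three-step q-duration recipe applied to a toy liability function. The settlement scaling, the liability function and all numbers are our own simplifying assumptions, not the LifeMetrics specification.

```python
# Stylised sketches of the q-forward settlement and the q-duration recipe above.
# The settlement scaling, the toy liability function and all numbers are assumptions.

def q_forward_settlement(notional, forward_rate, realized_rate):
    """Amount received by the pension fund at maturity (negative: the fund pays)."""
    return notional * (forward_rate - realized_rate)

def q_duration(liabilities, base_rates, shock=0.0001):
    """q-duration: percentage change in liabilities per unit change in mortality.

    liabilities: function mapping a list of annual mortality rates to a value L
    base_rates:  projected mortality rates q(t, x) for projection years k = 1, 2, ...
    """
    shocked = [q * (1 - shock) ** k for k, q in enumerate(base_rates, start=1)]
    dL = (liabilities(shocked) - liabilities(base_rates)) / liabilities(base_rates)
    dq = sum(base_rates) / len(base_rates) - sum(shocked) / len(shocked)
    return dL / dq          # reported as a positive sensitivity, as in Figure 1

def toy_liabilities(rates):
    """Toy annuity-style liability: unit benefit paid while the member survives."""
    survival, value = 1.0, 0.0
    for q in rates:
        survival *= 1 - q
        value += survival
    return value

base = [0.012 * 1.01 ** t for t in range(10)]      # invented mortality projection
print(round(q_duration(toy_liabilities, base), 2))
print(q_forward_settlement(1_000_000, forward_rate=0.011, realized_rate=0.010))
```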

Figure 1. q-durations of an average pension fund

With these four contracts, a large part of the longevity risk can already be hedged. It is the responsibility of the board of the pension fund to determine their desired longevity ‘matching’ percentage.

Conclusions

A market for longevity-linked products can facilitate the development of risk management for pension funds. For such a market to exist, pension funds should have access to suitable longevity hedge instruments. Over the last decade there have been some efforts to create such a market, but without sufficient success. In the last few years, however, substantial progress has been made in product design and some key transactions have already taken place. The next few years are likely to show which longevity-linked financial securities and derivatives will provide a valid alternative to the more traditional insurance solutions and offer new capacity for the transfer of longevity risk exposures.

References

Blake, D., A.J.G. Cairns and K. Dowd. "Living with mortality: Longevity bonds and other mortality-linked securities." British Actuarial Journal, 12 (2006): 153-228.
Biffis, E. and D. Blake. "Mortality-Linked Securities and Derivatives." The Pensions Institute (2009): Discussion Paper PI-0901.
Blake, D., A.J.G. Cairns and K. Dowd. "The Birth of the Life Market." The Pensions Institute (2008): Discussion Paper PI-0807.
Coughlan, G., et al. LifeMetrics: A toolkit for measuring and managing longevity and mortality risks. Technical Document, JPMorgan Pension Advisory Group (2007a). [www.lifemetrics.com]
Coughlan, G., et al. q-Forwards: Derivatives for transferring Longevity and Mortality Risks. JPMorgan Pension Advisory Group, London (2007b). [www.lifemetrics.com]
Steur, P. Het afdekken van langlevenrisico bij pensioenfondsen. Master Thesis Actuarial Sciences, University of Amsterdam (2009).




Actuarial Sciences

Indexing the Value of Home Contents by: Loes de Boer The Dutch Association of Insurers represents the interests of private insurance companies operating in The Netherlands. The Association's members represent more than 95 percent of the insurance market expressed in terms of gross premium income. The Center of Insurance Statistics (CVS), the statistics and research department of the association, explores data from, for and about the insurance sector. It was this Center that asked me to investigate, in cooperation with the Central Bureau of Statistics (CBS), where the index is currently produced, how to index the value of home contents on a year-on-year basis.

Introduction

The value of home contents changes from year to year. New purchases, discarding older goods and price changes give rise to this change in value of home contents. The change in value of home contents is important to property insurers because in case of fire or theft they pay out that value up to a maximum of the sum insured. The sum insured should be equal to the value of the home contents. When the sum insured is higher than the value of home contents, the policyholder pays too high a premium. When the sum insured is lower than the value of home contents, the policyholder does not get paid the full amount in case of fire or theft. It is vital to insurer and policyholder that the sum insured moves in line with the change in value of home contents. Because of the importance of this change for the sum insured, the CBS publishes an index, commissioned by the CVS, that can be used to correct the sum insured on a year-on-year basis.

Current situation

In the current situation, the index is constructed according to the Stone and Rowe model. This model assumes every consumer has a desired level of every particular product group belonging to the home contents. This desired level, which will be attained in the future, depends on income and the relative price of the good. The equation is estimated using a Maximum Likelihood method in which the average service life of the product group is estimated implicitly. Problems arise when the estimation is carried out: some estimates do not converge to a particular value and some solutions are not realistic. The model can be made much simpler by setting the average service lives of the product groups as constants in advance. The Ordinary Least Squares method can then be used to estimate the equations of the desired levels of product groups. After completing the estimations, most of the coefficients appear to be non-significant. This can be an

indication of several factors, but model misspecification seems to be the most important factor. The model assumes constant behavior in time and does not account for technological developments and declining prices of, for example, computers and television sets. The CBS wants to switch to a method that is also used for the specification of capital stock in The Netherlands, the Perpetual Inventory Method (PIM). The PIM is easy to implement and does not require econometric skills. Insurers want to use a method with a volume and price component, so that both developments can be investigated separately. The Stone and Rowe model and the PIM both use a volume and price component.

Models used in other countries Other countries in Europe do not use indexation or they index the sum insured in proportion with changes to the Consumer Price Index. This happens for example in Germany, Iceland and Austria. In Austria the insured party has in addition the option of refusing any adjustment in value. Figure 1 shows the price index for goods belonging to the home contents and the normal Consumer Price Index. It is obvious that the indices are not similar. This can be explained by rising prices of services and declining prices

Loes de Boer Loes de Boer just finished her studies in Actuarial Science at the University of Amsterdam. She wrote her Master's thesis at the Center of Insurance Statistics on the indexation of the value of home contents. She found a job at Ernst & Young and will work for their worldwide Financial Services Organization, more specifically the Performance Improvement advisory group.




Figure 1. Price indices in the Netherlands

of goods belonging to the home contents. The influence of increasing wealth is omitted when using this index for indexing the value of home contents. Wealth leads to more consumption and more goods that will be added to home contents.

PIM

The next formula forms the basis of the PIM:

Value of home contents_t = Value of home contents_{t−1} + I_t − D_t

with
Value of home contents_t = value of home contents at time t,
I_t = investments/consumption in the year preceding time t,
D_t = discarded goods in the year preceding time t.

The application of the PIM to the value of home contents requires estimates of and assumptions for three parameters:
• consumption;
• service lives; and
• discard patterns.

The CBS publishes the National Accounts every year. These National Accounts contain information about the consumption of households at different aggregation levels. Lower consumption has a smoothing effect on the index.

Service lives are an important parameter in the PIM. However, estimates of service lives are hard to determine based on statistical information. Research into the service lives used by different insurers resulted in a table containing "best-practice" average service lives per product group.

Discard patterns can be modeled in numerous ways. The following techniques will be explained: the geometric depreciation pattern, the gamma function, the linear discard pattern and discarding at once. The geometric depreciation pattern means that every year goods are discarded amounting to a certain percentage of the value of the previous year. For example: we start with a value of 100 and the expected average service life is three years. Then next year the value will be 100·(1 − 1/3) and the year thereafter 100·(1 − 1/3)·(1 − 1/3). The gamma function assumes that the probability of discarding increases up to the expected average service life and decreases once the expected average service life has been reached. The linear discard pattern means that a certain portion of the goods is discarded every year. For example: we start with a value of 100 and the expected average service life is 5 years. Next year the remaining value is 80, the year thereafter 60, and so on. Discarding at once means that all of the goods are discarded at once when the expected average service life is reached, without losing value in the meantime. Graphically, this is shown in Figure 2.

Figure 2. Discarding with different patterns

Results PIM

The next formula shows how the actual index is computed:

Value index_t = Value index_{t−1} · (∑_{i∈G} p_i^t·q_i^t / #households_t) / (∑_{i∈G} p_i^{t−1}·q_i^{t−1} / #households_{t−1})

with
∑_{i∈G} p_i^t·q_i^t = value of all goods belonging to home contents (G) at time t,
#households_t = number of households at time t.

Figure 3 shows the indices with the different discard patterns used. They all look much the same except for the geometric depreciation pattern.


This is because all the other discard patterns explicitly use the expected average service life. It therefore does not matter greatly whether a good is (theoretically) thrown away part by part or thrown away entirely once the expected average service life is reached, as long as this expected average service life is the same for all the discard patterns. The choice of discard pattern does matter for the influence of changes in consumption: changes in consumption can influence the path of the index, and this effect is smallest for the gamma function and discarding at once. The geometric depreciation pattern and the gamma function share the problem that, according to these techniques, the goods will never be fully discarded, so at some point the remaining value must be truncated.



Figure 3. Indices of the value of home contents with different discard patterns

Additionally, it is more difficult to create a starting position for these techniques, as the underlying process is more difficult to grasp. The influence of the expected average service lives is not as significant as intuitively expected. When the expected average service life increases, the discarding of goods decreases, which means that the value of home contents predicted by the PIM increases more. However, the consumption stays the same in both situations. The consumption adds value to the home contents, but if the value of home contents is very high, the relative influence of this consumption is not large. When we look at the value of home contents from year to year, this has a "smoothing" effect on the index.
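To show how the PIM recursion and the index above can be put to work, here is a minimal Python sketch with the linear pattern and discarding at once. The consumption series and service life are invented, and the price and household terms of the index formula are left out for brevity.

```python
# Minimal PIM sketch with two of the discard patterns discussed above.
# The consumption series and service life are invented; prices and the number
# of households from the index formula are left out to keep the example short.

def pim_values(consumption, service_life, pattern="linear"):
    """Value of home contents per year: V_t = V_{t-1} + I_t - D_t."""
    values, vintages = [], []            # vintages: [amount bought, age in years]
    for invest in consumption:
        vintages.append([invest, 0])
        total = 0.0
        for vintage in vintages:
            amount, age = vintage
            if pattern == "linear":      # lose a fixed share of the original amount each year
                remaining = amount * max(1 - age / service_life, 0)
            else:                        # "at_once": full value until the service life is reached
                remaining = amount if age < service_life else 0.0
            total += remaining
            vintage[1] += 1
        values.append(total)
    return values

spend = [100] * 10                       # ten years of consumption of 100 per year
for pattern in ("linear", "at_once"):
    series = pim_values(spend, service_life=5, pattern=pattern)
    index = [round(v / series[0] * 100, 1) for v in series]   # first year set to 100
    print(pattern, index)
```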

Conclusion The Stone and Rowe model seems to be out of date. It requires a lot of time and its solutions are sometimes not very realistic. Other countries in Europe do not use any index or they index the sum insured in proportion with changes to the Consumer Price Index. The influence of increasing wealth is omitted when using this index for indexing the value of home contents. The PIM requires estimations and assumptions for consumption, service lives and discard patterns. Information about consumption is available. The influence of expected average service lives is not as significant as intuitively expected. This is due to the opposite effects of increasing expected average service life and decreasing discarding of goods. The influence of different discard patterns is minimal as long as the average expected service life is the same. The PIM uses a volume and price component, so that both developments can be investigated separately. Furthermore, the PIM is very easy to implement. The PIM with discarding at once seems the most straightforward method of all. The effect of lower consumption is postponed using this method and that has a smoothing effect on the index.



Econometrics

Statement: Regression Analysis Should be Understood as a Descriptive Account by: David Hollanders and Daniella Brals

Estimated coefficients come with standard errors, as another sample could have resulted in other estimates. Significance indicates how sure one can be that the true, yet unknown population parameter has a particular value. While the abstraction of a population may be a convenient fiction for some applications (for example wage-information for a subsample of all US workers), it is a fictitious convention in others. That is, for historical, non-repeatable episodes, one has the whole population and coefficients are what they are. An example is panel-data regression with democratic countries in the 20th century. It is then not relevant whether another sample would have resulted in another estimate, as there never will be another sample, since there is no population of countries out of which such a sample is drawn. Significance may still be used as a heuristic indicator of relevance, but should be understood markedly differently. With historical episodes, regression analysis should be understood not as a testing procedure but as a descriptive account, to which significance in itself adds no insight.

Answer Jan Kiviet: Although I appreciate the provocative tone of this proposition, I also strongly disagree with the views it expresses. Indeed, the classic concepts of population and sample are often not appropriate in econometrics, especially not for time-series and thus neither for panel data. Nevertheless, it is time-series data and especially panel data that could potentially be used for producing causal inference, instead of just the much less inspiring pure historic statistical data description. Modern econometrics obtains the properties of estimators and test statistics on the basis of the statistical specification of a so-called data generating process (DGP), which replaces population and sample. Unlike what is suggested in the first line of the proposition, estimated coefficients do not come with their standard errors from heaven. Formulas to calculate these standard errors are obtained by analytic derivation of the (asymptotic) standard deviation of the estimator on the basis of the assumptions incorporated in the adopted characterization of the DGP. Without a (possibly implicitly adopted) DGP there are no meaningful standard errors, and a different DGP would imply different estimators with different standard errors. A DGP could either concern data on all 20th century democratic countries, or on a random or a selective sample from them, or could concern non-democratic countries as well. These aspects certainly matter for the appropriate specification of the DGP. However, that it is practically inconceivable to replay the 20th century is of no concern whatsoever.


Econometricians choose to include stochastic elements in the specification of their models (which should respect the DGP under investigation) not in the first place because their data may constitute (selective) samples, but primarily to bridge the gap between the usually deterministic components in their models, which are inspired by economic reasoning, and complex reality. This is done by random disturbance terms, and can lead to successes only when the deterministic components are sufficiently adequate for the relationship under study, and the stochastic specification of the disturbance terms is sufficiently general. These disturbance terms affect both the endogenous variables to be explained by the model and any explanatory endogenous and lagged endogenous variables in such a complex way that analytic results on the properties of inference techniques usually have an asymptotic and thus approximate nature. All this, and its effects on parameter estimators, their standard deviations and their estimates (the standard errors), can easily be illustrated by Monte Carlo simulation, designing a few alternative DGPs. Doing so, one is forced to make explicit in detail what the essential characteristics of the DGP may be, on what information and data one chooses to condition, which variables are exogenous and fixed, or exogenous though random, and how the endogenous variables depend on others. The results of such a simulation study illustrate immediately what type of inferences can be made in a particular design - i.e. inferences on countries in general, just 20th century countries, or only democratic countries - and also what their accuracy might be, and how this depends on imposing conditions and restrictions.



Jan Kiviet Jan Kiviet was appointed to the chair of econometrics at the University of Amsterdam (UvA) in 1989. He is a Fellow of the Tinbergen Institute, which is the joint graduate school of the University of Amsterdam, Erasmus University and the Free University. He teaches in Econometrics and is director of UvAEconometrics. His research focuses on improving inference obtained from small samples - especially panel data - on dynamic simultaneous relationships, and on enhancing Monte Carlo simulation methodology.

For sure, this yields great insights into the significance and the vulnerability of significance tests and other inference methods, and helps econometricians to improve the manual for employing their tools and methodologies.
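A toy version of such an exercise is sketched below: a simple DGP with one exogenous regressor is simulated repeatedly, and the spread of the OLS slope across replications is compared with the average reported standard error. The DGP, sample size and seed are of course our own illustrative choices.

```python
# Toy Monte Carlo illustration: under a fully specified DGP, the dispersion of
# the OLS estimator across replications is what the standard error estimates.
# The DGP below (one exogenous regressor, i.i.d. normal errors) is purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 50, 2000, 1.5
x = rng.normal(size=n)                      # regressor held fixed across replications

estimates, reported_se = [], []
for _ in range(reps):
    y = beta * x + rng.normal(size=n)       # new disturbances each replication
    b = np.sum(x * y) / np.sum(x * x)       # OLS slope (no intercept, for brevity)
    resid = y - b * x
    s2 = np.sum(resid**2) / (n - 1)
    estimates.append(b)
    reported_se.append(np.sqrt(s2 / np.sum(x * x)))

print("std. dev. of estimates :", round(np.std(estimates), 4))
print("average reported s.e.  :", round(np.mean(reported_se), 4))
```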

Answer James Davidson The probability model underlying time series analysis is a fundamental question that ought to be the first topic on the syllabus of every econometrics course but, alas, is almost entirely neglected. The text books are nearly all useless. For what it's worth, I try to tell my students something along the following lines. 1. It's a commonplace that the evolution of the natural and social worlds through time exhibits regularities, as if obeying natural or behavioral "laws", but is also contingent, with accidents and happenstances largely determining the course of events. Fully predictable outcomes, such as the movements of the planets in their orbits, are the exception to the general rule. 2. Probability theory is the mathematical model of behavior in repeated random trials, subject to laws that allow predictability of average outcomes though not individual outcomes. However, in time series analysis we are not dealing with repeated random trials. The realization of a stock price in T successive periods is one observation, not T observations! 3. Nonetheless, we can apply probability theory to time series if we are willing to postulate a "many worlds" sampling framework. Quantum theory in physics is often interpreted in this way. There exists an infinite population of possible universes, from which the one we inhabit is a random drawing. Thus, the contingencies and random outcomes that we observe in the historical record are drawings from a distribution, even though we only ever observe the single outcome. In just the same way, we use probability theory to make

predictions about the outcome of a single coin toss. Repeated tosses don't need to be actually observed - they are simply the framework within which we formulate our probability laws, although the repetitive nature of the experiment has allowed us to formulate those laws quite straightforwardly. 4. The problem is, then, how can we formulate probability laws to explain the single observation, when repeating the random experiment is not even conceivable? It is correct to say that this cannot be done unless we are willing to make some assumptions about the random mechanism. The assumptions usually cited for a random sequence are stationarity and ergodicity. The Ergodic Theorem tells us that the time mean of a single stationary ergodic sequence converges almost surely to the "ensemble mean" of the process, which is the mean of the distribution across independent realizations (the distribution of the many worlds, if you like). The essence of this technical result is that statistical analysis is a feasible project provided time series possess some features of an independent sampling model. 5. It may be objected that economic processes are generally non-stationary - but, of course, cointegration theory has now convincingly resolved that dilemma. It tells us that certain relevant transformations of economic series can be stationary and ergodic - notably, differences and linear combinations - and hence can be modeled in a probabilistic framework and so are amenable to statistical inference. Arguably, this is the most fundamental of all the insights of late-20th century economic science. I try to argue in this way that tests of significance in time series regression are a legitimate research methodology.

James Davidson James Davidson is professor of econometrics at the University of Exeter, and he has also taught at the London School of Economics and Cardiff University. He is chiefly interested in econometric time series analysis and much of his recent research has dealt with long memory models. In addition to two books on econometric theory he is the author of the software package Time Series Modelling. His website is at http://people.ex.ac.uk/jehd201/ .




(Though so brainwashed are most students by their elementary statistics training that the main difficulty is to have them appreciate that a problem exists at all - never mind that it has a resolution!) When a sample is coincident with the (finite) population, then of course a regression is purely descriptive and significance tests are indeed irrelevant. One might take this view of cross-country regressions. However, in practice such observations are of a multivariate random process at a point in time (or, in a panel, over a period of time) and hence a probabilistic aspect does exist. A many-worlds argument might be invoked to define the relevant distribution, but there are clearly problems; the implied distribution over countries is non-stationary and exhibits obvious dependence. Spatial dependence is a hot topic in econometrics research just now, though, and maybe we can expect further progress on this front in the future.
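As a small numerical illustration of the ergodicity point in 4 above (our own example, not part of the original answer), the sketch below compares the time mean of one long realization of a stationary AR(1) process with the ensemble mean taken across many independent realizations.

```python
# Illustration of the ergodic-theorem point: for a stationary, ergodic AR(1)
# process the time mean of one long realization is close to the ensemble mean
# across many independent realizations ("many worlds"). Example values are ours.
import numpy as np

rng = np.random.default_rng(1)
phi, mu, T, worlds = 0.8, 2.0, 100_000, 500

def ar1_path(length):
    y = np.empty(length)
    y[0] = mu
    for t in range(1, length):
        y[t] = mu + phi * (y[t - 1] - mu) + rng.normal()
    return y

time_mean = ar1_path(T).mean()                                        # one world, long history
ensemble_mean = np.mean([ar1_path(200)[-1] for _ in range(worlds)])   # many worlds, one date
print(round(time_mean, 3), round(ensemble_mean, 3))                   # both close to mu = 2.0
```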



Puzzle

This edition we have some new puzzles for you. We hope that you will be able to solve them!

A strange train

A strange train rides a route of 100 miles, but it does so quite irregularly. The train does not always travel at the same speed, sometimes stops and can even move backwards; it is all random. After the ride it turns out that the train took 2 hours for its route, which means an average speed of 50 miles an hour. Prove that there must have been a part of the route of 50 miles which took the train exactly one hour.

A badly designed clock

An artist has designed a beautiful but not very handy clock. He made the hour hand as long as the minute hand, because he thought that was beautiful. As a result, we cannot always determine the correct time. The question that arises is: how many such moments are there in a day?

Solutions

Solutions to the two puzzles above can be submitted up to March 1st. You can hand them in at the VSAE room, C6.06, mail them to info@vsae.nl or send them to VSAE, for the attention of Aenorm puzzle 66, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one book token will be awarded. Solutions can be in both English and Dutch.



In November the Short Trip Abroad to Köln was organized. A group of 50 VSAE members went to this city for four days to visit the university, several museums and a sports game. At the end of November we went bowling with Kraket, where many Kraket and VSAE members were dressed in disco outfits. On December 8th the annual Actuarial Congress took place in the Tuschinski Theatre. This year's theme was "Actuary of the Future" and around 160 people listened to several speakers giving their vision on this subject. The day after the congress, VSAE organized the party "Flirt searches Nerd": more than 400 students from several study associations came to Club Home dressed as a flirt or a nerd. On February 1st a new VSAE board will be installed, which will start organizing new projects for the upcoming year.

In the previous months the new Kraket board has organized some great activities. In September there was a kart tournament and in October a pool tournament. In October there was also an activity for new students, who showed their shooting skills during a laser game tournament. In November three activities took place, starting with cocktail shaking and karaoke; the traditional bowling activity with the VSAE and a gaming marathon also took place. Sinterklaas came along in December, and this traditionally popular activity was a greater success than ever. In the coming months Kraket will organize new activities. For the first time in the history of Kraket a ski holiday has been organized: on January 10th, 30 members go to Saint François Longchamps and will hopefully return after a nice week of sportsmanship and cosiness without too many broken legs. The Kraket board hopes that the upcoming activities will be a great success, with many participants.

As the current VSAE board, we would like to wish all VSAE members a merry Christmas and a happy 2010.


Agenda VSAE

12 January  Monthly Drink
26 January  Bungee Soccer
29 January - 6 February  Winter sport
February  General Members Meeting

Agenda Kraket

7 January  New Year's Dinner with Watson Wyatt
10-17 January  Winter sport
26 January  Ice skating


