Preface
A brand new season of Aenorm. Holland has finally been put on the map! This is not due to amazing achievements in sports or the intellectual gifts of our well-educated econometrists. No, it is caused by Fitna. For those who have lived in a cave during the last few months: Fitna is a film made by Geert Wilders, a Dutch politician with some radical ideas. “What is in this film”, you might ask coming out of your cave, “that is so controversial that it would put HOLLAND on the map?“ The answer is somewhat disconcerting: nobody knows. Except for Limburgian Geert, that is. Of course, in the world of econometrics nobody ever knows anything. At most we can make a statement with a certain confidence. Econometrics has everything to do with deducing suspicions from observed data. In this case the data is the following:

1. Geert Wilders is making a film.
2. The film has something to do with Islam.
3. In the past Geert Wilders has proposed to ban the Koran and has said that Islamic culture is unequal to the western one.

Ok, this is enough to make your timid sister a bit shaky. As an econometrist you would want to plot these data points in a graph, draw a line through the points and make a forecast about the film. This means that there must be plenty of econometrists in Afghanistan: weeks before the premiere of the film, masses of people hit the streets of Islamabad and Mazar-i-Sharif to protest against the insults of the unseen film...

If this is just a foretaste of what is going to happen after the film actually has been aired, you might want to cancel your summer vacation to Teheran. But at least, as said, Holland has made its way to global recognition. The next time that our Prime Minister Jan Peter Balkenende visits the United States, President Bush will actually know what Holland is. Well... ok, Bush might still not know, but Barack Obama certainly will.

As for Aenorm itself, the VSAE has a new board since the first of February. Erik Beckers retired from the board of the VSAE and simultaneously quit as chairman of the Aenorm committee. As the new chairman, I can only thank Erik for his immense efforts to make all those beautiful Aenorms. Luckily Erik will keep his seat in the committee, so he can still use his expertise to help the rest of us.

I will replace Erik both as secretary of the VSAE and as chairman of the Aenorm committee. From this position I would like to put a committee member in the spotlight. Siemen van der Werff has been in the committee for over three and a half years, surviving no less than five chairmen. In the past period he has finished his master thesis, so he will be leaving the committee. Of course he cannot quit without writing an article about his thesis, which will be published in one of the following Aenorms. Siemen, thanks for your efforts for Aenorm!

Lennart Dek

Cover design: Carmen Cebrián

Aenorm has a circulation of 1950 copies for all students of Actuarial Sciences and Econometrics & Operations Research at the University of Amsterdam and for all students of Econometrics at the Free University of Amsterdam. Aenorm is also distributed among all alumni of the VSAE. Aenorm is a joint publication of VSAE and Kraket. A free subscription can be obtained at www.aenorm.nl. Insertion of an article does not mean that the opinion of the board of the VSAE, the board of Kraket or the editorial staff of Aenorm is verbalized. Nothing from this magazine can be duplicated without permission of VSAE or Kraket. No rights can be taken from the content of this magazine. © 2008 VSAE/Kraket
Statistical Tools for Non-Life Insurance
5
Due to the quantitative nature of both a priori and a posteriori rating, one of the primary attributes of an actuary should be the successful application of up-to-date statistical techniques in the analysis of insurance data. Therefore, this article highlights current techniques involved in this area of actuarial statistics. It introduces some basic concepts, illustrates them with real-life actuarial data and summarizes references to complementary literature. Katrien Antonio
Models and Techniques for Hotel Revenue Management using a Rolling Horizon 11 Hotels typically offer different prices for different guests. An important decision for a hotel is to determine how many rooms to sell for each price. Revenue Management is the art of doing this the best way possible. This article presents booking control policies for accepting reservations based on deterministic and stochastic mathematical programming techniques. It makes use of a rolling horizon of overlapping decision periods to account for the continuing combinatorial effects of multiple day stays. Kevin Pak
Lessons from the credit crunch
16
Recent months have seen some remarkable events in the world’s money markets. As fears about possible bank losses from exposure to sub-prime mortgages spread, liquidity in the inter-bank market dried up almost overnight. The U.S. Federal Reserve quickly reversed a trend of raising interest rates to provide support. Since then, many banks have confirmed large losses and some have had to seek large capital injections. Steve Taylor-Gooby
Limitations and usefulness of the QALY model: what an economist thinks your life is worth 19 Quality-adjusted life years (QALYs) are a method to express periods of different health states in equivalent years of living in full health, providing a criterion for decisions in health care. They provide a theoretical foundation and an empirical measure for making decisions about treatment and spending in health care. This article discusses the behavioral foundations of QALYs and the elicitation methods used to assess them, and the problems that come with both. David Hollanders
There’s no Theorem like Bayes Theorem
23
Bayes' Theorem is the key to understanding decision making under uncertainty. This article therefore sketches how Bayes' Theorem could be the leading thread in a study of econometrics, from the first course in probability calculus to the last course on econometrics, where Operations Research can be integrated. Aart de Vos
Riding Bubbles
28
Since the early days of financial markets, investors have witnessed periods of bubbles and subsequent crashes. This article analyzes what an investor should do if she observes that an asset is experiencing a bubble. The analysis is based on real data, and refrains from specific assumptions underlying theoretical models. Hence, the analysis reveals which theoretical predictions pass an empirical investigation. Nadja Guenster and Erik Kole
Aenorm 59
Contents List
Modeling Current Account with Habit Formation
36
The current account is one of the most prominent variables summarising a country's economic relationship with the outside world. In particular, increased lending to developing countries in the eighties led to the need to evaluate the sustainability of external debt levels and the idea of an intertemporally optimal current account deficit. In this article the author builds and empirically evaluates a model allowing for a variable interest rate and prices of traded goods as potential sources of external shocks, as well as habit formation in the utility function. Sergejs Saksonovs
History and challenges of the European Financial Integration process 43 The integration process among the European countries is not just a recent phenomenon, but was initiated long ago, for both political and economic reasons. This article starts with a short political history of the integration and then continues by analyzing the consequences for financial markets. It ends by stating challenges for further European financial integration. Gerard Moerman
Market value of life insurance liabilities under Chapter 11 Bankruptcy Procedure 45 This article focuses on the formulation of a prevailing bankruptcy procedure and investigates its impact on the fair valuation of life insurance liabilities. It designs a default and liquidation mechanism using Parisian barrier options to describe the Chapter 11 Bankruptcy Procedure. An Chen
Education and Human Capital: The Empirical Survey
52
The aim of this article is to discover, identify and describe the economic mechanisms linking education with unemployment. An attempt at measuring the strength of these relations is made, using tools based on Ross Quinlan's data processing algorithms. The paper begins with a short discussion of selected theoretical aspects of the analyzed issue, followed by the presentation of the obtained results. Finally, a summary of the findings is provided. Krzysztof Karbownik
Replicating Portfolios for Insurance Liabilities
57
This article will discuss a recent development in the risk management of insurance companies, namely replicating portfolios for insurance liabilities. This development is a next step for improving the asset liability management of insurance companies and integrating them fully in today’s financial markets. It starts by discussing replicating portfolios as a representation of insurance liabilities. Then it explains the need for having such a representation and discusses replicating portfolios in practice. David Schrager
Valuation of long-term hybrid equity-interest rate options 63 The starting point of valuation and risk management lies in option pricing models. Such a model requires modeling assumptions about the financial underlying of the exotic option, and the resulting choices are then combined in a derivative pricing model. This article explains some concepts behind (hybrid) equity-interest rate option pricing models and discusses how some of the weaknesses of classical models can be addressed. Alexander van Haastrecht
Puzzle
67
Facultative
68
Volume 15, Edition 59, April 2008. ISSN 1568-2188
Chief editor: Lennart Dek
Editorial board: Lennart Dek
Design: Carmen Cebrián
Lay-out: Jeroen Buitendijk
Editorial staff: Raymon Badloe, Erik Beckers, Daniëlla Brals, Jeroen Buitendijk, Lennart Dek, Nynke de Groot, Marieke Klein, Hylke Spoelstra, Siemen van der Werff
Advertisers: Achmea, Aegon, AON, Deloitte, Delta Lloyd, De Nederlandsche Bank, EOM, Ernst & Young, IMC, KPMG, Mercer, Michael Page, PricewaterhouseCoopers, Towers Perrin, Watson Wyatt Worldwide
Information about advertising can be obtained from Tom Ruijter, info@vsae.nl
Editorial staff addresses: VSAE, Roetersstraat 11, 1018 WB Amsterdam, tel: 020-5254134; Kraket, De Boelelaan 1105, 1081 HV Amsterdam, tel: 020-5986015
www.aenorm.nl
Actuarial Sciences
Statistical Tools For Non-Life Insurance

Within the actuarial profession a major challenge can be found in the construction of a fair tariff structure. In light of the heterogeneity within, for instance, a car insurance portfolio, an insurance company should not apply the same premium for all insured risks. Otherwise the so-called concept of adverse selection will undermine the solvency of the company. ‘Good’ risks, with low risk profiles, pay too much and leave the company, whereas ‘bad’ risks are attracted by the (for them) favorable tariff. The idea behind risk classification is to split an insurance portfolio into classes that consist of risks with a similar profile and to design a fair tariff for each of them. Classification variables typically used in motor third party liability insurance are the age and gender of the policyholder and the type and use of their car.
Being able to identify significant risk factors is an important skill for the non-life actuary. When these explanatory variables contain a priori correctly measurable information about the policyholder (or, for instance, the vehicle or the insured building), the system is called an a priori classification scheme. However, an a priori system will not be able to identify all important factors because some of them cannot be measured or observed. Think for instance of aggressiveness behind the wheel or the swiftness of reflexes. Thus, despite the a priori rating system, tarification cells will not be completely homogeneous. For that reason, an a posteriori rating system will re-evaluate the premium by taking the claims history of the insured into account. Due to the quantitative nature of both a priori and a posteriori rating, one of the primary attributes of an actuary should be the successful application of up-to-date statistical techniques in the analysis of insurance data. Therefore, this article highlights current techniques involved in this area of actuarial statistics. This type of research is at the frontiers of actuarial science, econometrics and statistics. We introduce some basic concepts, illustrate them with real-life actuarial data and summarize references to complementary literature. Examples of likelihood-based as well as Bayesian estimation are included, where the latter has the advantage that it provides the analyst with the full predictive distribution of quantities of interest.

Regression models for a priori risk classification
In order to build a tariff that reflects the various risk profiles in a portfolio in a reasonable way, actuaries will rely on regression techniques.
Katrien Antonio works at the Department of Actuarial Science at the University of Amsterdam. Before coming to Amsterdam, Katrien obtained an MSc and PhD in mathematics at the Catholic University of Leuven (Belgium).
Typical response variables involved in this process are the number of claims (or the claims frequency) on the one hand and its corresponding severity (i.e. the amount the insurer will have to pay, given that a claim occurred) on the other hand.

Generalized Linear Models

Generalized linear models (GLMs) extend the framework of general (normal) linear models to the class of distributions from the exponential family. A whole variety of possible outcome measures (like counts, binary and skewed data) can be modelled within this framework. This paper uses the canonical form specification of densities from the exponential family, namely

f(y) = \exp\left( \frac{y\theta - \nu(\theta)}{\phi} + c(y, \phi) \right)    (1)

where ν(.) and c(.) are known functions, θ is the natural and φ the scale parameter. Members of this family, often used in actuarial science, are the normal, the Poisson, the binomial and the gamma distribution. Instead of a transformed data vector, GLMs model a transformation of the mean as a linear function of explanatory variables. In this way

g(\mu_i) = \eta_i = (X\beta)_i    (2)
where β = (β_1, ..., β_p)' contains the model parameters and X (n × p) is the design matrix. g is the link function and η_i is the ith element of the so-called linear predictor. In a likelihood-based approach, the unknown but fixed regression parameters in β are estimated by solving the maximum likelihood equations with an iterative numerical technique (such as Newton-Raphson). In a Bayesian approach, priors are assigned to every parameter in the model specification and inference is based on samples generated from the corresponding full posterior distributions.

Flexible, parametric families of distributions and regression

Modeling the severity of claims as a function of their risk characteristics (given as covariate information) might require statistical distributions outside the exponential family. Distributions with a heavy tail, for instance. Principles of regression within such a family of distributions are illustrated here with the Burr XII and the GB2 ('generalized beta of the second kind') distribution.

Illustration 1: Fire Insurance Portfolio

The cumulative distribution functions for the Burr Type XII and the GB2 distribution are given by

F_{Burr,Y}(y) = 1 - \left( \frac{\beta}{\beta + y^{\tau}} \right)^{\lambda}, \quad y > 0,\; \beta, \lambda, \tau > 0,    (3)

F_{GB2,Y}(y) = B\left( \frac{(y/b)^{a}}{1 + (y/b)^{a}};\; \gamma_1, \gamma_2 \right), \quad y > 0,\; a \neq 0,\; b, \gamma_1, \gamma_2 > 0,    (4)

where B(., .) is the incomplete Beta function. Say the available covariate information is in x (1 × p). By allowing one or more of the parameters in (3) or (4) to vary with x, a Burr or GB2 regression model is built. To illustrate this approach, consider a fire insurance portfolio (see Antonio (2007)) which consists of 1,823 observations. We want to assess how the loss distribution changes with the sum insured and the type of building. Claims expressed as a fraction of the sum insured are used as the response. Explanatory variables are the type of building and the sum insured. Parameters are estimated with maximum likelihood. Residual QQ-plots like those in Figure 1 can be used to judge the goodness-of-fit of the proposed regression models. Antonio (2007) explains their construction.
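As a small companion to Illustration 1, the sketch below fits an unconditional Burr XII distribution (3) by maximum likelihood with scipy. The simulated losses and the parameter values are hypothetical stand-ins for the fire-insurance claims, and a full Burr regression would additionally let one of the parameters depend on the covariates x; this is only a minimal sketch of the plain maximum likelihood step.

```python
# Minimal sketch: unconditional maximum likelihood fit of the Burr XII
# severity distribution (3). scipy's burr12 uses c = tau, d = lambda and
# scale = beta**(1/tau) relative to the parametrisation in (3).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical losses (claims as a fraction of the sum insured)
losses = stats.burr12.rvs(c=2.0, d=1.5, scale=0.05, size=1823, random_state=rng)

# Maximum likelihood fit with the location fixed at zero
c_hat, d_hat, loc_hat, scale_hat = stats.burr12.fit(losses, floc=0)
print(f"tau = {c_hat:.2f}, lambda = {d_hat:.2f}, beta = {scale_hat**c_hat:.5f}")

# Quick tail check of the fitted model against the empirical survival function
q = np.quantile(losses, 0.99)
print("model P(Y > q):    ", stats.burr12.sf(q, c_hat, d_hat, loc=0, scale=scale_hat))
print("empirical P(Y > q):", (losses > q).mean())
```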
Figure 1: Fire Insurance Portfolio: QQplots for Burr and GB2 regression.
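Before turning to a posteriori ratemaking, the following minimal sketch makes the a priori GLM machinery of (1)-(2) concrete for claim frequencies: a Poisson GLM with log link and an exposure offset, fitted with statsmodels. The toy data frame and the rating factors (age, gender) are purely illustrative assumptions, not the data analysed in this article.

```python
# Minimal sketch: a priori rating with a Poisson claim-frequency GLM,
# log E[N] = log(exposure) + x'beta, on hypothetical policy-level data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.DataFrame({
    "nclaims":  [0, 1, 0, 2, 0, 1, 0, 0, 1, 3],
    "exposure": [1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 0.3, 1.0, 1.0, 0.9],
    "age":      [23, 45, 67, 31, 52, 29, 40, 61, 35, 27],
    "gender":   ["M", "F", "F", "M", "M", "F", "M", "F", "F", "M"],
})

# The exposure argument adds log(exposure) as an offset with coefficient one
freq = smf.glm("nclaims ~ age + C(gender)", data=policies,
               family=sm.families.Poisson(),
               exposure=policies["exposure"]).fit()
print(freq.summary())
print(np.exp(freq.params))   # multiplicative tariff factors per rating factor
```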
A posteriori ratemaking

To update the a priori tariff or to predict future claims when historical claims are available over a period of insurance, actuaries can use statistical models for longitudinal or panel data (i.e. data observed on a group of subjects, over time). These statistical models (also known as 'mixed models' or 'random-effects models') generalize the so-called credibility models from actuarial science. Bonus-malus systems for car insurance policies are another example of a posteriori corrections to a priori tariffs: insured drivers reporting a claim to the company will get a malus, causing an increase of their insurance premium in the next year. The credibility ratemaking problem is concerned with the determination of a risk premium that combines the observed, individual claims experience of a risk and the experience regarding related risks. The framework of our discussion of this a posteriori rating scheme is the concept of generalized linear mixed models (GLMMs). For a historical, analytical discussion of credibility, Dannenburg et al. (1996) is a good reference. Frees et al. (1999), Frees et al. (2001) and Antonio and Beirlant (2007a) explicitly discuss the connection between actuarial credibility schemes and (generalized) linear mixed models and contain more detailed examples. GLMMs extend GLMs by allowing for random, or subject-specific, effects in the linear predictor. Say we have a data set at hand consisting of N policyholders. For each subject i (1 ≤ i ≤ N) n_i observations are available. These are the claims histories. Given the vector b_i with the random effects for subject (or cluster) i, the repeated measurements Y_{i1}, ..., Y_{in_i} are assumed to be independent with a density from the exponential family

f(y_{ij} \mid b_i, \beta, \phi) = \exp\left( \frac{y_{ij}\theta_{ij} - \nu(\theta_{ij})}{\phi} + c(y_{ij}, \phi) \right), \quad j = 1, \ldots, n_i    (5)
                 PQL                 adaptive G-H         Bayesian
                 Est.     SE         Est.     SE          Mean     90% Cred. int.
Model (7)
  β0             -3.529   0.083      -3.557   0.084       -3.565   (-3.704, -3.428)
  β1              0.01    0.005       0.01    0.005        0.01    (0.001, 0.018)
  δ0              0.790   0.110       0.807   0.114        0.825   (0.648, 1.034)
Model (8)
  β0             -3.532   0.083      -3.565   0.084       -3.585   (-3.726, -3.445)
  β1              0.009   0.011       0.009   0.011        0.008   (-0.02, 0.04)
  δ0              0.790   0.111       0.810   0.115        0.834   (0.658, 1.047)
  δ1              0.006   0.002       0.006   0.002        0.024   (0.018, 0.032)
  δ0,1            /       /           0.001   0.01         0.006   (-0.021, 0.034)

Table 1: Workers' compensation data (frequencies): results of maximum likelihood and Bayesian analysis. δ0 = Var(b_{i,0}) and δ0,1 = δ1,0 = Cov(b_{i,0}, b_{i,1}).

        Observed values                   Expected number of claims
                                          PQL               adaptive G-H       Bayesian
Class   Payroll_{i,7}   Count_{i,7}       Mean      s.e.    Mean      s.e.     Mean     90% Cred. int.
11      230             8                 11.294    2.726   11.296    2.728    12.18    (5, 21)
20      1.315           22                33.386    4.109   33.396    4.121    32.63    (22, 45)
70      54.81           0                 0.373     0.23    0.361     0.23     0.416    (0, 2)
89      79.63           40                47.558    5.903   47.628    6.023    50.18    (35, 67)
112     18.810          45                33.278    4.842   33.191    4.931    32.66    (21, 46)

Table 2: Workers' compensation data (frequencies): predictions for selected risk classes.
The following (conditional) relations then hold
\mu_{ij} = E[Y_{ij} \mid b_i] = \nu'(\theta_{ij}) \quad \text{and} \quad \mathrm{Var}[Y_{ij} \mid b_i] = \phi\,\nu''(\theta_{ij}) = \phi\,V(\mu_{ij})    (6)
where g(μ_{ij}) = x_{ij}'β + z_{ij}'b_i is called the link and V(.) the variance function. β (p × 1) denotes the fixed effects parameter vector and b_i (q × 1) the random effects vector. x_{ij} (p × 1) and z_{ij} (q × 1) contain subject i's covariate information for the fixed and random effects, respectively. The specification of the GLMM is completed by assuming that the random effects, b_i (i = 1, ..., N), are mutually independent and identically distributed with density function f(b_i | α). Hereby α denotes the unknown parameters in the density. Traditionally, one works under the assumption of (multivariate) normally distributed random effects with zero mean and covariance matrix determined by α. The random effects b_i represent unobservable, individual characteristics of the policyholder. Correlation between observations on the same subject arises because they share the same random effects.

Illustration 2: Workers' Compensation Insurance

The data are described in Antonio (2007). 133 occupation or risk classes are followed over a period of 7 years. Frequency counts
in workers' compensation insurance are observed on a yearly basis. Possible explanatory variables are Year and Payroll, a measure of exposure denoting scaled payroll totals adjusted for inflation. The following models are considered:

Y_{ij} \mid b_i \sim \mathrm{Poisson}(\mu_{ij}), \quad \log(\mu_{ij}) = \log(\mathrm{Payroll}_{ij}) + \beta_0 + \beta_1 \mathrm{Year}_{ij} + b_{i,0}    (7)

versus

\log(\mu_{ij}) = \log(\mathrm{Payroll}_{ij}) + \beta_0 + \beta_1 \mathrm{Year}_{ij} + b_{i,0} + b_{i,1}\mathrm{Year}_{ij}.    (8)
Hereby Y_{ij} represents the jth measurement on the ith subject of the response Count. β_0 and β_1 are fixed effects, and b_{i,0} and b_{i,1} are a risk-class-specific intercept and slope, respectively. It is assumed that b_i = (b_{i,0}, b_{i,1})' ~ N(0, D) and that, across subjects, random effects are independent. The results of both a maximum likelihood (Penalized Quasi-Likelihood and adaptive Gauss-Hermite quadrature) and a Bayesian analysis are given in Table 1. The models were fitted to the data set without the observed Count_{i,7}, to enable out-of-sample prediction later on. To illustrate prediction with model (8), Table 2 compares the predictions for some selected risk classes with the observed values. Predictive distributions are easily obtained with a Bayesian analysis.
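As a rough companion to Illustration 2, the sketch below simulates panel count data in the spirit of model (7) and fits a Poisson random-intercept model with variational Bayes in statsmodels. The data, the column names and the parameter values are assumptions for illustration only; log(Payroll) enters as a covariate rather than as an offset with coefficient fixed at one, so this approximates rather than reproduces the model above.

```python
# Minimal sketch: Poisson GLMM with a random intercept per risk class,
# fitted by variational Bayes on simulated workers'-compensation-style data.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

rng = np.random.default_rng(0)
n_class, n_year = 133, 7
df = pd.DataFrame({
    "riskclass": np.repeat(np.arange(n_class), n_year),
    "year": np.tile(np.arange(1, n_year + 1), n_class),
    "payroll": rng.gamma(2.0, 50.0, n_class * n_year),
})
b0 = rng.normal(0.0, 0.9, n_class)             # risk-class-specific intercepts
mu = df["payroll"] * np.exp(-3.5 + 0.01 * df["year"] + b0[df["riskclass"]])
df["count"] = rng.poisson(mu)
df["log_payroll"] = np.log(df["payroll"])      # used as a covariate here

vc = {"riskclass": "0 + C(riskclass)"}         # random intercept per risk class
model = PoissonBayesMixedGLM.from_formula("count ~ log_payroll + year", vc, df)
result = model.fit_vb()                        # variational Bayes estimates
print(result.summary())
```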
Additional remarks

Generalized count distributions for a priori and a posteriori ratemaking

When dealing with regression models for claim counts, a huge number of zeros (i.e. no-claim events) is often apparent. For data sets where the inflated number of zeros causes a bad fit of the regular Poisson model, negative binomial, zero-inflated and hurdle regression models provide an alternative. Antonio et al. (2008) provides a detailed study of the use of these generalized count distributions for a 4-level data set on claim counts registered for fleet policies. The multilevel model accommodates clustering at four levels: vehicles (v) observed over time (t) that are nested within fleets (f), with policies issued by a collection of insurance companies (c). Fleet policies are umbrella-type policies issued to customers whose insurance covers more than a single vehicle, with a taxicab company being a typical example. We build multilevel models using generalized count distributions (Poisson, negative binomial, hurdle Poisson and zero-inflated Poisson) and use Bayesian estimation techniques. The effect of explanatory variables at the different levels in the data set is investigated. We find that, in all models considered, it is important to account for the effects of the various levels. The results also indicate possible different styles for penalizing or rewarding past claims.

Generalized additive models

So far, only regression models with a linear structure for the mean or a transformation of the mean have been discussed. To allow for more flexible relationships between a response and a covariate, generalized additive models (GAMs) are available; see e.g. Antonio and Beirlant (2007b) for several actuarial examples (on loss reserving and credibility).

Link with statistical techniques for loss reserving

The statistical techniques discussed here in the context of risk classification provide a useful framework for a stochastic approach to loss reserving. In claims reserving the data are displayed in a traditional run-off triangle or variations of it. See Antonio et al. (2006) and Antonio and Beirlant (2007b) for connections with reserving techniques.
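As a small, self-contained illustration of the zero-inflated count models mentioned above (and not of the multilevel fleet analysis in Antonio et al. (2008)), the following hedged sketch fits a zero-inflated Poisson regression on simulated data with statsmodels.

```python
# Minimal sketch: zero-inflated Poisson regression on simulated claim counts
# with an excess of zeros; data and parameter values are hypothetical.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)
lam = np.exp(-0.5 + 0.4 * x)                  # Poisson mean of the count part
p_zero = 0.3                                  # extra zero-inflation probability
y = np.where(rng.uniform(size=n) < p_zero, 0, rng.poisson(lam))

# Constant-only inflation part (logit link)
zip_model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)),
                                inflation="logit")
zip_fit = zip_model.fit(maxiter=200, disp=False)
print(zip_fit.summary())
```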
References

Antonio, K. (2007). Statistical Tools for Non-Life Insurance: Essays on Claims Reserving and Ratemaking for Panels and Fleets. PhD Thesis, Katholieke Universiteit Leuven, Belgium.

Antonio, K. and Beirlant, J. (2007a). Actuarial statistics with generalized linear mixed models. Insurance: Mathematics and Economics, 40(1), 58–76.

Antonio, K. and Beirlant, J. (2007b). Issues in claims reserving and credibility: a semiparametric approach with mixed models. Accepted for The Journal of Risk and Insurance.

Antonio, K., Beirlant, J., Hoedemakers, T. and Verlaak, R. (2006). Lognormal mixed models for reported claims reserves. North American Actuarial Journal, 10(1), 30–48.

Antonio, K., Frees, E.W. and Valdez, E. (2008). A multilevel analysis of intercompany claim counts. Submitted.

Dannenburg, D.R., Kaas, R. and Goovaerts, M. (1996). Practical Actuarial Credibility Models. Institute of Actuarial Science and Econometrics, University of Amsterdam.

Frees, E.W., Young, V.R. and Luo, Y. (1999). A longitudinal data analysis interpretation of credibility models. Insurance: Mathematics and Economics, 24(3), 229–247.

Frees, E.W., Young, V.R. and Luo, Y. (2001). Case studies using panel data models. North American Actuarial Journal, 5(4), 24–42.
ORM
Models and Techniques for Hotel Revenue Management using a Rolling Horizon

In the hotel industry different prices are charged for a room depending on features such as the time of booking, company affiliation, multiple-day stays and the intermediary sales agent. This way, hotels offer the same room to different guests for different prices. While hotel managers would like to fill their hotels with highly profitable guests as much as possible, it is generally also necessary to allow for less profitable guests in order to prevent rooms from remaining vacant. The number of low-price guests in the hotel should be managed carefully, however, in order to be able to allocate as many high-price guests as possible.
Hotel Revenue Management

The art of managing the availability of a fixed and perishable capacity over a range of prices is called Revenue Management. Revenue Management originates from the airline industry, where the seats on a plane can be sold to different types of passengers. The uncertainty of demand and the on-line decision making process bring stochastic and dynamic aspects into the problem. Further, the fact that guests in a hotel can stay multiple days gives the problem a strong combinatorial aspect. Consider, for example, one guest staying from Monday to Friday and another from Thursday to Sunday. Then the guests compete for the same room capacity on Thursday and Friday. Although a hotel manager would like to fill his hotel with guests who pay a high price, it can be more profitable to accept a lower price request when this request would also fill up capacity for other days where demand is low.

Literature
Kevin Pak has been a Revenue Management consultant at ORTEC since 2005. Before this, he lectured and did research on Revenue Management at the Erasmus University Rotterdam (EUR). His research has been published in international journals and presented at a number of international conferences. In 2005 he obtained a Ph.D. for his thesis "Revenue Management: New Features and Models".
Hotel Revenue Management has received attention in a number of papers. Bitran and Mondschein (1995) and Bitran and Gilbert (1996) concentrate on the room allocation problem at the targeted booking day. This means that they concentrate on the stays that start on the current day itself. Weatherford (1995) concentrates on a booking control policy for the booking period. He constructs booking limits based on a deterministic mathematical programming model. Baker and Collier (1999) compare the performances of five booking control policies under 36 hotel operating environments by means of simulation and advise on the best heuristic for each operating environment. In this paper we concentrate on the booking control problem. This makes our work comparable to the work of Weatherford (1995) and Baker and Collier (1999). Unlike these previous studies, we use the booking control policies over a rolling horizon of decision periods, such that all overlap between the different types of stay can be accounted for. In addition to the booking control policies based on the well-known deterministic model, we also present nested booking limits and bid prices based on a mathematical programming model that accounts for the stochastic nature of demand.
Mathematical Programming Models

We present two mathematical programming models to find the optimal allocation of the rooms over the different types of guests. The first is the well-known deterministic model. The second is a stochastic model.

Deterministic Model

The deterministic model that we consider is the same as Weatherford (1995) proposes for his nested booking-limit policy. This model treats demand as if it were deterministic and equal to its expectation. To formulate the model, we note that each type of booking request is defined by three aspects: its price class, its starting day and its length of stay. We consider n such booking types. Further, we let the decision period of the model consist of m target booking days. Let r = (r_1, r_2, …, r_n)^T, D = (D_1, D_2, …, D_n)^T and c = (c_1, c_2, …, c_m)^T denote the prices, demand and capacities of the various booking types and days. Further, we define the matrix A = [a_{ij}], such that a_{ij} = 1 if booking type j stays on day i and a_{ij} = 0 otherwise. We denote the jth column of A by A_j, which gives the capacities used by booking type j and consists of exactly as many 1 entries as the number of days that the guest wishes to stay in the hotel. The deterministic mathematical programming model can now be formulated as follows:

\max\ r^T x    (1)
\text{s.t.}\quad Ax \le c,
\qquad 0 \le x \le E[D],

where E[D] denotes the expected demand for the various booking types and x gives the partitioning of the room capacity. The objective of the model is to maximize revenues under the restriction that the total number of reservations for a day does not exceed the room capacity for that day. The number of rooms allocated to each booking type is restricted by the level of the demand, which in this model is replaced by its expectation. Since it is usually difficult to optimize an integer programming model, the decision variables are generally not restricted to be integer. Although the constraint matrix is not totally unimodular, previous experience of Williamson (1992) and De Boer et al. (2002) with the LP model in (1) shows that when demand and capacity are integer, the optimal solution often is as well. When the model produces a fractional solution, however, it will generally not take much effort to produce an integer solution by applying branch-and-bound techniques.

Stochastic Model

The deterministic model approximates the distribution of the demand by a point estimate of its
expectation. This means that it assumes Pr(D_j ≥ d) = 1 for all d ≤ E[D_j] and Pr(D_j ≥ d) = 0 for all d > E[D_j], for all j = 1, 2, …, n. Using such a rough estimate, as the deterministic model does, means that the probability of another type j arrival is overestimated (by the value 1) as long as the number of arrivals has not reached its expectation yet, and underestimated (by the value 0) once the expected number of arrivals has been reached. Here we present a mathematical programming model that approximates the distribution function of demand more smoothly. This stochastic model was first introduced by De Boer et al. (2002) for the airline industry. The model incorporates the stochastic nature of demand by discretizing it to a limited number of values d_j^1 < d_j^2 < … < d_j^{N_j}, where N_j is the number of discretization points for booking type j. The stochastic model is then given by:

\max\ \sum_{j=1}^{n} \sum_{k=1}^{N_j} r_j \Pr(D_j \ge d_j^k)\, z_j^k    (2)
\text{s.t.}\quad x_j = \sum_{k=1}^{N_j} z_j^k, \quad \forall\, j = 1, 2, \ldots, n
\qquad Ax \le c
\qquad 0 \le z_j^1 \le d_j^1, \quad \forall\, j = 1, 2, \ldots, n
\qquad 0 \le z_j^k \le d_j^k - d_j^{k-1}, \quad \forall\, j = 1, 2, \ldots, n,\ k = 2, 3, \ldots, N_j
The decision variables z_j^k each accommodate the part of the demand D_j that falls in the interval (d_j^{k-1}, d_j^k]. Note that z_j^{k+1} will only be nonzero after z_j^k has reached its upper bound of d_j^k - d_j^{k-1}, since Pr(D_j ≥ d_j^k) ≥ Pr(D_j ≥ d_j^{k+1}). Summing the decision variables z_j^k over all k gives the total number of rooms allocated to booking type j. As for the deterministic model, the decision variables of the stochastic model are not restricted to be integer. Note that the deterministic model can be obtained from the stochastic model by limiting the number of discretization points to one and setting it equal to the expected demand.

Booking Control Policies

The mathematical programming models presented in the previous section give an allocation of the capacity over the various price classes. Next we discuss how to form booking control policies based on the mathematical programming models.

Nested Booking Limits

The number of rooms allocated to each booking type by the models from the previous section can easily be interpreted as booking limits. These limits can be used as the maximum number of booking requests to accept for each booking type. It is never optimal, however, to reject a booking request when rooms are still available for other less profitable booking types, even if its
own booking limit has been reached. Therefore, each booking type is allowed to tap into the rooms allocated to any booking type that is less profitable. When this is allowed, the booking limits are called nested. In order to form nested booking limits, the different booking types need to be ranked by their contribution to the overall revenue of the hotel. The dual price corresponding to the capacity restriction for a day can be interpreted as the opportunity costs of a booking on that day. Adding the dual prices of all the days used by a stay gives an indication of the opportunity costs of the stay. A measurement for nesting is then obtained by subtracting these opportunity costs from the revenue generated by the stay. Thus, a nesting order is based on:

\bar{r}_j = r_j - \mu^T A_j,    (3)

where μ = (μ_1, μ_2, …, μ_m)^T denotes the dual prices of the capacity constraints. Nested booking limits can now easily be constructed for the deterministic and stochastic models, which we will call the Deterministic Nested Booking Limits (DNBL) and the Stochastic Nested Booking Limits (SNBL) policies.

Bid Prices

A bid-price policy directly links the opportunity costs of a stay to the acceptance/rejection decision. A bid price is constructed for every day to reflect the opportunity costs of renting a room on that day. As before, we estimate the opportunity costs by the dual price of the capacity constraint for that day in the underlying mathematical programming model. A booking request is only accepted if the revenue it generates is greater than the sum of the bid prices of the days it uses. This means that a booking request for type j is accepted if and only if there is sufficient capacity and

r_j \ge \mu^T A_j.    (4)

This way, bid prices can be constructed from the deterministic and stochastic models, which we will call the Deterministic Bid Prices (DBP) and the Stochastic Bid Prices (SBP) policies.
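The following minimal sketch solves the deterministic model (1) for a tiny, made-up instance with scipy and reads the dual prices of the capacity constraints off the solution, which is exactly the ingredient needed for the nesting order (3) and the bid-price rule (4). The three booking types, their prices and demands, and the three-day horizon are illustrative assumptions only.

```python
# Minimal sketch: deterministic model (1), dual prices, nesting order (3)
# and bid-price rule (4) on a hypothetical three-type, three-day instance.
import numpy as np
from scipy.optimize import linprog

r = np.array([100.0, 90.0, 160.0])    # prices of the three booking types
ED = np.array([60.0, 80.0, 70.0])     # expected demand E[D]
c = np.array([100.0, 100.0, 100.0])   # room capacity per day
# a_ij = 1 if booking type j stays on day i:
# type 1 stays day 1, type 2 stays days 2-3, type 3 stays days 1-2
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [0, 1, 0]])

# linprog minimizes, so maximize r'x via min -r'x s.t. Ax <= c, 0 <= x <= E[D]
res = linprog(-r, A_ub=A, b_ub=c,
              bounds=list(zip(np.zeros_like(ED), ED)), method="highs")
x = res.x                           # (possibly fractional) booking limits
mu = -res.ineqlin.marginals         # dual prices of the day-capacity rows
print("allocation x:", np.round(x, 2))
print("bid prices mu:", np.round(mu, 2))

# Bid-price rule (4): accept a type-j request iff r_j >= mu' A_j
print("accept request of type j:", r >= A.T @ mu)
# Nesting order (3): r_bar_j = r_j - mu' A_j
print("nesting values r_bar:", np.round(r - A.T @ mu, 2))
```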
Rolling Horizon

The mathematical programming models that we presented provide an allocation of the rooms for a fixed decision period. Here we discuss how to use them over a rolling horizon of decision periods. Assume that booking requests cannot be made more than F days in advance, and that the longest possible stay in the hotel consists of M days. The booking requests that come in at day t can then start their stay in the hotel at day t at the earliest and at day t + F at the latest. The last possible day that a booking request can end is day t + F + M. Therefore, if a booking control policy is determined at day t, the decision period we consider is given by the time interval [t, t + F + M]. Within this decision period all overlap between the different booking types is taken into account, except for the overlap at the end of the interval corresponding to the stays that fall partly outside the decision period. However, these are the booking types for which booking has only just opened. First of all, hardly any booking requests will come in this early in the booking process except for some extremely early bookings. Second, since booking has just started for those target booking days, the hotel will be nearly empty for those days and critical decisions will not have to be made yet. By the time critical decisions have to be made for these days, the decision period will have rolled forward and have captured all overlap between the neighboring days. The booking control policy is constructed at different points in time. Every time a new policy is constructed, the decision period rolls forward. The booking limits and bid prices for the booking period that was already available are adjusted, while new booking limits and bid prices are constructed for the booking period that has just been opened.

Test Case

We provide computational results for the various booking control policies in a simulated environment. This environment is chosen to reflect the situation described to us by a hotel in the Netherlands. Table 1 gives an overview of the simulation parameters. We simulate the arrivals of booking requests by a non-homogeneous Poisson process with intensities dependent on the price class, the starting day of the stay (e.g. Monday, Tuesday, etc.) and the time until the target booking day. We allow for different booking patterns for different price classes. Further, we let some days, e.g. Friday, be busier than other days, e.g. Thursday. We compare the performances of the different booking control policies over a 6 week period.

- 10 price classes
- 150 identical rooms
- max 7 day stay
- max 13 weeks in advance booking
- 6 week evaluation period
- 2 week start-up and cool-down periods
- demand exceeds capacity
- different booking patterns for the different price classes
- demand is independent of the booking control policies
- no overbooking
- no cancellations and no-shows
- no group bookings

Table 1: Specification of the test case.
Since the hotel is empty at the beginning and end of the simulation, the booking control policies that are constructed for those periods are not representative. Therefore, we make use of a start-up and cool-down period of 2 weeks. Let time t = 0 denote the beginning of the first day of the start-up period and t = 1 the beginning of the second day. Then, because a booking request can be made 13 weeks in advance, the booking process starts at t = -91. At that moment, booking control policies are derived for all booking requests that can come in that week. A graphical illustration of the rolling decision periods is given in Figure 1. In this figure, the start-up and cool-down periods are colored light and the actual evaluation period is colored dark.

Figure 1: Illustration of the rolling decision periods.

Computational Results

We apply the various booking control policies to the test case described in the previous section and compare the results with those obtained from a simple First Come First Serve (FCFS) policy and with the ex-post optimal revenue that can be determined when all demand is known. The results show that there is a considerable gap between the FCFS policy and the booking control policies described above. On average the FCFS policy does not obtain more than 79.75% of the optimal revenue, whereas the deterministic DNBL and DBP policies reach up to 87.89% and 86.50% respectively. The fact that the DNBL policy outperforms the DBP policy can be explained because a bid-price policy simply accepts all booking requests whose revenue exceeds the opportunity costs and does not distinguish bookings that contribute marginally from those that contribute greatly. Booking limits, on the other hand, can actually limit the number of bookings for each booking type individually. This drawback of the DBP policy can be taken away by updating the estimate of the opportunity costs more often. In fact, additional computations show that the gap between the two policies can be diminished greatly by updating the policies daily instead of weekly. The stochastic SNBL and SBP policies prove to be very dependent on the parameters chosen for the model, such as the number of demand scenarios and the probabilities of the scenarios. We obtained results for 20 different configurations of the underlying model and see that the results vary from 86.03% to 89.94% of the optimal revenue, dependent on the chosen configuration of the model. We see an improved performance only for the bid-price policy. The SNBL policy does not perform better than the DNBL policy at all. This can be explained by the fact that nesting in itself already creates an opportunity to allocate more high-profit demand than expected. The bid-price policy, however, can benefit greatly from the use of a stochastic model.
References

Baker, T.K. and Collier, D.A. (1999). A Comparative Revenue Analysis of Hotel Yield Management Heuristics, Decision Sciences, 30, 239-263.

Bitran, G.R. and Gilbert, S.M. (1996). Managing Hotel Reservations with Uncertain Arrivals, Operations Research, 44, 35-49.

Bitran, G.R. and Mondschein, S. (1995). An Application of Yield Management to the Hotel Industry Considering Multiple Day Stays, Operations Research, 43, 427-443.

De Boer, S.V., Freling, R. and Piersma, N. (2002). Stochastic Programming for Multiple-Leg Network Revenue Management, European Journal of Operational Research, 137, 72-92.

Pak, K. (2005). Revenue Management: New Features and Models, ERIM Ph.D. Series Research in Management, 61, Rotterdam, The Netherlands.

Weatherford, L.R. (1995). Length of Stay Heuristics: Do They Really Make a Difference?, Cornell Hotel and Restaurant Administration Quarterly, 36(6), 70-79.

Williamson, E.L. (1992). Airline Network Seat Inventory Control: Methodologies and Revenue Impacts, Ph.D. Thesis, Flight Transportation Laboratory, Massachusetts Institute of Technology, Cambridge, MA.
Actuarial Sciences
Lessons from the credit crunch

Recent months have seen some remarkable events in the world's money markets. As fears about possible bank losses from exposure to sub-prime mortgages spread, liquidity in the inter-bank market dried up almost overnight. Two German Landesbanken were pushed into rapidly arranged mergers, and a U.K. bank had to be bailed out by the U.K. Treasury. The U.S. Federal Reserve quickly reversed a trend of raising interest rates to provide support. Since then, many banks have confirmed large losses and some have had to seek large capital injections.
Steve Taylor-Gooby is the Managing Director of the Tillinghast Insurance Practice of Towers Perrin, responsible for managing Tillinghast's operations worldwide. His areas of specialist expertise include:
- mergers, acquisitions and corporate restructurings;
- capital management; and
- economic value analysis.
He holds a BSc with first class honours in mathematics from Bristol University and is a Fellow of the Institute of Actuaries.
What started as an isolated problem in one sector of the US mortgage market is now spreading fast. As banks foreclose on problem mortgages and tighten lending criteria, US house prices are falling across the board. This in turn is creating further problems and is tipping the US economy into a recession. Where the US leads, many other countries will surely follow... Banking regulators in the UK, embarrassed by the failure of one of the most sophisticated regulatory regimes in the world, are examining recent events to see what went wrong, and what could have been done better. It is clear that they will require banks to hold more protection against liquidity risk in future, and other changes may follow. The insurance industry has been relatively unscathed so far. As global regulators and experts consider the details of the new Solvency II regime and other new solvency requirements, what lessons can be learned from these events? The events that started in August were new to the money markets and were quite unexpected, marking the end of a long benign period in the credit cycle characterized by low credit spreads. It is common for the credit cycle to turn rapidly when conditions change, but the speed and extent to which liquidity dried up took many by surprise.
Banking is not unique

In fact, similar events happen commonly in the market for insurance liabilities, which are much less liquid to start with, and events such as 9/11 and Hurricane Katrina remind us that insurer risk profiles involve considerable uncertainty. This has important implications for the way we assess capital needs in the industry. The current basic method proposed for Solvency II is to require companies to hold sufficient capital so that they will have enough resources to fund the market value of liabilities at the end of a 1 in 200 adverse year. The theory is that they can then sell on the liabilities if necessary. Other global solvency regulatory developments are moving in a similar direction. The problem with this theory is that, when conditions so adverse actually arise, the market for insurance liabilities, particularly non-life portfolios in run-off, dries up, and it becomes impossible to sell. Just as banks refuse to lend to other banks for fear of what unknown liabilities they may be carrying, no insurance company will buy a portfolio for fear of what future losses may be unaccounted for. A portfolio of mortgages is relatively simple to assess when compared with a reinsurance portfolio!

Lessons for Solvency II

This problem doesn't undermine the fundamental basis for the Solvency II proposals. Indeed, we generally believe that the Solvency II methodology is a huge step forward from the current methodology, and we expect that the Solvency II model will be followed around the world by many regulators who are reviewing their capital adequacy standards. It does mean that in developing the detailed rules and calibrations, regulators need to look very carefully at areas where the theoretical model is a poor match for the nature of the risk. For example,
there are several areas where results from longer term capital adequacy models that consider projections over many years can differ significantly from the results of one-year models. These areas need to be studied carefully, and the calibration of the whole system can be modified to take account of these differences. To understand this point better, consider the example of employers’ liability insurance thirty years ago, when asbestos exposure was only just beginning to be recognised as a source of potentially massive claims. Anyone examining claims trends and projecting a 1 in 200 scenario through to the end of the following year would project a relatively modest variation in experience. A thirty year projection of possible outcomes would be very different and the 99.5th percentile would give a much more reasonable assessment of the capital that should be held
against such a risk. The one-year method can easily be adapted to allow for such risks by modifying the parameters used in the projection scenarios. When considering their own capital management strategies, insurance companies are well advised to consider longer time horizons than one year. Longer term projections frequently give many insights into strategies that simply can't be captured in shorter models. Also, we all need to keep in mind the limitations of models that simply reproduce past behaviour. As financial markets change and evolve, there will always be new dangers that we can't foresee. Put another way, we can expect 1 in 200 year events to come around more often than every 200 years! Alternatively, we should just not take our models too seriously! Shortly after the credit crisis started, one modelling expert claimed publicly that his model showed the recent events to be a one in a million year occurrence. Common sense might say that the event wasn't so rare. It was a shortcoming of the model that failed to predict the possibility! Models are extremely useful. They enable us to analyse the future in a way that is difficult to do in any other way. However, they are not perfect, and cannot hope to predict every possible outcome.

Banking regulators will now examine carefully the risk patterns revealed in the aftermath of the current credit crunch and apply that learning in future risk management models for money markets. In insurance, we need to take this as another lesson to remain vigilant that we are using our rapidly advancing modeling tools effectively.
18
AENORM
59
April 2008
(advertorial)
waarom je carrière start bij Delta Lloyd Groep Aantal starters per jaar: voor goede starters maken we plek Aantal medewerkers: 3000 Aantal landen operatief: 3 Omzet: Delta Lloyd Groep: premie-inkomen 5,8 miljard euro. Gemiddelde leeftijd van de werknemer: verschilt sterk per team
‘
Waarom zou ik, als starter bij Delta Lloyd Groep willen werken? Bij de afweging om voor een consultant of een verzekeraar te willen werken, heb ik gekozen voor een verzekeraar. Ik denk dat een actuarieel consultant vooral bezig is met certificering en review van de actuariële werkzaamheden terwijl een actuaris bij Delta Lloyd Groep veel meer invloed heeft op de bedrijfsvoering. De vakinhoudelijke voorsprong van Delta Lloyd Groep t.o.v. andere verzekeraars heeft de doorslag gegeven. In hoeverre zijn jouw verwachtingen over Delta Lloyd Groep uitgekomen? Bij Delta Lloyd Groep heb ik, zoals ik hoopte, vanaf het eerste moment veel verantwoordelijkheden gekregen en heel veel geleerd. Hoe ben jij bij Delta Lloyd Groep terecht gekomen? Via een open sollicitatie.
’
Wees even eerlijk, wat is jouw grootste blunder bij Delta Lloyd Groep tot nu toe? Ik heb een keer een rapport gemaakt voor de directie waarin ik rapporteerde dat “mijn voorziening” 1,5 miljard bedroeg, terwijl in werkelijkheid deze slechts 1,5 miljoen was.
Frédérique Bovy Actuaris/Traineemanager Actuarieel Traineeship
Carrièremogelijkheden Je solliciteert bij Delta Lloyd Groep niet op een specifieke functie, maar bij Delta Lloyd Groep als bedrijf. Je start bij ons als algemeen management trainee, actuarieel trainee, financieel trainee, (junior) actuaris of risicomanager. Voor goede mensen creëren we zelfs een plek! Delta Lloyd Groep heeft ontwikkeling van haar medewerkers hoog in het vaandel. Voor elke medewerker is een ruim persoonlijk opleidingsbudget beschikbaar. Het afronden van een postdoctorale opleiding wordt zeer gestimuleerd naast het volgen van vakinhoudelijke en persoonlijke vaardigheidscursussen. Delta Lloyd Groep is een sociaal bedrijf; de bedrijfs-CAO biedt faciliteiten om in drukke tijden de balans tussen werk, studie en privé te behouden. Talentvolle mensen met ambitie komen in aanmerking voor een ontwikkelingstraject, waarin het opdoen van nieuwe ervaringen centraal staat. Dit kan in de vorm van een andere functie, of bijvoorbeeld door nieuwe taken binnen een bestaande functie. Delta Lloyd Groep biedt twee hoofdrichtingen in dit ontwikkelingstraject: • ontwikkeling van leidinggevenden: Management Development Programma • ontwikkeling van professionals: Professional Development. Kortom de carrieremogelijkheden bij Delta Lloyd Groep zijn uitstekend!
Wil je solliciteren of meer informatie over werken bij Delta Lloyd Groep? Ga dan naar: www.deltalloydgroep.com/werkenbij Je kan ook een e-mail sturen naar recruitment@deltalloyd.nl of contact opnemen met Recruitment (020) 594 32 59.
AENORM
59
April 2008
19
Econometrics
Limitations and usefulness of the QALY model: what an economist thinks your life is worth We get to hear the shortest summary of economics early in life: Santa Claus doesn’t exist. Even for grown-ups the message that everything comes with a cost is hard to accept in such an important domain as health care. But there too trade-offs have to be made. How much do we spend on medical care? Do we operate (elderly) people if such an operation is costly without adding much to the survival chance? Do we always prescribe the best medicines or do we also somehow take the costs into account?
There is no doubt that these are difficult questions, in which a lot -including ethical concerns, equity and altruism- is involved. However, even if such choices are avoided at the individual level, they cannot be avoided at the aggregate level of society. That subsequently begs the question on what criteria doctors or politicians should base such choices. One frequently given answer is QALY, which is short for quality-adjusted-life-years. QALY is a method to express periods of different health states in equivalent years of living in full health, providing a criterion for decisions in health care. It provides a theoretical foundation and empirical measure for making decisions about treatment and spending in health care. It may then be decided that the number of QALYs is maximized under a budget constraint, or that one QALY is worth for example 80.000 euro. The following discusses the behavioral foundations of QALYs and the elicitations methods to assess it, and the problems that come with both. QALY: behavioral measurement
foundations
and
The longer life lasts and the healthier we are, the better –all else equal- it is. QALY formalizes this notion by postulating that life-time utility depends on both the number of life-years remaining and the quality of those years. The formula in the simplest case reads: U(Q, T)=H(Q)*T
(I)
Here Q is the health state and T is the (remaining) duration of life, which enters linearly. H(.) increases in the health state. H(Full Heath) and H(death) are normalized to 1 and 0, respec-
20
AENORM
59
April 2008
David Hollanders holds degrees in Econometrics, History, and Economics (Tinbergen M.Phil). Momentarily he is a Ph.D student at the UvA and Netspar.
tively. So, for example, if H(back pain)=0.5, then living ten years with back-pain provides five QALYs, and in utility terms this is equivalent to living five years in full health. Under expected utility the QALY-model is equivalent to two behavioral foundations. The first is the zero condition, which states that the utility of any health state with a zero duration is independent of that health state; if one dies on the spot, the condition in which that happens does not matter. The second restriction is risk neutrality with respect to life years. Someone should be indifferent between living five years for sure and a 50%-50% chance of dying immediately or living ten years. The zero condition is plausible, even unobjectionable, the second condition is prima facie unsatisfying. Most people are risk averse with respect to life years, which (under expected utility) is equivalent to a concave utility function. An alternative is the non-linear or generalized QALY, where life duration enters non-linearly: U(Q, T)=H(Q)*G(T)
(II)
Here the concave and increasing function G(.) values the number of life years. This generalized model holds if and only if two behavioral foundations hold. The first is again the zero condition. The second condition is so called standard gamble invariance (which implies risk neutrality, but not the other way round). Though the non-linear and increasing function G(.) is more
Econometrics
general, it does not capture every conceivable situation. A disease may for example come with a maximum endurable time S; up to time S an individual values (extra) life duration positively, but after S life duration adds negatively to life time utility. Even if the QALY-model holds completely, it remains to assess the function H(.), as it is subjective. Two elicitation methods are frequently used, the standard gamble method (SG) and the time trade off method (TTO). The first asks people which probability p makes them indifferent between living in health state Q with certainty and living in full health with probability p and dying with probability 1-p. The indifference relation implies that p is chosen such that: H(Q)=p*H(FH)+(1-p)H(D)=p
(III)
So, if one knows the probability p one immediately knows the value H(Q). This relationship holds irrespective of whether the linear or nonlinear model applies. The second method relies on the linear QALYmodel and asks people to trade of life years of
be determined. SG and TTO determine H(Q) by designing questionnaires, assuming that the participants conform to expected utility. Most people do not and this biases results that SG and TTO report. There are at least four types of biases; the first falls within the framework of EU, the second and third are related to prospect theory and the fourth is a psychological bias usually not incorporated in economic models. The overall result is that SG is biased upwards, while the effect on TTO is ambiguous. The important role of utility curvature was indicated already. For the standard gamble-method it does not affect results, as SG remains valid in the non-linear QALY-model. For the TTO method things are different. If people are risk-averse with respect to life-years, as is reasonable to assume, then G(.) is a (strictly) concave function. Then the TTO utilities are biased downwards. When a participant reports T2, the researcher will (incorrectly) infer that H(Q)=T2/T1, whereas the correct relationship is H(Q)=G(T2)/G(T1). The downward bias follows from T2/T1<G(T2)/G(T1) (which in turn follows from concaveness of G(.)).
"In health care Santa Claus does not exist" living in health state Q and living in full health. Participants are asked for which T2 they are indifferent between (Q, T1) and (FH, T2), where Q and T1 are given. Note that T2<T1. In the linear case, the indifference relation implies: T1*H(Q)=T2*H(FH)= T2 => H(Q)=T2/T1
(IV)
When the respondent reports T2, the value of health state Q can be assessed. If one holds expected utility to be a reasonable normative approach and the behavioural foundations to be at least approximately convincing then QALY can in principal be used. However, even then the two described methods of elicitation still come with some problems. Biases in QALY-measurement Both introspection as well as the outcome of many experiments suggests that people do not conform to expected utility (EU). In one respect deviations from expected utility are not unsettling, as the QALY model is a normative model, not a descriptive one. It does not claim to describe how decisions are made, but tries to propose how they should be made. The descriptive violations of expected utility do matter for another reason. For the QALY-model to be used, the subjective function H(.) needs to
The second bias results from a deviation from expected utility, called probability weighting. Many people tend to overweight small probabilities, as well as attaching a disproportional value to outcomes that are certain. (An example is the well-known Allais paradox.) With rankdependent utility, which captures these kind of effects, utility of the lottery in equation III becomes: H(Q)=w(p)*H(FH)+[w(1)-w(p)]H(D)= w(p)*H(FH)+[1-w(p)]H(D)=w(p)
(V)
Here w(p) is a probability weighting function, increasing on the domain [0,1] with w(0)=0 and w(1)=1. A common finding is that w(p)<p for p>0.33. So, for most values of p, the SG is biased upwards; the researcher infers H(Q)=p, whereas the true relation is H(Q)=w(p)<p. The TTO-method does not use probabilities and is consequently unaffected by this bias. A second deviation from expected utility is loss aversion and it introduces a third bias. Loss aversion formalizes the notion that people are more sensitive to losses than to gains, or that, as Kahneman and Tversky [1979] state, “losses loom larger than gains” and that “carriers of value or utility are changes of wealth, rather than final asset positions that include current wealth”. These changes are relative to a refe-
AENORM
59
April 2008
21
Econometrics
rence point, generally current wealth. In the TTO people are asked how much life-years in health state Q they are willing to give up for living in full health. The reference point is then the health state Q, and the participant needs to trade-off losses in life-years and a gain in health state. Loss aversion makes people more unwilling to give up life years, biasing T2 upwards. Consequently, TTO utilities are overstated. The same holds for SG-utilities, they too are biased upwards. The reference point here also is the health state Q, and loss aversion implies that the probability p of living in full health, the gain should be really high in order to offset the loss of dying with probability 1-p. A fourth bias is scale compatibility. This points to the situation where people pay more attention to the attribute that corresponds with the response scale. With the TTO-method people are asked to trade off losses in life-years for gains in health status. Scale compatibility makes people more sensitive to life-years, as this is the response scale, than to health status, neglecting the latter. The result is that participants are unwilling to give up life years, leading to an upward bias in TTO-utilities. For SG-utilities, where the response scale is a probability, the result is effectively unknown. Taken together the biases result in the following table (taken from Bleichrodt [2002]): Effect
Bias in SG utility
Bias in TTO utility
Utility curvature
None
Downward
Probability weighting
Generally upward
None
Loss aversion
Upward
Upward
Scale compatibility
Ambigous
Upward
Total effect
Upward
?
TTO-utilities can be biased upward or downward, while SG-utilities are generally biased upward. This explains the common finding that SG utilities exceed TTO utilities (while under expected utility and risk-neutrality these should give the same results). The SG was for a long time considered to be the golden standard; for one thing it does not need to assume that life duration enters the utility function linearly. However, the TTO-method may give more satisfying results when the different biases (partly) cancel. So, the assumption of risk neutrality, though in itself unconvincing, can help to offset other biases. A first way to subsequently deal with the reported violations of expected utility is to try to avoid them by carefully instructing participants, confronting them with any inconsistencies that their answers may give rise to (‘constructive preference approach’). Beyond that, different correction methods have been proposed, see for
22
AENORM
59
April 2008
example Van Osch et al. [2004]. Corrections, while theoretically desirable, need then to be considered with care for two reasons. First, as indicated, correcting for one bias only may even decrease the congruence between the measurement and the true preferences. Second, the corrections lead to an ‘embarrassment of riches’ by providing multiple correction-methods, all differing in outcome. This in itself introduces arbitrariness in the elicitation methods, as outcomes depend on the (used) correction. Other objections to (corrected) measurement of the QALY-model are, as Bleichtrodt et al. [2001] point out, “the normative assumption of expected utility, the assumption of true preferences at all, (..), the paternalistic nature of deviating from stated preferences, or, our main concern, the particular biases assumed.” All this taken together, the (uncorrected) TTO-method, which is also easy to use in practice, may not be a bad approximation after all, without giving the final answer. And indeed, TTO-method is the most used in practice nowadays. Some further considerations on the measurement of QALYs are due. First, it need be assumed that health states are either chronic or additive over time, otherwise subsequent changes in health states cannot be accommodated. Second, people are asked hypothetical questions. An economist would like to have revealed preferences, but experiments designed for this purpose would probably not be very popular with participants. So one has to rely on stated preferences instead. This is not problematic, as Kahneman and Tversky point out, if people know reasonably well how they would behave in real-world choice-situations, and have no special reason to misrepresent their preferences. The second assumption is plausible, while the first is not unproblematic but also not ipso facto implausible. QALY and social welfare Leaving the mentioned problems aside, there are still some qualifications in order before QALYs can be used in policy decisions, for example by maximizing the total number of QALYs. QALY in itself is not a measure of utility. This is a consequence of the application of the same QALY to different people, who however have different preferences and therefore experience different utilities in the same health state (corresponding to equal QALYs). The maximization of the number of QALYs can then not be defended with a utilitarian social welfare function (as summed up by Bentham’s “the greatest happiness to the greatest number of people”). It also departs from Pareto-efficiency, in the sense that an (re)allocation of health care such that the total number of QALYs increases but some individual is worse off, is desirable by the criterion of QALY maximizati-
Econometrics
on. This in turn relates to the notion that maximization of QALYs does not incorporate equity considerations. An allocation of ten extra QALYs for one person is equivalent to one extra QALY for ten persons. So it is problematic to formulate a satisfactory social welfare function. There is one way in which the QALY-model incorporates equity, as every QALY counts equally irrespective of who receives it. This is however arguably not entirely satisfying from another point of view. Many people feel that young children need to be given priority over adults and that people who have liven a healthy life are more entitled to health care than people that had a less healthy life style (drinking, smoking and drugs). This could be incorporated by weighting QALYs by personal characteristics of the person receiving them, although it is not a priori clear in what way to do that. There is also the danger that, as Williams [1997] points out, “such weights become arbitrary and capricious and come to be used to fudge outcomes in ways that would not be acceptable if their basis were exposed”. Conclusion Milton Friedman once remarked that “when you cannot measure something, you’re knowledge is meager and unsatisfying”. On the other hand “replacing the unmeasurable by the unmeaningful is not progress”’ So, does one make progress when using QALY? The above has pointed out some problematic aspects that come with the QALY-model, of which the measurement-issues are probably the most important. Still, the QALY model has much to show for it. It comes with an intuitive interpretation, it is relatively straightforward to use, and its elicitation method is tractable. All in all, there remains reason enough to continue research. From a practical point of view, it is however good to emphasize that the QALY-model addresses choices that, at least at the aggregate level, cannot be avoided. The QALY-model may then be better than any alternative. The alternative being to either use another model (but which one, and is it more satisfying than the QALY-model?) or to use rules of thumb, personal sentiments, or political bargaining. Operations, treatments and medicine come with a cost, which is another way of saying that also in health care Santa Claus does not exist. But just as we should not stop buying presents for Christmas, we do not need to stop hospital treatment. But there is much to say for the words of the economist Frank that “claiming that different values are incommensurable, simply hinders clear thinking about difficult trade-offs”. Especially in health care those trade-offs are hard. All the more reason to think them through carefully. And rationally, or at least as rational as possible.
Literature Bleichrodt, H. (2002). A new explanation for the difference between time trade-off utilities and standard gamble utilities, Health economics, 11, 447-456. Bleichrodt, H., Pinto, J.L. and Wakker, P.P. (2001). Making Descriptive Use of Prospect Theory to Improve the Prescriptive Use of Expected Utility, Management Science, 47(11), 1498-1514. Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk, Econometrica, 47(2), 263-292. Miyamoto, J.M., Wakker, P.P., Bleichrodt, H. and Peters, H.J.M. (1998). The Zero-Condition: A Simplifying Assumption in QALY Measurement and Multiattribute Utility, Management Science, 44(6), 839-849. Osch, S.M.C. van, Wakker, P.P., van den Hout,W. B. and Stiggelbout, A.M. (2004). Correcting Biases in Standard Gamble and Time Tradeoff Utilities, Medical Decision Making, 24, 511517. Stalmeier, P.F.M., Wakker, P.P. and Bezembinder, T.G.G. (1997). Preference Reversals: Violations of Unidimensional Procedure Invariance, Journal of Experimental Psychology: Human Percetion and Performance, 23(4), 1196-1205. Wagstaff, A. (1991). QALYs and the equity-efficiency trade-off, Journal of Health Economics, 10, 21-41. Williams, A. (1997), Intergenerational equity: an exploration of the ‘Fair Innings’ Argument, Health Economics, 6, 117-132.
AENORM
59
April 2008
23
ORM
There’s no Theorem like Bayes Theorem This is the title of the song, sang in 1960(!) by the famous statistician George Box. You can find it in the Bayesian songbook on the internet. I sing it in every course on Bayesian inference I give. And I mean it. There is no theorem like Bayes Theorem. It is the key to understand decision making under uncertainty. It is the key to a revolutionary different way of doing statistics. More and more scientists acknowledge that, but scientific revolutions appear to take time. Almost 60 years have passed since Box sang his song and still the studies of econometrics and statistics in Amsterdam hardly pay attention. Despite the beautiful Bayesian literature and fact that “Bayesian” has 4.540.000 Google hits. I’ll sketch how Bayes Theorem could be the leading thread in a study of econometrics. From the first course in probability calculus to the last course on econometrics, integrated with Operations Research.
The first course would be much more exciting than now. Probabilities not defined as a mathematical concept applied to throwing dice and drawing balls from urns, but as a logical concept following from coherent betting (de Finetti,1970). Probabilities do not exist. They are a mathematical construct needed for coherent decision making under uncertainty (Savage, 1960). You can use probability calculus to compute Your (partly subjective) probability that Kevin Sweeney is guilty of murdering his wife. See below. The second course, statistics, would be totally different from now. The data are given, the goal is to make probability statements about parameters. And these can be used for decisions and predictions. The data come in by the likelihood function, and “posterior is proportional to prior times likelihood”. One example: one has a sample and wants to make a coherent probability statement about the number of successes in the part of the sample that is not investigated (a beta-binomial distribution if beta priors are used). That is the thing an auditor needs to decide whether he can approve the books he investigates, not the “p-value”. The decision rule “disapprove if you find many errors” looks the same but instead of using some default 5% “significance level” the break-even point follows from priors and cost-benefit considerations. The corresponding size of a conventional test will in many cases be very different from 5%. The third course, business econometrics, would explain the coherent scheme of decision making by maximizing expected utility. How to incorporate risk. How to deal with priors if more decision makers are involved. With examples ranging from coherent inventory decisions to Big decisions involving high risks and little data.
24
AENORM
59
April 2008
Aart de Vos is a convinced Bayesian and as an econometrist associated to the Faculty of Economical Sciences and Business Administration of the VU University in Amsterdam. In the Netherlands he is best known for his articles about the lawsuit of Lucia de B., a nurse who was convicted for the murder of four patients.
The fourth course, introduction to econometrics would explain how regression models are motivated the Bayesian way. After the conventional treatment, it would be shown how many extensions are possible by using MCMC (Markov chain Monte Carlo) sampling methods (sampling parameters conditional upon data!), easily performed by the superb (freely available) software program WINBUGS (Windows Version of Bayesian inference Using the Gibbs Sampler) from David Spiegelhalter, as described in Lancaster(2004). No longer would the terms “unbiased” or “efficient” be used except as properties of point estimators for some standard decision problems. Many other models would be treated, all from one coherent perspective: again “posterior is proportional to prior times likelihood”. Not a mixture of recipes that sometimes work and sometimes don’t. The fifth course, Operations Research, could treat algorithms needed to solve decision problems when the probability distribution of future outcomes is given (analytically or by simulation, the latter possibly from an MCMC algorithm). The course would incorporate in a coherent way that predictions come from models and priors. The sixth course, history of science, would, among other things, treat the struggle of more than 50 years it took before statistics finally turned around. Not by the beauty of the argu-
ORM
ments, but by the possibilities to use MCMC algorithms that made things possible one could only dream of in the conventional setup. This program should be the core of a coherent Bachelor education. Maybe a seventh course should be given explaining conventional recipes, nice but incoherent concepts to solve problems. After all, the students must be able to communicate with others that do not understand the Bayesian point of view. There are universities nowadays where programs like this flourish.. But it are still exceptions. De Finetti guessed in 1970 that we will all be Bayesians by 2020. I am afraid he underestimated the power of scientific conservatism. But there is still hope. The proof of the pudding is in the eating Dennis Lindley once wrote: “Everybody would be a Bayesian if he read the literature thoroughly and was honest enough to admit that he might have been wrong”. But professionals do not like to read things that undermine their profession, as I experienced in my long struggle to convince colleagues at the VU. You must be triggered young. Lindley was assistant of the genius Sir Harold Jeffreys, astronomer and geophysicist, who’s “Theory of Probability” (1939) is in his collected works only a part of volume 8. The famous Bayesian econometrician Arnold Zellner read Jeffreys work during his studies. I “invented” Bayesian inference (with some errors, Jeffreys had done it better) in 1972. So students I can convince sometimes, but colleagues hardly. However, recently things change. Henk Tijms is triggered by law cases and Siem-Jan Koopman by MCMC. At the UvA the situation is still hopeless I guess. Arguments apparently fail to do the job, and saying that others are wrong certainly does not help. Examples must be given. To show that you can do things you cannot do the conventional way. I give two examples. A law case and an inventory problem. The probability that Kevin Sweeney murdered his wife. T
P(T|C)
P(No|T)
2:15
0.000009
2:30
0.00009
2:45 3:00
Kevin Sweeney left in 1995 Steensel (near Eindhoven) his wife Suzanne Davies at 02:00 a.m. Between 02:47 and 03:00, two policemen and the housekeeper walked all around the house not noticing anything. At about 03:45 a fire was reported. Firemen arrived at 03:55. Suzanne Davies died at 04:37 by carbon monoxide poisoning. The question was: did Sweeney lite the fire or was it caused by a burning cigarette? Many facts were unclear, but the main riddle is the time span if Kevin made fire before 2.00. House room fires start rapidly. In 6 attempts by TNO in comparable circumstances the fire spread within 5 minutes. At first Kevin was not convicted (lack of proof). In the appeal case, that lasted 3.5 years, he was. In 2001 he got 13 years for murder. The basis for law case calculations is Bayes’ rule for two alternatives: posterior odds is prior odds times likelihood ratio: P(Guilty | Facts) P(Guilty) x = P(Not Guilty | Facts) P(Not Guilty) P(Facts | Guilty) P(Facts | Not Guilty)
The puzzle I will address here is the computation of P(Facts|Guilty) as far as it concerns the aspects of the time span between Kevin leaving home (2:00) and the conflagration (3:45). The trick is to use indirect reasoning through T: the time the fire was causing the CO poisoning. First we derive P(T|I), where I stands for information, the facts: C: conflagration at 3:45. No: Nothing noticed at 3:00 O: the state of CO poisoning at 4:00 The calculations are in table 1. P(T|C) is the prior. As fire spreads rapidly, it is unlikely that it had reasonable size more than a quarter before. Than the likelihoods: P(No|T): it is unlikely that nothing was noticed P(T|No,C)
P(O|T)
P(T|I)= P(T|No,C,O)
0.01
9.07E-08
0.03125
2.975E-09
0.01
9.071E-07
0.0625
5.949E-08
0.0009
0.1
9.074E-05
0.125
1.190E-05
0.009
0.2
1.815E-03
0.25
4.760E-04
3:15
0.09
1
9.074E-02
0.5
4.760E-02
3:30
0.9
1
9.074E-01
1
9.519E-01
P(No|C)
0.9919
P(O|No.C)
0.9532
2:00
Total scaling
1
1
Table 1
AENORM
59
April 2008
25
ORM
if the fire had reasonable size for some time. This gives P(No|T): posterior is proportional to prior times likelihood. And his has to be scaled to sum to unity. (see my primer). The scaling factor is P(No|C): it is no surprise that nobody noticed anything at 3:00 given that conflagration was at 3:45. Then P(O|T): the time between CO poisoning by such a fire and death cannot be very long, specialists say. Again multiply and scale and you have P(T|No,C,O)=P(T|I).
the judge. I filled in numbers according to my best knowledge. And I gave the grammar to decompose this problem into bits one can argue about, using experts in the spread of fire, CO poisoning etc. But even one takes the prior odds that he is guilty 2000 to 1 , the posterior odds are 1 to 7, corresponding with a probability of a meagre 12%. The counter-evidence is overwhelming. Winbugs and the glory of MCMC
Now we must make the switch to P(G|I). G stands for the fact that the fire was lit at 2:00, in which case Sweeney is guilty. We use P(F|C,No,O)=ΣT(P(G|T,I)P(T|I)=ΣT(P(G|T)P(T| I) The first part is true by definition, and P(G|T,I)=P(G|T) as the information is no longer relevant once T is known. We can say something about P(T|G). That is the distribution of the time it takes when fire is raised before it gets serious. Mostly short, as the TNO experiments showed (the analysis can even be extended such that use these data are used!). The link with P(G|T) is given by the odds formula: P(G | T ) P(G) P(T | G) x = P(−G | T ) P(−G) P(T | −G)
For each value of T (-G meaning not guilty) If a cigarette caused the fire, any moment that the fire starts is, without further information, equally likely. As there are six intervals P(T|G)=1/6. I get the result presented in table 2. The likelihood ratio is simply P(T|G)/(1/6). The prior odds P(G)/P(-G) are here chosen 100. The posterior odds P(G|T)/P(-G|T) are transformed to P(G|T)=1/(1+P(G|T)/P(-G|T)). Multiplication with P(T|I) and summation gives the required result: the probability that Sweeney is guilty given our assumptions is 0.78%. In other words: he is almost surely innocent. This is MY probability statement. And I am not Time
Computation of Bayesian posteriors in complex econometric models was a problem until in the nineties was found that MCMC algorithms could do the job by simulation. The first was the Gibbs Sampler, but the most amazing algorithm is the Metropolis-Hastings algorithm. You specify a prior and a likelihood, then you have the posterior f(θ|data) apart from the normalizing constant. That’’s all you need. Random proposals are done, accepted if the function is higher in the proposed point and otherwise accepted with some probability. The draws may (after some checks) be coonsidered as draws from the posterior. Just keep track of anything you want to know. Helped by the speed of modern computers this type of algorithm has overthrown all other possibilities (like brute force won in chess programs). Winbugs is a software package that has brought the possibilities of MCMC to great heights. It is developed in the statistical department of the Medical faculty of Cambridge. The examples are not econometric. But last year I fell in love with Winbugs. In research on stochastic efficiency frontiers the possibilities of maximum likelihood estimation were limited and it appeared that Winbugs copied our most advanced research in a wink. And it could do much more! Just specify your model in a compact code and the job is done! I decided to show students at the VU this miracle. I give the “essay” course, meant to combine elements that the students learned before. I used it to learn them something completely different encompassing anything they learned before. They estimated models much more complex than they had ever seen, motivated by
P(T|I)
P(T|G)
likelihood ratio P(T|G)/P(T|-G)
IF prior Odds 100 Post odds
P(G|T)
P(G|T)*P(T|I)
2:15
3.0E-09
0.9
5.4
540
0.998
3.0E-09
2:30
5.9E-08
0.09
0.54
54
0.982
5.8E-08
2:45
1.2E-05
0.009
0.054
5.4
0.844
1.0E-05
3:00
4.8E-04
0.0009
0.0054
0.54
0.351
1.7E-04
3:15
4.8E02
0.00009
0.00054
0.054
0.051
2.4E-03
3:30
9.4E-01
0.000009
0.000054
0.0054
0.005
5.1E-03
P(G|I)
0.773%
2:00
Table 2
26
AENORM
59
April 2008
ORM
theory that they could hardly understand. But they succeeded! And they are young, I hope it works. The example I give is an inventory model, usually OR, so illustrating my plea for an integrated Bayesian study of econometrics. Solving an inventory problem advanced time series model
with
an
Suppose a business wants to order an optimal quantity of a product. There are data on demand X(t). One can see that there is nonstationary development in the demand and that that there are often unexpected outliers. A suitable model might be: X(t)~Poisson(mu(t)) ln(mu(t))=d(t)+u(t) d(t)=d(t-1)+e(t) u[t] has a student distribution (to allow for outliers) It is an unobserved component model: d(t) and u[t] are latent variables. Traditionally the Kalman Filter is used for such models, but that can hardly cope with student distributions and the hierarchical Poisson structure. The Winbugs code is simply: Model #means comment { d[1] <- dstart dstart ~ dunif(-100,100) #vague prior for d[1] tae ~ dgamma(0.001,0.001) #vague prior for precision tau ~ dgamma(0.001,0.001) degree ~ dunif(2,100) #vague for t-distribution for(t in 1:N){ u[t] ~ dt(0,tau,degree) e[t] ~ dt(0,tae,2) y[t] ~ dpois(mu[t]) #the likelihood mu[t]<- exp(d[t] + u[t]) } for(i in 2:N){d[i] <-d[i-1] + e[i] } #and now the predictions yf[t] for(j in N+1:N+4){ u[j] ~ dnorm(0,tau) e[j] ~ dnorm(0,tae) d[j] <- d[j-1] +e[j] mu[j] <- exp(d[j] + u[j]) yf[j-N] ~ dpois(mu[j-N]) } }
offers enormous possibilities. Further note the feature that the number of degrees of freedom of the student distribution is just treated as a parameter. Apart from the posterior distributions of the parameters one may also get those of the latent variables (the “smoothed” d[t]). And a criterion for model fit is also available, the DIC or deviance information criterion. But the really beautiful thing is that one obtains directly a simulation from the possible sales in the future. Not conditional upon parameter estimates, no unconditional forecasts, incorporating parameter uncertainty. The simulated version of the ultimate Bayesian formula: p(z|x)= ip(z|θ)p(θ|x)dθ which says that if you want to make a prediction z on the basis of data x, you have to make a model p(x|θ), which with a prior p(θ) leads to a posterior p(θ|x) and then make the convolution with p(z|θ). Conventional statistics has no equivalent. So now we have the true input for inventory decisions. Just formulate inventory cost, and benefits of selling no, and optimize with respect to orders the expected value (or utility) using the simulated future demand. Conclusion There’s no theorem like Bayes’ theorem. Literature de Finetti, B. (1970). Theory of probability (2 vol). Wiley classics. Lancaster, T. (2004). Modern Bayesian Econometrics. Fist chapter free download. Savage, L. (1960). The foundations of Statistics. Dover books. de Vos, A. (last version 2008). A primer in Bayesian Inference. Website
That’s all. Read the data, choose some reasonable initial values for start, tae(the precision of e[t]), tau, degree, u[t], e[t] and yf[j] and run. One nice aspect is that latent variables can be treated in the same way as parameters. This
AENORM
59
April 2008
27
Wat als zijn overboeking naar Hong Kong halverwege de weg kwijtraakt? Een paar miljoen overmaken is zo gebeurd. Binnen enkele seconden is het aan de andere kant van de wereld. Hij twijfelt er niet aan dat zijn geld de juiste bestemming bereikt. Het gaat immers altijd goed. Maar wat als het toch van de weg af zou raken? Door hackers, fraude of een computerstoring? Daarom levert de Nederlandsche Bank (DNB) een bijdrage aan een zo soepel en veilig mogelijk betalingsverkeer. We onderhouden de betaalsystemen, grijpen in als problemen ontstaan en onderzoeken nieuwe betaalmogelijkheden. Het betalingsverkeer in goede banen leiden, is niet de enige taak van DNB. We houden ook toezicht op de financiële instellingen en dragen – als onderdeel van het Europese Stelsel van Centrale Banken – bij aan een solide monetair beleid. Zo maken we ons sterk voor de financiële stabiliteit van Nederland. Want vertrouwen in ons financiële stelsel is de voorwaarde voor welvaart en een gezonde economie. Wil jij daaraan meewerken? Kijk dan op www.werkenbijdnb.nl.
Werken aan vertrouwen. 28
AENORM
59
April 2008
Econometrics
Riding Bubbles Since the early days of financial markets, investors have witnessed periods of bubbles and subsequent crashes. Famous examples include the South Sea Bubble in the 18th century, the Roaring Twenties at the beginning of the 20th century and more recently, the internet bubble. These bubbles place investors in an agonizing dilemma. Investing in bubbly assets makes them susceptible to crashes, when bubbles burst. By avoiding bubbly assets all together they forgo the high returns that the assets yield, when the bubble in ates further.
Nadja Guenster is currently finalizing her PhD at RSM Erasmus University and since October 2007 working as an Assistant Professor of Finance at Maastricht University. Her main research topics are stock market bubbles and socially responsible investing (SRI). She obtained a master’s degree in International Economics Studies (IES) from Maastricht University in 2003. Erik Kole obtained his PhD from RSM Erasmus University in 2006 for his dissertation “On Crises, Crashes and Comovements”. Before, he studied econometrics at Maastricht University, where he graduated cum laude in 2001. Since 2006, he works as an Assistant Professor in Financial Econometrics at the Econometric Institute of Erasmus University Rotterdam.
The theoretical literature on bubbles makes contradictory propositions on the optimal response of rational investors. The efficient market hypothesis predicts that rational investors go short and “cause these “bubbles” to burst” (see, Fama, 1965, p. 37). The limits to arbitrage literature, for example De Long et al. (1990a), Dow and Gorton (1994), Shleifer and Vishny (1997), posits that rational investors should not actively trade against the mispricing (hereafter: “sideline”). De Long et al. (1990b) and Abreu and Brunnermeier (2003) propose that investors should actively increase their holdings upon the detection of a bubble i.e., “ride the bubble”. To overcome this theoretical divide, we take an empirical perspective. We systematically analyze what an investor should do if she observes that an asset experiences a bubble. We base our analysis on real data, and refrain from specic assumptions underlying theoretical models. Hence, our analysis reveals which theoretical predictions pass an empirical investigation. Central in our approach is a new bubble identication method that is applicable to a real-world
setting. Our method only requires a basic information set that was available at the respective point in time. Just like a “real” investor, we can therefore not faultlessly identify a bubble. Since historical periods of bubbles often started in specific industries(e.g., the railway boom, the electricity boom and the internet bubble), our analysis is based on the sample of 48 US industries from Fama and French (1997). The identification relies upon two main characteristics of bubbles, described for example by Abreu and Brunnermeier (2003): (1) the growth rate of the price is higher than the growth rate of fundamental value and (2) the growth rate of the price experiences a sudden acceleration. The investor concludes that she discovered a bubble if both conditions are fulfilled. She estimates the growth rate of fundamental value based on the Capital Asset Pricing Model (CAPM)1. To detect a sudden acceleration, she conducts a structural change test. We find a significantly positive relation between a bubble and the subsequent abnormal returns. The abnormal return is on average 0.64% higher in a month following the detection of a bubble compared to months for which no bubble was detected. The downside to the high abnormal returns is the risk of rare but extremely negative returns: crashes. Logit models for crash likelihood reveal a significant increase upon the detection of a bubble. The probability of a crash (defined as a return below 1.65 times the standard deviation of abnormal returns) in the following months more than doubles. Using extreme value theory, we further analyze whether the magnitude of crashes, given that a crash happens, depends on the detection of a bubble. However, we find no clear link between the magnitude of crashes and bubbles. To find out whether an investor should ride bubbles, sideline or short the bubbly asset, we investigate the asset allocation implications of our findings. Since the prominent risk of riding
In Guenster et al. (2008) we also consider the Fama-French (1993) three-factor model and the Carhart (1997) fourfactor model 1
AENORM
59
April 2008
29
Econometrics
bubbles is to encounter a crash, we focus on an investor with a mean-lower partial moment utility function. We assume that the investor has an average risk-aversion, is fully invested and can choose between an additional investment in the risk-free asset and the typical industry. We find that the risk-return trade-off of a typical industry improves considerably upon the detection of a bubble: the additional return an investor can earn by riding a bubble more than outweighs the risk of a crash. If we detect a bubble, the additional investment in the industry asset increases from 11% to 134%. Bubble identification Our bubble definition is designed to capture the two basic characteristics of bubbles outlined in the literature while only using past price information. To adjust returns for the growth rate of fundamental value, we consider the CAPM. To identify a sudden acceleration in price growth, we test for a positive structural break in returns which is not explained by the asset pricing models. In addition, to capture the idea that the price of the asset grows faster than fundamental value, we require significantly positive anomalous returns following the break. Unlike the cointegration approach put forward by Campbell and Shiller (1987) or the regime-switching model proposed by Brooks et al. (2005), this bubble identification method allows us to use a limited history of past price information, which we believe is also available to investors. Formally, we investigate whether an asset experiences a bubble at time t by estimating the following model:
rτ = ατ + β'rτm + ετ , τ = t-T + 1,...,t
(1)
where rτ is the asset’s excess return and T is the estimation window, which typically equals 120 months, and rτm is the excess return on the market portfolio. Our test procedure concentrates on ατ. To capture the two basic characteristics of a bubble we interpret a bubble (i) as a structural break in ατ, (ii) after which ατ is significantly positive. Our setup closely follows the structural break literature (see Andrews, 1993; Hansen, 2001) and the null hypothesis of no bubbles implies that ατ does not change significantly during the test period:
H0 : ατ = α0
for all τ
(2)
The alternative is that we observe a structural break in ατ. As we have no a priori expectations of when a bubble starts, we test for different breakpoints. Since we are interested in recent bubbles, we require that the bubble lasts until time t. In addition, we require a bubble to be a prolonged acceleration in price growth and set
30
AENORM
59
April 2008
its minimum length to twelve months. Its maximum length is five years. Formally, the alternative hypothesis reads:
⎧α (ς ) for τ = t − T + 1,..., t − ς H1T (ς ) : ατ = ⎨ 1 ⎩α2 (ς ) for τ = t − ς + 1,..., t ,
(3)
with α2(ς) > α1(ς), where ς ranges from 12 to 60, α1(ς) refers to the first part of our test period and α2(ς) to the second part. For each value of ς we calculate the Wald test-statistic for the hypothesis α1(ς) = α2(ς). We select the breakpoint ς with the largest test statistic and determine the critical value for it based on the tables in Andrews (1993). If we reject H0 in favor of α2(ς) being significantly larger than α1(ς), we subsequently test whether α2(ς) is significantly larger than zero. If both criteria are fulfilled, we conclude that the asset experiences a bubble. To investigate whether our detection method discovers economically meaningful bubbles, we compute standardized abnormal returns as well as “raw” returns during bubbles. For the abnormal returns, we use the same factor models as in the bubble identification. We estimate the respective model over the previous 120 months to compute the abnormal return for the following month according to:
ηt + 1 = rt + 1 − β'rtm+ 1,
(4)
where the rt+1 is the excess return at t + 1, and β’ is based on the regression in equation (1) under the null hypothesis. To accommodate time-varying volatilities and different volatilities across industries, we standardize the abnormal returns by dividing them by the residual stand~ ard deviation: ηi , t ≡ ηi , t / σ i , t . During bubbles, raw returns are equal to 25.7% p.a.. The average standardized abnormal return equals 0.389. Assuming an average idiosyncratic return volatility of 4%, it translates into an annual abnormal return of about 19%. We also find that the residual volatility of the standardized abnormal returns is substantially larger than one. These findings indicate that we indeed observe economically meaningful deviations from the null hypothesis. Risk and Return To examine the profitability of riding bubbles, we compare the abnormal returns in month t+1, given that the investor has detected a bubble up to month t, to the abnormal returns if no bubble has been detected. Table 1 shows the characteristics of return distribution. We observe significantly positive abnormal returns after the detection of a bubble as indicated by the coinfidence intervals.2 The returns are also economically large. Assuming an idiosyncratic return volatility of about 4%, the mean
Bij Aon mag je gebaande wegen achter je laten om te komen tot een goed pensioenadvies.
Voor onze vestigingen in Amsterdam, Purmerend, Rotterdam en Zwolle zoeken wij actuarieel geschoolde mensen met relevante werkervaring (2-5 jaar) voor de functie van analist. Heb jij de ambitie om complexe actuariële vraagstukken op een ondernemende en creatieve manier op te lossen en al doende je het vak eigen te maken? Wil jij van je collega-specialisten het adviesvak leren om daarna snel door te groeien tot een zelfstandig adviseur van de klant? Kijk dan op www.aon.nl (onder vacatures) voor meer informatie of bel de heer R. K. Sagoenie, Managing Consultant, op telefoonnummer 020 430 53 93.
Aon Consulting is wereldwijd de op twee na grootste risico-adviseur op het gebied van arbeidsvoorwaarden en verleent in Nederland adviesdiensten aan (beursgenoteerde) ondernemingen en pensioenfondsen. De Aon Actuariële Adviesgroep biedt adviezen en praktische ondersteuning op het gebied van pensioenen. Het dienstenportfolio strekt zich uit van strategische beleidsadvisering, pensioenadvies en administratie tot en met procesbegeleiding en tijdelijke ondersteuning bij bijvoorbeeld implementaties en detachering. Aon is thuis in alle actuariële diensten zoals het maken van kostenprognoses, het uitvoeren van (waarderings)berekeningen, certificering van de jaarstukken tot het analyseren van behaalde beleggingsresultaten.
AENORM
59
April 2008
31
Econometrics
bubble detected
no bubble detected
p-value
a: abnormal returns based on the CAPM #obs
3757
34607
mean
0.14
(0.020)
[0.10, 0.18]
-0.02
(0.006)
[-0.03, -0.01]
< 0.0001
q(0.25)
-0.55
(0.025)
[-0.61, -0.51]
-0.61
(0.008)
[-0.63, -0.60]
0.031
median
0.14
(0.019)
[0.10, 0.18]
-0.02
(0.006)
[-0.03, 0.00]
< 0.0001
q(0,75)
0.84
(0.025)
[0.79, 0.88]
0.56
(0.007)
[0.55, 0.58]
< 0.0001
volatility
1.18
(0.019)
[1.15, 1.22]
1.03
(0.006)
[1.02, 1.04]
< 0.0001
skewness
0.19
(0.119)
[-0.02, 0.45]
-0.05
(0.037)
[-0.12, 0.02]
0.029
kurtosis
4.86
(0.664)
[3.92, 6.30]
5.10
(0.154)
[4.82, 5.42]
0.636
VaR(0.95)
1.80
(0.066)
[1.67, 1.91]
1.65
(0.013)
[1.62, 1.67]
0.020
ES(0.95)
2.42
(0.062)
[2.30, 2.54]
2.32
(0.023)
[2.27, 2.36]
0.119
VaR(0.975)
2.24
(0.072)
[2.12, 2.38]
2.09
(0.019)
[2.06, 2.13]
0.021
ES(0.975)
2.83
(0.083)
[2.66, 2.99]
2.79
(0.035)
[2.72, 2.86]
0.677
Table 1: Standardized abnormal returns with and without prior bubble detection This table reports summary statistics and downside risk measures for the pooled set of standardized abnormal returns. The abnormal returns are based on rolling regressions in Eq. (1) with a 120-month estimation window. For each regression, we construct an abnormal return for the period after the estimation window as in Eq. (4). To correct for time-varying volatility, we standardize the abnormal return by a division by the residual volatility of the regression model. We split the abnormal returns according to the detection of a bubble. For each statistic, we construct a 95% confidence interval based on 10,000 bootstraps. The column p-values reports the results of tests for equality of the statistics for the cases “bubble detected” and “no bubble detected”.
abnormal returns equals 0.56% (0:14 4%) per month. If no bubble was detected, which is obviously the case for most of our sample period, the abnormal returns are slightly negative. The p-values in the final column of each panel indicate that the abnormal returns are signicantly different below the 1% signicance level depending on the absence or presence of a bubble. On an annual basis, the return differential adds up 6.7%. The evidence so far supports the idea that riding bubbles is a highly protable strategy. However, the different risk measures presented in Table 1 indicate that it is also a risky strategy. We find that the abnormal return volatility following a bubble is signicantly larger than if no bubble was detected. Furthermore, we observe significant differences in downside risk. For example, assuming again an idiosyncratic volatility of a about 4%, the Value-atRisk (Expected Shortfall) at the 95% level is 7.2% (9.68%) if we detected a bubble versus 6.6% (9.28%) if there was no bubble. Modeling risk and return Crashes, although relatively rare, pose a serious risk to any investor’s wealth. Since our preliminary evidence suggests that investors riding bubbles face the risk of a crash, we investigate the relation between bubbles and crashes in more detail. We define crashes as large nega2 3
32
tive returns. Specifically, an industry experiences a crash if its standardized abnormal return as given by equation (4) falls below a threshold equal to -1.65.3 We find 42.5 crashes per industry, with an average loss of 9.2% for the typical industry. To investigate how a bubble affects the probability of a crash, we specify a logit model. Its general form is:
(
59
April 2008
(5)
where yi,t+1 equals one if a crash occurs in industry i at time t + 1, and zero otherwise. Λ (.) denotes the logistic function. The vector of explanatory variables Zit represents the information set available to the investor at time t and γ is a vector of coefficients. Since we are interested in forecasting crashes conditional on the presence of a bubble, we introduce a one-period lag between the dependent and explanatory variables. We estimate the logit model separately for each crash category. If riding a bubble involves crash risk, crash likelihood should increase significantly once a bubble is detected. Since crashes tend to occur in consecutive months, we incorporate the possibility that crashes are related: a first crash may increase the probability of a subsequent crash. Our model reads:
We use 10,000 bootstraps to construct confidence intervals and conduct tests. In Guenster et al. (2008) we also consider more extreme thresholds.
AENORM
)
Pr[y i , t + 1 = 1] = Λ γ 'Zit
Econometrics
⎛ Pr[yi , t + 1 = 1] = Λ⎜ γ0 + γ1Bit + γ 2Cit + γ3 ⎜ ⎝
1 ⎞⎟ , (6) Lit ⎟⎠
γ0
-3.53a
(0.04)
γ1
a
0.28
(0.07)
where Bit is a dummy variable that equals one if the investor has detected a bubble in industry i in month t and zero otherwise. Cit and Lit capture the effects of previous crashes. Cit is a dummy variable that equals one if a crash took place during the twelve months prior to t, and Lit is the actual number of months passed since that previous crash. As shown in Table 2, a bubble increases the risk of a crash: the coefficient of Bit is significant at the 1% level. The effect is also economically large. The probability of a crash is 2.84% given that there was no bubble and no previous crash. The crash probability increases to 3.72% upon the detection of a bubble, representing a change of 31%. We also find that previous crashes have a substantial effect on the probability to encounter another crash. If a crash occurred six months ago and no bubble was detected, the probability of another crash is 6.35% compared to 2.84% in the case of no previous crash. If a crash happened more than 12 months ago, its impact on the probability of another crash decreases substantially.
γ2
0.41a
(0.07)
γ3
1.06
(0.12)
log L
-7465.4
[0.024]
a
Table 2: Predicting crash likelihood using bubble information. This table presents the estimation results of logit models to predict the occurrence of an industry crash during month t + 1, conditional on information available in month t. The logit model is given by equation (6). The model is estimated with maximum likelihood on the pooled set of standardized abnormal returns. We report the estimated coefficients and the standard errors in parentheses. Subscript a indicates significant difference from zero at the 1%. We also report the value of log likelihood function and McFadden’s pseudo-R2 in brackets.
converges to the survivor of the generalized Pareto distribution (GPD) for k → -∞, which is given by
⎧⎪(1 + ξ (k − x) / ν )−1 / ε if ξ ≠ 0 W (x; ξ , k , ν ) = ⎨ (8) ⎪⎩exp((k − x) / ν ) if ξ = 0,
"An investment in the bubbly industry is more attractive than an investment in the non-bubbly industry or in the risk-free asset" A natural follow-up question to our finding that bubbles significantly influence crash likelihood is whether bubbles also affect the size of a crash. We investigate the relation between crash sizes and bubble detection by applying extreme value theory (EVT). Specifically, we are interested in the distribution of the standardized abnormal returns below the cut-off values for crashes k. We want to know whether a crash is more likely to be large when a bubble has been detected. Therefore, we concentrate on the (cumulative) distribution of the stand~ ardized abnormal returns η conditional on the occurrence of a crash with threshold k,
~ ~ Gk (e ) ≡ Pr[η ≤ e | η ≤ k ], e ≤ k.
(7)
The Balkema and de Haan (1974) and Picklands (1975) theorem proves that the distribution Gk
where ν > 0, x ≤ k for ξ ≥ 0 and k + ν/ξ ≤ x ≤ k for ξ < 0.4 ξ is called the shape parameter, ν the scale parameter. If ξ > 0 applies, the distribution is fat-tailed, which is generally the case for asset returns (see Longin, 1996). 1/ξ is referred to as the tail index. If the detection of a bubble does not affect the distribution of crash size conditional on the occurrence of a crash, the coefficients of the GPD for the abnormal returns with prior bubble detection should not differ significantly from the coefficients of the GPD for the abnormal returns without prior bubble detection. To test this hypothesis, we estimate the parameters ξ and ν in Eq. (8) using maximum likelihood (see Smith, 1987, for details). Table 3 shows the results for the different values for k, which in this case determine where the left tail of the distribution ends. The esti-
Extreme value theory is generally formulated for the right tail of a distribution. Models for the left tail of the distribution for a random variable X can be derived by applying EVT to the right tail of -X. Since Pr[X ≤ x] = 1 - Pr[-X ≤ x], we need the survivor function of the generalized Pareto distribution. 4
AENORM
59
April 2008
33
Econometrics
#obs
ν
ξ
bubble detected
215
0.75
(0.068)
-0.126
(0.060)
no bubble detected
1714
0.63
(0.022)
0.062a
(0.026)
no distinction
1929
0.64
(0.021)
0.044
(0.024)
b
b
Table 3: Parameter estimates for the Generalized Pareto Distribution This table reports estimation and test results for the generalized Pareto distribution applied to the left tail of the pooled standardized abnormal return distributions. The estimates are based on maximum likelihood as described in Smith (1987). We consider subsets based on the detection of a bubble, and the full set of abnormal returns. For each (sub)set we report the number of observations, the scale parameter ν and the shape parameter ξ, with standard errors in parentheses. The location parameter K is set equal to the cuto value of crashes , i.e., -1.65.
mates for the shape parameters ξ for the abnormal returns with prior bubble detection are considerably lower than the corresponding estimates for the returns without prior bubble detection. However, we also observe that the estimates have relatively high standard errors. The estimates for the scale parameters ν are consistently higher for the case “bubble detected”, but here differences are smaller. A Wald test on the hypothesis of equality of both the ξ and ν parameters is not rejected (F = 8:11 with a p-value of 0.11). We conclude that the size of a crash, given that a crash takes place, does not depend on the detection of a bubble. Imposing the equality hypothesis, leads to a 34-estimates that is signicantly larger than zero at the 5% level. This indicates that the idiosyncratic returns we investigate are fat tailed, though less than typical asset returns. As previous crashes are related to future crashes, the question arises whether crashes also predict future returns. To ensure the robustness of our results, we estimate the relation between bubbles, crashes and next month’s returns in a model: d0
0.00
(0.01)
d1
0.15
(0.02)
d2
-0.05a
(0.01)
a
R2
0.002 Table 4: Predicting abnormal returns using bubble information This table presents the estimation results for three linear models to predict the standardized abnormal return of an industry for month t + 1, conditional on information available in month t. The standardized abnormal returns are based on the CAPM. The model is estimated in a two-step procedure using OLS in step one and WLS in step two. The weights in step 2 are the inverse of the standard deviations of the error terms of step 1, which depend on the presence of bubble detection. We report the estimated coefficients and the standard error in parentheses. Subscripts a, b and c indicate signicant dierence from zero at the 1%, 5% and 10% level, respectively. We also include R2-values.
~ η i,t+1 = δ0 + δ1Bit + δ1Cit + uit, E[ui,t+1] = 0
(9)
~ where η i,t+1 is the standardized abnormal return for sector i at time t+1, Bit is the dummy variable for bubble detection and Cit is a dummy variable for the occurrence of a crash during the past twelve months. Since the volatility of abnormal returns depends on the detection of a bubble (see Table 1), we estimate our model using Weighted Least Squares. In line with our previous findings, Table 4 shows a statistically significant and economically large positive relation between the detection of a bubble and next month’s abnormal return. Assuming again an idiosyncratic return volatility of about 4%, the detection of a bubble is associated with an abnormal of 0.60% per month. The magnitude of these estimates is very similar to the abnormal return estimates provided in Table 1. We find that the occurrence of a crash during the previous 12 months relates negatively to next month’s returns. If the idiosyncratic return volatility is about 4%, the abnormal return is 0.20% per month lower. The Investor’s Asset Allocation Decision Based on our findings so far, an investor faces the trade-off between high abnormal returns and a high risk of a crash. Therefore, we focus on an investor with a mean-lower partial moment utility function as it better captures crash risk than, for example, a mean-variance framework. Following Fishburn (1977) and Harlow and Rao (1989) the investor maximizes:
⎧⎪R − γ(K − R )v p U Rp;ν, K = ⎨ p R ⎪⎩ p
(
)
for Rp ≤ 0 for Rp > 0,
(10)
where Rp is the return on the portfolio, γ is the coefficient of risk aversion, and K is the target rate of return. We typically choose ν = 2 so that the utility function is quadratic below the target rate of return K. This is in line with
The details of the derivation are discussed in the working paper. The estimates might slightly dier from the estimates discussed above since they are based on more elaborate models shown in the working paper. 5 6
34
AENORM
59
April 2008
Econometrics
decreasing absolute risk aversion (see Arditti, 1967) and ensures that marginal utility is positive. We define the portfolio return - in line with our analysis so far - as standardized abnormal returns. This assumption allows us to make a general statement, which is independent of the investor’s existing holdings and their factor exposures. Risk is deifned as the portfolio return Rp which falls below a certain threshold K. In line with the crash deifnition, we define K as a multiple of the abnormal return volatility σi, K = wσi k. The risk of the portfolio consists of two components, the probability that the return falls below the threshold as well as the crash size given that the crash occurs. As above, we measure crash size by the GDP. Maximizing the investor’s utility function with respect to the portfolio weight and substituting risk and return leads to the optimal weight allocated to the bubbly asset5:
w* (Z ) =
[
]
~ 1 Eη|Z (1 − ξ )(1 − 2ξ ) , ~ 2γ ⋅ σ i Pr η ≤ k | Z 2ν 2
[
]
(11)
where Z, a vector of explanatory variables, indicates that the expected abnormal returns and the crash probability depend on the detection of a bubble, its characteristics and past crashes. Since section 3.1 shows that the magnitude of crashes does not depend on the detection of a bubble, the parameters ξ and ν do not depend on Z. Intuitively, Equation 11 shows that the weight allocated to the bubbly industry increases in line with its expected return. It decreases as the probability of a crash, the idiosyncratic volatility or the investors risk-aversion increases. The previous sections provide empirical estimates for all variables in this equation except the risk aversion coefficient γ.6 Since previous literature does not provide any estimates, we calibrate γ to the market. Intuitively, an investor with an average risk-aversion should hold the market portfolio. To find out whether it is optimal for the investor to ride bubbles, sideline or trade against the overvaluation, we compute how the optimal weight allocated to the industry (w*) changes upon the detection of a bubble. Instead of a choice between the typical nonbubbly industry and the risk-free asset, the investor faces, upon the detection of a bubble, the choice between the typical bubbly industry and the risk-free asset. If the optimal weight increases substantially, we conclude that the investor rides bubbles in line with the theoretical predictions of Abreu and Brunnermeier (2003). It implies that the additional return she can earn during the next month more than outweighs the risk of a crash. If she does not or only barely change her port-
folio allocation upon the detection of a bubble, we conclude that she sidelines (e.g., Shleifer and Vishny, 1997; Dow and Gorton, 1994). If she substantially decreases her weight allocated to the industry upon bubble detection, we conclude that she trades, consistent with Fama (1965) against the overvaluation. Upon detecting a bubble in a typical industry, the investor changes her expected return from 2 basis points to 56 basis points. Also the likelihood of a crash increases from 2.7% to 5.7%. The expected crash size does not change but stays constant at -9.19%. In total, this leads the investor to increase her weight in the typical industry from 11% of her reference wealth to 134%. So, the weight allocated to the bubbly industry is much larger than the weight allocated to the non-bubbly industry. Our findings show that an investment in the bubbly industry is more attractive than an investment in the non-bubbly industry or in the risk-free asset. We therefore conclude that an investor would rather choose to ride a bubble than to sideline. We find no evidence that she would trade against bubbles. References Abreu, D. and Brunnermeier, M.K. (2003). Bubbles and crashes. Econometrica, 71(1), 173-204. Andrews, D.W.K. (1993). Tests for parameter instability and structural change with unknown change point. Econometrica, 61(4),821-856. Arditti, F.D. (1967). Risk and the required return on equity. Journal of Finance, 22(1), 19-36. Balkema, A. and de Haan, L. (1974). Residual life at great age. Annals of Probability, 2(5),792804. Brooks, C., Clare, A., Dalle Molle, J. and Persand, G. (2005). A comparison of extreme value theory approaches for determining Value-at-Risk. Journal of Empirical Finance, 12(2), 339-352. Campbell, J.Y. and Shiller, R.J. (1987). Cointegrating and tests of present value models. Journal of Political Economy, 95(5), 10621088. Carhart, M.M. (1997). On persistence in mutual fund performance. Journal of Finance, 52(1), 57-82. De Long, J.B., Shleifer, A., Summer, L.H. and Waldmann, R.J. (1990a). Noise trader risk financial markets. Journal of Political Economy, 98(4), 703-738.
The details of the derivation are discussed in the working paper. The estimates might slightly dier from the estimates discussed above since they are based on more elaborate models shown in the working paper. 5 6
AENORM
59
April 2008
35
Econometrics
De Long, J.B., Shleifer, A., Summers, L.H. and Waldmann, R.J. (1990b). Positive feedback investment strategies and destabilizing rational speculation. Journal of Finance, 45(2), 379395. Dow, J. and Gorton, G. (1994). Arbitrage chains. Journal of Finance, 49(3), 819-849. Fama, E.F. (1965). The behavior of stock market prices. Journal of Business, 38(1), 34-105. Fama, E.F. and French, K.R. (1997). Industry costs of equity. Journal of Financial Economics, 43(2), 153-193. Fishburn, P.C. (1977). Mean-risk analysis with risk associated with below-target returns. American Economic Review, 67(2),116-126. Guenster, N., Kole, E. and Jacobsen, B. (2008). Riding bubbles. Working paper, Maastricht University, Netherlands. Hansen, B.E. (2001). The new econometrics of structural change: Dating breaks in US labor productivity. Journal of Economic Perspectives, 15(4), 117-128. Harlow, W.V. and Rao, R.K. (1989). Asset pricing in a generalized mean-lower partial moment framework: Theory and evidence. Journal of Financial and Quantitative Analysis, 24(3), 285-311. Longin, F.M. (1996). The asymptotic distribution of extreme stock market returns. Journal of Business, 69(3), 383-408. Picklands, J. (1975). Statistical inference using extreme order statistics. The annals of statistics, 3, 119-131. Shleifer, A. and Vishny, R. (1997). The limits of arbitrage. Journal of Finance, 52(1), 35-55. Smith, R.L. (1987). Estimating the tails of probability distributions. The Annals of Statistics, 15(3), 1174-1207.
36
AENORM
59
April 2008
Econometrics
Modeling Current Account with Habit Formation The current account is one of the most prominent variables summarising a country’s economic relationship with the outside world. In particular, increased lending to developing countries in the eighties had led to the need to evaluate the sustainability of external debt levels and the idea of an intertemporally optimal current account deficit (Obstfeld and Rogoff, 1995). This notion became more significant in context of currency crises. Edwards (2001) points out that in the aftermath of the currency crises in the nineties current account sustainability became the centerpiece of policy debate. In order to clarify whether large current account deficits can cause or signal currency crises, it is necessary to have a model capable of generating current account levels that are consistent with some optimization framework.
Sergejs Saksonovs has a masters degree in Economics from the University of Cambridge and is a second year PhD student there. His research is on modeling current account and, most recently, on modeling net foreign asset position.
Currently the best candidate for such a framework is the intertemporal approach, which explains current account fluctuations by consumption smoothing motives of economic agents. However, before using it to judge the optimality of current account, it needs to be rigorously empirically tested. Empirical research on this question has lagged behind the theoretical literature (Bergin and Sheffrin, 2000). Formal statistical tests of the model tended to reject it and the models underpredicted volatility of the current account. Bergin and Sheffrin (2000) suggest that the poor performance of the most basic intertemporal models is because they do not account for external shocks frequently affecting small open economies. Another possible cause had been the assumption of time separable utility. Gruber (2004) acknowledges that habit formation, which relaxes that assumption, improves empirical performance. In this article the author creates and empirically evaluates the model allowing for variable interest rate and prices of traded goods as potential sources of external shocks as well as habit formation in the utility function. Deriving and Testing the Model
tion reflecting habit formation: ∞
U = Et
∑
k =0
C* 1 βk [ ( * t + k δ )1 − σ ] 1 − σ (Ct + k − 1)
(1)
where β is the discount factor (0 < β < 1) and α 1− α Ct* = CTt CNt is the aggregate consumption index of traded and nontraded goods, where α is the share of traded goods. The intertemporal elasticity of substitution (IES) for this functional form is 1/σ and δ is the parameter measuring the strength of habits (0 < δ < 1). With habit formation, consumers derive utility not only from the absolute level of consumption, but also from the change from a past quarter per capita consumption in the economy, which they take as given. There is a traded, riskless bond asset paying out a variable interest rate rt at the end of period t. The budget constraint (2) is given by:
Pt CNt + CTt + It + Gt + (Bt − Bt − 1) = Yt + rt Bt − 1
(2)
where CNt and CTt is the consumption of nontraded and traded goods, Pt is the relative price of non-traded goods in terms of traded goods, It and Gt is investment and government spending, and Yt is output. Bt are the holdings of the international asset. All quantities are in traded goods at the end of period t. Maximising the utility function (1) subject to the budget constraint (2) implies the following log-linearised Euler equation1.
Consumers maximise the lifetime utility func1
Detailed derivation is available from the author on request.
AENORM
59
April 2008
37
Econometrics
1 1 Et rt + 1 + (1 − )(1 − α) * σ σ (3) Et Δpt + 1 + ht + ξ where ht is the ‘habit term’ given by: ht = δ(1 − 1/σ)(∆ct − (1 − α)∆pt), ξ is a constant incorporating variances and covariances of model variables, which are assumed to be constant over time. This term is disregarded, because the author uses demeaned time series. Lower case letters denote the logarithms of their respective variables. The Euler equation (3) relates consumption to the expected interest rate, the expected change in the price of non-traded goods and the habit term. Its significance for our model of the current account is due to it describing the process of consumption. The first two terms are described in Bergin and Sheffrin (2000). To illustrate the contribution of this article, (3) can then be rewritten expanding the definition of the habit term: Et Δct + 1 =
Et Δct + 1 =
∞
− Et
∑ β (Δno
∑ β (Δno i =1
(4)
The second term of (4) shows that an increase in the change of consumption from t to t−1 has two effects on the expected change in individual consumption. A higher level of current consumption lowers the marginal utility of future consumption, because the change between future and current consumption is now lower for any given level of future consumption. Thus the expected change in consumption falls. The quantitative effect is determined by the habit parameter and the intertemporal elasticity of substitution. However, a higher level of current consumption also means that an increase in future consumption is necessary to keep the future utility the same because of habit formation. This increases the expected future consumption. The overall effect of an increase in current consumption on future consumption is thus determined by the strength of habits and the intertemporal elasticity of substitution. If the intertemporal elasticity of substitution is less than one, then an increase in current consumption increases future consumption. The final term of (4) shows the effects of the change in relative price of non-traded goods from period t − 1 to period t on expected consumption growth. The intertemporal effect of an expected rise in the relative price of nontraded goods increases consumption expenditure by an elasticity of 1/σ(1 − α). If this effect had now occurred and ∆pt > 0, then in the next period agents consume less non-traded goods by the same elasticity, modified by the habit para-
t + i − Δct + i ) = not − ct
(5)
where not = log NOt = log(Yt − It − Gt). Substituting (4) into (5) one obtains:
− Et 1 )Δpt + 1 σ
i
i =1
∞
1 1 Et rˆt + 1 + δ(1 − )Δct − σ σ δ(1 - α)(1 -
meter δ, thus increasing the expected change in consumption. The intratemporal effect of the expected rise in relative price of nontraded goods decreases consumption expenditure. If this effect had now occurred and ∆pt > 0, agents substitute back from non-traded goods to traded goods, increasing the consumption expenditure by 1 − α and decreasing the expected change in consumption. After summing the budget constraint (2) and log-linearising around the steady state where net foreign assets are zero, following Huang and Lin (1993) one can obtain2:
i
t +i −
1 ˆ rt + i − ht + i − 1) = not − ct (6) σ
(6) provides the testable implication of the model. The right-hand side of the equation is a transformed representation of the current account with its components in log form, which one can label CAt = not − ct. This variable is henceforth referred to as the ‘current account’. The first implication of (6) is that if net output is expected to fall, the current account rises as agents smooth consumption, and vice-versa. The second left-hand side term means that a rise in the ‘consumption-based interest rate’ increases the current account by inducing consumption below its smoothed level. The third left-hand side term of (6) reflects the influence of habits. They affect the current account in the same direction to that in which they affect consumption. One can further assume that β = 1/(1 + r ), where r is the average world interest rate over time, thus there is no consumption tilting across countries and over time. (6) can now be rewritten: Et CAt + 1 = (1 + r )CAt + Et (Δnot + 1 −
1 ˆ rt + i − ht ) (7) σ
In (7), the expected value of the current account in the next period includes the current period interest rate payment on the current account. In addition, from (6) it is known that the current account is supposed to be equal to a negative discounted sum of the future expectations of the relevant information set in all future periods. If the difference between them is nonzero, then bringing the right-hand side of this equation to the left, one obtains CAt + Et (Δnot+1 − (1/σ) r̂ t+1 - ht), included into the next period
Detailed derivation is available from the author. An important assumption in the log-linearisation is to rule out perpetual accumulation of debt or assets. This is justified by the marginal utility of current consumption always being positive and the fact that assets cannot be consumed. 2
38
AENORM
59
April 2008
Econometrics
Country
No Habits
No Habits - PD
Habits
Habits - PD
USA
0.1 ≤ 1/σ ≤ 0.3
NA
0.1 ≤ 1/σ ≤ 0.4, 0.2 ≤ δ ≤ 0.5
0.1 ≤ 1/σ ≤ 0.7, 0.2 ≤ δ ≤ 1
UK
NA
0.1 ≤ 1/σ ≤ 0.7
NA
0.1 ≤ 1/σ ≤ 0.8, 0.1 ≤ δ ≤ 1
Japan
NA
0.1 ≤ 1/σ ≤ 0.3
0.1 ≤ 1/σ ≤ 0.2 δ=1
0.1 ≤ 1/σ ≤ 0.3 δ=1
Canada
NA
0.1 ≤ 1/σ ≤ 0.3
0.1 ≤ 1/σ ≤ 0.4, 0.1 ≤ δ ≤ 0.4
0.1 ≤ 1/σ ≤ 0.6, 0.1 ≤ δ ≤ 1
France
NA
NA
NA
NA
Germany
NA
NA
NA
NA
Italy
NA
NA
0.1 ≤ 1/σ ≤ 0.5, 0.2 ≤ δ ≤ 0.6
0.1 ≤ 1/σ ≤ 0.6, 0.1 ≤ δ ≤ 1
Table 1: Summary of the Acceptance Regions for the G7 Countries
expected current account. To test restrictions implied by (6), one needs a method of generating expectations. The author uses an unrestricted vector autoregression (VAR), which in matrix notation is given by: xt = Bxt−1 + et, with xt - the variable vector, B - the coefficient matrix and et is the vector of errors, assumed to be white noise, but possibly correlated across the equations. Let li be a row vector with its ith element being equal to one and all the other elements being zero. After some algebra3, one can express the restrictions of the intertemporal model as l2 = −((I1 − (1/σ)l3)βB − βI4(I − βB)−1. It can be tested empirically in three ways. The first method converts the restriction into a linear one. The second method implements likelihood ratio test, estimating the likelihood function of the VAR first with no restrictions and then with model restrictions imposed. The third test is based on the fact that if the intertemporal model determines the current account, then the difference between the forecasted current account and its actual value is unpredictable given the relevant information set. The same tests with appropriate modifications are used to test the model without habit formation. Data Collection and Analysis The author collects the required data series of G7 countries over the period from the first quarter of 1970 to the final quarter of 2005 mostly from the IMF International Financial Statistics and OECD Main Economic Indicators. The real interest rate for each country ri,t is computed using: ri,t = ii,t − Et−1пi,t, where it is the nominal interest rate and Et−1пt are inflation expectations, generated using autoregressive equations. The world real interest rate rt is calculated as the real GDP weighted average of G7 real interest rates. The author uses the real exchange rate for the relative price of nontraded goods. For the GDP data and its compo3 4
nent parts, all series are taken in a seasonally adjusted form and in constant prices. The series are also adjusted for population changes by dividing by total population. Ghosh (1995) separates the ‘consumptionsmoothing’ and ‘consumption-tilting’ components of the current account. The measure used in this paper CAt = not − ct corresponds to the ‘consumption-smoothing’ component, which arises due to the consumers’ desire to use the current account as a buffer to smooth consumption in the presence of transitory shocks to income. ‘Consumption-tilting’ component arises when the world interest rate differs from the subjective rate of time preference (households become net creditors when the interest rate exceeds the rate of time preference and vice versa). In this paper, the rate of time preference is assumed to be equal to the average world interest rate, eliminating consumption-tilting. For the value of α = 0.5 the author follows an estimate from Stockman and Tesar (1995). This article considers a range of values both for the IES: 0.1 < 1/σ < 1 and the habit parameter: 0 ≤ δ ≤ 1. Testing the Model Significance of Habits
and
Assessing
the
The main results of the statistical tests4 are summarised in Table 1. Since the predictable deviations test often differs from the Wald and the likelihood ratio tests, it is summarised in separate columns (denoted by -PD). Table 1 provides the values of the parameters, for which the null hypothesis of our model being true is not rejected. Where the likelihood ratio and the linear Wald test differ, the more restrictive alternative is given. ‘NA’ means that the hypothesis was rejected for any value of the parameters. The indicated regions, where the model was not rejected, do not mean that the hypothesis is accepted for any combination of the values of the parameters within the bounds
The steps omitted are valid, when all the series in x vector are stationary. Detailed results available from the author on request.
AENORM
59
April 2008
39
Be reken de i nvloed van h e t r i j g e d r a g v a n j o n g e r e n o p d e p r em i e van hun autover zeker i ng .
Jeugdige overmoed leidt nogal eens tot onnodige
strategieën. We werken voor toonaangevende be-
schade aan mens en materieel. Voor een verzeke-
drijven, waarmee we een hechte relatie opbouwen
ringsmaatschappij roept dat vragen op. Is er verschil
om tot de beste oplossingen te komen. Onze manier
tussen mannen en vrouwen? Tussen de ene en de
van werken is open, gedreven en informeel. We zijn
andere regio? En wat betekent dat voor de premies?
op zoek naar startende en ervaren medewerkers, bij
Bij Watson Wyatt kijken we verder dan de cijfers. Want
voorkeur met een opleiding Actuariaat, Econometrie
cijfers hebben betrekking op mensen. Dat maakt ons
of (toegepaste) Wiskunde. Kijk voor meer informatie
werk zo interessant en afwisselend. Watson Wyatt
op werkenbijwatsonwyatt.nl.
adviseert ondernemingen en organisaties wereldwijd op het gebied van ‘mens en kapitaal’: verzekeringen, pensioenen, beloningsstructuren en investerings-
40
AENORM
59
April 2008
Watson Wyatt. Zet je aan het denken.
Econometrics
IES
USA
UK
Japan
Canada
France
Germany
Italy
0.1
27.96***
4.45
11.62***
10.82**
24.12***
6.34*
10.89**
0.2
28.43***
4.49
12.03***
10.64**
23.62***
6.09
10.69**
0.3
29.06***
4.69
12.00***
9.56**
22.73***
5.17
10.49**
0.4
***
30.47
5.36
11.13
8.95
**
21.59
3.58
10.67**
0.5
33.80***
7.37*
9.20**
8.59**
20.84***
1.97
12.01***
0.6
40.13
***
12.84
7.16
10.32
***
21.47
1.48
15.30***
0.7
48.67***
25.45***
8.76*
13.95***
23.15***
2.46
20.01***
0.8
53.21
44.57
19.32
15.80
***
22.96
3.37
22.29***
0.9
40.20***
42.35***
28.03***
12.11***
18.70***
2.21
16.57***
***
***
***
**
*
***
**
**
***
Table 2: Testing the Significance of the Habit Term Country
Autoregressive
NH-PF
NH - EX
H - PF
H - EX
United States
0.0088
0.0148
0.0099
0.0105
0.0093
Japan
0.0107
0.0264
0.0111
0.0238
0.0105
Canada
0.0145
0.0131
0.0138
0.0107
0.0131
Italy
0.0161
NA
NA
0.0087
0.0155
Table 3: RMSE Comparison for Various Predicted Current Account Formulations
shown, but only for some of them. The range of IES (from 0.1 to 0.4) outlined by the linear Wald and likelihood ratio tests confirms that the intertemporal elasticity of substitution is likely to be not very far from zero. In particular, if the minimisation of the test statistic is a valid criterion, then for all four countries the value of IES is around 0.1. This is consistent with Bergin and Sheffrin (2000), who estimate the value of IES to be low and, also with Hall (1988). The range of values for the habit parameter is more heterogenous. By the criterion of the minimum test statistics, habits are at their lowest in Japan (δ = 0.1) and the highest in Italy (δ = 0.4) with Canada and the United States being the intermediate cases. Lower values of the habit parameter are consistent with, for example, Naik and Moore (1996), who use data on food consumption for estimating the value of the habit parameter for the United States. There are two countries for which all three tests reject the null hypothesis for all of the parameter values considered in the grid - France and Germany. For the UK, only the predictable deviations test does not reject the model for some parameters. The predictable deviations test for the UK exhibits an interesting pattern of not rejecting progressively higher values of the habit parameter together with progressively higher values of the IES. The author performed several robustness checks for these three countries by examining the likelihood ratio test results with varying parameters. For the UK the test statistics increase as the share of traded goods is increased and vice versa. That movement is rather slow and the first failure to reject the hypothesis at 10% only appears when the share of traded goods is reduced to a rather implausible 0.3 (with 1/σ =
0.4 and δ = 0.1). Including values of IES that are lower than 0.1 with α = 0.5 did not affect the results. In case of France and Germany, even with the share of traded goods reduced to 0.1, the likelihood ratio test still rejects the hypothesis with all parameter values. Considering values of the IES smaller than 0.1 also did not help. For Germany, the possible cause is a major, arguably relatively unexpected shock of the unification. For France and the United Kingdom, there are no obvious analogues to this event, but one could conjecture that one of the reasons behind statistical rejections is due to some structural breaks in the processes generating expectations over the long time period considered. Table 2 summarises the results of the significance tests of the habit term for different countries and different values of the IES, which enters the consumption-based interest rate additively. The test statistic is χ2-distributed with three degrees of freedom, with *** denoting rejection of the null hypothesis of habits not being significant at 1%, ** at 5% and * at 10% significance. Habits are generally significant for the majority of countries and the majority of values of the intertemporal elasticity of substitution. For Germany habits are not significant for all the values of IES except 0.1 and for the United Kingdom, habits are not significant for low values of IES (from 0.1 to 0.4). This is consistent with poor performance of the intertemporal model with habits for those countries. To assess the significance of habits one can consider predictions of the current account with and without habits (H-EX and NH-EX) using (2.7) and compare them with actual data. The comparison is done using the root mean square error (RMSE) between the prediction and the
AENORM
59
April 2008
41
Econometrics
actual values. To isolate the influence of expectations, the author also considers the case of perfect foresight - that is generating predicted current account series by replacing the expectations with the actual variables (H-PF and NH-PF). A final benchmark considers an autoregressive specification for the current account of the form CAt = (1 + r )CAt-1 to assess the contribution of additional terms in (7), since from (7), it might appear that most of the fit of the model comes from including the lagged value of the current account for the one period prediction. In order to determine the values of 1/σ and δ, the author picks the minimum values of the test statistic from the linearWald test. This means that the values of the intertemporal elasticity of substitution are the same for all countries and equal to 0.1, but the values of the habit parameter are different ranging from 0.1 for Japan to 0.4 for Italy. The results are summarised in Table 3. Table 3 confirms that the prediction with expectations outperform the prediction with actual levels for the United States. However the simple autoregressive specification gives an even lower RMSE than the expectations case, albeit only a slightly lower one. The RMSE for the model with no habits and expectations is slightly larger than for the one with habits and expectations. Japan in Table 3 illustrates vividly that expectations play a crucial role in the model. Both with and without habits the RMSE of the predictions based on perfect foresight exceed those based on expectations by about a factor of two. In addition, the specification with habits and expectations produces a slightly lower RMSE than the autoregressive one, although the difference between habits, no habits and autoregressive specifications is very slight. For Canada in Table 3 all four model based specifications outperform the simple autoregressive one. The specifications with habits outperform those of perfect foresight. Interestingly however, for Canada the specifications with perfect foresight outperform those with expectations and the difference is larger for the case of habits. For Italy, Table 3 does not give the RMSE of the model with no habits since it was rejected by all three statistical tests. The model with habits outperforms the autoregressive specification. However, perfect foresight for Italy again outperforms expectations by an even bigger margin then that for Canada. In conclusion, our results indicate a mixed, but an overall favorable empirical picture for the intertemporal model with habit formation being important. One limitation is that our model assumes that countries have no problems financing their intertemporally optimal current account deficits or surpluses by selling or buy-
42
AENORM
59
April 2008
ing necessary quantities of the homogenous international riskless bond on the international financial market. This is not the case in reality, where it are the problems with financing current account deficits that can cause currency crises. It would be useful to consider the ‘financial side’ of the model, which determines how international financial markets finance current account deficits. Financial markets do not operate with a single, homogenous international asset and asset structure is likely to be an important determinant of financing for the current account deficit as well as its size. We have established the empirical validity of the intertemporal model of the current account. In order to make it more relevant to policymaking, it is necessary to include considerations of asset structure and a more detailed picture of international financial markets. References Bergin, P.R. and Sheffrin, S.M. (2000). Interest Rate, Exchange Rates and Present Value Models of the Current Account. The Economic Journal, 110, 535–558. Edwards, S. (2001). Does the Current Account Matter? NBER Working Paper Series, 8275. Ghosh, A.R. (1995). International Capital Moblity Amongst the Major Industrialized Countries: Too Little or Too Much? The Economic Journal, 105(428), 107–128. Gruber, J.W. (2004). A present value test of habits and the current account. Journal of Monetary Economics, 51, 1495–1507. Hall, R.E. (1988). Intertemporal Substitution in Consumption. The Journal of Political Economy, 96(2), 339–357. Huang, C.H. and Lin, K.S. (1993). Deficits, government expenditures, and tax smoothing in the united states: 1929 - 1988. Journal of Monetary Economics, 31(3), 317–339. Naik, N.Y. and Moore, M.J. (1996). Habit Formation and Intertemporal Substitution in Individual Food Consumption. The Review of Economics and Statistics, 78(2). Obstfeld, M. and Rogoff, K. (1995). The Intertemporal Approach to the Current Account, Elsevier, chapter 34, 1731–1799. Stockman, A.C. and Tesar, L.L. (1995). Tastes and Technology in a Two-Country Model of the Business Cycle:Explaining International Comovements. The American Economic Review, 85(1), 168 –185.
De Wil® van Marine Regnault-Stoel, Consultant Investment Solutions
‘Steeds meer uit mezelf halen. Om te groeien als mens.’
www.aegon.nl
Werken bij AEGON “Ik wil mensen om me heen die me inspireren. Ik wil uitgedaagd worden om het beste uit mezelf te halen en te groeien. Steeds nieuwe dingen leren. Bij AEGON heb ik die werkomgeving gevonden. AEGON heeft een cultuur waarin ik geboeid blijf en steeds weer word uitgedaagd. Ik werk bij AEGON bij het bedrijfsonderdeel Asset Management. Een interessant werkterrein waar ik mijn actuariële kennis over verzekeringsverplichtingen en mijn kennis van vermogensbeheer kan combineren. Die komen in mijn functie mooi
samen. Net als meer collega’s binnen het bedrijfsonderdeel Asset Management volg ik de driejarige opleiding aan het CFA Institute. Ik krijg hier de ruimte om steeds nieuwe dingen te leren. AEGON wil natuurlijk ook iets van mij. AEGON verwacht dat ik nieuwsgierig ben en blijf. En een bijdrage lever in kennis en ervaring, leren van elkaar. AEGON verwacht ook dat ik bijdraag aan een leuke en inspirerende werkomgeving. Het is geven en nemen. Ik ben heel tevreden.”
AENORM
59
April 2008
43
Actuarial Sciences
History and challenges of the European Financial Integration process1 The integration process among the European countries is not just from the recent past, but was initiated long before, both for political and economical reasons. The actual historical roots of the European Union lie just after the Second World War. In order to prevent wars like these happening again, the European countries needed to come together, starting with the age-old opponents of France and Germany. This led to the “Schuman declaration” on May 9, 1950, which is considered to be the birth of the European Union as we know it now, and is called Europe Day for this reason.
The monetary integration process is somewhat different. It’s history is just as long, but let me jump to 1979 directly.3 March 1979 was the start of the European Monetary System (EMS) with the goal to create a zone of monetary stability, consisting of all EU members. However, not all of these members joined the cornerstone of the EMS, namely the Exchange Rate Mechanism (ERM). The ERM kept each currency within a certain band defined by a grid of rates for the various pairs of currencies that could only be changed by mutual consent. In the beginning of 1992 it looked like the ERM would slowly converge to the EMU. However, stabilized expectations changed dramatically after the Danish rejected their participation with the EMU through a referendum. This moment is usually indicated as the trigger that initiated the ERM crises in 1992-1993. As a consequence, most currencies came under attack and the UK Pound and Italian Lira even left the ERM system. The start of third stage of EMU took place at 1 January 1999 by fixing the exchange rates of the eleven participating countries that fulfilled the convergence criteria (Austria, Belgium, Finland, France, Germany, Ireland, Italy, Luxembourg, the Netherlands, Portugal and Spain). Two years later Greece also adopted the euro, while Denmark, Sweden and the U.K. chose not to join the EMU. As of 1999 one central bank (the European Central Bank) has been responsible
Gerard Moerman2 received his Master’s degree in Econometrics from the Erasmus University Rotterdam in 1999 and defended his Ph.D.thesis in 2005 at the same university. In 2003 (jan-aug) he also worked at the European Central Bank as an economist at the General Economic Research Department. As of February 2005 he joined AEGON Asset Management as Investment Strategist, where he currently manages the Global TAA portfolio.
for deducting the monetary policy in the euro area in cooperation with the central banks of the member states (called European System of Central Banks). The euro coins and paper were introduced on 1 January 2002, three years after fixing the exchange rates. Consequences for financial markets These changes in Europe are very important for both academics and practitioners, since they play an important role for stock picking and portfolio construction. Let me give you an example on the latter one. Due to the harmonization of monetary and policy rules and the elimination of exchange rate risk, the characteristics of financial markets in the euro area have been changing. In other words, investors and researchers cannot base their expectations on the (long) historical evidence of these
This article is largely based on the Ph.D. thesis of Gerard Moerman that he defended in June 2005 at the RSM Erasmus University and is called: “Empirical Studies on Asset Pricing and Banking in the Euro Area”. A copy of the thesis is available from the author or ERIM. Some of the chapters are also available online through: http://papers. ssrn.com/author_id=90298 2 Gerard Moerman (gmoerman@aegon.nl) is affiliated as an investment strategist at AEGON Nederland N.V., business unit AEGON Asset Management, The Hague. The opinions expressed in this article are those of the author and do not necessarily represent those of AEGON Nederland N.V. 3 Gros and Thygesen (1998) give an overview of the total history of monetary policy in Europe starting with the European Payments Union (1950) as a first step towards convertibility. 1
44
AENORM
59
April 2008
Actuarial Sciences
markets, because the structural changes have a clear impact on the characteristics of the markets. For example, industry information has become more valuable in terms of portfolio diversification benefits than country information, especially after the introduction of the euro, which contrasts significantly with the literature of the 90’s. Therefore, investors should change their view in the euro area to a sector-based approach. Most institutional investors, which are the biggest investors in the euro area, have already changed their view into a sector-based approach. As a consequence, euro area portfolio managers are nowadays tracking sector indices instead of country indices. Figure 1 depicts the evidence of this. Corporate Finance theory tells us that we can calculate the maximal attainable return for any specified risk level given the expected return and risk characteristics of the investment opportunities. This is usually visualised through a mean-variance efficient frontier. Figure 1 visualises this based on MSCI country indices and MSCI industry indices. For example, the solid line shows that maximal attainable return based on country indices alone given the desired risk level. All lower outcomes are possible as well, but never above the efficient frontier. The dashed line based on industry indices alone is clearly above the country efficient frontier, while the best strategy is to diversify over both types of indices. This picture presents evidence that diversifying along the industrial scale could give better results than by diversifying along geographical lines. The traditional top-down approach (divide money over different countries and let the country portfolio manager do the stock-picking for that country) is therefore outdated in the euro area. A clear change compared to the 80’s and 90’s!4 Challenges for further European financial integration The most interesting part of the European integration process is that it is far from finished. Sofar, the introduction of the euro and the European Central Bank has been the biggest accomplishment that is reached, but many more changes have to follow these important steps. Amongst economists, there is a strong perception that Europe has to reform the labour market in order to stimulate growth and perhaps become a more important player in the global economy. The European Commission is currently working on the reforms and proposed a new constitution as a basis for their reforms. However, in the summer of 2005 the French and Dutch citizens voted against the introduction of this new constitution. This reminds us that we cannot compare Europe or the euro area with the US. Europe is based on a lot of different countries that each have their own language, 4
source: Moerman (2005) Figure 1: This figure plots the mean-variance frontiers for three investment categories. The solid line (red) represents all investment possibilities when only country indices are considered. The dashed line (blue) is the mean-variance frontier for the industry indices. The dotted line (black) considers both types of indices.
culture and desires. Hence, it will be a difficult task to get all member states in line and make a progression in the harmonization and (labour market) reforms. Europe still has a route to go! References De Grauwe, P. (2003). Economics of Monetary Union, Oxford University Press, Oxford. Gros, D., and Thygesen, N. (1998). European Monetary Integration, Longman, London. Moerman, G.A. (2005). Empirical Asset Pricing and Banking in the Euro Area, Ph.D. thesis, Erasmus University Rotterdam, RSM Erasmus University. Moerman, G.A. (2008). Diversification in Euro Area Stock Markets: Country vs. Industry, Journal of International Money and Finance, forthcoming.
See Moerman (2008) for a more detailed discussion.
AENORM
59
April 2008
45
Actuarial Sciences
Market value of life insurance liabilities under Chapter 11 Bankruptcy Procedure The topic of insolvency risk in connection with life insurance companies has recently attracted a great deal of attention. Since the 1980s a long list of defaulted life insurance companies throughout the world has been reported. Table 1 lists some exemplary bankruptcies of life insurance companies in the United States1. All these defaulted companies filed for Chapter 11 Bankruptcy Procedure.
American defaulted Co.
Year
Days in default
Executive Life Ins. Co.
1991
462
First Capital Life Ins. Co.
1991
1669
Monarch Life Ins. Co.
1994
392
ARM Financial Group
1999
245
Penn Corp. Financial Group
2000
119
Conseco Inc.
2002
266
Metropolitan Mortgage & Securities
2004
n/a
Table 1: Some defaulted companies in the United States
insurance
In the U.S. Bankruptcy Code there are two procedures: Chapter 7 and Chapter 11 bankruptcy2. It is generally assumed that a firm is in financial distress when the value of its assets is lower than the default threshold. With Chapter 7 bankruptcy, the firm is liquidated immediately after default, i.e., no renegotiations or reorganizations are possible. With Chapter 11 bankruptcy, first the reality of the financial distress is checked before the firm is definitively liquidated, i.e., the defaulted firm is granted some “grace” period during which a renegotiation process between equity and debt holders may take place and the firm is given the chance to reorganize. If the firm is unable to recover during this period, then it is liquidated. Hence, the firm’s asset value can cross the default
An Chen received her Ph.D in Economics from the University of Bonn and works now as a postdoctoral fellow at Secion Actuarial Science of the University of Amstedam. Her research interests range widely, from market-consistent valuation of life insurance liabilities to exotic options. In a recent project, she presents results for pricing life insurance contracts by using utility indifference pricing under general utility functions. At the moment, she is also very interested in some extensions of Parisian options.
threshold without causing an immediate liquidation. Thus, the default event is only signalled. In table 1, the “grace” period lasted from 119 days up to 1669 days. As these examples show, it is important to consider bankruptcy procedures that are explicitly based on the time spent in financial distress and to include such a “grace” period into the model if one wants to capture the effects of an insurance company’s default risk on the value of its liabilities and on the value of the insurance contracts more realistically. In the present article, we construct a contingent claim model along the lines of Briys and de Varenne (1994, 1997) and Grosen and Jørgensen (2002) for the valuation of the liability of a life insurance company where the liability consists only of the policy holder’s payments. Their main contribution is to explicitly consider default risk in a contingent claim model to value the equity and the liability of a life insurance company. In Briys and de Varenne (1994, 1997), default can only occur at the maturity date, whereas in Grosen and Jørgensen (2002) default can occur at any time before the maturity date, i.e., they introduce the risk of a prema-
These data are taken from Lynn M. LoPucki’s Bankruptcy Research Database, http://lopucki.law.ucla.edu/index. htm. 2 Similar bankruptcy laws are also applied in Japan, France and Germany... 3 Bernard et al. (2005a) recently extended this model by taking into account stochastic interest rates. 1
46
AENORM
59
April 2008
Actuarial Sciences
ture default to the valuation of a life insurance contract3. Grosen and Jørgensen (2002) model default and liquidation as equivalent event ( → Chapter 7 Bankruptcy Procedure). Their approach does not reflect the reality well because default and liquidation cannot be considered as equivalent events. We therefore extend their model in order to be able to capture the effects of the Chapter 11 Bankruptcy Procedure and to study the impact of a delayed liquidation on the valuation of the insurance company’s liabilities and on the ex–ante pricing of the life insurance contracts. We do this by using so–called Parisian barrier option frameworks. Here we distinguish between two kinds of Parisian barrier options: standard Parisian barrier options and cumulative Parisian barrier options. Assume, we are interested in the modelling of a Parisian down–and–out option. With standard Parisian barrier options, the option contract is knocked out if the underlying asset value stays consecutively below the barrier for a time longer than some predetermined time d before the maturity date. With cumulative Parisian barrier options, the option contract is terminated if the underlying asset value spends until maturity in total at least d units of time below the barrier. In a corporate bankruptcy framework these two Parisian barrier options have appealing interpretations. Think of the idea that a regulatory authority takes its bankruptcy filing actions according to a hypothetical default clock. In the case of standard Parisian barrier options, this default clock starts ticking when the asset price process breaches the default barrier and the clock is reset to zero if the firm recovers from the default. Thus, successive defaults are possible until one of these defaults lasts d units of time. One may say that in this case the default clock is memoryless, i.e., earlier defaults which may last a very long time but not longer than d do not have any consequences for eventual subsequent defaults. In the case of cumulative Parisian barrier options, the default clock is not reset to zero when a firm emerges from default, but it is only halted and restarted when the firm defaults again. Here d denotes the maximum authorized total time in default until the maturity of the debt. This corresponds to a full memory default clock, since every single moment spent in default is remembered and affects further defaults by shortening the maximum allowed length of time that the company can spend in default without being liquidated.4 Thus, in the limiting case when d is set equal to zero (or is going to zero), we are back in the model of Grosen and Jørgensen (2002). Our model therefore encompasses that of Grosen and Jørgensen (2002) and also those of Briys and de Varenne (1994, 1997). Both kinds of Parisian options are of course not new in the literature on exotic options. They have been in4
troduced by Chesney et al. (1997) and subsequently developed further in Hugonnier (1999), Moraux (2002), Anderluh and van der Weide (2004) and Bernard et al. (2005b). The remainder of this article is structured as follows. In Section 2, we establish the model setup, particularly the contract payoff and the default and liquidation mechanism with the help of Parisian options. Section 3 deals with the valuation of the issued contract under and the following section demonstrates some numerical results. Section 4 concludes. Model setup Consider an insurer operating on the time horizon [0, T]. At time 0, the insurer issues a participating equity-linked contract to a representative policyholder who pays an upfront premium L0. The insurer also receives an amount of initial equity contributions E0 at time 0. Consequently, the initial asset value of the insurer is given by A0 = L0 + E0. From now, we shall denote L0 = αA0 with α ∈ (0, 1). The initial capital structure of the insurer is summarized in the table below: Asset
Liability
Equity
A0
L0 = αA0
E0 = (1 - α)A0
As a compensation to their initial investments L0 and E0, the policy and equity holder acquire a claim on the firm’s assets at or before maturity T depending on the insurer’s solvency status. Indeed if liquidation does not occur on [0, T], the total payoff to the policyholder at maturity T, νL(AT), is given by: ⎧ ⎪ AT if AT < LT ⎪⎪ L if LT ≤ AT ≤ T ν L (AT ) = ⎨LT α ⎪ ⎪L + δ (αA − L ) if A > LT T T T ⎪⎩ T α LT = L0egT corresponds to the guaranteed minimum payment at maturity where g is the guaranteed minimum rate of return. If the final asset’s value AT < LT , the company is bankrupted at maturity T. The priority of policyholders implies that they get the full remaining asset’s value. Here, δ(αAT − LT) represents the bonus payment to the policyholder as a fraction of the residual surplus adjusted by the policyholder’s share α in the insurer’s initial capital and a participation rate δ. This bonus is paid if the company has enough benefits. This payoff is depicted in Figure 1 and can also be rewritten as: ν L (AT ) = LT + δ (αAT − LT ) − (LT − AT ) +
+
The real life bankruptcy procedures lie somewhere in between these two extreme cases.
AENORM
59
April 2008
47
Actuarial Sciences
Figure 1: The payoff ΨL(AT) to the policyholder given no premature liquidation
Figure 2: Default and liquidation under Standard Parisian framework
where the bonus payment appears to be a calloption and the short put option −(LT − AT)+ comes from the equity holder’s limited liability5. We now turn to the case when a premature liquidation occurs. As mentioned in the introduction, we generalize the model of Grosen and Jørgensen (2002) in order to allow for Chapter 11 bankruptcy. This can be realized by adding a Parisian barrier option feature instead of the standard knock–out barrier option feature to the model. Along the lines of Grosen and Jørgensen (2002), in order to meet the time-increasing guaranteed amount, we assume an exponential barrier:
liquidation. The liquidation of the firm is declared when the financial distress has lasted consecutively at least d, i.e. at time TB− . In a Cumulative Parisian framework, the options are lost by their owners when the underlying asset has stayed below the barrier for at least d units of time during the entire duration of the contract. Therefore, the options do not lose their value when the following condition holds:
Bt = ηL0egt, where η is a regulation parameter. The requirement A0 > B0 = L0 must be satisfied initially so that the firm is still solvent at the contract–issuing time. In what follows, we distinguish between standard and cumulative Parisian options. In the standard Parisian down–and–out option framework, the final payoff νL(AT) is only paid if the following technical condition is satisfied: TB− = inf{t > 0 | (t − gBA,t )1{AT < BT } > d} > T with gBA,t = sup{s ≤ t | AS = BS }, where gBA,t denotes the last time before t at which the value of the assets A hits the barrier B. TB− gives the first time at which an excursion below lasts more than d units of time. In fact, TB− is the liquidation date of the company if TB− < T. Figure 2 illustrates a simulation for one possible default and liquidation evolution according to Standard Parisian options6. It is observed that premature default lasting less than a period of length d is possible and leads to no premature 5 6
48
We use (x)+ to denote max(x, 0). In the figure, we use a constant barrier level.
AENORM
59
April 2008
Γτ−,B :=
T
∫1 0
{ At ≤ BT } dt < d
where Γτ−,B denotes the occupation time of the process describing the value of the assets { AT }t ∈[0,T ] below the barrier B during [0, T]. Let т be the premature liquidation date, then it shall hold: Γτ−,B :=
τ
∫1 0
{τ ≤T }1{ At ≤ BT } dt = d
If Figure 2 is taken as an example, the liquidation time т shall be somewhere before TB− , when the firm’s asset stays in total exactly d units of time below the barrier. Upon liquidation, a rebate for the liability holder is introduced to the model and it has the form of: Θ L (τ ) = min{LT , AT }, where т is the liquidation time. The rebate term implicitly depends on the regulation parameter η. Because of the following inequality: Aт ≤ Bт = ηLт , it is observed that for η < 1, the rebate corresponds to the asset value Aт. Both Parisian barrier option features could lead to the result that at the liquidation time the asset price falls far below the barrier value, which makes it im-
Actuarial Sciences
Figure 3: Fair combination of δ and g for different (Standard Parisian case)
possible for the insurer to offer the rebate as in Grosen and Jørgensen (2002), which corresponds to the barrier level. Valuation This section aims at valuing the issued life insurance contract. In general, we assume a continuous–time frictionless economy with perfect financial market, no tax effects, no transaction costs and no other imperfections for the valuation framework. Hence, we can rely on martingale techniques for the valuation of the contingent claim. Under the equivalent martingale measure, the price process of the insurance company’s assets { AT }t ∈[0,T ] is assumed to follow a geometric Brownian motion: dAt = At(rdt + σdWt),
where r denotes the deterministic interest rate, σ the deterministic volatility of the asset price process {A_t}_{t∈[0,T]} and {W_t}_{t∈[0,T]} a standard Brownian motion under Q. The price of the issued life insurance contract is determined by the expected discounted payoff under the equivalent martingale measure. In the standard Parisian barrier framework, the value is given by

V_L(A_0, 0) = E_Q[ e^{-rT} ( δ[αA_T − L_T]^+ − [L_T − A_T]^+ + L_T ) 1_{T_B^- > T} ] + E_Q[ e^{-r T_B^-} min{ L_{T_B^-}, A_{T_B^-} } 1_{T_B^- ≤ T} ].
Various approaches have been applied to the valuation of standard Parisian products, such as Monte Carlo algorithms (Andersen and Brotherton-Ratcliffe (1996)), binomial or trinomial trees (Avellaneda and Wu (1999), Costabile (2002)), PDEs (Haber et al. (1999)), finite-element methods (Stokes and Zhu (1999)) or the implied barrier concept (Anderluh and van der Weide (2004)). In this article, we adopt the original Laplace transform approach initiated by Chesney et al. (1997). Later, in the numerical analysis, we rely on the recently introduced and more easily implementable procedure of Bernard et al. (2005b) for inverting the Laplace transforms. They approximate the Laplace transforms needed to value standard Parisian barrier contingent claims by a linear combination of fractional power functions in the Laplace parameter. The inverse Laplace transforms of these functions are well-known analytical functions; due to linearity, the required inverse Laplace transforms are therefore obtained by summing the inverse Laplace transforms of the approximating fractional power functions. Similarly, we obtain the following present value of the liability, i.e. of the contract issued to the policy holder, in the cumulative Parisian framework:
V_L^C(A_0, 0) = E_Q[ e^{-rT} ( δ[αA_T − L_T]^+ + L_T − [L_T − A_T]^+ ) 1_{Γ_{T,B}^- < d} ] + E_Q[ e^{-rτ} Θ_L(τ) ].
It is observed that the price of this contingent claim consists of four parts: a Parisian down-and-out call option with strike L_T/α (multiplied by δα), i.e. the bonus part; a Parisian down-and-out put option with strike L_T; a deterministic guaranteed part L_T, which is paid at maturity when the value of the assets has not stayed below the barrier for a time longer than d; and a rebate paid immediately when the liquidation occurs.
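As a rough illustration of how such a value can be estimated by simulation (one of the approaches listed above), the sketch below prices the standard Parisian contract by brute-force Monte Carlo with discretely monitored excursions. It is a hedged sketch, not the Laplace-transform method used for the article's results: the guaranteed amount is assumed to grow as L_t = L_0 e^{gt}, the monitoring grid is coarse, and discretisation bias is ignored; all function names are illustrative.

```python
import numpy as np

def parisian_liquidation_index(A, B, dt, d):
    """Index of the first time an excursion below the barrier reaches length d, or None."""
    current = 0.0
    for i, below in enumerate(A < B):
        current = current + dt if below else 0.0
        if current >= d:
            return i
    return None

def price_standard_parisian_contract(A0, L0, alpha, delta, g, r, sigma, eta, T, d,
                                     n_steps=600, n_paths=20000, seed=1):
    """Crude Monte Carlo estimate of V_L(A0, 0) in the standard Parisian framework."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    B = eta * L0 * np.exp(g * t)          # exponential barrier B_t
    L = L0 * np.exp(g * t)                # assumed guaranteed amount L_t = L0 * exp(g*t)
    payoffs = np.empty(n_paths)
    for p in range(n_paths):
        z = rng.standard_normal(n_steps)
        A = A0 * np.exp(np.concatenate(([0.0],
            np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))))
        i = parisian_liquidation_index(A, B, dt, d)
        if i is None:                     # no liquidation: maturity payoff
            AT, LT = A[-1], L[-1]
            payoff = delta * max(alpha * AT - LT, 0.0) - max(LT - AT, 0.0) + LT
            payoffs[p] = np.exp(-r * T) * payoff
        else:                             # liquidation: rebate min(L_tau, A_tau), paid at tau
            payoffs[p] = np.exp(-r * t[i]) * min(L[i], A[i])
    return payoffs.mean()

# Example call with the parameter set used later in the article (delta and g chosen arbitrarily):
# print(price_standard_parisian_contract(100, 80, 0.8, 0.9, 0.02, 0.05, 0.2, 0.8, 12, 1))
```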
The results of Hugonnier (1999) and Moraux (2002), together with some newly derived extensions, are used to value the cumulative Parisian claims7.

Numerical results

This section determines the fair premium implicitly, through a fair combination of the parameters according to the fair contract principle. A contract is called fair if the accumulated expected discounted premium equals the accumulated expected discounted payments of the contract under consideration. This principle requires equality between the initial investment of the policy holder and his expected benefit from the contract, i.e. the value of the contract equals the initial liability: V_L(A_0, 0) = αA_0 = L_0. This equation holds for both standard and cumulative Parisian barrier claims. Henceforth, we mainly look at the fair combination of δ and g for various parameter constellations. Throughout the numerical analysis, we fix the following parameters:
7 We refer the reader to Chen and Suchanecki (2007) for a detailed valuation of the contract in both the standard and the cumulative Parisian framework.
Figure 4: Fair combination of δ and g for different d (Standard Parisian case)
A_0 = 100; L_0 = 80; α = 0.8; r = 0.05; η = 0.8; T = 12; σ = 0.2; d = 1. We start our analysis with two graphs for the standard Parisian case. The relation between the participation rate δ and the minimum guarantee g for different volatilities is demonstrated in Figure 3. First, there is a clear negative relation between the participation rate and the minimum guarantee (decreasing concave curves), which results from the fair contract principle. Similarly to Grosen and Jørgensen (2002), for smaller values of δ (δ < 0.83) either higher values of g or of δ are required for a higher volatility in order to make the contract fair; for higher values of δ (δ > 0.83) this effect is reversed. As the volatility goes up, the value of the Parisian down-and-out call increases, while the value of the Parisian down-and-out put first increases with the volatility and then decreases (hump-shaped). The value of the fixed payment goes down, and the rebate term behaves similarly to the Parisian down-and-out put, i.e. it goes up at first and then goes down after a certain level of volatility is reached. For low values of δ the fixed payment dominates, so a positive relation between δ and σ (and between g and σ) results; the reversed effect is observed for high values of δ. A volatility-neutral fair combination of (δ*, g*) ≈ (0.83, 0.033) is therefore observed.

Figure 4 exhibits how the contract value changes with the length of excursion d. Obviously, a positive relation exists between the Parisian down-and-out call and the length of excursion (positive effect): the longer the allowed excursion, the larger the value of the option. In fact, the value of the call does not change much with the length of excursion once a certain level of d is reached, i.e. the value of the Parisian down-and-out call is a concave increasing function of d. The put option changes with the length of excursion in a similar way: it increases with d, but the extent to which it increases becomes smaller after a certain level of d is reached. The fixed payment arises only when the asset price process does not stay below the barrier for a time longer than d.
Figure 5: Fair combination of δ and g for different volatilities σ (Cumulative Parisian case)
Figure 6: Fair combination of δ and g for different d (Cumulative Parisian case)
Hence, as d goes up, the probability that the fixed payment will become due increases, and consequently the expected value of the fixed payment rises; its magnitude is bounded from above by the payment L_T e^{-rT}. In contrast, the rebate payment arises only when the insurance company is liquidated, i.e. when the asset price process stays below the barrier for a period longer than d. Therefore, the longer the length of excursion, the smaller the expected rebate payment.

The cumulative Parisian down-and-out call, the down-and-out put and the fixed payment assume smaller values than the corresponding standard Parisian contingent claims. This is due to the fact that, for the same parameters, the knock-out probability is higher in the cumulative case. This is quite intuitive: the knock-out condition for standard Parisian barrier options is that the underlying asset stays consecutively below the barrier for a time longer than d before the maturity date, while the knock-out condition for cumulative Parisian barrier options is that the underlying asset value spends in total d units of time below the barrier before maturity. In contrast, the expected cumulative rebate part of the payment assumes larger values, because it is contingent on the reversed condition compared to the other three parts of the payment. Moreover, the total effect of these other parts together usually dominates that of the rebate.

Figure 5 depicts how the participation rate δ (or the minimum guarantee g) varies with the volatility. The figure is very similar to Figure 3.
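The fair combinations shown in Figures 3 to 6 can in principle be traced out numerically: for a fixed guarantee g one searches for the participation rate δ at which the contract value equals the initial liability L_0 = αA_0. The bisection sketch below illustrates the idea, assuming a pricing routine such as the hypothetical Monte Carlo pricer sketched earlier; in practice a smooth (e.g. Laplace-transform based) valuation, or at least a fixed random seed, is needed for the root search to be reliable.

```python
def fair_delta(g, price_fn, L0=80.0, lo=0.0, hi=1.5, tol=1e-3):
    """Bisection for the participation rate delta such that V_L(A0, 0) = L0 = alpha * A0.

    `price_fn(delta, g)` is assumed to return the contract value and to be
    monotone in delta (a higher participation rate makes the contract more valuable).
    """
    f_lo = price_fn(lo, g) - L0
    f_hi = price_fn(hi, g) - L0
    if f_lo * f_hi > 0:
        raise ValueError("no fair delta in the bracket [lo, hi]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = price_fn(mid, g) - L0
        if f_lo * f_mid <= 0:       # root lies in the lower half
            hi = mid
        else:                       # root lies in the upper half
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Usage sketch, wrapping the hypothetical Monte Carlo pricer from the previous sketch:
# price_fn = lambda delta, g: price_standard_parisian_contract(
#     100, 80, 0.8, delta, g, r=0.05, sigma=0.2, eta=0.8, T=12, d=1)
# print(fair_delta(g=0.02, price_fn=price_fn))
```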
Figure 6 illustrates the effect of the length of excursion d on the fair combination of δ and g. As in the standard Parisian case (cf. Figure 4), the parameter d does not have a big influence on the fair combination of δ and g, although its influence is bigger than in the standard Parisian case. All four parts of the payment change with d similarly to the standard Parisian case: the cumulative Parisian down-and-out call, the cumulative Parisian down-and-out put and the expected fixed payment go up when d is increased (positive effect), while the opposite is true for the rebate part (negative effect). However, the magnitude of the changes in the values is bigger.

Conclusion

In the present article, we extend the model of Grosen and Jørgensen (2002) and investigate how to value an equity-linked life insurance contract when taking into account default risk (and liquidation risk) under different bankruptcy procedures. In order to capture the realistic Chapter 11 bankruptcy procedure, these risks are modelled in both the standard and the cumulative Parisian framework.

References

Anderluh, J. and van der Weide, H. (2004). Parisian options - the implied barrier concept. In: Bubak, M., van Albada, G.D., Sloot, P.M.A. and Dongarra, J.J. (eds.), Computational Science - ICCS 2004, Lecture Notes in Computer Science, Springer, 851-858.

Andersen, L. and Brotherton-Ratcliffe, R. (1996). Exact Exotics. Risk, 9(10), 85-89.
Avellaneda, M. and Wu, L. (1999). Pricing Parisian-style options with a lattice method. International Journal of Theoretical and Applied Finance, 2(1), 1-16.

Bernard, C., Le Courtois, O. and Quittard-Pinon, F. (2005a). Market value of life insurance contracts under stochastic interest rates and default risk. Insurance: Mathematics and Economics, 36, 499-516.

Bernard, C., Le Courtois, O. and Quittard-Pinon, F. (2005b). A new procedure for pricing Parisian options. The Journal of Derivatives, Summer 2005, 45-53.

Briys, E. and de Varenne, F. (1994). Life insurance in a contingent claim framework: pricing and regulatory implications. Geneva Papers on Risk and Insurance Theory, 19(1), 53-72.

Briys, E. and de Varenne, F. (1997). On the risk of insurance liabilities: debunking some common pitfalls. The Journal of Risk and Insurance, 64(4), 673-694.

Chen, A. and Suchanecki, M. (2007). Default risk, bankruptcy procedures and the market value of life insurance liabilities. Insurance: Mathematics and Economics, 40, 231-255.

Chesney, M., Jeanblanc-Picqué, M. and Yor, M. (1997). Brownian excursions and Parisian barrier options. Advances in Applied Probability, 29, 165-184.

Costabile, M. (2002). A combinatorial approach for pricing Parisian options. Decisions in Economics and Finance, 25(2), 111-125.

Grosen, A. and Jørgensen, P.L. (2002). Life insurance liabilities at market value: an analysis of insolvency risk, bonus policy, and regulatory intervention rules in a barrier option framework. Journal of Risk and Insurance, 69(1), 63-91.

Haber, R.J., Schönbucher, P.J. and Wilmott, P. (1999). Pricing Parisian options. The Journal of Derivatives, 6(3), 71-79.

Hugonnier, J.N. (1999). The Feynman-Kac formula and pricing occupation time derivatives. International Journal of Theoretical and Applied Finance, 2(2), 153-178.

Moraux, F. (2002). On pricing cumulative Parisian options. Finance, 23, 127-132.

Stokes, N. and Zhu, Z. (1999). A finite element platform for pricing path-dependent exotic options. Proceedings of the Quantitative Methods in Finance Conference, Australia.
Econometrics
Education and Human Capital: The Empirical Survey The aim of this paper is to discover, identify and describe the economic mechanisms linking education with unemployment. An attempt at measuring the strength of these relations is made, using tools based on Ross Quinlan's data-mining algorithms. The analyzed data comprise the unemployment rate and selected macroeconomic indicators linked directly to educational and active labour market policies; overall, 20 different indicators for 13 European countries are used, covering the period from 1999 to 2005. The paper begins with a short discussion of selected theoretical aspects of the analyzed issue, followed by the presentation of the obtained results. Finally, a summary of the findings is provided.
Krzysztof Karbownik is a graduate student at the Warsaw School of Economics and a teaching assistant in the Institute of Econometrics, Division of Decision Support and Analysis. His core interests focus on labour economics, management science, econometrics and data mining. E-mail contact: kkarbownik@gmail.com
Theory

Various theories of the labour market distinguish different factors that affect demand and supply on the labour market and, consequently, the unemployment rate. Among them are the neoclassical theories, such as the Theory of the Natural Rate of Unemployment or the Job Search Theory, in which these factors can be described as actively limiting the unimpeded functioning of the labour market, e.g. the cost of employment or a lack of information. On the other hand, theories based on Keynesian economics, such as the Efficiency Wage Theory or the Insider-Outsider Theory, point out indicators restricting demand as important, e.g. overvalued remuneration or the division of the market into sectors. These two main strands in the study of the labour market also differ in their evaluation of the purposefulness of governmental or institutional intervention: approaches stemming from classical economics more often call for actions that eliminate barriers to the unimpeded functioning of the labour market1. The problem of the methods and scope of intervention in the labour market has been widely studied, including research on the effectiveness of so-called ALMPs (Active Labour Market Programmes), which include job
placement and the creation of workplaces, together with vocational training and the activation of the unemployed. The effectiveness of active labour market policy, viewed as action aimed at decreasing unemployment and evening out maladjustments on the labour market, is a subject of both theoretical and empirical research.

Heckman et al. (1999) surveyed the effectiveness of two aspects of such programmes, i.e. the benefits for programme recipients and the benefit for society as a whole. Their analysis used data on classroom training aimed at providing participants with the skills necessary for particular jobs, subsidies to private firms for the provision of on-the-job training, training in job-hunting, as well as subsidized employment with public or private employers and in-kind subsidies to job search. The programmes' effectiveness was defined in several different ways, yet in each case both the private and the social benefits proved to be low. One possible explanation for this result is that the actions surveyed were directed mainly at low-skilled and unqualified people. Further analysis also indicated that the programmes do not have a significant influence on lowering the unemployment rate.

Martin (2000) analyzed the influence of labour market policies on the unemployment level on the basis of OECD data. Similarly, he points out that recent surveys do not give a sufficient foundation to the claim that these measures are effective. Furthermore, in comparison to passive labour market policies, not enough financial funding is provided to active labour market policies. The author argues that disregarding those means while attempting to overcome long-term
1 Such limitations might be, for instance, a lack of information or the existing costs of employment (see Salop, 1996).
unemployment could prove disastrous. Moreover, in the study the countries are differentiated on the basis of their economic performance; because of their nature, the means discussed may not be effective when stagnation and a low labour force supply are encountered. Finally, however, the author urges accepting labour market policies as a potentially important tool in overcoming unemployment.

Calmfors and Lang (1995) demonstrated in their study on the influence of ALMPs on the employment equilibrium that ALMPs might be an effective tool for decreasing unemployment, provided they are directed at the long-term unemployed or at new entrants to the labour market. Saint-Paul (1996) analyzed the effectiveness of labour market policy and showed that both intervention and ALMPs might in some cases cause results contrary to what was expected, i.e. increase the unemployment rate2. An unfavourable influence of ALMPs on the labour market occurs when they increase the ratio of skilled to unskilled workers3. As an effective means of decreasing unemployment, the author points to an increase in the productivity of low-qualified employees, which might be achieved by raising the overall quality of the education system: rather than targeting only certain groups, improving the human capital of the whole society is postulated.

Saint-Paul's findings prompted other authors to empirically verify the hypothesis of a link between the labour market – mainly the unemployment rate – and a society's level of education. The impact of an individual's education on both their employment status and their activity rate is significant and can be observed on a quantitative basis. Mincer (1991) confirmed the negative correlation between the probability of job loss and the level of education, while demonstrating that the influence of education on the length of an unemployment spell is much weaker. The natural consequence of the influence of an individual's education on their employment should be a negative relation between aggregate unemployment rates and the level of a society's education. Phelps et al. (2000) examined the link between changes in the educational structure of societies and changes in aggregate unemployment. Education's influence on the labour market – mainly on unemployment and wages – was also examined by Lange and Topel (2006), Card (1999) and Mincer (1991). The indicators used to quantify the level of education were, for instance, years of schooling and educational attainment as the percentage of the adult population at different levels of education. The results obtained by Mincer (1991) point out that the probability of being unemployed
decreases as the level of education increases.

Data and Tools

Data from Eurostat's website covering 13 countries over the period 1999-2005 were used. The set of countries is as follows: Austria, Denmark, Finland, France, Germany, Great Britain, Greece, Holland, Ireland, Italy, Portugal, Spain and Sweden. The initial selection of variables for the model was broad, as the data mining tools used in the survey allow, but it was later narrowed by the limited access to the data and its incompleteness. Thus, the following variables influencing the unemployment rate and related to the level of human capital were used:

- The labour market:
  - Unemployment rate
  - Activity rate
  - Employment rate
  - Monthly minimum wage in Euro
  - Labour productivity per hour
  - Average labour market exit age
  - Aggregated variable covering Eurostat's labour market policies (from 2 to 7)
  - Aggregated variable covering Eurostat's labour market policies (from 2 to 9)
- Expenditures on R&D:
  - Total R&D expenditure as a % of GDP
  - Total R&D expenditure as a % of total government expenditure
  - Total civil R&D appropriations as a % of GDP
- Education:
  - Total public expenditure on education as a % of GDP
  - Total civil expenditure on education appropriations as a % of GDP
  - Students aged 15-24 as a % of the corresponding age population
  - Lifelong learning: percentage of the population aged 25-64 participating in education and training
  - Early exit from education: % of the population aged 18-24 with at most the lowest level of education and not continuing education or training
  - Number of people continuing education at the 5th and 6th level of education according to the EU classification (academic education)
- International exchange:
  - Export/import volume ratio
  - Terms of trade
  - Annual inflation rate

In all the analyses, a variable identifying classes of the unemployment rate, obtained by automatic discretization, was used.
2 An example of a negative influence of intervention on the labour market might be inappropriate regulation of employee lay-offs; such restrictions lead to an increase in the demand for skilled employees and a decrease in the demand for unskilled employees, which increases unemployment in the latter group. 3 The author proved that in this case the aggregate unemployment rate increases, as does unemployment in the group of unskilled people.
The algorithm divided the continuous variable of the unemployment rate into three classes corresponding to low, average and high unemployment. The results of the division are presented in Table 1 below. Other variables were either discretized or used in their continuous form, depending on the need.

Unemployment class | Low | Average | High
Unemployment rate | (-inf, 6.4] | (6.4, 8.5] | (8.5, inf)

Table 1: Discretization of the continuous variable into three classes.
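A minimal sketch of this discretization step, written in Python/pandas rather than with the WEKA filter actually used in the study; the cut points 6.4 and 8.5 are taken from Table 1 and the example rows are hypothetical.

```python
import pandas as pd

def unemployment_class(rate):
    """Map the continuous unemployment rate (in %) to the three classes of Table 1."""
    if rate <= 6.4:
        return "Low"
    elif rate <= 8.5:
        return "Average"
    return "High"

# Hypothetical country-year observations, for illustration only
data = pd.DataFrame({
    "country": ["Austria", "Spain", "France"],
    "year": [2003, 2003, 2003],
    "unemployment_rate": [4.3, 11.5, 8.3],
})
data["unemployment_class"] = data["unemployment_rate"].apply(unemployment_class)
print(data)
```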
The use of data mining tools for the analysis of unemployment is a modern approach. Models describing unemployment are usually built on the basis of traditional econometric methods or by employing general systems theory. Those methods have, however, certain limitations which do not occur in the data mining analysis: the construction of econometric models requires the specification of the equations' analytical form, which forces the modeller to assume stability of the parameters characterizing the processes described. Granger and Timmermann (1999) noted the advantages of using non-parametric methods in situations where the information about the modelled phenomenon is incomplete. In the case of unemployment, such methods allow the identification of relations occurring in the data without building a model based on a fixed labour market theory. The final data set contained 91 instances and 20 attributes, which were processed with the NNge4, J485 and RIDOR6 classifiers.

"Labour market policies are a potentially important tool in overcoming unemployment"

Model results

The first stage of the analysis employed the NNge algorithm with 20-fold cross-validation. It provided weights for all variables, which can be interpreted as an indication of their importance for the unemployment rate: the higher the weight, after normalization, the stronger the influence of the variable on the resulting unemployment class allocation. The highest weight was noted for the activity rate, followed by the employment rate and the number of students continuing academic education. Furthermore, all the variables related to education and R&D expenditures proved more important than the indicators characterizing the labour market policies. The international exchange indicators showed no significant influence on the problem surveyed. The resulting kappa7 statistic was relatively high, namely at the level of 0.55, and the classification accuracy equalled 72.5%. The algorithm had, however, a tendency to underestimate the unemployment rate.

In the second stage of the analysis, the J48 algorithm was used. The decision tree designated the activity rate, the minimum monthly wage in Euro, the total R&D expenditure as a % of total government expenditure, the average labour market exit age and lifelong learning (the percentage of the population aged 25-64 participating in education and training) as the most important variables. The algorithm classified 83.52% of the instances correctly, with a kappa statistic of 0.74. The obtained decision tree can be presented as a set of decision rules which give all the conditions for the allocation to the leaves; for instance, if the activity rate is below 60% and the average labour market exit age is below 60 years, then the unemployment rate ranges between 5 and 10%, etc.

The third stage of the analysis was based on the RIDOR classifier. A set of 10 decision rules was developed, given as a basic rule and the exceptions to it. The basic rule dealt with the allocation to the high unemployment class. It depended on the activity rate, the labour market policy expenditures, the total civil expenditure on education appropriations in % of GDP, the terms of trade, the minimum monthly wage in Euro, the average labour market exit age and the employment rate. According to the rules generated, the lowest unemployment rate (below 6.4%) is indicated by the activity rate, the total civil expenditure on education appropriations in % of GDP, the terms of trade and the employment rate. Thus, it can be claimed that the more active the people on the labour market are, the higher employment in the economy is, the more the country trades and the more private money is spent on education, the lower unemployment will be.
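For readers who want to reproduce the flavour of these experiments, the sketch below trains a CART decision tree (scikit-learn's analogue of the C4.5-based J48 used in the article, not the same algorithm), reports cross-validated accuracy and Cohen's kappa, and prints the tree as a rule set. The arrays X and y are assumed to hold the 91 country-year observations and their discretized unemployment classes.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate_tree(X, y, feature_names, max_depth=4, n_folds=20, seed=0):
    """Cross-validated accuracy and kappa for a small decision tree, plus its rules."""
    tree = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
    # 20-fold cross-validation, mirroring the set-up described in the article
    y_pred = cross_val_predict(tree, X, y, cv=n_folds)
    acc = accuracy_score(y, y_pred)
    kappa = cohen_kappa_score(y, y_pred)
    tree.fit(X, y)                 # refit on all data in order to print the rule set
    rules = export_text(tree, feature_names=list(feature_names))
    return acc, kappa, rules

# Usage (X: 91 x 20 array of indicators, y: unemployment classes, names: column labels):
# acc, kappa, rules = evaluate_tree(X, y, names)
# print(f"accuracy = {acc:.3f}, kappa = {kappa:.3f}")
# print(rules)
```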
4 Algorithm based on the Non-Nested Nearest Neighbourhood algorithm. 5 Tree based on the C4.5 algorithm. 6 Algorithm based on the Ripple Down Rules methodology; it generates a basic rule and then successive exceptions to it. 7 A statistic which measures the level of agreement between expected and observed values in the data set.
The obtained rules had an accuracy and kappa statistic of 85.71% and 0.78, respectively.

Summary

In this paper, the results of an analysis of the relations existing in the labour market, especially those between the unemployment rate and other characteristics, were examined. The investigation of the links between the unemployment rate and other factors, apart from its natural dependence on the employment and activity indices, indicated that the factors related to education and to expenditures on R&D are most significantly correlated with the phenomenon. The link between ALMPs and the unemployment level, however, proved rather weak. All methods used in the analysis showed substantial agreement in the obtained results. An analysis of stability over time could be one proposal for further research in the field. Also, the verification of the revealed patterns between unemployment and other characteristics on a wider set of data, going beyond European countries, might produce relevant and interesting results.

Bibliography

Calmfors, L. (1995). Labour market policy and unemployment. European Economic Review, 39, 583-592.

Card, D. (1999). The causal effect of education on earnings. In: Handbook of Labor Economics, Elsevier B.V., Amsterdam.

Granger, C. and Timmermann, A. (1999). Data mining with local model specification uncertainty: a discussion of Hoover and Perez. The Econometrics Journal, 2, 220.

Heckman, J.J., Lalonde, R.J. and Smith, J.A. (1999). The economics and econometrics of active labour market programmes. In: Handbook of Labor Economics, Elsevier B.V., Amsterdam.

Lange, F. and Topel, R. (2006). The social value of education and human capital. In: Handbook of the Economics of Education, Elsevier B.V., Amsterdam.

Martin, J. (2000). What works among active labour market policies: evidence from OECD countries' experiences. OECD Economic Studies.

Mincer, J. (1991). Education and unemployment. NBER Working Paper 3838, Cambridge.

Phelps, E., Francesconi, M., Orszag, J.M. and Zoega, G. (2000). Education and the Natural
Actuarial Sciences
Replicating Portfolios for Insurance Liabilities In this article I will discuss a recent development in the risk management of insurance companies, namely replicating portfolios for insurance liabilities. This development is a next step for improving the asset liability management of insurance companies and integrating them fully in today’s financial markets.
“Vanishing Swaps, Asian Basket Options, Double Knock Outs and CMS Caps; no science fiction titles but products traded in today’s financial markets. …But where these products have been invented only recently by investment banks, insurance companies have offered these derivatives as part of their products for decades.” This is how I started my previous article in Aenorm (vol. 50, 2005), in which I argued for insurance companies to embrace market consistent valuation and risk management. Since then, with Solvency II taking shape, life insurers have been spending an increasing amount of time on building an infrastructure for risk-based solvency reporting. Furthermore, the emergence of European Embedded Values (EEV) has increased the industry’s awareness of options and guarantees in insurance products.

Moving towards risk-based solvency measurement includes building models to calculate the fair value, or equivalently the market consistent value, of insurance liabilities. Risk analysis requires a market consistent balance sheet under different economic scenarios. Having such an infrastructure not only allows for timely reporting but also provides unique insights into the portfolio, which allows insurers to better manage their business on an economic basis1.

The outline of the remainder of this article is as follows: first, I discuss replicating portfolios as a representation of insurance liabilities; second, I explain the need for having such a representation; third, I discuss replicating portfolios in practice; finally, I conclude.
David Schrager holds a Ph. D. in Quantitative Economics from the University of Amsterdam. As a professional he has worked on ALM at Nationale-Nederlanden and derivatives pricing at ABN Amro. Currently, he works on market risk and economic capital for Corporate Insurance Risk Management at ING. The views expressed in this article are those of the author and are not necessarily shared by ING.
What is a replicating portfolio?

As mentioned in the introduction, insurance products share many characteristics with standard derivative contracts. Take for example profit sharing contracts, where profit sharing takes place when returns are high but not when returns are low. This is very similar to call options on a stock or payer swaptions2; see also Bouwknegt and Pelsser (2001). Similarly, a guarantee in a unit-linked contract is nothing less than a put option on the underlying investment funds. Seen through the eyes of a financial specialist, many features of insurance contracts can be translated into financial products. Taken a bit further, this insight can be used to let liabilities be represented by a portfolio of financial products in risk calculations as well. If insurance contracts share so many characteristics with certain derivative contracts, why not capture the risk profile of insurance liabilities by mapping them onto a set of standard financial instruments?

One of the techniques used by insurance companies to obtain market consistent values for their liabilities is based on a set of risk-neutral scenarios3. Under these scenarios4 the liability cash flows are calculated,
1 Instead of an accounting basis, which might not give the correct incentives to produce shareholder value. 2 A payer swaption is an interest rate derivative which pays out when interest rates are above a certain strike level. 3 Valuation using a set of stochastic risk-neutral scenarios is referred to as valuation using Monte Carlo simulation. In general, analytical valuation or more efficient valuation techniques are preferable to Monte Carlo simulation; however, this technique fits in nicely with the traditional Embedded Value projection systems companies have been using for some time. 4 For finding a replicating portfolio one can also use real-world scenarios instead of risk-neutral ones.
Figure 1. Economic Capital calculations without and with replicating portfolios. In the former, time consuming scenario based methods need to be employed for the Fair Valuation (equivalently Market Consistent valuation) of insurance liabilities. Since all components of the replicating portfolio can be valued using simple formulas, in the latter method, the calculation of the Value at Risk or Tail Value at Risk is simplified considerably.
discounted and averaged to give an estimate of the market consistent value of the liabilities. However, we can also use the information provided in those cash flows in a different way. We can define a replicating portfolio as a portfolio of standard financial instruments which matches the cash flows generated by the liabilities as well as possible. It is a key ingredient of the approach that these standard instruments are well understood, easy to value and also easy to produce cash flows for. Finding the replicating portfolio then reduces to some form of cash flow matching optimization problem; see also Oechslin et al. (2007).

Why replicating portfolios?

If this can be accomplished using actual liability portfolios, it would mean a significant simplification of all calculations involving insurance liabilities. Normally, liabilities would have to be valued using time-consuming Monte Carlo simulations under every scenario a risk manager would like to consider. These are typically many different scenarios (thousands even, for many risk factors) in an Economic Capital calculation, a solvency measure based on fair values of assets and liabilities which is typically intended to equal a 1-year Value at Risk at a certain confidence level. This is hardly possible to do in practice: running the scenarios through a liability model typically takes hours, which means that evaluation of portfolios over more than 10,000 scenarios is virtually impossible.
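In its simplest form, the cash flow matching problem mentioned above can be written as a least-squares regression of the liability cash flows, stacked over scenarios and time buckets, on the cash flows of the candidate instruments. The sketch below shows this unconstrained version only as an illustration; real applications add weights, constraints and regularisation, and the array names are assumptions.

```python
import numpy as np

def fit_replicating_portfolio(instrument_cashflows, liability_cashflows):
    """Least-squares weights for a replicating portfolio.

    instrument_cashflows: array of shape (n_scenarios * n_buckets, n_instruments),
        each column holding one candidate instrument's cash flows stacked over
        all scenarios and time buckets.
    liability_cashflows: array of shape (n_scenarios * n_buckets,) with the
        projected liability cash flows stacked the same way.
    """
    weights, residuals, rank, _ = np.linalg.lstsq(
        instrument_cashflows, liability_cashflows, rcond=None)
    return weights

# Usage sketch: columns could be zero-coupon bonds and swaptions of various
# maturities and strikes; the fitted weights define the replicating portfolio.
# w = fit_replicating_portfolio(X, y)
```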
If a liability portfolio can be reduced to simple financial instruments for which market information is available and for which analytical valuation formulas exist, then valuation becomes almost instantaneous and more sophisticated risk calculations become possible. See Figure 1 for a graphical explanation. The simplification comes in evaluating the liability in each 1-year scenario used to determine the VaR and hence Economic Capital.

Replicating Portfolios in action: an example of profit-sharing

After introducing the concept and its benefits, it is time to see whether this can actually work on real liability cash flows. Consider a regular profit sharing portfolio with part of the profit sharing over 3% and another part over 4%. We use the actual cash flows projected by the liability model of an ING business and, using the insights from Bouwknegt and Pelsser (2001), replicate using bonds and swaptions. The results are displayed in Figure 2 and Table 1. Figure 2 shows a scatter plot of portfolio cash flows vs. liability cash flows. Table 1 shows the value of the replicating portfolio under current market circumstances and a number of stress scenarios. The results are excellent: there is at most a 5% difference between the sensitivity of the replicating portfolio and the result produced by the internal model for market consistent valuation.
Not only does this significantly reduce the time needed to evaluate risk calculations; in addition, the replicating portfolio helps us understand profit sharing contracts in terms of financial instruments! Furthermore, the replicating portfolio can be used for liability driven investment and as a risk management tool: the financial risk under fair value accounting of these products can be hedged using swaptions and zero bonds.

Replicating Portfolios at ING

"Using a replicating portfolio is a technique with a lot of promise for improving practical risk models and ALM decisions"

Although replicating portfolios are interesting from a theoretical perspective, they are an extremely powerful tool in practice as well. At ING, all calculations for Economic Capital are based on replicating portfolios in ING's Economic Capital System (ECAPS). Because insurance liabilities can now be represented by simple, easy-to-value financial instruments, Value at Risk (VaR) calculations are executed using Monte Carlo simulations of economic scenarios. Using these Monte Carlo techniques allows for a much better calculation of diversification, both between different risk types and between different ING entities. Graphically, this process is represented in Figure 3, which shows the improved accuracy of the Economic Capital model. Furthermore, replicating portfolios can be very useful during ALM studies and can support hedging decisions. It is especially these decisions that require an understanding of insurance products in terms of financial products. These insights can also create a better understanding of insurance products during product design and enforce pricing of products in a way that is consistent with both the risk associated with these products and the potential hedge costs.
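Schematically, the computational gain looks as follows: once the liability is represented by instruments with (near-)closed-form prices, revaluing the balance sheet in each one-year scenario reduces to a few pricing calls, after which the VaR is an empirical quantile. This is my own schematic illustration, not ING's ECAPS implementation; `price_instruments` stands for whatever analytical pricing functions are available for the chosen instruments.

```python
import numpy as np

def economic_capital(weights, price_instruments, scenarios, asset_values, confidence=0.995):
    """Approximate a 1-year Value at Risk using a replicating portfolio.

    weights:            replicating-portfolio weights per instrument
    price_instruments:  function mapping one scenario to a vector of instrument prices
    scenarios:          iterable of 1-year market scenarios (rates, equity, ...)
    asset_values:       asset-side value per scenario, aligned with `scenarios`
    """
    liability_values = np.array([price_instruments(s) @ weights for s in scenarios])
    surplus = np.asarray(asset_values) - liability_values
    loss = surplus.mean() - surplus            # loss relative to the expected surplus
    return np.quantile(loss, confidence)
```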
Figure 2. Scatterplot of replicating portfolio cash flows against projected liability cash flows from a portfolio of fixed annuities with a minimum guaranteed rate. The fit is extremely good as evidenced by the “Goodness of Fit Statistics”.
Conclusion

In this article I have discussed replicating portfolios and their merits. I have argued that replicating portfolios are an important tool in risk-based solvency calculations where market
Life Traditional Portfolio: change in market value of the replicating and the target portfolio under different shock scenarios.

Shock | Replicating Portfolio | Internal model | Difference | % Difference
-200 bps | -1,257 | -1,195 | -62 | 5%
-100 bps | -509 | -508 | -1 | 0%
Non-parallel down shock (1) | -923 | -904 | -19 | 2%
EC size down shock | -781 | -780 | -1 | 0%
Current market value | 3,364 | 3,399 | -35 | -1%

(1) 150 bps down 10Y, 200 bps down 5Y and 115 bps down 1Y.

All numbers in millions. Table 1: Market value and sensitivities of the replicating portfolio vs. internal model outcomes. This shows that the replicating portfolio represents the risk profile of the cash flows very well.
Figure 3. Old method vs. current method (ECAPS). The latter is based on replicating portfolios. Because time consuming scenario based methods needed to be employed for the Fair Valuation (equivalently Market Consistent valuation) of insurance liabilities only simplified risk calculations and aggregation techniques could be employed. In the current method although an approximation is made by using the replicating portfolio instead of the “true” liabilities the diversification calculations can be done in a much more sophisticated way.
values of insurance liabilities are needed. In an example using an actual liability cash flow model of a profit sharing contract, I have shown that a simple replicating portfolio produces stunning results. Developments in the insurance industry show that this is a technique with a lot of promise for improving practical risk models and ALM decisions.

Literature

Bouwknegt, P. and Pelsser, A. (2001). Marktwaarde van Winstdeling. De Actuaris, March.

Oechslin, J., Aubry, O., Aellig, M., Käppeli, A., Brönnimann, D. and Tandonnet, A. (2007). Replicating embedded options in life insurance policies. Life & Pensions Magazine, February.

Schrager, D. (2005). Nog maar 1 optie… Aenorm, December.

Wilson, T. (2007). Insurance Economic Capital Framework. Presentation at the ING Investor Relations Symposium, September 20, London.
Econometrics
Valuation of long-term hybrid equity-interest rate options During the last decades the financial world has witnessed an enormous growth in the trading of derivatives1. Not only did the volumes increase massively, the complexity of some of the traded products truly went through the roof: as a consequence, the so-called ‘exotic’ derivatives market shifted from being a strange and rare category (the word ‘exotic’ says it all) to a multi-billion industry that currently forms a crucial part of the growth strategy of many investment banks and insurance companies. Often these products are developed by financial institutions to reduce the financial risks of their clients; however, they can also be used to create highly speculative positions, which can then result in large profits or big losses. The most recent example is probably the loss of approximately 5 billion euros reported by the French bank Société Générale as a consequence of a highly speculative position in futures contracts created by a fraudulent trader.
This recent booming has led to the situation that many financial institutions nowadays face complex risks, which are hard to value, let alone to manage. Appropriate risk management of such products therefore at least requires ‘realistic’ models to describe the stochastic nature of the involved market risks. One of the measures that can be used to characterize the risk of an investment is ‘volatility’, which roughly stands for the uncertainty in the fluctuations of financial assets such as stocks or interest rates. Therefore, volatility is usually one of the major determinants of the price of a financial derivative. The earliest and most common derivative flavors were the so-called plain vanilla options (just as vanilla ice cream); the holder of a vanilla call option has the right (but not the obligation!) to buy a certain amount of stock for the contract-specified price at some future date, while a put option gives the right to sell. The value of such a right is hence given by the corresponding option price. In times when there is a lot of uncertainty about future asset prices, such a right is more valuable than in less volatile times.

Exotic options

Apart from the vanilla call and put options, all kinds of other options are traded by financial institutions; in the early days of option trading, these other derivative flavors were so strange and rare that the market labeled them as ‘exotic’ options. In contrast, during the last decade the exotic derivatives market made a sharp U-
Alexander van Haastrecht has a Bachelor in Mathematics and a Master in Econometrics from the VU (both cum laude). Currently he holds a position as PhD-student at UvA’s Actuarial Science department (supervisor Antoon Pelsser), next to a parttime job as quantitative analyst at Delta Lloyd Insurance. The main parts of this article stem from a chapter of his Master’s thesis that was written under supervision of Ad Ridder.
turn and has transformed into a gigantic multi-billion industry! To determine the value of an ‘exotic’ derivative, option pricing models are used as an ‘extrapolation’ tool. Given the prices of liquid vanilla options that are available in the market, pricing models try to extract as much information as possible from the probability distributions that drive such prices and use this information to value exotic derivatives. To successfully incorporate the market information into a pricing model, one seeks internally consistent models that can explain the observed market prices of vanilla options on the same underlying asset.

Option pricing models

The starting point of valuations and risk management lies in option pricing models. At the foundation of such a model we need to make some modeling assumptions with respect to the financial underlying of the exotic option. The resulting choices are then combined in a derivative pricing model whose main goal is to determine the fair value of the option. Once
1 A financial derivative is a contract that derives its value from another asset, i.e. the underlying.
In this article I will try to explain some concepts behind some (hybrid) equity-interest rates option pricing models and discuss how some of the weaknesses of classical models can be improved.
The Black-Scholes model

In practice, the most widely applied model is probably still the celebrated Black-Scholes (1973) model, in which the underlying stock price is assumed to follow a geometric Brownian motion:

dS_t = r_t S_t dt + σ_t S_t dZ_t^S,   S_0 ≥ 0.

The main advantage (and key to the success) of this model is that it leads to an easy and fast calculation of many vanilla option prices, e.g. via the famous Black-Scholes call/put option pricing formula. Basically, the model has only one free parameter, the underlying stock volatility, which is assumed to be constant. This constant can, for example, be extracted from historical or vanilla option data.
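For reference, a minimal implementation of the Black-Scholes call price with constant rate and volatility (the put follows from put-call parity). This is standard textbook material rather than code from the article.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def black_scholes_call(S0, K, T, r, sigma):
    """Black-Scholes price of a European call with constant rate r and volatility sigma."""
    N = NormalDist().cdf
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

# Example: an at-the-money 10-year call with 20% volatility and a 5% rate
# print(black_scholes_call(S0=100, K=100, T=10, r=0.05, sigma=0.2))
```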
As convenient as this all may seem, the Black-Scholes model is severely misspecified:

• Volatility is not constant, it is stochastic: periods of high and low stock volatility alternate in financial markets (so-called volatility clustering). Figure 1 illustrates this effect: it plots the daily returns on the AEX stock index for the period August 1999 - December 2006, and the spread in the return realizations (i.e. the volatility) is clearly not constant.

• Interest rates are not constant, they are stochastic: during the life-time of the option the term structure of interest rates changes all the time, and in fact the biggest derivatives market is the interest rate market! Though for short equity options it is common practice and harmless to assume constant interest rates (the changes are relatively small in comparison to equity movements), doing the same for long-term equity options (for which the impact of stochastic rates is quite
Figure 1: Moving average of the volatility of a major stock index from July 2001 - July 2007.
Figure 2: three months EURIBOR interest rate for the period July 2001 - July 2007.
significant) can lead to serious hedging and pricing errors. Additionally, the incorporation of stochastic interest rates into an equity model also enables us to price so-called hybrid equity-interest rate options, i.e. options that explicitly depend on the joint evolution of the underlying and the interest rates. Especially for insurers, which have a lot of long-term hybrid options embedded in their pension or unit-linked contracts, such hybrid models are important for appropriate valuation and risk management of these products.

• Asset prices do not move smoothly, they jump: looking at sample paths of asset prices, one will notice that now and then asset prices experience a sudden market shock, for example the equity crash of 1987 or the smaller shocks due to the recent credit crunch.

The hybrid Heston-G2 model

In the previous section we discussed some empirical market phenomena that indicate that the Black-Scholes model is severely misspecified. We will show how to incorporate these aspects into a pricing model for long-term hybrid equity-interest rate options; we choose to extend the Black-Scholes model by incorporating stochastic volatility and interest rates. Though econometric research certainly indicates that short-term equity options (up to three months) should be priced using a model that somehow incorporates jumps in the price process, similar research also indicates that for the pricing of most long-term exotic options modeling stochastic
volatility (and interest rates) is of higher importance than modeling jumps. The explanation is as follows: though the asset price might jump over a short period of time, a large decrease (or increase) of the asset price over a longer period can equally well be modeled by repeated decreases of the asset price as by one single jump. Moreover, since a sum of independent jumps (with finite second moment) converges back to a lognormal (Black-Scholes) distribution, one cannot properly explain the long-term excess kurtosis and skewness implied by market prices by solely using a jump diffusion model. Thus, for the pricing of long-term equity options I think that one should definitely try to incorporate some kind of stochastic volatility mechanism instead of (or perhaps in combination with) a jump component. The excess kurtosis (“the wings”) and skewness (“the slope around the strike level 1.0”) for long-term options can be seen from the implied volatility plot in Figure 3.
Figure 3: EuroStoxx50 implied volatility surface for different strikes and maturities on 11 June 2007. The ‘quoted’ volatility (by market convention!) should be plugged into the Black-Scholes formula to obtain the corresponding market price of the vanilla call/put option (source: anonymous broker).

To be specific, we use a combination of two famous models for the pricing of hybrid equity-interest rate options: the Heston stochastic volatility equity model with a two-factor Gaussian (G2) interest rate model (also known as the Hull-White model). The so-called ‘hybrid Heston-G2 model’ (with v_t the instantaneous squared volatility and r_t the short interest rate) reads

dS_t = r_t S_t dt + √(v_t) S_t dZ_t^S,   S_0 ≥ 0,
dv_t = κ(θ − v_t) dt + ω √(v_t) dZ_t^v,   v_0 ≥ 0,
dr_t = (...) dt + λ dZ_t^λ + η dZ_t^η,   r_0 ≥ 0,

and can be seen to be driven by four (possibly correlated) stochastic factors. Hence it is easy to see that the Heston-G2 model generalizes the Black-Scholes model by making both volatility and interest rates stochastic. Moreover, by modeling a square-root process we ensure that the volatility cannot become negative.

Though the model is certainly richer than the Black-Scholes model, it is also more complex and requires non-trivial calibration and pricing methods for vanilla and exotic options. For example, the model price of a standard call option already has to be calculated by a Fourier inversion, and for almost all exotic options one has to resort to Monte Carlo simulation or approximations. Hence, if computation time is an issue, it is crucial to develop efficient Monte Carlo procedures, e.g. by using fast discretisation schemes or by employing variance reduction techniques.

To see the benefits of this more complex model, one can for example take a look at its calibration quality; since the model is used as an extrapolation tool for liquid market prices, an important criterion is how well the market prices are captured by the pricing model. An example of such a calibration result is reported in Table 1.
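As an illustration of the kind of discretisation scheme referred to above, here is a simple Euler sketch with full truncation of the variance, so that the discretised variance cannot feed a negative value into the square root. It is a generic textbook scheme under simplifying assumptions (a one-factor Hull-White-type rate and a single equity-variance correlation), not the calibrated hybrid model or the scheme used for the article's results; all parameter names are illustrative.

```python
import numpy as np

def simulate_heston_hw_paths(S0, v0, r0, kappa, theta, omega, rho,
                             a, b, eta_r, T, n_steps, n_paths, seed=0):
    """Euler / full-truncation simulation of a Heston equity with a Hull-White-type rate.

    dS = r S dt + sqrt(v) S dZ_S,  dv = kappa (theta - v) dt + omega sqrt(v) dZ_v,
    dr = a (b - r) dt + eta_r dZ_r,  with corr(dZ_S, dZ_v) = rho and dZ_r independent
    (a simplification of the two-factor G2 dynamics in the text).
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    v = np.full(n_paths, v0, dtype=float)
    r = np.full(n_paths, r0, dtype=float)
    for _ in range(n_steps):
        z1, z2, z3 = rng.standard_normal((3, n_paths))
        z_v = rho * z1 + np.sqrt(1.0 - rho**2) * z2     # correlate equity and variance shocks
        v_pos = np.maximum(v, 0.0)                      # full truncation of the variance
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + omega * np.sqrt(v_pos * dt) * z_v
        r += a * (b - r) * dt + eta_r * np.sqrt(dt) * z3
    return S, v, r
```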
Equity Calibration Results

Parameter | BS | HE-G2 | HE-G2-J
short variance | 0.0205 | 0.0241 | 0.0196
mean reversion | - | 1.0049 | 0.4607
long-term variance | - | 0.0501 | 0.0579
volatility of volatility | - | 0.4159 | 0.3914
leverage | - | -0.61 | -0.5938
jump intensity | - | - | 0.5306
mean jump size | - | - | -0.108
volatility of jump size | - | - | 0.0097
MSE | 6.4906 | 0.0149 | 0.0133
Table 1: Calibration results for the EuroStoxx50 option data of figure 3. Reported are the parameter estimates and the mean squared (price) errors (MSE) of the Black-Scholes model (BS), the Heston-G2 model (HE-G2) and the Heston-G2 model with an additional jump component (HE-G2-J).
Notice that by incorporating stochastic volatility the mean squared price error becomes more than 400 times smaller than for the Black-Scholes model. As expected, the addition of jumps to the stochastic volatility model does not significantly increase the explanatory power of the model: by adding three extra parameters, we only see a slight improvement of the calibration quality. Hence we find that for the pricing of long-term equity options it is not strictly necessary (and perhaps not even desirable, because of possible overparameterization) to explicitly model a jump component.
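To illustrate what "calibration quality" means operationally, the Python sketch below fits a pricing model to a set of vanilla quotes by minimising exactly the mean squared price error reported in table 1. Everything in it is a placeholder: the quotes are made up, and a toy Black-Scholes pricer with a two-parameter volatility curve stands in for a real Heston-G2 pricer (which in practice would be evaluated by Fourier inversion).

import numpy as np
from math import exp, log, sqrt
from statistics import NormalDist
from scipy.optimize import least_squares

S0, r = 100.0, 0.04
# Hypothetical market quotes: (strike, maturity in years, observed call price).
quotes = [(80.0, 1.0, 26.0), (100.0, 1.0, 11.5), (120.0, 1.0, 4.0),
          (80.0, 5.0, 38.0), (100.0, 5.0, 27.0), (120.0, 5.0, 18.5)]

def bs_call(K, T, sigma):
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def model_call(K, T, params):
    # Placeholder pricer: a maturity-dependent volatility sigma(T) = a + b*T,
    # floored away from zero.  A real calibration would call a Heston-G2 pricer here.
    a, b = params
    return bs_call(K, T, max(a + b * T, 1e-4))

def price_errors(params):
    return np.array([model_call(K, T, params) - p for K, T, p in quotes])

fit = least_squares(price_errors, x0=[0.20, 0.00])
print("fitted parameters:", fit.x, " MSE:", np.mean(fit.fun ** 2))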
Market Incompleteness

One of the latest developments in the exotic derivatives market is the move towards so-called “hybrid derivatives”, a name for a general group of products whose main characteristic is that they depend on multiple underlying instruments, e.g. several equity indices or a combination of equity and interest rate underlyings. For example, there are quite a few financial institutions that would like to buy protection against a joint movement of interest rates and equity returns. The main difficulty with these hybrid products is to determine the future dependency between the underlying assets; moreover, since for the majority of these products there is no liquid market in which to buy protection against such dependencies (e.g. in the form of a correlation market), the involved market risks are hard to manage: even if one were fully able to determine the dependency structure between the involved quantities, one cannot hedge (or reduce) this correlation risk, since there is simply no market place for it! Insurance companies in particular deal with several kinds of market incompleteness2; in fact many risks they face, like mortality or claims, are non-financial and are often even more difficult to manage because there are hardly any liquid markets for these non-financial risks.
The bottom line is that if one is confronted with some kind of market incompleteness, one has to cope with a setting in which the previously discussed option pricing models are no longer valid. As such, the pricing of derivatives in incomplete markets is often very different from, and much harder than, pricing in a complete market setting. As far as I know, in practice the only defence against unhedgable quantities is to use conservative estimates (e.g. a safe correlation estimate), which corresponds to a more theoretical framework that incorporates the company's risk appetite towards unhedgable risks into the pricing procedure. In fact, both approaches try to determine the compensation one is willing to receive (or pay) for these open risks. Currently both practitioners and academics are working on new methods that will hopefully complete (or at least improve upon) the puzzle of incomplete markets!
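One way to make that theoretical framework concrete is utility indifference pricing, the approach studied in the Chen, Pelsser and Vellekoop reference below; the formulation here is only a schematic sketch, not the exact specification of that paper. The (seller's) indifference price p of an unhedgable claim C maturing at T is defined by

\sup_{\pi}\, \mathbb{E}\!\left[ U\!\left( X_T^{\pi,\,x+p} - C \right) \right]
\;=\;
\sup_{\pi}\, \mathbb{E}\!\left[ U\!\left( X_T^{\pi,\,x} \right) \right],

where X_T^{\pi,x} denotes the terminal wealth of a self-financing hedging strategy \pi started from capital x, and U is a concave utility function encoding the company's risk appetite. The more risk averse U is, the larger the premium p the company requires for the open risk, which is precisely the compensation against open risks described above.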
2 Loosely speaking: in a complete market every option can be perfectly replicated by other financial instruments, whereas in an incomplete market not every option can be hedged and therefore not all the involved risks can be eliminated.

References

Brigo, D. and Mercurio, F. (2006). Interest Rate Models - Theory and Practice, Springer Finance.

Chen, A., Pelsser, A. and Vellekoop, M. (2007). Approximate solutions for indifference pricing under general utility functions. http://www.netspar.nl/research/themes/2006/valuation/output/

Heston, S.L. (1993). A closed-form solution for options with stochastic volatility with applications to bond and currency options, Review of Financial Studies, 6, 327-343.

Hull, J. (2005). Options, Futures and Other Derivatives, 6th edition, Prentice Hall.

van der Ploeg, A.P.C., Oosterlee, C.W. et al. (2007). Mathematics with Industry (SWI): The ING Problem (hybrid Heston-Hull-White model).
Puzzle

As usual, we give our beloved readers two puzzles by Sam Loyd to solve in their spare time. First, we present the solutions to the puzzles from the previous Aenorm.

Domestic complications
Mrs. Jones was the daughter of Smith and the niece of Brown, so there were but four persons. $100 was contributed, $92 was spent and each received $2 in the distribution.

The Yacht Race
Unfortunately, there was an error in this question: the lengths of the first three quarters and the last three quarters should have been unequal. The puzzle can still be solved, but becomes relatively easy. It immediately becomes clear that the time it took to complete the first leg of the triangle must equal the time of the last leg. We can then solve for the time of the first leg from (x/4) + x + 10 + x = 270, where x is the time of the first leg. This gives x = (4/9)*260. Therefore the total winning time is 356 minutes and 40 seconds.

We received only one correct solution to the previous puzzles, so Simen Hoving is the winner of the book token! The new puzzles for this edition are:

Tell mother's age
Readers often tell us they like age puzzles, so we decided to please them and chose an age puzzle by Sam Loyd. One of the trio in the picture was having a birthday anniversary. This aroused Master Tommy's curiosity regarding their respective ages, and in response to his queries his father said: 'Now, Tommy, our three ages combined amount to just seventy years. As I am just six times as old as you are now, it may be said that when I am but twice as old as you, our three combined ages will be twice what they are at present. Now let me see if you can tell me how old is mother?' Tommy, being bright at figures, immediately solved the problem, but then he has the advantage of knowing his own age. Are our readers able to solve this puzzle with only the data regarding the comparative ages of father and son?
Jealous Couples
The VSAE is celebrating its 45th anniversary this year, and therefore a ball took place on March 14th. In order to get to the location, the couples had to cross a river with a small island in the middle. The boat could only carry two persons at a time. As it is hard for our male students to find a date among econometricians, the men were extremely jealous, and none of them permitted his date to remain at any time in the company of another man or men unless he was also present. Nor was any man to get into a boat alone when there happened to be a girl, other than his own date, on the island or shore. This leads one to suspect that the girls were also jealous and feared that their dates would run off with another girl if they got the chance. If there were four couples and just one island in the middle of the river, on which any number of people can stand, how many trips must the boat make to get the four couples safely to the ball without breaking up any relationships?

Solutions
Solutions to the two puzzles above can be submitted up to May 1st. You can hand them in at the VSAE room (room C6.06), mail them to info@vsae.nl or send them to VSAE, for the attention of Aenorm puzzle 59, Roetersstraat 11, 1018 WB Amsterdam, Holland. One book token will be awarded among the correct submissions. Solutions may be written in either English or Dutch.
Facultive
University of Amsterdam

After the first two hard months, the new board of the VSAE has more or less settled in. Since the first of February the entire board has been replaced, and five new, enthusiastic students will try to bring the VSAE to even greater heights. The old board did not retire before organising some major events in the last couple of months. Of these, the Congress on Actuarial Sciences is certainly worth mentioning. We held this congress in December, with Pension Risk Management as its general theme. With about 200 participants, there is no doubt that it was a huge success!

The first project organised under the supervision of the new board was the Lustrum: the 45th anniversary of the VSAE. Since this moment could not pass unnoticed, the festivities filled an entire week. The next major event on the menu is the Econometric Game, which will take place on the 7th and 8th of April. In this event 22 teams from different European universities will compete to deliver the best solution to an econometric case. Finally, a group of 24 students will visit Mexico City from 10 to 20 April. There they will try to solve a case for Greenpeace regarding the reduction of CO2 emissions.

Agenda
7 and 8 April: Econometric Game
10-20 April: International Study Project
22 April: Monthly Free Drink
14 May: ORM Day

Free University Amsterdam

We can look back at a great and intense indoor football tournament and a very nice LED. Coming up are a few activities organised in association with some companies. The first will be a day showing what an econometrist does at PricewaterhouseCoopers, followed later on by an application training by FaradayClark. After these more serious activities there will be the yearly football tournament versus (or together with, whichever you prefer) the VSAE. A week later the ORM Day will take place, also organised in association with the VSAE. Last but not least is our mystery activity, the Kraket weekend, which starts on the 30th of May. We look forward to welcoming you at all of these exciting activities.

Agenda
11 April: PwC shadow day (Meeloopdag)
17 April: Dinner with Towers Perrin
24 April: Mini job-interview training at FaradayClark
7 May: Football tournament with the VSAE
14 May: ORM Day
22-23 May: Nacht van Eindhoven
30 May-1 June: Kraket weekend