Preface
Black Swans

As nowadays all the newspapers are already filled with reports about the current situation in the financial world, I intended not to mention it. But the impact of the crisis forces me to think about the possibility of having predicted this situation, which seems to influence more lives in the world every day. An interesting question that pops up is whether events in life are predictable. Can history be used to predict the future, or do we simply fall from one random event into the next? Is it possible to predict events, like the credit crunch, that are highly improbable? The theory of the black swans arises from these questions.
The black swan theory stands for a philosophical observation. For ages, humankind believed that all swans were white; the evidence for this theory was found in the fact that every day only white swans could be observed. But when the Dutch explorer Willem de Vlamingh arrived in Australia in the year 1697, he discovered a group of black swans. In one moment, the age-old theory of white swans was disproved. You might believe that all swans are white, but no matter how many white swans you observe, you can never prove it for certain. When a black swan unexpectedly appears, the theory is immediately and completely disproved. Events like black swans (or, for example, the attacks of September 11th) are unexpected and seem to be unpredictable, as the writer Nassim Taleb argues in his book "The Black Swan: The Impact of the Highly Improbable". It seems that even our exhaustive knowledge of statistics does not make us capable of fully predicting the future. The future still seems to hold some surprises for us, which we cannot predict. Well, I think that is what makes life interesting and fascinating.
Another piece of news is that a new board of the VSAE took office this February, and I am proud to be the new chief editor of the Aenorm. This Aenorm is full of interesting articles on various topics, like "Implications for Optimal Road Taxes", "Portfolio Allocation in Times of Stress" and "The Case of Approval Voting". I would like to thank Lennart for all his effort for the Aenorm during the last year and I wish him good luck with continuing his study in econometrics. I would also like to thank Taek, who has been taking care of the fantastic lay-out of the Aenorm for a long time already.
This April the tenth edition of the Econometric Game will be organized by the VSAE. During three days, teams from all over the world will work on a challenging econometric case. We are honoured to welcome all the participants in Amsterdam and we wish them good luck with the case.

Annelies Langelaar
Aenorm 63
Volume 17, Edition 63, April 2009, ISSN 1568-2188

Contents

4  Interview with William Brock
William Brock is professor of economics at the University of Wisconsin-Madison. He is one of the pioneers in applying research on complexity to economic science. In January of this year he received an honorary doctorate from the University of Amsterdam.
Lennart Dek

10  The Freakonomics Controversy: Legalized Abortion and Reduced Crime
The best-seller "Freakonomics" (2005) almost does not need an introduction. The writers, Steven Levitt and Stephen Dubner, managed to present economics in a way both entertaining and understandable for a wide audience. One of the most infamous chapters concerns the link between legalized abortion and crime reduction.
Chen Yeh

15  SAFFIER, CPB's Workhorse for Short- and Medium-Term Analyses
In times of economic crises, people have many questions about the current and future state of the economy and how to affect it. If used properly, macroeconomic models can be very useful in answering these types of questions. Since its inception in 1945, the Central Planning Bureau (CPB) has built and used many models of the Dutch economy. These have played an important role in macroeconomic policy preparation.
Henk Kranendonk and Johan Verbruggen

22  How Buyer Groups can Effectively Operate as Stable Cartels
Buyer groups play an important role across a wide range of sectors in the economy. They may facilitate pro-competitive forces based on buyer power considerations, which is potentially to the benefit of final consumers. However, buyer groups may also induce strictly anti-competitive market outcomes.
Martijn Han

26  Labour Supply and Commuting: Implications for Optimal Road Taxes
Commuting is one of the main contributors to road congestion. In order to address congestion, policymakers may influence the workers' commute by introducing a road tax. According to a recently established paradigm, the revenues of such a welfare-maximising road tax should be employed to reduce the level of a distortionary income tax.
Eva Gutierrez Puigarnau

33  Credit Risk Restructuring: a Six Sigma Approach in Banking
Consolidation in banking sweeps over Europe as Italian banks Unicredit and Capitalia merge (in 2007), Fortis NL and ABN Amro bank have been taken over by the Dutch government and German bank Commerzbank merges with Dresdner bank (both events in 2008). This is only the continuation of a trend that started at the beginning of the century.
Marco Folpmers and Jeroen Lemmens

41  On the Manipulability of Votes: The Case of Approval Voting
National elections, the Eurovision Song Festival, and councils of scientific communities have in common that voters choose from candidates by some fixed voting procedure. The theorem of Gibbard (1973) and Satterthwaite (1975) says that if we want such a voting procedure to be non-dictatorial, then it will necessarily be strategically manipulable.
Hans Peters

44  Portfolio Allocation in Times of Stress
With financial markets in turmoil, this might at first sight not be the most ideal moment to look at portfolio allocation. But then again, as long as one considers portfolio allocation as a dynamic, continuous action, then maybe the present times are very attractive times for choosing an optimal portfolio.
Charles Bos

50  Dynamic Asset Allocation in a Hybrid Pension Fund
There is an increasing shift from Defined Benefit (DB) schemes to Defined Contribution (DC) schemes around the world. In the UK, for example, the vast majority of private sector DB schemes (mainly final salary schemes) have been closed to new entrants and are being replaced by DC or money purchase schemes.
Denise Gomez-Hernandez

54  Interview with Jan Kiviet
Professor Jan Kiviet is currently professor of Econometrics and Director of the research group in Econometrics, both since 1989, at the University of Amsterdam. He obtained a PhD in Economics (1987) and an MSc in Econometrics (1974) at the University of Amsterdam. His research interests are Dynamic Models, Finite Sample Issues, Asymptotic Expansions, Exact Inference, Bootstrap, Monte Carlo Testing and Simulation, Panel Data Analysis, and the History of Statistics and Econometrics.
Annelies Langelaar

59  Puzzle
60  Facultive

Colophon

Chief editor: Annelies Langelaar
Editorial board: Annelies Langelaar
Editorial staff: Raymon Badloe, Erik Beckers, Daniëlla Brals, Lennart Dek, Jacco Meure, Bas Meijers, Chen Yeh
Design: Carmen Cebrián
Lay-out: Taek Bijman
Cover design: Michael Groen

Aenorm has a circulation of 1900 copies for all students in Actuarial Sciences and Econometrics & Operations Research at the University of Amsterdam and for all students in Econometrics at the VU University of Amsterdam. Aenorm is also distributed among all alumni of the VSAE. Aenorm is a joint publication of VSAE and Kraket. A free subscription can be obtained at www.aenorm.eu. Insertion of an article does not mean that it reflects the opinion of the board of the VSAE, the board of Kraket or the editorial staff. Nothing from this magazine can be duplicated without permission of VSAE or Kraket. No rights can be taken from the content of this magazine.

Advertisers: Achmea, Aegon, All Options, AON, APG, Delta Lloyd, De Nederlandsche Bank, Eneco, IMC, PricewaterhouseCoopers, SNS Reaal, Towers Perrin, TNO, Watson Wyatt Worldwide
Information about advertising can be obtained from Daan de Bruin, info@vsae.nl

Editorial staff addresses:
VSAE, Roetersstraat 11, C6.06, 1018 WB Amsterdam, tel: 020-5254134
Kraket, De Boelelaan 1105, 1081 HV Amsterdam, tel: 020-5986015

www.aenorm.eu
© 2009 VSAE/Kraket
Interview
Interview with William Brock

William Brock is professor of economics at the University of Wisconsin-Madison. He is one of the pioneers in applying research on complexity to economic science. In January of this year he received an honorary doctorate from the University of Amsterdam.1
You are nicknamed Buzz. According to Wikipedia, which of course was consulted in preparation of this interview, its origin is uncertain. Can you perhaps shed some light on this mystery? The Wikipedia site has been edited by a comedian, who was joking. The actual origin of my nickname is quite boring: My parents were just fooling around. To give you an idea of the nicknames in my families; my younger sister is called Pudge. This originated when my father patted the belly of my mother and felt a kick. If she is now called by her real name within the family, she gets angry. During the Second World War some song about a bee was very popular. My older sister was always singing this song. She did not like the idea of a younger sibling follow her around and destroying her toys. So to annoy me she would always make sounds like a bee. It did not work that well: I annoyed her even more [laughs]. You were a very promising student in mathematics, even obtaining a prize for excellence for your PhD-thesis. However, you chose to pursue a career in economics. That story is actually kind of interesting. I grew up on a farm, so I was always fascinated with price movement. It seemed like every time we were getting into business the price would be high and then it would drop after we got out of it. As a young farm kid this got me thinking; what causes the formation of prices? What causes the movement; are there dynamics over time? I was a somewhat lazy student in high school. After completing it, I went to junior college in Michigan, living with my aunt. The tuition was 90 dollars [laughs]. Students can only dream about that now. For a semester, my results were not that great since I goofed off in high school. But then I caught on to calculus and started getting A’s in mathematics. After the first year I switched to State University
in Missouri, which was considered much harder than the junior college in Michigan. Although we all made fun of that junior college, it was actually a superb institution. If it was not for that junior college, I would not have been here. They really helped me and got me going. In Missouri I was taking classes and looking for a job. We did not have a lot of money so you had to support yourself. I got a job with a fellow named Russell Thompson who was an economist. While working with him, I fell in love with economics. You will even see some articles on my CV which I wrote with Russell Thompson. We wrote some articles together in economics, sent them to journals and some were rejected. Russell taught me how to deal with rejection. He was wonderful; if it was not for him I would not have been here either. My buddy and I for example signed up for the Air Force when we were undergraduates. I thought I would make pilot for sure. I was in good physical condition –still am- and had excellent eye sight. The day after I signed up I called Russell and told him I was going to join the Air Force. “Oh no, you are not,” he said. The next day he marched me down to that Air Force office and we unsigned the papers. He said: “You are not going to the Air force, you are going to graduate school.” I went to graduate school as a probationary student, because of my choppy undergraduate record. I asked Russell whether I should go in economics or mathematics. He asked me what I really wanted to do: economics or mathematics. I answered I wanted to do economics and did not want to waste any time doing mathematics. “Well,” he said, “you should do math. If you can actually graduate your PhD in mathematics and continue in economics afterward, you will never have to think about mathematics.” He told me that if I went for economics right away, I would basically waste my time thinking about mathematics rather than on the economic substance. Without having a thorough background in economics, you got a position at the eco-
1 I would like to thank both professor Brock and professor Cars Hommes for their very kind cooperation with this interview.
nomics department of Rochester. How did you obtain enough knowledge about economics to work in this area? The only economics I knew was the economics of the field. I had actually started a small agricultural business together with my dad with which we lost most of our money. So I was kind of a small businessman, but I knew no formal economics. However, my thesis adviser was David Gale, who was a very prominent mathematical economist. So, I ended up writing my PhD-thesis on mathematical economics, rather than pure mathematics. After completing my PhD-thesis, it was time to look for a job and I got a joint appointment in mathematics and economics at the University of Rochester. At first, I taught mostly mathematic courses at Rochester, but I thought economics was far more exciting. So they put me on undergraduate courses like micro. I would study like hell to be ahead of the students, so I could teach them the substance. In that way I obtained the knowledge quite fast. Your transfer to the University of Chicago seems like quite a big career switch as well. The economics department of Chicago is one of the most prestigious in the world. This was indeed a big career switch, but you have to remember I was a bit of a risk taker –I for instance wanted to be a jet pilot. So when this opportunity opened up at the University of Chicago my thought was that it would be an adventure. Coming from Berkeley I was kind of left of centre. I thought it would be great to debate the great minds that were right of centre like Milton Friedman and George Stigler. For me, this was a great opportunity, so I took it. Was there a different atmosphere opposed to Rochester? Rochester had a lot of Chicago economists, so the difference was not enormous. Both Rochester and Chicago are private universities, but since Chicago is more prestigious, the atmosphere was indeed a little different. Chicago was definitely more competitive, but it also really believed in freedom of speech. At Rochester, you worked at stochastic optimal growth models. Could you explain what the aim of your research was? As assistant professor at Rochester I got in touch with a graduate student, named Leonard Mirman. His research focused on stochastic growth models and he was basically working with stochastic difference equations. The two of us started working on the much harder opti-
mal growth problem, where you actually have to solve the equivalent of the stochastic optimal control problem. We tried to prove properties of the solution, especially long run properties. Just this intellectual problem occupied us to no end. The objective was to understand the patterns of a growing economy, when it is buffeted by stochastic shocks. The planning authority tries to smooth the marginal utility across time. So even though the economy is getting shocks, sometimes hard, sometimes soft, the planner struggles to keep the marginal utility roughly constant across time. You do not want to consume all of your wealth today; you want to save some of it for tomorrow. So you essentially smooth your consumption over time. What would that make the time series of capital accumulation and consumption look like? Surprisingly, it actually kind of looked like that of a real economy, but it was generated by an efficient economy with no Keynesian type of inefficiency. Later on people started to turn that kind of model into a competitive equilibrium model. This is however exactly the same as the planner problem, which we solved in the infinite-dimensional space.

At first you believed that the economy was basically stable, but later you realized that this was really restrictive and unrealistic. You became one of the first to argue that the economy is not globally stable.

The history of that line of thinking came from Keynes, who wrote the General Theory. He argued that the economy could stay in a high unemployment state for a long time. That is where his famous quote "in the long run we are all dead" comes from. The neo-classical economists argued that the economy would be stabilized in the long run. Keynes wanted to figure out a way to bring the system quicker to the stable state. In many places in the US, not just Chicago, there is a view that dates back to Milton Friedman's work on what happened during the Great Depression. The idea was that the Federal Reserve went in the wrong direction in the Great Depression. It was argued that the underlying economy would have stabilized much quicker, if it had not been for the stupid government who actually destabilized it. There is even a body of thought that argues that the New Deal of Franklin D. Roosevelt did not really assist in stabilizing the economy, because all of the interventions that the government did actually made it harder for the system to reach stability. So according to this view, government actions did a lot to destabilize a system that was basically stable enough to eventually reach an equilibrium and reach it quicker than it did.
This equilibrium might be a stochastic one. Neoclassical economists, without a doubt, did have in mind a stochastic notion of equilibrium, because the shocks the economy gets hit with are visible to everyone and everyone can see they are definitely not deterministic. All these views were circulating around at the time when I was working with Mirman at Rochester. Later, I also did some of this work at Chicago. I worked on multi-sector models of the economy. Planning models are a variation on that except that they are even more complicated. This is because there are many different kinds of capital good sectors and many different kinds of consumption good sectors. The planner is maximizing a discounted sum of welfare payoffs for the economy from now till a very long time horizon. I was actually trying to prove mathematical theorems where I would locate sufficient conditions for this system to be stochastically stable after the planner had optimized it. Since all kinds of interactions were going on, the mathematical problem was rather difficult. Locating sufficient conditions so that this math problem would produce a long-run stable solution was extremely difficult, but I managed to do so. However, they were extremely severe. It required some very strong type of concavity and a discount rate on the future which was basically zero. In other words, you had to value your distant future just as much as your current present. I proved my theorem that the system was stable, but the conditions were extremely severe. After this, I went on in the other direction. I made the variance of the noise (the shocks) zero, so as to obtain a deterministic system. I would take the time horizon to infinity to get what is called a time stationary optimal control problem. I did an operation, called linearization, around the steady state of this problem. I got a big Jacobian matrix and calculated the eigenvalues to find out which ones were stable and which ones not. By doing so, I could locate conditions for instability as well as local stability. The problem I was wrestling with was still a mathematical one. I did not have any "religious" beliefs of any kind one way or the other. I was working in Chicago where a number of economists had some very good arguments that the underlying economy was stable and it was the government that was destabilizing it. Being a mathematical economist, I had proven that this indeed could be the case. However, the conditions were so severe that stability was unlikely.

In the 80s your research interest turned to the branch of mathematics known as chaos theory. Could you tell us what attracted you to the subject and what your research exactly involved?

The simplest example of chaos in mathematics is the difference equation:
xt+1 = a xt (1 − xt)

This system is stable if

lim (t→∞) xt = x̄,

i.e. xt in the long run converges to some steady state value x̄. Of course this value depends on a and the starting point. To show the relation between a and x̄ we can draw a bifurcation diagram (figure 1). What you see is that when a increases the number of stable states increases as well. Increasing a even more leads to the following bifurcation diagram (figure 2). So if a is big enough, the system goes into fully developed chaos. If we take a look at the so-called window or regime lengths (dn in figure 1), we can see the following:

dn / dn+1 → δ ≈ 4.669

Figure 1: bifurcation diagram
Figure 2: bifurcation diagram for larger values of a

The growth factor of the lengths converges to a constant, the so-called Feigenbaum constant. It is so beautiful to play with these equations and that is what attracts me to it. It is tough though to find economic relevance of it. This is because a lot of economists, especially macroeconomists, work with aggregate data. A lot of this stuff that might happen at a more micro level disappears when averaged out. Also, there are a lot of smoothing mechanisms in economics. In the stock exchange for example; if you think a stock is going up or down on a weekly basis, you construct a portfolio to exploit it. But then everyone else can do the same thing and the entire effect will vanish. The only way to really get chaos going and be able to defend it, is that the economy has to have a large number of sectors. This means that the difference equation has to be replaced with vectors. However, the sufficient conditions to get chaos are just too tough, because of intertemporal and cross-sectional smoothing.
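A minimal sketch of the dynamics discussed above, assuming Python with numpy and matplotlib; the map used here is the standard textbook logistic map shown above, and the parameter ranges are illustrative choices rather than anything from the interview.

```python
# Sketch: bifurcation diagram of the logistic map x_{t+1} = a*x_t*(1 - x_t).
import numpy as np
import matplotlib.pyplot as plt

def attractor_points(a, x0=0.5, n_transient=500, n_keep=200):
    """Iterate the map, drop the transient, and return the long-run orbit."""
    x = x0
    for _ in range(n_transient):
        x = a * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = a * x * (1 - x)
        orbit.append(x)
    return orbit

# One column of dots per value of a: one dot is a stable steady state,
# 2, 4, 8, ... dots are period doublings, a filled band is chaos.
a_values = np.linspace(2.5, 4.0, 1200)
xs, ys = [], []
for a in a_values:
    pts = attractor_points(a)
    xs.extend([a] * len(pts))
    ys.extend(pts)

plt.plot(xs, ys, ",k", alpha=0.25)
plt.xlabel("a")
plt.ylabel("long-run values of x")
plt.title("Bifurcation diagram of x_{t+1} = a x_t (1 - x_t)")
plt.show()
```

Reading the successive period-doubling points a1, a2, a3, ... off such a diagram, the ratio of successive window lengths (an − an−1) / (an+1 − an) settles down to δ ≈ 4.669, the Feigenbaum constant referred to above.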
"There's nothing like searching for hidden patterns in data to stimulate the imagination"

Your research resulted in the BDS-test, which professor Hommes called "extremely clever" in his speech yesterday. What makes this test extremely clever?

The problem is that empirically it is hard to distinguish between a stochastic system and chaos buffeted by noise. To get an idea about what we did you should take a random number generator. To test whether a random number generator is any good, e.g. whether it generates numbers that are totally unpredictable, you can let it generate a series of drawings from a uniform(0,1) distribution. If you plot xt against xt+1 with a series of 100,000 the unit square should fill up completely. So the plot should look uniformly grey. But in practice, this kind of plot with drawings from a number generator will look like snowflakes. They will fill out the square, but you will see some patterns. This means that some random number generators are lousy. To continue this, you can add xt-1 and end up with a cube. It is hard to draw a cube, but if you are good, you can do so and you can see where the points lie in the cube. If the generated numbers kind of fill out the cube, without sub-patterns, you have got a good random number generator. But some will create patterns inside the cube which you can see with your naked eye. This implies there is some kind of predictability. You can use xt and xt-1 to make a forecast about xt+1, so it is not random. In four dimensions you do not have the possibility to draw a graph any longer. This is where the BDS-test comes in. It tests whether there are any patterns or clusters in higher and higher dimensional cubes. Working out the math on that is quite challenging to say the least. So the BDS-test shows whether a series is stochastic or deterministic by testing for any kind of predictability in the series.
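A simplified illustration of the idea behind the test, not the full BDS statistic: under the hypothesis of independent draws the m-dimensional correlation integral satisfies C_m(ε) ≈ C_1(ε)^m, and a clear gap between the two signals hidden structure. The code below is a sketch in plain numpy with a small sample so the pairwise distances stay cheap; the series names are illustrative.

```python
import numpy as np

def correlation_integral(series, m, eps):
    """Share of pairs of m-histories (x_t, ..., x_{t+m-1}) lying within eps (max-norm)."""
    n = len(series) - m + 1
    emb = np.column_stack([series[i:i + n] for i in range(m)])
    dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    upper = np.triu_indices(n, k=1)
    return np.mean(dist[upper] < eps)

rng = np.random.default_rng(0)
iid = rng.uniform(size=1000)            # a "good" generator: should fill the square

chaos = np.empty(1000)                  # deterministic series: logistic map at a = 4
chaos[0] = 0.3
for t in range(999):
    chaos[t + 1] = 4.0 * chaos[t] * (1 - chaos[t])

for name, s in [("iid uniform", iid), ("logistic map", chaos)]:
    eps = 0.5 * np.std(s)
    c1 = correlation_integral(s, 1, eps)
    c2 = correlation_integral(s, 2, eps)
    # Independence implies c2 close to c1**2; a clear gap signals predictability.
    print(f"{name:12s}  C2 = {c2:.3f}   C1^2 = {c1 ** 2:.3f}")
```

The actual BDS statistic standardises the gap between C_m(ε) and C_1(ε)^m by its asymptotic variance; the sketch above only reproduces the core comparison.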
Later on you got involved in the Resilience Alliance and started exploring the dynamics of social-ecological systems. The best example of a trade-off between (economic) benefits and costs caused by pollution is global climate change. What is your opinion on this subject? Should we invest heavily in trying to revert global warming or focus our attention on coping with its consequences?

One might imagine that the climate is and has been in some kind of complicated steady state. What is happening today is that we are giving an external shock to this equilibrium. As in any system, this can have two possible consequences. First of all, the climate can transfer into a new equilibrium, which would mean that, after we have raised all dikes, we will be safe. The second possibility is the disaster scenario. The system does not get into a new steady state but collapses in total. It is impossible to say what this means in practice, but surely it will not be any good. Of course some people will say that the second possibility is very unlikely. In my view, this does not matter; one will still want to "purchase insurance". Let us say that the chance for the second scenario to occur is indeed very slim. We then have two options: one, we do everything in our power to prevent climate change from happening; or two, we sit back and try to cope with the consequences of it. If we have invested in trying to prevent climate change from occurring and the system would eventually have reached a relatively benign new steady state, we would only have lost some
money at worst. However, if the system would otherwise not have reached a new steady state, investing has prevented a catastrophe. If we sit back and see what happens, we will keep our money, but this might turn out to be of little significance if the climate system collapses. So, in my opinion investing in preventing climate change from happening is a small price to pay compared to its possible consequences or costs. The range of subjects which have your interest seems extremely broad. Is there a reason for this? I enjoy working on multiple subjects because they cross fertilize each other. Furthermore all my work is thematically connected because it all involves dynamical systems and their applications to ecology and to the economy. What can we expect from you in the future? Are there still other areas you would like to explore? I plan to continue my work in macroeconomics. I especially want to understand what caused the current global crisis and to work on designing institutions to prevent such crises in the future. I also plan to continue my closely related work on Early Warning Signals (EWS’s) in ecological systems. I want to investigate possible applications and extensions of this work to economic systems.
Econometrics
The Freakonomics Controversy: Legalized Abortion and Reduced Crime

In this issue of AENORM, we present a new series of articles. This series contains summaries of articles which have been of great importance in economics or have attracted considerable attention, be it in a positive sense or a controversial way. Reading papers from scientific journals can be quite a demanding task for the beginning economist or econometrician. By summarizing the selected articles in an accessible way, the AENORM aims to reach these students in particular and introduce them to the world of academic economics. For questions or criticism, feel free to contact the AENORM editorial board at info@vsae.nl
The best-seller "Freakonomics" (2005) almost does not need an introduction. The writers, Steven Levitt and Stephen Dubner, managed to present economics in a way both entertaining and understandable for a wide audience. One of the most infamous chapters concerns the link between legalized abortion and crime reduction. Although the book presents a very adequate way of explaining this relationship, it does not mention the methodology used and the technical details. One could of course read the original academic paper, but this can be quite daunting for a beginning economist or econometrician. This article steers a middle course by explaining both the theory and the techniques used in the paper of Donohue and Levitt (2001) in a concise and understandable way.

Introduction

Since the beginning of the 90's hundreds of debates both in the academic literature and popular press have tried to answer the following question: What has caused crime to fall in the United States since 1991? Politicians attributed this decline to several factors: the increasing use of incarceration, growth in the number of police, improved policing strategies (such as those in New York under the Giuliani regime), declines in the crack cocaine trade and increased expenditure on victim precautions (such as security guards and alarms). Economists on the other hand argued that it was the strong economy that mostly contributed to the significant drop in crime. In their paper (2001) however Donohue and Levitt (henceforth D&L) question these explanations. According to D&L, none of the abovementioned factors can provide an entirely satis-
factory explanation of the observed crime drops in the 90's. They mention that the increasing scale of imprisonment, police and expenditure on victim precautions are trends that have been ongoing for almost two decades and thus cannot explain the abrupt decrease in crime. Moreover, previous academic research only showed a weak relationship between economic performance and crime. Thus D&L consider an entirely different, yet highly controversial explanation: the decision to legalize abortion.

Theoretical framework: the mechanism of legalized abortion and lower crime rates

D&L identify two ways through which abortion can affect crime. The first one is fairly simple, which they call the "smaller cohort size" effect: Legalized abortion implies fewer births and this simply reduces the number of people that can commit crimes. Thus when assuming that the fall in births is a random sample of all births, crime would fall proportionately. However, more interesting is the situation when the fall in births, caused by abortion, is not a random sample of births, i.e. that abortion has a disproportionate effect. D&L refer to this as the "selection" effect. One can imagine that abortion is more likely to happen among those mothers who are less willing or unable to provide a safe and nurturing home environment. Given this fact, D&L suspect that the impact of legalized abortion might be far greater than just its smaller cohort size effect. In their study D&L refer to Levine et al. (1996), who indicate that the drop in births associated with abortion legalization was roughly twice as large for teenage and non-white mothers as for non-teen, white mothers. Next to this study, the
Figure 1: Crime rates in the United States (1973 – 1999)
results of Angrist and Evans (1996) also show that abortion reforms had a greater impact on the fertility of black women. Moreover, Gruber, Levine and Staiger (1999) note that children who would have been born had abortion not been legalized would have been 60 percent more likely to live in poverty, 45 percent more likely to be in a household collecting welfare and 40 percent more likely to die during the first year of life. In all, D&L conclude that abortion legalization was not occurring evenly across all groups. Previous research has found that the best predictors for (juvenile) delinquency are related to family environment and a variety of parental behaviours and qualities. D&L mention among others the mother's low education, the fact that the mother was a teenager and/or did not want the pregnancy and the fact that the child grew up in a single parent family. Thus children who were born because their mothers were denied an abortion are substantially more likely to be involved in crime and have poorer life prospects (after controlling for income, age and other effects). D&L thus believe that unwanted children are more likely to end up in criminal activity, which in turn may explain the causal effect of legalized (or greater availability of) abortion on crime rates.

Empirical evidence

The authors support their theory, which establishes a (negative) relationship between legalized abortion and criminal activity, by performing an extensive econometric study. Before constructing and testing their econometric model however, they give a brief overview of crime trends and abortion data.

Suggestive evidence: crime rates and the case of Roe vs. Wade

D&L distinguish between three types of crime: violent crime, property crime and murder. In figure 1 crime rates from 1973 to 1999 are shown. It can be easily seen that for all three of the crime categories, crime was at its peak in 1991. Since that year, crime has been falling steadily.
Figure 2: a) Changes in violent crime and abortion rates, 1985 – 1997; b) Changes in property crime and abortion rates, 1985 – 1997; c) Changes in murder and abortion rates, 1985 – 1997
D&L argue that the timing of this break, as can be seen in the figure, coincides with their theory. Five US states (Alaska, California, Hawaii, New York and Washington) legalized abortion in 1970. In the remaining states abortion did not become legal until 1973 with the case of Roe versus Wade. In 1991, the first group of teenagers affected by Roe versus Wade would be around 17 years old, the first year of the highest crime adolescent years. The decrease from 1991 and onwards is also consistent with a hypothetical effect of abortion legalization. Further suggestive evidence can be found in figure 2. Here a negative correlation between legalized abortion and crime rates can be found. With each passing year, the fraction of the
criminal population that was born after abortion legalization increases, thus the impact of abortion will only be felt gradually. D&L take this gradual effect into account by defining a so called “effective legalized abortion rate”. Intuitively one can see this measure of abortion as a weighted average across all cohorts of arrestees. This simply means that the number of abortions per live birth at a certain time will have a greater weight when the cohort with the
D&L also test for robustness (sensitivity) to further strengthen their results. They exclude early legalizers one at a time with different results, but the fundamental idea, i.e. a significant, negative effect of abortion, is still present. Other sensitivity tests (e.g. including state-specific trends, region year interactions) yield the same outcome.
"Unwanted children are more likely to end up in criminal activity" respective appropriate age commits relatively more crimes. The econometrics in Freakonomics State level regressions D&L use so-called “panel data”: data containing observations on multiple variables over multiple time periods. The first model they use, consists of the natural log of crime rate per capita as the dependent variable and the effective abortion rate as primary explanatory variable. Furthermore D&L use other variables, denoted by Xst, which are all on state-level: prisoners and police per capita, variables concerning state economic conditions, lagged state welfare generosity, the presence of concealed handgun laws and beer consumption per capita. Last, but not least D&L add state (γs ) and year (λt ) dummies to estimate effects that are not captured by the other mentioned variables. Thus the model can be denoted by: ln(CRIMEst)= β(Abortionst)+XstΘ+γs+λt+εst where the coefficient β is our primary interest, reflecting the effect of abortion on crime. Furthermore s and t indexes state and year respectively and εst represents a disturbance term. D&L estimate their model with and without control variables (other than the state and year effects). Regardless of this choice, the results of their regressions indicate a confirmation of their suspicions: the coefficient β is negative and statistically significant, implying that higher abortion rates are associated with lower crime rates. They show that an increase in the effective abortion rate of 100 per 1000 live births is associated with a decrease of 12, 13 and 9 percent for murder, violent and property crime respectively. The other coefficients all seem plausible, carrying the expected sign.
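For readers who want to see what a specification like the state-level regression above looks like in practice, here is a minimal sketch. It uses simulated data and generic statsmodels code, not D&L's data or programs; all variable names (eff_abortion, log_crime, and so on) are hypothetical, and the "effective abortion rate" would in reality be the cohort-weighted average described in the article rather than a random draw.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical state-year panel: log crime depends on an effective abortion rate
# plus state and year effects, mimicking ln(CRIME_st) = beta*Abortion_st + X_st*Theta + gamma_s + lambda_t + eps_st.
rows = []
for s in range(50):
    state_effect = rng.normal(0.0, 0.3)
    for year in range(1985, 1998):
        eff_abortion = rng.uniform(0.0, 400.0)        # abortions per 1,000 live births (simulated)
        log_crime = (4.0 - 0.001 * eff_abortion       # "true" beta = -0.001 in this simulation
                     + state_effect + 0.01 * (year - 1985) + rng.normal(0.0, 0.1))
        rows.append((f"state_{s:02d}", year, eff_abortion, log_crime))
df = pd.DataFrame(rows, columns=["state", "year", "eff_abortion", "log_crime"])

# State and year dummies play the role of gamma_s and lambda_t; standard errors
# are clustered by state.
fit = smf.ols("log_crime ~ eff_abortion + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"].astype("category").cat.codes}
)
print(fit.params["eff_abortion"], fit.bse["eff_abortion"])   # estimate of beta and its standard error
```

The fixed-effects dummies absorb anything that is constant within a state or within a year, so beta is identified from within-state variation over time, which is the logic D&L rely on.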
Impact of abortion on arrests by age of offender

To further test their hypothesis, D&L also test the impact of abortion on arrest rates by age of offender. Especially interesting is the fact that if legalized abortion is the main reason for the decline in crime, then it is expected that decreases in crime are concentrated among those groups of children born after abortion legalization. D&L use the same specification as above, with the only difference being the dependent variable. This variable is now separated into two groups: the (natural log of) arrest rate per capita for offenders aged under and above 25 years. One would expect that the coefficient for abortion is non-significant for the older aged cohort, since this cohort is too old to be affected by the decision to legalize abortion. The results show that the coefficients for abortion for older aged cohorts are indeed non-significant and still significant for offenders under age 25. While the magnitude of the effects differ, compared with previous estimations, the basic story regarding abortion and crime remains.

Conclusion

The cause of the large and persistent drops in US crime in the 90's has been a large topic in both the academic and popular press. Even though a lot of explanations have been presented by politicians and economists, Donohue and Levitt (D&L) consider those as unsatisfying and consider a novel explanation for the crime drops of the 90's: the legalization of abortion. They link the case of Roe versus Wade with the crime drops. The authors believe that abortion affects crime in two ways. They refer to the first as the "smaller cohort size" effect. The logic is fairly simple: Legalized abortion implies fewer births and this
reduces the number of people that can commit crimes. Thus when assuming that the fall in births is a random sample of all births, crime would fall proportionately. The second and probably far more interesting effect is the so-called "selection" effect. Children who were born because their mothers were denied an abortion are substantially more likely to be involved in crime and have poorer life prospects. D&L thus believe that unwanted children are more likely to end up in criminal activity, which in turn may explain the causal effect of legalized (or greater availability of) abortion on crime rates. They suspect that the impact of legalized abortion might be far greater than just the smaller cohort size effect. To support and prove their theory, D&L perform an extensive econometric study. It turns out that the evidence is consistent with their hypothesis of a causal effect of legalized abortion on crime. By using a panel data model, the results suggest that an increase of 100 abortions per 1000 live births reduces a cohort's criminal activity by roughly 10 percent. According to D&L these estimates furthermore suggest that legalized abortion should be considered as the primary explanation for the large drops in crime. The paper, published in the Quarterly Journal of Economics (2001), is regarded as highly controversial and has been criticized by politicians, opinion writers and academics alike, of which Foote and Goetz (2005) is an often cited example. Nevertheless, one cannot deny that Levitt is a man of his word as "a rogue economist explores the hidden side of everything".

References

Donohue, J.J. III and Levitt, S.D. (2001). The impact of legalized abortion on crime, Quarterly Journal of Economics, 116(2), 379-420.

Foote, C.L. and Goetz, C.F. (2008). The impact of legalized abortion on crime: Comment, Quarterly Journal of Economics, 123(1), 407-423.

Levitt, S.D. and Dubner, S.J. (2005). Freakonomics: A Rogue Economist Explores the Hidden Side of Everything. William Morrow/HarperCollins.
Econometrics
SAFFIER, CPB’s Workhorse for Short- and Medium-Term Analyses In times of economic crises, people have many questions about the current and future state of the economy and how to affect it. If used properly, macroeconomic models can be very useful in answering these types of questions. Since its inception in 1945, the Central Planning Bureau (CPB) has built and used many models of the Dutch economy. These have played an important role in macroeconomic policy preparation.1 The models of CPB’s first director Jan Tinbergen were small and simple. The last sixty years have seen a strong development in statistics, econometrics and economic theory. This development has stimulated the construction of all kind of larger and more complex models. In this article we present CPB’s most recent macromodel SAFFIER, which stands for “Shortand medium-term Analysis and Forecasting using Formal Implementation of Reasoning.” We will explain the purpose and structure of the model and will discuss some econometric issues concerning the specification and estimation of the behavioural equations.
Henk Kranendonk is senior economist at the Cyclical Analysis unit at CPB Netherlands Bureau for Economic Policy Analysis, where he is involved in the analysis and forecasting of the business cycle of the Dutch economy. His main research fields are leading indicators and model building. Johan Verbruggen is head of the Cyclical Analysis unit at CPB Netherlands Bureau for Economic Policy Analysis. Before that, he held several appointments at the Ministry of Economic Affairs and the CPB, always being involved in model building and macroeconomic policy.

Purpose of the model

Making short- and medium-term forecasts for the Dutch economy is the most important purpose of the SAFFIER model. Short-term forecasts are published four times a year: more elaborate publications in March (CEP: 'Centraal Economisch Plan') and September (MEV: 'Macro Economische Verkenning') and two minor publications in June and December, consisting of relatively marginal updates based on new insights into the business cycle and/or new or changed policy plans.2 Medium-term forecasts are mostly published once every four-year period, immediately preceding general elections and coalition negotiations for a new cabinet.
Another frequently used application of the model is 'what if' analyses. Uncertainty about a projection can be illustrated by presenting the consequences of an alternative set of assumptions.3 These 'what if' analyses can illustrate how the SAFFIER model, and hopefully the Dutch economy, operates in a fully endogenous way. The outcomes of these so-called variants depend on some a priori assumptions, such as:
• Is the shock temporary or permanent;
• Does the government change taxes to prevent its financial budget from changing ('balanced budget') or not;
• Are all wages and benefits linked to inflation or not.
In its forecasting publications, such as the CEP and MEV, CPB always presents some specific 'what if' analyses concurrent with its central projection to illustrate the uncertainty margins of short-term forecasts. These calculations show, for example, the effects on the Dutch economy of alternative assumptions about world trade, the euro exchange rate, the oil price or the house price.
1 For a short historical overview see Don and Verbruggen (2006).
2 Almost every year, CPB evaluates the accuracy of its short-term forecasts. See e.g. Kranendonk, De Jong and Verbruggen (2009) and earlier editions of that publication.
3 In Chapter 5 of Kranendonk and Verbruggen (2007) 12 so-called standard variants are described. The alternative assumptions relate to world trade, interest rate, exchange rate, oil price, contractual wages, minimum wage and linked benefits, income tax, VAT rate, public expenditures, labour supply, share prices and house prices.
Figure 1: Core relations within SAFFIER (see note a)
Structure of SAFFIER4

All in all, the model consists of two parts; the core of the model with the equations that describe the behaviour of the economic agents and the other part containing the institutional relations and bookkeeping equations that guarantee the macroeconomic and mathematical consistency of the outcomes. The core of SAFFIER concerns the market for goods and services and the labour market. Figure 1 shows the economic relationship between the variables and the equilibrium-restoring mechanisms that ensure the economy returns to its trend-based growth path after shocks. The market for goods and services contains behavioural equations for the final demand components of private consumption, business investments and exports. Part of the demand for goods and services originates from abroad, with the remainder produced in the Netherlands. This production is described by means of a CES production function, with labour and capital as the production factors. From this production function the equations for investments and employment are derived. The capacity utilization rate functions as a tension indicator. Higher demand and production lead to a higher capacity utilization, which causes prices to increase and demand, both domestic and foreign, to diminish. In the labour market, labour supply is largely modelled using trend-based factors such as demographical developments, while the explana-
tion of wages is based on a right-to-manage model - a negotiation model between employer’s associations and trade unions. If the tension on the labour market intensifies, expressed by a falling unemployment rate, this will lead via the Phillips curve, as is usual in empirical macroeconomic models, to an upward effect on wages. Simultaneously, the higher real wages encourage more people to join the labour market, increasing the labour supply, so that supply and demand on the labour market will tend to converge. The restoration of equilibrium on the labour market also runs in part via the production side. Higher wages lead to higher prices of domestically-produced exports, which undermines the Dutch price competitiveness and export performance. This also has a downward effect on the demand for labour. However, counteracting this effect over the short term is that higher wages sparks additional private consumption, so that the restoration of equilibrium via the production side will be limited over the short term. The complete model consists of about 2600 equations of which only 50 are so-called behavioural equations. It also contains approximately 300 parameters. In SAFFIER, the institutional arrangements in the social security, health-care and tax systems are described in large detail. In addition to this institutional block, SAFFIER contains a large book-keeping system, in line with the system of the National Accounts, to guarantee the consistency of all the economic relationships. In addition to the 2600 endoge-
4 A more elaborate description of SAFFIER can be found in Kranendonk and Verbruggen (2007).
a Thicker lines indicate the start of equilibrium-restoring mechanisms. Exogenous factors are shown in an ellipse, while wholly or largely endogenous factors are shown in a rectangle.
nous variables, the model has 250 exogenous variables and 200 autonomous variables. Most exogenous variables in the model relate to international variables, government policy instru-
ments and specific sectors such as health care, mining and quarrying.
In the past, both quarterly and yearly models were used at CPB. Quarterly models are especially useful for short-term forecasts, where the dynamics of the business cycle are important. These models have the ability to incorporate quarterly macroeconomic data from Statistics Netherlands (CBS). For medium-term analyses, like simulations of policy options and election platforms, this quarterly information is more or less redundant and complicates the technical handling of the model. However, using two types of models at the same time also has disadvantages. It led regularly to awkward interpretation problems. Which model outcomes were the most relevant, and which model should be used for which analysis? In 2004, these dilemmas, in combination with a desire for greater cost efficiency, led to the integration of the yearly model JADE and the quarterly model SAFE into the new SAFFIER model.5 There are two operational versions of SAFFIER, a quarterly version for short-term analyses and a yearly version for medium-term analyses. These model versions only differ with regard to the specification of the time lag structures. The core of the models, containing the behavioural, institutional and accounting equations, is identical in both model versions.6

"Macroeconomic models are useful instruments in policy preparation and deserve full attention and dedication"

Error correction equations

The most important parts of a macroeconomic model are the behavioural equations. Their specifications and estimated parameters are crucial for the outcomes of the model analyses. For almost all behavioural equations SAFFIER applies the error correction mechanism (ECM). The long-term equations are modelled in (log) levels, while the short-term equations are specified in changes (delta logs). One of the elements of the short-run equation is the deviation from the optimal long-term development. This ECM specification guarantees that, while in the short run many cyclical factors can influence forecasts, in the long run the growth path devel-
ops according to the theoretical specifications. The long- and short-run equations are estimated simultaneously wherever possible, though occasionally it is necessary to apply a two-step procedure.7 The long-term and short-term equations are respectively:

ln y* = α1 ln x + c                          (1)
Δln y = α2 Δln x − ε (ln y − ln y*)-1        (2)

where:
α   parameters
c   constant
ε   error correction parameter
y   endogenous variable (actual level)
y*  long-term equilibrium level of the endogenous variable
x   exogenous variables

The error correction parameter indicates how quickly the actual level of the endogenous variable converges towards its long-term value. As ε moves closer to 0 or 1, the adjustment process proceeds slower or faster respectively.8

Estimation process of the behavioural equations
Economic theory discusses relevant variables that seek to explain the behaviour of economic agents, such as firms and consumers. Based on those theories, model builders decide which variables to select for the specification of the behavioural equations. It is necessary to be aware of the statistical properties of those variables. Before estimating the relationships between variables, it is helpful to start with a graphical inspection of the time series and to perform unit-root tests to learn about their statistical properties. In general, the variables in the short-term equations should be stationary.
5 See CPB (2002a) and CPB (2002b) for a description of these models.
6 See Kranendonk and Verbruggen (2007), paragraph 2.2.
7 A complete description of the behavioural equations is published in appendix A of Kranendonk and Verbruggen (2007).
8 In the quarterly version this adjustment process is [1 − (1 − ε)^0.25], see Kranendonk and Verbruggen (2007), page 20.
A co-integrated relationship should exist between the non-stationary variables in the longterm equation or the results may be spurious. Performing the necessary tests can prevent researchers selecting an incorrect specification. An important aspect of the estimation process is the choice between the estimation of single equations or a system of equations. For SAFFIER the export equations are estimated as a full system: real export and export price over both the short and long term. The price-elasticity in the (volume) export equation should be estimated in a system, which contains endogenous interaction. For other variables the short-term and the long-term equations are estimated simultaneously. This can be done by substituting equation (1) into (2). When a combined equation has too many parameters to estimate, the short-term and the long-term equations can only be estimated using a two-step process; the first step being estimation of the long-term equation and once completed the estimation of short-term relationships. An alternative is sometimes to calibrate specific parameters on a fixed value. For the significance and acceptance of parameters, t-statistics are important. However, for us the theoretical relevance of parameters and the plausibility of the estimated parameter hold even greater importance. To illustrate this point, we refer to the parameter for the marginal propensity to consume from labour income in the consumption function. We could not find an acceptable free estimate of this parameter. Because the consumption should include the labour income, we decided to fix this parameter to 0.55. The final step in the estimation process is the analysis of the effects of the (new) specification on the model outcomes of the complete SAFFIER model. We run our standard ‘what-if’ scenarios and analyse the simultaneous effects on other important variables such as GDP, prices, unemployment etc. If these outcomes are implausible, this is a signal to continue the estimation process by attempting to find a better specification or to adjust the parameters. Estimating behavioural equations that are suitable for an econometric model used in policy preparation, involves occasional ‘dirty’ handiwork. It is the model builder’s task to develop a model that is suitable for the applications at hand. The art of model building requires knowledge of economics, experience, intuition and a large dose of common sense. Econometric tests and techniques should come second. Messy compromises are often unavoidable.9 CPB cannot afford not to deliver short-term forecasts or an analysis of a policy proposal because one of the behavioural equations in the model does not fully pass some econometric tests.
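As an illustration of equations (1)-(2) and of the two-step procedure mentioned above, the sketch below runs a standard Engle-Granger style estimation on simulated series. It is an assumption-laden toy example, not CPB's actual SAFFIER code: the series, the long-run elasticity of 0.8 and the noise levels are all made up.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)

T = 200
ln_x = np.cumsum(rng.normal(0.0, 0.01, T))          # non-stationary driver (random walk)
ln_y = 0.8 * ln_x + rng.normal(0.0, 0.005, T)       # co-integrated with long-run elasticity 0.8

# Step 1: long-run levels equation (1):  ln y* = alpha1 * ln x + c
long_run = sm.OLS(ln_y, sm.add_constant(ln_x)).fit()
gap = long_run.resid                                # ln y - ln y*, deviation from the long-run path
print("ADF p-value of the gap:", adfuller(gap)[1])  # small p-value: co-integration is plausible

# Step 2: short-run equation (2):  d ln y = alpha2 * d ln x - eps * (ln y - ln y*)_{-1}
dy = np.diff(ln_y)
dx = np.diff(ln_x)
rhs = sm.add_constant(np.column_stack([dx, gap[:-1]]))
short_run = sm.OLS(dy, rhs).fit()
alpha2 = short_run.params[1]
eps = -short_run.params[2]                          # error-correction (adjustment) parameter
print("alpha2 =", round(alpha2, 3), " eps =", round(eps, 3))
```

The unit-root check on the residual gap mirrors the stationarity and co-integration tests discussed in the text; footnote 8 then describes how a yearly adjustment speed ε translates into the quarterly model version.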
Non-linear elements in the model

The specifications (1) and (2) give linear relationships between the relevant economic variables: the effects of a change in certain variables do not depend on the level of the variables. Most equations in SAFFIER have a linear specification. However, in some equations non-linear elements are introduced.

The most important example is the specification of the wage equation. This equation is based on a negotiation model between employers’ associations and trade unions. The underlying assumption is that over the long term, wages in the market sector depend on productivity, the price of production (value added), unemployment, the replacement rate and the wedge. The replacement rate indicates what percentage of their net income employees will on average continue to receive after they have been made redundant and become reliant on benefit payments. In a tight labour market employees are less likely to lose their jobs and more likely to find new jobs than in a loose labour market with high unemployment. The fall-back position of employees, as expressed in the replacement rate, will therefore play a smaller role in wage negotiations when the labour market is tight than when unemployment is high. To take account of this relationship, the parameter of the replacement rate depends on the level of the unemployment rate.

A second example of non-linearity is the house price equation, where the error-correction parameter in the short-term equation is influenced by a regime dummy. Empirical research showed that a downward adjustment of the actual house price to its long-term value takes longer than an upward adjustment; hence there is downward price rigidity.

A final example of non-linearity concerns the short-term equation of private consumption. Microeconomic research by Berben, Bernoth and Mastrogiacomo (2006) shows that Dutch consumers react appreciably more strongly to a fall in share prices than to a comparable increase: the reaction to a price fall is nearly three times as strong as the reaction to a price gain. The explanation for this phenomenon offered by the psychological economic literature is that people are more affected by losing something they already possess than by not gaining something they do not yet possess, the so-called “endowment effect”.

The consequence of non-linearities in the model is that the ‘what-if’ analyses are not neutral with respect to the baseline scenario: the outcomes of positive and negative impulses are not completely symmetric. The labour market situation is also relevant for the speed of the equilibrium-restoring mechanisms in the model.
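A stylised way of writing such a state-dependent coefficient is sketched below. The symbols and functional form are purely illustrative and do not reproduce the actual SAFFIER wage equation; the only feature taken from the text is that the weight on the replacement rate rises with unemployment.

\[
\ln w_t \;=\; \beta_0 + \beta_1 \ln h_t + \beta_2 \ln p_t + \beta_3 u_t
           + \gamma(u_t)\,\ln rr_t + \beta_4\,\mathit{wedge}_t,
\qquad \gamma'(u) > 0,
\]

where $w$ denotes market-sector wages, $h$ labour productivity, $p$ the value-added price, $u$ the unemployment rate and $rr$ the replacement rate, and $\gamma(u)$ is an increasing function of unemployment, so that the fall-back position of employees carries more weight in wage negotiations when unemployment is high.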
9. See Don and Verbruggen (2006).
Final remarks

If properly used, models like SAFFIER can be powerful instruments for the analysis of the Dutch economy. Although the model is sizable, it is not suited to answer all possible questions. For example, the implications of higher public spending on education and infrastructure for economic growth in the short and medium term can be calculated. However, these expenditures also have ‘programme effects’: more education or better infrastructure can affect productivity or labour supply in the long(er) run. These effects are not (yet) incorporated in SAFFIER. CPB has a range of models, each developed for a specific research area, and every model has its particular limitations and weaknesses. It is necessary for the user of these models to be aware of possible pitfalls in their application.

When using models, in order to prevent misleading results, it is crucial to bear in mind the limitations and weaknesses of the models. This implies that a model may have to be adjusted for the analysis in question, with messy compromises from the perspective of economic and econometric theory often unavoidable. When correcting for a model’s imperfections, the model builder faces some awkward puzzles. This means that expertise is essential not only in building the model but even more so in using it. It is certainly possible to make useful analyses with a simple and incomplete model if those who analyse the outcomes and prepare the reports are sufficiently aware of the model’s weaknesses. It is also possible to make serious errors with a sophisticated and complex model, for instance by using it unquestioningly in analyses for which it is not suited. Nevertheless, macroeconomic models remain useful instruments in policy preparation and deserve full attention and dedication. A captain will never throw his compass overboard because it does not help him avoid every hazard in the dark.

References

Berben, R.P., Bernoth, K. and Mastrogiacomo, M. (2006). Households’ Response to Wealth Changes: Do Gains or Losses make a Difference?, CPB Discussion Paper 63, The Hague.

CPB (2002a). JADE, A model for the Joint Analysis of Dynamics and Equilibrium, CPB Document 30, The Hague.

CPB (2002b). SAFE, A quarterly model of the Dutch economy for short-term analyses, CPB Document 42, The Hague.
Don, F.J.H. and Verbruggen, J.P. (2006). Models and methods for economic policy: 60 years of evolution at CPB, Statistica Neerlandica, 60(2), 1445-1479, Blackwell Publishing Ltd, Oxford. Also published as CPB Discussion Paper 55.

Gelauff, G. and Graafland, J. (1994). Modelling Welfare State reform, North Holland, Amsterdam.

Kranendonk, H.C. and Verbruggen, J.P. (2006). Trefzekerheid van korte-termijnramingen en middellange-termijnverkenningen, CPB Document 131, The Hague.

Kranendonk, H.C. and Verbruggen, J.P. (2007). SAFFIER, A multi-purpose model of the Dutch economy for short-term and medium-term analyses, CPB Document 144, The Hague.

Kranendonk, H.C., de Jong, J. and Verbruggen, J.P. (2009). Trefzekerheid CPB-prognoses 1971-2007, CPB Document 178, The Hague.
What do you do? If your ambitions as an actuary reach beyond the Dutch borders

Achmea is the largest actuarial employer in the Netherlands. But our field of work is not limited to the Netherlands: we are also active across the border. With our three-year international actuarial traineeship we train actuaries to work at an international level. After this intensive programme you can start as an actuarial professional at one of the Eureko companies in Europe. Do you have the ambition to develop yourself both professionally and personally in an international environment? Then we would like to meet you.

International actuarial traineeship
During the traineeship you follow an actuarial education alongside on-the-job training. You start with an introduction period of six months at one of our actuarial departments in the Netherlands. During this period you get to know the actuarial profession better and choose your own specialisation. You also get to know both the international organisation Eureko and the national organisation Achmea. After this period you leave for two and a half years for one of the Eureko companies in Athens (Interamerican) or Dublin (Friends First), depending on your specialisation. After this programme you will be deployed on short- or long-term assignments within Europe, depending on both your personal preference and the needs of the organisation.

What we ask
As an international actuarial trainee you have a completed university degree, preferably in actuarial science, econometrics or mathematics. You have at most two years of work experience at a financial services provider. In addition, you recognise yourself in the following competencies: highly analytical, eager to learn, able to act independently, willing to work internationally and highly proficient in communicating in English.

What we offer
A unique opportunity to develop yourself quickly in the actuarial field in an international environment. You can then realise your ambitions with the many opportunities that Eureko / Achmea has to offer.

Achmea
Achmea is part of Eureko, a financial services provider with strong ambitions and companies in several European countries. Both Eureko and Achmea aim to create value for all our stakeholders: customers, distribution partners, shareholders and employees. To achieve that, we need employees who look beyond their own desk and who have an eye for what is going on, but above all people who empathise with our customers and know how to translate that into original solutions.

Avéro Achmea | Centraal Beheer Achmea | FBTO | Interpolis | Zilveren Kruis Achmea

Want to know more?
For more information about the actuarial traineeship, contact Joan van Breukelen, recruiter, (06) 20 95 7231. We look forward to receiving your application via www.werkenbijachmea.nl.

Ontzorgen is een werkwoord (‘taking care of your worries is a verb’)
Econometrics
How Buyer Groups can Effectively Operate as Stable Cartels

Buyer groups play an important role across a wide range of sectors in the economy. They may facilitate pro-competitive forces based on buyer power considerations, which is potentially to the benefit of final consumers. However, buyer groups may also induce strictly anticompetitive market outcomes. We show how buyer groups can effectively operate as stable cartels. This article is a non-technical impression of the paper “Expropriating Monopoly Rents through Stable Buyer Groups”, by Christopher Doyle (LSE) and Martijn A. Han (ACLE/UvA).1
Martijn A. Han is a Ph.D. student in Economics at the Amsterdam Center for Law & Economics (ACLE), University of Amsterdam (UvA). Before joining ACLE, Martijn obtained an M.Sc. in Mathematical Economics (2007, with distinction) at LSE, as well as a B.Sc. in Econometrics (2006, cum laude) and an M.Sc. in Medicine (2005) at the UvA. Martijn’s research interests are in Industrial Organization and Competition Policy, specifically the theory of cartels. He has interned at the UK Office of Fair Trading and the UK Competition Commission (British competition authorities). Martijn was on the 2005 VSAE board.
Introduction

Buyer groups are cooperative arrangements between independent buyers – usually retailers – to combine their purchases in input markets. There are many examples of such groups. Some exist only within a market segment, such as the Independent Grocers Alliance, which is the world’s largest voluntary supermarket network with annual retail sales of more than $21 billion. Other buyer groups extend across different markets; for example, Corporate United covers industries as various as health care, chemicals, telecommunications, defense and financial services. A buyer group may allow its members to improve their bargaining position vis-à-vis suppliers, resulting in more favourable contractual terms, such as lower input prices or increased input quality. As such, buyer groups have the potential to improve (consumer) welfare, since, for example, lower input prices may be passed on to consumers. However, buyer groups may also facilitate strictly anti-competitive market outcomes,
which are detrimental to welfare and resemble hardcore output market cartels. First, cartels may install a buyer group to cover up dodgy meetings in which, besides (legal) cooperation on the input market, illegal cooperation on the output market is discussed. Examples of such illegal cooperation include price fixing, market sharing and bid rigging. Second, even if buyer group members refrain from discussing output market arrangements, they may jointly realise monopoly profits by making clever use of input contracts negotiated via the buyer group. The next section explains how this works.2

Anti-competitive buyer groups

To fix ideas, consider an industry consisting of two identical retailers. Both retailers source their inputs from two identical suppliers, and sell their outputs to final consumers – see Figure 1. Retailers transform one unit of input into one unit of output without cost. Assume that retailers have all the bargaining power in negotiations with suppliers.3 Retailers compete on the output market for final consumers – i.e. there is no cartel. Furthermore, retailers independently write input contracts with suppliers. For now, assume that the details of each retailer’s input contract are observable by its rival retailer. Retailers may then soften competition between them on the output market by writing input contracts with artificially high unit prices. The intuition is that when each retailer contractually commits to sourcing its inputs at inflated prices from suppliers, they effectively commit not to compete too fiercely on the output market. In particular, to avoid making a loss on each unit sold, retailers will at
1. The paper is available at http://ssrn.acle.nl.
2. For other mechanisms underlying impacts of buyer groups on market outcomes, see, for example, the report on buyer groups by the UK Office of Fair Trading (2007).
3. For the sake of argument, we use a simplified set-up in this article. We refer to the paper for a more general setup and a discussion of the assumptions.
Figure 1. The market without a buyer group
least not set output prices below the contractually agreed high input price.

Now, is this a smart strategy for retailers? Output prices are higher, leading to higher revenue, but retailers pay the cost of inflated input prices to suppliers. However, retailers can prevent losing their profits to suppliers by including a so-called slotting allowance in the input contract. A slotting allowance is an (annual) fixed fee paid by the supplier to the retailer for carrying its product (input).4 Retailers are now in a position to prevent their revenues from flowing into their suppliers’ pockets: each retailer simply sets a slotting allowance exactly equal to the amount by which the input price has been artificially inflated, multiplied by the number of inputs bought. In that way, inflated input prices lead to higher output prices, which in turn lead to higher revenues, while the increased input costs are refunded by suppliers to retailers through slotting allowances. Retailers thus enjoy greater profits.

The success of this mechanism, first formally outlined by Shaffer (1991), critically relies on the assumption that input contracts are observable to all rival retailers. This assumption is not always realistic, because, for example, input prices are not necessarily listed and negotiations often take place “behind closed doors”. When input contracts are unobservable to rival retailers, each retailer has an incentive not to write the contract as specified above, but to write a simple contract with a unit input price that is as low as possible, without a slotting allowance. In that way, the retailer has a strategic advantage vis-à-vis its rival when competing in the output market, because its unit cost is lower than its rival’s. Hence, if input contracts are not observable, retailers are unable to credibly (i.e. observably) commit to a contract with inflated input prices. This inability to soften competition will in turn lead to lower profits and increased consumer welfare.
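The refund logic can be illustrated with a small numerical sketch. The numbers below are made up for illustration and are not taken from the paper; the point is only that the slotting allowance exactly offsets the inflated part of the input bill, while the high contractual unit price still disciplines output pricing.

```python
# Illustrative numbers only: a retailer whose true marginal input cost is 1
# commits, via the contract, to an inflated unit price and gets the mark-up
# refunded as a slotting allowance.
true_cost = 1.0          # competitive unit input price
contract_price = 3.0     # artificially inflated unit price in the contract
quantity = 100           # units the retailer ends up buying

slotting_allowance = (contract_price - true_cost) * quantity
net_input_cost = contract_price * quantity - slotting_allowance

print(slotting_allowance)  # 200.0: fixed fee paid by the supplier
print(net_input_cost)      # 100.0: equal to true_cost * quantity
# The retailer's effective input bill is unchanged, but the high contractual
# unit price commits it not to price output below 3 per unit.
```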
Figure 2. The market with a buyer group
However, the formation of a buyer group may help retailers restore the anti-competitive mechanism outlined above. A buyer group is an organization or common representative that negotiates contracts on behalf of its members (retailers) – see Figure 2. Thus, setting up a buyer group effectively ensures that input contracts are observable to all buyer group members, thereby allowing these members to credibly commit to contracts à la Shaffer (1991). Moreover, since retailers now explicitly cooperate on the input market, they can fine-tune their input contracts in such a way that combined final output will be reduced all the way to the monopoly level. As a result, although retailers do not collude on the output market, cooperation in the input market leads to input contracts that induce retailers to set output market monopoly prices, a result which is also obtained by Foros and Kind (2008). In other words, monopoly output prices are “fair” equilibrium prices, given the specifications of the input contracts negotiated through the buyer group. A buyer group therefore allows retailers to jointly extract monopoly profits, without engaging in illegal cartel activities.

Stability

The stability of this anti-competitive mechanism depends on the ability and incentives of retailers to cheat on the buyer group by secretly signing an additional input contract with a different supplier outside of the buyer group arrangement at a lower input price.5 Such a deviation allows the deviant retailer to source cheaper inputs, so as to gain a competitive advantage over its rival in the output market, and is therefore a relevant concern for the stability of the anti-competitive buyer group mechanism. What we show in the paper, however, is that commonly observed
4. Slotting allowances are especially wide-spread in grocery retailing, where the slotting allowance is effectively a rent paid by the supplier to display its products on the supermarket’s (retailer’s) shelf space.
5. Remember that input contracts negotiated outside of the buyer group arrangement are still unobservable for the rival retailer; only contracts negotiated through the buyer group are observable.
6. We refer to the paper for the intuition and the formal proof that the optimal minimum purchase clause ensures that the buyer group arrangement is as stable as a “conventional” output market cartel.
My passion
Targeted innovation. Creating new products, new services, new opportunities. Finding creative answers to the questions posed by our society. Working to improve solutions by making them faster, safer, smarter and more efficient. That is my passion.
contractual restraints – exclusivity provisions and minimum purchase clauses – facilitate the stability of the buyer group arrangement. Consider the exclusivity provision, which contractually prohibits buyer group members from sourcing from alternative supplier(s) outside of the buyer group arrangement. By including such provisions in the input contracts that are negotiated through the buyer group, retailers effectively tie their own hands, thus credibly committing not to cheat at all. When these exclusivity provisions can be legally enforced in court, the buyer-group-induced monopoly is perfectly stable, because no member can undermine the buyer group by secretly writing another input contract.

If input contracts are not allowed to specify exclusivity provisions, the buyer group arrangement can still be as stable as an output market cartel if input contracts contain minimum purchase clauses. A minimum purchase clause specifies a minimum number of inputs that each retailer is obliged to purchase through the contract negotiated via the buyer group. Such a clause thus limits a potentially deviant retailer’s ability to source from alternative suppliers outside of the buyer group arrangement, to the extent that a minimum quantity of inputs must be sourced through the buyer group.6

In summary, commonly observed contractual restraints – exclusivity provisions and minimum purchase clauses – enhance the stability of the anti-competitive buyer group arrangement by effectively limiting retailers’ ability to defect from the arrangement. These contractual restraints set up by a buyer group may attract the attention of competition authorities. However, the European Commission’s guidelines on horizontal cooperation agreements (2001) recognise that: “An obligation to buy exclusively through the cooperation [buyer group, MAH] can in certain cases be indispensable to achieve the necessary volume for the realisation of economies of scale.” Hence, buyer groups using exclusive deals in their contracts are not necessarily illegal, and can actually be (potentially falsely) defended using the argument that they are required to allow the group to achieve sufficient scale.7

Concluding remarks

Retailers may jointly extract monopoly profits in their output market through the formation of a buyer group in their input market. The buyer group negotiates input contracts with slotting allowances, and the stability of the anti-competitive mechanism is facilitated by using exclusivity or minimum purchase provisions. Such an anti-competitive buyer group is likely to be harder to detect than standard forms of output market collusion, such as (tacitly) raising prices. A competition authority investigating the output market may not be able to find evidence of anti-competitive behavior if the retailers use a buyer group to jointly expropriate monopoly profits. An analysis of, for example, price-cost mark-ups would reveal no evidence of firms pricing above competitive levels if the retailers’ costs are taken as given. It may be a significant step for competition authorities to expand the analysis of suspected retailer collusion to include an examination of the process of input contracting between retailers and suppliers in the upstream market. However, our work indicates that exclusive dealing and minimum purchase contracts negotiated through buyer groups, in combination with slotting allowances, may be worthy of closer scrutiny.

References

European Commission (2001). Guidelines on the Applicability of Article 81 of the EC Treaty to Horizontal Cooperation Agreements, Official Journal of the European Commission, 2001/C, 3/01-3/30.

Foros, Ø. and Kind, H.J. (2008). Do Slotting Allowances Harm Retail Competition?, Scandinavian Journal of Economics, 110, 367-384.

Office of Fair Trading (2007). The Competitive Effects of Buyer Groups, Economic Discussion Paper, A Report Prepared for the OFT by RBB Economics.

Shaffer, G. (1991). Slotting Allowances and Retail Price Maintenance: A Comparison of Facilitating Practices, RAND Journal of Economics, 22, 120-135.
7. By logical extension, the EC’s position on exclusivity provisions also holds for minimum purchase clauses, since the former is effectively a stronger, more restrictive version of the latter.
Econometrics
Labour Supply and Commuting: Implications for Optimal Road Taxes

Commuting is one of the main contributors to road congestion. In order to address congestion, policymakers may influence the workers’ commute by introducing a road tax. According to a recently established paradigm, the revenues of such a welfare-maximising road tax should be used to reduce the level of a distortionary income tax. An essential assumption in this model is that the number of workdays is optimally chosen, whereas daily work hours are fixed, implying that with a given road tax workers can only reduce their commuting cost by reducing total labour supply. However, a labour supply model that also allows for optimally chosen daily hours implies that commuting costs increase daily hours, whereas the effect on total labour supply is ambiguous. Based on this model, it is not guaranteed that labour supply would decrease given a road tax. This calls into question whether recycling the revenue of a road tax in the way advocated in the literature is necessary.
Eva Gutierrez Puigarnau is a PhD student in transport economics at the Vrije Universiteit Amsterdam. Her research is on the consequences of firm behaviour for employees' travel behaviour, under the supervision of Dr. Jos van Ommeren. This article is a summary of the working paper available online at www.tinbergen.nl/discussionpapers/09008.pdf.
This article analyses the effects of commuting costs, both in money and in time and measured by commuting distance, on labour supply patterns, using socio-economic panel data for Germany between 1997 and 2007. Several theoretical and empirical works have studied how working weeks per year and weekly or yearly working hours respond to changes in commuting distance. This study is the first to analyse the influence of changes in commuting distance on changes in daily hours worked. The analysis of daily hours is important: Cogan (1981) establishes that when fixed costs of work, such as commuting costs per day, are present, the period of time over which the fixed costs are incurred is the appropriate measure of labour supply. This implies that daily labour supply is the appropriate reference. Endogeneity of commuting distance is accounted for by means of a worker first-differences approach for a sample of employer-induced changes in commuting distance resulting from firm/workplace relocation.
The model and the literature

We extend a standard labour supply model by allowing for commuting costs and distinguishing between daily work time and the number of workdays.1 The model shows that commuting time increases daily work hours, whereas the number of workdays decreases. The effect on total labour supply turns out to be ambiguous. These results are in contrast to the labour supply literature, which finds the opposite result, and inconsistent with the transport literature, which keeps daily work hours constant.2 Furthermore, monetary commuting costs increase daily work hours (consistent with the labour supply literature), but the effect of monetary costs on days and total labour supply is ambiguous. Introduction of a road tax will reduce the time of commuting trips, but will increase the monetary commuting costs. Our model therefore shows that a road tax might lead to an increase in labour supply.

That workers are able to choose their daily labour supply as well as the number of workdays is a fundamental assumption regarding modelled labour supply patterns. Although one may have intuitive feelings about the effects of commuting costs on total labour supply, the model developed here indicates that empirical analysis is needed.
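A stylised version of such a model is sketched below; the notation and functional form are chosen here purely for illustration (the full model is in the working paper cited above). The key feature is that commuting enters as a per-workday cost, in time and in money.

\[
\max_{h,\,D}\; U(c,\,\ell)
\quad\text{subject to}\quad
c = w\,h\,D - c_m D,
\qquad
\ell = T - D\,(h + t_c),
\]

where $h$ denotes daily work hours, $D$ the number of workdays, $w$ the wage, $c_m$ the monetary commuting cost per workday, $t_c$ the daily commuting time and $T$ the total time endowment. Because $c_m$ and $t_c$ are incurred per workday, a longer commute makes each workday more expensive, which pushes daily hours $h$ up while the effect on total labour supply $hD$ remains ambiguous.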
1. The full model details are available online at www.tinbergen.nl/discussionpapers/09008.pdf.
2. The labour supply literature, such as Gubits (2004), assumes that workers optimally choose their daily labour supply, whereas the number of workdays is fixed. Transport economists, such as Parry and Bento (2001), make the opposite assumption.
                                      | Observed weekly       | Preferred weekly      | Workdays per      | Daily hours
                                      | labour supply (in log)| labour supply (in log)| week (in log)     | (in log)
Commuting distance in log (in km)     | 0.013 (0.002)         | 0.008 (0.002)         | 0.003 (0.002)     | 0.013 (0.002)
Net hourly wage rate in log (in euros)| 0.085 (0.031)         | 0.004 (0.036)         | 0.040 (0.025)     | 0.029 (0.030)
Other household income/10 (in log)    | –0.047 (0.013)        | –0.025 (0.015)        | –0.029 (0.012)    | –0.034 (0.014)
Female × number of children           | –0.066 (0.005)        | –0.061 (0.006)        | –0.030 (0.004)    | –0.028 (0.005)
Child dummy                           | –0.023 (0.005)        | –0.012 (0.006)        | –0.016 (0.005)    | –0.021 (0.005)
Number of household members           | –0.008 (0.002)        | –0.008 (0.003)        | –0.002 (0.002)    | –0.001 (0.002)
Employment dummies                    | Included              | Included              | Included          | Included
Year dummies                          | Included              | Included              | Included          | Included
Number of observations                | 43,694                | 35,264                | 20,558            | 20,558

Table 1. Estimates of logarithm of changes in labour supply with changes in commuting distance (1997–2007 GSOEP)
Notes: Note that for some workers information on preferred weekly labour supply is missing. For the years 1998, 2001 and 2003, information on daily hours and workdays per week is missing. The reference category for employment region is ‘old federal states, Berlin and unknown employment region’. Standard errors are in parentheses.
Econometric model

We analyse the effect of commuting distance on labour supply patterns, measured by weekly labour supply, number of workdays and daily hours, for a specific worker in a specific residence and with a specific employer. For this we use a sample of employees working away from home. We analyse the effect in two ways: (i) the effect of commuting distance on weekly labour supply must be the sum of the effect on hours per day and the effect on the number of workdays, or (ii) a direct analysis of the effect on weekly labour supply. Following the labour supply literature, we assume a double-log labour supply specification and formulate all models in terms of first-differences. That is, we use variables as changes from one time period to another. Taking first-differences essentially removes unobserved worker heterogeneity. We control for a large number of (time-varying) explanatory variables, including year dummies, presence of children, wage, other household income, and employment dummies.

Consistent estimation of the effect requires that the change in a worker’s commuting distance is exogenous to the change in labour supply patterns. This may not be the case, since a change in commuting distance may be the result of an endogenously chosen residence or job move. However, the change may also be the result of workplace relocation when staying with the same employer. The latter type of relocation can be argued to be exogenous, particularly in the case of a firm relocation (when all workers in the firm’s establishment are moved to another workplace location). Keeping the worker’s residence and employer constant, as is done in this article, any observed change in a worker’s commuting distance must be employer-induced (due to a firm relocation) or due to measurement error, so the estimated effect of distance is consistent.3

Empirical results

We estimate all models taking first-differences and show the results in Table 1.
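In stylised form, the estimating equation implied by this description can be written as below; the notation is ours, chosen to match the description rather than the paper’s exact equations.

\[
\Delta \ln H_{it} \;=\; \beta\,\Delta \ln D_{it} \;+\; \Delta x_{it}'\gamma \;+\; \Delta\varepsilon_{it},
\]

where $H_{it}$ is the labour supply measure (weekly hours, workdays per week or daily hours), $D_{it}$ is commuting distance, and $x_{it}$ collects the time-varying controls (wage, other household income, children, year and employment dummies). First-differencing removes the time-invariant worker effect, and $\beta$ is the elasticity reported in Table 1.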
3. Following the labour supply literature on endogenous wages, we instrument change in wage rate using age and its square. A non-linear specification of age is appropriate, because one expects that older individuals are less likely to receive a wage increase, but one expects this effect to decrease after a certain age.
The empirical results show a slight positive effect of commuting distance on weekly labour supply (0.013).4 This indicates, for example, that if the commuting distance increases from 20 to 40 kilometres, workers increase their labour supply by approximately 20 minutes per week. The weekly effect is the combination of a positive effect on daily hours (0.013) and a smaller, insignificant effect on the number of workdays (0.003).

The theoretical model assumes that labour supply patterns (hours and days worked) are optimally chosen, which may not be true for every worker. An analysis of the effect of commuting distance on ‘preferred’ weekly labour supply shows an effect of 0.008, which is not significantly different (at the 5% level) from the reported effect on observed labour supply. The fact that the point estimate of distance on preferred labour supply is smaller than that on observed hours suggests that the results obtained may partially be the consequence of employer restrictions on the number of hours.

Conclusion

When workdays and daily hours are optimally chosen, our empirical results show a slight positive effect of commuting distance on weekly labour supply. This result is in contrast to assumptions in the literature (see e.g. Parry and Bento, 2001), suggesting that, when introducing a road tax, a budget-neutral reduction in the income tax as advocated in the literature (Parry and Bento, 2001; Mayeres and Proost, 2001) may not be necessary in order to increase welfare. Note, however, that the results reported in this article need to be interpreted with some caution, as they are based on employed workers only and do not consider the effect of changes in commuting costs/time on labour market participation. There are reasons to believe that commuting costs, and therefore road pricing, may have little or no effect on the participation decision. One reason is that female workers, for whom the participation decision is most strongly affected by economic incentives, do not belong to the group of workers who will generally face a road tax.5

The estimated positive effect of distance is consistent with the model developed here. However, it is also consistent with other explanations. An alternative explanation is that workers may reduce commuting costs by leaving home earlier or departing from work later, in line with bottleneck models (e.g. Vickrey, 1969). When individuals leave home earlier or depart from work later, they simultaneously increase labour supply when the number of workdays remains constant.
We hope to examine this possibility in the near future.

References

Cogan, J.F. (1981). Fixed costs and labor supply, Econometrica, 49, 945–963.

Gubits, D.B. (2004). Commuting, Work Hours, and the Metropolitan Labor Supply Gradient, Mimeo.

Mayeres, I. and Proost, S. (2001). Marginal tax reform, externalities and income distribution, Journal of Public Economics, 79, 343–363.

Parry, I.W.H. and Bento, A. (2001). Revenue recycling and the welfare effects of road pricing, Scandinavian Journal of Economics, 103, 645–671.

Vickrey, W.S. (1969). Congestion theory and transport investment, American Economic Review (Papers and Proceedings), 59, 251–260.
4. Not controlling for time-varying variables generates almost identical results. Furthermore, other specifications for commuting distance (e.g. controlling for workplace within the municipality of residence) have been employed, but results are very similar.
5. The percentage of women that commute a long distance and work a few hours is usually low. Men usually work full-time and commute longer distances. Female workers with few hours of work are less likely to travel by car and have shorter commuting distances if they travel by car.
Economics
Credit Risk Restructuring: a Six Sigma Approach in Banking

In this article we show how the credit approval process for corporate customers of a large bank can be streamlined. The result of this optimization is an improved throughput time, so that the front office and the customer receive faster approval for the requested loan. The optimization is constrained by the requirement that the risk function be executed accurately: the loan loss ratio (write-offs divided by total exposure) must not be affected by the streamlining efforts. After an explanation of the credit approval process, we show how the optimization has been carried out using simulation. Subsequently we show that the improvement of the throughput time is dramatic.
Introduction

Consolidation in banking sweeps over Europe: the Italian banks Unicredit and Capitalia merged (in 2007), Fortis NL and ABN Amro bank have been taken over by the Dutch government, and the German bank Commerzbank merged with Dresdner Bank (both events in 2008). This is the continuation of a trend that started at the beginning of the century, and it is witnessed not only by mega-deals such as the ones mentioned above but also by EU banking statistics: the number of banks within the EU declines, whereas the total assets of the EU banking sector increase, signaling the emergence of larger institutions.1

The continuous shift in organizational boundaries necessitates the rethinking of existing designs for both primary and support processes. In this article we will focus on the restructuring of the credit approval process. We will show how a generic credit approval process works and how it can be restructured and optimized. The credit approval process has not been the object of much recent research. A good description of best practices was published by the Austrian national bank in 2004 (see reference list). However, that description focuses on internal control and risk management, whereas our focus is the optimization of the process in terms of efficiency (total throughput time, resource utilization) within the constraint of an adequate risk management performance.

Overview of the credit approval process

The objective of the credit approval process is to prevent two types of errors: substantive errors and procedural errors.
Marco Folpmers holds a PhD in economics from the Free University Amsterdam and is a GARP-certified financial risk manager and an ASQ-certified Six Sigma Black Belt. He leads the financial risk management consulting segment within Capgemini Consulting NL. Jeroen Lemmens works for the Operational Excellence Group within Capgemini Consulting NL. He is an ASQ-certified Six Sigma Black Belt and specializes in restructuring, LEAN and Six Sigma.
A substantive error is the erroneous assessment of the credit exposure despite a comprehensive and accurate presentation of the risk analysis. A procedural error is the incomplete and/or inaccurate presentation of the credit exposure, or the incorrect performance of the credit approval process. The latter case refers to fraud or intentional misconduct by the persons in charge of conducting the credit approval process. In short, the credit approval process is aimed at mitigating the risk of (1) a wrong presentation of the credit exposure, (2) an erroneous assessment of the credit exposure and (3) fraud.

The effectiveness of the credit approval process is measured by the (net) loan loss ratio. The ratio captures the historical write-offs (loss after restructuring / liquidation of the collateral) against the total exposure. Often the ratio is converted to a sensitivity: an increase of one basis point of the loan loss ratio equals an increase in loan losses of € 5 million, for example. The efficiency of the credit approval process is primarily measured by the total throughput time of the analysis and decision-making processes and the number of resources needed.
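As a back-of-envelope check of such a sensitivity, using the € 5 million per basis point quoted above (which implies a portfolio of roughly € 50 billion; the portfolio size itself is not stated in the article):

\[
\text{loan loss ratio} = \frac{\text{write-offs}}{\text{total exposure}},
\qquad
\Delta\text{losses} = \Delta\text{ratio} \times \text{exposure}
\approx 0.0001 \times \text{€ 50 billion} = \text{€ 5 million}.
\]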
1. European central bank, EU Banking Structures, October 2007, p. 7.
Figure 1: breakdown of the credit approval process
Efficiency optimization efforts should always be restricted by pre-set values of the net loan loss ratio: a business case for reducing the number of analysts that increases the loan loss ratio will often be doomed to fail, since a small increase of the loan loss ratio has a considerable bottom-line impact.

In order to increase efficiency, the credit approval process is implemented by standard or individual processes for risk analysis and decision-making. Standard processes allow for the automated processing of small-exposure, standardized loans, whereas individual processes assess each loan application separately. Often only retail loans2 are suitable for standard processes, although a case can be made to include small corporate loans as well. Standard processes use pre-determined limits for the exposure of each customer. If a new loan application is within this limit, approval is granted with the help of mainly automatic procedures. Individual processes, on the other hand, are characterized by an adaptive design which makes it possible to deal with a variety of products, collateral and conditions.3 In the remainder of this article we will focus on individual credit approval processes for corporate loans. The credit approval process consists of four main processes, which are illustrated in Figure 1.
A transfer of the loan application and supporting documents takes place from the front office to the risk department. The risk analysis consists primarily of the review of the obligor’s creditworthiness, the valuation of the collateral and the assessment of the exposure. The decision to grant the loan can be organized either by a credit committee or by a simple sign-off. In both cases the principle of ‘double voting’ needs to be applied if the credit risk is considerable. Double voting means that two departments are needed for the approval of the credit: both the front office and the risk department.4 The decision-making authority is obviously a (senior) management responsibility.

The generic process illustrated in Figure 1 can be fine-tuned depending on the customer’s total exposure. An example is presented in Table 1. The authority structure illustrates that small exposures can be decided at front office level only (single vote). Medium-size exposures need a risk analysis and two votes (whether implemented with the help of a credit committee or a sign-off). For large exposures the risk department issues an advice, after which the decision is taken at corporate level with two votes. The table also illustrates the principle of ‘bypassing hierarchical layers’, since medium and large exposures are not decided at multiple levels. This is the recommended practice, since multiple, subsequent decision makers have a tendency to rely on each other (‘socialization of responsibility’).5 The principles introduced above are illustrated in the example presented in the next section.

An illustration: restructuring local credit risk centers for corporate clients
Our example of local business centers for corporate clients is presented in Figure 2. The local business centers for corporate clients have their own risk analysts and local committees. In case of large exposures, the loan application is forwarded to the central risk department for further analysis and decision-making. The analysis can be placed ‘on hold’ if the file is not complete. The analysis can only be completed once all relevant data has been delivered
2. For Basel II the retail asset class includes SME loans if the total exposure of the counterparty is below € 1 million, see BCBS, International Convergence of Capital Measurement and Capital Standards, June 2006, art. 70. Apart from standardized credit approval processes, the risk analysis of the retail portfolio can also be made more efficient since Basel II allows the use of pooled risk parameters (PD, LGD), see BCBS, art. 331.
3. Oesterreichische Nationalbank, Guidelines on credit risk management: credit approval process and credit risk management, 2004, p. 13.
4. Double voting should not be confused with the four-eyes principle. Both are internal control measures, but double voting refers to the joint authority of two persons of unlinked departments, whereas the four-eyes principle refers to the joint authority of two persons within the same department.
5. See Oesterreichische Nationalbank, o.c., p. 31.
Table 1: the authority structure for three exposure volumes
by the front office (modeled as a ‘delay’ / ‘file on hold’ period). A distinction is made between ‘full analysis’ and ‘short analysis’. Short analysis is applicable for standardized products if the new loan amount is within a pre-defined limit. The principle of ‘bypassing’ is not implemented, since the loan application for large-exposure customers is decided both locally and centrally.

In our case, process data has been gathered during the Measure phase with the help of Work Sampling and File Tracking:

• Work Sampling refers to the measurement of time spent by the resources in the credit approval process (the analysts and risk managers). Work Sampling has been implemented with the help of forms on which the analyst records his or her activities on a daily basis in blocks of 10 minutes. The activities are categorized across analysis tasks (such as: collect data needed for a risk analysis), decision-related tasks (such as: prepare credit committee) and other tasks (such as: lunch). The Work Sampling data allows the analysis of net resource availability for analysis and decision-related tasks and of the net process durations (excluding waiting times) of the processes depicted in Figure 2;

• File Tracking follows the file through the processes. At each stage, a date/time stamp is added to the form. The File Tracking data allows the analysis of current throughput times, including waiting times and delays.

The measurement phase comprises six weeks at three local business centers for corporate clients. All forms have been entered into a database. With the help of statistics from this database, the statistical parameters shown in Figure 2 have been calculated. A few remarks apply:
• The arrival process has been modeled with the usual exponential distribution for the interarrival time.6 The mean interarrival time is 1.5 hours. The model uses a 9-hour working day, so we expect six new loan applications to arrive at the local risk department per day. Since the use of an exponentially distributed interarrival time implies a Poisson process, the number of arrivals per day is Poisson distributed (with parameter lambda equal to six). There is no restriction on the maximum number of arrivals per day (infinite calling population);

• For the local approval process, a full analysis is needed in 66% of the cases. A short analysis is allowed for standardized products and low transaction volumes;

• If the file is not complete (in 33% of the cases), the file is placed ‘on hold’, waiting for additional information from the front office. This is modeled as a delay with the help of a continuous uniform distribution with a minimum of 2 hours and a maximum of 27 hours;

• The analysis processes (local full and short risk analysis; large-exposure full risk analysis by the central risk department) have been modeled as triangular distributions. The triangular distribution is a good option for the distribution used for transaction processes in simulation models;7

• The local analysis processes are carried out by 2.33 risk analysts on average per day. If all analysts are busy and new files arrive for analysis, a queue starts to develop. The files waiting for analysis are served by the available analysts on a first-come first-served basis. Again, this is the usual queue discipline assumed in simulation models;

• The central analysis for this particular business center for corporate clients is carried out by 1.22 central risk analysts on average per day;

• The local and central decision processes have been modeled as local and central committees that convene twice a week.
6. See Hillier, F.S. and Lieberman, G.J., Introduction to operations research, 1995, p. 663, and Law, A.M. and Kelton, W.D., Simulation modeling and analysis, 2000, p. 389.
7. In Gross, D. and Harris, C.M., Fundamentals of queueing theory, 1998, p. 377, the distinction is made between a diagnostic process (we must find the trouble in order to fix it) and a repetitive service process (the longer a file is in process, the greater the probability of completion in a given interval of time). The diagnostic process is ‘memoryless’ and a Poisson process could be applied, whereas for the repetitive service process, the Poisson process is not an option. For the use of the triangular distribution, see Gross, o.c., p. 381.
Figure 2: credit approval process – current state
With the help of the parameters mentioned above we can calculate that the average duration excluding waiting time for a small-exposure transaction equals 19 hours, i.e. 2.1 working days (remember that a working day contains 9 hours). The average duration for a large-exposure transaction equals 4.5 working days. The average duration for both small and large transactions equals 2.7 working days. However, the total throughput time includes waiting time, and the only way to calculate the duration of the waiting time is to use simulation. With the help of simulation software (ARENA™), we have simulated the process for 2 replications of 200 consecutive days each. We allow the process to reach a steady state after 20 days, so the statistics are based on two runs of 180 consecutive days each.8 The average total throughput time for both small and large exposures, including waiting time, equals 2.9 working days. Resource utilization is approximately 75% for both the local risk staff and the central risk staff.

Now the restructuring process starts.
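The sketch below shows, in a few lines of Python, the kind of queueing simulation described above: Poisson arrivals (exponential interarrival times with a mean of 1.5 hours), triangular analysis times and a small pool of analysts serving files first-come first-served, with throughput times compared against an Upper Specification Limit. It is only a stylised stand-in for the ARENA model: the triangular parameters, the integer number of analysts and the omission of the 'on hold' delays and committee steps are all simplifying assumptions.

```python
# Minimal first-come first-served multi-server queue simulation.
# Parameters not quoted in the article (triangular min/mode/max,
# integer staffing instead of 2.33 analysts) are placeholders.
import heapq
import numpy as np

rng = np.random.default_rng(1)

def simulate(days=180, hours_per_day=9, n_analysts=2,
             mean_interarrival=1.5, tri=(2.0, 6.0, 14.0)):
    horizon = days * hours_per_day
    servers = [0.0] * n_analysts          # time at which each analyst becomes free
    heapq.heapify(servers)
    t, throughput = 0.0, []
    while True:
        t += rng.exponential(mean_interarrival)   # next file arrives
        if t > horizon:
            break
        free_at = heapq.heappop(servers)          # earliest available analyst
        start = max(t, free_at)                   # wait if everyone is busy
        finish = start + rng.triangular(*tri)     # analysis duration
        heapq.heappush(servers, finish)
        throughput.append(finish - t)             # waiting plus analysis time
    return np.array(throughput) / hours_per_day   # in working days

tp = simulate()
usl = 5.0                                          # 5 working days
print(round(tp.mean(), 2), "average working days")
print(round((tp > usl).mean() * 1e6), "defective PPM beyond the USL")
```

Running several replications and reporting the share of files beyond the limit as defective parts per million mirrors the capability analysis shown in Figures 3 and 4.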
One of the first questions of the Six Sigma change managers will be: what requirements does the (internal) customer have? The front office, when asked this question, responds that it needs approval within a week (5 working days, i.e. 45 working hours). Current process performance is assessed using this benchmark as the Upper Specification Limit. In Figure 3 the histogram of the total throughput time is shown (generated by Minitab™). From the figure we conclude that the credit approval process is not compliant with the five working days in 13.1% of the cases (approximately 131,000 defective PPM, parts per million). This level of process failures (‘defects’ in Six Sigma) is far too high. So far we can conclude that, although the average throughput time seems to be fine, the process variance is very large, so that a considerable number of files (13%) is not approved within the required five working days.

The following restructuring measures are implemented:

• By improving the interface between the front office and the risk department, the percentage first time right (complete and accurate files) is improved from 67% to 95%. That means that after restructuring only 5% of the files needs to be placed ‘on hold’;9
8. See Hillier & Lieberman, o.c., p. 925, for the ‘warm-up period’, an initial stabilization period for approaching a steady-state condition.
9. See Oesterreichische Nationalbank, o.c., p. 10, for the interface between sales and risk. Improvement measures include: aligning the product segmentation used by front and risk and the use of internal templates of data to be delivered by front to the risk department for each type of analysis.
ROOM for your ambitions

Risks touch your entrepreneurial spirit and your ambitions. Aon advises you on making these risks transparent and manageable. We help you to assess, control, monitor and finance these risks. Aon stands for the integrated deployment of high-quality expertise, services and products in the field of operational, financial and personnel risk management and insurance. Aon’s focus is entirely directed at realising your ambitions.

In the Netherlands Aon has 12 offices with 1,600 employees. The company is part of Aon Corporation, Chicago, USA. The worldwide Aon network comprises some 500 offices in more than 120 countries and over 36,000 employees. www.aon.nl.

RISK MANAGEMENT | EMPLOYEE BENEFITS | INSURANCE
Figure 3: process capability – current state
• The credit presentations in the committee meetings have been shortened, so that the committees are able to convene every other day instead of twice a week; for large exposures, the first committee delivers an advice rather than a first decision;

• The analysis durations have not been shortened, since the fear is that this would impact the loan loss ratio;

• In order to reduce the variance in waiting time, the risk departments of four local business centers for corporate clients have been merged. The improved level of load balancing reduces both the average waiting time and its variance – without adding a single resource.

The average total throughput time for both small and large exposures, including waiting time, now equals 1.8 working days (before restructuring: 2.9), a reduction of 35%. An even larger improvement is seen in the variance reduction: the defects percentage has decreased to 1.3%, a ten-fold reduction (see Figure 4, less than 13,000 defective PPM, parts per million). The restructuring project succeeded in drastically improving the process capability with the help of the improved first-time-right percentage and the load balancing.

The centralization of four small local risk departments into a larger one that covers a larger geographical area means that the proximity of risk analysts to the account manager is removed. The adequate digital transfer of relevant customer and transaction data is an important condition for centralization.
However, often both the financial statements of corporate clients and their transaction parameters are already entered into banking systems, and supporting documents, such as asset valuation reports, can easily be scanned and saved in an electronic repository. Although the physical separation of account manager and risk analyst will necessitate some adaptation, the centralization of risk analysts into a larger pool has some organizational benefits of its own: temporary dips in availability due to sickness and job vacancies are more easily accommodated, and knowledge transfer from senior analysts to junior analysts is facilitated. A larger pool also encourages specialization (e.g. real estate loans or loans to the public sector) and offers more career opportunities.

Up to now we have only described the Define, Measure and Analyze phases of a Six Sigma project. The new process design is implemented in the Improve phase and monitored in the Control phase. For this last phase, a system of continuous performance measurement and management needs to be developed in order to maintain process capability at a high level (and the loan loss ratio at a low level).

Conclusion: throughput time improvement by variance reduction

In this article we have shown how the credit approval process can be improved with the help of Six Sigma principles. Although we are aware that other objectives may apply, our current scope has been limited to optimizing the throughput time.
Figure 4: process capability – after restructuring
We have shown that the centralization of the risk analysis staff from small departments of 2 or 3 analysts each into a larger one with 9 or 10 analysts reduces the variation in total throughput time dramatically, due to more effective load balancing. Small local risk departments servicing local business centers for corporate clients are often implemented as a result of the perceived necessity to have local pairs of account manager and risk analyst. However, if there are no special reasons for this proximity, and the credit approval process is assessed according to the number of files with a duration within a specific Upper Specification Limit, the case for the centralization of the risk analysis staff is very convincing.
References

(2004). Guidelines on credit risk management: credit approval process and credit risk management, Oesterreichische Nationalbank.

(2006). International Convergence of Capital Measurement and Capital Standards, BCBS, June.

(2007). EU Banking Structures, European Central Bank, October.

Gross, D. and Harris, C.M. (1998). Fundamentals of queueing theory.

Hillier, F.S. and Lieberman, G.J. (1995). Introduction to operations research.

Law, A.M. and Kelton, W.D. (2000). Simulation modeling and analysis.

Pyzdek, T. (2003). The Six Sigma Handbook, revised and expanded: the complete guide for greenbelts, blackbelts and managers at all levels.
or do you* know a better moment for the best decision of your life? www.werkenbijpwc.nl
Assurance • Tax • Advisory
*connectedthinking ©2007 PricewaterhouseCoopers. All rights reserved.
Econometrics
On the Manipulability of Votes: The Case of Approval Voting

The famous result of Gibbard and Satterthwaite shows that every voting procedure is manipulable if the voters can have any preferences over the candidates. That is, a voter may improve the voting result by not voting according to his true preference. Approval voting, introduced by Brams and Fishburn, is not manipulable if preferences are dichotomous: each voter only distinguishes between acceptable and non-acceptable candidates. Approval voting offers a compromise between flexibility and non-manipulability of the voting procedure. Based on recent and ongoing research we discuss the extent to which approval voting is manipulable if preferences are more refined. We also provide some evidence that k-approval voting, in which voters approve of exactly k candidates, may offer an alternative to approval voting that is better in terms of potential manipulation.
Strategic manipulation of votes
National elections, the Eurovision Song Festival, and councils of scientific communities have in common that voters choose among candidates by some fixed voting procedure. The theorem of Gibbard (1973) and Satterthwaite (1975) says that if we want such a voting procedure to be non-dictatorial – and dictators are generally disliked – then it will necessarily be strategically manipulable. This means that there are situations in which some voter may obtain a better result by not voting according to his true preference over the candidates. Consider the following example with three voters and five candidates.
Voter | a1 | a2 | a3 | a4 | a5
1     |  5 |  1 |  3 |  2 |  4
2     |  1 |  2 |  3 |  4 |  5
3     |  3 |  4 |  5 |  2 |  1
The voters are 1, 2, and 3, and the candidates a1,...,a5. The numbers represent preferences. E.g., voter 1 likes a1 best and a2 least. These numbers can also be used for voting: a1 obtains a total score of 9, a2 of 7, a3 of 11, a4 of 8, and a5 of 10. This particular voting procedure, the Borda rule, therefore results in the social ranking a3, a5, a1, a4, a2. If exactly one candidate is to be elected, then this would be candidate a3. Since preferences are private information, voter 1 could change his scores to 5,3,1,2,4, for a1,...,a5, respectively, resulting in total scores of 9,9,9,8,10 and thus in a5 as the final winner. Since voter 1 prefers a5 over a3, this voter gains by strategic manipulation.
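The arithmetic in this example is easy to re-check mechanically. The snippet below recomputes the Borda totals for the true preferences and for voter 1's misreport; it is a plain illustration of the example above, not code from the underlying research.

```python
# Borda totals for the three-voter, five-candidate example above,
# and the effect of voter 1 misreporting his scores.
candidates = ["a1", "a2", "a3", "a4", "a5"]
true_scores = {
    1: [5, 1, 3, 2, 4],
    2: [1, 2, 3, 4, 5],
    3: [3, 4, 5, 2, 1],
}

def borda_winner(scores):
    totals = {c: sum(s[i] for s in scores.values())
              for i, c in enumerate(candidates)}
    return max(totals, key=totals.get), totals

print(borda_winner(true_scores))
# ('a3', {'a1': 9, 'a2': 7, 'a3': 11, 'a4': 8, 'a5': 10})

manipulated = {**true_scores, 1: [5, 3, 1, 2, 4]}   # voter 1 lies
print(borda_winner(manipulated))
# ('a5', {'a1': 9, 'a2': 9, 'a3': 9, 'a4': 8, 'a5': 10})
```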
Hans Peters is professor in mathematical economics at the Department of Quantitative Economics of Maastricht University. His main interests are game theory and social choice theory. This article is based on recent joint work with Ton Storcken and Souvik Roy, associate professor and Ph.D. student at the same department.
Can strategic manipulation be avoided?
Strategic manipulation may result in the 'wrong' candidate being elected. In the example above, a3 seems to be a good compromise, but a5 is the worst candidate for voter 3. The possibility of strategic manipulation may lead to an election result that does not properly reflect the true preferences of the voters. Unfortunately, the Gibbard-Satterthwaite theorem is quite robust and holds whenever there are at least two voters, three candidates, and each individual preference over the candidates is possible. The last condition is crucial. Suppose, for instance, that every profile of preferences is single-peaked. This means that the candidates can be lined up such that each voter's preference decreases both to the left and to the right of his top candidate. As a concrete example, suppose that the five candidates above are the temperatures 18°–22° in a room, which can be adjusted by a thermostat. The three inhabitants of the room vote for the temperature. It is reasonable to assume that each person has an ideal temperature and that preference decreases further away from this ideal temperature. Such a profile of preferences is single-peaked. (The reader may want to verify that the preferences in the example above are not single-peaked.) Consider
the voting procedure that picks the median of the reported ideal temperatures. It is not hard to check that under this procedure no voter can improve the result by strategic voting, i.e., by not reporting his true preference.
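For the thermostat example this is easy to check by brute force. The sketch below is an illustration (the three ideal temperatures are assumed, not taken from the article): it computes the median of the reported ideals and verifies that no single voter can obtain an outcome closer to his own ideal by reporting a different temperature.

```python
import statistics

temperatures = [18, 19, 20, 21, 22]
ideals = [19, 20, 22]          # assumed ideal temperatures of the three voters

def winner(reports):
    """Median rule: the chosen temperature is the median of the reported ideals."""
    return statistics.median(reports)

truthful = winner(ideals)
for i, ideal in enumerate(ideals):
    for lie in temperatures:
        reports = list(ideals)
        reports[i] = lie
        # With single-peaked preferences, a temperature closer to the ideal is better.
        assert abs(winner(reports) - ideal) >= abs(truthful - ideal), \
            f"voter {i} could manipulate by reporting {lie}"
print("No voter can gain by misreporting; truthful outcome:", truthful)
```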
Dichotomous preferences and approval voting
We now focus on a different preference restriction, related to the procedure of approval voting as proposed in Brams and Fishburn (1983). Under approval voting, each voter votes for a subset of the candidates, that is, assigns 1 to each of these candidates and 0 to the others. The candidates with the most votes are the winners. So this voting procedure results in a set of candidates. (If a fixed number of candidates is to be elected, then we need an additional rule, but this point is ignored here for simplicity.) A voter's preference is dichotomous if this voter has a set of top candidates between which he is indifferent, and which he prefers to the other candidates, between all of which he is again indifferent. Under quite natural assumptions on how a preference over candidates is extended to a preference over sets of candidates, approval voting cannot be strategically manipulated when preferences are dichotomous. That is, no voter can bring about a better set of candidates by not voting exactly for his own top set. This is a very attractive property, but the assumption of dichotomous preferences is quite strong. If we view a voter's top candidates as his acceptable candidates and the others as non-acceptable, then it is likely that he is not (completely) indifferent between the candidates that he finds acceptable, nor between those that are not acceptable. What does this imply for the strategic manipulability of approval voting?
Strategic manipulability of approval voting
If preferences are not dichotomous, then strategic manipulation of approval voting is possible (Brams and Fishburn, 1983). In Peters, Roy and Storcken (2009) we try to obtain some insight into the seriousness of this problem. For the purpose of this article, assume that voters have strict preferences (no indifferences) and a specific top set of candidates that they find acceptable. Strategic manipulation means that a voter can improve the set of winners by not voting exactly for his set of acceptable candidates. To evaluate when a set of winners becomes better, we consider three different extensions of strict preferences over candidates to preferences over sets of candidates. Worst comparison means that a voter prefers a set of candidates B over a set C if he prefers the worst candidate in B over the worst candidate in C. Best comparison is analogous, but now comparing the best candidates of B and C. A more refined notion
is stochastic comparison: B is preferred over C if the lottery assigning equal probabilities to the candidates in B stochastically dominates the lottery assigning equal probabilities to the candidates in C. Note, however, that this last preference extension is not complete: not every pair of sets can be compared in this way.
Examples of manipulation
Here are some examples of manipulation of approval voting. There are six voters (1,...,6) and four candidates a, b, c, d. We consider manipulation by voter 1; under approval voting it is sufficient to know the total votes cast by the other voters.
• Assume that the votes from 2,...,6 add up to 4, 4, 3, 2 for a, b, c, d, respectively. If voter 1 has preference cab|d (meaning that he prefers c over a over b over d and finds the first three acceptable), then truthful voting results in the winning set {a, b}. If 1 votes only for a and c then the winning set is {a}, which is better than {a, b} both by worst and by stochastic comparison. If 1 votes only for c then the winning set is {a, b, c}, which is better by best comparison.
• Now the votes cast by 2,...,6 add up to 2, 4, 2, 4 for a, b, c, d, respectively, and voter 1 has preference ca|bd. Truthful voting results in {b, d}. Voting for b, a and c results in {b}, which is better by worst and stochastic comparison.
• Finally, the votes cast by 2,...,6 add up to 3, 4, 2, 2 for a, b, c, d, respectively, and voter 1 has preference c|abd. Truthful voting results in {b}. Voting for a and c results in {a, b}, which is better by best comparison.
Observe that in all these examples voter 1 still votes sincerely, even when he manipulates: he still votes for a top-ranked set of candidates. Nevertheless, he may sometimes not vote for a candidate even though he finds that candidate acceptable, or vote for a candidate even though he finds that candidate not acceptable.
The extent of potential manipulation
In Peters et al. (2009) the profiles of preferences at which some voter can manipulate are characterized. The simplest way to get an idea of the extent of potential manipulation is to count the total number of manipulable profiles. In general this is combinatorially (too) complex. Using simulation, we found that for the case of six voters and four candidates, as in the examples above, the percentages of manipulable profiles under worst, best, and stochastic comparison of winning sets are about 39%, 60%, and 80%, respectively. For ten voters and four candidates these numbers are 31%, 57%, and 73%.
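The manipulation examples above are also easy to verify mechanically. The following sketch is only an illustration (not code from the paper): it computes the approval-voting winning set and replays the first example, including the worst- and best-comparison checks.

```python
def approval_winners(others, ballot):
    """Winning set under approval voting: candidates with the maximal total score.
    `others` holds the votes already cast by voters 2,...,6; `ballot` is the set
    of candidates approved by voter 1."""
    totals = {c: v + (c in ballot) for c, v in others.items()}
    top = max(totals.values())
    return {c for c, v in totals.items() if v == top}

# First example: votes of voters 2,...,6 and voter 1's preference c > a > b > d,
# with {c, a, b} acceptable (preference cab|d).
others = {"a": 4, "b": 4, "c": 3, "d": 2}
pref = ["c", "a", "b", "d"]                      # best to worst
better = lambda x, y: pref.index(x) < pref.index(y)

truthful = approval_winners(others, {"c", "a", "b"})   # -> {'a', 'b'}
manip1 = approval_winners(others, {"a", "c"})          # -> {'a'}
manip2 = approval_winners(others, {"c"})               # -> {'a', 'b', 'c'}

# Worst comparison: the worst candidate of {a} beats the worst of {a, b}.
assert better(max(manip1, key=pref.index), max(truthful, key=pref.index))
# Best comparison: the best candidate of {a, b, c} beats the best of {a, b}.
assert better(min(manip2, key=pref.index), min(truthful, key=pref.index))
print(truthful, manip1, manip2)
```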
Some general trends can be distinguished, such as an increasing likelihood of manipulation from worst comparison to best comparison and stochastic comparison. Clearly, if the number of voters is large, then manipulation is almost excluded, since the probability that some voter is 'pivotal', i.e., influences the winner, becomes very small. These percentages are therefore relevant in particular for cases with relatively few voters, like elections of boards or councils of scientific communities. In spite of this, individual manipulation in general – not only for approval voting – is relevant also in large elections, like national elections for Parliament. This is so since individual voters may expect other voters with similar preferences to vote in the same way. For instance, in the Dutch elections for Parliament in January 2003, the Social Democratic Party (PvdA) clearly came out very strong, as many (more) left-oriented people voted for it in order to decrease the probability of a conservative cabinet.
Approval voting and scoring rules
In a scoring rule there is a fixed (weakly) decreasing sequence of numbers that a voter has to assign to the candidates. The Borda rule in the first example above is a scoring rule. Closely related to approval voting is the k-approval scoring rule, according to which each voter assigns a 1 to exactly k candidates and a 0 to the remaining candidates. For k=1 this is the well-known plurality rule: this rule is sometimes not manipulable at all, e.g. if there are only two voters, but it suffers from a serious drawback since it works to the disadvantage of candidates that are often ranked high, but not first. Consider, for instance, the position of a party like D66 in the Dutch political landscape. There is some evidence that among all scoring rules, k-approval scoring rules with k>1 do well in terms of non-manipulability (Peters, Roy and Storcken, 2008). Specifically, if the number of voters is not too small, then setting k equal to half the number of candidates seems to be minimally manipulable in terms of the total number of manipulable profiles. This matter is still under investigation. k-Approval scoring rules are less flexible than approval voting but seem to be also less manipulable. For instance, for our example with six voters and four candidates, the percentages of manipulable profiles under worst, best, and stochastic comparison are, respectively, 28%, 41%, and 52% for k=2, and 21%, 43%, and 64% for k=3.
Concluding remarks
Voting procedures are almost always manipulable. Approval voting offers a compromise between the possibility to report detailed information on one's preference and non-manipulability, but the latter is violated to a lesser or larger extent if preferences are not dichotomous. Our research so far indicates that, in this respect, k-approval voting seems to offer a better compromise. Of course, results like these should be considered in the right perspective. The number of manipulable preference profiles is just a crude measure of manipulability: it does not take into account the likelihood of such profiles within a certain population of voters, nor the seriousness of the consequences of manipulation. Also, even if manipulation is possible, voters may abstain from it simply because they do not have enough information about the votes of others to be able to manipulate successfully.
References
Brams, S.J. and Fishburn, P.C. (1983). Approval Voting. Birkhäuser, Boston MA.
Gibbard, A. (1973). Manipulation of voting schemes: a general result, Econometrica, 41, 587–602.
Peters, H., Roy, S. and Storcken, T. (2008). Manipulation under k-approval scoring rules, METEOR Research Memorandum 08/056, University of Maastricht.
Peters, H., Roy, S. and Storcken, T. (2009). On the manipulability of approval voting. Mimeo, University of Maastricht.
Satterthwaite, M. (1975). Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions, Journal of Economic Theory, 10, 187–217.
Econometrics
Portfolio Allocation in Times of Stress
When allocating assets to a portfolio, expected returns and covariances of the underlying assets are deciding factors for the weights attached to each of the assets. The present note investigates the possibility of estimating time-varying expected returns and covariances, and their influence on the optimal portfolio choice over time. The model-based portfolio strategy is compared with an equal-weight portfolio and a historical mean/variance strategy, on a set of six main European stock indices.
Charles S. Bos obtained a PhD in Econometrics at the Erasmus University Rotterdam in 2001. Afterwards, he worked as a PostDoc researcher at the VU University Amsterdam, and a year as a Research Officer at Oxford University. At present his research focuses on time series modeling of financial and micro-series, both at low and high frequencies.
Introduction
With financial markets in turmoil, this might at first sight not seem the most ideal moment to look at portfolio allocation. But then again, as long as one considers portfolio allocation to be a dynamic, continuous activity, the present times may be very attractive times for choosing an optimal portfolio. Updating the view one holds on the market, on risks and possibilities, should lead to adapting one's portfolio as well. Portfolio optimisation is an old topic, described in e.g. Markowitz (1952). In that article, given a vector of expected returns with corresponding covariance, the optimal portfolio in the mean-variance sense is derived. However, there is no real discussion of where the estimates of the expected returns and the covariance are obtained. In this note, a method is discussed to extract both a stochastic volatility and a stochastic correlation between assets at a daily level. This results in daily expected return vectors and daily covariance matrices, in Section 2, which can be used as input for a Markowitz portfolio optimisation scheme. Here, I'll research the option of using such time-varying moments in constructing a portfolio in a simple scheme. Section 3 quickly introduces the Markowitz (1952) optimal portfolio. As the optimal portfolio based on the daily perception of the market will be quite volatile, a first setup of a more practical trading rule is introduced as well.
Indicative results on an investment portfolio in European stock indices are provided in Section 4, followed by concluding remarks in Section 5.
Modelling discussion
Following work in Bos and Gould (2007), begin by considering a bivariate system of log-returns $r_t^{ij}$, which is modelled according to a local level model as in
$$r_t^{ij} = \mu_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \Sigma_t^{ij}), \qquad (1)$$
$$\mu_{t+1} = \mu_t + \eta_t, \qquad \eta_t \sim N\bigl(0, \mathrm{diag}(\sigma_{i\eta}^2, \sigma_{j\eta}^2)\bigr). \qquad (2)$$
The expected return is allowed to vary a bit over time. As there is very little (if any) predictability, $\sigma_\eta$ is expected to be tiny in any case. The covariance matrix of the returns is allowed to vary over time, as
$$\Sigma_t^{ij} = \begin{pmatrix} \sigma_{it}^2 & \rho_{ijt}\sigma_{it}\sigma_{jt} \\ \rho_{ijt}\sigma_{it}\sigma_{jt} & \sigma_{jt}^2 \end{pmatrix}, \qquad \rho_t \equiv \rho_{ijt} = \frac{\exp(q_t)-1}{\exp(q_t)+1}, \qquad (3)$$
$$q_{t+1} = q_t + \zeta_t, \qquad (4) \qquad\qquad \zeta_t \sim N(0, \sigma_\zeta^2), \qquad (5)$$
$$\sigma_{it} = \exp(h_{it} + \gamma_i), \qquad h_{i,t+1} = \phi\, h_{it} + \xi_{it}, \qquad \xi_t \sim N(0, \Sigma_\xi). \qquad (6)$$
In short, this allows both the variance and the correlation to move over time, randomly. The intention is to extract, for each pair of assets, the correlation between the assets, $\rho_{ijt}$, and the variances $\sigma_{it}^{(j)}$ and $\sigma_{jt}^{(i)}$. Note that the variance of asset i is calculated in this model against asset j, leading to a whole series of variance estimates. As each of those estimates should be a valid estimate of the variance, in the continuation a joined estimate can be constructed as
$$\bar{\sigma}_{it} = \frac{1}{n-1}\sum_{j \neq i} \sigma_{it}^{(j)}.$$
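To make the mechanics concrete, the sketch below simulates a stripped-down bivariate version of such a model. The parameter values and the exact functional forms are assumptions for illustration only, not the estimation code of the paper: a random-walk mean, log-volatilities following an AR(1), and a random-walk process q that is mapped into a correlation in (−1, 1).

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
phi, gamma = 0.98, np.log(0.01)          # AR(1) persistence and mean log-volatility
sig_eta, sig_zeta, sig_xi = 1e-4, 0.02, 0.05   # assumed disturbance scales

mu = np.zeros(T); q = np.zeros(T)
h = np.zeros((T, 2)); r = np.zeros((T, 2))

for t in range(1, T):
    mu[t] = mu[t-1] + sig_eta * rng.standard_normal()        # random-walk mean
    q[t] = q[t-1] + sig_zeta * rng.standard_normal()         # correlation driver
    h[t] = phi * h[t-1] + sig_xi * rng.standard_normal(2)    # AR(1) log-volatilities

    rho = (np.exp(q[t]) - 1) / (np.exp(q[t]) + 1)            # map q into (-1, 1)
    sig = np.exp(gamma + h[t])                               # daily volatilities
    cov = np.array([[sig[0]**2, rho * sig[0] * sig[1]],
                    [rho * sig[0] * sig[1], sig[1]**2]])
    r[t] = rng.multivariate_normal([mu[t], mu[t]], cov)      # simulated log-returns

print("final correlation:", (np.exp(q[-1]) - 1) / (np.exp(q[-1]) + 1))
```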
Figure 1: Extracting portfolio means, variances and correlations, focusing on the AEX index against other European stock indices
Estimation of the parameters of this Dynamic Correlation Stochastic Volatility (DCSV) model is performed in an MCMC sampling scheme, using Gibbs sampling with data augmentation (Lancaster, 2004). With the estimated parameters, the volatilities and correlations are (particle) filtered (Godsill et al., 2004), to ensure that only past information is used in evaluating the risks related to a portfolio. For details, please refer to Bos and Shephard (2006) and Bos and Gould (2007).
Figure 1 displays the outcome of estimating the above bivariate model on the AEX stock index against five other European stock indices. For each bivariate estimation, an estimate of the drift μt (left panel), the standard deviation σt (middle panel) and the correlation ρt (right panel) results. Clearly there is little difference when the drift or volatility is estimated using any of the counterpart indices, and hence the average of the estimates is used in the following as the global estimate of the drift/volatility term. Each combination of assets also provides a correlation estimate between the indices. The AEX is seen to be correlated strongly with most indices, especially in the later years. Only the Austrian ATX index is known to be relatively un(cor)related to the other European indices.

                          #Δ       Δ     Return   Variance     Utility
DCSV             c=0.5  0.921  11.365    4.028      1.266     -75.079
                 c=1    0.971  10.721    1.691      1.231    -152.189
                 c=2    0.973  11.725    0.201      1.219    -304.643
                 c=5    0.971  12.681   -0.619      1.214    -759.553
Equal fractions  c=0.5  0.005   0.027   -4.395      1.904    -123.412
                 c=1    0.005   0.027   -4.395      1.904    -242.429
                 c=2    0.005   0.027   -4.395      1.904    -480.463
                 c=5    0.005   0.027   -4.395      1.904   -1194.565
Historical       c=0.5  0.004   0.485    6.609      1.420     -82.158
mean/variance    c=1    0.004   0.398    4.899      1.322    -160.423
                 c=2    0.004   0.315    3.707      1.273    -314.486
                 c=5    0.004   0.269    2.984      1.253    -780.162

Note: The table reports the fraction of trades, the turnover of the portfolio, the total yearly percentage returns, the variance of the returns, and the total utility attained. The first panel takes the model-based approach, whereas the latter panels take alternative investment methods. The period of evaluation is 2001/1/3–2009/2/12. There is no limit on the trading frequency.
Table 1: Results of investment strategies
Combining results towards a portfolio
With the mean and covariance matrix of the returns extracted from the model estimations, it is time to optimise a portfolio. The Markowitz (1952) approach takes a simple mean-variance utility function,
$$U(w_t) = w_t'\mu_t - \frac{c}{2}\, w_t'\Sigma_t w_t, \qquad \text{subject to } \sum_i w_{it} = 1, \quad w_{it} \geq 0.$$
Given a level of risk aversion c, this function can be optimised over w, the fractions of wealth that should be invested in each asset. Given a certain portfolio choice wt, all the wealth of the investor is put into the stocks. The next day, given the gains and losses of those stocks, an effective portfolio wt+1|t results, reflecting the larger effective share of stocks that gained in value, and a smaller share of the stocks that lost. Note that this optimisation is in essence static: given μ and Σ, an optimal portfolio is provided. If μ and Σ now vary over time, the optimal portfolio also changes, possibly each and every day. This would incur large trading costs, and hence would not be interesting for an investor from a practical viewpoint.
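As an illustration of this optimisation step (not the author's code; the example numbers and the use of SciPy's SLSQP solver are my own assumptions), the sketch below maximises the mean-variance utility over long-only weights that sum to one.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(mu, Sigma, c):
    """Maximise w'mu - (c/2) w'Sigma w  subject to  sum(w) = 1, w >= 0."""
    n = len(mu)
    objective = lambda w: -(w @ mu - 0.5 * c * w @ Sigma @ w)   # minimise the negative
    constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, np.full(n, 1.0 / n),
                   method="SLSQP", bounds=bounds, constraints=constraints)
    return res.x

# Illustrative daily moments for three indices (made-up numbers)
mu = np.array([0.0004, 0.0002, 0.0003])
Sigma = np.array([[1.5, 0.9, 0.6],
                  [0.9, 1.2, 0.5],
                  [0.6, 0.5, 1.0]]) * 1e-4
print(optimal_weights(mu, Sigma, c=2.0))
```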
Instead, assume that the trader only changes the portfolio if it is worthwhile to move, i.e., if
$$\frac{U(w_t^*) - U(w_{t|t-1})}{\lVert w_t^* - w_{t|t-1} \rVert} > \alpha\, Q.$$
Only if utility increases enough by moving from the old portfolio $w_{t|t-1}$ towards the optimal portfolio $w_t^*$ is the move made. Q is a calibrating constant here, and α a parameter which the investor can set according to his or her reluctance to trade. There are of course many more advanced approaches for dynamically allocating portfolios; see e.g. Bansal et al. (2004) for an overview. In this note, however, we stick to a relatively simple approach, which could be useful for a practitioner as well.
Investment results in the Euro area
The choices the investor has to make are now both the level of risk aversion c and the reluctance-to-trade α. The outcome of the investment strategy can be judged by the number of trades #Δ, the total turnover of the trades Δ, the yearly percentage portfolio returns, the variance of the portfolio returns, and the cumulative utility attained through the strategy. At a later stage, the results will be split up by subperiod, as a check for robustness.
First, Table 1 provides a first set of results on these statistics over the full period. Apart from the model-based results, two alternative strategies are introduced. The first alternative reweights the portfolio at the beginning of the year such that each stock receives the same weight. The second alternative uses last year's mean and variance to construct an optimal portfolio, thus also adapting the portfolio at most once a year. The evaluation starts in 2001, leaving one year of data for the models to get initial estimates. The results in this first table indicate that taking an equal investment in each stock index is not a good idea, with the lowest returns and utility overall. Clearly, a choice can be made. The difference between using a historical approach and a model-based approach is some 7-20 points in terms of utility. The historical approach seems to deliver higher returns, but at higher risk as well. The historical approach has the advantage that it trades little (on just 1/250 of the days), whereas the DCSV approach on average buys/sells the portfolio 11 times per year.
When the reluctance-to-trade α is varied, results as in Table 2 are found. When α increases, fewer trades occur, timed at those moments when the apparent gain in utility is greatest.

                 #Δ       Δ     Return   Variance    Utility
DCSV   α=0.5   0.921  11.365    4.028      1.266     -75.079
       α=1     0.473   9.491    3.873      1.264     -75.163
       α=2     0.304   8.475    4.351      1.267     -74.044
       α=5     0.140   5.011    4.177      1.267     -75.044
       α=10    0.066   2.412    4.436      1.262     -74.482
       α=20    0.029   1.491    3.462      1.277     -76.388
       α=50    0.007   0.942    8.542      1.398     -78.921

Note: See Table 1 for an explanation of the entries of the table. Here, a limit α for the reluctance-to-trade is implemented. Results are evaluated for risk aversion c = 0.5.
Table 2: Results of limited-trade investment strategy

Comparing the last line in Table 2 with the first line of the historical mean/variance panel of Table 1, it is seen that with α = 50 the model trades just twice as often, with a turnover of less than one full portfolio per year. The yearly returns are however considerably higher, at 8.5% with a variance of 1.398, instead of 6.6% and 1.420, respectively. Graphically, the difference is depicted in Figure 2. It is seen how both strategies invest heavily in the Austrian stock market, until 2006. The model-based strategy decides only halfway through 2006 that it is time to move part of the investment elsewhere, whereas the historical approach can only take such a decision at the change of years, when the portfolio is re-evaluated. In 2008/2009, the stock markets are very risky, and the model-based approach is seen to switch more eagerly, trying to find an optimal portfolio. The historical approach is only re-evaluated once a year, and hence took the full drop of the AEX index.
Figure 2: Portfolio weights using the DCSV model (top) or the historical mean/variance (bottom)
Figure 2 only depicts the weights that were chosen by each of the methods. To judge whether good choices were made, a year-by-year comparison is more informative, as in Table 3. Returns tend to be higher in most years for the model-based approach, against a lower variance. Especially in the last two years, with extreme volatility on the financial markets, the added value of using a more flexible model-based approach proves to be very valuable for lowering the volatility of the results.
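A minimal sketch of such a reluctance-to-trade rule (my own formulation of the condition above, with assumed values for α and Q, not the implementation used in the paper) could look as follows; it only rebalances when the utility gain per unit of turnover is large enough.

```python
import numpy as np

def utility(w, mu, Sigma, c):
    """Mean-variance utility of a weight vector."""
    return w @ mu - 0.5 * c * w @ Sigma @ w

def rebalance(w_old, w_new, mu, Sigma, c, alpha, Q):
    """Move to w_new only if the utility gain per unit of turnover exceeds alpha*Q."""
    turnover = np.abs(w_new - w_old).sum()
    if turnover == 0:
        return w_old
    gain = utility(w_new, mu, Sigma, c) - utility(w_old, mu, Sigma, c)
    return w_new if gain / turnover > alpha * Q else w_old

# Illustrative numbers only
mu = np.array([0.0004, 0.0002])
Sigma = np.array([[1.5, 0.8], [0.8, 1.2]]) * 1e-4
w_held = np.array([0.7, 0.3])       # effective portfolio carried over from yesterday
w_target = np.array([0.55, 0.45])   # today's mean-variance optimum
print(rebalance(w_held, w_target, mu, Sigma, c=0.5, alpha=5.0, Q=1e-5))
```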
             DCSV (α=50)                     Historical mean/variance
Year      Return  Variance    Utility      Return  Variance    Utility
2001      -5.997     0.826    -57.518      -6.619     0.912    -63.478
2002      -0.020     0.796    -49.718      -0.182     0.803    -50.308
2003      29.658     0.545     -4.753      29.710     0.547     -4.786
2004      45.394     0.855     -8.639      42.032     0.764     -6.211
2005      41.633     0.727     -4.311      29.226     0.450      0.814
2006      23.239     1.016    -40.504      17.265     1.024    -46.914
2007       1.388     0.982    -59.957       1.028     0.907    -55.662
2008     -57.433     5.227   -383.324     -51.748     5.691   -406.812
2009     -69.058     2.871   -247.543     -57.709     3.258   -260.621
Average    8.542     1.398    -78.921       6.609     1.420    -82.158

Note: See Table 1 for an explanation of the entries of the table.
Table 3: Year-on-year investment results

Where to go, where to invest?
This article started off by noticing that the markets are in turmoil, and that in such times it could be useful to have a model-based system that gives an investor a signal when to change his or her investment portfolio. Section 2 introduced a conceptually simple bivariate model, which could be extended to provide a multivariate estimate of the expected mean and variance of the returns. Using Markowitz (1952), these estimates of market sentiment could be translated into optimal portfolios, or at least optimal portfolios in a mean-variance sense. Noticing that it would be far too costly to move portfolios from day to day, a reluctance-to-trade parameter was introduced such that the portfolio was only adapted on days where a large improvement in utility could be attained. The results, on a portfolio of European stock indices, are promising. In tranquil times, the model has little advantage over a standard historical mean/variance approach. In times of change, however, the model indicates when to move position, and seems to do so relatively well. A first future extension of this approach would be to introduce an alternative class of assets, e.g. government bonds. A burning question would be whether such a modelling approach would know when to switch, getting out of the stock market on time. Maybe one day one could evade, well, the next period of financial stress...
References
Bansal, R., Dahlquist, M. and Harvey, C.R. (2004). Dynamic trading strategies and portfolio choice, Working Paper 10820, NBER.
Bos, C.S. and Gould, P. (2007). Dynamic correlations and optimal hedge ratios, Discussion Paper TI 07-025/4, Tinbergen Institute.
Bos, C.S. and Shephard, N. (2006). Inference for adaptive time series models: Stochastic volatility and conditionally Gaussian state space form, Econometric Reviews, 25(2–3), 219–244.
Engle, R.F. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models, Journal of Business and Economic Statistics, 20(3), 339–350.
Godsill, S.J., Doucet, A. and West, M. (2004). Monte Carlo smoothing for nonlinear time series, Journal of the American Statistical Association, 99(465), 156–168.
Lancaster, T. (2004). An Introduction to Modern Bayesian Econometrics, Oxford: Blackwell Publishing.
Markowitz, H. (1952). Portfolio selection, Journal of Finance, 7(1), 77–91.
Actuarial Sciences
Dynamic Asset Allocation in a Hybrid Pension Fund
Hybrid pension funds are the focus of this work, as they have become popular since they represent a diversification of risks between sponsors and members of pension plans. The effects of dynamic asset allocation on the volatility of the fund and the contribution are investigated when the only source of unpredictable experience assumed is volatile rates of return. To adjust the value of the contributions two methods are compared, i.e. the spreading described in Dufresne (1988) and the modified spreading developed by Owadally (2003). The main results of this work are, firstly, that no evidence is found for the so-called 'lifestyle' investment strategy to be optimal; instead, a more conservative asset allocation through the working life of an individual is found to give a smaller value of the total future cost. And secondly, that the individual accumulating this kind of pension fund would prefer to adjust the value of the contributions by assuming the modified spreading method, as this minimises the volatility of the fund at the cost of a slightly higher volatility of the contribution. The basis of this work is found mainly in Owadally (2003) and Vigna and Haberman (2001).
Denise Gomez-Hernandez is a full-time professor at the Universidad Autónoma de Querétaro in Mexico. She has a Ph.D. in Actuarial Science with a specialization in pensions from Cass Business School in London and an M.Sc. in Actuarial Science from Heriot-Watt University in Edinburgh. Her research interests are: funding of pension funds, smoothing of contributions in pension funds, stochastic interest rates, investment risks and mortality rates.
Introduction
There is an increasing shift from Defined Benefit (DB) schemes to Defined Contribution (DC) schemes around the world. In the UK, for example, the vast majority of private sector DB schemes (mainly final salary schemes) have been closed to new entrants and are being replaced by DC or money purchase schemes. This shift is said in Farr (2007) to be likely to continue, as an increasing number of final salary DB schemes will stop future accrual of benefits. Latin American countries are also shifting from DB to DC. According to Gomez Hernandez and Stewart (2008), Chile is a pioneer on this subject, introducing DC schemes in 1981, followed by many other countries. It is also mentioned in Farr (2007) that the main reason for this rapid change in provision is that sponsors of these plans have felt unable to finance the cost of current levels of DB. The reason for shared-risk schemes becoming popular, then, is that they appeal to larger employers (or governments) who cannot afford or are unwilling to take on the long-term risks
associated with a balance-of-cost DB scheme (e.g. the typical sixtieth final salary scheme in the UK, or 100% of the 5-year average salary in Latin American countries such as Mexico). These costs have increased mainly due to the increase in the number of pensioners relative to the number of workers, and the increase in life expectancy among the population. Shared-risk schemes would offer employers the ability to control costs into the future, for instance through the ability to increase the normal pension age or to hold back targeted indexation of benefits until the scheme could safely pay such benefits. The reason for hybrid schemes becoming popular is the motivation of this work. Of interest are the effects that dynamic asset allocation might have on the total future cost and on the volatility of the fund and the contribution, comparing two methods to adjust the value of the contributions.
The Model
The proposed model is based on DC schemes but does not have fixed contributions. Dynamic asset allocation is considered, based on the model proposed by Vigna and Haberman (2001). A proportion of the fund invested in a high-risk asset is varied through time, in order to minimise the total future cost incurred when comparing the actual value of the pension fund with a pre-defined target every period. This target depends on the accumulation of constant contributions paid by a single individual at a certain interest rate. The value of the fund at time t + 1 is given by
the following equation:
$$f_{t+1} = (f_t + C_t)\bigl(1 + y_t\,\nu_t + (1 - y_t)\,\mu_t\bigr), \qquad (1)$$
where $C_t$ is the contribution paid at time t, $y_t$ the proportion of the fund invested in the high-risk asset, $\mu_t \sim N(\mu, \sigma_\mu^2)$, $\nu_t \sim N(\nu, \sigma_\nu^2)$, with $\mu \le \nu$ and $\sigma_\mu \le \sigma_\nu$, so that $\nu_t$ is the return on the high-risk asset and $\mu_t$ the return on the low-risk asset. The initial value of $y_t$ is given (i.e. $y_0$), whereas the subsequent values of $y_t$ for t > 0 are calculated, and denoted as $y_t^*$. The value of the contribution $C_t$ is simulated as
$$C_t = c + S_t, \qquad (2)$$
where c is a fixed contribution and $S_t$ the adjustment to the contribution, which depends on the difference between the actual value of the fund and the value of the pre-defined target. Two methods are assumed to adjust the value of these contributions. First, the spreading method described by Dufresne (1988):
$$S_t = K\,(F_t - f_t), \qquad (3)$$
where K is a constant and $F_t$ is the pre-defined target, given by
$$F_t = c\,\ddot{s}_{\,\overline{t}|\,r^*}, \qquad (4)$$
the accumulated value at time t of the constant contributions c at rate $r^*$, where $r^*$ represents an average return between the high-risk asset and the low-risk asset. Second, the modified spreading method developed by Owadally (2003), defined as
$$S_t = \lambda_1\,(F_t - f_t) + \lambda_2 \sum_{s=1}^{\infty} (F_{t-s} - f_{t-s}), \qquad (5)$$
with $F_t$ as before, and $\lambda_1$ and $\lambda_2$ constants representing the pace at which the difference between the pre-defined target and the value of the fund is being paid off. The aim of this methodology is then to find future asset allocations at every time t, denoted by $y_t^*$, that minimise a total future cost given by
$$G_t = \sum_{s=t}^{N} \gamma^{\,s-t}\, C(s), \qquad (6)$$
where γ is a discount factor and C(s) the cost associated with the process. Therefore, an array of future feasible investment strategies (i.e. values of $y_t$) is fixed in advance, and for each t we test all these values to choose the appropriate one (denoted $y_t^*$) which minimises this total future cost. The interested reader is referred to Gomez-Hernandez (2008) for a more detailed explanation of the model.
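A small simulation sketch of the fund recursion with the simple spreading adjustment may help fix ideas. The parameter values are my own illustrative assumptions, a constant high-risk proportion is used for simplicity, and the target is built up recursively at the rate r*, which is one way to implement the accumulation of constant contributions.

```python
import random

random.seed(0)
T = 40                      # years of accumulation
c, K = 0.12, 0.3            # fixed contribution (12% of salary) and spreading constant
mu, sd_mu = 0.02, 0.02      # low-risk asset: mean return and st.dev. (assumed)
nu, sd_nu = 0.06, 0.15      # high-risk asset: mean return and st.dev. (assumed)
y = 0.5                     # constant proportion in the high-risk asset, for simplicity
r_star = y * nu + (1 - y) * mu   # average return used for the target

f, F = 0.0, 0.0             # fund and pre-defined target
for t in range(T):
    S = K * (F - f)                                # spreading adjustment, as in (3)
    C = c + S                                      # contribution actually paid, as in (2)
    high = random.gauss(nu, sd_nu)                 # realised high-risk return
    low = random.gauss(mu, sd_mu)                  # realised low-risk return
    f = (f + C) * (1 + y * high + (1 - y) * low)   # fund recursion, as in (1)
    F = (F + c) * (1 + r_star)                     # target grows with constant contributions

print(f"fund after {T} years: {f:.3f}, target: {F:.3f}")
```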
Results
When we make certain assumptions, namely an individual accumulating a fund during his or her working life over a total of 40 years, investing the accumulated value of this fund in UK equities as the high-risk asset and UK bonds as the low-risk asset, a Normal Volatility scenario for the rates of return taken from Vigna and Haberman (2001), 12% of salary as a fixed contribution and no initial assets in the fund, the main results are as follows. The results for the optimal asset allocation, as an average over 10,000 simulations for each period of time, and the minimum total cost, also as an average, are given in Figure 1.
Figure 1: Optimal asset allocation and minimum total cost through time
The two curves within each graph compare the two methods to adjust the value of contributions (i.e. the modified spreading and the spreading). The results show that, when assuming the modified spreading method, the total future cost is smaller than when assuming the spreading. Moreover, when using the modified spreading method, a more conservative asset allocation is required to produce a smaller total future cost, as shown in Figure 1. That is, the proportion of the fund invested in equities is smaller for the modified spreading method to produce a smaller value of the total future cost than when assuming the spreading method. This result holds because, by assuming the modified spreading, the contribution is gradually adjusted by the value of St in equation (5), eliminating a specific amount of the past and present difference between the value of the actual fund and the value of the pre-defined target through time. When assuming the spreading method, on the other hand, these differences accumulate through time (as only present deficits are paid), making the proportion of the fund invested in equities increase in order to match the value of the pre-defined target. Also from Figure 1, it is shown that the proportion of the fund invested in equities increases with time, up to a maximum of less than 60% for the modified spreading and more than 60% for the spreading method. In either case, this
proportion is not very high, which suggests that the 'optimal' asset allocation for this individual is to start by investing a higher proportion of the fund in UK bonds and to switch slowly into UK equities, in order to minimise the total future cost. The result in Figure 1 is the opposite of the one found in Vigna and Haberman (2001). That is, the optimal asset allocation shown in this figure does not agree with the so-called lifestyle asset allocation. The main reason is that we vary the value of the contributions, whereas in Vigna and Haberman (2001)'s model constant contributions are assumed. Our results, however, agree with the results found in Booth and Yakoubov (2000) and Blake et al. (2001), where it is found that the so-called 'lifestyle' investment strategy is not beneficial and that equity investment is better with a well-diversified portfolio. The interested reader should refer to Gomez-Hernandez (2008) for a more detailed explanation.
The variance of the fund and of the contribution through time, again for both methods to adjust the value of contributions (i.e. the modified spreading and the spreading), was also investigated in this work. The results are given in Figure 2 and show that when assuming the modified spreading method the fund variance is smaller than when assuming the spreading, at the expense of slightly increasing the contribution volatility. That is, the maximum volatility of the contribution, found at the end of the projection period, increases from 0.6% to 1.2%. The opposite happens with the fund volatility, which decreases from 12% when assuming the spreading method to 6% when assuming the modified spreading. These results suggest that when an individual accumulates this kind of hybrid pension fund, gradually increasing the proportion of the fund invested in equities until retirement, and assumes the modified spreading, he or she may expect low volatility of both the fund and the contribution.
Figure 2: Variance of the fund and contribution through time
Conclusions
Two main conclusions are drawn from this work. Firstly, the modified spreading is more efficient than the spreading, as a smaller proportion of the fund invested in UK equities achieves a smaller value of the total future cost. And secondly, when an individual accumulates this kind of hybrid pension fund by gradually increasing the proportion of the fund invested in equities until retirement, he or she may expect low volatility of both the fund and the contribution when assuming the modified spreading method.
References
Blake, D., Cairns, A. and Dowd, K. (2001). Pensionmetrics: stochastic pension plan design and value-at-risk during the accumulation phase, Insurance: Mathematics and Economics, 29, 187–215.
Booth, P. and Yakoubov, Y. (2000). Investment policy for defined contribution pension scheme members close to retirement: an analysis of the "lifestyle" concept, North American Actuarial Journal, 4(2), 1–19.
Dufresne, D. (1988). Moments of pension contributions and fund levels when rates of return are random, Journal of the Institute of Actuaries, 115, 535–544.
Farr, I.A. (2007). A new breed of shared risk schemes to re-energise the provision of employer sponsored occupational pension schemes in the UK, Technical report, Association of Consulting Actuaries.
Gomez-Hernandez, D. (2008). Pension Funding and Smoothing of Contributions, PhD thesis, City University.
Gomez-Hernandez, D. and Stewart, F. (2008). Comparison of costs + fees in countries with private defined contribution pension systems, International Organisation of Pension Supervisors, Working Paper No. 6, 1–37.
Owadally, M.I. (2003). Pension funding and the actuarial assumption concerning investment returns, ASTIN Bulletin, 33(2), 289–312.
Vigna, E. and Haberman, S. (2001). Optimal investment strategy for defined contribution pension schemes, Insurance: Mathematics and Economics, 28, 233–262.
Interview
Interview with Jan Kiviet
Professor Jan Kiviet is currently professor of Econometrics and Director of the research group in Econometrics at the University of Amsterdam, both since 1989. He obtained a PhD in Economics (1987) and an MSc in Econometrics (1974) at the University of Amsterdam. His research interests are dynamic models, finite sample issues, asymptotic expansions, exact inference, the bootstrap, Monte Carlo testing and simulation, panel data analysis, and the history of statistics and econometrics.
Could you tell something about yourself?
A long time ago I was a student in Econometrics myself here in Amsterdam; that was in a completely different era, though many things have not changed much since then. Of course the program in Econometrics has changed a lot; that is why I still find it challenging. We still try to improve it. I have been involved in the program over a long period, first as a student, I started in 1966, and then as a lecturer and now as a professor; for over forty years in total. In economics I studied some macro and monetary economics. Economics was certainly not my favorite topic; I liked the statistical and mathematical part much better. At secondary school I didn't like physics and chemistry very much, so I was in doubt about my study choice. My father found out for me that econometrics, mixing math and stats and economics, existed, so that was what I chose. The first years I was not very much inspired, because the program was very much divided into economics with the economists and mathematics with the mathematicians. After four or five years, I took a job at SEO (Economic Research Foundation) as a student assistant. I had to do research, helping with analyzing surveys. That was the start of doing regressions on the poor computer equipment then available. While working on that job I got enormously motivated. I found the possibilities really challenging and I even became a good student. I finished in 1974. Then there was a vacancy for a lecturer here whose first task was to organize the courses in computer programming. That is how I started my academic career. Later I took on more responsibilities in teaching econometrics, which I still enjoy very much.
What do you think of the master program in Econometrics at the University of Amsterdam in comparison with the programs at other universities (Netherlands, Europe, USA)?
It is of course unique what we have here in the Netherlands. Only in Denmark, and a little bit in Australia, do they have similar types of programs. Many of our foreign colleagues are jealous of our situation, in which we can start teaching students quantitative economics from their very first year on, which I think is a great asset. In comparison with other universities in the Netherlands, there are no major differences, as far as I know, with their programs in econometrics. I see most of the colleagues regularly at conferences and then we occasionally discuss, apart from research matters, teaching programs. We discuss which textbook we are using and what our experiences are. I think we have a very good program, but we still have options to improve it, on which we work every year.
You have been affiliated with the University of Amsterdam for quite some time. What do you think are the major changes that evolved during this period?
Many things didn't change that much; over the long span of forty years there are many similarities. There is always discussion that there should be more attention to skill courses, like writing skills or presentation skills. Over the years, I am sure the program improved very much in that respect. On the other hand, there is still the same situation I faced so many years ago, which is that students feel insecure at the end of their studies. They are wondering what they can really do now; they are asking themselves questions like: "What do I know now? I have attended all these courses, I have studied all these textbooks, but I am not sure I will ever be able to embark on a real research topic myself…" That is also due to the fact that in econometrics we do not train engineers or doctors, but we teach techniques and research methods, and each and every application has its own characteristics. That means that there are general aspects to it, but when you are lecturing you cannot go into too many specifics. So the emphasis in our program is on the techniques, and the applications are quite often only used as illustrations. After that, students have to write their master thesis and they often want to do applied work, where they suddenly understand that econometrics is more an art than a trade. Since it is an art, you cannot teach all of it; you can give some clues and of course we try to supervise the master thesis thoroughly. But in the end they will find out that all the technical and theoretical background they have acquired is of great value. I really do think that most of our students, after graduation, are happy with the study they have finished. Not many will think: if I had known before, I would have chosen economics or pure mathematics, or I would rather have worked in a hospital. I really think many are happy with their choice.
"Since it is an art, you cannot teach all of econometrics"
In general I do not think the type of students that we teach has changed very much. We can be very proud of our students, especially of what is organized by the VSAE. The Econometric Game is one of the examples, but of course there are other activities, such as nice excursions like your study trips abroad. Our students are extremely active and motivated; we can only be very pleased by that. The students are very constructive, also in courses. I am not so sure that every generation was that constructive. If you compare it with the sixties, I think students mature much earlier these days than we did, which I think is partly due to better education and preparation. These days students do not have much time to adapt to their student life. They have to achieve credits, otherwise they have to leave, so they are forced to accommodate quickly. Whether that is entirely positive, I am not so sure.
You just got back from Brazil, where you have given a talk at the Federal University of Pernambuco. What was the subject of this lecture?
The major reason for making the trip was that two weeks prior to my talk, I had given a summer course about Monte Carlo simulations at a different university in Brazil. At the moment it is summer in Brazil, and universities then organize summer courses for master and postgraduate students. As I was already in Brazil for the summer course, I contacted a colleague at the University of Pernambuco, whom I vaguely knew, asking whether he would be interested in my giving a talk on my research. He kindly invited me and gave me the opportunity to hear about their situation at the statistics department. That is not fully comparable to our situation, but closely related, because that group has various econometricians. Over there, all statisticians are taught econometrics, which is not really happening here, since here only occasionally some of the students in statistics attend our courses. It is not part of their core program as in Brazil. I talked about a paper that I am writing together with a PhD student, on a topic that is part of his PhD thesis. It is about the validity of instrumental variables.
Last year you gave the presentation "Strength and weakness of instruments in IV and GMM estimation of dynamic panel data models" in places like Milan, Singapore and Amsterdam. Could you tell us more about this project?
The topic of panel data analysis is hardly addressed in our BSc and MSc programs. This is because our master program is simply too short. One of the things I would like to change is that the master program would last for one year and a half. Similar programs abroad are not taught in one year, but in at least one year and a half or two years. I think that students already have an overload of courses, and very few of the students are able to finish the program in one year; that is not due to a lack of skills or abilities, but simply because it is hard to do in one year. Courses are quite heavy, so we have to be selective in the topics that we teach. We hardly teach panel data analysis, although I think it is one of the nicest and most promising parts of econometrics. In regression analysis one usually has either cross-section or time series data, but in panel data you have both. So for example you have data on a substantial number of families and you have information on their economic activities and their characteristics over five, ten or more consecutive years. Of course that contains much more information than data that are either aggregated over all families and form a pure time series, or a cross-section over just one time period. Especially the dynamics in economic behavior and causality issues can be
examined much better from panel data. For about ten or fifteen years now, my major research topic has been the techniques for analyzing that type of model with that kind of data. It is always interesting to find an audience to tell about your findings and discuss what to examine next, and to attend lectures by other colleagues at conferences, especially if it is in nice places like Milan and Singapore. There are two things I particularly like about the type of job that I have. First, I like very much always being confronted with a younger generation. Having the brightest people of the country, just in their twenties, around you is really stimulating. And of course it helps a little bit to fight your own ageing. But the other aspect is the great liberty in choosing your research topics. Nobody is telling me what to do, I simply choose the topics I like myself and I think that is a privilege. I have great freedom, not when teaching of course, that is very well programmed and it should be. But as far as I do research, and that is 50% of my time, I have great freedom and in addition the option to travel and to meet and teach people abroad. I would like to stimulate students to think about their future and contemplate an academic career. Over the last ten years that has not been very popular and we notice that. Students do work hard, they are ambitious, but they have really been focused too much on the financial world. Hopefully that is over now, because of the crisis. I hope that they start to work in other directions, not only just finance. We have too few PhD students from our own MSc program. We would like to see many more.
You have written a paper called "On the optimal weighting matrix for the GMM system estimator in dynamic panel data models". Could you tell us more about this paper?
You seem to have a preference for Monte Carlo simulations. How did this originate? When I was a student, Monte Carlo simulations triggered me already. During that time, there weren’t any PC’s yet. There were some computers, but of a completely different nature and with much fewer facilities than what you have these days. But the computer was used in econometric research and at an early stage I read papers on what people had been doing with Monte Carlo simulations and those days that was not something you heard about in the lectures of econometrics. It was also not mentioned in the textbooks and it still hardly is. Of course for the courses I teach, I choose the textbooks in which it is mentioned. I think Monte Carlo simulations are important and really motivating and stimulating for students. They are also helpful to understand all the technicalities that the students are confronted with. During my study time I majored in econometrics theory, but I also attended some courses at the mathematics department, in particular numerical analysis. I don’t think these days students have the opportunity to attend courses like that. So what you learned there was to write programs to invert a matrix, even if it was a very big matrix. Or for example examine what the effects are when there is a bit perturbation in some elements of the matrix, hence something like disturbances. You can invert matrices in different ways and some are more vulnerable than others. These courses learned me programming and generating random numbers and I could apply those skills when I wrote my master thesis. In my master thesis I did Monte Carlo simulations, and forty years later I am still doing virtually the same. I am only using a different programming language and study much more complex models and use a much better computer. I still do that in my research and I incorporated it more and more in my teaching and that is of course what students notice. I tell them I do that deliberately, because just from experience I know that Monte Carlo simulations helps them to much better understand what all these abstract notions of expectations, variances, efficiencies or test sizes are. This April the tenth edition of the Econometric Game will be organized by the VSAE. What are your experiences with the Econometric Game? I was involved in the first edition of the Econometric Game where I made the case with Peter Boswijk, although he did most of it. I don’t remember much of it, but we certainly spent the day with the students of five participating teams and I was also in the jury. After the first edition, there was a period I was very much involved in administration and there was also a period that
I was usually abroad in April. Over the last four or five years I have usually taught a course in Spain during the period that the Econometric Game is organized. But this year I knew I was available, and when I was approached this December I agreed to take substantial responsibility. It is not the staff but really the VSAE that started this unique initiative to organize the Econometric Game, and I think it is fantastic because now Amsterdam is known for it all over the econometric world. By the way, I am responsible for the name EG. I prefer the word game rather than case, which came from operations research and business. Of course we work on a different kind of case, and it should not be confused with a business case. Still, this game is certainly not mere play; the word "game" does refer to competition. They are the annual Olympic games of econometrics. The organizers of the Econometric Game had already fixed the theme. They had already decided it should be about child mortality. I think it is excellent that they chose a theme that is relevant for the less developed world. It is not about marketing or finance, but something that really deserves great concern. I find it challenging, though difficult, to make the case, because in my research I usually forget about applications, as my research is about techniques and theory. But in the end the outcomes of your research are only useful when they can be and are used for practical problems. In the end, I am happy that they fixed me up with this topic, also because I work on it with Hans van Ophem, who is strong in both theory and applications. I think that if I had had the liberty to choose the topic myself I would perhaps have chosen Monte Carlo simulations. But this theme is much better and I am very pleased to be involved.
Puzzle

Since there should always be some time for a little recreation, this page brings you two challenging mathematical puzzles every quarter. The first puzzle should be solvable for most of you, but the second one is a bit harder. Solving these puzzles may even win you a book token! But first, the solutions to the puzzles of the last edition.

Was it a cat I saw
There are 252 ways to reach the center C and 252 ways to get back out to a W, so the solution to the puzzle is the square of 252, which is 63,504 ways.
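For those who like to verify such counts by computer, here is a small sketch of our own. It assumes the usual diamond layout of the puzzle, in which the letter of a cell depends only on its Manhattan distance from the central C and moves go to the four orthogonally adjacent cells; under those assumptions it reproduces the 252 inward routes and the 63,504 round trips.

```python
from functools import lru_cache

# Hypothetical reconstruction of the "Was it a cat I saw" count. Assumption:
# the central C is at (0, 0) and the letter of a cell is determined by its
# Manhattan distance d from the center (d = 6 gives the outer ring of W's),
# so spelling the phrase inward means reducing d by one at every step.
@lru_cache(maxsize=None)
def ways_to_center(x, y):
    d = abs(x) + abs(y)
    if d == 0:
        return 1  # reached the central C
    return sum(
        ways_to_center(x + dx, y + dy)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
        if abs(x + dx) + abs(y + dy) == d - 1  # move one letter further inward
    )

inward = sum(
    ways_to_center(x, y)
    for x in range(-6, 7)
    for y in range(-6, 7)
    if abs(x) + abs(y) == 6  # every starting W on the border
)
print(inward, inward ** 2)   # 252 and 63504 under these assumptions
```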
The boxer's puzzle
You should score seven boxes by beginning with a line from G to H. In a best response, your opponent can play two different moves: J to K or D to H. If he plays J to K, you score two boxes by marking K to O and P to L. Your next move is to mark L to H, thereby giving away two boxes to your opponent, who will mark G to K. Whatever your opponent does next, you will win the remaining five boxes. If, after your opening G to H, the second player marks D to H, you should mark, in this order, C to G, B to F, E to F and then M to N. You thereby give away two boxes to your opponent, but this will eventually win you four more boxes.

This edition's new puzzles:

A dice game
A student in econometrics challenges you to the following dice game. The game is played on a board with six squares marked 1, 2, 3, 4, 5, 6. You are invited to place as much money as you wish on one of the squares. After you place your bet, the student throws three dice. If your number appears on one die only, you get your money back plus the same amount. If two dice show your number, you get your money back plus twice the amount you placed on the square. If your number appears on all three dice, you get your money back plus three times your amount. Of course, if your number does not appear on any of the dice, the student gets all your money. You might reason that the chance of one die showing your number is 1/6 and that, since there are three dice, your chances of winning are 3/6 or 1/2, making the game a fair one. That, of course, is how the student wants you to determine your chances. The question therefore is: is the game really fair?
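In the Monte Carlo spirit of the interview earlier in this issue, readers who want to probe the student's "3 times 1/6 equals 1/2" reasoning empirically could run a quick simulation before writing up an argument. The sketch below is purely illustrative (the stake, seed and number of rounds are arbitrary choices of ours) and is no substitute for the probabilistic argument the puzzle asks you to submit.

```python
import random

# Illustrative Monte Carlo check of the dice game described above: bet one
# unit on a fixed number, throw three dice, and apply the stated payoffs.
def simulate(rounds=1_000_000, seed=63):
    rng = random.Random(seed)
    wins, profit = 0, 0
    for _ in range(rounds):
        hits = sum(rng.randint(1, 6) == 6 for _ in range(3))  # say we bet on 6
        if hits:              # money back plus `hits` times the stake
            wins += 1
            profit += hits
        else:                 # the student keeps the stake
            profit -= 1
    return wins / rounds, profit / rounds

win_rate, avg_profit = simulate()
print(f"fraction of rounds won: {win_rate:.4f}")
print(f"average profit per unit staked: {avg_profit:+.4f}")
```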
Long division
While browsing through a very old book on mathematics, you come across a problem in long division. Unfortunately, the ink has faded at a few positions of this long division. The drawing below shows these positions with a star. Are you still able to solve the long division?
Solutions
Solutions to the two puzzles above can be submitted until June 1st. You can hand them in at the VSAE room (C6.06), mail them to info@vsae.nl, or send them to VSAE, attn. Aenorm puzzle 63, Roetersstraat 11, 1018 WB Amsterdam, the Netherlands. A book token will be awarded for one of the correct submissions. Solutions may be submitted in either English or Dutch.
Facultive
University of Amsterdam

This February a new VSAE board started, with Annelies Langelaar as Chairman, Daan de Bruin as Vice Chairman and External Affairs, Lianne Marks as Secretary and Allon van der Heijden as Treasurer. The coming months will be filled with interesting projects. In April the VSAE organizes the annual Econometric Game, for which more than 25 teams will come to Amsterdam to strive for the highest honor. Among these twenty-six universities are New York University, Boston University and Monash University (Australia), as well as many highly respected European universities such as Oxford and Cambridge. In the last two games, twenty universities tried their hand at solving challenging econometric cases in the fields of global warming and direct marketing. Also in April, twenty-four members of the VSAE will travel to Hong Kong for the International Study Project. For a week, the participants will be challenged to learn more about trading by working on a trading game and an econometric case.

Agenda
7 – 9 April: Econometric Game
9 April: Monthly drink
10 – 22 April: International Study Project to Hong Kong
28 April: Soccer Tournament with Kraket

VU University Amsterdam

The past few months have been a success for study association Kraket. The study trip to New York was a great experience for the 17 participants. They visited universities as well as companies such as Ortec, Aegon, Coca-Cola and ING. With a few other members of our study association we went ice-skating in Amsterdam. In February Kraket organized an indoor soccer tournament for the enthusiastic footballers of our study association. The level of the games was very high, and the final was decided by penalties. The winning team ended up with a bottle of champagne. On the third of March our fellow study association VESTING organized the National Econometrics Day in Groningen. With over 50 participants from Kraket, the event was a great success for our association.

In the upcoming months we have planned several other activities. On 9 April Kraket organizes a Casedag, on which six companies present themselves with interesting cases; this day will be held in the Amsterdam American Hotel. On 21 April Kraket will visit PricewaterhouseCoopers for an in-house day. There are also some relaxing activities to look forward to, such as a beer tasting, a bonbon workshop and a Kraket-VSAE soccer tournament.

Agenda
31 March: Bonbon workshop
9 April: Casedag
17 April: Beer tasting and karaoke
21 April: In-house day at PricewaterhouseCoopers
28 April: Soccer Tournament with the VSAE
Calculate the influence of young people's driving behaviour on the premium of their car insurance.

Youthful overconfidence quite often leads to unnecessary damage to people and property. For an insurance company that raises questions. Is there a difference between men and women? Between one region and another? And what does that mean for the premiums? At Watson Wyatt we look beyond the numbers, because numbers are about people. That is what makes our work so interesting and varied. Watson Wyatt advises companies and organisations worldwide in the field of 'people and capital': insurance, pensions, remuneration structures and investment strategies. We work for leading companies, with whom we build close relationships in order to arrive at the best solutions. Our way of working is open, driven and informal. We are looking for starting and experienced employees, preferably with a degree in Actuarial Science, Econometrics or (applied) Mathematics. For more information, visit werkenbijwatsonwyatt.nl.

Watson Wyatt. Makes you think.
To some people, it’s just a job
To others, it’s a leap into great opportunities Actuarial opportunities at Towers Perrin
http://careers.towersperrin.com