This edition:
In Learning How to Count, We Forgot What Really Counts And:
The (De)stabilizing Effect of Pension Fund Regulation on Financial Markets On the Remarkable Success of the Arellano-Bond Estimator Turnpike Properties of Optimal Control Systems
77
vol. 20 dec ‘12
Colofon

Editorial Board: Milan Schinkelshoek, Ruben Walschot
Editorial Staff: Anjali Chouhan, Linda de Koter, Milan Schinkelshoek, Pranay Shetty, Ruben Walschot, Sina Zolnoor
Design: United Creations © 2009
Lay-out: Linda de Koter, Milan Schinkelshoek, Kevin Weltevreden, Ruben Walschot
Cover design: © istockphoto, edited by United Creations
Circulation: 2100. A free subscription can be obtained at www.aenorm.eu.
Advertisers: DNB, Mercer, NIBC, Towers Watson. Information about advertising can be obtained from Kevin Weltevreden via info@vsae.nl.
The insertion of an article does not mean that the opinion of the board of the VSAE, the board of Kraket or the editorial staff is expressed. Nothing from this magazine may be reproduced without permission of VSAE or Kraket. No rights can be derived from the content of this magazine.
ISSN 1568-2188
Editorial staff addresses: VSAE, Roetersstraat 11, E2.02, 1018 WB Amsterdam, tel. 020-5254134; Kraket, De Boelelaan 1105, 1081 HV Amsterdam, tel. 020-5986015.
Pity the non-econometrician by: dr. ir. Sander van Triest When I was a student, I followed three courses on Operations Research. Quite aptly, they were named Operations Research I, Operations Research II, and Operations Research III. Now, this was long ago, when courses were many and academic years were divided into trimesters. I would follow 5 or 6 courses per trimester, so the credits per course were lower – not more than 3 or 4 'modern' ECs. I vaguely remember something with the Simplex method, dynamic programming, and even integer programming. I also remember that my grade dropped with each successive course, so it was a good thing that there was no Operations Research IV. Now, this is not as bad as it seems. I studied Industrial Engineering ('Technische Bedrijfskunde'), Operations Research III was an elective, and anyway I gradually drifted into the field of accounting, where I now do my teaching and research. In this field, complexity lies not in modelling but in implementation. Accountants do not perform complex valuations of pension liabilities, nor do they calculate the optimal scheduling of a multi-stage supply chain. However, accountants do want to make sure that the results of a valuation or scheduling exercise are used in ways which contribute to reaching the goals of an organization. To this end, we (for I nowadays tend to view myself as an accountant rather than an engineer) try to identify ways in which we can develop performance measures that help in making the right decisions, sometimes in the form of a profit number, sometimes in terms of unit costs, sometimes in a balanced approach including measurements of customer satisfaction or product returns. Of course, what the 'right' decision is for a business, or any organization, depends on your point of view. Which brings me to the point of this preface. In practice, models and numbers are not neutral entities.
They are used in practical situations, where there are many stakeholders who all have their own interests. The tools of the econometric trade help in making decisions, and they can have a very real impact, as we know from the current discussion on pensions. Just as important, they can help in understanding and improving business practice. They are used by researchers in all kinds of fields – accounting being very much one of those – to test theories and so develop new knowledge. To do so, these tools should be used wisely. It is at this point that there is a challenge for both developers and users of these tools. In my field of research, approaches using instrumental variables are quite popular. The mechanics of IV estimation are complex (at least for non-econometricians), but any good statistics package enables researchers to perform them. However, the quality of the instrument used is often assessed purely on statistical grounds: if the regression diagnostics are ok, then the instrument is valid. Rarely, if ever, is any thought given to the underlying theoretical process which would enable an instrument to be correlated with the endogenous regressor of interest, while not being correlated with the error term. Econometricians may not always realize that others are not as good as they are at understanding the limits of econometric models, and that the powerful tools they develop can be dangerous in the hands of lesser mortals. So for an econometrician, Operations Research I, II, and III may be excellent names for courses, given their resemblance to xi notation, but spare a thought for outsiders who are not necessarily as good at abstract thinking but do want to use econometric tools anyway – and in the right way.
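The worry about purely statistical instrument checks can be made concrete with a small simulation. In this hypothetical data-generating process (our own illustration, not from the preface), the instrument looks strong by the usual first-stage diagnostics, yet it is invalid because it enters the error term, so the IV estimate is biased:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical DGP: the instrument z is strong (correlated with the
# regressor x) but INVALID, because it also enters the error u directly.
z = rng.normal(size=n)
u = 0.5 * z + rng.normal(size=n)       # exclusion restriction violated
x = z + u + rng.normal(size=n)         # x is endogenous (depends on u)
y = 2.0 * x + u                        # true coefficient is 2.0

# Simple IV estimator: beta_IV = cov(z, y) / cov(z, x)
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

# The first-stage relationship looks excellent...
first_stage_corr = np.corrcoef(z, x)[0, 1]
print(f"first-stage corr(z, x): {first_stage_corr:.2f}")  # strong
print(f"IV estimate: {beta_iv:.2f}")                      # biased, not 2.0
```

No regression diagnostic computed from the data alone can detect this; only a theoretical argument about why z should be excluded from the error term can.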
AENORM
vol. 20 (77)
December 2012
3
The (de)stabilizing effect of pension fund regulation on financial markets
06
by: Raymon Badloe Financial market disturbances over the past years have raised concern about the stabilizing effect of pension fund investment behavior. It was stressed by the United Nations (UN) that pension funds take on more risks and that their financial market operations could be a source of financial instability. Furthermore, they conjectured that regulatory measures could assist in mitigating destabilizing effects. We will address this conjecture by developing an asset pricing model with boundedly rational and heterogeneous agents. Subsequently, so-called “regulation agents” are introduced to model the effects of “stylized” FTK regulations.
Data illiteracy threatens Dutch organizations
12
by: Geertje Zeegers Businesses have a great deal of trouble processing the exponentially increasing amounts of customer data. This problem has been the subject of research at MIacademy, an Amsterdam training agency specialized in analytics. Although 60% of top managers indicate that data analysis is crucial to their business, only 45% of valuable data are actually used. The biggest problems arise from the lack of knowledge to analyze the available data. Training of managers and marketers seems the best solution.
On the remarkable success of the Arellano-Bond estimator
15
by: Tom Wansbeek Research often begins with some kind of fascination. This is certainly true in the present case. The fascination is generated by the figure on page 15. It shows the pattern, over the years, of the citations to the paper by Arellano and Bond (1991). This paper was published in one of the top journals in economics, the Review of Economic Studies. Whatever the paper may be about, the pattern is certainly striking, justifying the "remarkable success" in the title.
Aenorm Survey: Win a €50 gift card!
22
The editorial staff of the Aenorm strives to improve the magazine with every edition, and we ask you, the reader, to give your opinion! Please fill in the form starting on page 22 (or at vsae.nl/aenorm) and send it to VSAE, Roetersstraat 11 E2.02-04, 1018 WB Amsterdam, The Netherlands. When you submit a completed form, you have a chance to win one of the two €50 gift cards!
BSc - Recommended for readers at Bachelor level
MSc - Recommended for readers at Master level
PhD - Recommended for readers at PhD level
Three fallacies in the pension debate
25
by: David Hollanders Every discussion has its clichés, frames, stereotypes and downright fallacies. Alas, the pension debate is no exception. Three fallacies that are often propagated, even by prestigious institutes such as the IMF and the OECD, are the following.
In learning how to count, we forgot what really counts.
28
by: Carl Johan Lens While being educated in equations and the laws of mathematics, we don't necessarily get a feel for what is really important: what really counts. As students we come into contact with science, and by mastering science we get a grip on the world around us. At the same time, while being assured of 'what is real', we forget that as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality (a paraphrase of a quote by Einstein).
Application of the vignette methodology to measurement of education-related health inequalities among older Europeans
32
by: Teresa Bago d’Uva Heterogeneity in the reporting of health by education may bias the measurement of health disparities. We use anchoring vignettes to assess the extent of this bias in six health domains for older individuals, in eight European countries. Without correction for reporting differences, there is no evidence of health inequality by education in 32 of 48 (country-domain) cases. There is however a general tendency for the higher educated older Europeans to rate a given health state more negatively than their lower educated counterparts (except in Spain and Sweden). Correcting for this leads to a general increase in measured health inequalities (except for Spain and Sweden) and, consequently, to the emergence of inequalities in 18 cases. Measured health inequalities by education are often underestimated, and even go undetected, if no account is taken of reporting differences.
Turnpike Properties of Optimal Control Systems
36
by: Alexander J. Zaslavski In this paper we discuss recent progress in turnpike theory, which is one of our primary areas of research. Turnpike properties are well known in mathematical economics. The term was first coined by Samuelson, who showed that an efficient expanding economy would for most of the time be in the vicinity of a balanced equilibrium path. These properties were studied by many researchers for optimal paths of models of economic dynamics determined by set-valued mappings with convex graphs.
Puzzle
41
Facultive
42
econometrics
The (de)stabilizing effect of pension fund regulation on financial markets

by: Raymon Badloe

Financial market disturbances over the past years have raised concern about the stabilizing effect of pension fund investment behavior. It was stressed by the United Nations (UN) that pension funds take on more risks and that their financial market operations could be a source of financial instability. Furthermore, they conjectured that regulatory measures could assist in mitigating destabilizing effects. We will address this conjecture by developing an asset pricing model with boundedly rational and heterogeneous agents. Subsequently, so-called "regulation agents" are introduced to model the effects of "stylized" FTK regulations.

Raymon Badloe
Raymon Badloe recently completed his graduate studies in Econometrics and Actuarial Science and Mathematical Finance at the University of Amsterdam. During his studies he contributed to the organization of the Econometric Game and several other VSAE committees. This article is based on his master thesis, which was supervised by Prof. dr. Cars Hommes and Prof. dr. ir. Michel Vellekoop.

Introduction

Recent financial crises have made clear that the dominating paradigm of rational expectations (RE) has come short in explaining underlying movements of these markets. RE was criticized for assuming all agents to have perfect knowledge of the actual market (Sargent, 1993)1. More recently, Paul Krugman stressed that: "When it comes to the all-too-human problem of recessions and depressions, economists need to abandon the neat but wrong solution of assuming that everyone is rational and markets work perfectly."2

Economic agents process information differently based upon their perceptions, which has consequences for their actual decision making. The discrepancy between agent perceptions and actual market realizations was addressed by Tversky and Kahneman (1974), who identified mechanisms towards bounded rationality explaining the existence and persistence of behavioral biases in agent decision making. Evidence on financial markets (Shiller, 1989)3 suggests that agent behavior plays a crucial role in the occurrence of certain stylized facts.

To explain these stylized facts, the concept of RE in finance was abandoned by developing asset pricing models with heterogeneous agents that are boundedly rational. Brock and Hommes (1998)4 introduced heterogeneous agent models (HAMs) to model financial markets that could match these stylized facts.

Long-term investment objectives of pension funds have led to the conviction that they stabilize financial markets. Recent financial crises have shown that pension funds actually may take on more risk, as their focus has slightly moved towards the short and medium run (IMF, 2011)5. Even before financial markets became unstable, the UN was concerned about the stabilizing role of investors such as pension funds. We will examine whether pension fund regulation stabilizes financial markets, as stressed by the UN. The following steps are taken. First, the asset pricing HAM in Gaunersdorfer et al. (2008) will be extended by assuming all agents to have homogeneous time-varying beliefs about conditional variances, as in Gaunersdorfer (2000). Secondly, pension funds will be modeled as heterogeneous agents that
1 Sargent, Thomas. Bounded Rationality in Macroeconomics, Oxford University Press, 1993. 2 Krugman, Paul. “How Did Economists Get It So Wrong?”, The New York Times, September 2009. 3 Shiller, Robert. Market volatility, MIT Press, 1989. 4 Brock, William and Hommes, Cars. “Heterogeneous beliefs and routes to chaos in a simple asset pricing model”. Journal of Economic Dynamics and Control 22 (1998): 1235-74. 5 International Monetary Fund. “Global Financial Stability Report”. World Economic and Financial Surveys 2011.
are boundedly rational and more risk averse than other agents in the model. Thirdly, regulation agents enter the financial market in case pension fund regulations are violated, similarly to Hermsen (2011). The subsequent model analysis will focus on the (de)stabilizing effect of pension fund regulation in the short and long run. This article is organized as follows. In the second section the asset pricing framework is discussed. Section three discusses the stylized FTK regulations. In the fourth section we conclude with the main findings of this article.
Asset pricing model

Consider a financial market consisting of one risky asset, one risk-free asset and three boundedly rational heterogeneous agent types h = 1,…,3. The risky asset pays a stream of dividends yt, has price pt, and the gross risk-free return is R > 1. Demand for risky asset shares of agent h is denoted zht and the excess return per risky asset share is defined as Rt+1 = pt+1 + yt+1 - Rpt. Furthermore, the conditional variance of excess returns is Vht = Vht[Rt+1] for each agent h. Now, assuming that all agents have a CARA utility function, mean-variance optimization yields the following optimal type-h demand for risky shares:

zht = Eht[Rt+1] / (ah Vht)
In which ah denotes the heterogeneous constant absolute risk aversion coefficient. We now assume that there exists a Walrasian auctioneer leading to an equilibrium between supply and demand of risky shares, i.e., with the outside supply of shares set to zero,

n1,t z1,t + n2,t z2,t + n3,t z3,t = 0,

where nh,t denotes the market fraction of agent type h. We can now determine a RE benchmark by imposing mutual consistency of beliefs on all agents. With IID dividends with mean ȳ, the fundamental price of the risky asset then becomes

p* = ȳ / (R - 1).
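A short derivation of the fundamental price, assuming IID dividends with mean ȳ and imposing the no-bubbles condition (a standard argument in Brock-Hommes style models):

```latex
p_t^* \;=\; \sum_{k=1}^{\infty} \frac{E_t[y_{t+k}]}{R^k}
      \;=\; \bar{y} \sum_{k=1}^{\infty} R^{-k}
      \;=\; \frac{\bar{y}}{R-1}, \qquad R > 1,
```

so the fundamental price is constant over time.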
Now we will introduce three boundedly rational heterogeneous agent types that differ in their beliefs about risk aversion, asset variances and asset prices. In order to make one-period-ahead price predictions, agents use simple heuristics6. The first agent type concerns fundamentalists, who believe asset prices will revert back to the fundamental price at rate 1 - v, with 0 ≤ v ≤ 1. This leads to the following asset price belief: E1,t = p* + v(pt-1 - p*). Agents extrapolating positive or negative trends are named chartists, whose beliefs are based on historical realized prices up till period t - 1. The technical trading rule considers the latest price change and realized price as follows: E2,t = pt-1 + g(pt-1 - pt-2), with extrapolation parameter g, denoting trend followers in case g > 0 and contrarians in case g < 0. The final type of agents concerns pension funds, whose most important objective is to follow their strategic policy, which leads them to think that asset prices eventually mean revert towards the fundamental price. However, investment managers try to make additional returns on shorter time horizons by trading on positive or negative trends in assets. The extent to which strategic versus tactical policy occurs is weighted by α and leads to the following belief: E3,t = α(pt-1 + h(pt-1 - pt-2)) + (1 - α)(p* + w(pt-1 - p*)), in which 0 ≤ α ≤ 1, h is the pension funds' extrapolation parameter and 0 ≤ w ≤ 1. In case α approaches 1, tactical asset allocation (TAA) becomes more important than long-term strategic asset allocation (SAA). The risk aversions are ordered as a2 < a1 < a3, with chartists the least risk averse and pension funds the most risk averse. All agents have homogeneous time-varying beliefs about the conditional variance of excess returns7, i.e., Vht = σt² for every type h. The conditional mean and variance of excess returns are modeled with the exponentially weighted moving average learning rules discussed in Gaunersdorfer (2000), in deviations xt = pt - p*:

μt = wμ μt-1 + (1 - wμ)(xt-1 - Rxt-2),
σt² = wσ σt-1² + (1 - wσ)(xt-1 - Rxt-2 - μt-1)²,

in which 0 < wσ < 1 and 0 < wμ < 1 determine the weight attached to past conditional variances and past excess returns respectively. Intuitively, prediction rules that have generated the highest profits will be selected more often. We will use risk-adjusted realized profits for fundamentalists and chartists8. The fundamentalists incur information costs C1 and pension funds are assumed to be a fixed fraction9. This results in the following fitness measure:

Uh,t = (xt - Rxt-1)zh,t-1 - (ah/2)σt-1²z²h,t-1 - Ch + η Uh,t-1,   h = 1, 2,

with C2 = 0.
The parameter 0 < η < 1 determines the memory of agents. Aside from this deterministic measure, switching to another prediction rule could also occur using a probabilistic measure. Frequently this is described by the multi-
6 Faced by uncertainty, agents use simple rules of thumb, as Tversky and Kahneman (1974) have shown. 7 Agents more commonly agree about conditional variances (Gaunersdorfer, 2000; LeBaron, 2012). 8 This is consistent with our mean-variance framework as discussed in Gaunersdorfer et al. (2008). 9 We assume the latter since they belong to the class of institutional investors. This investor type has a relatively large size in the market and remains approximately constant in size over time.
nomial logit model, which describes the probability that an agent selects strategy h and reads as follows:

nh,t = exp(βUh,t-1) / Σk exp(βUk,t-1).

The sensitivity of agents to the difference in performance of the prediction rules is determined by β ≥ 0, the intensity of choice parameter. Agents become more sensitive to differences in performance as β → ∞ and less sensitive as β → 0. Chartists consider positive or negative asset price trends to be finite, leading to a bound on speculative bubbles. This bound is formulated by conditioning the chartist fraction on the distance to the fundamental price,

ñ2,t = n2,t exp(-(pt-1 - p*)²/s),

where s > 0 determines the speed at which chartists believe prices revert back to the fundamental price10.
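As an illustration, the building blocks above (the three belief rules, the EWMA learning rules and the logit switching between fundamentalists and chartists) can be sketched in code. This is a minimal sketch with illustrative parameter values, not the article's calibration, and the function names are our own:

```python
import numpy as np

R, P_STAR = 1.01, 100.0   # gross risk-free rate and fundamental price

def beliefs(p1, p2, v=0.9, g=1.9, h=1.2, w=0.9, alpha=0.5):
    """One-period-ahead price beliefs of the three agent types,
    given the last two realized prices p1 = p_{t-1}, p2 = p_{t-2}."""
    e_fund = P_STAR + v * (p1 - P_STAR)                      # type 1
    e_chart = p1 + g * (p1 - p2)                             # type 2
    e_pens = (alpha * (p1 + h * (p1 - p2))                   # type 3: TAA...
              + (1 - alpha) * (P_STAR + w * (p1 - P_STAR)))  # ...plus SAA
    return e_fund, e_chart, e_pens

def ewma(mu, sigma2, x1, x2, w_mu=0.95, w_sigma=0.95):
    """EWMA learning rules for the mean and variance of excess
    returns, in deviations x_t = p_t - p*."""
    ret = x1 - R * x2
    return (w_mu * mu + (1 - w_mu) * ret,
            w_sigma * sigma2 + (1 - w_sigma) * (ret - mu) ** 2)

def fractions(u1, u2, beta, n3=0.1):
    """Multinomial-logit fractions of fundamentalists and chartists;
    pension funds are held at a fixed fraction n3."""
    w = np.exp(beta * (np.array([u1, u2]) - max(u1, u2)))  # stable exp
    return (1 - n3) * w / w.sum()

# At the fundamental steady state all three types agree on the price:
print(beliefs(100.0, 100.0))          # -> (100.0, 100.0, 100.0)
# A large intensity of choice concentrates mass on the fitter rule:
print(fractions(1.0, 1.5, beta=10.0))
```

Subtracting the maximum fitness before exponentiating leaves the logit probabilities unchanged but avoids numerical overflow for large β.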
FTK regulation

Pension funds on the financial market can be optimistic (pessimistic) with respect to the price of the asset, as a result of which their demand for risky shares increases (decreases). Once asset returns become very negative, pension funds need to avoid large losses and reduce their long positions. On the other hand, once returns become very positive and pension funds have short positions, these need to be altered as well. In particular, the following components of the FTK are assessed by DNB and are affected by the pension fund positions in risky shares:

• whether the pension fund is capable of recovering from a reserve shortage within a reasonable period of time;
• the extent to which indexation ambitions can be sufficiently fulfilled;
• whether the probability of underfunding for the pension fund is below 2.5 percent in the short and long run.

To integrate stylized regulation effects in our model, an approach similar to that of Hermsen (2011) is followed by introducing "regulation agents" as a fraction nrn3 of pension funds. These regulation agents are pension funds that still have pension fund beliefs, but have to ensure that the FTK requirements are met. At time t pension funds consider their long position in risky shares held in the previous period, 1) E3,t-1[xt] - Rxt-1 > 0. In addition, pension funds examine whether losses have been very negative at time t - 1, namely below a threshold Tr < 0, i.e., 2) z3,t-1μt-1 < Tr. For the short position, pension funds consider whether 3) E3,t-1[xt] - Rxt-1 < 0 and 4) z3,t-1μt-1 > Tr hold. If conditions 1 and 2, or 3 and 4, are fulfilled, pension funds reduce their long or short positions at time t. Figure 1 shows that regulation agents enter the financial market as a fraction nrn3 of pension funds. They then sell b < 0 (buy b > 0) risky shares zr,t to reduce long (short) positions.
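The trigger conditions can be expressed directly in code. The values of R and the threshold below are illustrative, and the condition signs follow the article's stylized formulation of conditions 1 through 4:

```python
def regulation_active(e3_prev, x_prev, z3_prev, mu_prev,
                      R=1.01, t_r=-0.5):
    """Return True when regulation agents must enter the market.

    e3_prev = E_{3,t-1}[x_t], x_prev = x_{t-1}, z3_prev = the pension
    fund position, mu_prev = mean excess return; t_r < 0 is the loss
    threshold. Signs follow conditions (1)-(4) in the text.
    """
    expected_excess = e3_prev - R * x_prev
    loss = z3_prev * mu_prev
    long_breach = expected_excess > 0 and loss < t_r   # conditions 1 and 2
    short_breach = expected_excess < 0 and loss > t_r  # conditions 3 and 4
    return long_breach or short_breach

# A long position with a loss below the threshold triggers regulation:
print(regulation_active(1.0, 0.0, 2.0, -0.5))   # True
# A long position with only a small loss does not:
print(regulation_active(1.0, 0.0, 0.1, -0.5))   # False
```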
We will now examine the effect of the stylized FTK regulations by comparing price fluctuations, conditional variances, bifurcation diagrams and Lyapunov exponents with and without regulation agents, using model simulations11.
Figure 1: Regulation agent fractions and their demand for risky shares.
10 This could be interpreted as a state-dependent confidence such that too large deviations from the fundamental steady state cannot continue to exist. 11 We have first established benchmark parameters by assuming that pension funds do not take too risky positions and model outcomes are realistic.
Figure 2: The left panel shows price fluctuations without regulations and the right panel with regulations.
Figures 2a and 2b show that price fluctuations have become slightly larger in the presence of regulations. More importantly, crashes in asset prices have become more pronounced and larger. These findings are further substantiated by the conditional variances in figures 3a and 3b, which show a large peak during the crash and are larger with regulations.
The long-run dynamics in figures 4a and 4b show that price fluctuations become larger already for smaller degrees of speculative behavior12. The primary Hopf bifurcation occurs slightly earlier, underlining that the dynamical system becomes unstable earlier. Finally, the onset of chaos, as indicated by the Lyapunov exponents, begins earlier, as the surfaces in figures 5a and 5b indicate.
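The Lyapunov exponent surfaces in figures 5a and 5b summarize whether nearby orbits of the model diverge. As a generic illustration of how such exponents are estimated numerically, here is the standard recipe for a one-dimensional map (the logistic map, not the article's model):

```python
import numpy as np

def largest_lyapunov(f, df, x0, n=20_000, burn=200):
    """Estimate the largest Lyapunov exponent of a one-dimensional map
    x -> f(x) by averaging log|f'(x_t)| along an orbit; a positive
    value indicates sensitive dependence on initial conditions."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = f(x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(df(x)))
        x = f(x)
    return total / n

# The logistic map at r = 4 is chaotic; its exponent is ln 2 ≈ 0.693.
r = 4.0
lam = largest_lyapunov(lambda x: r * x * (1 - x),
                       lambda x: r * (1 - 2 * x),
                       x0=0.3)
print(f"estimated exponent: {lam:.3f}")
```

For a multi-dimensional system like the asset pricing model, the same idea is applied to the Jacobian along the orbit, with repeated re-orthogonalization.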
Figure 3: The left panel shows conditional variances without regulations and the right panel with regulations.
Figure 4: The left panel shows a Hopf bifurcation without regulations and the right panel with regulations.
12 Speculative behavior refers to large positive trend following, i.e., g > 1.5.
Figure 5: The left panel shows a Lyapunov surface without regulations and the right panel with regulations.
Conclusion

In our asset pricing model, pension fund regulations can lead to destabilization of financial markets. The introduction of regulations destabilizes both short-run and long-run dynamics: regions indicating chaotic dynamics increased in size and magnitude. The objective of regulators such as DNB is to stimulate financial stability and conduct prudential supervision. With our stylized FTK regulations, prudential supervision was found to possibly conflict with the stimulation of financial stability. We emphasize that our approach used a stylized interpretation of the more general FTK regulations. Also, no regulations were imposed on chartists and fundamentalists, which could be a very reasonable extension for future research.

References

A. Tversky and D. Kahneman: "Judgment under uncertainty: heuristics and biases", Science, 185 (1974): 1124-1131.
A. Gaunersdorfer: "Endogenous Fluctuations in a Simple Asset Pricing Model with Heterogeneous Agents", Journal of Economic Dynamics and Control, 24 (2000): 799-831.
A. Gaunersdorfer, C. Hommes and F. Wagener: "Bifurcation routes to volatility clustering under evolutionary learning", Journal of Economic Behavior & Organization, 67 (2008): 27-47.
O. Hermsen: "Does Basel III improve financial market stability? A comparison with the Basel II framework", Working Paper, University of Bamberg (2011).
Data illiteracy threatens Dutch organizations

by: Geertje Zeegers

Businesses have a great deal of trouble processing the exponentially increasing amounts of customer data. This problem has been the subject of research at MIacademy, an Amsterdam training agency specialized in analytics. Although 60% of top managers indicate that data analysis is crucial to their business, only 45% of valuable data are actually used. The biggest problems arise from the lack of knowledge to analyze the available data. Training of managers and marketers seems the best solution.
Big Data research in the Netherlands The amount of available data in businesses is exploding. According to IDC, the amount of digital information multiplies by ten every five years. It is no surprise that Big Data is one of the most important management themes of this age, according to McKinsey. The same firm calculated a shortage of 140,000 to 190,000 analytical talents in the United States alone who can use this data to improve the performance of their organizations. What does the Big Data situation look like in the Netherlands? MIacademy found out by asking more than 100 top managers of leading Dutch companies.
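The IDC claim above implies a compound annual growth rate of 10^(1/5) - 1, i.e. roughly 58% per year:

```python
# "Multiplies by ten every five years" means an annual growth factor
# of the fifth root of ten.
annual_factor = 10 ** (1 / 5)
print(f"annual growth rate: {annual_factor - 1:.1%}")
```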
Data analysis important for commercial and top management Data analysis is an important theme for the management of Dutch companies: 86% of managers indicate that data analysis is of more than average importance for their business. Sixty percent indicate that data analysis is very important (see Figure 1).
Geertje Zeegers
Geertje is the Program Manager of the Marketing Intelligence Academy (MIacademy). She has experience as an analyst and as a team leader of analytical projects for major Dutch triple-play telecom operators, insurance companies and online start-ups. During this time she specialized in creating sustainable growth by embedding analytics within an organisation. Now she is taking the building of this capability to a new level by teaching others how to do so in the MIacademy program. Her background in Social Psychology, with a specialisation in the quantitative and consumer fields, gives her analytical experience a unique additional dimension.

Challenges of data availability and analytical competence Even though data analysis is recognized as an important theme, managers indicate that not even half (45%) of available valuable data is used. The biggest challenges in identifying business opportunities from this data arise
from data availability and from sub-optimal abilities to translate the data into relevant business insights. Sixty-nine percent of questioned managers name data availability and 65% name data analysis skills (see Figure 2) among the top three challenges of their organizations. Willingness to invest in analytics is a lesser managerial problem (only 33%), and only 5% of managers say that data are not a strategic priority.
Demand for analytical talent will increase strongly: university supply is not enough The majority of managers expect that the demand for analytical talent will increase significantly: 71% expect a significant increase (see Figure 3). Two significant management problems will occur. The required analytical talent is not available within the organization, and the projected supply coming from universities remains problematic. Only 25% of managers state that they have sufficient analytical talent on board (see Figure 4).

The percentage of graduates with analytical skills remains alarmingly low: only 15% of honors high school graduates choose a profile that qualifies them for an analytical function (see Figure 5), despite all government campaigns to increase this number. Additional schooling for managers and marketers seems absolutely necessary to combat the shortage of analytical skills in organizations.
About MIacademy
MIacademy is an Amsterdam-based training agency, run by MIcompany, that trains talented analysts, as well as marketers and managers, in the analytical field. Participating listed companies in MIacademy include KPN, the Dutch Railways (NS), Nuon/Vattenfall, the Charity Lotteries, and ASR insurances. In connection with its 5-year anniversary, MIacademy conducted quantitative research into the current status of and future expectations for analytics in the Netherlands.
On the remarkable success of the Arellano-Bond estimator by: Tom Wansbeek1 Research often begins with some kind of fascination. This is certainly true in the present case. The fascination is generated by figure 1. It shows the pattern, over the years, of the citations to the paper by Arellano and Bond (1991). This paper was published in one of the top journals in economics, the Review of Economic Studies. Whatever the paper may be about, the pattern is certainly striking, justifying the “remarkable success” in the title. If you don’t find this picture fascinating, don’t read on!
Figure 1: Citations to Arellano and Bond (1991)
Tom Wansbeek (1947)
Tom Wansbeek is Professor of Statistics and Econometrics and former Dean at the University of Groningen. He obtained his MSc in Econometrics from the University of Amsterdam (1972). After his PhD from the University of Leiden (1980) he has worked with the Netherlands Central Bureau of Statistics, the University of Southern California and the University of Amsterdam. He has published in econometrics (panel data models, measurement error and latent variables), linear algebra and marketing.

The picture shows something like exponential growth. The growth seems to taper off a little recently, but an inspection of the citation data over the first half of 2012, as taken from the endlessly captivating Web of Science, shows that the citation frequency has in fact doubled, and an update of the picture at the end of the year would require stretching the vertical axis by a factor of two. By way of benchmark, the number of citations to the paper will soon overtake the number of citations to the classical paper by Hansen (1982) that started the generalized method of moments (GMM) revolution in econometrics. This being said, it may not hurt to briefly summarize
what the paper by Arellano and Bond (1991), AB hereafter for brevity, is in fact about. The issue is to estimate a dynamic model for panel data,

yit = αyi,t-1 + γi + νit,   (1)
where t = 1, ..., T indexes time and i = 1, ..., N indexes individuals, for whom (people) or for which (e.g. firms) a variable y is observed repeatedly over time, so we have so-called "panel data". As usual in panel data models, there is an individual effect γi capturing the unobservable, time-constant traits of i, and the model is completed by an error term νit, commonly assumed uncorrelated over time. The parameter of interest is α, the autocorrelation parameter, for which we want to have a consistent estimator. We consider the case of large N and small T, so the kind of consistency we have in mind is one where T is fixed and N goes to infinity. 1 I would like to thank Manuel Arellano and Jan Kiviet for their very useful comments, without of course implicating any support for my points of view.
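To see why consistency is an issue here, a small simulation (our own illustration, not from the paper) shows that pooled OLS on model (1) is biased, because the lagged dependent variable is correlated with the individual effect:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, alpha = 5_000, 5, 0.5

# Simulate y_it = alpha*y_{i,t-1} + gamma_i + nu_it for large N, small T.
gamma = rng.normal(size=N)
y = np.zeros((N, T + 1))
y[:, 0] = gamma / (1 - alpha) + rng.normal(size=N)  # start near stationarity
for t in range(1, T + 1):
    y[:, t] = alpha * y[:, t - 1] + gamma + rng.normal(size=N)

# Pooled OLS of y_it on y_{i,t-1} ignores gamma_i; since y_{i,t-1} and
# the omitted gamma_i are positively correlated, the slope estimate is
# biased upward, away from the true alpha.
y_lag, y_cur = y[:, :-1].ravel(), y[:, 1:].ravel()
alpha_ols = np.polyfit(y_lag, y_cur, 1)[0]
print(f"true alpha: {alpha}, pooled OLS: {alpha_ols:.2f}")
```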
AENORM
vol. 20 (77)
December 2012
15
econometrics
Figure 2: The AB estimator, as quoted from the original paper.
The problem with consistent estimation is that yit depends on γi. As is clear from the model, this holds for all t, so the variable on the right-hand side of the model, yi,t-1, also depends on γi. Hence there is an endogeneity problem that we need to tackle if we want to avoid the risk of ending up with an inconsistent estimator of α. The usual approach is through instrumental variables (IV), and that is also the approach here, as it appears possible to derive instruments from the model; we are in the fortunate situation that we do not need instruments from some external source. AB is one such approach and, in fact, by far the dominant one in terms of the number of empirical applications.

The idea of AB is to transform the model into first differences. This eliminates the individual effects from the model: their presence caused the endogeneity problem, and once they are gone, they cannot cause problems anymore. For the thus transformed model, the variables yi,t-2 and their predecessors are valid instruments. These instruments can be used in a GMM approach to obtain an asymptotically efficient estimator of α. The fame of AB is based on this simple idea. For the fun of it, I reproduce the relevant paragraph from their paper, see figure 2. This short paragraph does it, and brought the authors a number of citations of which I am a little jealous.

In fact there is not so much new here since, as was of course acknowledged by AB, the idea of using earlier observations on y as instruments in a first-differenced model was already a decade old, due to Anderson and Hsiao (1981, 1982). They use only the most recent preceding y as instrument; what AB added was to use all preceding y's. A more important contribution of AB, not contained in this paragraph but described at the end of the paper, was the presentation of test statistics for autocorrelation in the residuals.
This is a critical issue as the consistency of the AB estimator depends on the νit not being correlated over time. Incidentally, the observation made in the last sentence of the paragraph shown was later on elaborated by Ahn and Schmidt (1995).
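To see the mechanics at work, here is a small simulation sketch in Python (all names and parameter values are my own illustration, not from the paper). It generates data from the dynamic panel model with α = 0.5, shows that pooled OLS on the levels is biased upward because yi,t-1 is correlated with the individual effect, and that instrumenting the first-differenced equation with the lagged level yi,t-2 (the single-instrument Anderson-Hsiao variant of the idea; AB's GMM stacks all available lags) recovers α.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, alpha = 2000, 6, 0.5

# Simulate y_it = alpha * y_{i,t-1} + gamma_i + nu_it
gamma = rng.normal(size=N)
y = np.zeros((N, T + 1))
y[:, 0] = gamma / (1 - alpha) + rng.normal(size=N)   # near-stationary start
for t in range(1, T + 1):
    y[:, t] = alpha * y[:, t - 1] + gamma + rng.normal(size=N)

# Pooled OLS of y_it on y_{i,t-1}: inconsistent, biased upward, because
# y_{i,t-1} is positively correlated with the individual effect gamma_i.
ylag, ycur = y[:, :-1].ravel(), y[:, 1:].ravel()
alpha_ols = (ylag @ ycur) / (ylag @ ylag)

# First differences remove gamma_i; instrument Delta y_{i,t-1}
# with the level y_{i,t-2} (Anderson-Hsiao).
dep, reg, z = [], [], []
for t in range(2, T + 1):                     # differenced model valid for t >= 2
    dep.append(y[:, t] - y[:, t - 1])         # Delta y_it
    reg.append(y[:, t - 1] - y[:, t - 2])     # Delta y_{i,t-1}
    z.append(y[:, t - 2])                     # instrument y_{i,t-2}
dep, reg, z = map(np.concatenate, (dep, reg, z))
alpha_iv = (z @ dep) / (z @ reg)

print(round(alpha_ols, 2), round(alpha_iv, 2))
```

With these settings the OLS estimate lands far above 0.5 while the IV estimate is close to it; AB's contribution was to exploit all the extra lags as instruments, efficiently, within GMM.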
AB in perspective
Another view of the remarkable success of AB is offered by a comparison with a number of other papers dealing with the dynamic panel data model. Table 1 gives citation numbers, now in pairs, as I have added the numbers from Google Scholar. These numbers are roughly four times those from the Web of Science, which is what you usually find; Google Scholar refers to any source that can be found on the internet, whereas the Web of Science only considers citations from a rather select set of international journals; having a decent refereeing system is a prerequisite for a journal to be included. The success of AB is dramatically brought out by its hitting, somewhere in the beginning of this year, the 10,000 mark on Google Scholar.

                                         Google    WoS
Arellano and Bond (1991)                  10342   2349
Blundell and Bond (1998)                   5465   1290
Arellano and Bover (1995)                  4588   1021
Holtz-Eakin, Newey and Rosen (1988)        1380    359
Anderson and Hsiao (1982)                  1384    320
Anderson and Hsiao (1981)                  1257    294
Balestra and Nerlove (1966)                 852    319
Ahn and Schmidt (1995)                      637    153
Hsiao, Pesaran and Tahmiscioglu (2002)      153     30
Wansbeek and Bekker (1996)                   32     10

Table 1: Number of references to selected papers.
Apart from AB and the other papers already mentioned, the list contains a selection of other papers on the dynamic panel data model. The papers by Arellano and Bover (1995) and Blundell and Bond (1998) were the basis of "System GMM", which adds to AB instruments that are in a sense the mirror image of AB's: instruments in difference form, used to estimate the model in its original form in levels. Holtz-Eakin et al. (1988) elaborated vector autoregression in panel data models, and Balestra and Nerlove (1966) were the first to present an estimator of the dynamic panel data model by maximum likelihood. A fairly recent and exhaustive treatment of maximum likelihood estimation was given by Hsiao et al. (2002).

I could not resist the temptation to include a paper that I wrote with my Groningen colleague Paul Bekker in the 1990s. It is the last (and least-cited) entry in the table. As is apparent from the number of citations, it has more or less been neglected, although ten citations (no self-citations) is not even that bad for a paper published in Economics Letters, where it ended up after being rejected by I forget which journal. The reason to mention it here is that an extended version of it, taking additional regressors into account, was developed by Harris and Mátyás (2000), who called it WB+ and wrote that, in their words including the italics, it "generally outperformed all other estimators when T was moderate in all of the situations that an applied researcher might encounter." The other estimators included AB. So we had the best method on offer at the time, but nobody noticed. Too bad, but such is academic life, sometimes. We apparently sold our method poorly.

Apart from the sheer number of citations, the pattern of citations over time is also a striking feature of figure 1.
The fame of AB developed slowly until the turn of the millennium, when the number of citations started to grow, going over into something resembling an explosion in the most recent years. The sky is the limit. At first sight this may not seem remarkable, as this pattern mirrors the development of panel data econometrics, which has become a healthily growing subfield of econometrics, theory and empirics alike. But what holds at the aggregate need not hold at the level of a single paper. If a paper is important and makes a contribution, it inspires other researchers to build on it and to come up with new and better results, which in due course outshine the paper that inspired them; the source fades away as a paradoxical reflection of success. This is what you often see, but, by sharp contrast, such a development did not occur in the case of AB. The sheer fact that it is just a very good paper, fully deserving its place in a top journal and inspirational for many others, leaves open the question of its persistent popularity.

The answer is of course speculative. But speculation is fun. One reason may be that it is readily available in various packages like Stata. The threshold for its use is agreeably low, and AB has become the standard; to give a typical quote, "[in] an influential article, Arellano and Bond (1991) settled what later has become the bible for estimating [the dynamic model]" (Soto, 2003). Another reason may come from the teachers' side. Many people around the world (at least, me) teach a course in microeconometric models. Often, the theory behind GMM is taught somewhere in the beginning of the course. Later on, panel data models are on the agenda. The AB approach to the dynamic model yields instruments from within the model, which is a rewarding moment for the teacher, and the abundance of instruments nicely motivates GMM, which is equally rewarding. Also, AB's popularity may be due to a "signaling" function: citing AB clarifies the positioning of a paper in a way everybody understands, namely that the paper is about the dynamic panel data model.
What kind of dynamics?

But there is a more fundamental issue. Behind the popularity of AB looms another, more general phenomenon: the popularity of the dynamic panel data model in general. Why has this model become so popular? Again speculating, there are various possible answers.

From a theorist's point of view, one reason for the popularity of the dynamic panel data model is that it is just a very interesting model to work on. The model looks deceptively simple but contains all kinds of complications that have invited research up to the present day. Also, the dynamic panel data model is substantively important, as it allows for disentangling "state dependence" (through the lagged y in the model) and "heterogeneity" (as reflected by the individual effect in the model). And, last but not least, see above, the dynamic formulation may just be the appropriate one, as justified from the underlying theory.

Yet the dynamic specification may not always be best, and there may be a certain degree of overenthusiasm for it. Let me explain what I mean, starting from the following observations. In panel data, the dependent variable, y, is always highly persistent. There is still a lot of persistence left after projection on the (equally highly persistent) regressors, x. This is fair enough, as the factors that you do not observe, captured in the error term u, could be (more or less) as persistent as the things you do observe, x. This immediately suggests that any panel data model should allow for an unconstrained time structure of u, as there is usually no justification for the opposite. The model is then a specific form of the so-called "seemingly unrelated regression" or SUR model. Of course, this does not preclude the model from having the lagged y among the regressors.
Yet, as an econometric modeler, you want to keep things simple if possible and you are tempted not to do everything at the same time but to consider either dynamics in y or correlation over time in u. The question then is where to start. Sometimes the choice is simple. In quite a few cases the economic theory
underlying the case at hand implies a dynamic model. Examples are growth models, habit-formation models, and models based on dynamic optimization. Also, apart from articulate economic theorizing, a dynamic model follows, in any context, from partial adjustment in an equilibrium-seeking context. To model this, we assume that the static equilibrium value y*it of an endogenous variable yit, after a realization of the innovation uit has been obtained, is given by

y*it = βxit + γi + uit, (2)

where xit is an exogenous variable and γi is the individual effect of cross-sectional unit i. The path to the static equilibrium is described by a partial adjustment model,

yit = αyi,t-1 + (1-α)y*it. (3)

Substitution gives

yit = αyi,t-1 + (1-α)βxit + (1-α)γi + (1-α)uit, (4)

yielding the dynamic panel data model. Notice that estimation of this model produces an estimate of the short-run coefficient (1-α)β and not of the long-run coefficient (or "structural parameter") β itself. So a trivial bit of extra work is needed to get an estimate of the parameter of interest, and a less trivial bit of extra work gives you its asymptotic standard error.

Otherwise, if there is no compelling a priori reason to choose between the two approaches, dynamics in y or correlation over time in u, the choice might be left to invoking a philosophical principle commonly called Ockham's razor. There are various formulations of this principle, but it comes down to the idea that, other things being equal, simpler explanations are generally better than more complex ones. When applying this principle in the present context, one can argue that starting with correlation over time in u is simpler than starting from dynamics in y, since the econometric theory behind the SUR model is quite a bit simpler than the theory behind the dynamic panel data model. Since econometricians like quantification, there is a statistic that illustrates this point, and that is the amount of research effort put into each. When scanning the literature over the last ten years, it is hard to find developments around the SUR model, whereas a fifteen-minute search of the literature on the dynamic panel data model showed that at least 80 researchers have written about it in these ten years. The list is not just very long but is also impressive as to its quality, since it includes a number of the world's leading econometricians. Such a comparison by head count of research effort may seem a little weird, and I am happy to trade it in for a better way to show which approach is the simplest one. But as yet I have no better alternative.

The whole issue points to a topic that, surprisingly, has not received any research effort as far as I am aware. It is the following. As was said above, AB presents tests for correlation over time of the residuals. If no such correlation is found, starting with a dynamic model is apparently justified. But one can also start with a model with u freely correlated over time, and next test whether the lagged dependent variable should be added to the model. This is hardly ever done. The critical issue here is that both approaches may lead to different conclusions. An estimator for the structural parameter that is consistent under one specification will be inconsistent under the other, and the difference can be huge. Some kind of tie-breaking mechanism should be introduced. My preference for simplicity would lead me to choose the static model with correlated u. Sorting these very important issues out deserves a place high on the research agenda. An excellent starting point would be to extend the ideas of general-to-specific model selection (e.g., Hendry, 2000) to the panel data context.
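The "trivial bit of extra work" for the long-run coefficient, and the less trivial delta-method standard error, can be sketched as follows. All numbers below are made-up illustrations of estimates one might obtain from a dynamic panel regression, not results from any paper.

```python
import numpy as np

# Hypothetical estimates: a is alpha-hat, c estimates the short-run
# coefficient (1 - alpha) * beta, with an assumed covariance matrix V.
a, c = 0.4, 1.2
V = np.array([[0.0025, -0.0010],
              [-0.0010, 0.0100]])

# Long-run ("structural") coefficient: beta = c / (1 - a)
beta = c / (1 - a)

# Delta method: gradient of c / (1 - a) with respect to (a, c)
g = np.array([c / (1 - a) ** 2, 1 / (1 - a)])
se_beta = np.sqrt(g @ V @ g)

print(round(beta, 4), round(se_beta, 4))
```

Here beta comes out as 2.0 with a delta-method standard error of about 0.21; note that the standard error of the long-run coefficient is not simply the standard error of c rescaled, because the uncertainty in a enters as well.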
A brief look at the empirical reality

In order to get an idea of the reasons why empirical researchers use the dynamic panel data model, I consulted the Web of Science again, this time to check the papers that actually cite AB. Given the huge number of citations, I had to make a selection. At the time I made this selection, in October 2011, AB had 1925 citations, which happens to be 25 × 77. Since you can set the Web of Science to 25 entries per page, it was logistically easy to select the bottom entry on each page, leading to a manageable 77 papers, equally spread over the citing papers in order of time. Next I went after these papers through my university's electronic library. It so happened that eight of these papers could not be found there. Another seven papers had no empirical content, and yet another eleven were empirical but the model after all was not dynamic. This left me with 51 papers for inspection, dealing with a bewildering and fascinating range of topics; the dynamic model is all over the place.

The first thing to look at was the motivation behind the specification of a dynamic model, and in particular the strength of this motivation. In principle this is a well-defined job, but it clearly has a strong subjective element.

No (clear) argument         17
Partial adjustment          11
"Weakly" theory-based       16
"Strongly" theory-based      7

Table 2: Motivation for a dynamic specification.
Moreover, I did the job rather quickly, in a first round that I intend to replicate later, so there will no doubt be inaccuracies; scanning so many papers about so many different topics is very interesting, but it is also sometimes hard to stay concentrated and not make a run for the coffee machine when yet another dynamic model pops up. With these caveats, a first set of results of this look at a sample from the AB-citing literature is summarized in table 2. It concerns the motivation for using a dynamic specification.

It appears that a third of the papers did not motivate the dynamic element in a clearly visible way. Of the remaining two-thirds, partial adjustment was clearly the favorite, being mentioned, in some form or another, in eleven papers. Partial adjustment is a sound argument for a dynamic model, yet something strange appeared. As is apparent from the discussion of the partial adjustment model given above, a paper that takes partial adjustment seriously should report estimates of the long-run coefficients, β, not the short-run ones, (1-α)β, which come out of the computer first; the long-run coefficients are the structural parameters and should be the objects of interest. Yet only two of the eleven papers invoking partial adjustment did this, and then in a rather minimal way.

This left me with 23 out of the 51 papers having some kind of substantive story behind the dynamics. In most of these cases, though, the motivation is not very deep or very strong, but at least it gets some attention. Only seven out of the 51 papers take the dynamics more or less seriously. Two of these papers deal with growth models. I was left with the impression that the dynamic element in the model is not a matter of great concern to applied researchers. Many of the papers showed great competence and loving attention in discussing the subject matter at hand, with the dynamics playing a minor role.
The interest in the precise value of the coefficient of the lagged dependent variable appeared meager. Yet this is a topic to which a huge research effort is still being dedicated. This was a somewhat melancholy finding.

A next issue of interest from inspecting the literature concerned the methods actually used by the researchers. It appears that 29 out of the 51 papers use AB. This is not surprising, of course. Somewhat more surprisingly, System GMM, considered superior in a stationary model, is used in only twelve papers. The result is no doubt biased, since the selection of papers is based on citing AB. Ten papers used different methods: two used plain, good old OLS; another two applied methods that adjust AB for small-sample bias; in yet another two cases I could not easily find out what had actually been done; one paper used the Anderson-Hsiao instrumental variable; and three papers used lagged values of x as instruments. There is a lot in favor of the latter, but this again is an issue in panel data analysis that gets far too little attention, unfortunately; there are situations where using lagged values of x as instruments leads to superior results.
Another issue is residual autocorrelation. This issue is crucial, as the presence of autocorrelation renders the AB estimator inconsistent. Out of the 51 papers, 20 did not report test results on this important matter. The other 31 did, and 24 of them obtained test results that did not require rejecting the null hypothesis of no autocorrelation. The other seven had to reject the null hypothesis, although that did not induce further action. This suggests that correlation in the error term is not a matter of great concern. To repeat the point made above, however, we could have reached a different conclusion about the model had we started from the position that there is such correlation, and then tested for the presence of the lagged dependent variable in the model. This would lead to a different conclusion about the structural parameters in the model.
Summing up

One of the great advantages of having panel data available is that they can provide insight into the dynamics underlying the relations that you want to analyze. This message is not lost on the research community, given the remarkable success of the Arellano-Bond estimator. However, the source of the dynamics is sought rather one-sidedly in the dynamics of the dependent variable, at the cost of attention to dynamics in the error term. Testing between the two (or rather, between a much larger class of models encompassing them) is a neglected topic in econometric theory, and the more so in econometric practice, although the effects of misspecification on the estimation of the structural parameters can be huge. Also, the potential of using exogenous variables as instruments is a matter deserving more attention than it has received up till now. One reason for this relative neglect may be that many papers start by investigating the dynamic panel data model without further regressors, adding them later on as an afterthought, that is, as a (minor) problem that has to be tackled too, instead of as part of the solution.

So this short paper ends, like almost all papers in econometrics, with the conclusion that further research is needed. Together with my colleagues Laura Spierdijk, also from Groningen, and Christoph Hanck, from the Universität Duisburg-Essen, I am trying to elaborate some of the issues raised. I can't guess how far we will ever come, but dreaming of thousands of citations certainly motivates!
References

Ahn, S.C. and P. Schmidt (1995), "Efficient estimation of models for dynamic panel data", Journal of Econometrics, 68, 5–27.

Anderson, T.W. and C. Hsiao (1981), "Estimation of dynamic models with error components", Journal of the American Statistical Association, 76, 598–606.

Anderson, T.W. and C. Hsiao (1982), "Formulation and estimation of dynamic models using panel data", Journal of Econometrics, 18, 47–82.

Arellano, M. and S. Bond (1991), "Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations", Review of Economic Studies, 58, 277–297.

Arellano, M. and O. Bover (1995), "Another look at the instrumental variable estimation of error-components models", Journal of Econometrics, 68, 29–51.

Balestra, P. and M. Nerlove (1966), "Pooling cross-section and time series data in the estimation of a dynamic model: the demand for natural gas", Econometrica, 34, 585–612.

Blundell, R. and S. Bond (1998), "Initial conditions and moment restrictions in dynamic panel data models", Journal of Econometrics, 87, 115–143.

Hansen, L.P. (1982), "Large sample properties of generalized method of moments estimators", Econometrica, 50, 1029–1054.

Harris, M.N. and L. Mátyás (2000), "Performance of the operational Wansbeek-Bekker estimator for dynamic panel data models", Applied Economics Letters, 7, 149–153.

Hendry, D.F. (2000), "Epilogue: the success of general-to-specific model selection", in D.F. Hendry, editor, Econometrics: alchemy or science? Essays in econometric methodology, Oxford University Press, 467–490.

Holtz-Eakin, D., W. Newey and H.S. Rosen (1988), "Estimating vector autoregressions with panel data", Econometrica, 56, 1371–1395.

Hsiao, C., M.H. Pesaran and A.K. Tahmiscioglu (2002), "Maximum likelihood estimation of fixed effects dynamic panel data models covering short time periods", Journal of Econometrics, 109, 107–150.

Soto, M. (2003), "Taxing capital flows: an empirical comparative analysis", Journal of Development Economics, 72, 203–221.

Wansbeek, T.J. and P.A. Bekker (1996), "On IV, GMM and ML in a dynamic panel data model", Economics Letters, 51, 145–152.
Aenorm Survey

Win a €50,- gift cheque! This survey can be submitted in both English and Dutch. The winner will be chosen from the submitted forms, judged on quality. Please submit a form at vsae.nl/aenorm, or send a completed form to VSAE, Roetersstraat 11 E2.02-04, 1018 WB Amsterdam, The Netherlands, and win a €50,- gift cheque!
Step 1 When you read the Aenorm, do you read it online or hardcopy? * Online Hardcopy
Do you read the Aenorm often? * Yes (Go to Step 2) No (Go to Step 3)
Step 2 How many articles do you read? * 1-3 3-5 >5
What is your opinion about the number of articles published? * It is good right now (10 articles) Too many Too few
What is your opinion about the difficulty of articles published? * It is good right now Too easy Too difficult
Do you have suggestions how we can improve the Aenorm? * _______________________________________________________________________________________ _______________________________________________________________________________________ _______________________________________________________________________________________
* This question is required to complete the survey
22
AENORM
vol. 19 (75)
May 2012
Step 3 Why don't you read the Aenorm? *
Too difficult Boring subjects I don’t like to read English Other: ______________________________________________________________________________
Do you have suggestions how we could make you read the Aenorm? * _______________________________________________________________________________________ _______________________________________________________________________________________ _______________________________________________________________________________________
Step 4 What is your opinion about the look of the Aenorm? * Great the way it is A full-colour look would be better Too official Too informal Other: ______________________________________________________________________________
Would you be more interested in the Aenorm if more VSAE-related activities were outlined? * Yes No
Would you be more interested if there were more company profiles in the Aenorm? * Yes No
If some of the articles were in Dutch, would that be a problem? * I would read fewer articles I would read more articles It doesn't matter to me
What is your opinion about the puzzle? * (Multiple answers possible) The questions are too easy The questions are too hard The puzzle is fine right now I would like to have more possibilities to submit a form
* This question is required to complete the survey
What is your opinion about the number of advertisements? Only on the cover would be the best On the cover and 1 per 10 pages On the cover and 1 per 7 pages On the cover and 1 per 4 pages None at all
Are you a student, professor or alumnus? * Student Professor Alumnus Other: ______________________________________________________________________________
If you had to pay to receive the Aenorm, would you do so? * Yes (Go to Step 5) No (Go to Step 6) Depends on the price (Go to Step 5)
Step 5 How much would you pay on a one-year basis for the Aenorm? *

Up to € 5,- From € 5,- to € 15,- More than € 15,-
Step 6 Are you interested in publishing in the Aenorm? * Yes (Go to Step 7) No (Go to Step 8)
Step 7 What are your contact details in order to keep in touch for publishing? * _______________________________________________________________________________________ _______________________________________________________________________________________ _______________________________________________________________________________________
Step 8 What is your e-mail address, so that you have a chance to win the prize? _______________________________________________________________________________________
* This question is required to complete the survey
actuarial science
Three fallacies in the pension debate by: David Hollanders Every discussion has its clichés, frames, stereotypes and downright fallacies. Alas, the pension debate is no exception. Three fallacies that are often propagated, even by prestigious institutions such as the IMF and the OECD, are the following.
1. Increasing the retirement age benefits the young

Many people think it is a good idea to increase the retirement age and that this particularly benefits younger people, given that the population ages. While it may be a good idea to increase the retirement age, the assertion that this is somehow good for young people, with or without aging, is a fallacy. To see this, consider the increase of the retirement age from 65 to 67 years, as planned by the Dutch cabinet Rutte-II. This would, if the statement held, be good news for young people. The reasoning goes something like this. The state pension ("AOW") is financed by a pay-as-you-go (PAYG) system. This means that current benefits are paid for by contributions of current working generations. If there are more elderly, a PAYG-financed pension system is an increasing financial burden for young workers, and this burden can be lifted, at least partly, by increasing the retirement age. Of course, an increase of the retirement age hurts older workers, who have to work two years longer (or save more), but the above statement does not relate to them. This reasoning is at best partly true. It is of course true that a young worker will pay lower contributions for the rest of his or her working life. And this is clearly a benefit for this person. However, (s)he also loses something, as young workers forego two years of retirement benefits. This is clearly a loss. So how to weigh the benefit of this
policy against the loss? In an economy where the ratio of people between 65 and 67 years to people between 15 and 65 years remains constant, the benefit (a young worker does not have to pay for two years' worth of retirement benefits) exactly cancels out against the loss (a young worker does not receive two years' worth of retirement benefits). But of course we do not live in an economy where this ratio is stable. On the one hand more and more people reach 65; on the other hand participation rates of older workers and women keep on increasing. The ratio is also affected by immigration and fertility rates, which are so low these days that the only way seems to be up. How that all will work out remains to be seen. Even if the ratio keeps on increasing, which is far from certain, it is difficult to see how that disadvantages young workers. It is true that each new young generation contributes relatively more than every generation before, but, as a generation, it also benefits relatively more. Actually, each new generation receives more than it contributes, in the sense that it contributes for a generation in which fewer people reach 65 than will in its own generation. So it is true that contributions will increase due to aging, but so will the benefits. One may think ever-increasing contributions are a good reason to increase the retirement age, or that it can be justified on other grounds, but in itself it is not good for young people; quite the opposite.
2. It is necessary to invest in risky stocks in order to keep contributions down
David Hollanders
David Hollanders holds master's degrees in Econometrics (UvA) and Economics (Tinbergen Institute). He finished his PhD, on aging and pensions, at Tilburg University this year. Currently he is a postdoc researcher at the business school Tias Nimbas and the Amsterdam Institute for Advanced Labour Studies (AIAS).
This statement is iterated time and again by pension fund board members, and sometimes even by supervisors. The reasoning is that the return on stocks is higher than that on bonds, and that therefore contributions can be reduced (or benefits increased) if pension assets are invested in stocks. To be sure, it is true that the expected return on stocks is higher than that on bonds. But this is not what many board members say, nor what they mean. If the statement were true, then ever increasing the fraction invested in stocks could reduce contributions still further. And if it
were true, it would even be profitable to invest borrowed money in stocks. This can't be true, and it isn't, as victims of the so-called Legiolease affair (where insurance companies invested borrowed money on behalf of participants without telling them that money was borrowed on their behalf) can testify. But let's follow the proposition and see how far it goes.

Suppose a pension fund needs to pay out 102 euro one year from now, as it promised benefits equal to that amount. Suppose the fund can finance the promise with any convex combination of investments in bonds and stocks. Bonds have an interest rate of 2%, which is paid out with certainty. The stock return is Bernoulli distributed: the return is 50% with probability 0.8 and -10% with probability 0.2. The expected return is thus 38%, which is obviously higher than the fixed-income return of 2%. This is a simple and stylized representation, but the only point is that both the expected return and the variance of the stock are higher than those of the bond, resulting in a risk-return trade-off.

Suppose the fund invests solely in bonds. It then would have to invest 100 euro in bonds in order to meet its liabilities. Could the fund somehow decrease the value of the assets (100 euro) it needs by investing in risky stocks? One way to try is to invest 68 euro in stocks, which is far less than 100 euro in bonds. With probability 0.8 this would be exactly enough to cover the liabilities of 102 the next year. That sounds good, but with probability 0.2 the value of the stock would plummet and equal 61.2 the next year, which is clearly far removed from the promised 102. So this is certainly not a sufficient investment policy. Could value nonetheless be added by buying put options along with the stocks, giving the right to sell the stocks at a strike price of 102? That crucially depends on the price of the option.
The fund needs a put option that pays out 40.8 euro in case the stock return is low (and is worthless if the stock return is high). The replication principle states that in an efficient market the price of such an option equals the value of the portfolio needed to replicate the pay-out of the option. Now, a portfolio that has 100 euro invested in bonds and shorts 68 euro of stock replicates the needed pay-out. And the price of such a portfolio is (100 - 68 =) 32 euro. This in turn means that investing 68 euro in stocks and buying put options priced at 32 euro to hedge the risk together costs 100 euro. The conclusion is that investing in stocks does not add any value, as one still needs 100 euro to cover the liabilities (the promised benefit of 102 euro). One can reduce the value of the assets needed, and thereby contributions, if and only if benefits are simultaneously lowered, for example if the fund promises not 102 euro but 61.2 euro, or if the fund shifts the investment risk to participants. But once promises are made, the price of the assets needed to cover the implied liabilities is fixed and does not depend in any way on how the assets are invested. With higher expected return comes higher risk, and in an efficient market the value of that risk exactly offsets the gain.
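The replication argument above can be checked numerically. The sketch below (in Python) sets up the two-state market exactly as described, bonds at 2% and stock at +50% or -10%, and verifies the expected return, the shortfall of the stock-only strategy, and the put price by state-by-state replication; every number comes from the example in the text.

```python
# One-period, two-state market from the example above:
# bonds return 2% for sure; stock returns +50% (prob 0.8) or -10% (prob 0.2).
bond_gross = 1.02
stock_up, stock_down = 1.50, 0.90
p_up = 0.8

# Expected stock return: well above the 2% bond return.
expected_return = p_up * stock_up + (1 - p_up) * stock_down - 1.0

# Investing 68 euro in stock covers the 102 euro liability only in the up state.
liability = 102.0
stock_stake = 68.0
payoff_up, payoff_down = stock_stake * stock_up, stock_stake * stock_down

# Put option paying (102 - stock value) in the down state: 102 - 61.2 = 40.8.
put_payoff_down = liability - payoff_down

# Replicate the put with b euro in bonds and s euro in stock, matching both states:
#   b*1.02 + s*1.50 = 0      (up state: put expires worthless)
#   b*1.02 + s*0.90 = 40.8   (down state)
s = (0.0 - put_payoff_down) / (stock_up - stock_down)  # short position in stock
b = (0.0 - s * stock_up) / bond_gross                  # long position in bonds
put_price = b + s

print(f"expected stock return: {expected_return:.0%}")
print(f"stock-only payoffs: {payoff_up:.1f} (up), {payoff_down:.1f} (down)")
print(f"put price by replication: {put_price:.2f}")
print(f"cost of hedged stock strategy: {stock_stake + put_price:.2f}")
```

The hedged strategy costs exactly the 100 euro of the all-bond strategy, which is the point of the example: the market price of the downside risk eats the entire expected-return advantage.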
3. Pay-as-you-go is more vulnerable to aging than full funding

There are two different ways to finance pensions. The first is the above-mentioned pay-as-you-go (PAYG) system, in which, as said, current benefits are paid for by current contributions. For example, the Dutch state pension "AOW" is paid for by levies on workers. That is, workers pay contributions to the government (which in turn pays out benefits) and hope to receive benefits from the government when old (when the then young pay). Alternatively, pensions can be financed by full funding, as is the case with occupational pensions. With full funding, each pension fund participant saves for himself via pension funds (for example ABP or Zorg en Welzijn) or insurance companies. The contributions are invested on behalf of participants and are paid out as benefits when the participant retires. Now, it is frequently stated that full funding is more robust to aging than PAYG. The reasoning is that the crucial variable determining the financial sustainability of a PAYG system is the ratio between people receiving benefits (retirees) and people paying contributions (the working population). When the population ages, this ratio rises sharply. For example, the dependency ratio (defined as 100 times the number of people who are 65 years or older divided by the number of people between 19 and 65 years of age) was 14 in 1950. So 100 people of working age 'supported' 14 retirees. In 2010 the dependency ratio was 25.1, and it is projected to increase to 49.3 in 2040. If the projections turn out to be accurate, in 2040 two people of working age will support almost one retiree. Of course, the implicit rate of return of a PAYG system is not determined by the dependency ratio alone. It can be increased by raising labour participation, immigration and labour productivity, but with demographic trends like these, the only way for the rate of return of a PAYG system is down. This part of the reasoning is entirely valid.
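The dependency-ratio figures quoted above translate into numbers of working-age people per retiree as follows (a quick arithmetic check; the ratios are those cited in the text, the 2040 value being a projection):

```python
# Dependency ratio: 100 * (people aged 65+) / (people aged 19-64), as defined above.
dependency_ratios = {1950: 14.0, 2010: 25.1, 2040: 49.3}  # 2040 is a projection

for year, ratio in sorted(dependency_ratios.items()):
    workers_per_retiree = 100.0 / ratio
    print(f"{year}: {workers_per_retiree:.1f} people of working age per retiree")
```

This reproduces the text's observation: roughly seven workers per retiree in 1950, four in 2010, and about two by 2040.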
The second part of the claim holds that a fully funded system is not equally affected by aging. The corollary is that it would be a good idea for aging countries to shift from a PAYG system to a fully funded system. While this is not completely a fallacy, the claim is so often overstated and abused that it comes close to being fallacious. The reason is that full funding and PAYG are similar in many relevant respects and are thus both vulnerable to aging. To see this, consider a fully funded pension fund. Active participants pay contributions to the pension fund. What does the fund do with the money? One thing the fund has to do is pay out benefits to retired participants (who paid contributions before). So, active participants pay contributions to the fund (which in turn pays out benefits) and hope to receive benefits from the fund when old (when the then young pay). Sounds familiar? In terms of cash flows, this indeed equals a PAYG system. But what if contributions exceed total benefits, as is the case in the Netherlands? In that case the fund
accumulates assets. In the Netherlands, funds have assets worth 900 billion euro. So isn't that a big difference between funding and PAYG? Well, that depends on what participants would have done in a PAYG system with the money they paid as contributions in a funded system. If they would have saved it, funding and PAYG are similar. But suppose that participants would have consumed the assets that are accumulated in a funded system. Isn't that finally a difference? Well, it is, but it does not matter that much. It all depends on how the assets are put to use. They can either be lent to foreign countries, leading to current account surpluses as in the Netherlands, or be invested in the home country. As not all countries can run current account surpluses (the world cannot export to planet Mars), establishing a current account surplus is not something every country can do. And lending large amounts of money to foreign countries comes with a big default risk, as the Netherlands, which lent to Greece, Spain and Portugal, is now discovering. But what if the assets are invested in the home country? Even if this is done productively (and companies are not buying yachts for the CEO or crap-backed securities like Collateralised Debt Obligations), this only goes so far. The reason is that the capital-labour ratio increases due to aging, and therefore the return on capital seriously decreases. To be concrete: in the face of aging one can build more hospitals and homes for the elderly, but in the end you need doctors, nurses and caretakers. So, in the end you need workers. No matter how a pension system is financed, in any system workers need to produce to enable the retired to consume. This also means that any pension system is virtually equally vulnerable to aging.
Conclusion

To conclude, increasing the retirement age, financing pensions via full funding and investing in risky stocks may all be good ideas. But they are not good ideas because a higher retirement age would be good for young workers, because full funding would be less vulnerable to aging, or because investing in stocks would somehow reduce contributions. Stating otherwise is a fallacy that, alas, many pension fund board members, politicians, journalists and even supervisors and academics propagate.
career
In learning how to count, we forgot what really counts

by: Carl Johan Lens

Summary: while being educated in the equations and laws of mathematics, we do not necessarily get a feel for what is really important: what really counts. As students we get in touch with science, and through mastering science we get a grip on the world surrounding us. At the same time, while being assured of 'what is real', we forget that in so far as the laws of mathematics refer to reality, they are not certain, and in so far as they are certain, they do not refer to reality (a paraphrase of a quote by Einstein). In other words: in capturing reality mathematically we lose sight of a part of reality. It may well be that this particular 'lost part' makes all the difference in how we add quality to life and make things worthwhile. Truth and reality come in many forms, an econometric model being just one.
Personality

This year I had the privilege of leading a workshop at the LED-2012 (Landelijke Econometristen Dag) about the Integrated Professional: the professional worker who is aware that a well-balanced development of a professional career cannot exist if the personality is not developed as well1. Someone is a well-balanced and integrated leader when personal issues do not interfere with professional issues while, at the same time, the personality is fully present at the workplace and acts as an instrument in exercising one's profession: the 'Integrated Professional'. A bit over the top, but a clear example of integrated professionalism, is Nelson Mandela, who did not choose a road of bitterness and revenge after years of (unjust) imprisonment but instead overcame his urge and turned himself into an instrument of change for the good of all2. (Seen in this light: what do we think of the Managing Director who uses his position in the public housing sector to build a millionaire's villa on an island in the Caribbean? Did he deal with his 'personal issues'?) While conducting the workshop at the LED, I found out that during their formal education (Masters in Econometrics) at the university, none of the people present had ever spent one hour on Personal Development, Business Ethics or some course in Philosophy. Everyone was thoroughly trained in hedging funds against given interest rates and uncertainties, but no one was stimulated to think outside the technological boundaries of the profession. This appeared to be true for Dutch and foreign education alike. So everyone had 'learned how to count' but was, at least in a formal sense, understimulated in discerning what really counts. This statement is of course only valid if you, the reader, think that 'there is more to life than numbers'. Few people will deny this. The attraction, however, of an equation or a model that has
Carl Johan Lens
Carl Johan Lens is Managing Partner at Lens & Partners Executive Search & Coaching. After having been International Director for an English Executive Search firm, Carl co-founded and managed an Internet start-up company before establishing Lens & Partners. Carl Johan also conducts individual career development programs and executive coaching programs for employees of companies and institutes, and for private individuals. He also chairs the MachaWorks Foundation (www.machaworks.org), an organization aimed at the development of rural Africa through the introduction of the internet. Comments, questions: cl@lensandpartners.com
1 www.lensandpartners.com/Documents/RAI-feb-12-Leadership.pdf 2 ‘A long walk to freedom’, Nelson Mandela, 2008
been created to help people (governments, pension funds, public transport companies, the IMF, etc.) make decisions with far-reaching influence, is very powerful and seductive, even if the model is not perfect. The beauty of an equation that is not necessarily a fair representation of 'real life' can carry one away. This is dangerous territory, and Albert Einstein warns us: 'We should take care not to make the intellect our god; it has, of course, powerful muscles, but no personality'. Why should we take care? Because the 'personality' makes all the difference. Without it we lose a big part of what makes life worthwhile. The personality in Einstein's quote is 'the head which the body obeys' or 'the horseman riding the horse'. In my eyes the personality is the professional conscience that includes the ethical code and imagination a professional needs in discerning where to put the acquired knowledge to good use. The technological craftsmanship (the body/horse) is very powerful and, left to itself, will break loose. A fine but somewhat weighty illustration of this is the advice given to the Greek government by Goldman Sachs to enable the Greeks to enter the Eurozone3. Clearly technological craftsmanship 'pur sang', but broken loose and not headed by any personality but a (near?) criminal one. This, of course, equals an underdeveloped or hardly existent personality. To end my thought cycle: I am convinced that the professionals who helped the Greek government meet the deadlines for entering the Eurozone were in the majority hard-working, honest craftsmen 'doing their job' who believed that they were doing the right thing. In hindsight this is hard to understand. Clearly the head was loosened from the body, and the satisfaction of working on such a big and complex task, of which every individual worker only saw a very small portion, probably carried them along.
Like the Goldman Sachs consultants, we take pride in our ability to create and we enjoy the power of intellectual labour. The intellectual satisfaction of capturing complex realities (and thus being able to manipulate those same realities) can distract from day-to-day life, where we have to put our conscience to work and listen to what it tells us, as the Greek example shows. This illustrates the need to develop one's own personality, since it is there that conscience and discernment are seated.
Stretching

In his wonderful book "Economics of Good and Evil"4, Tomas Sedlacek stretches our views on what is true and what science can do for us: "Truth did not always have today's 'scientific' form. Today's scientific truth is founded on the notion of exact and objective facts, but poetic truth stands on an emotional consonance with the story or poem. If a poet writes, 'she was like a flower', from a scientific point of view he is lying; every poet is a liar. The human female has almost nothing in common with a plant, and what little it does have in common is not worth mentioning. Despite this, the poet could be right and not the scientist. Ancient philosophy, just as science would later, tries to find constancy, constants, quantities and inalterabilities. Science seeks (creates?) order and neglects everything else as much as it can. In their own experience, everyone knows that life is not like that, but what happens if the same is true for reality in general? In the end poetry could be more sensitive to the truth than the scientific method. Tragic poems, in virtue of their subject matter and their social function, are likely to confront and explore problems about human beings and luck that a scientific text might be able to omit or avoid." This gives room for exploration of other parts than just our intellectual capacity, and raises the question whether emotions and character (part of the personality) have anything to do with our profession.
A new era

Present times witness the beginning and, although it is far from over, the ending of a moral crisis we have not seen since ancient Rome. It all started with the crimes committed by Enron, MCI Worldcom, Ahold and Goldman Sachs. The brutal spending of pension fund money by C-level management shocked the world with its lack of morality in leadership. In the Netherlands we have witnessed similar amoral behaviour in semi-public organisations, especially in semi-government housing societies. In the first decade of this century, PM Balkenende tried to introduce 'norms & values', and it is only now that we slowly start to admit that there might be a hint of truth in his endeavour. My prediction is that in the decade to come, the discussion about values will grow in magnitude, relevance and influence. Values of course have a lot to do with 'personality': it takes courage to say what needs to be said and to do what needs to be done. One needs stamina and conviction to overcome 'personal issues', and thus 'personality'. Why? We have all served the (capitalist) system to the point where it (almost) breaks. It is the sheer weight of the problems we are facing that makes the system in which we live squeak: environmental issues, the social agenda (especially in countries where income inequality is far above the Tinbergen norm; see also the work of Nobel prize winner Stiglitz5), ramshackle sovereign debt and the growing scarcity of natural resources, to name a few. On the 19th of November a special UN meeting will be held in London. In order to address the gap between the economic reality of environmental issues and the average
3 http://www.iol.co.za/business/business-news/goldman-sachs-role-in-greece-a-real-scandal-1.1258930 4 ‘Economics of Good and Evil’ the quest for economic meaning from Gilgamesh to Wall Street, Tomas Sedlacek, Oxford University Press 2011 5 Stiglitz, Joseph (2012) The price of Inequality, http://www.economics.utoronto.ca/gindart/2012-06-05 6 http://www.unepfi.org/work_streams/biodiversity/e_risc
perception of Risk Managers, the United Nations Environment Programme Finance Initiative (UNEP FI) and the Global Footprint Network (GFN), together with a number of institutional investors, investment managers and information providers, have launched E-RISC6, or Environmental Risk in Sovereign Credit analysis. This event is organised for good reason: it will show how environmental criteria can be factored into sovereign-risk models and hence into the credit ratings assigned to sovereign bonds. And this is all too necessary. These environmental criteria have long been neglected, but will become a major component in any new model with real explanatory power. I see this as an early sign of change towards a more 'connected' life, where the question whether the economy is about money or about people is answered with: 'it's the people, stupid!' To be able to 'mend the system' or, hopefully, avoid breakage, we need to be connected, since the problems are too big to solve on a national level. We need to build bridges: between cultures, generations, continents, genders and religions. To be able to do so, one's personality must be built and trained, so as to become a stable, sound person who is able to build bridges. (An example of a bridge builder is Sadat, who in the 1970s signed a peace treaty with Israel; read his autobiography: thrilling7.) So it all starts with 'self', and thus the mentioned personality.
What have we forgotten?

The present crisis, unprecedented in its magnitude some say, asks for strong leadership. The way forward is not very clear, is subject to many disputes, and demands of the politicians in charge a courageous, bold attitude if we want to avoid protracted stagnation like the one that has been battering the Japanese economy8 for over 20 years now. In order to drag us out of this situation, H.J. Witteveen, former IMF Managing Director, pleads for 'more spiritual awareness among leaders'9: "The financial crisis has clearly demonstrated what the implications can be of the lack of spiritual awareness among business leaders. A purely materialistic and selfish attitude can easily lead to a blind pursuit of maximising profits in the short term, sometimes to the detriment of others. It leads to excessive ambition and unbalanced decision-making. Thus a mass psychological intoxication can arise by which everyone is carried away, including the supervisors. A more spiritual attitude provides inspiration, more distance and more consideration towards employees, customers and other stakeholders. Also, general interests such as environmental effects are better taken into account." Witteveen is clearly advocating the development of your personality as a means of getting us out of the woods. A more spiritual attitude implies that the promise of money, ambition and intellectual power does not carry us away. Instead: "Strive not to be a success, but rather to be of value." (Again, Albert Einstein.) Is it possible that this is exactly what we have forgotten: to be of value (to each other)? So, in learning how to count, we forgot to be of service and to work for the good of all. Think about it when you distance yourself a bit from your day-to-day work.
7 Anwar Sadat, 1978, In Search of Identity 8 http://newamerica.net/sites/newamerica.net/files/policydocs/NAF--The_Way_Forward--Alpert_Hockett_Roubini.pdf 9 ‘De Magie van Harmonie’, een visie op de wereldeconomie, Gibbon 2012
econometrics
Application of the vignette methodology to measurement of education-related health inequalities among older Europeans1

by: Teresa Bago d'Uva

Heterogeneity in the reporting of health by education may bias the measurement of health disparities. We use anchoring vignettes to assess the extent of this bias in six health domains for older individuals, in eight European countries. Without correction for reporting differences, there is no evidence of health inequality by education in 32 of 48 (country-domain) cases. There is, however, a general tendency for higher educated older Europeans to rate a given health state more negatively than their lower educated counterparts (except in Spain and Sweden). Correcting for this leads to a general increase in measured health inequalities (except for Spain and Sweden) and, consequently, to the emergence of inequalities in 18 cases. Measured health inequalities by education are thus often underestimated, and may even go undetected, if no account is taken of reporting differences.

1. Introduction

The measurement of socioeconomic inequalities in health has often relied on self-rated health, SRH (eg, Van Doorslaer and Koolman, 2004; Kunst, Bos et al., 2005). This is partly due to its low cost and feasibility in large-scale surveys, but is also justified by extensive evidence demonstrating its predictive ability for mortality (see, eg, the review of Idler and Benyamini, 1997) and for health care use (eg, Van Doorslaer, Koolman and Jones, 2004). However, there have been concerns that, besides containing valuable information on health status, SRH may vary with conceptions of what constitutes good health and with expectations for own health (Thomas and Frankenberg, 2002). If these vary systematically with socioeconomic status, then measures of socioeconomic inequality based on differences in SRH will be biased.

Figure 1 illustrates this problem. It shows the hypothetical mapping from latent true health into categorical responses of an SRH instrument for a representative high and a representative low educated individual. For illustrative purposes, all response thresholds are assumed higher for the more educated person (which could arise if individuals report their health relative to that of peers, or if more highly educated individuals are better informed of treatment options and so are less tolerant of a given health condition)2. In this example, for any given true level of health, the more educated person reports worse health on the categorical scale. For example, if H*L and H*H represent the true latent health levels, then both report their health as "moderate" despite the fact that the more highly educated person has better true health. If we were to rely solely on SRH, we would falsely conclude that there is no socioeconomic inequality in health. With data on SRH only, this is inevitable, as differences in reporting behaviour cannot be disentangled from differences in true health.

Teresa Bago d'Uva
Teresa Bago d'Uva is an Associate Professor at the Erasmus School of Economics. She obtained a BSc (Mathematics Applied to Economics and Management) and an MSc (Actuarial Sciences) at the Technical University of Lisbon, and a PhD (Economics) at the University of York. Her areas of interest are applied microeconometrics and health economics, and recent research has focused on measurement of biases in self-reported health and inequalities in health and health care.

1 This article is a summary of the published paper "Bago d'Uva T, O O'Donnell, E van Doorslaer. Differential health reporting by education level and its impact on the measurement of health inequalities among older Europeans. International Journal of Epidemiology 2008, 37:1375–1383".
2 However, different scenarios are possible (for example, if health problems are reported as a justification for not working, Bound, 1991; or if higher income individuals, perhaps driven by a belief that they should be in good health, use more lenient standards in reporting their own status, Melzer, Lan et al, 2004).

A possible solution to this problem is the vignettes methodology, which identifies reporting behaviour through
the rating of case vignettes describing fixed levels of functioning within a given health domain (King, Murray et al, 2004). Survey respondents are asked to rate both the vignettes and their own health on the same response scale, and so different evaluations of the same hypothetical case represent reporting differences. This makes it possible to identify systematic differences in response thresholds in relation to individual characteristics such as education. Assuming that individuals rate the vignettes in the same way as their own health (response consistency), the thresholds obtained from the vignette responses can be imposed on a model for reported own health. This enables identification of differences in true health by SES, and not merely of a mixture of health and reporting differences. One can then measure health on a comparable scale, by estimating the level that each group would report if they all used the thresholds of a reference group. For example, in terms of Figure 1, the health of the high education individual, H*H, could be re-labeled "good", while that of the low education individual would remain "moderate". This study aims at determining the impact of correcting for reporting differences on the measurement of health inequalities by education, using data on self-reported own health and vignette ratings for older individuals in eight European countries.
2. Data

The Survey of Health, Ageing and Retirement in Europe (SHARE) randomly sampled from the population aged 50 years and over (plus spouses) in 12 countries (Börsch-Supan and Jürges, 2005). The first wave of SHARE data was collected in 2004-05 and released in June 2007 (Release version 2.0)3. Vignettes data are available for eight countries, which we analyse separately: Belgium (N=564), France (N=872), Germany (N=506), Greece (N=718), Italy (N=445), The Netherlands (N=532), Spain (N=463), and Sweden (N=414).

Figure 1: Self-reported health for high (H) and low (L) educated individuals. True latent level H*L is perceived by person L as "moderate" and by person H as "very poor". Level H*H is perceived by person L as "good" and by person H as "moderate".

Respondents classify their own health in six domains, in response to the questions: "Overall in the last 30 days, how much...": "of a problem did you have with moving around?" (mobility); "difficulty did you have with concentrating or remembering things?" (cognition); "bodily aches or pains did you have?" (pain); "difficulty did you have with sleeping such as falling asleep, waking up frequently during the night or waking up too early in the morning?" (sleep); "of a problem did you have because of shortness of breath?" (breathing); "of a problem did you have with feeling sad, low, or depressed?" (emotional health). The response categories are: "None", "Mild", "Moderate", "Severe" and "Extreme". In addition, for each domain, respondents evaluate three vignettes, each describing a fixed level of difficulty in that domain, on the same response scale4. We measure educational attainment in the categories: (i) finished at most primary education or first stage of basic education; (ii) lower secondary or second stage of basic education (reference category); (iii) upper secondary education; and (iv) recognized third level education, which includes higher vocational education and a university degree. We control for age and gender by means of a continuous variable and a dummy variable, respectively.
3. Econometric methods

The standard analysis assuming reporting homogeneity consists of estimating an ordered probit model for self-reported health in each domain. The category reported, H_di = k, k = 1,...,K, in domain d, is assumed to be generated
3 The SHARE data collection has been primarily funded by the European Commission through the 5th framework programme (project QLK6-CT-2001-00360 in the thematic programme Quality of Life). Additional funding came from the US National Institute on Ageing (U01 AG09740-13S2, P01 AG005842, P01 AG08291, P30 AG12815, Y1-AG-4553-01 and OGHA 04-064). The Belgian Science Policy Office funded data collection in Belgium. Further support by the European Commission through the 6th framework program (projects SHARE-I3, RII-CT-2006-062193, and COMPARE, CIT5-CT-2005-028857) is gratefully acknowledged. For methodological details see Börsch-Supan and Jürges (2005). 4 Descriptions of vignettes for all domains can be found here: http://ije.oxfordjounals.org/content/37/6/1375/suppl/DCI
by the position of a latent health index H*_di, specified as:

H*_di = X_i β_d + ε_di,  ε_di ~ N(0,1)   (1)

relative to a set of fixed thresholds τ_d^k such that

H_di = k  if  τ_d^(k-1) < H*_di ≤ τ_d^k,  k = 1, ..., K,   (2)

where X_i includes education, age and sex (and τ_d^0 = -∞, τ_d^K = +∞). The assumption of reporting homogeneity is reflected in the fact that the thresholds are fixed. From the estimates of the ordered probit model for each health domain, we compute the highest to lowest education group rate ratio for reporting no problem or difficulty in that domain, for a reference individual (male aged 64, the sample mean age). This represents our measure of health inequality by domain, with no adjustment for reporting heterogeneity.

We allow for reporting heterogeneity by using an extended ordered probit model, the hierarchical ordered probit (HOPIT) model, in which the reporting thresholds are made functions of individual characteristics, so that the parameters of the latent index represent true health effects and not a mixture of health and reporting effects. The first component of the HOPIT models respondents' ratings of the vignettes. The perceived latent health level of vignette j in domain d, V*_jdi, is specified to depend solely on a dummy indicator identifying the vignette being rated and a random, normally distributed error:

V*_jdi = α_d^j + ν_jdi   (3)

The observed categorical vignette rating, V_jdi, relates to V*_jdi through the reporting thresholds:

V_jdi = k  if  τ_di^(k-1) < V*_jdi ≤ τ_di^k   (4)

which are now defined as functions of the same covariates that enter the latent index of own health in (1):

τ_di^k = X_i γ_d^k   (5)

Note that observable individual characteristics are absent from (3), following from the assumption of vignette equivalence: respondents understand the vignette description as corresponding to the same level of functioning on a uni-dimensional scale. Consequently, the effects of X_i in the thresholds (5) are identified. In other words, all the systematic variation in the vignette ratings is attributed to reporting behaviour5. The second component of the HOPIT models individuals' own health. This is assumed to be determined by the
position of a latent health index in relation to thresholds as in (1)-(2), with the important difference that the thresholds are no longer assumed constant but are constrained to be equal to those in (5), identified from the vignettes component of the model. This follows from the response consistency assumption that respondents rate the vignettes in the same way as they do their own health. If this did not hold, it would not be valid to impose the thresholds identified from the vignette ratings on the reporting of own health, and so the true health effects would not be identified. The HOPIT therefore consists of generalised ordered probit models for the reporting of own health and the health of the vignettes, with the cross-equation restriction that the threshold parameters are equal. It is assumed that the error terms in the vignette and own latent health equations, ν_jdi and ε_di, respectively, are independent for all i, j and d. To obtain vignette-adjusted health inequalities, we first estimate the parameters of the HOPIT model and then predict latent health in each domain for males aged 64 with high/low education. We then predict the vignette-adjusted probabilities that each of these individuals has no problem or difficulty in that domain, using their own predicted latent health and the estimated thresholds of males aged 64 with low education. Since the thresholds are fixed across the two predictions, these probabilities vary with the impact of education on true latent health only. Finally, high to low education rate ratios are simply obtained by taking the ratio of the two probabilities.
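As an illustration of this last step, the sketch below computes naive and vignette-adjusted rate ratios from ordered-probit ingredients. All parameter values are invented for illustration and are not estimates from the paper: the latent index here is ill-health with "no problem" as the bottom category, and the higher educated are given better true health but stricter (lower) first thresholds, mirroring the reporting pattern of Figure 1.

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Invented ingredients for a reference individual (male aged 64).
# Latent index = ill-health; "no problem" is the bottom category, so
# P(no problem) = P(H* <= tau1) = Phi(tau1 - H*).
h_low, h_high = 0.0, -0.3        # high educated have better (lower) latent ill-health
tau1_low, tau1_high = 0.5, 0.2   # ...but stricter (lower) first thresholds

# Naive rate ratio: each group judged against its own thresholds, so true
# health differences and reporting differences are mixed together.
p_low = norm_cdf(tau1_low - h_low)
p_high_naive = norm_cdf(tau1_high - h_high)
naive_ratio = p_high_naive / p_low

# Vignette-adjusted ratio: both groups judged against the low-education
# thresholds (as identified from the vignette ratings), leaving only the
# true health difference.
p_high_adj = norm_cdf(tau1_low - h_high)
adjusted_ratio = p_high_adj / p_low

print(f"naive rate ratio:    {naive_ratio:.3f}")     # inequality masked (ratio near 1)
print(f"adjusted rate ratio: {adjusted_ratio:.3f}")  # inequality revealed (ratio > 1)
```

With these made-up numbers the naive ratio is exactly one, so the inequality would go undetected, while evaluating both groups at the low-education thresholds reveals a ratio above one, which is precisely the pattern reported for most countries in the results below.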
4. Results

Before adjustment for reporting differences, high to low education rate ratios for reporting no problem or difficulty with own health are generally greater than one, but they are not significantly different from one in 32 of the 48 cases (including all domains in Sweden, The Netherlands and Belgium, and all but pain in Germany; Table 1). Vignette adjustment raises 39 of the 48 rate ratios, 18 of which become significantly higher than one. The countries where we observe the largest impact are Belgium, France, Germany and The Netherlands. Spain and Sweden display a different pattern: the more highly educated rate a given health state more positively in three and four domains respectively and, consequently, adjustment reduces the magnitude of the rate ratio (but does not change its significance). The domains most affected by the adjustment for differential reporting scales are sleep and breathing.
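The adjustment step described in the preceding section can be sketched numerically. All parameter values below are invented for illustration (they are not estimates from the paper): we compute the probability of reporting "no problem" for high- and low-education individuals, once with each group's own reporting threshold and once with both groups evaluated at the low-education cut-point, and take the ratio.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob_no_problem(latent_mean: float, first_threshold: float) -> float:
    """P(report best category) = P(latent health > first cut-point)."""
    return 1.0 - norm_cdf(first_threshold - latent_mean)

# Invented illustration: education raises true latent health by 0.3,
# but the highly educated also use a stricter (higher) first cut-point.
beta_edu = 0.3                   # true health effect of high education
tau_low, tau_high = 0.0, 0.25    # education-specific reporting thresholds

# Naive ratio: each group judged against its own reporting scale.
p_high_naive = prob_no_problem(beta_edu, tau_high)
p_low_naive = prob_no_problem(0.0, tau_low)

# Vignette-adjusted ratio: both groups evaluated at the low-education
# threshold, so only the true health effect remains.
p_high_adj = prob_no_problem(beta_edu, tau_low)
p_low_adj = prob_no_problem(0.0, tau_low)

naive_ratio = p_high_naive / p_low_naive
adjusted_ratio = p_high_adj / p_low_adj
print(naive_ratio, adjusted_ratio)
```

Because the highly educated apply a stricter cut-point in this toy setup, the naive ratio understates the adjusted one, which is the direction of the correction reported for six of the eight countries.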
5. Conclusion This study uses ratings of hypothetical case vignettes to investigate the degree to which reporting heterogeneity in health by educational level biases the measurement
5 In principle, it would be possible to include an error term in (5), representing unobservable heterogeneity in reporting styles. We do not do so since, with only three vignette ratings within each domain and relatively small samples, identification is likely to be weak.
34
AENORM
vol. 20 (77)
December 2012
econometrics
of health inequalities among the older population in eight European countries. In six of the eight countries (all but Spain and Sweden), more highly educated individuals are more critical of a given health state. Left uncorrected, this leads to underestimation of health inequalities by education. In particular, for Belgium and The Netherlands there is, before correction, no evidence of inequalities by education in the probability of reporting no health problem or difficulty. Vignette correction, however, increases the ratios for all domains and reveals inequalities favouring the higher educated in four domains in these two countries, and in 10 of the other domain-country cases analysed.
References

A. Börsch-Supan, H. Jürges (eds), "The Survey of Health, Ageing and Retirement in Europe: Methodology", MEA: Mannheim (2005)

J. Bound, "Self-reported versus objective measures of health in retirement models", J Hum Resour, 26 (1991): 107-137

E. Idler, Y. Benyamini, "Self-rated health and mortality: a review of twenty-seven community studies", J Health Soc Behav, 38 (1997): 21-37

G. King, C.J.L. Murray, J. Salomon, A. Tandon, "Enhancing the validity and cross-cultural comparability of measurement in survey research", Am Polit Sci Rev, 98 (2004): 184-191

A.E. Kunst, V. Bos, E. Lahelma et al., "Trends in socioeconomic inequalities in self-assessed health in 10 European countries", Int J Epidemiol, 34 (2005): 295-305

D. Melzer, T.Y. Lan, B.D. Tom, D. Deeg, J.M. Guralnik, "Variation in thresholds for reporting mobility disability between national population subgroups and studies", J Gerontol A Biol Sci Med Sci, 59 (2004): 1295-1303

A. Quesnel-Vallée, "Self-rated health: caught in the crossfire of the quest for 'true' health?", Int J Epidemiol, 36 (2007): 1161-1164

D. Thomas, E. Frankenberg, "The measurement and interpretation of health in social surveys", in C. Murray, J. Salomon, C. Mathers, A. Lopez (eds), Summary Measures of Population Health: Concepts, Ethics, Measurement and Applications, Geneva: World Health Organization (2002)

E. van Doorslaer, X. Koolman, A.M. Jones, "Explaining income-related inequalities in doctor utilisation in Europe", Health Econ, 13 (2004): 629-647

E.K.A. van Doorslaer, X. Koolman, "Explaining the differences in income-related health inequalities across European countries", Health Econ, 13 (2004): 609-628
Turnpike Properties of Optimal Control Systems by: Alexander J. Zaslavski
1. Introduction
Alexander J. Zaslavski
Alexander J. Zaslavski received his doctorate at the Institute of Mathematics of the Siberian branch of the Soviet Academy of Sciences in Novosibirsk. He is a senior researcher at the Department of Mathematics of the Technion - Israel Institute of Technology. His research interests are optimization theory, the calculus of variations, optimal control, dynamical systems theory, nonlinear analysis, game theory and models of economic dynamics. He is the author of 400 research papers and three monographs.

In this paper we discuss recent progress in turnpike theory, which is one of our primary areas of research. Turnpike properties are well known in mathematical economics. The term was first coined by Samuelson (see [1]), who showed that an efficient expanding economy would for most of the time be in the vicinity of a balanced equilibrium path. These properties were studied by many researchers for optimal paths of models of economic dynamics determined by set-valued mappings with convex graphs. In our recent book [5] we present a number of turnpike results in the calculus of variations, optimal control, game theory and economic dynamics obtained by the author. The results collected in [5] demonstrate that the turnpike properties are a general phenomenon which holds for various classes of variational problems and optimal control problems arising in engineering and in models of economic growth.

Turnpike properties are studied for optimal control problems on finite time intervals [T1, T2] with T1 < T2. Here T1, T2 are real numbers in the case of continuous-time problems and integers in the case of discrete-time problems. Solutions of such problems (trajectories or paths) depend on an optimality criterion determined by an objective function (integrand), the time interval [T1, T2], and on data, that is, the initial conditions. In turnpike theory we study the structure of solutions when the objective function (the optimality criterion) is fixed while T1, T2 and the data vary. To have turnpike properties means that the solutions of a problem are determined mainly by the objective function (optimality criterion), and are essentially independent of the choice of time interval and data, except in regions close to the endpoints of the time interval. If a real number t does not belong to these regions, then the value of a solution at the point t is close to a "turnpike": a trajectory (path) which is defined on the infinite time interval and depends only on the objective function (optimality criterion). This phenomenon has the following interpretation. If one wishes to reach a point A from a point B by car in an optimal way, then one should enter a turnpike, spend most of one's time on it and then leave the turnpike to reach the required point.

P.A. Samuelson discovered the turnpike phenomenon in a specific situation in 1948. In further studies turnpike results were obtained under certain rather strong assumptions on the objective function (optimality criterion). Usually it was assumed that the objective function is convex as a function of all its variables and does not depend on the time variable t. In this case it was shown that the "turnpike" is a stationary trajectory (a singleton). Since convexity assumptions usually hold for models of economic growth, turnpike theory has many applications in mathematical economics. There are several turnpike results for nonconvex (nonconcave) problems, but for these problems convexity (concavity) was replaced by other restrictive assumptions which hold only for narrow classes of problems. Therefore experts considered the turnpike phenomenon as an interesting and important property of some very particular optimal control systems originating in mathematical economics, for which the "turnpike" was usually a singleton or a half-ray. This situation has changed in the period which
began in 1995, when the works [2, 3, 4] appeared. In [2] we studied a general class of unconstrained discrete-time optimal control problems and established that a turnpike property holds for a typical (generic) problem and that the turnpike is a set which is not necessarily a singleton. For this class of problems the turnpike can be a singleton, but rather seldom. In [3, 4] we studied the turnpike properties of extremals of one-dimensional second-order variational problems arising in the theory of thermodynamical equilibrium for materials. We showed that for this class of problems the turnpike is a periodic curve which is not necessarily a singleton. In our book [5] we collected turnpike results which were obtained after 1995 and for which we need neither convexity of the objective function nor its time independence. The results of [5] allow us today to think of turnpike properties as a general phenomenon which holds for various classes of optimal control problems. It was my great pleasure to receive in October 2000 the following letter from Paul A. Samuelson, the discoverer of the turnpike phenomenon.
Dear Professor Zaslavski:

I note with interest your long paper "The Turnpike Property ... Functions" in Nonlinear Analysis 42 (2000), 1465-98. It may be of interest to report that this property and name originated just over half a century ago when, as a Guggenheim Fellow on a 1948-49 sabbatical leave from MIT, I conjectured it in a memo written at the RAND Corporation in Santa Monica, California. It is reproduced in The Collected Scientific Papers of Paul A. Samuelson, MIT Press, 1966, 1972, 1977, 1986. R. Dorfman, P.A. Samuelson, R.M. Solow, Linear Programming and Economic Analysis, McGraw-Hill, 1958, gives a pre-Roy Radner exposition. I believe that somewhere Lionel McKenzie has given a nice survey of the relevant mathematical-economics literature.

With admiration,
Paul A. Samuelson

2. Problems with convex integrands

Let | . | be the Euclidean norm in the n-dimensional Euclidean space Rn and let < . , . > be the scalar product in Rn. We consider the variational problem

(P0)

where v is an absolutely continuous function such that v(0) = y, v(T) = z, and T > 0. Here T is a real number, y and z are points of the space Rn and the integrand f is a strictly convex and differentiable function satisfying a growth condition. If a reader is not familiar with absolutely continuous functions, it is possible to assume that the functions v are continuously differentiable, or at least piecewise C1 functions. We intend to study the behavior of extremals of the problem (P0) when the points y, z and the real number T vary and T is sufficiently large. In order to meet our goal, let us consider the following auxiliary minimization problem:

(P1)

By the strict convexity of f and the growth condition, the problem (P1) possesses a unique solution. It is easy to see that

Define

It is easy to see that this integrand is a differentiable and strictly convex function such that

Since the functions f and L are both strictly convex, we have

and

Consider the auxiliary variational problem

(P2)

where v is an absolutely continuous function such that v(0) = y, v(T) = z. Clearly, for any positive number T and any absolutely continuous function v we have
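Several displayed formulas in this section were lost in reproduction. A plausible reconstruction, following the standard convex setup of [5] (the notation x-bar, l and L, and the exact form of the growth condition, are our assumptions), is:

```latex
% (P0): minimise \int_0^T f(v(t), v'(t))\,dt over absolutely continuous
% v : [0,T] \to \mathbb{R}^n with v(0) = y, v(T) = z, where f is strictly
% convex, differentiable and superlinear:
%   f(x,u)/(|x| + |u|) \to \infty \text{ as } |x| + |u| \to \infty.

% (P1): minimise f(x, 0) over x \in \mathbb{R}^n; unique solution \bar{x}.
% Since \bar{x} minimises f(\cdot, 0),
\nabla_x f(\bar{x}, 0) = 0 .

% Define l = \nabla_u f(\bar{x}, 0) and the shifted integrand
L(x, u) = f(x, u) - f(\bar{x}, 0) - \langle l, u \rangle .

% L is differentiable and strictly convex, with
L(x, u) \ge 0, \qquad L(x, u) = 0 \iff (x, u) = (\bar{x}, 0) .

% For every T > 0 and every absolutely continuous v with v(0)=y, v(T)=z,
\int_0^T L(v(t), v'(t))\,dt
  = \int_0^T f(v(t), v'(t))\,dt - T f(\bar{x}, 0) - \langle l, z - y \rangle ,

% so minimising \int f and minimising \int L (problem (P2)) differ only by
% a constant depending on T, y, z. Property (C) then reads: if
% L(y_i, z_i) \to 0, then (y_i, z_i) \to (\bar{x}, 0).
```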
It follows from the equations above that the problems (P0) and (P2) are equivalent. Namely, a function is a solution of the problem (P0) if and only if it is a solution of the problem (P2). We claim that the integrand possesses the following property (C):
Assume that a sequence satisfies = 0. The growth condition implies that the sequence is bounded. Let (y, z) be one of its limit points. Then,

This implies that , as claimed.

Assume now that y, z are points of the space Rn, T > 2 is a real number and that an absolutely continuous function is an optimal solution of the problem (P0). Since the problems (P0) and (P2) are equivalent, the function is also an optimal solution of the problem (P2). We claim that

where the positive constant c0(|y|,|z|) depends only on |y| and |z|. Consider an absolutely continuous function defined by

By the definition of the functions and x, we have

Clearly, the integrals

do not exceed a positive constant c0(|y|,|z|) which depends only on the norms |y|, |z| and does not depend on T. Therefore

In the sequel we denote by mes(E) the Lebesgue measure of a Lebesgue measurable set . If a reader is not familiar with Lebesgue measure theory, we can say, roughly speaking, that a set of real numbers E is Lebesgue measurable if it is (in some sense) the limit of a sequence of sets of real numbers such that each Ei is a finite union of disjoint open intervals; the Lebesgue measure of a set which is a finite union of disjoint open intervals is the sum of their lengths. Now let be given. It follows from the property (C) that there exists a positive number such that for each point which satisfies In view of the choice of the constant and the inequality we have
It is easy now to see that the optimal solution spends most of the time in an - neighborhood of . By the inequality above, the Lebesgue measure of the set of all real numbers t, such that (t) does not belong to this -neighborhood, does not exceed the constant which depends only on |y|,|z| and and does not depend on T. Following the tradition, the point is called the turnpike. Moreover we can show that the set
is contained in the union of two intervals , where
3. Nonconvex nonautonomous integrands

We showed in the previous section that the structure of optimal solutions of the problem (P0), under the assumptions imposed on f, is rather simple and the turnpike is easily calculated as the solution of the problem (P1). Nevertheless, the convexity of the integrand f and its time independence are essential for the proof of this turnpike result. In order to obtain a turnpike result for essentially larger classes of variational problems and optimal control problems we need other methods and ideas. The following example helps to understand what happens if the integrand f is nonconvex and nonautonomous, and what kind of turnpike we have for general nonconvex nonautonomous integrands. Consider an integrand defined by
and the family of the problems of the calculus of variations (P3):
where y, z, T1, T2 are real numbers and T2 > T1. It is clear that the function f depends on the time variable t; for each real number t, the function f(t, . , . ) is convex, and for each pair of numbers the function is nonconvex. Hence the function is also nonconvex and depends on t. Let y, z, T1, T2 be real numbers with T2 > T1 + 2 and let a function be an optimal solution of the problem (P3). Note that the problem (P3) possesses a solution since the integrand f is a continuous function and the function is convex and grows superlinearly at infinity for each point. Consider a function defined by
Clearly,

and

Therefore

where

It is easy to see that for any real number the following inequality holds:
Since the constant c1(|y|,|z|) does not depend on T1 and T2, it follows from the inequality above that if the length T2 - T1 of the interval is sufficiently large, then the function is equal to cos(t) up to for most . Again, as in the case considered in the previous section, we can show that
where > 0 is a constant which depends only on , |y| and |z|. This example demonstrates that there are nonconvex time-dependent integrands for which the turnpike property holds with the same type of convergence as in the case of convex autonomous variational problems, but with a turnpike which is an absolutely continuous time-dependent function defined on the infinite interval. This leads us to the following definition of the turnpike property for general integrands. Consider the problem of the calculus of variations
(P)
where v is an absolutely continuous function such that v(T1) = y, v(T2) = z. Here T1 < T2 are real numbers, and the integrand f is continuous. We say that the integrand f possesses the turnpike property if there exists a locally absolutely continuous function Xf (called the "turnpike") which depends only on f such that the following condition holds: for each bounded subset K of the space Rn and each positive number ε there exists a positive constant T(K, ε) such that for each pair of real numbers T1 ≥ 0 and T2 ≥ T1 + 2T(K, ε), each pair of points in K and each optimal solution of the problem (P), we have
The turnpike property is very important for applications. Assume that the integrand f possesses the turnpike property, that the bounded set K and a small positive number ε are given, and that we know a finite number of "approximate" solutions of the problem (P). Then we know the turnpike Xf, or at least its approximation, and the positive constant T(K, ε), which is an estimate for the time period required to reach the turnpike. We can use this information if we need to find an "approximate" solution of the problem (P) with a new time interval [T1, T2] and new values at the end points T1 and T2. More precisely, instead of solving this new problem on the "large" interval [T1, T2], we can find an "approximate" solution of problem (P) on the "small" interval [T1, T1 + T(K, ε)] with the values y, Xf(T1 + T(K, ε)) at the end points, and an "approximate" solution of problem (P) on the "small" interval [T2 - T(K, ε), T2] with the values Xf(T2 - T(K, ε)), z at the end points. Then the concatenation of the first solution, the function Xf restricted to [T1 + T(K, ε), T2 - T(K, ε)], and the second solution is an "approximate" solution of problem (P) on the interval [T1, T2] with the values y, z at the end points. In Chapter 2 of [5] we consider a general space of continuous integrands endowed with a natural complete metric. We establish the existence of a set which is a countable intersection of open everywhere dense sets in this space such that for each integrand in this set the turnpike property holds. Moreover, we show that the turnpike property holds for approximate solutions of variational problems with such an integrand, and that the turnpike phenomenon is stable under small perturbations of f.
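The turnpike behaviour of the cos(t) example above can be checked numerically. The sketch below is our own illustration (the discretisation scheme, horizon and boundary values are arbitrary choices, not taken from the text): it minimises a forward-difference discretisation of the integral of f(t, x, x') = (x - cos t)² + (x' + sin t)² with fixed endpoints, by linear least squares, and the resulting path hugs cos(t) except near the two ends.

```python
import numpy as np

def turnpike_demo(T=20.0, y=3.0, z=-2.0, N=400):
    """Minimise sum_i h*[(x_i - cos t_i)^2 + ((x_{i+1}-x_i)/h + sin t_i)^2]
    over x_1..x_{N-1}, with x_0 = y and x_N = z fixed, via least squares.
    Returns the time grid t and the discrete minimiser v."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    c, s = np.cos(t), np.sin(t)
    w = np.sqrt(h)
    n = N - 1                       # unknowns x_1..x_{N-1}: x_j in column j-1
    rows, rhs = [], []
    # residuals w*(x_i - cos t_i), i = 1..N-1 (the i = 0 term is a constant)
    for i in range(1, N):
        row = np.zeros(n)
        row[i - 1] = w
        rows.append(row)
        rhs.append(w * c[i])
    # residuals w*((x_{i+1} - x_i)/h + sin t_i), i = 0..N-1
    for i in range(N):
        row = np.zeros(n)
        const = w * s[i]
        if i + 1 <= N - 1:
            row[i] = w / h
        else:                       # x_N = z is fixed
            const += w * z / h
        if i >= 1:
            row[i - 1] -= w / h
        else:                       # x_0 = y is fixed
            const -= w * y / h
        rows.append(row)
        rhs.append(-const)
    A, b = np.array(rows), np.array(rhs)
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    v = np.concatenate(([y], x, [z]))
    return t, v

if __name__ == "__main__":
    t, v = turnpike_demo()
    mid = (t > 4.0) & (t < 16.0)
    print("max deviation from cos(t) on [4, 16]:",
          float(np.max(np.abs(v[mid] - np.cos(t[mid])))))
```

Away from the endpoints the deviation from cos(t) is small, while at t = 0 and t = T the path is pinned to y and z; the escape from and return to the turnpike happen in boundary layers whose width does not grow with T.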
References

[1] P.A. Samuelson, "A catenary turnpike theorem involving consumption and the golden rule", American Economic Review, 55 (1965), 486-496

[2] A.J. Zaslavski, "Optimal programs on infinite horizon 1, 2", SIAM Journal on Control and Optimization, 33 (1995), 1643-1686

[3] A.J. Zaslavski, "The existence and structure of extremals for a class of second order infinite horizon variational problems", Journal of Mathematical Analysis and Applications, 194 (1995), 459-476

[4] A.J. Zaslavski, "Structure of extremals for one-dimensional variational problems arising in continuum mechanics", Journal of Mathematical Analysis and Applications, 198 (1996), 893-921

[5] A.J. Zaslavski, Turnpike Properties in the Calculus of Variations and Optimal Control, Springer, New York, 2006
On this page you'll find a few challenging puzzles. Try to solve them and compete for a prize! Submit your solution to aenorm@vsae.nl.
Answers to puzzles Aenorm 76

Olympic Games 2012: The fastest man in the world
The probability for Murandy Chartina turned out to be zero; the probabilities implied by the other athletes' odds add up to exactly 1. Fortunately, this was also the result in the actual 100 metres sprint in London. Odds of 2 to 1 imply a probability of 1/(1+2), odds of 3 to 2 a probability of 2/(3+2), and odds of 11 to 4 a probability of 4/(11+4).

Baseball Game
The score that sums exactly to 50 is: 6, 19 and 25.

Winner Aenorm 76
The winner of Aenorm 76 is Alexander Bolwerk. Congratulations!
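The odds-to-probability conversion in the sprint answer above can be verified directly with exact rational arithmetic:

```python
from fractions import Fraction

# odds "a to b" quoted in the answer: 2 to 1, 3 to 2, 11 to 4
odds = [(2, 1), (3, 2), (11, 4)]

# odds of a to b imply a winning probability of b / (a + b)
probs = [Fraction(b, a + b) for a, b in odds]
print(probs, sum(probs))
```

The three implied probabilities are 1/3, 2/5 and 4/15, which indeed sum to exactly 1, leaving probability zero for the fourth athlete.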
Geese Counting
A couple of little geese swim in a straight line behind Mother Goose: four of them in front, four at the back and three in the middle. However, in total there are fewer than half a dozen little geese. How is this possible?

Divide Some Barrels
Three thieves break into a brewery and steal 30 barrels of lager. When they open the barrels, they discover that 10 of them are empty, 10 are completely filled and 10 are half filled. How should they distribute the barrels among themselves so that every thief gains an equal amount of lager?

Solutions
Solutions to the puzzles above can be submitted up to February 1st, 2013. You can hand them in at the VSAE room (E2.01/04), mail them to aenorm@vsae.nl or send them to VSAE, for the attention of Aenorm puzzle 77, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one winner will be drawn. Solutions can be in either English or Dutch.
From the beginning of the academic year, there were a lot of activities and events for our members to attend. The first-year students had a great time during the VSAE Introduction Days, where they were motivated to join one of our committees. They organised our classic opening activity, the pool tournament, in November. The Beroependagen and our International Study Project (with 24 motivated students we visited KPMG in London) were both highly successful, as were the Inhouseday at MIcompany and the Consultancy Event. From the 23rd to the 25th of November, 40 VSAE members visited Breda for the one and only VSAE weekend. On the 3rd of December we organised, together with Kraket, a party: Black Light Night. Both activities were lots of fun. For the VSAE board the year is already coming to an end; the new board will be announced at the General Members Meeting on the 11th of December. On the 21st of December we want to thank all our active members for their efforts with a special dinner. Our monthly drink starts right after dinner and has a cocktail chic theme. All our members and alumni are invited, so we hope everyone will come and celebrate the end of 2012 with us!

Agenda VSAE
• 21 December Active Members Dinner
• 21 December Cocktail chic party
• 22 January Monthly Drink
• 5 February LED (National Econometricians Day)
• 27 February Actuarial Congress

The end of the year is approaching and the temperature is dropping. Christmas trees are being set up and the December atmosphere can be felt on every street corner. For Kraket it is also a time of working hard for the exam week and celebrating the Christmas holiday. That is why we will have our Alumni Drink on the 14th of December, where all our alumni are invited to a short lecture by the well-known dr. H.C. Tijms and an informal award for the best master thesis, followed by a drink at our association room and a dinner. And we will be having a special edition of our monthly drink at the end of the exam week, on the 21st of December, which promises to be as exciting as always! Furthermore, our beloved Ball committee is working hard to organise a Ball at the Holland Casino on the 23rd of January, together with a couple of other study associations, including an Acquaintance Drink on the 10th of January. January will also be the month in which a group of Kraketters and friends go on a ten-day skiing trip to 'Les Sybelles', France, departing around the 11th. And then we, as the board of Kraket, will be planning the second General Members Meeting, to show our members how far we have come and what we have planned for the future.

Agenda Kraket
• 14 December Alumni Drink
• 21 December Monthly Drink / Special
• 10 January Acquaintance Drink for Ball
• 11 January 10 days skiing trip
• 23 January Ball at Holland Casino Amsterdam
• 5 February LED (National Econometricians Day)
• 9 February 10 days skiing trip
You see numbers everywhere...

...and the challenges that go with them. Because you see things that others do not, which is exactly what makes you such an outstanding consultant. At Mercer we value that. Working at this international authority in financial-strategic services means working at the forefront. While you and your enthusiastic colleagues make financial HR questions measurable and tangible, Mercer provides an unrivalled client portfolio and a directly accessible, international knowledge centre. Our relaxed working atmosphere, as informal as it is substantive, is also renowned in the industry. All characteristics that, according to your future colleagues, make Mercer a top company.

Junior consultants (m/f)
We intend to keep that position. We are continually looking for junior consultants who can excel both individually and in a team: young, highly educated talents with a flexible mind, numerical insight, knowledge and common sense. Human professionals who, like Mercer, do not shy away from challenges. Do you fit this compelling profile? Then you will find plenty of opportunities at Mercer. Visit www.werkenbijmercer.nl or call 020-4313768.

IT'S TIME TO CALL MERCER
Consulting. Outsourcing. Investments.
Are you a consulting talent? The world's largest multinationals look to Towers Watson to tackle important business issues for them. Develop your talent and begin a challenging career at the thought leader in Retirement Solutions, Finance and Human Resources. werkenbijtowerswatson.nl (scan the QR code with your smartphone)