Aenorm 60



Determine the optimal investment mix for a pension fund with assets of 4.3 billion euros.


What is feasible? And what is sensible? How much risk may a pension fund actually take? After all, this concerns the retirement provision of hundreds of thousands of people. Whatever happens, a sizeable slice of the pie has to be left over for them. At Watson Wyatt we look beyond the numbers. Because numbers relate to people. And to social developments. That is what makes our work so interesting and varied.

Watson Wyatt advises companies and organisations worldwide in the field of 'people and capital': pensions, remuneration structures, insurance and investment strategies. We work for leading companies, with whom we build close relationships in order to arrive at the best solutions. Our way of working is open, driven and informal. We are looking for starting and experienced colleagues, preferably with a degree in Actuarial Science, Econometrics or (applied) Mathematics. For more information, visit werkenbijwatsonwyatt.nl.

Watson Wyatt. Gets you thinking.


Preface

Games & Deadlines

As I am sitting in a small pub thinking of what I will write in this preface, a lot of things come to mind. Still, my screen stays a blank white page, and the terrible rainy weather outside will not change any time soon either. Both the rain and the blank pages are trends in my life at this moment. Soon I'll have to hand in my thesis, and the deadline hangs over me like the sword of Damocles. Luckily, because of this deadline it will soon be over, and then the summertime can finally begin. Hopefully the weather in England will be better than it is in the Netherlands these days.

When I think of the past few months, 'games' come to mind. The Wimbledon final last Sunday, definitely the best final ever played. The marvelous football of the Dutch team; obviously they should have won the European Championship. Unfortunately this bloody Dutch coach somewhere abroad thought differently. The Olympic Games, which will keep us busy soon, and of course the Econometric Game 2008. Last April the ninth edition of the Econometric Game took place. Twenty-two universities from all over Europe travelled to Amsterdam to solve a case on Direct Marketing made by Tom Wansbeek, the Dean of our faculty. This edition therefore contains an interview with Tom Wansbeek and articles on Direct Marketing. We also show you the most beautiful, and winning, report by the University of Cambridge, presented in the first article.

At the end of this preface I would like to wish you all a very nice summertime. We are proud to present another edition of AENORM with many interesting articles, two interviews and a new puzzle. The second interview is with Russell Davidson. It is the Davidson & MacKinnon book which is calling me back to work on the thesis right now. Enjoy reading this edition. By the way, speaking about games, don't forget to hand in your solutions to the puzzles on page 71. The person who has won the puzzle at least four times in a row would like to get some competition!

Daniella Brals



Contents List

Econometric Game 2008 Final report (p. 4)
A crucial prerequisite for any successful direct marketing campaign concerns the identification of potential customers (Otter et al., 2006). In this article, we construct a statistical model based on in- and out-of-sample selection criteria in order to identify those combinations of individual characteristics that are associated with product purchases and hence identify a profit-maximising strategy based on a selection rule.
Team Cambridge

Increasing the price of theater, a modern tragedy? (p. 11)
Recently there have been many intense debates about the cultural policy of the Dutch government and the cultural sector itself. During the debates on financing the cultural sector, it appeared that there was very little information on the prices of the cultural sector in the Netherlands. To study the possible effects of an increased price, further research was required. This article describes a study that complements earlier studies and provides further insight into the prices of the cultural sector, in particular of the performing arts.
Nynke de Groot

Cracking the marketing campaign design code (p. 17)
Designing a marketing campaign is like cracking the code of a safe. Multiple combination locks must be found and brute force typically does not work. Simply following your gut feeling and deploying many campaigns is the kind of brute force that is in most instances ineffective. Adding to this complexity are the increase in the number of marketing campaigns within an organisation, fragmented customer segments and more channels. In this article, I will focus on the research that I have conducted in the area of target selection.
Hiek van der Scheer

Interview with Tom Wansbeek (p. 23)
As of the first of September last year our faculty has a new dean in the person of Tom Wansbeek. After having spent a couple of years as the dean of the economics faculty of the Rijksuniversiteit Groningen (RUG), Tom Wansbeek returned to Amsterdam, where he had studied econometrics a long time ago. All of this made it inevitable for the Aenorm editors to visit the new dean and discuss some important issues regarding teaching, the PhD applicants and his objectives as the new dean.
Daniella Brals and Raymon Badloe

Comparing approximations for sums of dependent random variables (p. 27)
Portfolios of insurers consist of lots of insurance policies that may or may not lead to the payment of a claim in a certain period of time. Since the amount of the possible claims is uncertain at the beginning of the period, all these policies are risks for the insurer. As a consequence of the absence of independence, actuaries need sophisticated methods and tools for modelling dependent risks. The theory of copulas and comonotonicity may become useful in modelling these dependencies among risks.
Carlo Jonk

A State Space Approach for Constructing a Repeat Sales House Price Index (p. 34)
Many individuals and institutions are interested in the development of house prices. The repeat sales model provides a way to measure this by the construction of a house price index. The traditional variants of the model have in common that they do not impose a time structure, which has a couple of disadvantages. In this article we will consider the Local Linear Trend repeat sales model, which provides a more reliable price index. Thereafter the hierarchical repeat sales model is discussed.
Willem-Jan de Goeij

IBIS UvA: research and consultancy in Lean Six Sigma (p. 39)
Lean Six Sigma is an advanced process improvement methodology. It was introduced at Motorola in 1986, and it incorporates previous methodologies, such as Statistical Process Control, Total Quality Management, and Zero Defects. The assimilation of Lean Thinking tools led to the popular Lean Six Sigma program. Nowadays, the program is deployed at companies such as General Electric, Honeywell International, and Citibank.
Benjamin Kemper and Jeroen de Mast

Mechanism design: Theory and application to welfare-to-work programs (p. 43)
My aim is to give you a flavor of what mechanism design theory is and how we can apply it in practice. More in particular, I will apply insights from the theory to the design of welfare-to-work markets, i.e., markets in which governments contract firms in order to help unemployed people find a job.
Sander Onderstal

Effects of correlation-volatility co-movement on portfolio risk: negligible or not? (p. 47)
Asset correlations and volatilities are important determinants of portfolio risk. A positive co-movement of asset correlation and volatility has the potential to further increase portfolio risk, by rendering diversification less effective when it is most urgently needed. Even so, many financial models seem not to account for such co-movement. Do risk managers fool themselves when they do not model this interaction properly?
René van Rappard

Entrepreneurship, Venture Capital and Economic Growth: a Comparison between the USA and Europe (p. 53)
Availability of Venture Capital (VC) plays a major role in establishing new economic activities. Nearly all economic growth comes from new small and medium-sized enterprises (SMEs). VC is widely available both in the USA and Europe, yet its use is less prevalent in Europe. More favorable conditions for economic growth in Europe are to be created. In this article it is argued that less attractive returns are generated on VC in Europe than in the USA.
Frank W. van den Berg

Interview with Russell Davidson (p. 61)
In the world of econometrics Russell Davidson is, among other things, most famous for publishing his book "Estimation and Inference in Econometrics" together with James MacKinnon in 1993. Since then he has worked for several years on problems related to the bootstrap.
Shari Iskandar and Erik Beckers

On OLG-GE Modelling in the Field of Pension Systems: An Application to the Case of Slovenia (p. 66)
These are the reasons for the anticipated increase of traditional social security benefits and the introduction of new types of old-age insurance. Among the key topics of social security is therefore the development of a sustainable, efficient and fair system of funding social security in an environment of expected further ageing of the population. Special emphasis is being put on the pension system due to its weight in the system of public finances; therefore it was also the focus of our research.
Miroslav Verbič

Puzzle (p. 71)

Facultative (p. 72)

Colophon

Volume 15, Edition 60, July 2008, ISSN 1568-2188
Chief editor: Lennart Dek
Editorial Board: Lennart Dek
Design: Carmen Cebrián
Lay-out: Taek Bijman, Jeroen Buitendijk
Editorial staff: Raymon Badloe, Erik Beckers, Taek Bijman, Daniëlla Brals, Jeroen Buitendijk, Lennart Dek, Marieke Klein, Hylke Spoelstra
Cover design: Carmen Cebrián

Aenorm has a circulation of 1950 copies for all students of Actuarial Sciences and Econometrics & Operations Research at the University of Amsterdam and for all students in Econometrics at the Free University of Amsterdam. Aenorm is also distributed among all alumni of the VSAE. Aenorm is a joint publication of VSAE and Kraket. A free subscription can be obtained at www.aenorm.nl. Publication of an article does not mean that the opinion of the board of the VSAE, the board of Kraket or the editorial staff is expressed in it. No part of this magazine may be duplicated without the permission of the VSAE or Kraket. No rights can be derived from the contents of this magazine. © 2008 VSAE/Kraket

Advertisers: Achmea, AON, Deloitte, Delta Lloyd, De Nederlandsche Bank, Hewitt, Ibis UvA, KPMG, Mercer, Michael Page, Mn Services, Ordina, ORTEC, PricewaterhouseCoopers, SNS Reaal, TNO, Towers Perrin, Watson Wyatt Worldwide, Zanders
Information about advertising can be obtained from Tom Ruijter: info@vsae.nl
Editorial staff addresses: VSAE, Roetersstraat 11, 1018 WB Amsterdam, tel. 020-5254134; Kraket, De Boelelaan 1105, 1081 HV Amsterdam, tel. 020-5986015
www.aenorm.nl



Econometrics

Econometric Game 2008 Final report

A crucial prerequisite for any successful direct marketing campaign concerns the identification of potential customers (Otter et al., 2006). Towards this end, we construct a statistical model based on in- and out-of-sample selection criteria in order to identify those combinations of individual characteristics that are associated with product purchases and hence identify a profit-maximising strategy based on a selection rule.

Team Cambridge consisted of one PhD student, three CPGS students and one MPhil student, respectively: Diego Winkelried, TengTeng Xu, Richard Louth, Heather Battey and Nicky Grant. The research interests of the team members are mainly econometric, ranging from the econometrics of repeated cross sections to volatility trading and hedge fund performance, and it was gratifying to see, through the case proposed in the Econometric Game, these methods being applied in other disciplines rather than strictly to economic phenomena.

Introduction

A firm has to decide whether or not to send a mail to an individual with characteristics x. The cost of mailing is C = € 0.65. Once the individual receives the mail, she may order the product. This is the primary action. Following an order, the firm dispatches the product and then the individual decides whether to keep it. This is the secondary action. The revenue from a good sold is A = € 24.89, whereas a product that is dispatched but not sold (that is, returned) implies a loss of K = € 5.92.

Profit function and selection rule

Let y1(x) and y2(x) be the random binary variables corresponding to the primary and secondary actions, respectively: y1(x) = 1 if there is an order from the individual and y2(x) = 1 if she eventually purchases the good. Consequently, the profit obtained from an individual with characteristics x is given by

π(x) = A y1(x) y2(x) − K y1(x)[1 − y2(x)] − C    (1)

The evaluation of the profit function (1) provides valuable information to identify potentially profitable customers. In fact, E[π(x)|x] = 0 separates the individual characteristics space into a mailing region, where E[π(x)|x] > 0, and an inactivity region, where E[π(x)|x] < 0. Note that E[yj(x)|x] = Pr(yj = 1|x) for j = 1, 2. Then, following Koning et al. (2002), the expected profit can be shown to be

E[π(x)|x] = (A + K) Pr(y2 = 1|x) − K Pr(y1 = 1|x) − C    (2)

It follows that (2) is greater than zero whenever

$$\Pr(y_2 = 1 \mid x) > \frac{K}{K + A}\,\Pr(y_1 = 1 \mid x) + \frac{C}{K + A} \qquad (3)$$

From this selection rule, it becomes evident that modelling Pr(y1 = 1|x) and Pr(y2 = 1|x) is at the core of the firm's decision-making.

Individual responses

Consider the following random utility model

u1 = q1(x) + ε1  and  u2 = q2(x) + ε2    (4)

where uj is the utility obtained from performing the 'order' (j = 1) and 'keep' (j = 2) actions, and qj(x) are the associated linear indexes. Thus, an individual orders the product if u1 > 0 and subsequently keeps it if u2 > 0. Moreover, we assume that

$$\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \end{pmatrix} \sim \text{iid}\!\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right) \qquad (5)$$

where ρ is the correlation between ε1 and ε2, so we allow for the possibility that the 'ordering' and 'keeping' actions are related. Intuitively, we would expect ρ > 0 since, once a product is ordered, it is more likely that the individual will also keep it. It follows that, for j = 1, 2,

Pr(yj = 1|x) = Pr(εj ≤ qj(x)) = F(qj(x))    (6)
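With the case constants in hand, the decision rule is easy to evaluate once the two response probabilities have been estimated. The short Python sketch below (not taken from the report; the helper functions and the illustrative probabilities are assumptions) computes the expected profit (2) and applies the mailing rule (3):

```python
import numpy as np

# Case constants from the text (in euros)
A = 24.89   # revenue from a product that is kept
K = 5.92    # loss on a product that is ordered but returned
C = 0.65    # cost of one mailing

def expected_profit(p_order, p_keep):
    """Expected profit per mailed individual, equation (2):
    E[pi|x] = (A + K) P(y2=1|x) - K P(y1=1|x) - C."""
    return (A + K) * np.asarray(p_keep) - K * np.asarray(p_order) - C

def mail(p_order, p_keep):
    """Selection rule (3): mail whenever the expected profit is positive,
    i.e. P(y2=1|x) > K/(K+A) * P(y1=1|x) + C/(K+A)."""
    return np.asarray(p_keep) > K / (K + A) * np.asarray(p_order) + C / (K + A)

# Illustrative probabilities for three hypothetical individuals
p1 = np.array([0.02, 0.05, 0.10])   # P(y1 = 1 | x), probability of an order
p2 = np.array([0.01, 0.04, 0.08])   # P(y2 = 1 | x), probability of a purchase
print(expected_profit(p1, p2))      # approx. [-0.46, 0.29, 1.22]
print(mail(p1, p2))                 # [False, True, True]: only the last two are worth mailing
```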


What do you do? When you know the working population keeps shrinking

The Dutch population is ageing: people live longer on average and fewer children are being born. This means that pension costs will rise, which also has far-reaching consequences for an insurance company. Within Achmea, the actuary is the designated person to calculate and explain what this ageing means: what are the financial effects of laws and regulations? What does it mean for the pricing and premium setting of products? And how do we organise the internal and external reporting on this as efficiently as possible?

Actuary

Of course, the actuary within Achmea does not only deal with this issue, but also with all other relevant social questions. As a (prospective) actuary within Achmea you can develop yourself in various fields. As the largest actuarial employer in the Netherlands, we offer many opportunities in your field at different locations.

What we ask

Do you have an actuarial background? Or have you completed a science degree such as mathematics, physics, astronomy or econometrics and do you want to become an actuary? Then we would like to meet you! We explicitly also invite part-timers to apply.

What we offer

We offer a varied job in a modern and enthusiastic organisation. The employment conditions are excellent: for example a thirteenth month, a pension scheme and a market-level salary. In addition, you can count on many opportunities for personal and professional development.

Achmea

Achmea's motto is: Achmea ontzorgt (Achmea takes your worries away). To live up to that, we need employees who look beyond their own desk. Who have an eye for what is going on. But above all: people who empathise with our customers and know how to translate that into original solutions. Achmea distinguishes itself through its social involvement. Achmea strives for a workforce that reflects the composition of society. We are convinced that cultural diversity contributes to business and personal success.

Want to know more?

For more information about the position, you can call Joan van Breukelen, (06) 20 95 7231. We look forward to receiving your application via www.werkenbijachmea.nl.

Centraal Beheer Achmea, FBTO, Avéro Achmea, Interpolis, Zilveren Kruis Achmea

Ontzorgen is een werkwoord ('taking care of worries' is a verb)




                      No selection   Probit    Logit     C log-log   Scobit
Number selected       29624          10019     9822      9925        9877
Percentage selected   100            33.821    33.156    33.503      33.341
Mean profit (€)       0.315          1.543     1.552     1.531       1.558

Table 1: Selection rules of alternative models

where F(.) is the link function. Further structure is given to this problem below.

Data Description

We have available a data set that consists of a sample of 29624 individuals from a mailing list. The decisions taken by the household are (a) whether to order the product and, in the case of ordering, (b) whether to keep it or send it back. Accordingly, the data set is comprised of 2 binary dependent variables, y1 and y2, and 11 further explanatory variables (x), of which 4 are continuous (x1, x5, x6, and x11) and one is an indicator variable (x2). Out of the sample of targeted individuals, 3.39% ordered the product and of these, 51.49% paid for it.

"A selection rule can increase mean profits fivefold"

Single-action model

In this section we deal with Questions 1, 2 and 3 of the Game. We consider a special case of the general framework outlined before: every individual that orders the product keeps it, i.e. there is no returning. This implies that Pr(y1 = 1|x) = Pr(y2 = 1|x) and so the selection rule (3) simplifies to

Pr(y1 = 1|x) > C / A    (7)

and the expected profit reduces to

E[π(x)|x] = A Pr(y1 = 1|x) − C    (8)

The goal is thus to estimate Pr(y1 = 1|x) to evaluate (7) and (8).

Estimation

We proceed parametrically and entertain four different link functions: Probit, Logit, Complementary log-log (C log-log) and Skewed Logit (Scobit). The Probit and Logit models are well-known. The C log-log puts more weight in the tails of the distribution and is expected to accommodate the large number of zero responses found in the data. The Scobit is also designed to deal with low rates of responses. It is a simple generalisation of the Logit, and the degree of skewness (and the weight in the tails) is controlled by a single parameter easily estimated from the data (Nagler, 1994). To allow for a flexible relationship between the explanatory variables and the response, we take the index q1(x) in (6) to be quadratic in the regressors, q1(x) = x'a + x'Ax, where a and A contain parameters to be estimated. The estimation is performed using a stepwise elimination algorithm (general-to-specific) based on an iterative forward-backward procedure of variable choice.¹

¹ We do not report estimated coefficients due to space considerations, and the fact that the variable descriptions were not made available to us.

For each estimated model we select individuals in the sample according to (7) and compute the expected profits as in (8). The results are shown in Table 1. Despite the low rate of responses in the data, the results from the Probit and Logit specifications are similar to those obtained with the models designed to deal with this type of low response. That is, the selection rule based on any of the estimated models selects approximately 33% of the individuals in the sample and increases mean profits fivefold.
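As an illustration of how the single-action model and rule (7) fit together in code, the sketch below fits a Probit and a Logit with statsmodels and selects individuals whose predicted order probability exceeds C/A (about 2.6%). It is only a sketch under assumptions that are not in the report: a hypothetical file mailing.csv with columns y1 and x1 to x11, and a purely linear index, so the quadratic terms and the stepwise general-to-specific search are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

A, C = 24.89, 0.65               # revenue and mailing cost from the case
threshold = C / A                # rule (7): mail if P(y1 = 1 | x) > C/A (about 0.026)

df = pd.read_csv("mailing.csv")  # hypothetical file with columns y1, x1, ..., x11
X = sm.add_constant(df[[f"x{i}" for i in range(1, 12)]])
y = df["y1"]

probit = sm.Probit(y, X).fit(disp=False)
logit = sm.Logit(y, X).fit(disp=False)

p_hat = np.asarray(probit.predict(X))   # estimated order probabilities
mailed = p_hat > threshold              # individuals picked by rule (7)
profit = A * p_hat - C                  # expected profit (8) per individual

print(f"selected: {mailed.mean():.1%}")
print(f"mean expected profit of the selected group: {profit[mailed].mean():.2f} euro")
```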

Model Selection

As most of the link functions considered are not nested within one another, we cannot use standard testing techniques. To determine which of the link functions best explains the data, we discriminate between the models using the non-nested model selection tests advanced in Vuong (1989) and Clarke (2007). In the Vuong (1989) test the null hypothesis is that the two models are equally far away from the true model in terms of the Kullback-Leibler information criterion. Under H0 the expected value of the difference in the appropriately adjusted log-likelihoods is zero.




A significant deviation from zero signals that one of the models is closer to the true underlying distribution; under H0 the test statistic is asymptotically N(0, 1). Moreover, significant deviations in either direction would be conformable with one or the other of the models being superior in fitting the data. This test can be thought of as a two-sample z-test for equal means of the individual contributions to the log-likelihood functions. The approach in Clarke (2007) tests whether the median difference in the adjusted log-likelihood is non-zero. It is an exact binomial test. Under H0, both models are of equal distance from the true model and thus the probability that the individual log-likelihood (corrected for the complexity of each model) is greater for one model than for the other is 0.5. The direction of the deviation from 0.5 indicates which of the two models provides a better fit of the data, or whether indeed both models are of equal explanatory power.

The results of pairwise non-nested tests among the four competing models are displayed in Table 2. By comparing each model against the others we can see that there is overwhelming evidence in favour of the Probit model: in the Vuong test the relevant z-scores are well above the critical value of 1.96, and in the Clarke test the observed proportions are significantly greater than 0.5. Furthermore, we can infer that the Logit model fits the data better than both the Scobit and the C log-log model, thereby implicitly ranking the models. This result is surprising since the simplest Probit seems to outperform the heavy-tailed alternatives, despite the low frequency of orders in the sample.

Table 2: Testing non-nested models

H0                    Vuong              Clarke
Probit = Logit        2.716  (0.007)     0.649  (0.000)
Probit = C log-log    2.777  (0.006)     0.662  (0.000)
Probit = Scobit       3.029  (0.001)     0.641  (0.000)
Logit = C log-log     0.9848 (0.325)     0.677  (0.000)
Logit = Scobit        1.105  (0.135)     0.577  (0.000)
Scobit = C log-log    0.1106 (0.9119)    0.721  (0.000)

Note: Figures in parentheses are p-values.
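Both tests can be computed directly from the per-observation log-likelihood contributions of the two candidate models (for statsmodels discrete models these are available as model.loglikeobs(results.params)). The function below is an illustrative implementation of the procedure described above, with a Schwarz-type correction for the number of parameters; it is a sketch, not the code used for Table 2.

```python
import numpy as np
from scipy.stats import norm, binomtest   # binomtest requires SciPy >= 1.7

def vuong_clarke(ll1, ll2, k1, k2):
    """Vuong (1989) z-test and Clarke (2007) sign test for two non-nested models.
    ll1, ll2: per-observation log-likelihoods; k1, k2: numbers of parameters."""
    ll1, ll2 = np.asarray(ll1), np.asarray(ll2)
    n = ll1.shape[0]
    d = ll1 - ll2
    corr = (k1 - k2) / 2.0 * np.log(n)          # complexity (Schwarz-type) correction

    # Vuong: under H0 the corrected statistic is asymptotically N(0, 1)
    z = (d.sum() - corr) / (np.sqrt(n) * d.std(ddof=1))
    p_vuong = 2 * norm.sf(abs(z))

    # Clarke: under H0 each corrected difference is positive with probability 0.5
    wins = int(np.sum(d - corr / n > 0))
    p_clarke = binomtest(wins, n, 0.5).pvalue
    return z, p_vuong, wins / n, p_clarke
```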

Forecasting and campaign profitability

The main result so far is that a selection rule can increase mean profits fivefold. However, this may be a manifestation of the fact that we used the same sample to both estimate and predict, which is likely to yield an inflated estimate of the benefit of using a selection rule. Even though the above analysis is insightful for determining which model captures most of the features in the sample, from the firm's point of view an out-of-sample evaluation is perhaps more important for its actual decision-making. To this end, we now assess the out-of-sample performance of the competing models by the following procedure: a model is estimated using about 28000 observations randomly drawn from the original sample, and the remaining 2000 observations are used as a validation sample to compute the mean square prediction error (MSPE). This process is repeated 2000 times. This approach is used for two levels of testing. Firstly, as above, to select the most suitable link function. Secondly, within each class we consider a simpler specification. The "simple model" includes the basic explanatory variables only (q1(x) = x'a), whereas the "complex model" is as in the Estimation section (q1(x) = x'a + x'Ax). The introduction of the simple model is justified since the complex specification may over-fit the data, and hence it may be outperformed out of sample by a more parsimonious alternative.

The results for the MSPEs are presented in Table 3. We observe that the complex formulation is selected for each of the four model classes using this criterion, as its MSPE is smaller in each case. On the other hand, no one model seems to be superior to the others, as their out-of-sample performances are remarkably similar. Based on this and the in-sample results, the Probit model is duly selected as a fair final empirical specification for Pr(y1 = 1|x).

Table 3: Out-of-sample performance

           Probit          Logit           C log-log       Scobit
Simple     3.423 (0.342)   3.432 (0.343)   3.438 (0.345)   3.436 (0.344)
Complex    3.384 (0.338)   3.383 (0.338)   3.382 (0.338)   3.383 (0.338)

Notes: a) Based on a validation sample of 2000 observations and 2000 replications. b) The values are the mean (standard errors in parentheses) of the MSPEs, recorded in percentage terms, over replications.
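The validation scheme itself is straightforward to reproduce. The sketch below assumes a response vector y and a design matrix X (including a constant) and uses a Probit; the function name, the argument defaults and the use of statsmodels are illustrative choices, not the report's code.

```python
import numpy as np
import statsmodels.api as sm

def mspe_replications(y, X, n_test=2000, n_rep=2000, seed=0):
    """Repeatedly hold out n_test observations, fit a probit on the remainder
    and record the mean squared prediction error (in %) on the hold-out set."""
    rng = np.random.default_rng(seed)
    y, X = np.asarray(y), np.asarray(X)
    idx = np.arange(len(y))
    mspe = np.empty(n_rep)
    for r in range(n_rep):
        test = rng.choice(idx, size=n_test, replace=False)
        train = np.setdiff1d(idx, test)
        fit = sm.Probit(y[train], X[train]).fit(disp=False)
        p = fit.predict(X[test])
        mspe[r] = 100 * np.mean((y[test] - p) ** 2)
    return mspe.mean(), mspe.std(ddof=1)   # mean and dispersion over replications
```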

Having decided on an appropriate model for Pr(y1 = 1|x), we now assess the scope of the selection rule (7). As before, using 2000 replications, the probit model is estimated for nearly 28000 individuals and the expected profits are computed for the remaining 2000. The empirical distributions of the profits obtained with and without the selection rule are shown in Figure 1.



My passion

Targeted innovation. Creating new products, new services, new opportunities. Finding creative answers to the questions posed by our society. Working to improve solutions by making them faster, safer, smarter and more efficient. That is my passion.




Figure 1. Out-of-sample expected profits

As in our previous findings, the selection rule improves mean profitability considerably. The averages of the mean profit distributions with and without the selection rule are € 1.55 and € 0.32, respectively, with an average proportion of selected individuals of 33.74%. These results are in line with those in Table 1. As for the total profits obtained from random samples of 2000 individuals, we observe that the averages of the profit distributions with and without the selection rule are € 1045.80 and € 634.85, respectively. More importantly, as is evident from the figure, the selection rule leads to a less dispersed profit distribution, having filtered out most of the sampling variability. The standard error of the distribution with no selection rule is € 205.87, more than three times the standard error of the distribution under the selection rule, € 62.08. This highlights the usefulness of the selection rule as a tool for risk management.

Two-action model

This section addresses Question 4 of the Game. Building on the apparent superiority of the Probit model, we now re-introduce the bivariate model described in the Introduction, with Gaussian errors in (4).

Indeed, the fact that around 50% of the individuals in the sample who ordered the product subsequently purchased it vindicates the estimation of a "joint model" with ρ ≠ 0. For comparative purposes, we also estimate an alternative "split model" where ρ = 0, which corresponds to the estimation of two univariate probit models for Pr(y1 = 1|x) and Pr(y2 = 1|x). Based on the results of stepwise estimation using the procedure adopted in the Estimation section, the log-likelihoods for the joint and split models are -4751.539 and -6373.366, respectively. Thus, according to these statistics the joint model statistically outperforms its split-model counterpart, in accordance with the findings of Koning et al. (2002).

The results of the expected profit calculation using the selection rule (3) for the whole sample are shown in Table 4. As we can see, there is a sharp distinction between the numbers of targeted individuals selected with the two models. It is interesting to note that the number of individuals selected by the joint model is very close to the number of purchasers in the actual sample. Thus, by controlling for the correlation between ordering and subsequently keeping the product, the joint model provides a more reliable estimate of the set of people to mail. Selection based on the split model tends to target more loss-incurring individuals (out of the 2416 selected by this model, 2222 did not purchase the good). As a result, the joint model gives a higher expected mean profit, with the selection rule based on the split model yielding negative profits.

                      No selection   Split model   Joint model
Number selected       29624          2416          140
Percentage selected   100            8.156         0.473
Mean profit (€)       -0.342         -1.003        1.074

Table 4: Selection rules of bivariate models
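statsmodels does not provide a bivariate probit, so the joint model has to be coded directly. The sketch below writes down the log-likelihood implied by (4)-(5), distinguishing the three observable outcomes: no order, order and keep, order and return. The tanh reparameterisation of ρ, the variable names and the observation-by-observation loop are our own simplifications, not the code behind Table 4.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def neg_loglik_joint(theta, y1, y2, X):
    """Negative log-likelihood of the joint model. theta stacks the two index
    coefficient vectors and one unbounded parameter mapped to rho via tanh."""
    k = X.shape[1]
    b1, b2, rho = theta[:k], theta[k:2 * k], np.tanh(theta[-1])
    q1, q2 = X @ b1, X @ b2
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # P(order and keep) = Phi2(q1, q2; rho), evaluated point by point
    p_keep = np.array([multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)
                       for a, b in zip(q1, q2)])
    p_order = norm.cdf(q1)                             # P(order)
    p_return = np.clip(p_order - p_keep, 1e-12, None)  # P(order but return)
    p_none = np.clip(1.0 - p_order, 1e-12, None)       # P(no order)
    p_keep = np.clip(p_keep, 1e-12, None)
    ll = np.where(y1 == 0, np.log(p_none),
                  np.where(y2 == 1, np.log(p_keep), np.log(p_return)))
    return -ll.sum()

# Maximising this likelihood, e.g. with scipy.optimize.minimize(neg_loglik_joint,
# np.zeros(2 * X.shape[1] + 1), args=(y1, y2, X)), gives the joint-model estimates;
# fixing rho = 0 gives the split model described in the text.
```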

Table 5 presents an analysis which parallels the out-of-sample approach of the Forecasting and campaign profitability section. In contrast to the above, the MSPE does not indicate much of a difference between the two models when predicting the marginal distributions of the primary and secondary actions, possibly due to the low rates of response in the original data set. Nevertheless, the evidence suggests that the joint model is slightly superior when predicting the joint event (y1 = 1)(y2 = 0), which is the primary source of losses in the analysis. We can therefore confirm that the greater accuracy of the joint model in identifying those individuals who buy after an order is the main reason why the selection rule of this model delivers higher profits.




                            Split model     Joint model
MSPE of (y1 = 1)            3.360 (0.343)   3.358 (0.345)
MSPE of (y2 = 1)            1.683 (0.238)   1.682 (0.239)
MSPE of (y1 = 1)(y2 = 0)    1.983 (0.256)   1.795 (0.275)
Mean profit (€)             -0.823          1.0494

Notes: a) Based on a validation sample of 2000 observations. b) The values are the mean (standard errors in parentheses) of the MSPEs, recorded in percentage terms, over replications.

Table 5: Out-of-sample performance

Conclusion

In this report we have considered a number of issues relating to the use of econometric techniques in marketing strategies for profit maximisation purposes. A crucial part of this was determining an appropriate selection rule for identifying those individuals with characteristics that are most associated with keeping the product after ordering. To determine the linear index space and thus the set of important characteristics, we estimated various parametric specifications for the conditional mean. The first part of our analysis was concerned with discriminating between popular discrete choice models in explaining the survey data. Using a two-step testing procedure we were able to determine which of the models was best in explaining the data whilst offering sufficient predictive power. This analysis favoured a Probit model that contained a complex set of explanatory variables. Using this model we were then able to assess the profitability of a selection rule derived from a binary choice model, and found that the selection rule significantly outperformed a random mailing alternative. To extend this analysis, we moved on to consider the joint modelling of both order and purchase responses using a bivariate Probit model. Our results imply that the use of a joint model which captures the dependence between primary and secondary actions is a more useful tool for targeting potentially profitable customers.

In terms of limitations, one crucial assumption set out in the case is that the profit induced by each individual's response is constant across individuals. In reality, individual characteristics may induce differences in responses in terms of the quantity bought (Otter et al., 2006). A flexible approach would be to account for this heterogeneity by considering profit functions of the form

π(x) = A(x) y1(x) y2(x) − K(x) y1(x)[1 − y2(x)] − C


Finally, although many extensions can be considered, we want to highlight one in particular. Despite the relative successes of our parametric approach, a more flexible, nonparametric link function as proposed in Cosslett (1983) may yet yield additional insights beyond those reported in this study.

References

Clarke, K. (2007). A Simple Distribution-Free Test for Nonnested Model Selection, Political Analysis, 15(3), 347-363.

Cosslett, S.R. (1983). Distribution-Free Maximum Likelihood Estimator of the Binary Choice Model, Econometrica, 51(3), 765-782.

Koning, R., Spring, P. and Wansbeek, T. (2002). Joint Modeling of Primary and Secondary Action in Database Marketing, University of Groningen Research Institute SOM, Research Report 02F59.

Nagler, J. (1994). Scobit: An Alternative Estimator to Logit and Probit, American Journal of Political Science, 38(1), 230-255.

Otter, P., van der Scheer, H. and Wansbeek, T. (2006). Optimal Selection of Households for Direct Marketing by Joint Modeling of the Probability and Quantity of Response, University of Groningen, CCSO Working Papers 2006/06.

Vuong, Q.H. (1989). Likelihood Ratio Tests for Model Selection and Non-nested Hypotheses, Econometrica, 57(2), 307-333.


Econometrics

Increasing the price of theater, a modern tragedy?

Recently there have been many intense debates about the cultural policy of the Dutch government and the cultural sector itself. The cause of these debates was the plans of the Minister of the Department of Education, Culture and Science (OCW), Ronald Plasterk. After his appointment he was planning to cut the Ministry's expenditures on the cultural sector by 50 million euro. He wanted to introduce the "benefit principle" and let the actual users of culture pay. This cut has been cancelled, but the Minister has encouraged interest in demand studies for the arts. During the debates on financing the cultural sector, it appeared that there was very little information on the prices of the cultural sector in the Netherlands. To study the possible effects of an increased price, further research was required. This article describes a study that complements earlier studies and provides further insight into the prices of the cultural sector, in particular of the performing arts.

Data

We have a large dataset of 137 Dutch theaters at our disposal to estimate the price elasticity of the performing arts. The dataset comes from the Vereniging van Schouwburg- en Concertgebouwdirecties (VSCD), a branch organization that promotes the interests of its members, theaters in the Netherlands. Only the larger theaters, where at least 40 professional performances a year are held, can become a member of the VSCD. Overall, the members of the VSCD accounted in 2005 for 72% of the total number of performances in the Netherlands and for 66% of the total number of visitors (VSCD, 2005). The VSCD has collected data from 1996 to 2005. The dataset contains information on the revenues from entrance fees, the number of visitors, the expenditures on publicity, seating capacity and statistics on different genres of performances. In addition to this dataset, we use statistics from the Central Bureau for Statistics (CBS). These are more general variables on the population of the Netherlands, like income, the price of culture and the size of the population. The VSCD data tell us that the average price of a visit to a VSCD theater has increased from €5.50 in 1996 to €15.47 in 2005. This is an increase of 153%. At the same time, the number of visits has been constant over the years. This indicates either that the price of the performing arts has little influence on the demand for theater, or that other factors determine the demand.

Nynke de Groot obtained her Master's in Econometrics in April 2008. This article is a summary of her master's thesis, written under the supervision of Hans van Ophem. It was a continuation of a study for the Ministry of OCW performed by research company APE, where she has been working since 2006. Her thesis was presented at the ACEI conference in Boston in June. She was also the president of the committee that organized the Econometric Game 2008.

Complications of the data

The dataset of the VSCD contains several complications. One large complication is the unbalanced sample. The number of theaters included in the dataset grows over time: in 1996 the set only contains 62 theaters, while in 2005 this number has increased to 133 theaters. Information on the whole period is only available for 20 theaters. It is possible that selectivity exists in the sample, i.e. that the decision not to report in a specific year is related to factors that determine the number of visitors of a theater. We test for selectivity bias in our model by the test of Nijman and Verbeek (1992). An even larger complication of the data is censoring with respect to the number of visitors of a theater, the dependent variable in our model. The number of visitors should measure the demand for a theater. However, the number of visitors has an upper limit: the seating capacity of a theater times the number of performances. Estimating a model without accounting for sold-out performances will lead to inconsistent estimates of the price elasticity.




The price elasticity will probably be estimated towards zero, since the real unobserved demand will change when the price is increased, but this change in demand can only be observed when the demand drops below the seating capacity of the theater. The general solution to censoring, a Tobit model, is however inappropriate, since the dataset only contains information on the aggregate of performances per theater. We have decided to neglect the problem and to take the inconsistency into account when reviewing the estimation results, since finding a solution would be a master's thesis in itself.

Model

In the study itself, several models with different definitions for the price of substitution of the performing arts and for income have been estimated. In this paper we only describe the best performing model; for the other models we refer to the thesis. To estimate the price elasticity of the performing arts, we want to explain the demand for theater by several explanatory variables. The demand for theater is defined as the total number of visits to a theater in a specific year; we call this variable VSCD. It is divided by the population of the Netherlands to correct for the growth of the population. One obvious choice as an explanatory variable is the price of tickets of the theater (P). The price of the theater is measured by dividing the total income from entrance fees by the number of visits. Since the number of visits also includes free visits and discounted visits, this price should be viewed as an average price per visit. We assume the percentage of free and discounted visits is roughly the same for all theaters. It is not certain that the price of theater is an exogenous variable. There has been research on this matter but the results were not conclusive. Withers (1980) argues that price is exogenous since in earlier literature no statistical difference was found between 2SLS and OLS estimates. Other authors, however, are not sure of the exogeneity of price. Since we find it likely that price is influenced by demand, we assume price to be endogenous. Our dataset does not supply us with a valid instrument for price; all potential instruments are missing in certain years. The only available instrument is the lagged value of price, so we use this as an instrument for price to account for its endogeneity. The price of substitutes (PSUB) also influences the demand for theater: when the price of a substitute decreases, keeping all other variables constant, demand for theater will drop because people will replace theater with the substitute. We use the price of other theaters in the region of the theater as the price of the substitute, since people will not travel across the country for a lower price.


Another interesting explanatory variable is the expenditures on publicity (PUBL). Increasing the publicity costs would logically increase the number of visitors. Previous research has also indicated that a large part of the number of visitors can be explained by the number of visitors in the previous year, because habit formation plays a large role in the performing arts. People who went to the theater last year have a high probability of going to the same theater the next year. Since data on individual visitors is not available, a good measure for habit formation is the number of visitors of the previous year. In demand theory, the demand for the good in the previous year is likely to influence the demand in the current year. Therefore, the lagged dependent variable will also be included in the model. Withers (1980) uses the method of Owen (1969) to account for a shift in consumption when income increases. His theory is that when income increases, so does the price of leisure. An increase in income therefore results in an increase in demand for the performing arts because people have more to spend, but at the same time this increase might be offset by a decrease in demand because of the rise of the price of leisure. The method of Owen separates the "full" income that can be spent on consumption from leisure price effects that take the price of time into account. In his article, he states that demand is not only a function of the price of attending, the price of substitutes and income, but also of the price of leisure and the distribution of income. The price of leisure (PL) is calculated as PL = w(1 - UR), where w is the average hourly wage and UR is the unemployment rate. We use the Dutch hourly wage and the unemployment rate of the municipality of the theater. The advantage of using the unemployment rate of the municipality is that one can take regional differences into account and have a more accurate measure of the income of the visitors. The full income (F) that can be spent on consumption is defined as F = Tc·PL + Tw·w + Y, where Tc is the average hours of leisure, Tw is the average hours of work and Y is the household income. The price index that is used to standardize the prices is also transformed in Owen's method and is equal to FPI = k·CPI + (1 - k)·PLI, where k is the weight of non-leisure income in the full income and PLI is the leisure price index. The distribution of income (D) that Withers includes in his model is, in our case, the percentage of households with a low income, as defined by the CBS.
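The three constructed variables follow directly from these definitions. The tiny sketch below only illustrates the arithmetic; all numbers are made up.

```python
# Illustrative construction of Owen's variables (all inputs are made-up numbers)
w, UR = 20.0, 0.05                    # hourly wage, municipal unemployment rate
Tc, Tw, Y = 2500.0, 1700.0, 30000.0   # leisure hours, working hours, household income
CPI, PLI, k = 1.10, 1.15, 0.7         # consumer and leisure price indices, weight

PL = w * (1 - UR)               # price of leisure
F = Tc * PL + Tw * w + Y        # "full" income
FPI = k * CPI + (1 - k) * PLI   # full price index
```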


At Aon you are free to leave the beaten track behind in order to arrive at sound pension advice.

For our offices in Amsterdam, Purmerend, Rotterdam and Zwolle we are looking for actuarially trained people with relevant work experience (2-5 years) for the position of analyst. Do you have the ambition to solve complex actuarial problems in an entrepreneurial and creative way, and to master the profession while doing so? Do you want to learn the advisory trade from your fellow specialists and then quickly grow into an independent adviser to the client? Then look at www.aon.nl (under vacancies) for more information, or call Mr R.K. Sagoenie, Managing Consultant, on 020 430 53 93.

Aon Consulting is the world's third-largest risk adviser in the field of employee benefits and in the Netherlands provides advisory services to (listed) companies and pension funds. The Aon Actuarial Advisory Group offers advice and practical support in the field of pensions. Its service portfolio ranges from strategic policy advice, pension advice and administration to process guidance and temporary support with, for example, implementations and secondment. Aon is at home in all actuarial services, from preparing cost forecasts and performing (valuation) calculations to certifying annual accounts and analysing the investment results achieved.

Risk management • Employee Benefits • Insurance



Summarizing, the model that will be estimated is

$$\begin{aligned}
\log\!\left(\frac{VVSCD_{it}}{Population_{t}}\right) ={}& \alpha_i + \beta_1\log\!\left(\frac{VVSCD_{i,t-1}}{Population_{t-1}}\right) + \beta_2\log\!\left(\frac{PVSCD_{it}}{FPI_t}\right) + \beta_3\log\!\left(\frac{PSUB_t}{FPI_t}\right) \\
&+ \beta_4\log\!\left(\frac{Publ_{it}}{CPI_t}\right) + \beta_5\log\!\left(\frac{F_t}{FPI_t \cdot Population_t}\right) + \beta_6\log\!\left(\frac{PL_t}{FPI_t}\right) + \beta_7\log(D_{it}) + \varepsilon_{it}.
\end{aligned}$$

In this model, the last three terms before the error term are the variables calculated by Owen's method, and the lagged value of price is used as an instrument for price.

Model estimation

We want to estimate the model described in the previous paragraph with panel data estimators. One of the explanatory variables, however, is the habit formation term. This causes general panel data estimators such as the fixed effects estimator, the pooled OLS estimator or the random effects estimator to be inconsistent. We therefore use dynamic panel data estimators to estimate the model and compare these estimators. The first estimator we have used is the first-differences IV estimator of Anderson and Hsiao (1981). This estimator estimates the model in first differences and uses yi,t-2 or yi,t-2 - yi,t-3 as an instrument for the first-differenced lagged dependent variable, yi,t-1 - yi,t-2. The second estimator is the GMM estimator of Arellano and Bond (1991). This estimator does not only use lagged dependent variables up to lag three as instruments, but uses all lags available. At period 5, for example, the Arellano-Bond estimator uses yi1, yi2 and yi3 as instruments for yi4 - yi3. Since the model is then overidentified, GMM estimation is appropriate. Both the Anderson-Hsiao and the Arellano-Bond estimator often encounter the problem of weak instruments. When the coefficient of the lagged dependent variable is close to 1, as can be expected for our model, problems with weak instruments often arise. The system GMM estimator deals with this problem (Blundell and Bond, 1999). This estimator assumes additional moment conditions from which it can be derived that lagged first-differenced dependent variables can be used as instruments for the lagged dependent variable in the equation in levels. Estimation is then based on a system of equations in first differences and in levels. If the additional moment conditions are valid, it can be shown that the system GMM estimator has a smaller finite sample bias than the other two estimators.
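To make the first of these estimators concrete, here is a minimal NumPy sketch of the Anderson-Hsiao idea on a balanced panel, instrumenting the differenced lag with the level two periods back. It is only an illustration under simplifying assumptions (balanced panel, exogenous regressors, a single instrument); the estimates reported below come from the Arellano-Bond GMM estimator on the actual, unbalanced data.

```python
import numpy as np

def anderson_hsiao(y, X):
    """Anderson-Hsiao (1981) first-difference IV estimator for
    y_it = alpha_i + beta1 * y_{i,t-1} + x_it' gamma + eps_it.
    y: (N, T) array, X: (N, T, K) array of exogenous regressors (balanced panel).
    Delta y_{i,t-1} is instrumented by the level y_{i,t-2}."""
    N, T = y.shape
    K = X.shape[2]
    dy = (y[:, 2:] - y[:, 1:-1]).ravel()          # Delta y_it for t = 3, ..., T
    dylag = (y[:, 1:-1] - y[:, :-2]).ravel()      # Delta y_{i,t-1}
    inst = y[:, :-2].ravel()                      # instrument: y_{i,t-2}
    dX = (X[:, 2:, :] - X[:, 1:-1, :]).reshape(-1, K)

    W = np.column_stack([dylag, dX])              # endogenous lag plus exogenous regressors
    Z = np.column_stack([inst, dX])               # instruments
    # 2SLS: beta = (W' P_Z W)^{-1} W' P_Z dy  with  P_Z = Z (Z'Z)^{-1} Z'
    PzW = Z @ np.linalg.solve(Z.T @ Z, Z.T @ W)
    return np.linalg.solve(PzW.T @ W, PzW.T @ dy) # first element: coefficient on the lag
```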


Results

When we estimated the model with the different estimators, we found the Anderson-Hsiao estimator to suffer from weak instruments. On the other hand, the additional moment conditions required for the system GMM estimator were rejected. The remaining estimator was the Arellano-Bond estimator, for which the required assumptions were not rejected by the test statistics used. The results of the estimation are given in Table 1.

Table 1: Estimation results of the model for the performing arts (Arellano-Bond estimates)

Variable                  Coefficient (t-value)
Demand (-1)               0.10**   (5.66)
Price                     -0.43**  (-13.80)
Price of leisure          -0.31**  (-4.86)
Full income               0.24**   (3.41)
Publicity                 0.12**   (12.34)
Price of substitution     1.29**   (5.85)
% low income              -0.18**  (-3.70)
Sargan test               27.22 (P = 0.75), Df = 33
m1                        1.96 (P = 0.05)
m2                        -1.54 (P = 0.12)
N                         240

Notes: T-values are given in parentheses next to the coefficients; ** means significant at the 5% level, * at the 10% level. m1 and m2 are LM tests of first- and second-order serial correlation in the first-differenced residuals, under the null of no serial correlation; p-values are given in parentheses. The Sargan test tests the null hypothesis of valid additional instruments and is χ2(l - k) distributed under the null; Df is the number of degrees of freedom.

The most interesting estimated coefficient for our research is that of the price of the performing arts. We find an estimate of -0.43, which means that demand is price inelastic; a 10% increase in price decreases demand for the performing arts by only 4.3%, so an increase in price also increases profits. Compared to earlier studies, our estimate seems to be in the middle region of the other estimates.



If we compare our estimate to that of the only known Dutch study on price elasticity, by Goudriaan and de Kam (1983), we have to conclude that they find a somewhat larger estimate, between -0.5 and -0.6. The price elasticity that can be concluded directly from the model is the short-run price elasticity, the first-year effect of a change in price. We can also derive the long-run price elasticity from this model, which is the effect on demand of a change in price after full adjustment of expenditure. The long-run price elasticity is equal to

$$\text{price}_{\text{long}} = \frac{\text{price}_{\text{short}}}{1 - \text{demand}_{t-1}} = \frac{-0.43}{1 - 0.10} = -0.48.$$

As expected, the long-run price elasticity is larger than the short-run price elasticity. An increase of 10% in price leads in the long run to a decrease in demand of 4.8%, while in the short run the decrease is only 4.3%. A long-run price elasticity of -0.48 is small compared to earlier research. We find an estimated coefficient of full income of 0.24, which means that the effect of income is inelastic and that an increase in income also increases the demand for the performing arts. The same holds for publicity, which has a positive coefficient of 0.12. The elasticity with respect to the price of substitution is elastic: a 10% increase in that price increases the demand for theater by 12.9%. As expected, the price of leisure has a negative sign; the estimate of -0.31 indicates that an increase in the price of leisure decreases the number of visitors of the performing arts. Habit formation has a small positive coefficient of 0.10. Previous research has indicated that habit formation plays a larger role in highbrow performing arts than in lowbrow performing arts. Our coefficient estimate seems to indicate that the type of performing arts played in the VSCD theaters is mostly lowbrow, while in reality the programming of these theaters is a mixture of the two types.

Conclusion

We find a somewhat small estimate for the price elasticity of the performing arts of -0.43. This small elasticity indicates that if the benefit principle were indeed introduced in the sector of the performing arts, the number of visits would only slightly decrease. Price does not seem to be a big barrier in the decision to visit a theater. Other factors such as income, the price of substitutes and less measurable influences like education and social background play a larger role.

Demand studies such as this one indicate that theaters can increase their profits by increasing their price. The cultural sector is not your typical economic sector, however. The goal is not making a profit, but attracting visitors to your performances. Increasing the price will certainly not contribute to this goal. Increasing the price while using the extra revenues to invest in publicity will attract more visitors. This fits the plans of Minister Plasterk, who wants theaters to find additional resources to finance performances and reach a larger crowd. The sector itself, however, does not believe in professionalizing the sector and likes to keep things as they are.

References

Anderson, T.W. and Hsiao, C. (1981). Estimation of dynamic models with error components, Journal of the American Statistical Association, 76, 598-606.

Arellano, M. and Bond, S.R. (1991). Some tests of specification for panel data: Monte Carlo evidence and an application to employment equations, Review of Economic Studies, 58, 277-297.

Blundell, R. and Bond, S.R. (1999). GMM estimation with persistent panel data: an application to production functions. Working Paper, London: Institute for Fiscal Studies.

Goudriaan, R. and de Kam, C.A. (1983). Demand in the performing arts and the effects of subsidy. In: W.S. Hendon et al. (eds.), Economic research in the performing arts. Akron: Association for Cultural Economics, 119-124.

Nijman, T. and Verbeek, M. (1992). Nonresponse in panel data: the impact on estimates of a life cycle consumption function, Journal of Applied Econometrics, 7, 243-257.

Owen, J.D. (1969). The price of leisure. Rotterdam: Rotterdam University Press.

Vereniging van Schouwburg- en Concertgebouwdirecties (2005). Podia. Amsterdam: Vereniging van Schouwburg- en Concertgebouwdirecties.

Withers, G.A. (1980). Unbalanced growth and the demand for performing arts: an econometric analysis, Southern Economic Journal, 46(3), 735-742.



You learn more... ...if you don't choose the biggest.

If you want to learn to sail well, you can do two things. You can step aboard a large sailing ship and learn everything about one particular part, such as the trim of the mainsail or the jib. Or you choose a somewhat smaller boat, on which you are soon at the helm and can set the course yourself. It works the same way with a starting position at SNS REAAL, the innovative and fast-growing provider of banking and insurance services. Whereas as a starter at a very large organisation you often get a fixed place with specific tasks, on board at SNS REAAL you can develop yourself across the full breadth of our organisation. That goes for our financial, commercial and IT positions, but just as much for our traineeships, in which you fill various positions at different departments, so that you gain more experience, learn more and grow faster. With a balance sheet total of € 83 billion and around 7000 employees, SNS REAAL is big enough for your ambitions and small enough for personal contact. The choice is yours: do you let others determine the course of your career, or would you rather be at the helm yourself? For more information about the starting positions and traineeships at SNS REAAL, see www.werkenbijsnsreaal.nl.



Econometrics

Cracking the marketing campaign design code

Designing a marketing campaign is like cracking the code of a safe. Multiple combination locks must be found and brute force typically does not work. The lock combinations for an effective marketing campaign are: (i) the right targets; (ii) the appropriate offer; (iii) the optimal channel; (iv) at the right time; and the profit is the contents of the safe. Simply following your gut feeling and deploying many campaigns is the kind of brute force that is in most instances ineffective. So, cracking the design code is a challenge that many companies are facing. Adding to this complexity are the increase in the number of marketing campaigns within an organisation, fragmented customer segments and more channels. In line with the theme of this special issue of Aenorm, I will focus on the research that I have conducted in the area of target selection.

Generally speaking, there are three key objectives for target selection: to acquire new customers, to sell more to existing customers, or to retain customers for a longer period. In each situation, the idea behind the target selection is basically the same: select those (potential) customers whose expected net profits are larger than the cost of mailing. It is of course needless to state that different rules should be applied to e-mailing, as the costs are negligible.

Expected profit

Defining the expected profit is not that straightforward, as one can approach it from different angles. The simplest approach is just to take the product's profit margin. This is basically a short-term approach, as one does not take into account the incremental revenues that might take place after the purchase. These revenues are driven by the loyalty effect. That is, loyal clients buy more and stay longer, which means that fewer marketing investments are needed to achieve the same net effects. The more advanced approach, on the other hand, captures these incremental profits based on lifetime value (LTV). It is a metric that quantifies the customer's long-term value to the firm. A simple but powerful definition of LTV is

LTV = margin * (1 + discount rate) / (1 + discount rate - retention)

In this formula, margin equals the total margin of the products that a customer purchased in a year; the discount rate is used to calculate the net present value of future purchases; and retention stands for the probability that a customer remains a customer for another year.

Hiek van der Scheer is an Expert Consultant in Marketing Intelligence at VODW Marketing. He works on developing growth strategies based on consumer insights. Prior to this, he worked for True Choice Solutions (New York), for Research International and for the Rabobank. He has a Ph.D. in Econometrics from the University of Groningen.

The difference between the two approaches to computing the expected profit becomes immediately apparent with a simple example. Suppose that the margin is € 100, the retention rate is 0.8 and the discount rate 0.1. The expected profit is then just € 100 under the simple approach, whilst it becomes € 367 under the LTV approach. The impact on the selection rule is also huge. In the simplest approach, if the cost of a marketing activity is, say, € 5 per customer, then the response probability should be at least 5% to break even (expected revenues = 5% × € 100). In contrast, a response probability of 1.36% would suffice if one uses LTV. Thus, if an organisation selects prospects for a marketing activity, it should include everyone with a response probability of 1.36% or higher when using LTV, and only those with a response probability of 5% or higher when only the margin is used.
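To make the arithmetic concrete, here is a minimal sketch of the two profit definitions and the resulting break-even response probabilities. The function names are ours, and the numbers are simply those of the example above.

```python
def lifetime_value(margin, retention, discount_rate):
    """LTV = margin * (1 + d) / (1 + d - r)."""
    return margin * (1 + discount_rate) / (1 + discount_rate - retention)

def break_even_probability(expected_profit, mailing_cost):
    """Smallest response probability for which the mailing pays for itself."""
    return mailing_cost / expected_profit

margin, retention, discount_rate, cost = 100.0, 0.8, 0.1, 5.0

simple_profit = margin                                   # short-term view
ltv_profit = lifetime_value(margin, retention, discount_rate)

print(f"LTV                      : {ltv_profit:.0f}")                                     # ~367
print(f"break-even (margin only) : {break_even_probability(simple_profit, cost):.2%}")    # 5.00%
print(f"break-even (LTV)         : {break_even_probability(ltv_profit, cost):.2%}")       # ~1.36%
```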


Improved selection

As direct marketing campaigns become more important, it becomes crucial for companies to include the most promising targets and reduce the 'waste' of marketing efforts. There are a number of ways to accomplish this:

• Advanced modelling.
• Optimal data usage.
• Testing and evaluating.

In its own way, each of these three elements enhances the intelligence of the target selection. In advanced modelling, the data is basically treated as given, whilst the marketing intelligence specialist focuses on selecting the optimal model. This is often a trade-off between descriptive and predictive (e.g. linear regression versus a neural network), restrictive and flexible (linear versus semi-parametric) and simple and advanced (linear regression versus a limited dependent variable model). Below, I will discuss a more advanced approach in more detail.

Although it gets little attention in the literature, exploiting the data is a crucial aspect of obtaining the optimal selection. Typically, an organisation has ample information about its customers (such as transaction data, contact information and demographic information) which must be transformed into manageable variables. For example, instead of knowing the date, place, amount, etc. of each transaction for a particular respondent, one might only want the number of transactions per month and the amount involved. It goes without saying that the quality of the final set of variables has a huge impact on the quality of the selection. As it is company and situation specific, it is hard to come up with a detailed approach that is generally applicable.

As mentioned before, the challenge of a campaign is to provide the right offer to the right consumer at the right time and through the appropriate channel. With consumer insights and gut feeling one might of course get a decent campaign; the downside is that it typically requires continuous testing and evaluating to get a grip on what works. The most common testing method is a split-run: customers are grouped in different segments and each segment is treated differently. Although useful, it becomes time consuming to test many different campaign aspects. Below I will focus on a structured approach to test many different options in one test environment.

Target selection by modelling the probability and the expected amount

For direct mail, three types of responses can be distinguished, depending on the offer made in the mailing. The first kind concerns mailings with fixed revenues (given a positive reply), such as subscriber mailings of a magazine, membership mailings, and single-shot mailings offering just one product. A second kind concerns mailings where the number of units ordered may

vary, e.g. the number of compact discs ordered through direct mail selling or the subscription length (a quarter of a year, half a year, a full year) of a magazine. Third, there are mailings with a response that can take on all positive values. This may involve total revenues in purchases from a catalogue retailer, or the monetary amount donated to a charitable foundation raising funds by mail.

Nearly all of the proposed target selection techniques deal with the case of fixed revenues given a positive reply and hence concentrate on binary choice modelling. The quantity of response is thus implicitly assumed to be equal amongst the individuals. Note, however, that many direct mailing campaigns do not generate simple binary responses, but rather a response where the quantity differs between individuals. In my PhD research, I specified various models that incorporate both the quantity and the probability of response. The optimal selection rule is of course based on these two aspects. That is, someone with a low response probability can still be an interesting target if the expected amount is relatively large. Similarly, someone with a large response probability but a small expected amount might not be an interesting target. In my research (Van der Scheer, 1998) I present various approaches to define a decision rule based on the expected amount and probability; here I will focus on three basic decision rules, each based on the break-even point (response probability times expected amount equals the cost of the mailing). A small sketch of these rules is given below.

1 The common approach, in which the amount is considered constant across the respondents. A customer is selected when the response probability times this average amount is larger than the cost of the mailing.
2 Modelling the expected amount instead of the probability. Here we use the average response probability for all customers, and someone is selected when the expected amount times the average response probability is above the break-even point. This approach is interesting, as it shows the importance of modelling the amount.
3 Modelling both the response probability and the expected amount.

The different methods were compared in an application based on data from a Dutch charitable foundation. This foundation relies heavily on direct mailing: every year it sends mailings to almost 1.2 million individuals in the Netherlands. The data sample consists of 40,000 observations. All individuals on the list have donated at least once to the foundation. A small part of this sample was used to estimate the models, whilst the remainder was used for validation.
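As a rough illustration of the three decision rules, the following sketch applies them to simulated estimates. It is not the author's original code: the estimated probabilities and amounts are placeholders that, in practice, would come from fitted response and amount models.

```python
import numpy as np

rng = np.random.default_rng(0)
p_hat = rng.uniform(0.01, 0.20, size=1000)            # estimated response probabilities
a_hat = rng.lognormal(mean=2.5, sigma=0.5, size=1000) # estimated expected amounts
cost = 0.50                                           # cost of one mailing

# Rule 1: constant (average) amount, modelled probability
select_1 = p_hat * a_hat.mean() > cost

# Rule 2: modelled amount, average probability
select_2 = p_hat.mean() * a_hat > cost

# Rule 3: both probability and amount modelled per individual
select_3 = p_hat * a_hat > cost

for name, sel in [("rule 1", select_1), ("rule 2", select_2), ("rule 3", select_3)]:
    print(name, "selects", int(sel.sum()), "of", len(sel), "individuals")
```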


In order to get robust results, I used the bootstrap method (estimating and validating the model on different samples of the same data) and used the average over the various bootstrap samples to compare the results. The results make clear that the current practice gives the lowest return – which is of course still much better than no selection at all. A great gain results from modelling the quantity of response (8-12% higher returns), even if the response probability is not modelled.

Optimising the characteristics of the mailing

Characteristics, or so-called communication elements, of the direct mailing package are an important aspect of the success of a direct mailing campaign. Those that are essential to the design of the mailing package relate to its form (size of the envelope, use of graphics, etc.) and to aspects of the contents (style of writing, use of testimonials, etc.). In order to be able to manipulate the characteristics of the mailing, the direct marketing manager needs to know to what extent the various characteristics of the mailing affect the response. Bult, Van der Scheer and Wansbeek (1996) propose a method to simultaneously optimise the characteristics of the mailing and select the right targets. The study's objective is two-fold. The first objective is to propose a method to improve the effectiveness of direct mail campaigns by creating an optimal mailing design. A traditional way of analysing the effect of several characteristics is to study each aspect separately. This is generally inefficient, as a large number of mailings is needed to achieve a certain reliability in the estimates of the effect of the characteristics on the response rate. Moreover, there is no opportunity to take interaction between the mailing characteristics into account. The second objective is to select the targets and optimise the mailing design simultaneously. That is, we incorporate the interaction between target and mailing characteristics in a target selection model.


A so-called conjoint field experiment is used to measure the extent to which various characteristics of a mailing component contribute to response rates and to the amount of the donation. The mailings used in the study had to be constructed by experimental design. However, instead of eliciting evaluations with respect to the constructed set (which is the traditional conjoint approach), each individual in the selected sample is confronted with only one of the experimentally varied mailings. By (randomly) sending each different mailing to a (large) group of respondents (a test mailing), the optimal characteristics can be determined on the basis of the response figures. Hence, while attractiveness is assumed to be the underlying factor of the response, it is not judged explicitly by the respondents. In the study we differentiated the following seven characteristics of the letter:

• The payment device (pre-printed giro check inviting payment) is either attached at the bottom of the letter or is enclosed in the envelope.
• A brochure, if enclosed, gives some background information on the foundation.
• The letter may contain an illustration at the top left, the top right, or not at all.
• Amplifiers are used to stress some information given in the letter, by using e.g. bold printing. There are either many, few, or no amplifiers present in the text of the letter.
• The Post Scriptum might contain a summary of the letter or some new information.
• The letter bears the signature of either the director of the foundation or a professor in health care research (in the Netherlands a professor's title carries a lot of weight).
• The address, shown through the window envelope, could either be printed on the letter or on the payment device.



The cost of the letter depends on the exact configuration of the letter. In the response model we included the characteristics of the letter, the characteristics of the customers, as well as interaction effects between these characteristics (e.g. the effect of enclosing the brochure could be small for loyal customers and large for relatively new customers). Based on the response model we determined for each customer the letter that generates the highest profit (based on the response probability and the cost of the mailing). First we looked for the mailings that were optimal for at least one customer. It turned out that 52 out of the possible 288 mailings survived this screening. As was to be expected, the number of customers for whom one particular mailing was optimal varied hugely, the highest numbers being 10,585, 6,157 and 5,365, whereas thirteen mailings were optimal for fewer than 10 individuals. Naturally this does not imply that such a fine-tuned, personalized mailing method would be optimal, since we disregard the fixed costs involved in such a complex system, which are huge. We know that for a mailing to be used, we must at least have a minimum number of individuals that receive this mailing, otherwise the fixed costs dominate the variable costs. In consultation with the manager of the organisation's mailing system, we put the lower bound on the number of individuals receiving a particular type of mailing at 1% of the database. We were then left with thirteen different mailings; the selection logic is sketched below.
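The following sketch illustrates the kind of per-customer optimisation and 1% lower bound described above. The response model, parameter values and data are placeholders of our own and should not be read as the model estimated in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_customers, n_designs = 5000, 288

customer_x = rng.normal(size=(n_customers, 3))          # customer characteristics
design_z = rng.normal(size=(n_designs, 4))              # coded mailing characteristics
beta_x, beta_z = rng.normal(size=3), rng.normal(size=4)
gamma = rng.normal(scale=0.2, size=(3, 4))              # interaction effects
design_cost = 0.4 + 0.05 * rng.random(n_designs)        # cost depends on configuration
avg_donation = 15.0

# expected net return of design j for customer i: p_ij * donation - cost_j,
# with a placeholder logistic response model including interactions
logit = (customer_x @ beta_x[:, None]
         + (design_z @ beta_z)[None, :]
         + customer_x @ gamma @ design_z.T)
prob = 1.0 / (1.0 + np.exp(-logit))
net_return = prob * avg_donation - design_cost[None, :]

best_design = net_return.argmax(axis=1)                 # optimal design per customer
counts = np.bincount(best_design, minlength=n_designs)
kept = np.flatnonzero(counts >= 0.01 * n_customers)     # enforce the 1% lower bound
print(len(np.flatnonzero(counts)), "designs optimal for someone;",
      len(kept), "survive the 1% rule")
```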

We summarise the findings by comparing the net returns for four mailing strategies. The single worst mailing is displayed only as a benchmark, to show that mailing characteristics do matter in terms of expected net returns. The second strategy is the mailing traditionally used by the organisation. This mailing had an attached payment device, a brochure, no illustration, no amplifiers, a Post Scriptum containing a summary, a signature of the managing director, and the address on the letter. This strategy would, in net returns for the test sample, yield € 388,180. The third strategy uses the single best mailing out of the 288 possibilities. This would yield € 443,647, or 14% more than the traditional strategy. If we were to use the thirteen top mailings, we would obtain net returns of € 484,515, or 25% more than the traditional strategy and 9% more than the single best strategy. Per individual, the last strategy would generate € 0.86 more than the single best mailing. Given that the database contains 1.2 million individuals, the expected increase in net revenues equals € 1,029,528.

Conclusion

Selecting the right target audience with the appropriate communication, at the right time and through optimal channels, continues to be a challenge for marketing managers. This article shows that with more advanced approaches and efficient testing, an organisation can get a better handle on the drivers of campaign effectiveness, which eventually allows it to crack the marketing campaign design code.

References

Bult, J.R., Van der Scheer, H.R. and Wansbeek, T.J. (1996). Interaction between target and mailing characteristics in direct marketing, with an application to health care fund raising. International Journal of Research in Marketing, 14, 301-308.

Van der Scheer, H.R. (1998). Quantitative Approaches for Profit Maximization in Direct Marketing. Ph.D. thesis, University of Groningen.



Interview

Interview with Tom Wansbeek

As of the first of September last year our faculty has a new dean in the person of Tom Wansbeek. After having spent a couple of years as the dean of the economics faculty of the Rijksuniversiteit Groningen (RUG), Tom Wansbeek returned to Amsterdam, where he had studied econometrics a long time ago. Altogether this made it inevitable for the Aenorm editors to visit the new dean and discuss some important issues regarding teaching, PhD applicants and his objectives as the new dean.

Could you tell us something about yourself and your personal life?

I studied econometrics at the UvA from 1965 till 1972, when it was found entirely normal to spend seven years on completing a study. I graduated as the third person from my class. After graduation I briefly worked with a foundation for transportation research. I next moved to the University of Leiden, teaching economics to law students. The teaching wasn't challenging: simple Keynesian models and the like. The second half of the seventies could be called a 'rich' time, without budget cuts, and there was a lot of time to do research. This allowed me to complete my PhD in Leiden, under the supervision of Bernard van Praag. After that, I was with the Central Bureau of Statistics and next moved on to the RUG. In the meantime I also spent two semesters at the University of Southern California in LA. In September of last year I started as the new dean of our faculty. I now have a long-distance relationship with my wife, spending the working week in Amsterdam and the weekends in Groningen. For the time being I will keep the situation as it is, and we will see how things develop.

You have been away from our university for more than thirty years. Do you think it has changed a lot during that time?

Let me put it in a more general context. The received opinion in the Netherlands is that we are good in football but that our universities are in trouble. It is just the other way around. The Dutch universities are, on average, the best in the EU, and let us forget about football! This applies to research; teaching is harder to measure, but the insiders' opinion is that it is also quite good from an international perspective. This success is due to wise policy measures, like the introduction of a new system for PhD students, a system of quality control, and abolishing much of the internal democracy, allowing for a better strategic management. As a result of this, the faculty has changed dramatically, in the good direction.

Econometrics is quite broad, which field is your favourite? And why? And what about research, will you continue doing research next to your function as a dean?

Like any researcher, I think that my own area of research is the most beautiful field in the world. My area is micro-econometrics, which is about methods for handling cross-sectional data. Why do I like it? That is mainly accidental; it just happens that I found the area interesting and was able to do fruitful research there. For the time being, however, I will unfortunately be too busy to do research seriously. Doing research requires that you are able to stare out of the window for hours and just think and concentrate. As a dean I always have a hundred things on my plate, some of which need urgent attention. And that combines poorly with doing research. But I am always optimistic, so I hope to have more opportunity for research in the future! The only thing that keeps me in touch with research at this moment is my membership of the editorial board of the Journal of Econometrics, where I handle manuscripts that are being submitted for publication. I have to find referees for those manuscripts, usually three per manuscript. These referees give their opinion, and afterwards I come to an overall position as to whether something can be published.

It appears to be the case that the demands regarding teaching have become more stringent in the U.S., a development that has especially occurred within the Ivy League. Do you perhaps know anything about this and if so, what are your thoughts on this matter?

I am not aware of this development but it reminds me of something that is happening here. Over the last twenty years the pressure to




perform well in research has increased quite a bit here, and this has distracted people somewhat from their teaching. The pendulum is swinging back a little now, and there is growing attention for the quality of teaching. For example, we are phasing in a thing called BKO, BasisKwalificatie Onderwijs, a kind of driver's license in the field of teaching that will be a prerequisite for teaching at our university. The qualification has to be acquired within two years from starting as a 'universitair docent' and takes about 270 hours to complete.

During your time as a student you probably made extensive use of the program Eviews. As it happens, we came across the Wansbeek-Kapteyn quadratic unbiased estimators. Could you tell us something more about this?

When I was a student Eviews was something in a distant future. There was the possibility to use a mainframe computer, but to do so you had to take your bike and go to the Mathematisch Centrum in the Tweede Boerhaavestraat. Programming was not in the curriculum. I wanted to learn how to do it, so I took a course in Algol, a precursor of Pascal, and I had to pay the 70 guilders that it cost out of my own pocket. I am flattered to hear that my work with Arie Kapteyn is incorporated in Eviews, I didn't know that! Arie is a great man, and I am happy to have collaborated extensively with him. Incidentally, apart from our econometric work we changed the economic scenery in the Netherlands somewhat by establishing a Top-40 for economists in the eighties. A position on this list was based on publications in international journals. The general message we wanted to convey was to emphasize the importance of peer-reviewed publications. This is now commonplace but at the time the message needed to be spread. Returning to the Wansbeek-Kapteyn methods, they had to do with handling panel data, where we made some contributions on the linear algebra. Some of our work pertains to handling missing data, which frequently occur in practice.

At the moment, if you want to become a PhD candidate in econometrics, you have to leave the econometrics program after finishing the bachelor and do the two-year Research Master at the Tinbergen Institute, instead of following the regular one-year Econometrics master, although it is also research-oriented. That master may even be better if you want to obtain a PhD in theoretical econometrics. How do you think about this matter? Do you think it is okay that the Tinbergen Institute has a monopoly position on providing the PhD candidates? And what do you think about the

fact that they take away the best students from our Econometrics master, although our master is probably more valuable for them and, vice versa, those students are more valuable for our master program?

The issue that you indicate has been hotly debated for at least five years and badly needs a solution. As is often the case, both sides have a point. The Tinbergen Institute has built a worldwide reputation over the last twenty years, which should be carefully maintained, and the monopoly position that you mention certainly helps. On the other hand, as you also indicate, the Econometrics master programs in the Netherlands are of a very high level and offer a good point of departure for a PhD. The solution I envisage is to widen the scope of the discussion. Our faculty should have more PhDs than is presently the case, and we should see how we can stimulate this development. This requires a discussion about a Research master program in Business, which we don't have, but also a discussion about incentives for people to get outside funding for PhDs. In fact, there are many more elements involved, and I guess that when we develop a good policy we will discover that the barbarians are not at the gate when we somehow relax the position relative to the Tinbergen Institute.

The number of Dutch students who move on to a PhD is very limited and we have to get our PhDs from abroad. Do you have any ideas and the ambition to put studying Econometrics at the UvA internationally on the map?

Econometrics programs as we have them in the Netherlands are fairly unique in the world. Given their quality and international reputation, they are ideal products to sell on the world market, especially when they are in Amsterdam. In Groningen, which is harder to sell than Amsterdam, the internationalization of Econometrics was quite successful. I remember a course in Microeconometrics that I taught with eleven students, coming from six countries, including the Philippines and Uganda. The students were quite good and I found it very stimulating. This was at the master level, but we started offering the bachelor in English, too, with some success. But then I left Groningen and I don't know how they are doing internationally. There is no reason why it couldn't be successful, and we really should push hard here in Amsterdam.

During your time as a student in econometrics, you actively participated in our study association, the VSAE. What kind of things did you do and organise for the VSAE?

I was a board member of the VSAE in the very


early years, I think 1967-1968, but after forty years I do not remember my task description anymore. As to the activities we organized, there was a monthly drink and we also organized some excursions, including one to London. There was not much money floating around, since we hardly had any sponsorships and were not involved in recruitment. The activities are much more sophisticated and professional now, like the Econometric Game and the congresses. By contrast, we were involved in the discussions about university democracy. At that time that was a hot topic although, with all these kind and well-meaning people, staff members and students alike, the discussion always remained friendly. And at some point in time it just evaporated.

Have you been keeping an eye on the VSAE while you were in Groningen and if so, what do you think about the developments the VSAE has gone through over the years? For example projects like the Econometric Game and AENORM?

During my many years outside Amsterdam I lost sight of the VSAE, I must admit. I may have received some mail, but when you are in a leading position at another university you consider your alma mater as a competitor rather than a place where your emotions lie.

What do you think about the VSAE in general?

I must say that my links with the VSAE are somewhat limited. I had a look at the VSAE internet site and was impressed by the quality and professionalism. I was involved in the recent Econometric Game, as the case maker, which I greatly enjoyed.

At the RUG you combined your function as the dean with some teaching. Would you like to do something similar at the FEB? Can we expect some master course taught by you next year?

I used to teach a course in micro-econometrics at the RUG and I do hope to have the chance again to do some teaching here in Amsterdam. After all, the university is for the students. For a dean, it is good to be involved in teaching in order to have direct contacts with students, and as a professor I want to tell other people about the beauty of my field.

To what extent are you confronted with econometrics in your function as dean, besides being the case maker of the Econometric Game 2008?

As I said, I don't do econometric research


myself at this moment. That is something I really miss. At the same time, many managerial issues that I have to handle are quite complicated and have many elements, and thinking about them and trying to find the best solution often requires the same kind of analytical thinking that you do in research. So I have a weak substitute, at least!

What are the goals you have set yourself as dean of the FEB?

I want to further improve the academic standing of our faculty, through hiring the best people and improving the quality of the teaching. After all, this is a matter of money, and funding the faculty's expansion while maintaining its financial health will be quite a challenge. So, for example, we will have to expand our activities on the market through our non-subsidized programs and will try to attract more students from outside Europe, because we are free to set the tuition fee ourselves. These kinds of activities bring us money and at the same time make our faculty more exciting.


Actuarial Sciences

Comparing approximations for sums of dependent random variables

Portfolios of insurers consist of many insurance policies that may or may not lead to the payment of a claim in a certain period of time. Since the amount of the possible claims is uncertain at the beginning of the period, all these policies are risks for the insurer. For mathematical convenience, the amounts of the claims are often assumed to be independent, even though this assumption is not very realistic. As a consequence of the absence of independence, actuaries need sophisticated methods and tools for modelling dependent risks. The theory of copulas and comonotonicity can be useful in modelling these dependencies among risks.

Comonotonicity and independence

Consider the situation that one needs to model a random variable of the type S = Σ_{i=1}^n X_i, where the multivariate distribution function of the random variables X_i is unknown or not completely specified. A possibility is to assume that the random variables X_i are mutually independent, which implies that the value of one random variable does not influence the values of the other random variables. It may also be helpful to approximate the dependence structure of the random variables X_i by the least favourable one, namely the comonotonic dependence structure. A subset A of R^n is called comonotonic if for any x = (x1, …, xn) and y = (y1, …, yn) in A either x_i ≤ y_i for all i = 1, …, n, or y_i ≤ x_i for all i = 1, …, n. Hence, a comonotonic set is simultaneously non-decreasing in each component. When the random vector X = (X1, …, Xn) has a comonotonic support it is called comonotonic. This means that every two possible outcomes are ordered component by component: the higher (lower) the value of one of the components of a comonotonic random vector, the higher (lower) any of the other components. Applying the probability integral transform theorem to the random vector X gives

X ~ (F_{X1}^{-1}(U1), F_{X2}^{-1}(U2), …, F_{Xn}^{-1}(Un)),

where '~' stands for equality in distribution and the U_i are standard uniform random variables. Comonotonicity can be characterized by the sum of the components of the random vector X above, with U1 ≡ U2 ≡ … ≡ Un ≡ U:

S = Σ_{i=1}^n X_i ~ Σ_{i=1}^n F_{Xi}^{-1}(U) = S^c.

Carlo Jonk starts the Master of Actuarial Sciences at the University of Amsterdam in September 2008. In May 2007 he joined Triple A – Risk Finance. This article is a summary of the bachelor thesis he wrote under the supervision of Prof. R. Kaas and Prof. M.J. Goovaerts.

The comonotonic sum S^c replaces the dependence structure of the components of the sum by the most dangerous one, namely a Kendall's correlation equal to 1 between the components of the sum. Several important actuarial quantities of S^c, such as quantiles and stop-loss premiums, exhibit an additivity property in the sense that they can be expressed as a sum of corresponding quantities of the marginals involved. For the quantiles it holds that

F_{S^c}^{-1}(p) = Σ_{i=1}^n F_{Xi}^{-1}(p),    0 < p < 1.

The VaR of S using the comonotonic approximation is easy to evaluate in view of this expression. Consider the sum of the two lognormal random variables X1 ~ LNor(0, 1) and X2 ~ LNor(1, 0.8). When these random variables are assumed to be independent, the distribution function of their sum S at threshold s can be calculated using the convolution formula

F_S(s) = ∫_{-∞}^{s} F_{X1}(s − t) f_{X2}(t) dt.
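A minimal numerical sketch of the two cases follows, assuming that LNor(μ, σ²) denotes a lognormal whose log has mean μ and variance σ² (the parameterisation is our assumption). The comonotonic VaR is obtained by adding marginal quantiles; the independent one is obtained here by plain Monte Carlo rather than numerical convolution.

```python
import numpy as np
from scipy.stats import norm

mu = np.array([0.0, 1.0])
sigma = np.sqrt(np.array([1.0, 0.8]))

def var_comonotonic(p):
    # quantile additivity: F_{S^c}^{-1}(p) = sum_i F_{X_i}^{-1}(p)
    z = norm.ppf(p)
    return np.sum(np.exp(mu + sigma * z))

def var_independent(p, n=1_000_000, seed=0):
    # plain Monte Carlo under independence
    rng = np.random.default_rng(seed)
    s = np.exp(mu[0] + sigma[0] * rng.standard_normal(n)) \
      + np.exp(mu[1] + sigma[1] * rng.standard_normal(n))
    return np.quantile(s, p)

for p in (0.95, 0.99):
    print(p, "independent:", round(var_independent(p), 2),
             "comonotonic:", round(var_comonotonic(p), 2))
```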

Figure 1 shows the distribution functions under the assumption of comonotonicity and independence. This graph shows that the distribution function


under comonotonicity has thicker tails than the distribution function under independence. For high levels of α, the VaR at level α for the random variable S is therefore higher under comonotonicity than under independence. Note that, given the finiteness of the expectations of the random variables X1 and X2, the distribution functions in Figure 1 have to intersect at least once, because the comonotonic and the independent sum have the same expectation. Making decisions based on the comonotonic sum instead of the real sum is a prudent strategy in the framework of expected utility theory. This is because the comonotonic sum is 'convex larger' than the real sum, indicating that a risk averse decision maker prefers paying the real sum to paying its comonotonic counterpart.

Figure 1: Distribution functions of the sum under comonotonicity and under independence.

Copulas

The dependence between the real-valued random variables X1, …, Xn of a random vector X is completely described by their joint distribution function F_X. The idea of separating F_X into a part which describes the dependence structure and parts which describe the marginal behaviour has led to the concept of a copula. In words, an n-dimensional copula function C: [0,1]^n → [0,1] is defined as the joint distribution function of the ranks of the marginal random variables. The copula function C satisfies

F_X(x1, …, xn) = Pr[X1 ≤ x1, …, Xn ≤ xn]
             = Pr[F1(X1) ≤ F1(x1), …, Fn(Xn) ≤ Fn(xn)]
             = C(F1(x1), …, Fn(xn)),

where F_i(X_i) is the rank of random variable X_i. In the literature, the above representation is often referred to as Sklar's Representation Theorem. A class of copulas, the so-called Archimedean copulas, are of the form

C_θ(u1, …, un) = φ^{-1}(φ(u1) + … + φ(un)),

with φ: [0,1] → [0,∞) a strictly decreasing, convex function satisfying φ(0) = ∞ and φ(1) = 0, known as the generator of the copula. For Archimedean copulas Kendall's correlation coefficient can be calculated as (Nelsen, 1999, p. 130)

τ_k(X1, X2) = 1 + 4 ∫_0^1 φ(t)/φ'(t) dt.

Consider the bivariate case with the generator function φ(t) = (−ln t)^θ, which leads to the Gumbel copula

C^Gu_θ(u1, u2) = exp{−((−ln u1)^θ + (−ln u2)^θ)^{1/θ}},    1 ≤ θ < ∞.

If θ = 1 the independence copula C(u,v) = u·v arises, and the limit as θ → ∞ gives the two-dimensional comonotonicity copula C(u,v) = min{u,v}. Thus, the Gumbel copula interpolates between independence and perfect dependence, and the parameter θ represents the strength of the


dependence. Kendall's correlation for the Gumbel copula is given by (McNeil, Frey and Embrechts, 2005, p. 222)

τ_k(X1, X2) = 1 − 1/θ,

which shows that Kendall's correlation approaches 1 as θ → ∞. As a second example take the generator function φ(t) = (t^{−θ} − 1)/θ, which, in the bivariate case, gives the Clayton copula

C^Cl_θ(u1, u2) = (u1^{−θ} + u2^{−θ} − 1)^{−1/θ},    0 < θ < ∞.

In the limit as θ → 0 the independence copula arises, and the two-dimensional comonotonicity copula is obtained in the limit as θ → ∞. For the Clayton copula, Kendall's correlation is



(McNeil, Frey and Embrechts, 2005, p. 222)

τ_k(X1, X2) = θ/(θ + 2),

from which one can see that Kendall's correlation approaches 1 as θ → ∞ and 0 in the limit as θ → 0. One has to keep in mind that the restriction on the parameter of the copula in question also limits the attainable values for Kendall's correlation. For the Gumbel and Clayton copulas, the parameter is restricted to θ ≥ 1 and θ > 0 respectively. Given the expressions for their Kendall's correlation, it follows that these copulas can only handle positive Kendall's correlations.

The following algorithm is fully discussed in Embrechts and Puccetti (2007). First note that the probability that the random variables X1 and X2 take values in a rectangle (a1, b1] × (a2, b2] can be calculated using the copula associated with these random variables:

Pr[X1 ∈ (a1, b1], X2 ∈ (a2, b2]] = C(F_{X1}(b1), F_{X2}(b2)) − C(F_{X1}(a1), F_{X2}(b2)) − C(F_{X1}(b1), F_{X2}(a2)) + C(F_{X1}(a1), F_{X2}(a2)).
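A small sketch of this inclusion-exclusion formula, with a Clayton copula and the lognormal marginals from the earlier example as illustrative stand-ins (the rectangle bounds are arbitrary, and the lognormal parameterisation is again our assumption):

```python
import numpy as np
from scipy.stats import lognorm

def clayton(u, v, theta=2.0):
    # bivariate Clayton copula
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def rectangle_prob(C, F1, F2, a1, b1, a2, b2):
    # Pr[X1 in (a1,b1], X2 in (a2,b2]] via the copula
    return (C(F1(b1), F2(b2)) - C(F1(a1), F2(b2))
            - C(F1(b1), F2(a2)) + C(F1(a1), F2(a2)))

F1 = lognorm(s=1.0).cdf                               # X1 ~ LNor(0, 1)
F2 = lognorm(s=np.sqrt(0.8), scale=np.exp(1.0)).cdf   # X2 ~ LNor(1, 0.8)

print(rectangle_prob(clayton, F1, F2, a1=0.5, b1=2.0, a2=0.5, b2=3.0))
```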

By defining the set I(s) = {(x1, x2) ∈ [0,∞)² : x1 + x2 ≤ s}, the probability Pr[X1 + X2 ≤ s] can be written as

Pr[X1 + X2 ≤ s] = F_{X1+X2}(s) = Pr[(X1, X2) ∈ I(s)].

In order to compute this probability a numerical procedure is used, based on the fact that it is possible to approximate the region I(s) with a countable union of disjoint rectangles. Graphically, the rectangle [0,s] × [0,s] is iteratively divided into smaller rectangles Q_i(s), as shown in Figure 2. Iterating n times and adding the probabilities over all Q_i(s) one gets

P_n(s) = Σ_{i=1}^n Pr[(X1, X2) ∈ Q_i(s)].

Passing this expression to the limit gives

lim_{n→∞} P_n(s) = Pr[(X1, X2) ∈ I(s)] = F_{X1+X2}(s).

So calculating the distribution function of the random variable X1 + X2 at a given level of accuracy is a matter of computing P_n(s) for n large enough. A simple rule of thumb is to run the algorithm repeatedly for increasing n until P_n(s) increases by less than a fixed tolerance level ε.

Figure 2: The set I(s) is approximated by a countable union of rectangles.

The Monte Carlo approach

Now look at Monte Carlo simulation techniques for determining the (empirical) distribution function of the sum of the lognormal random variables. Suppose an Archimedean copula is used for the simulation. Wu, Valdez and Sherris (2007) present a method for simulating random drawings from multidimensional Archimedean copulas. In the two-dimensional case, the algorithm is as follows:

1) Simulate two independent Unif[0,1] random variables, say s and w.
2) Set t = K_C^{-1}(w), where K_C(t) = t − φ(t)/φ'(t).


3) Set u1 = φ^{-1}(s·φ(t)) and u2 = φ^{-1}((1 − s)·φ(t)).

Figure 3 shows typical scatterplots of one thousand drawings from a two-dimensional Gumbel copula and a two-dimensional Clayton copula. The two plots with a Kendall's correlation of 0.8 show the strong dependence between u1 and u2. Besides that, the simulated points from the Gumbel copula are more clustered in the upper tail than those from the Clayton copula, while the points from the Clayton copula are more clustered in the lower tail. The quantile transformation vector X = (F_{X1}^{-1}(u1), F_{X2}^{-1}(u2)) has the desired marginal distributions and Kendall's correlation. Summing the elements of this vector for many simulations and determining the empirical distribution function, one gets an estimate of the distribution function of the sum of two dependent random variables. VaRs can be determined using this empirical distribution function. A more precise estimate of the VaRs of the sum of X1 and X2 can be obtained by performing this estimation many times and averaging the obtained VaRs.

Figure 3: Simulation of size 1000 from a Gumbel copula and a Clayton copula with two different Kendall's correlations.

Table 1: VaRs for the sum of X1 and X2 for different copulas and Kendall's correlations. Standard errors of the Monte Carlo estimates are in parentheses.

Method               VaR    Kendall's correlation coefficient
                            0.05            0.2             0.5             0.8             0.95
MC-Clayton copula    1%     0.747 (0.026)   0.452 (0.019)   0.452 (0.016)   0.440 (0.014)   0.439 (0.014)
                     5%     1.273 (0.022)   1.052 (0.021)   0.855 (0.017)   0.820 (0.017)   0.818 (0.017)
                     50%    4.320 (0.039)   4.282 (0.041)   4.095 (0.048)   3.787 (0.044)   3.725 (0.043)
                     95%    14.610 (0.245)  14.968 (0.261)  15.805 (0.268)  16.993 (0.273)  17.118 (0.326)
                     99%    24.809 (0.750)  25.333 (0.710)  26.393 (0.692)  28.587 (0.804)  31.462 (0.883)
MC-Gumbel copula     1%     0.822 (0.022)   0.728 (0.021)   0.575 (0.017)   0.467 (0.015)   0.441 (0.015)
                     5%     1.321 (0.021)   1.197 (0.019)   0.994 (0.018)   0.853 (0.016)   0.820 (0.015)
                     50%    4.276 (0.038)   4.088 (0.039)   3.846 (0.040)   3.731 (0.046)   3.715 (0.480)
                     95%    14.721 (0.228)  15.380 (0.301)  16.384 (0.297)  16.891 (0.311)  16.997 (0.455)
                     99%    25.631 (0.784)  27.998 (0.929)  30.680 (1.102)  31.786 (1.071)  31.933 (1.206)
E/P-Clayton copula   1%     0.750           0.553           0.458           0.453           0.452
                     5%     1.274           1.052           0.855           0.844           0.832
                     50%    4.322           4.288           4.096           3.786           3.722
                     95%    14.644          14.987          15.808          16.871          17.009
                     99%    24.945          25.390          26.556          28.731          31.571
E/P-Gumbel copula    1%     0.821           0.729           0.573           0.467           0.458
                     5%     1.321           1.198           0.993           0.853           0.820
                     50%    4.265           4.096           3.847           3.733           3.719
                     95%    14.742          15.623          16.418          16.944          17.021
                     99%    25.703          28.055          30.899          31.934          32.070
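The following sketch implements the sampling algorithm above for the Clayton and Gumbel generators and computes an empirical VaR of X1 + X2. It is an illustration under our own assumptions: the θ values are set via the Kendall's tau relations given earlier, the lognormal parameterisation is assumed as before, and the K_C inverse is obtained by simple root finding rather than any scheme from the original thesis.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def clayton_gen(theta):
    phi = lambda t: (t ** -theta - 1.0) / theta
    phi_inv = lambda y: (1.0 + theta * y) ** (-1.0 / theta)
    K = lambda t: t + (t - t ** (theta + 1.0)) / theta      # t - phi(t)/phi'(t)
    return phi, phi_inv, K

def gumbel_gen(theta):
    phi = lambda t: (-np.log(t)) ** theta
    phi_inv = lambda y: np.exp(-y ** (1.0 / theta))
    K = lambda t: t - t * np.log(t) / theta
    return phi, phi_inv, K

def sample_pair(gen, n, rng):
    # steps 1-3 of the algorithm above
    phi, phi_inv, K = gen
    s, w = rng.random(n), rng.random(n)
    t = np.array([brentq(lambda x: K(x) - wi, 1e-12, 1 - 1e-12) for wi in w])
    return phi_inv(s * phi(t)), phi_inv((1.0 - s) * phi(t))

def var_of_sum(gen, p=0.99, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    u1, u2 = sample_pair(gen, n, rng)
    x1 = np.exp(0.0 + 1.0 * norm.ppf(u1))                   # X1 ~ LNor(0, 1)
    x2 = np.exp(1.0 + np.sqrt(0.8) * norm.ppf(u2))          # X2 ~ LNor(1, 0.8)
    return np.quantile(x1 + x2, p)

tau = 0.5
print("Clayton:", var_of_sum(clayton_gen(2 * tau / (1 - tau))))  # theta = 2*tau/(1-tau)
print("Gumbel :", var_of_sum(gumbel_gen(1.0 / (1 - tau))))       # theta = 1/(1-tau)
```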

Comparing approximations

Assume estimates of the VaRs are needed for the sum of the lognormal random variables X1 and X2 with a certain Kendall's correlation. The parameters of the copulas can be set in such a way that Kendall's correlation equals a given value. Only positive correlations are considered, since the Gumbel and Clayton copulas can only handle positive Kendall's correlations. Comparison of the VaRs should give a clear idea of the effect of positive correlation on the VaR of the sum of the two lognormal random variables. Table 1 shows these VaRs; standard errors of the estimates are given in parentheses. First of all, Table 1 shows the consistency between the Monte Carlo method and the Embrechts & Puccetti algorithm: the VaRs calculated using Monte Carlo analysis are on average the same as those calculated using the Embrechts & Puccetti algorithm. Moreover, for each copula, the VaRs for thresholds 95% and 99% increase as Kendall's correlation increases, and the VaRs for thresholds 1%, 5% and 50% decrease. Now look in Table 1 at the VaRs for thresholds 95% and 99% using the Embrechts & Puccetti algorithm; these are the VaRs of most interest from a risk management point of view. For every value of Kendall's correlation and




threshold, the VaR using the Gumbel copula is larger than the VaR using the Clayton copula. The concept of tail dependence explains why this is the case. Roughly speaking, tail dependence implies that there is much more of a tendency for X2 to be extreme when X1 is extreme, and vice versa. The Gumbel copula has tail dependence in the upper tail, while the Clayton copula has tail dependence in the lower tail (McNeil, Frey and Embrechts, 2005, p. 222). Because of this difference in tail dependence, the choice of the copula does influence the shape of the distribution function.

The algorithm of Embrechts & Puccetti is only suitable for modelling the sum of two dependent random variables. However, the method is exact and flexible in the choice of the marginal distributions and the choice of the copula. Monte Carlo techniques, on the other hand, are flexible in the number of random variables to be modelled. But, as shown in Table 1, this method always gives some error in estimating VaRs, and calculation times may also become problematic. When, for example, thresholds for sums of one hundred possibly dependent random variables are needed, the comonotonicity assumption may provide a solution. The VaR based on comonotonicity is an upper bound for the real VaR for high thresholds, and a lower bound for low thresholds. This can be seen from Table 1, because the VaRs approach the VaRs based on comonotonicity as Kendall's correlation increases. Assuming comonotonicity, apart from saving a lot of time and model building, leads to safe decisions from a risk management point of view.

References

Embrechts, P. and Puccetti, G. (2007). Fast computation of the distribution function of the sum of two dependent random variables. http://www.math.ethz.ch/~baltes/ftp/EP08.pdf, 18 April.

McNeil, A., Frey, R. and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press.

Nelsen, R.B. (1999). An Introduction to Copulas. Lecture Notes in Statistics, 139. New York: Springer.

Wu, F., Valdez, E.A. and Sherris, M. (2007). Simulating exchangeable multivariate Archimedean copulas and its applications. Communications in Statistics - Simulation and Computation, 36, 1019-1034.



Econometrics

A State Space Approach for Constructing a Repeat Sales House Price Index

Many individuals and institutions are interested in the development of house prices. The repeat sales model provides a way to measure this by the construction of a house price index. The traditional variants of the model have in common that they do not impose a time structure, which has a couple of disadvantages. In this article we will consider the Local Linear Trend repeat sales model, which provides a more reliable price index. Thereafter the hierarchical repeat sales model is discussed. Both models have been applied to a database of the Dutch Land Registry Office (Kadaster) containing selling prices of homes sold in the Netherlands in the period 1993-2007.

Willem-Jan de Goeij holds a Bachelor and a Master degree in Econometrics (both cum laude). Since 2004 he has been a member of the Internet committee of Kraket. This article is a summary of the master thesis he wrote under the supervision of Prof. dr. S.J. Koopman and dr. M.K. Francke.

Introduction

Real estate is commonly divided into two classes: residential and nonresidential properties. Focusing on residential real estate, the Dutch National Statistics Bureau estimated its total value in the Netherlands at € 1,523 billion for 2007. Thus on a macro level it forms a large component of economic wealth. This holds on a micro level as well, since for many households the possession of a house is the single most important asset in their portfolios. Therefore, private households' real estate investments are a key element in private pension schemes and optimal portfolio composition. Important questions that arise concern the risks which are involved when investing in real estate and the degree of inflation hedging of real estate investments. Potential house buyers, sellers and developers of new houses are all interested in these issues. Banks, too, want to know more about the risk of real estate, since they use houses as collateral for mortgages. A house price index can provide valuable insights to answer these questions.

The repeat sales model

A way to measure the development of house prices is with the repeat sales model. It is particularly useful when limited information is available. In fact, only the selling prices and the dates of sale are required. The model was introduced by Bailey, Muth and Nourse (1963). It is based on price changes between pairs of sales of the same house. This means that only houses which have been sold more than once are considered. The model of Bailey et al. is as follows:

y_it = μ_i + β_t + ε_it,    ε_i ~ N(0, σ²I),

with
i : 1, …, M;
t : 1, …, T;
y_it : log selling price of house i at time t;
μ_i : constant term of house i;
β_t : log price index at time t, β_1 = 0;
n_t : number of transactions at time t;
N : Σ_{t=1}^T n_t.

Since β_t is a dummy variable we have to restrict β_1 to zero to prevent a dummy trap.

The Weighted Repeat Sales model of Case and Shiller (1987) is a modification of the model of Bailey et al. They argue that as time goes by we become more uncertain about the house specific effect μ_i. Therefore they add a Gaussian random walk to the model:

y_it = μ_i + β_t + u_it + ε_it,    ε_t ~ N(0, σ²I),
u_{i,t+1} = u_it + v_it,           v_t ~ N(0, q_v σ²I).



The disturbances εs and vt are assumed to be independent for all s and t.
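As an illustration of the basic dummy-variable approach, the following sketch estimates the Bailey et al. index by OLS on simulated repeat-sales pairs, using the −1/+1 first-difference design described for the stacked model below. It ignores the random walk component, and all data are simulated placeholders rather than Kadaster data.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 12                                   # number of periods, beta_1 = 0
true_beta = np.concatenate([[0.0], np.cumsum(rng.normal(0.01, 0.02, T - 1))])

# simulate repeat-sales pairs: first sale in period s, second sale in period t
n_pairs = 2000
s = rng.integers(1, T, n_pairs)
t = np.array([rng.integers(si + 1, T + 1) for si in s])
dy = true_beta[t - 1] - true_beta[s - 1] + rng.normal(0, 0.05, n_pairs)  # log price ratios

# design matrix: -1 in column s-1, +1 in column t-1 (column for period 1 dropped)
X = np.zeros((n_pairs, T - 1))
rows = np.arange(n_pairs)
mask = s > 1
X[rows[mask], s[mask] - 2] = -1.0
X[rows, t - 2] = 1.0

beta_hat, *_ = np.linalg.lstsq(X, dy, rcond=None)
index = np.exp(np.concatenate([[0.0], beta_hat]))       # price index, period 1 = 1
print(np.round(index, 3))
```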



The model of Goetzmann and Spiegel (1995) is an extension of the Case and Shiller model. Around the time of sale it is likely that people who either sell or buy a home make improvements, for example by repainting the house, by installing a new kitchen or bathroom. This causes an increase in the value of the house which is independent of time and as a result the price index will be misspecified. Therefore they propose to add a nontemporal return to the model:

yit = μi + βt + (lit − 1)γ + uit + εit,    εt ~ N(0, σ²I),
ui,t+1 = uit + vit,    vt ~ N(0, qv σ²I),

with: lit : the number of times house i has been sold at time t; γ : non-temporal return.

Estimation

We will first discuss the estimation of the model without the Gaussian random walk. This model corresponds to:

yit = μi + βt + (lit − 1)γ + εit,    εt ~ N(0, σ²I).

To estimate this model we write it in 'first differences', which gives

yit − yis = γ + βt − βs + εit − εis.

This equation can be stacked and written as

ỹ = ιγ + X̃β + ε̃ = Aδ + ε̃,    ε̃ ~ N(0, σ²Ω),

where ỹ is an (N − M) × 1 vector of log returns, ι is a vector of ones, X̃ is an (N − M) × (T − 1) matrix, A = [ι X̃] and δ = (γ; β)′. Each row of the matrix X̃ has the value −1 in the (s − 1)th column, the value 1 in the (t − 1)th column and zeros elsewhere, if the initial sale was in period s > 1 and the final sale in period t for the corresponding pair of transactions. When the initial sale happened to be in period 1, the corresponding row in the matrix X̃ has only the value 1 in the (t − 1)th column and zeros elsewhere. The covariance matrix Ω is block diagonal, since correlations between different houses are assumed to be zero. When a particular house i has been sold four times, for example, the covariance matrix is given by

Ωi =  [  2  −1   0 ]
      [ −1   2  −1 ]
      [  0  −1   2 ].

The parameters δ and σ² can then be obtained by Generalized Least Squares (GLS). The estimates of the parameters μ1,…,μM are given by the average of the corrected log selling prices per house. When we add a Gaussian random walk to the model, the only thing that changes is the covariance matrix Ω. Suppose house i has been sold in periods ς < τ < s < t; then we have to add the following matrix to Ωi:

qv σ² diag(τ − ς, s − τ, t − s).
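As an aside, the construction of this stacked regression is easy to sketch in code. The fragment below is a minimal illustration in Python: the sale pairs, periods and log prices are made up, and it estimates δ by plain OLS, whereas the article's GLS step would additionally weight by the block-diagonal Ω described above.

import numpy as np

# Illustrative sale pairs: (period of first sale s, period of second sale t,
# log price at s, log price at t); periods run from 1 to T.
pairs = [(1, 2, 11.20, 11.25), (1, 3, 11.00, 11.10), (2, 4, 11.50, 11.65),
         (3, 5, 10.90, 11.12), (2, 5, 11.80, 12.05), (4, 5, 11.30, 11.38)]
T = 5

# Dependent variable: the log return y_it - y_is for every pair.
y = np.array([p_t - p_s for (_, _, p_s, p_t) in pairs])

# Design matrix A = [iota, X]: a column of ones for the non-temporal return gamma,
# and dummy columns for beta_2, ..., beta_T (beta_1 is restricted to zero).
A = np.zeros((len(pairs), T))
A[:, 0] = 1.0
for row, (s, t, _, _) in enumerate(pairs):
    if s > 1:
        A[row, s - 1] = -1.0          # -1 in the column of the first-sale period
    A[row, t - 1] = 1.0               # +1 in the column of the second-sale period

# OLS estimate of delta = (gamma, beta_2, ..., beta_T)'.
delta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
gamma_hat = delta_hat[0]
beta_hat = np.concatenate(([0.0], delta_hat[1:]))   # estimated log price index
print(gamma_hat, beta_hat)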

The parameters δ and σ² can again be estimated by GLS. Thereafter we construct the corresponding loglikelihood function and concentrate δ and σ² out. By a numerical optimization of the concentrated loglikelihood function we obtain an estimate for qv. The estimates of the parameters μ1,…,μM are now given by a weighted average of the corrected log selling prices per house.

The Local Linear Trend repeat sales model

So far, we have modelled the time effect in the repeat sales model simply by means of a dummy variable approach. Since no time structure is imposed, the model assumes that the level at time t is independent of previous and future levels. This approach has three drawbacks. First of all, the model cannot give an estimate of the price level in a period with no observations. Secondly, the use of the dummy variable approach may result in a very volatile price index in case the number of observations is relatively small. This may occur when constructing a local price index and/or when the time period is small. Thirdly, the model cannot be used for predicting future price levels. Therefore we discuss a different specification of the time effect in the repeat sales model, proposed by Francke (2007). It is assumed that βt follows a stochastic trend process, in this case a local linear trend model in which both the level and the slope can vary over time. The Local Linear Trend repeat sales model is given by

yit = μi + βt + (lit − 1)γ + uit + εit,    εt ~ N(0, σ²I),
ui,t+1 = uit + vit,    vt ~ N(0, qv σ²I),
βt+1 = βt + κt + ζt,    ζt ~ N(0, qζ σ²),
κt+1 = κt + ξt,    ξt ~ N(0, qξ σ²).
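To give a feeling for the time structure this imposes, the sketch below simulates a local linear trend observed with noise in every period and recovers it with a basic Kalman filter. It only deals with the trend component βt; the actual repeat sales model additionally contains the house-specific terms and is estimated with the EM scheme discussed next. All variance ratios and the sample length are illustrative.

import numpy as np

rng = np.random.default_rng(0)
T, sig2, q_zeta, q_xi = 120, 0.02**2, 0.1, 0.1     # illustrative variances

# Simulate a local linear trend beta_t with slope kappa_t, observed with noise.
beta, kappa = np.zeros(T), np.zeros(T)
for t in range(1, T):
    kappa[t] = kappa[t-1] + rng.normal(0.0, np.sqrt(q_xi * sig2))
    beta[t] = beta[t-1] + kappa[t-1] + rng.normal(0.0, np.sqrt(q_zeta * sig2))
y = beta + rng.normal(0.0, np.sqrt(sig2), size=T)

# Kalman filter for the state alpha_t = (beta_t, kappa_t)'.
Tmat = np.array([[1.0, 1.0], [0.0, 1.0]])          # transition matrix
Q = sig2 * np.diag([q_zeta, q_xi])                 # state disturbance covariance
Z = np.array([1.0, 0.0])                           # observation vector
a, P = np.zeros(2), 1e4 * np.eye(2)                # vague initialization
filtered = np.zeros(T)
for t in range(T):
    v = y[t] - Z @ a                               # prediction error
    F = Z @ P @ Z + sig2                           # prediction error variance
    K = P @ Z / F                                  # Kalman gain
    a, P = a + K * v, P - np.outer(K, Z @ P)       # update
    filtered[t] = a[0]
    a, P = Tmat @ a, Tmat @ P @ Tmat.T + Q         # predict next period

print(np.round(filtered[-5:], 3), np.round(beta[-5:], 3))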


Estimation

The likelihood of the model is maximized by the Expectation Maximization (EM) algorithm, since the number of parameters in the model is proportional to the number of houses M, which can be very large in practice. We start by constructing two vectors of parameters, θ1 = (μ′, γ, qv, σ²)′ and θ2 = (qζ, qξ)′. Suppose the algorithm has been run k times, yielding the parameter estimates θ̂1k and θ̂2k. The two steps are defined as follows:

• E step: run the Kalman filter and smoother conditional on θ̂1k and calculate the expected loglikelihood of the data and the states conditional on the estimated parameter vector θ̂2k+1, G(θ1) = E[l(y, β, κ, θ1, θ2) | θ2 = θ̂2k+1].
• M step: maximize G(θ1) with respect to θ1, obtaining a new estimate θ̂1k+1.

To start the algorithm we need an initial estimate of θ1. A way to do this is simply by estimating the Goetzmann and Spiegel model. The inclusion of the house specific time effect ui gives no problems in the M step. This step basically comes down to estimating the Goetzmann and Spiegel model with β = β̂k+1. Ideally, we would also like to include it in the E step and impose the restriction ∑i uit = 0 for every t, such that the house specific time effects are not confounded with the local linear trend βt. In practice, however, this is completely infeasible, because the dimension of the state vector would become of order M, the number of houses. Therefore, in the E step, we replace the house specific time effect by its expectation, which equals 0. The consequence is that the trend is given by the local linear trend βt plus a weighted average of the house specific time effects, which is also noted by Francke (2007). Whether this weighted average deviates much from zero depends on the number of observations and on the size of the variance ratio qv.

"In the period January 1993 till May 2007 the estimated common house price increase in the Netherlands equalled 7.6% per annum."

The hierarchical repeat sales model

A way to extend the repeat sales model is to impose a hierarchical trend structure. This was done for the hedonic price model in Francke and de Vos (2000) and Francke and Vos (2004). In this model there is a common trend (βt), a trend for the house type (λjt) and a trend for the region (ϑkt). A house type is indicated by the subscript j = 1,…,J, whereas a region is indicated by the subscript k = 1,…,K. The complete model is specified by

yijkt = cjk + μi + (lit − 1)γ + βt + λjt + ϑkt + uit + εit,    εt ~ N(0, σ²I),
ui,t+1 = uit + vit,    vt ~ N(0, qv σ²I),
βt+1 = βt + κt + ζt,    ζt ~ N(0, qζ σ²),
κt+1 = κt + ξt,    ξt ~ N(0, qξ σ²),
λj,t+1 = λjt + ςjt,    ςt ~ N(0, qς σ²I),
ϑk,t+1 = ϑkt + ωkt,    ωt ~ N(0, qω σ²I).

The trend for house type j in region k can then be computed as the sum of the constant term cjk, the common trend, the jth house type trend and the kth region trend. Since we now have two complete classifications of the houses (house types and regions), we have to impose the following restrictions:

∑j=1,…,J λjt = 0,    ∑k=1,…,K ϑkt = 0.

These restrictions can be met by a rank reduction of the corresponding covariance matrices. To estimate the model we can use the EM algorithm and follow the steps given earlier. In the first step we want an initial estimate for θ1. This can be done by estimating the Local Linear Trend repeat sales model separately for every segment. In the second step we extract the unobserved states and estimate the parameter vector θ2 = (qζ, qξ, qς, qω)′ conditional on θ1. Thereafter we construct the expected loglikelihood G(θ1) conditional on θ2. In the third step we maximize the expected loglikelihood with respect to θ1. Steps 2 and 3 are iterated until the maximum absolute change in the elements of the smoothed signal vectors is smaller than 10⁻⁴.

Results

First we look at some results of the LLT repeat sales model. The model is applied to a dataset containing the terraced houses of the city of Amsterdam. After screening of the data 2,293 transactions are left. Figure 1 shows the estimated price indexes obtained by the Goetzmann and Spiegel model and the LLT repeat sales model. From the figure it is obvious that the use of the LLT model results in a much more stable price index. The difference in standard errors of the price indexes is equal to a factor 17.6-43.2 in favour of the LLT model. The monthly growth of the price index is displayed in Figure 2.

Figure 1: price index (terraced houses, Amsterdam).
Figure 2: monthly growth (terraced houses, Amsterdam).

The hierarchical repeat sales model (which had not been applied before) is applied to a database of the Dutch Land Registry Office containing 1,141,500 repeated sales of 491,296 houses in the period January 1993 till May 2007. In Table 1 the numbers of houses sold more than once are listed. Table 2 shows the different house types which are distinguished by the Kadaster with the corresponding number of transactions.

Number of sales    Number of observations
2                       365,080
3                        99,110
4                        22,305
5                         4,107
6                           613
7                            71
8                            10

Table 1: overview of transactions.

House type               Number of observations
Detached houses                         100,101
Semi-detached houses                    111,708
Terraced houses                         398,802
Corner houses                           141,030
Apartments                              389,858

Table 2: house types.

Figure 3 shows the estimated common price index. The value of the index in May 2007 equals 1.0593, which means that in a period of 14 years and 5 months the estimated common price level in the Netherlands rose to exp(1.0593) = 288.4% of its January 1993 level. In other words, if you had bought a house in January 1993, this has yielded a common return of 7.6% per year.

Figure 3: common price index.

Figure 4 shows the monthly growth of the common trend. In 1993/1994 the growth was relatively high, whereafter it decreased to reach a minimum in 1995. From 1995 till 1999 the growth increased considerably, but from 1999 onwards the growth decreased again till 2003. Since 2003 the growth has remained rather constant.

Figure 4: common monthly growth.

Finally, the model also provides deviations from the common price index for all house types and provinces, which are all presented graphically in de Goeij (2008). For instance, the detached and the semi-detached houses experienced an above average price increase, while the price increases of terraced houses, corner houses and apartments were below average.


With regard to the provinces, the price indexes of Friesland, Gelderland, Utrecht, Noord-Holland and Noord-Brabant showed an above average growth. Flevoland and Limburg, on the other hand, experienced a clearly below average growth.

References

Bailey, M.J., Muth, R.F. and Nourse, O. (1963). A Regression Method for Real Estate Price Index Construction, Journal of the American Statistical Association, 58, 933-942.

Case, K.E. and Shiller, R.J. (1987). Prices of Single Family Homes since 1970: New Indexes for Four Cities, New England Economic Review.

Case, K.E. and Shiller, R.J. (1989). The Efficiency of the Market for Single-Family Homes, The American Economic Review, 79, 125-137.

Francke, M.K. and de Vos, A.F. (2000). Efficient Computation of Hierarchical Trends, Journal of Business and Economic Statistics, 18, 51-57.

Francke, M.K. and Vos, G.A. (2004). The Hierarchical Trend Model for Property Valuation and Local Price Indices, Journal of Real Estate Finance and Economics, 14, 33-50.

Francke, M.K. (2007). Repeat Sales Index for Real Estate Prices: a Structural Time Series Approach, Technical Report, Vrije Universiteit Amsterdam, Department of Econometrics.

Goetzmann, W.E. and Spiegel, M. (1995). Non-Temporal Components of Residential Real Estate Appreciation, Review of Economics and Statistics, 77, 199-206.

Goeij, W. de (2008). A State Space Approach for Constructing a Repeat Sales House Price Index, Master thesis, tinyurl.com/6qb3lh.


IBIS UvA: research and consultancy in Lean Six Sigma

Lean Six Sigma is an advanced process improvement methodology. Initially, Six Sigma focused on the reduction of the number of defects in manufacturing. It was introduced at Motorola in 1986, and it incorporates previous methodologies, such as Statistical Process Control, Total Quality Management, and Zero Defects. The assimilation of Lean Thinking tools led to the popular Lean Six Sigma program. Nowadays, the program is deployed at companies such as General Electric, Honeywell International, and Citibank.

The twentieth century saw an incredible development of products and professional organizations. The impact of technological advances is obvious, but besides these, innovations in management structures and methods have resulted in the highly productive organizations of today. When the race for outperforming competitors on quality and efficiency gained momentum, companies started to copy each other's best practices. Consultants and management gurus quickly jumped in and started giving names to these best practices: total quality management, just-in-time, business process reengineering, statistical process control, quality circles, lean manufacturing, continuous improvement, et cetera. Out of these methods, principles and approaches time has singled out the ones that really have added value. And while most approaches have been presented as panaceas at one time or another, time has shown that they are in fact complementary. The most effective techniques have been integrated into comprehensive methodologies such as Lean Six Sigma.

Lean Six Sigma

Lean Six Sigma is not revolutionary. It is built on principles and methods that have proven themselves over the twentieth century. It has incorporated the most effective approaches and integrated them into a full program. It offers a management structure for organizing continuous improvement of routine tasks, such as manufacturing, service delivery, accounting, nursing, sales, and other work that is done routinely. Further, it offers a method and tools for carrying out improvement projects effectively. In an economy which is determined more and more by dynamics than by static advantages, continuous improvement of routine tasks is a crucial driver of competitiveness.

Benjamin Kemper obtained an MSc degree in Econometrics at the Faculty of Economics and Business, University of Amsterdam. Currently, he works as a consultant at the Institute for Business and Industrial Statistics (IBIS UvA) and as a PhD student in Industrial Statistics at the University of Amsterdam. His research project Optimization of Response Times in Service Networks is under the supervision of Prof. M.R.H. Mandjes and Dr. J. de Mast. During his studies, he participated in several VSAE projects such as the Econometric Game 2007 and Aenorm. Email corresponding author: b.p.h.kemper@uva.nl

Jeroen de Mast obtained his doctorate in statistics at the University of Amsterdam. Currently, he works as senior consultant at IBIS UvA, and as associate professor at the University of Amsterdam. He has coauthored several books about Six Sigma, and is recipient of the 2005 ASQ Brumbaugh Award, as well as the 2005 ENBIS Young Statistician Award. He is a member of ASQ and cofounder of the Dommel Valley Platform for reliability and innovation.

Benefits
- Improvement and redesign of routine tasks (manufacturing processes, service delivery, marketing, healthcare procedures, sales, et cetera).
- Resulting in superior quality and efficiency.

Strategic value
- Cost advantages: superior productivity and equipment utilization, avoidance of capital expenditure, working capital reduction.
- Advantages derived from superior customer satisfaction: reduced price sensitivity, growth of revenue or market share.
- Competence building in manufacturing or service delivery virtuosity.
- Competence building in continual and company-wide improvement and innovativeness.


Method
- Professional and science-like problem solving.
- Precise and quantitative problem definition.
- Data-based diagnosis.
- Innovative generation of new ideas.
- Field-testing of ideas before implementation.

Organization
- Program management consisting of a Lean Six Sigma director, program managers (daily management), and Lean Six Sigma Master Black Belts (knowledge resources).
- Project management consisting of a Champion (project owner) and a Black Belt or Green Belt (project leader).
- Resources: experts, shopfloor personnel.

Acknowledging that process improvement requires intimate context knowledge and acceptance by the shopfloor, Lean Six Sigma favors project leaders from the line organization over staff personnel and external consultants. Acknowledging as well the importance of strategic focus and integration, Lean Six Sigma prescribes that projects be monitored and reviewed by Champions and program management.

IBIS UvA

The Institute for Business and Industrial Statistics of the University of Amsterdam supports quality and efficiency improvement initiatives based on its expertise in the field of statistical methodology. IBIS UvA has about 15 years of experience with Six Sigma, and has supported Lean Six Sigma initiatives at General Electric Plastics, Douwe Egberts, DAF Trucks, Philips, Perlos, Getronics, TNT Post, Achmea Pensioenen, ABN AMRO Bank, Wolters Kluwer, Rode Kruis Hospital, Canisius Wilhelmina Hospital, Virga Jesse Hospital, and many more Dutch and international organizations. The institute is owned by the University of Amsterdam, and is seen internationally as a center of expertise in Lean Six Sigma and industrial statistics. The members of the institute frequently publish in the international scientific literature as well as the professional literature. Furthermore, members of the institute have authored a number of books about Lean Six Sigma and Statistical Process Control. The quality of the research done by the institute is confirmed by the awards that members of the institute have won.

Research and consultancy

Members of the Institute for Business and Industrial Statistics conduct academic research within the research project Industrial Statistics, headed by Prof. R.J.M.M. Does, at the Korteweg-de Vries Institute for Mathematics.


Current key research programs are:

• Industrial Statistics: control charting methodology, measurement system analysis, response times in service networks, multivariate process control.
• Six Sigma Methodology: rational reconstruction and grounding of lean six sigma methodology and techniques.
• Business Economics: six sigma and innovation, lean six sigma in healthcare and finance.

Further, the institute facilitates the master course industrial statistics of the MSc program stochastics and financial mathematics. This course may serve as an elective course for master students in econometrics or operations research of the Faculty of Economics and Business. Besides, the institute teaches the Lean Six Sigma program to professionals (in both open and in-company courses and complete Lean Six Sigma programs at the client), supports clients in their improvement projects, and advises companies on the deployment of Lean Six Sigma. The underlying idea is that the combination of scientific research and its application in consultancy activities leads to fruitful interactions. Many research topics are inspired by problems encountered in practice, whereas clients from the business community benefit from the results of scientific research.

Lean Six Sigma Projects

In order to give an idea about Lean Six Sigma in practice, two projects are discussed.

Improve the revalidation consults of a healthcare centre

At a healthcare centre revalidation patients are treated. Before a patient starts his or her treatment, an indication consult is conducted in order to define a suitable treatment. The treatment itself is refunded by the patient's insurer; the indication consult is refunded, but this amount does not cover the total costs. Before the Lean Six Sigma project started, management thought that many patients needed more than one indication consult for various reasons, such as lack of information, a scheduling error (wrong doctor, for example), or a patient no-show. A Lean Six Sigma project started in September 2007. The problem was defined and both doctors and administrators helped to measure the indication consult process of about 40 patients. The main findings were:

• A patient waits 32 days before the actual consult takes place.
• About 8% of the consults fail.



• About 55% of the patients cannot be treated within the healthcare centre.
• After the consult, it takes 14 days to schedule a treatment.

The measurements show that there are more problems. Therefore management redefined the project and set several goals for the future process, namely:

• A consult needs to be scheduled within 7 days.
• Consults should never fail.
• The percentage of patients referred to other institutions (was 55%) should be reduced.
• After the consult, a patient should be scheduled for treatment within 7 days.

Next, in a brainstorm session the manager tried to find the process factors that influence the problems. The manager employed a nurse practitioner, who costs less than a doctor, in order to prepare the consults. Besides, the administrators were informed about their current performance, and a dashboard will continuously show their performance in the future. As of today, three of the abovementioned four goals have been reached. The nurse practitioner completes all information before the consult and schedules the patient with the right doctor. As a result, consults hardly ever fail. Besides, the nurse practitioner can single out patients that do not match with the treatments available. Therefore, the fraction of consults that results in an insurer refund increases. An additional benefit is that, due to the preparations of the nurse practitioner, the doctor conducts twice as many consults: two patients per hour instead of one per hour.

Resolve the backlog problems of an internet provider call centre

An internet provider handles customer cases, such as questions, complaints, or system errors, through a call centre. Simple cases are handled by the call centre itself. More complex cases are forwarded, and handled by the so called second line. Due to an increase in the number of customers and the launch of new technology (for example, interactive digital television) the total inflow of the contact center increased. As a result, the number of cases forwarded to the second line increased dramatically and a backlog of complex customer cases was created. During the first phase of the project, the problem was defined and management decided to focus on reduction of the backlog, reduction of call centre inflow, reduction of the ratio of cases forwarded to the second line, and reduction of the number of recontacts (a customer that calls for a status update). After the measure and analyze phase of the project, the following problems became clear:


• Some of the cases in the second line should have been handled by the first line.
• Staff at the call centre do not judge cases consistently.
• After 5 days, customers of backlog cases start to recontact the call centre, at a cost of 7 euros per call.
• The backlog cases are not handled according to a FIFO rule (that is, the oldest case is not handled first).
• A customer of a 100 day old case recontacts the call centre about 5 times, at a total cost of 35 euros, plus additional costs of poor quality.
• The total cost of recontacts is about 1.4 million euro/year.
• Cases that are immediately solved by the call centre have a positive effect on customer satisfaction.

The internet provider decided to temporarily increase the number of staff in the second line in order to reduce the backlog. Furthermore, it started to evaluate and improve the work instructions of call centre staff in order to reduce the number of falsely forwarded cases. These improvement actions were easy to implement and are expected to result in higher customer satisfaction, but do not necessarily result in financial benefits. The rest of the project focused on the initial causes of complex cases that were forwarded to the second line because they could not be solved by the call centre. The main causes were 'system errors' and 'failing pending orders'. Next, per cause the corresponding products and services were listed in order of frequency. New software was written to resolve these causes for the top five products and services. Currently, the software is implemented in three releases. Besides the prevention of recontacts, the daily inflow of cases was reduced from 1100 to 700 per day, representing a cost saving of about 320K euro/year. Additional benefits, such as reduced financial compensation, are present but not quantified.

Conclusion

Lean Six Sigma is not revolutionary. It has incorporated the most effective approaches and integrated them into a full program. It offers a management structure for organizing continuous improvement of routine tasks. IBIS UvA is a center of expertise in Lean Six Sigma and industrial statistics owned by the University of Amsterdam. A consultant at IBIS UvA combines a job as a researcher in the field of industrial statistics with a job as trainer and consultant in Lean Six Sigma. The underlying idea is that the combination of scientific research and its application in consultancy activities leads to fruitful interactions.



Mechanism design: Theory and application to welfare-to-work programs

In October 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson won the Nobel Prize in Economic Sciences "for having laid the foundations of mechanism design theory". My aim is to give you a flavor of what mechanism design theory is and how we can apply it in practice. More in particular, I will apply insights from the theory to the design of welfare-to-work markets, i.e., markets in which governments contract firms in order to help unemployed people find a job.

The set-up of this article is as follows. I will begin by discussing mechanism design theory in general. Next, I will analyze a simple game theoretical model of the welfare-to-work market. In this model, I will derive the theoretical properties of a simple mechanism that could be easily implemented in practice, and compare it to the theoretically optimal mechanism. Finally, I will conclude.

Mechanism design theory

What is mechanism design theory? The answer is actually quite simple, despite the fact that the theory itself is far from easy and despite the fact that the popular press came up with the wildest definitions. Mechanism design is about finding optimal rules under which people, firms, institutions or other agents interact. Before mechanism design theory existed, game theorists were concerned with the question: what do players do given the rules of the game? In contrast, mechanism design theorists assume that there is a principal who can choose the rules of the game in such a way that the players will act as he desires. Of course, the incentives of the players may conflict with the principal's interests, so the principal has to take these incentives into account. Well, I admit that this explanation still sounds pretty abstract, so that it is perhaps not very surprising that journalists were a bit confused about what mechanism design theory tries to achieve. An example might help to create a clearer picture. Suppose that a government wants to maximize national income under the condition that it raises enough taxes to provide some minimum level of public goods. In this situation,

Sander Onderstal is assistant professor of Economics at the University of Amsterdam. He teaches courses in industrial organization in both the economics and business programs. In 1997, he obtained a Master's degree in Econometrics from Tilburg University. Subsequently, he wrote his Ph.D. thesis on auction theory at Tilburg University and University College London. In his current and previous jobs, he has advised several ministries on auctions for welfare-to-work programs, petrol stations, and radio frequencies.

the government is the principal and the citizens are the players. The citizens have an incentive to work less hard the more taxes they have to pay, which has a negative effect on both national income and tax revenues. So, in finding the optimal taxation scheme, the government has to take this into account.

Welfare-to-work programs

Let us consider another practical application of mechanism design in more detail: the optimal design of welfare-to-work programs. In the Netherlands, unemployed people have the right to obtain welfare-to-work services so as to find a job more quickly than they otherwise would. The Work and Income Act (SUWI) of 2002 obliges the responsible public bodies to contract out welfare-to-work services to private providers. Currently, these providers account for a substantial fraction of the more than €5 billion of public resources spent on active labor market policies. However, in an overview of the effectiveness of active labor market policies in Europe, Kluve et al. (2007) argue that most of this money is not spent effectively. So, the following question begs for an answer:




What is the most effective way to contract out welfare-to-work services to private providers? In a simple model, we will show how mechanism design theory can help in finding an answer. In particular, we will model welfare-to-work markets as a situation in which the government has to select a private welfare-to-work service provider, and give it incentives to help people find a job.

The model

Suppose, for simplicity, that there are two private providers willing to offer welfare-to-work services. Moreover, 25 unemployed people would like to obtain these services. The government contracts one of the providers. This provider will exert effort e in finding jobs for these people. We make the assumption that one unit of effort results in one person finding a job. The costs of a provider's effort are given by

C(e, t) = 25e² / t,

(1)

where t is the provider’s type. In other words, a provider is more efficient if it has a higher type t. Types are drawn independently from a uniform distribution on the interval (0,100]. We assume that the project generates social welfare according to the following expression: S(e,t) = 10e – T ,

(2)

where T is the (net) transfer from the government to the provider that wins the project. The social benefits (10e) include all effects on the economy associated with people finding a job, and may be positively related to increased production, a decrease in social benefits (so that the government has to levy less distortionary taxes), and diminishing intergenerational welfare dependency. The social costs (T) refer to the monetary compensation the government has to pay to the provider in order to at least cover its costs.

The optimal mechanism under complete information

A socially optimal mechanism maximizes expected social welfare under the restriction that the providers play equilibrium strategies, and under a participation constraint (both providers should at least receive zero expected utility). If the government knows the providers' types, the optimal mechanism can be readily constructed. First, the government selects the most efficient provider, i.e., the provider with the highest type t. Second, the government induces this provider to choose effort

e* = t / 5

(3)

Finally, the government pays a transfer to the provider such that the provider's costs are exactly covered. Notice that we made the strong assumption that the government knows the types of the providers. If types are unknown, the government could still ask the providers to reveal their type, and implement the optimal mechanism. However, the providers have a reason to lie: if they report a lower type, they can make more profit than if they are honest. In other words, this mechanism is not "incentive compatible".

An incentive compatible mechanism

What is a good alternative? Suppose that the providers play the following two-stage game. In the first stage, the government sells the project in a second-price auction. In this auction, the highest bidder wins the project and pays the bid of the other provider to the government. In the second stage, the government pays the winner 10 for every unemployed person that finds a job. Let us find an equilibrium of this game using backward induction. In the second stage of the game, if a provider wins the project, it will choose effort e in order to maximize its profits

Π(e) = 10e − C(e, t) = 10e − 25e² / t.

(4)

It is easy to show that the optimal effort level is given by e** (t) = t / 5,

(5)

which coincides with the optimal effort under complete information in (3). Note that the provider’s profits are: Π(e** (t)) = t.

(6)
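To make the "easy to show" explicit: the first-order condition of (4) is dΠ/de = 10 − 50e/t = 0, which indeed gives e**(t) = t/5, and substituting this back yields Π(e**(t)) = 10·(t/5) − 25·(t/5)²/t = 2t − t = t.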

Now I turn to the first stage of the game. How much will a provider bid in the second-price auction? If it wins the auction, it will make profits t in the second stage of the game. So, a firm is willing to pay at most t. The second-price auction has an equilibrium in weakly dominant strategies in which both providers bid their type t. To see this, imagine that a provider with type t submits a bid b < t. Let B be the highest bid of the other bidder. Bidding b instead of t only results in a different outcome if b < B < t. If B > t, the provider does not win in either case, while if B < b, the provider wins and pays B in both cases. However, in the case that b < B < t, the provider receives zero utility by bidding b, while she obtains t − B > 0 when bidding t. Similarly, bidding b > t only results in a different outcome if t < B < b. A bid of t results in zero utility, whereas bidding b yields the provider a utility of t − B < 0. Therefore, a provider is always (weakly) better off by submitting a bid equal to her type.

The optimal mechanism

Observe that this two-stage game is not only incentive compatible, but it also implements the effort level that is optimal under complete information. Moreover, the rules of the game are quite simple, and can be easily explained to welfare-to-work service providers in practice. However, the two-stage game does not implement the optimal mechanism. Observe, for instance, that the winning provider can make quite some profit. Suppose that it has the highest possible type, i.e., t = 100. Then the firm always wins the auction, because it submits the highest bid. The other bidder submits a bid between 0 and 100 according to a uniform distribution. So, the provider's expected payment to the government equals 50. Note that the provider will find a job for 20 of the 25 unemployed people, so that the government has to pay 200 in return. Because the firm's costs of providing this effort are equal to 100, its profits are 50, which is equal to the social welfare. In other words, the government loses 50% of social welfare to make the mechanism incentive compatible. It turns out that the government can do better than that.

The government can increase social welfare in two ways. First of all, it can give less than full incentives to the firms, i.e., pay less than 10 per unemployed person who finds a job. This is especially effective if applied to inefficient firms. Inefficient firms themselves will not contribute much to welfare in any case, so if the government pays them less than the full amount, efficient firms have less reason to pretend to be inefficient, so that the government will have to pay lower informational rents. Secondly, and relatedly, the government screens out the lowest types, i.e., makes sure that they will not participate.1

Unfortunately, the optimal mechanism has several practical difficulties. First, the government needs detailed information about the bidders. In particular, it needs to know their cost functions and the distribution of their types. It requires costly market research to obtain this information. Second, the optimal mechanism is demanding with regard to the government's commitment: it requires (1) a suboptimal level of effort from the winning agent, and (2) that the government withhold a welfare enhancing project when the efficiency parameter of the two providers turns out to be below a certain threshold value.
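These equilibrium properties are easy to check numerically. The sketch below (Python, with purely illustrative simulation settings) plays the two-stage game many times with two providers whose types are drawn uniformly between 0 and 100, using the equilibrium strategies derived above: bid your type in the auction and exert effort t/5 when winning.

import numpy as np

rng = np.random.default_rng(1)
n_sim, n_providers = 100_000, 2

# Types drawn independently and uniformly between 0 and 100.
types = rng.uniform(0.0, 100.0, size=(n_sim, n_providers))
winner_type = types.max(axis=1)       # highest bidder wins (bids equal types)
price_paid = types.min(axis=1)        # pays the other bid (the lower of the two)

effort = winner_type / 5.0            # equilibrium effort e** = t/5
jobs = effort                         # one unit of effort = one person placed
output_payment = 10.0 * jobs          # government pays 10 per job found
costs = 25.0 * effort**2 / winner_type

provider_profit = output_payment - costs - price_paid
net_transfer = output_payment - price_paid           # T in equation (2)
social_welfare = 10.0 * jobs - net_transfer          # S = 10e - T

print("mean provider profit :", provider_profit.mean())
print("mean social welfare  :", social_welfare.mean())

With two bidders, both the mean provider profit and the mean social welfare should come out around 100/3 ≈ 33, which is consistent with the t = 100 example above in which each equals 50.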


The good news is that the simple two-stage game that I discussed above performs almost as well as the optimal mechanism, provided that there are sufficiently many bidders. Note that the two-stage mechanism does not suffer from the informational problem: the rules of the game do not depend on the cost functions and the type distribution. Also the commitment problem is absent: the provider's effort in the two-stage mechanism coincides with the optimal effort under complete information. So, the government cannot "abuse" information revealed in the auction. In a recent study, I compared the two mechanisms in a simulation, and found that only five bidders are sufficient for the difference between the social welfare of the two mechanisms to be less than 1%.2

Conclusion

In this article, I've discussed mechanism design theory and applied it to welfare-to-work programs. The main lesson is that for these programs, fairly simple mechanisms may perform rather well relative to the theoretically optimal mechanism, especially in the case of sufficient competition. Still, the two-stage game that I propose has never been used in practice. Perhaps Kluve et al.'s (2007) conclusions about active labor market policies would have been more positive if governments had implemented this mechanism.

References

Kluve, J., Card, D. et al. (2007). Active Labor Market Policy in Europe: Performance and Perspective, Springer.

Laffont, J.-J. and Tirole, J. (1987). Auctioning incentive contracts, Journal of Political Economy, 95, 921-937.

McAfee, R.P. and McMillan, J. (1986). Bidding for contracts: a principal-agent analysis, RAND Journal of Economics, 17, 326-338.

McAfee, R.P. and McMillan, J. (1987). Competition for agency contracts, RAND Journal of Economics, 18, 296-307.

Onderstal, S. (2006). Bidding for the Unemployed: An Application of Mechanism Design to Welfare-to-Work Programs, mimeo, University of Amsterdam.

1 For details, see McAfee and McMillan (1986, 1987) and Laffont and Tirole (1987).
2 See Onderstal (2006).


Effects of correlation-volatility co-movement on portfolio risk: negligible or not?

To obtain a clear view of the market risk of holding an asset portfolio, the volatility and correlation behavior of individual assets are critical inputs. Both asset volatility and asset correlation are factors that have a positive effect on portfolio variance, and thus increase portfolio risk. For the price developments of financial assets, volatility as well as correlation processes are widely recognized to be time-varying. Financial institutions model these processes, and use the respective models as a basis for their risk analyses and investment policies. One of the leading researchers on this topic is Robert Engle, who has introduced several well-known and widely applied tools for modeling time-varying volatilities and correlations, like Autoregressive Conditional Heteroskedasticity (ARCH, 1982) and Dynamic Conditional Correlation (2002).

Besides the behaviors of time-varying asset volatility and correlation processes in isolation, also the co-movement of these processes can be argued to have an impact on portfolio risk. For example, a tendency for periods of high volatility to coincide with periods of high correlation would result in additional portfolio risk, because portfolio diversification would be ineffective when most urgently needed. In that case, at times when expected market movement is at its highest, unusually high asset correlation hampers the benevolent smoothing effects of diversification. Several authors, like Andersen (2005), have observed the presence of such correlation-volatility co-movement, but its effects have hardly been investigated. The obvious drawback of explicitly modeling this interaction between time-varying volatility and correlation is that it makes financial models more complex. In this article, the effects of these correlation-volatility co-movements on portfolio risk are investigated. Are these effects significant and would ignoring them lead to a severe underestimation of portfolio risk? Or is this co-movement only a second-order property with negligible effects, and does the inclusion of this property render financial models unnecessarily complex? In an attempt to answer these questions, I use two different approaches. The first is an empirical approach that compares the risk perceptions that follow from different multivariate asset models estimated from the same historical data. The second is a theoretical approach that compares the risk properties of artificial

René van Rappard studied Econometrics at the Universiteit van Amsterdam, where he recently received an MSc degree in Financial Econometrics. Over the course of his studies, he spent exchange semesters at New York University and l'Institut d'Etudes Politiques de Paris. Last summer, he did an internship at Alpinvest Partners. This fall, René will start working as a consultant at McKinsey & Company in Amsterdam. This article is based on his MSc thesis, written under the supervision of Prof. dr. H.P. Boswijk.

data-generating processes (DGPs) that are structured to allow for an easy variation of correlation-volatility co-movements. The rest of this article will describe these two approaches and comment on their conclusions. First, however, the values of multivariate 'Best-of' options are introduced as a creative tool for quantifying portfolio risk.

'Best-of' options as a tool for quantifying portfolio risk

For an analysis of the effect of correlation-volatility co-movements on portfolio risk, a tool is needed to quantify portfolio risk. For this purpose I use the values of European 'Best-of' options, which are financial contracts that provide a payoff equal to the highest price of a set of several different underlying assets at a given maturity. Holding such options allows an investor to profit from price increases for the underlying assets, but provides protection from extreme drops in value for individual assets.


In that way, the option shares, to a limited extent, the nature of the opportunities and risks of an investment in a simple portfolio of the option's underlying assets. The value of this protection, and hence of the option, is inversely related to the riskiness of a simple investment in a portfolio of the option's underlying assets. As an illustration, imagine a set of assets of which the prices move in practical lockstep. A 'Best-of' option on these assets will provide little protection value. At the same time, the ineffectiveness of asset diversification in this high-correlation case makes investing in a portfolio of these assets relatively risky. In the paragraphs that follow, I will use estimated 'Best-of' option values as an inverse indicator of portfolio risk. These option values will be estimated through risk-neutral valuation by Monte Carlo simulation, as is done by Duan (1995). Specifics of this valuation method are widely available, for example in Hull (2005).

Approach 1: Comparing different empirical models

As explained, positive co-movement of time-varying asset volatility and correlation has the potential to increase portfolio risk. To find whether ignoring this connection in asset modeling can lead to a significant underestimation of portfolio risk, I use two different empirical models to estimate the risk of the same asset portfolios. To keep matters simple, I consider asset portfolios that each consist of two Dutch blue-chip companies. The pairs I consider are ING-Fortis and ING-Unilever. I first estimate two models for the price movements of these stocks. The first model is the Dynamic Conditional Correlation model, proposed by Engle (2002) and used by Wong and Vlaar (2003). The model describes time-varying asset volatilities and correlations, but does not include any explicit interaction between the volatility and correlation movements. The second is an extension to this model, which in fact does include an explicit correlation-volatility connection. Both models are estimated using 15 years of historical price data from between 1992 and 2007, which are displayed graphically in figure 1.

Figure 1: 15 years of price data for ING, Fortis, and Unilever.

Trying not to go into too many technical details, I briefly comment on the main structure of these models. The asset prices are modeled as geometric Brownian motions, which means asset prices are assumed to be randomly meandering processes with (average) variances that adjust to their price levels. The daily variances of the relative price movements of the two individual assets are described as separate asymmetric GARCH(1,1) processes, as often used in financial econometric literature. The two models differ in their dynamic specifications of the correlations between the two assets. In both models, the correlations are constructed as scaled versions of an element called q12,t. The two models differ in their specifications of q12,t, as shown below.

For model 1:

q12,t = c12 + α1α2(z1,t−1 · z2,t−1) + β1β2 q12,t−1 + γ1γ2(z⁻1,t−1 · z⁻2,t−1)    (1)

For model 2:

q12,t = c12 + α1α2(z1,t−1 · z2,t−1) + β1β2 q12,t−1 + γ1γ2(z⁻1,t−1 · z⁻2,t−1) + l1l2 √(h1,t h2,t)    (2)

For asset i and time t, q12,t is a correlation input, zi,t a price innovation, and hi,t an asset variance. z1−,t is a negative price innovation, defined as I(z1,t < 0) · z1,t. As can be seen above, model 2 contains an explicit correlation-volatility connection through the inclusion in q12,t of an additional term involving the asset variances. Estimation of both models using Maximum Likelihood yields good results. The estimates for price, volatility, and correlation parameters are in line with those found in earlier research. As expected, the estimates for coefficients l1 and l2 (in equation (2)) are positive, which indicates a positive correlation-volatility co-movement. Moreover, a likelihood-ratio test indicates that model 2 is probably a relevant extension to model 1, indeed suggesting an actual correlationvolatility link in the historical data. Using each of the two different models in Monte Carlo simulation, I estimate the values of the same set of ‘Best-of-Two’ options and corresponding confidence intervals. Figure 2 shows valuation results for options with maturities 3 months and 1 year, for which the prices of both underlying assets are normalized to 100 at time zero. For both sets of underlying assets, the results for the two models are very close and given their standard errors certainly not significantly different from each other.



Therefore, despite the fact that the inclusion of the correlation-volatility connection in model 2 seems meaningful, we do not find evidence for a difference in perception of portfolio risk. As an additional, non-surprising insight, note that due to higher average correlation, option values are lower for ING and Fortis, which operate in the same industry. Although not shown, the above results are found to be robust to some variation in the contract specifications of the 'Best-of-Two' options. Apparently, in this simple case, ignoring correlation-volatility co-movement would not lead financial analysts to gravely underestimate portfolio risk.
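For readers who want to experiment, the sketch below prices a one-year Best-of-Two option by risk-neutral Monte Carlo under a deliberately simplified data-generating process: two correlated geometric Brownian motions with constant volatility and correlation (all parameter values are illustrative). The models above replace the constant variances and correlation by their dynamic GARCH/DCC counterparts, but the valuation mechanics are the same.

import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 252            # one year of daily steps
s0, r, dt = 100.0, 0.04, 1.0 / 252        # both prices normalized to 100 at time zero
sigma = np.array([0.25, 0.30])            # constant volatilities (illustrative)
rho = 0.6                                 # constant correlation (illustrative)

# Correlated standard normal increments via a Cholesky factor.
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
z = rng.standard_normal((n_paths, n_steps, 2)) @ L.T

# Risk-neutral log increments of two geometric Brownian motions; only the
# terminal prices are needed for a European payoff.
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
terminal = s0 * np.exp(increments.sum(axis=1))        # shape (n_paths, 2)

# 'Best-of-Two': the payoff is the highest of the two terminal prices.
payoff = terminal.max(axis=1)
value = np.exp(-r) * payoff.mean()                    # discount over one year
stderr = np.exp(-r) * payoff.std(ddof=1) / np.sqrt(n_paths)
print(f"Best-of-Two value: {value:.2f} (s.e. {stderr:.2f})")

Raising rho in this sketch lowers the option value, which is exactly the inverse relation between correlation and the option's protection value described above.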

Figure 2: Valuation of ING-Fortis (top) and ING-Unilever (bottom) Best-of-Two options.

Approach 2: Comparing different artificial DGPs

Another way to investigate the effect of correlation-volatility co-movement on portfolio risk is to generate different sets of financial data artificially. Using a structure that allows for an easy variation of the correlation-volatility co-movements, I create several data-generating processes (DGPs, or recipes for generating data artificially) for financial data that display different co-movement characteristics. Based on these DGPs, I again calculate 'Best-of-Two' option values and hence evaluate portfolio risk under these different co-movement regimes.

The different DGPs considered all have the same Vector Autoregression (VAR) structure. The processes generate volatility and correlation time-series for two individual assets, which are subsequently used to generate asset prices. Averages and persistencies of volatilities and correlations are chosen to resemble those of the ING and Fortis data used in the previous paragraph. To briefly illustrate some of the DGPs' technical basics, I present the VAR structure of the correlation and volatility series below:

Yt = α + BYt−1 + Ut    (3)

Yt is a 3-element vector, containing monotone transformations of the two volatilities and one correlation coefficient. Matrix B is diagonal and the vector Ut represents the series' innovations. The three innovations are generated in such a way that they are correlated, which results in the correlation-volatility co-movement we seek to investigate. Adjusting the exact calculation of the innovations in Ut allows me to carefully control the degree of this co-movement. In fact, I control both the co-movement of the volatility series with the correlation series, and the co-movement between the two individual volatility series.

As an illustration, figure 3 shows self-generated sample paths of corresponding volatility and correlation series under a particularly high co-movement regime. As a result of the high degrees of co-movement, the two volatility series show very similar paths, and periods of high volatility coincide with periods of high correlation.

Figure 3: Artificial volatility (left: H1, H2) and correlation (right: RHO) time-series under a high co-movement regime.

To determine the impact of the degree of correlation-volatility co-movement on portfolio risk, I use a set of DGPs with different co-movement degrees to calculate the same 'Best-of' option prices. The options are 'Best-of-Two' options with maturities of one year, with both asset prices normalized to 100 at time zero. The results are presented in figure 4, showing estimated option values for different degrees of both the volatility-volatility (left axis) and volatility-correlation (right axis) co-movement.
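Before turning to those results, the mechanics of such a DGP can be made concrete with a small sketch: it simulates equation (3) with a diagonal B and innovations that share a common shock, which is what generates the co-movement. The transformations (exp for the volatilities, tanh for the correlation), the parameter values and the mixing weight are illustrative guesses, not the specification used in the study.

import numpy as np

rng = np.random.default_rng(3)
T = 2_000
mu = np.array([-3.0, -3.0, 0.5])        # long-run means of the transformed series
B = np.diag([0.98, 0.98, 0.97])         # diagonal autoregressive matrix B
scale = np.array([0.05, 0.05, 0.03])    # innovation standard deviations
w = 0.8                                 # weight of the common shock: higher w = more co-movement

Y = np.zeros((T, 3))
Y[0] = mu
for t in range(1, T):
    common = rng.standard_normal()                  # shared shock driving co-movement
    idio = rng.standard_normal(3)                   # series-specific shocks
    U = scale * (w * common + np.sqrt(1 - w**2) * idio)
    # Equation (3) written in deviations from the mean, i.e. alpha = (I - B) mu.
    Y[t] = mu + B @ (Y[t-1] - mu) + U

# Map back to volatilities (exp) and a correlation in (-1, 1) (tanh); the article
# only says the elements of Y_t are monotone transformations, so these are guesses.
vol1, vol2 = np.exp(Y[:, 0]), np.exp(Y[:, 1])
corr = np.tanh(Y[:, 2])
print(np.corrcoef(vol1, corr)[0, 1])    # sample co-movement of volatility and correlation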

Figure 4: Best-of-Two option valuations (vertical axis) for increasing degrees of volatility-volatility (left axis) and correlation-volatility co-movements.

As follows from this graph, both types of co-movement clearly have a negative impact on option prices and hence increase portfolio risk. The decreasing nature in two dimensions of a surface fitted through the valuation data points easily withstands tests for statistical significance. This provides evidence that correlation-volatility (and volatility-volatility) co-movement can indeed significantly add to portfolio risk.

Conclusion

In this article I have searched for evidence for the hypothesis that positive co-movement of time-varying correlation and volatility increases the risk of investment portfolios. This hypothesis is based on the idea that if such co-movement exists, diversification will be least effective when most urgently needed. Using the values of risk-reducing 'Best-of-Two' options as a tool for quantifying portfolio risk, I arrived at two main observations. I first of all found that, despite being a useful addition to empirical DCC stock-price models, the inclusion of correlation-volatility co-movement does not necessarily lead to significantly different perceptions of portfolio risk. Secondly however, using VAR data-generating processes, I was able to further magnify and quantify the impact of the degree of co-movement between time-varying volatilities and correlations on portfolio risk. It turns out that such co-movement can indeed increase portfolio risk considerably. I conclude that correlation-volatility co-movement behavior in fact seems to have at least a mild influence on option values and portfolio risk. However, for the assets I examined and the models I compared, this impact is so modest that it does not easily lead to different practical perceptions of portfolio risk. Perhaps the use of even more accurate models of correlation and volatility, or the inclusion of different asset classes like fixed-income securities, could lead to different results. Such an analysis could be the subject of future econometric research.

References

Andersen, T.G., et al. (2005). Practical volatility and correlation modeling for financial market risk management, Working Paper 11069, National Bureau of Economic Research.

Duan, J.C. (1995). The GARCH option pricing model, Mathematical Finance, 5, 13-32.

Engle, R.F. (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, 50, 987-1008.

Engle, R.F. (2002). Dynamic conditional correlation: A simple class of multivariate generalized autoregressive conditional heteroskedasticity models, Journal of Business and Economic Statistics, 20, 339-350.

Hull, J.C. (2005). Options, futures, and other derivatives, sixth edition, Upper Saddle River: Pearson Prentice Hall.

Wong, A.S.K. and Vlaar, P.J.G. (2003). Modelling time-varying correlations of financial markets, Research Memorandum WO&E, 739, De Nederlandsche Bank.



What if his transfer to Hong Kong gets lost halfway down the road? Transferring a few million is done in no time: within seconds it is on the other side of the world. He does not doubt that his money will reach the right destination; after all, it always goes well. But what if it did go off the road, through hackers, fraud or a computer failure? That is why De Nederlandsche Bank (DNB) contributes to making payments as smooth and secure as possible. We maintain the payment systems, intervene when problems arise and investigate new ways of paying. Keeping payments on track is not DNB's only task. We also supervise the financial institutions and, as part of the European System of Central Banks, contribute to a solid monetary policy. In this way we work for the financial stability of the Netherlands, because confidence in our financial system is the precondition for prosperity and a healthy economy. Would you like to contribute to that? Then visit www.werkenbijdnb.nl.

Working on trust.



Econometrics

Entrepreneurship, Venture Capital and Economic Growth: a Comparison between the USA and Europe

Venture Capital (VC) is not an end in itself. The key issue is whether it contributes towards economic growth and, more specifically, towards creating wealth under conditions that benefit mankind without harming the environment or social relationships. It now seems as if Europe is on a roll economically and that the USA is having all the trouble in 2008, with the subprime crisis, a declining housing market and stalling job growth. However, that situation may soon be over. Compared to the USA, Europe lacks entrepreneurial spirit and lags in creating new economies where it counts: in biotechnology, in nanotechnology, in ICT and in energy technology.

While economic growth has been prevalent in the USA, at least until 2007, in several Western European countries economic development is hardly progressing (even though VC is widely available). The average economic growth in the early 21st century is 3-4% p.a. in the USA, compared to 1-2% in Western Europe (with the exception of Britain, Denmark and Sweden, where growth rates have been higher). Unemployment in most European countries (at 7-10%) is substantially higher than in the USA, Canada, Australia and Britain (at 4-6%), countries which at the same time have a decent social benefits structure with a solid health system and unemployment benefits. The economic transition in Europe to more modern times has hardly taken place yet.

According to the OECD, 80 million jobs were created in the USA in the last 15 years, while admittedly 60 million jobs were also lost. In Europe 15 million jobs were created at a loss of 10 million. Thus, in the USA 20 million jobs were added compared to only 5 million in Europe, while Europe has more inhabitants than the USA. In most European countries inflexible labor markets, bureaucracy, a high personnel overhead burden, the costs of pre-pension and early retirement, housing rent subsidies, a low labor participation rate (conversely: a high unemployment rate including hidden elements like overly generous disability programs) and expensive social security programs drive up the costs of doing business. This therefore hampers entrepreneurship and technological change. Prosperity and economic growth are primarily the result of the

Frank van den Berg is by background an engineer, banker, investment manager and entrepreneur. He co-founded an aluminium anodizing company in Amsterdam with 55 employees. He worked for 12 years in the Far East and New York City and now teaches Corporate Finance at the Vrije Universiteit Amsterdam, where he concentrates on Entrepreneurial Finance and the role of technology in economic growth. In the last three summers he taught Financial Management at Business Schools in China.

activities of entrepreneurs and innovation, and only secondarily does it matter whether VC blooms. The latter certainly contributes, though. VC is widely available in Europe; it is only used less dominantly than in the USA, precisely because the opportunities for new business development are less prevalent. On average, both in Europe and in the USA, well over 50% of GDP and employment is generated by small and medium-sized enterprises (SMEs), companies with 1 to 250 employees. Nearly all economic growth and newly created jobs come from SMEs. So it is imperative to create the right mix of conditions so that these companies can come to fruition. Employment in multinational companies is in fact declining, while profits increase. The availability of VC is only a piece of the jigsaw puzzle, albeit not an unimportant one, for the start of SMEs. In the USA some $20 billion of VC is invested in approximately 30,000 new companies every year, compared to approximately €2.5 billion p.a. in Europe in 2,500-3,000 new companies. While all VC investments combined are




small in Europe compared to the USA, the total European Private Equity (PE) market, of which VC is a segment, is substantial. According to the EVCA, investments amounted to €73.8 billion in 2007; seed investments represented €184.7 million, start-up investments were €2.5 billion, and the number of start-ups represented 2,576 investments. The difference between the USA and Europe is not the result of the fact that investors are less willing or daring in Europe, or that fewer great ideas exist, or that people lack entrepreneurial flair, but the result of a political climate which generates fewer opportunities for new business development in Europe. VC companies in Europe are indeed not eager to be involved in start-up financing, and they are not to be blamed. The chance of success is low; better returns are to be made in safer management buy-outs, equity carve-outs and/or mezzanine financing. The PE market is therefore booming in Europe just as it is in the USA. However, PE does not contribute towards the start of new companies, at least not directly; only VC does. And here Europe lags the performance of the USA.

Many articles and books have been devoted to the topic of economic growth. Still, more attention can generally be paid to the decisive importance of entrepreneurship and technological change in economic growth. Many scholars have been searching for the "Holy Grail". The general question is: are institutions, democracy, economic incentives, good political governance, rule of law and free markets the overriding factor, or more specifically: efficient capital markets, free commodities markets, flexible labor markets, education, human capital, literacy, equality of women, health care, property rights registration, intellectual property protection, fair bankruptcy proceedings, meritocracy, trust, family structure, R&D effort, level playing fields, lack of corruption or social mobility? Yes, all these elements do play a role, in the sense that they lead to innovation and thus to economic progress. It is the combination of all the above-mentioned factors that causes innovation and technological change to take place. And it is as a result thereof that we experience economic growth through productivity growth and increased demand, and thus that our wealth and the quality of our lives grow.

Innovation and technological change can be broken up into Product and Process Innovation. Product Innovation leads to new products (and services) meeting "unmet wants". Not only the blessings of ICT, but all innovations in the areas of energy, transportation, chemistry, biotechnology, physics, civil engineering, offshore technology including agriculture, medical sciences, health care and management science have


made major positive contributions towards economic growth. Most of these vastly improve the quality of our lives: new medicines as a result of biotechnology, mobile telephones, abundant food, convenient and cheap transportation, MRI scans, new materials (Kevlar, glass fiber, solar energy panels), ATMs, gene therapy, GPS, airline internet ticketing, treatment of waste water, DVDs, laser jet printing, internet banking. Some we can do without, of course: greasy fast food, sophisticated crimes, hard drugs, violent video games, high-tech wars, weapons of mass destruction etc. On the other hand, Process Innovation leads to improvements in the production process, to more efficiency and to higher productivity in firms, as a result of robotics, automation, ICT, flexible production lines, lean production techniques, efficient planning techniques, just-in-time delivery, new management procedures etc.

Product Innovation leads to increased demand and thus to a higher GDP, although some substitution effects may take place (the DVD for the video recorder, PCs and laptops for mainframes, but often at higher quality and lower cost). Process Innovation leads to higher productivity. Both factors, Product and Process Innovation, lead to a higher GDP per capita and are therefore the driving force of increasing wealth and economic progress. According to the original calculations of Solow, innovation and technological change explain 87 percent of the GDP growth per capita in the USA between 1909 and 1949. Denison finds that over the period 1929 to 1982, 54 percent of the economic growth per capita in the USA is due to technological change, research and development, and general advances in knowledge. Other research has confirmed in various countries and over different periods that technological change consistently contributes more than half of GDP growth (Nelson), with the remainder the result of increased labor and capital investments. Another element is that employment in big corporations declines and that new jobs are created by starting companies and by growing small and mid-sized companies (Reynolds, Timmons). The majority of path-breaking new technologies is discovered and brought to the market by newly started small companies (Bhidé). So it is vital for economic progress that entrepreneurship flourishes.

Companies in the USA that have been at the forefront of these developments since the 1970s-80s include relatively young high-tech firms in ICT and electronics, such as Apple, Dell Computers, eBay, Google, Intel, Microsoft and Yahoo!. By contrast, the "young" European ICT companies are usually spin-offs or restructured "old" companies. TomTom, Skype and SAP are among the few truly newly started European



high-tech companies. Nearly all of the relatively young high-tech American firms are pure start-ups with VC. Most of the non-American high-tech companies are spin-offs from big conglomerates or heavily nurtured by governments. What most of the non-American high-tech companies have in common is that they took off, i.e. really started to grow, as offshoots of large companies; ASML and Navteq, for example, started as subsidiaries or divisions of Philips. In other "new" business segments, more European firms can be found (particularly in distribution, retailing, airlines, biotechnology, pharmaceuticals, fashion and other services), such as Benetton, Biogen, Crucell, DHL, Dyson, EasyJet, Fugro, H&M, Ikea, Mexx, Randstad, Ryanair, Versace, Virgin Group and Zara. "Young" US companies in the non-ICT sector include Amgen, Bloomberg, CNN, FedEx, Genentech, McDonald's, Nike, Southwest Airlines, Starbucks and VISA.

That new European companies in services, retailing and fashion do relatively well has everything to do with the lower level of global competition these firms are facing. Outsourcing and off-shoring are less prevalent there. In manufacturing, biotechnology, medical technology, ICT and nanotechnology one faces global competition. Transportation costs play less of a role here and the market is not fixed in place, as is the case in retailing (although that is changing as well as a result of e-commerce). Technological, marketing, organizational or logistical breakthroughs were at the foundation of the US success stories in high-tech (at least so far). These breakthroughs usually did not materialize in established large companies, although some of the fundamental new technology was developed there: e.g. in Bell Labs (transistor), Xerox (laser printer, mouse), Fairchild (microchip) or Texas Instruments (integrated circuit).

Nearly all economic growth has come and will come from newly established businesses (Bhidé). This process will continue in the future. Innovation is indeed the way to go, but it has to go hand in hand with a society where new business initiatives can flourish in a sustainable manner. The difference in the number of successful starting companies between the USA and other countries is noticeable, particularly in the technology sector. The issue is then to investigate further how and whether innovation, technological change and entrepreneurship contribute towards economic growth and the prosperity of mankind. One has to understand how economic progress is created. Attention is thereby to be paid to the decisive role of technological change and entrepreneurship in increasing mankind's prospe-

rity and well-being. Conversely, it is worthwhile to search for the right institutional framework, including efficient capital markets and flexible labor markets, to create innovation and technological change. These factors are THE driving forces of increasing wealth on earth, embedded in a non-corruptive institutional environment with the rule of law and respect for property, in short with the appropriate incentives for people to develop themselves. If one speaks about productivity gains, these are then the result of new products (Product Innovation) and of implementing new production technologies, including new methods and management systems (Process Innovation). In contrast to what the "public" often thinks, productivity gains are usually not caused by people working harder. In fact, the average number of hours worked has declined in most European countries in the last decades. It is innovation and technological change that give society "a bigger bang for a buck". The question here is how to ensure that the right mix of conditions and institutional framework is created to safeguard this continuing process of technological progress. In this process entrepreneurship, flexibility of labor markets, transparency of capital markets, availability of VC and the start of new enterprises play a major role as well.

Entrepreneurial ideas abound, both in the USA and in Europe. Personal ambition is also widely available. It may be so that research indicates that fewer European citizens are inclined to start their own business compared to the USA. However, that is to be expected if the conditions to develop new businesses are so much less favorable. What is therefore really needed in Europe is to make entrepreneurship more attractive, to create more of a level playing field for starting companies, to break down barriers to entry and to reduce the oligopolistic structure of so many markets. Governments recognize this and some progress has been made, but many more measures have to be taken. The influence of anti-trust regulators is indeed increasing. However, much more is to be done to create conditions for individuals and VC firms to start new innovative business ventures. One has to think thereby of:

• More flexible labor markets. See Denmark: easy to fire (with reasonable unemployment benefits and training programs) leads to easy to hire; this is particularly beneficial for those groups in the labor market that employers in Europe are presently reluctant to hire, i.e. minorities, women and older people; in many European countries those over 40-45 years of age are already practically excluded from the labor market. The insiders are usually well protected; this system thus leads to higher unemployment among the outsiders, including




youth, older people, women, less well-educated people and foreign-born employees.

• No uniform bargaining and centrally determined labor agreements for a complete industry sector: why should a starting company be bound by excessively high starting wages and costly secondary employment benefits, and why can it not remunerate more in shares and options, initially at lower salary levels? For several European countries it can be stated that employees earn too little (in net salary) but cost too much (in total expenses for the employer, including social benefits, taxes, pensions, early retirement etc.). In many European countries the gap is 50%, meaning that if the employee takes home €2,500 net per month, it costs the employer €5,000 gross, with half of this gap paid by the employee in tax and social levies, and the other half paid by the employer in charges for social benefits, early retirement and health contributions.

• No complete transfer of responsibility for employees' sickness and disability to the employer, and then at most only for "risque professionnel" (for major companies this can be insured, at extra cost, but this is relatively, if not prohibitively, expensive for small companies; a small company with few employees may well go bankrupt if one employee becomes disabled, even if this is caused outside the work environment: while the sick employee has to continue to be paid, a temporary additional employee has to be hired as a replacement at extra cost),

• Revision of bankruptcy laws with reduced power for tax authorities and banks (along the lines of Chapter 11 legislation in the USA and in Sweden, leading to greater opportunity for continuation of activities; there is no fundamental reason that tax authorities should get preferential treatment compared to other creditors),

• Reduced barriers to entry and less oligopolistic market structures (again, no uniform bargaining agreements, which benefit existing companies, the so-called "wage cartel"; a more open and transparent market for business services, leading to lower legal, tax advisory, accountancy, banking and insurance costs, such as is now the case in the more competitive services for information technology, engineering advice, architecture etc.),

• Reduced administrative and regulatory burden. Along with lower tax rates and reduced costs for social benefits this will lead to more of a "level playing field". These measures will lead to more dynamic, vi-


brant and flexible markets. The costs of doing business will decline, or at least not increase further. Overall labor costs may decline, but not necessarily wage levels. On the contrary: it is the employers' total overhead costs which make the total cost of doing business prohibitive. Ideas for new ventures are around everywhere. The young and not-so-young are eager to start new companies. Indeed, the failure rate is high, but this is often the result of the preferential position of banks and tax authorities, and of the high costs of doing business, including employers' overhead charges for sickness and disability pay, early retirement programs, local property taxes and high energy costs. European governments do indeed make certain subsidies and investment programs available, such as tax preferences for small equity providers, regional investment subsidies, accelerated depreciation for energy-saving and environmental investments, and subsidies for training, Research & Development and joint innovation programs. All very nice, but nothing beats an overall improvement in business conditions: more flexibility and dynamism, a level playing field, more (and fair) competition and reduced overall labor costs (that is, lower overhead, not lower wages).

If the right mix of conditions is created for starting new companies, and particularly for their survival in the critical first few years, then entrepreneurship will lead to an explosion of economic activity. VC will then automatically become available, because attractive returns are there to be earned. If Europe is to compete in the global competitive world, this is the way to go. If the new Googles, Apples and Intels are to bloom in Europe as well, then this right mix of more liberal conditions for entrepreneurship has to be established here. Few really young major European companies exist nowadays. Insofar as they are around in Europe, these are more in services, retailing or fashion, or they are spin-offs from big conglomerates or heavily nurtured by governments. Many new European companies also exist in the areas of communication, electricity and distribution, but these are mainly transformations of previously government-owned companies. Really major European success stories of young independent companies of substantial size with new products in new markets are relatively scarce. It is not an impressive list compared to the USA, Taiwan, Japan, Korea and, in the near future, China and India.

Lack of VC is not the issue in Europe; it is politics: the lack of political willpower to change the rather laid-back European way of life. Britain, Canada, Australia, New Zealand, Denmark and Sweden have all shown that countries can be reformed into


How much does someone at KPMG need to know about foreign holidays?

Interest takes you further at KPMG. As far as we are concerned, you do not need the calendar of Chinese holidays on your wall. It would, however, be good if you know something about the cultural background of the country, and about the stormy growth the Asian economy is going through. What does that have to do with your work as an accountant or consultant at KPMG? A lot! At KPMG you work with a varied portfolio of clients, which may well include an Asian car manufacturer, or a shipping company with major interests there. To advise them well, you need an interest in the world in which those clients operate, and in the issues they deal with every day. At KPMG we are convinced that this interest makes you a better consultant. That is why we are looking for people who dare to look and think broadly, and who are 'streetwise' in the right way. If you have that mentality, you can start here as a trainee (in Audit) or junior consultant (in Advisory). For more information about these positions and about the way we work, visit www.kpmg.nl/carrieres.

KPMG is looking for trainees and junior consultants.

AUDIT TAX ADVISORY

© 2006 KPMG Staffing & Facility Services B.V., a Dutch private limited company, is a member of the KPMG network of independent firms affiliated with KPMG International, a Swiss cooperative. All rights reserved.




more dynamic and flexible economies, without losing their social climate. The average growth in these countries is therefore also persistently higher, and their unemployment levels correspondingly lower, than in the other European countries. Germany, Austria and the Netherlands have made some moderate changes, but more has to be done. In France and the Mediterranean countries politicians have to show the courage to look beyond the next elections. They will do their citizens an enormous favor that way.

Nearly all economic growth and new employment has come and will come from newly established businesses. The employment in major multinationals is declining. This process will continue in the future. Innovation is indeed the way to go, but it has to go hand in hand with a society where new business initiatives can flourish in a sustainable manner. Only then can we expect flourishing new European companies, presumably in the fields of biotechnology, energy, the food industry, financial services, transportation and medical technology. Enough capital is around. It is all about incentives, or to put it even more clearly, about the lack of incentives (Easterly): "It is governments that kill growth". Bureaucracy, disincentives and bad policies do not create wealth. If the business climate is improved for starting small and mid-sized companies, higher returns can be made there. There is VC galore and there are young entrepreneurs with new ideas in abundance, but the economic climate in most major European countries lacks the right conditions for new business development. Most Western European countries need reforms, possibly with the exception of Britain, Denmark and Sweden. Too much Big Government, Big Labor and Big Business is still dominant in most European countries. Political courage is needed for economic reforms in most countries, to start with in France and Italy, but also in the Netherlands, to make more room for outsiders in the labor market. VC will then find its way by itself because, like the bank robber said, "it is here where the money is".

References

www.ace-uva.nl (UvA Amsterdam Center for Entrepreneurship)
www.cimo.vu.nl (VU Center for Entrepreneurship)
www.evca.eu (European Private Equity & Venture Capital Association)
www.kauffman.org (Kauffman Foundation of Entrepreneurship)


www.nvp.nl (Nederlandse Vereniging van Participatiemaatschappijen)

Audretsch, D.B., Keilbach, M.C. and Lehmann, E.E. (2006). Entrepreneurship and Economic Growth. Oxford, UK: Oxford University Press.
Barro, R.J. (1997). Determinants of Economic Growth: A Cross-Country Empirical Study. Cambridge, MA: MIT Press.
Bernstein, W.J. (2004). The Birth of Plenty: How the Prosperity of the Modern World was Created. New York, NY: McGraw-Hill.
Bhagwati, J. (2004). In Defense of Globalization. New York, NY: Oxford University Press.
Bhidé, A.V. (2000). The Origin and Evolution of New Businesses. New York, NY: Oxford University Press.
Denison, E.F. (1985). Trends in American Economic Growth, 1929-1982. Washington, DC: Brookings Institution.
de Soto, H. (2001). The Mystery of Capital: Why Capitalism Triumphs in the West and Fails Everywhere Else. New York, NY: Basic Books.
Easterly, W. (2001). The Elusive Quest for Growth: Economists' Adventures and Misadventures in the Tropics. Cambridge, MA: MIT Press.
Gompers, P. and Lerner, J. (2001). The Venture Capital Revolution, Journal of Economic Perspectives, 15(2), 145-168.
Helpman, E. (2004). The Mystery of Economic Growth. Cambridge, MA: Harvard University Press.
Jones, C.I. (2002). Sources of U.S. Economic Growth in a World of Ideas, American Economic Review, 92, 220-239.
Kuznets, S. (1966). Modern Economic Growth: Rate, Structure and Spread. New Haven, CT: Yale University Press.
Landes, D. (1998). The Wealth and Poverty of Nations: Why Some are so Rich and Some so Poor. New York, NY: W.W. Norton & Co.
Lucas, R.E. Jr. (2002). Lectures on Economic Growth. Cambridge, MA: Harvard University Press.
Mokyr, J. (1990). The Lever of Riches: Technological Creativity and Economic Progress. New York, NY: Oxford University Press.
Nelson, R.R. (1998). The Agenda for Growth Theory: a Different Point of View, Cambridge Journal of Economics, 22, 497-520.
North, D.C. (1991). Institutions, Institutional Change and Economic Performance. Cambridge, UK: Cambridge University Press.
North, D.C. and Thomas, R.P. (1973). The Rise of the Western World: A New Economic History. Cambridge, UK: Cambridge University Press.
Reynolds, P.D. et al. (2002). Global Entrepreneurship Monitor 2002 Executive Report. Kansas City, MO: Kauffman Foundation.
Romer, P.M. (1986). Increasing Returns and Long-Run Growth, Journal of Political Economy, 94 (October), 1002-1037.
Rosenberg, N. (1982). Inside the Black Box: Technology and Economics. Cambridge, UK: Cambridge University Press.
Rosenberg, N. and Birdzell, L.E. Jr. (1986). How the West Grew Rich. New York, NY: Basic Books.
Sala-i-Martin, X. (1997). I Just Ran Two Million Regressions, American Economic Review, 87 (December), 178-183.
Solow, R.M. (1957). Technical Change and the Aggregate Production Function, Review of Economics and Statistics, 39 (August), 312-320.
Thurow, L.C. (1999). Building Wealth: The New Rules for Individuals, Companies and Nations in a Knowledge-Based Economy. New York, NY: HarperCollins.
Timmons, J.A. (1998). New Venture Creation: Entrepreneurship for the 21st Century. Burr Ridge, IL: Irwin.
Van den Berg, F.W. (2007). Make Private Equity less Private ... and more entrepreneurial, Fiducie magazine, Vrije Universiteit Amsterdam.
Van Praag, M. (1996). Determinants of Successful Entrepreneurship. PhD thesis, Universiteit van Amsterdam. Tinbergen Institute Research Series, 107. Amsterdam.
Vinig, T. and Van der Voort, R. (2005). The Emergence of Entrepreneurial Economics, Research on Technological Innovation, Management and Policy, 9. Elsevier, Amsterdam.



Set your own course

"There are no speed limits on the road to excellence." David W. Johnson

Hewitt Associates is a worldwide HRM Consulting and Outsourcing organization with about 23,000 associates in more than forty countries. In the Netherlands (350 colleagues) we assist our clients by providing actuarial advice, pension administration, and a complete HRM-consultancy package. We are passionate about our work, which translates into intellectual challenges, optimal quality and interesting clients, besides job satisfaction, individual growth, and the opportunity of setting your own course.

WizKids and science students on the look-out for more than a fascinating job are sure to find their niche at Hewitt

Our type of company is expected to know what appeals to people at work and what they are looking for in their careers. That is why we skip the story about targets and how we succeed in reaching them all the time. The road to success is much more important to us, as it brings up the very best in people. At Hewitt you largely plan this road yourself and every destination will form a new beginning. Did you graduate in science? Do you wonder why so few people actually succeed in becoming actuaries? Could your advice save millions too? Do you crave working at a high analytic level, with lots of opportunities for individual growth? Is client satisfaction paramount to you? If so, Hewitt is the right choice. We are always looking for people who - like us - pursue the very best quality; people who feel that individual and professional growth are essential; people who combine willfulness with team spirit; and people who set their own course. We will help you follow a career path that matches your talents and ambitions, and we will coach you on the course you set yourself. Our firm offers room for initiatives, an informal culture, continuous challenges, and possibilities for combining work and studies. At Hewitt you can, or rather you must, be yourself. It is only then that you will perform best and be able to set your own course. More information about a variety of jobs at Hewitt Associates can be obtained from www.hewitt.nl, by sending an e-mail to nlpz@hewitt.com, or by sending a letter with CV to Hewitt Associates B.V., attention Linda Willemsen, Human Resources department, P.O. Box 12079, 1100 AB Amsterdam.

More information about various jobs and working at Hewitt can be found at www.hewitt.nl



Interview

Interview with Russell Davidson Russell Davidson was born in Scotland on 18 August 1941. In 1966 he obtained a Ph.D. in theoretical physics at the University of Glasgow. Eleven years later, he also obtained a Ph.D. in economics, this time at the University of British Columbia in Canada. In the world of econometrics he is, among others, most famous for publishing his book “Estimation and Inference in Econometrics” together with James MacKinnon in 1993. Since then he has worked several years on problems related to the bootstrap. Nowadays he divides his time between Canada, where he teaches at the University of McGill and French, where he works at GREQAM; a research lab in Marseille.

You completed two Ph.D.s, one in physics and one in econometrics. Which one do you prefer? Did the Ph.D. in physics give you a different sense towards econometrics, compared with your colleagues and, if so, in what way?

Actually, my second Ph.D. was in economics, not econometrics. It dealt with some problems related to optimal control, Hamiltonians, and such like, with a substantial application to urban economics. It was only later, when I met James MacKinnon at Queen's, that I began to work in econometrics. It's true that, in recent years, almost everything I have done has been in econometrics, but I don't guarantee that that will continue, if I find some other problem in economics, or in physics for that matter, that interests me. I found that there are economies of scale in getting a Ph.D. The second time round, I wasted no time on inessentials, as I already knew what the main time-wasting traps are for a graduate student. Both times, I thought quite seriously that the part of life spent as a graduate student is the best part. Great freedom, no responsibilities to anyone but yourself. When I was no longer a student, I found that the rest of life could also be very satisfying! I had no particular difficulties with the technical aspects of economic theory or econometrics, but it was quite a long intellectual journey from a "hard" science like physics to economics which, as a social science, needs a different approach. I don't think that I now have a very different sense of econometrics from that of my colleagues, but I have had an undoubted advantage due to the technical training of a physicist.

How did you come to the idea of the J-test?

Well, James was teaching an advanced econometrics course, and I was sitting in on it, since I had always been interested in econometrics, even if I had done no research in the field. One day, he came to my office with a suggestion that, after we had discussed it and refined it, ended up as the J-test. We enjoyed that collaboration so much that we are still working together.

Do you use bootstrapping mostly on cross-section analyses or time series analysis?

The proper answer to this question is: No. I have to confess that my main interest in the bootstrap is theoretical, although I do use it, and encourage its use by other people, in practical applications. But I don't see any reason to suppose that the bootstrap can't be applied equally successfully with cross-sectional data and time series. I've just finished refereeing a paper in which bootstrap methods are applied to panel data, thereby combining both sorts.

In the beginning of your research period, you did a lot of work on differential geometry. Did this form a basis for your later work on bootstrapping?

Not really. My interest in differential geometry goes all the way back to my physics days. I remember being very excited when I discovered that many statistical problems could be treated using a Hilbert space setup very similar to what is used in quantum mechanics. Subsequently, although I still think that geometric thinking is immensely valuable in statistics and econometrics, I have been a bit disappointed that differential geometry has not led to any big new insights in those fields.




Maybe I should be a bit more precise. The local structures of differential geometry make it possible to express and work with estimators and test statistics very conveniently. It is often a lot easier to obtain results and understand them using this local geometry. But the real meat of differential geometry, from the mathematical or physical perspective, is in the global structures that can be built within its framework. Although there is a quite large literature on differential geometry in statistics, I don't think that global differential geometry has had anything very useful to say to us beyond what we get from local considerations. Anyway, no, I haven't (yet) found any useful applications of differential geometry to the study of the bootstrap.

Could you explain your work on inference methods, for example on income inequality?

If you want the long answer to this question, then you'll have to take the course I'm developing for next term at McGill! I think it's still true that a good deal of the work being done by economists on income inequality and poverty makes no use of statistical inference. It's just assumed that we have complete data on the distributions we want to study, and issues related to sampling are often set aside. Sometimes, this is reasonable, but sometimes it is not. The short answer to your question is that the distribution of a random sample drawn from a population is a consistent estimate of the population distribution, with a theoretically well-founded sampling distribution. The rest of my work in this field starts from that basis, and uses various tricks that let you get consistency and asymptotic normality of indices of inequality or poverty, or other related quantities, by considering averages of IID observations.

In your work you make use of so-called double length regressions. What is basically the idea behind this type of regression and when is it used?

One of the key ideas behind a lot of my research in econometric theory is that many models, whether linear or nonlinear, can be formulated in such a way that they have, locally, the structure of a linear regression. When that is so, we can usually construct artificial linear regressions that correspond to these more complicated models. These artificial regressions can then be used for many purposes, including bootstrapping as well as testing and computing estimated covariance matrices.


The first double length regression that James and I came up with was designed to work with models in which the random elements are Gaussian, but the observed variables are nonlinear functions of these random elements. The double nature of the artificial regression is needed so as to take account of the nonlinearity. Although there are double length regressions of a different sort, as well as triple and even greater length regressions, the ones we first developed are mostly useful with models estimated by maximum likelihood with continuous dependent variables.

What are you going to do when you are retired?

At present, I am half retired, in the sense that I have been pensioned off from my job in France. There is no mandatory retirement age in Canada, and so I have made my job at McGill full time, instead of taking unpaid leave each year to go to France. This doesn't mean that I don't still spend time in France, although not so much as before. Really, the only change that I have noticed so far is that I no longer teach undergraduate courses in France. As to retirement in the full sense, I have made no plans for it. I firmly believe that retirement can be bad for the health, in the sense that, if you give up the things that interested you and kept you busy before retirement, you risk being isolated and bored. I have seen too many people in that situation come down with the diseases that then killed them. I'd rather keep going for as long as I am physically able to do so.

You have taught econometrics for many years on both sides of the Atlantic, in particular in Canada and France. What can you say about the main differences between the structure of the university programs in the old and in the new world? Do you see distinctive advantages or disadvantages? Do they have large effects on the levels and skills the students develop?

In North America generally, not just in Canada, there are universities of varying levels. This is probably a good thing, since not everyone is ready for what is expected of the students and faculty in the high-powered universities. In France, the universities are of varying levels as well of course, but this fact is not officially recognised by the Education Ministry. In Canada, governments have much less to say about the internal affairs of universities than in France, and I suspect than in most European countries. As a result, no one is offended when it is assumed that the University of Toronto is better than some university (no names here!) in rural



Nova Scotia. It's just a fact of life. In France, though, it can be difficult, despite various efforts, to build centres of excellence, because the official theory is that we're all equally excellent. That said, historically the French education system has been very successful, and to this day, research in France does well in international comparisons. But I would be very fearful for it if it were not for the major changes being, often half-heartedly, undertaken in order to comply with European norms. I hope that these much needed changes will keep the system from becoming moribund. France will need to watch out, as well, to avoid losing all its best people to better-paying jobs elsewhere, especially in England, where academic salaries are much higher. I used to say that the French students were better prepared mathematically than their Canadian counterparts. I'm sorry to say that this is no longer true, and not because the Canadian students have become better prepared. It makes life a bit difficult for people trying to teach econometrics, a quantitative discipline that makes much use of basic mathematics. I have to say, though, that the McGill undergraduates are a very well prepared bunch. We are very fortunate in this, since the same cannot be said of more than another two or three Canadian universities. At the graduate level, things are rather different. Good Canadian students are often - I'm tempted to say almost always - attracted by the reputations of the big American schools for their graduate work. This is understandable, since the best American schools are indeed really good places, but it means that the students who stay in Canada for their Ph.D. are, with some honorable exceptions, not the very top students. In France, it is still unusual for a student wishing to do a Ph.D. in econometrics to go abroad to do so, although it is happening more and more. Not so much in Provence! As a result, the very brightest Ph.D. students I have supervised have been in France.

For a substantial number of years now, few Dutch students have shown the ambition to embark on a Ph.D. project in econometrics. At the University of Amsterdam, as at many North American universities, most Ph.D. students in this field come from Asia or from Eastern Europe. What is the situation at McGill and in Marseille? What would be the major causes for these tendencies? What to do about it?

Well, it's hard to know how many Canadian students do a Ph.D. in econometrics, since, his-

torically at least, many of them have done so in the US. When I was at Queen's, until 2002, the majority of Ph.D. students in the economics department were Canadians, from across the country. The non-Canadians typically included some Americans, Australians, New Zealanders, and of course people from Britain - all English-speaking. That has changed. There are fewer Ph.D. students altogether at Queen's in economics, and more of them are Asian. Here at McGill, we have still fewer Ph.D. students. I reckon about half of them are Asian. We haven't seen very many students from Eastern Europe as yet. In Marseille, there are some Asians, but not many. There are more Eastern Europeans, but the main non-French block is North African. Interestingly, the place that has most successfully attracted black African French speakers, of whom there are many, is the University of Montreal. While we in France are going over more and more to graduate instruction in English (although I still teach in French myself), at the U of M (as we all call it) most graduate courses are still given in French. The problem you bring up is not really a problem in France. Until recently, France produced too many French economics Ph.D.s relative to the number of job openings for Ph.D. economists in France. The market has sent its signal, and so there are fewer French students competing for grants to do a Ph.D. in economics, but there's no shortage. What to do about the problem in the Netherlands, or in the UK, where according to all reports, the situation is worse, is not obvious. But until there are no students at all embarking on a Ph.D. in econometrics, it's probably not a serious worry. In England, many of their foreign Ph.D. students go on to take jobs in English universities afterwards, and so econometrics remains healthy, as it does in the Netherlands.

How many Ph.D. students did you supervise yourself? What were their thesis topics? What type of career did they follow after their thesis defense?

Not that many. For many years at Queen's, the Ph.D. students claimed they were afraid of me, or at least of what I would expect them to do. In recent years, either I have seemed less demanding, or the students have become braver, since I have supervised more students in the last ten years than in all of my earlier career. I think that all but one of the Ph.D. students I have supervised did a thesis in econometrics. Most have had a fairly strong theoretical orientation, but all of them, except perhaps one,




also had empirical applications. A recent Ph.D. thesis I supervised had only a little theory, but a good deal of work in the econometrics of health, a subject that seems in need of a few more good people! I don't know what the final career choices will be of the students who have recently graduated or are about to do so. Most of the earlier ones ended up in academic jobs, although one, a German, moved to Canada, and has become very successful with a finance company.

What is the typical format of a working day (or week) for you?

I have always avoided teaching in the morning before about 11 at the earliest. I normally show up at the office just before noon, and stay there until after 10 at night. Then I go home, make dinner for me and my wife, and sit up until around 2 in the morning. I don't distinguish Saturdays and Sundays from other days, unless there is something special on.

Many of your contributions to the theory and methods of econometrics have been in developing model specification tests and refinements of such test procedures to improve their effectiveness in finite samples. What are your expectations regarding further future developments? Is there much further refinement possible? Will computer programs be written that take over the role of scholars who have to perform and interpret all these tests?

The promise of artificial intelligence remains largely unfulfilled. When someone asked Claude Shannon, of information theory fame, whether he thought machines could think, he replied, "Sure. We're machines and we think." While I agree with the basic premise, I think we have a long way to go before we understand how we think. Until we do, it's unlikely that we'll know how to program computers to think. When a procedure is well understood and can be implemented mechanically, then it should be programmed and used without further thought. We seldom think deeply about what the computer is doing when we ask it to run an ordinary least squares regression. Where intelligence is needed is in seeing when the programmed procedures are appropriate and when inappropriate. When nothing seems appropriate, then it's time to develop something new. There is still a lot we don't understand fully about statistical procedures, the bootstrap in particular. I don't expect that we'll sit around watching computers do all the work until that state of affairs is rectified. Once it is, then that


very fact will open up a whole new set of interesting questions.

What is your favorite theorem?

Pythagoras' theorem, of course.

What was your motivation for the book "Estimation and Inference in Econometrics", which you wrote together with Mr MacKinnon? Did you have the idea that such a book was missing in the literature?

We certainly did feel a lack in the set of econometrics textbooks available in the late 1980s, when we embarked on writing the book. We were starting to do a good deal of graduate teaching in econometrics - I had just taken the job in Marseille - and we felt that we had a way of thinking about econometrics that could illuminate the presentation of the subject in a graduate text. It took a long time to turn our idea into an actual book, and we had to rethink lots of things along the way. I think we both learned more from writing the book than our students learned from reading it! Nowadays, of course, there are many more adequate texts available, and some of them have clearly drawn some inspiration, at least, from ours. Since econometrics is a rapidly evolving discipline, parts of the book are now out of date. For instance, I forbid students to read the section on the bootstrap! It was easier to write our second book, not just because it is aimed at a somewhat lower level, but because, as with Ph.D.s, there are economies of scale in writing books.


High Potential M/F: HYPE Masterclass at Ordina Finance Consulting

Our goal is to become national champion in our specialisms, and we can only become champion with the best people. As a young talent, you get the chance with us to be part of this 'winning team'. In October 2008 Ordina Finance Consulting starts a Masterclass in Business Consultancy for Mortgages, Pensions and Insurance for high potentials. The Masterclass consists of intensive training in the skills you need as a Business Consultant. Within your field of Mortgages, Pensions & Insurance you acquire up-to-date knowledge in a challenging and demanding programme. The training is given in cooperation with highly renowned training institutes and the leading experts in the fields of mortgages, pensions and advisory skills. As part of the Masterclass you go on a study trip abroad and get to know the markets of your field outside the Netherlands. There will be a good mix of theory and practice, with coaching from real professionals in the field. After eight weeks you are ready to put your newly acquired knowledge and skills into practice as a Business Consultant with the most prominent clients.

WHAT DO WE OFFER? This is your opportunity to choose a career path that matches your talents, while building a healthy balance between work and private life. Ordina offers an ambitious and inspiring environment with many opportunities and freedoms. Ordina Finance Consulting is a group of consultants and programme managers working for the top 10 financial services providers on important, large-scale change programmes. Finance Consulting is part of Ordina, the specialist knowledge provider in the field of Consulting, ICT and Outsourcing. Ordina NV is listed on Euronext Amsterdam, is part of the Midkap Index and has more than 5,700 employees.

WHO ARE YOU? You have completed an academic degree; you have at least 1 year of work experience; you know how to combine work and private life; and you have the ambition to become a Senior Business or Management Consultant. Do you want to be that High Potential in our HYPE Masterclass? Contact Renée Velsen on 06 - 55 32 58 05 or via renee.velsen@ordina.nl.

You will be groomed for the top, naturally with the corresponding top remuneration. At the start of the Masterclass you immediately receive a permanent contract, and at Ordina you can determine your own terms of employment, so that they fully meet your wishes.



Actuarial Sciences

On OLG-GE Modelling in the Field of Pension Systems: An Application to the Case of Slovenia

The economic sustainability of social security systems is currently under extreme pressure due to decreasing fertility rates and increases in life expectancy. These factors contribute to an ageing populace whose share of workers is decreasing and whose share of social security recipients is increasing (cf. OECD, 2000). These findings have led to an anticipation of increases in traditional social security benefits and the introduction of new forms of old-age insurance. It is therefore no surprise that among the key topics of social security reform is the development of a sustainable, efficient and fair system of funding social security in an environment of expected further ageing of the population. Due to its weight in the system of public finances, special emphasis is being placed on the pension system. As such it is also the focus of our research.

Miroslav Verbič was born in 1979 in Dramlje, Slovenia. He earned his MSc degree in econometrics in 2006 at the University of Amsterdam and his PhD degree in economics in 2007 at the University of Ljubljana. His main areas of research are macroeconometric modelling and OLG-GE modelling, in which he has published several SSCI articles. He is employed as a research fellow at the Institute for Economic Research in Ljubljana and as a lecturer of macroeconomics at the University of Ljubljana.

In the 1990s it became apparent to the Slovenian government that, due to unfavourable forecasted demographic developments, the former pension legislation would leave the social security system ill-equipped to fulfill anticipated requirements. In 1996 this became distinctly obvious when, for the first time, the state pension fund needed additional financing from the central budget. This was sufficient to start intense preparations for Slovenian pension reform, which was adopted in the form of the 1999 Pension and Disability Insurance Act (PDIA) and implemented from the 1st of January 2000. With the gradual implementation of the PDIA, the second pension pillar becomes increasingly important. People will become less dependent on the first pension pillar at the point of their retirement. However, as the second pension pillar is mainly voluntary in Slovenia, there have been reservations in regards to whether the present amount of supplementary pension savings will be sufficient to compensate for the


deterioration of rights of the first pension pillar. In particular, we were interested in studying the effects on the welfare of different generations and on the sustainability of public finances that would be made by varying the parameters of the current Slovenian pension system and the introduction of mandatory supplementary pension insurance. Effective analysis of the consequences of economic policy on social development requires an appropriate tool; one that is capable of reflecting the complex consequences, to both household and national budget, of the impact from overall and individual social and tax policy measures. Overlapping-Generations General Equilibrium (OLG-GE) models currently represent the most advanced form of numerical general equilibrium models and were suitable for our research. The constructed model, SIOLG 2.0, is a dynamic OLG-GE model of the Slovenian economy. This model is based on the social accounting matrix (SAM), data on the demographic structure of the population, expected future demographic developments, characteristics of Slovenian households, and the decomposition of households within generations (cf. Verbič, 2007). The model was developed for the explicit purpose of analysing the sustainability of the Slovenian public finances. However, the model may also be utilised more generally to analyse any part or sector of the economy.



The OLG-GE Modelling Framework

The starting points of the OLG-GE model are the life cycle theory of consumption by Modigliani and Brumberg (1954) and the permanent income hypothesis by Friedman (1957). These are in fact special cases of the more general theory of intertemporal allocation of consumption (Deaton, 1992). Unlike Keynes's theory of consumption and saving behaviour, which is based solely on current income, the OLG-GE approach models consumption and savings by deriving them from intertemporal optimization behaviour that depends on total lifetime income. In the simplest case of unchanged income until retirement (cf. Modigliani, 1986), consumers save during their active lifetime and spend their savings after retirement in order to maintain current levels of consumption. Retirement is therefore the raison d'être for saving. OLG-GE modelling was first proposed by Samuelson (1958) and Diamond (1965). However, it did not become an established means of economic modelling until Auerbach and Kotlikoff (1987), who constructed a relatively large and detailed computable model of the American economy. It was based on a detailed decomposition of the consumption side of the model, which means that, unlike Ramsey-type models, the consumers live long enough to live at least one period with the next generation of consumers but have finite life-spans. The determination of consumers by their birth cohort enables analysis of inter-generational effects, making OLG-GE models especially valuable for the analysis of tax policies, pension policies and other social policies. The dynamic SIOLG 2.0 model comprises not only the standard model structure of a national economy, but also the demographic and pension blocks. It is within this framework that the first and the second pillar of the Slovenian pension system were modelled. The model is built within the General Algebraic Modelling System (GAMS), which has become both the most widely used programming language and the most widespread computer software for construction and solving of large and complex general equilibrium models.
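The intertemporal optimization at the heart of this class of models can be stated compactly. The display below is a generic, stylized version of the household problem in an Auerbach-Kotlikoff-type OLG model, not the exact specification of SIOLG 2.0; the symbols (instantaneous utility u, discount factor β, wage w, interest rate r, labour supply ℓ_a and pension benefit b_a) are illustrative only.

```latex
\max_{\{c_a\}_{a=0}^{A}} \; \sum_{a=0}^{A} \beta^{a}\, u(c_a)
\quad \text{s.t.} \quad
k_{a+1} = (1+r)\,k_a + w\,\ell_a + b_a - c_a, \qquad
k_0 = 0, \;\; k_{A+1} \ge 0,
```

where $c_a$ is consumption at age $a$, $k_a$ asset holdings, $\ell_a$ labour supply (zero after retirement) and $b_a$ the pension benefit. Saving during working life and dissaving after retirement, the life-cycle pattern described above, is the profile this problem typically generates.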

Within the GAMS framework, the dynamic general equilibrium model is written in Mathiesen's (1985) formulation of the Arrow-Debreu (1954) equilibrium model, i.e. as a mixed complementarity problem (MCP); a generic statement of this format is sketched further below. The key advantage of this formulation is the compact presentation of the general equilibrium problem, achieved by treating variables implicitly, thus significantly reducing the computation time for higher-dimensional models. Namely, the mathematical program includes both equalities and inequalities, where complementarity slackness holds between system variables and system conditions. Functions of the model are written in calibrated share form, a reasonably straightforward algebraic transformation which nevertheless considerably simplifies the calibration of the model. A recent version of the PATH solver, renowned for its computational efficiency, was used to solve the model and so achieve convergence.

The Building Blocks of an OLG-GE Model

The Households

In the model consumers are deemed to live the periods ascribed by their life expectancies at birth. On the assumption that life expectancy is approximately 80 years, and that the active lifetime period starts at the age of 20, there are 60 generations in each period of the model. There is a new cohort of consumers born in each such period, which increases the population size, while at the same time other

"Unfavourable demographic changes will lead to a reduction in the active working population and hence the labour endowment" enables analysis of inter-generational effects, making OLG-GE models especially valuable for the analysis of tax policies, pension policies and other social policies. The dynamic SIOLG 2.0 model comprises not only the standard model structure of a national economy, but also the demographic and pension blocks. It is within this framework that the first and the second pillar of the Slovenian pension system were modelled. The model is built within the General Algebraic Modelling System (GAMS), which has become both the most widely used programming language and the most widespread computer software for construction and solving of large and complex general equilibrium models.

Consumers are observed in five-year intervals within households. Households seek to maximise their expected lifetime utility, subject to income constraints shaped by the need to save for retirement and to support children. Households are differentiated in the model according to consumers' year of birth, income and household size; within each cohort a distinction is made between a couple without children and the “nuclear” family, defined as a household with the standard average of two children. Five income profiles representing different income brackets are also given. Consequently, there are in total ten household versions in the model (two household types times five income profiles), facilitating analysis of the intra-generational effects of different economic policies.
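The overlapping-cohort bookkeeping described above can be sketched in a few lines of code. The numbers below (entry cohort sizes, a single common survival probability, annual ageing) are invented purely for illustration; the article's model works with five-year cohorts and with the demographic projections described in the text.

# A minimal sketch of overlapping-cohort bookkeeping: each period a new
# cohort enters at age 20, existing cohorts age, and the oldest cohort
# drops out. All figures are illustrative, not the SIOLG 2.0 projections.

def dependency_ratios(entry_cohorts, survival=0.99, entry_age=20, max_age=80):
    """Old-age dependency ratio (65+ relative to 20-64) per period."""
    population = {age: 100.0 for age in range(entry_age, max_age)}  # 60 cohorts
    ratios = []
    for newborns in entry_cohorts:
        # age every cohort by one year; the oldest cohort leaves the model
        population = {age + 1: size * survival
                      for age, size in population.items() if age + 1 < max_age}
        population[entry_age] = newborns
        workers = sum(s for age, s in population.items() if age < 65)
        retired = sum(s for age, s in population.items() if age >= 65)
        ratios.append(round(retired / workers, 4))
    return ratios

# Shrinking entry cohorts (an ageing population) gradually push the ratio up:
print(dependency_ratios([100, 95, 90, 85, 80]))

Even this toy projection shows the mechanism the article relies on: fewer entrants relative to retirees raise the dependency ratio and hence the pressure on a PAYG-financed first pillar.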





The Labour Market
The factors of labour volume and labour productivity growth are given exogenously. Changes in wages are reflected in changes in labour supply. The consumption of households with children is additionally corrected for the extra costs per child, provided the child is born within the 20-40 year age bracket of the household. Following retirement, households are modelled with two persons and then, from the 11th year onwards, with one adult. In the capital markets, the saving decisions of households affect the investment decisions of firms and thus future production. These effects feed back to the product market via lower prices and to the labour market via higher productivity, which in turn leads to higher wages and so to higher household income. Both effects are straightforward to analyse with a dynamic OLG-GE model.

Perfect Foresight and the Demographics
The perfect foresight assumption in the forward-looking model specification implies that households perform intertemporal optimization of the present value of total future consumption. In other words, consumers have full information at their disposal, on average adopt the right decisions and, as espoused by rational expectations theory, are familiar with future modifications of key economic indicators. They are able to anticipate new policies and to prepare themselves for future changes. The assumptions of equilibrium in all markets and of achievable sustainable economic growth enable the analysis of different scenarios that cause deviations from the reference growth path and changes in macroeconomic and microeconomic indicators. This is especially important when analysing social security, because it makes it possible to project the effects of demographic changes on the social security system. For this analysis three variants of demographic projections are available: the low variant, the high variant and the reference (medium) variant, relative to which the other two are defined. The low variant combines lower fertility with lower life expectancy and lower net migration than the reference variant, whilst the high variant combines higher fertility with higher life expectancy and higher net migration.

The Firms
The assumption of perfect foresight is also valid for firms, which maximise profits in an environment of perfect competition. The production technology is described by a constant elasticity of substitution (CES) production function.
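In its generic two-input form (the exact nesting structure and parameter values of SIOLG 2.0 are not reproduced here), a CES production function reads

Y = A\left[\alpha K^{\rho} + (1-\alpha) L^{\rho}\right]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho},

where K and L are capital and labour inputs, A is a scale parameter, alpha a share parameter, and sigma the (constant) elasticity of substitution between the inputs; the limit rho -> 0 gives the Cobb-Douglas case.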

The number of production sectors in the model depends on the availability of the input-output table for the base year, which means that 60 sectors of the standard classification of activities (SCA) are available for discretionary aggregation. Government spending depends on economic and population growth and is financed by revenues from personal income tax, capital income tax, value-added tax and import duties. The different revenue sources of the Slovenian system of public finances provided a range of options for funding various economic policies in the simulation phase of the modelling.

The Pension System
The modelling of the first pension pillar, financed on a pay-as-you-go (PAYG) basis, was designed to capture the key pension system parameters that are usually modified in pension reforms. The model emphasises the cash flow of the mandatory pension insurance institution, the relationship between the pension base and pensions, and the process of adjusting pension growth to wage growth. The modelling of the fully-funded (FF) second pillar focuses on the implementation of the liquidity constraint, making use of supplementary pension profiles and the ratio of premia paid in to pensions paid out from supplementary pension insurance. The so-called “total pension” was introduced, representing the sum of the pensions from the first and second pillars; at every point households adjust the scope of their labour supply and their current consumption toward a target total pension. This creates a certain volume of supplementary pension saving which, if the target total pension is set at a level different from the reference level, can be treated as mandatory supplementary pension insurance.

The Foreign Sector
The dynamic SIOLG 2.0 model is completed using Armington's (1969) assumption of imperfect substitutability, whereby commodities are distinguished according to whether they were produced domestically or abroad. Demand for imported products is derived from firm cost minimization and consumer utility maximization. With regard to exports, domestically produced products are sold at home and abroad but are likewise treated as imperfect substitutes. Slovenia is assumed to be a small open economy, implying that changes in the volumes of imports and exports do not affect the terms of trade. Given the intertemporal balance of payments constraint, international capital flows are endogenous.
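The Armington assumption is typically implemented with another CES aggregator; a generic single-good version (again illustrative rather than the exact SIOLG 2.0 nesting) is

X = \left[\beta D^{\rho} + (1-\beta) M^{\rho}\right]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho},

where the composite good X combines a domestically produced variety D and an imported variety M, and the Armington elasticity sigma governs how strongly relative demand responds to the relative price of domestic and foreign goods. For the split of domestic output between the home and export markets, models of this type commonly use an analogous constant elasticity of transformation function.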




Concluding Remarks

The model presented herein involves a dynamic framework of a national economy, which includes household utility maximisation under the assumption of perfect foresight. The model allows for distinctions according to household size, income level, household lifespan, and overlapping generations within households. This kind of modelling framework facilitates the monitoring and forecasting of the complex short-term and long-term consequences of demographic changes, such as an ageing population, for individual categories of public finance. Furthermore, it allows for analysis of the impact of changes in the taxation and social security system on the flexibility, competitiveness and hence growth of the economy.

The advantage that OLG-GE models offer in comparison to other modelling tools, such as actuarial models of pension reforms and generational accounting models, does not lie in modelling specific socio-economic phenomena such as the demographic slowdown of GDP growth. It lies in their ability to model general equilibria which, being closer to the true functioning of the actual economy, make the results of the model more realistic. This entails modelling mutual interactions and feedback effects between macroeconomic aggregates that simpler models are not able to capture. This is seen in the analysis of the pension system, where a link has to be established between the labour endowment and the price of labour. Unfavourable demographic changes will lead to a reduction in the active working population and hence the labour endowment, which in turn pushes the price of labour (wages) above its steady-state growth path. Since pension dynamics depend on the dynamics of wages, this also means higher pension expenditure. Modelling such relationships is vital for a realistic and accurate analysis with a model of this kind.

Naturally, we are aware that economic models are merely tools intended to replicate and analyse a specific economic theory or a part thereof. As such they will always be an incomplete and deficient representation of reality. The same applies to our dynamic OLG-GE model. However, with regard to its capacity to capture socio-economic reality, and in terms of the currently available levels of socio-economic analysis, we conclude that at present there is no more complete deterministic instrument than a dynamic OLG-GE model to meet the objectives set herein.
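The wage-pension feedback described above can be made concrete with simple PAYG accounting; this back-of-the-envelope identity is added here for illustration and is not taken from the SIOLG 2.0 model itself. With a contribution rate tau, average wage w_t, employment L_t, number of pensioners N_t^R and average pension p_t, period balance of the first pillar requires

\tau\, w_t L_t \;\ge\; p_t N_t^R
\quad\Longleftrightarrow\quad
\tau \;\ge\; \frac{p_t}{w_t} \times \frac{N_t^R}{L_t},

i.e. the contribution rate must cover the benefit ratio times the dependency ratio. If pensions are indexed to wages, the benefit ratio stays roughly constant, so a rising dependency ratio translates one-for-one into a higher required contribution rate, a larger budget transfer, or lower pensions; this is precisely the trade-off the reform scenarios explore.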


References

Armington, P.S. (1969). A Theory of Demand for Products Distinguished by Place of Production. IMF Staff Papers, 16(1), 159-178.

Arrow, K.J. and Debreu, G. (1954). Existence of an Equilibrium for a Competitive Economy. Econometrica, 22(3), 265-290.

Auerbach, A.J. and Kotlikoff, L.J. (1987). Dynamic Fiscal Policy. Cambridge: Cambridge University Press.

Deaton, A. (1992). Understanding Consumption. Oxford: Clarendon Press.

Diamond, P.A. (1965). National Debt in a Neoclassical Growth Model. American Economic Review, 55(5), 1126-1150.

Friedman, M. (1957). A Theory of the Consumption Function. Princeton: Princeton University Press.

Mathiesen, L. (1985). Computation of Economic Equilibria by a Sequence of Linear Complementarity Problems. In A.S. Manne (ed.), Economic Equilibrium: Model Formulation and Solution, 144-162. Amsterdam: North-Holland.

Modigliani, F. and Brumberg, R. (1954). Utility Analysis and the Consumption Function: An Interpretation of Cross-Section Data. In K.K. Kurihara (ed.), Post Keynesian Economics, 388-436. New Brunswick, NJ: Rutgers University Press.

Modigliani, F. (1986). Life Cycle, Individual Thrift, and the Wealth of Nations. American Economic Review, 76(3), 297-313.

Organisation for Economic Co-operation and Development (OECD) (2000). Reforms for an Ageing Society. Paris: OECD.

Samuelson, P.A. (1958). An Exact Consumption-Loan Model of Interest With or Without the Social Contrivance of Money. Journal of Political Economy, 66(6), 467-482.

Verbič, M. (2007). A Dynamic General Equilibrium Model of the Slovenian Economy with Emphasis on the Pension System. Ljubljana: University of Ljubljana.


Puzzle

As always, we present two puzzles from Sam Loyd to our beloved readers. Of course, we first give the solutions to the puzzles in the previous Aenorm.

Tell mother's age
Let T be the age of Tommy, F the age of his father and M the age of his mother. We know the following facts:

T + F + M = 70   (1)
6T = F           (2)

We also know that when their combined ages are twice what they are now, Tommy's father will be only twice as old as Tommy. Since this will be the case in 70/3 years, we know that:

2(T + 70/3) = F + 70/3   (3)

Combining (3) with (2) we find that T = 35/6 and F = 35. Of course we want to know the age of the mother, which follows easily from (1): M = 175/6, or 29 years and 2 months.
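For readers who prefer to let the computer do the bookkeeping, the three equations can be checked in a few lines of Python (using sympy; the variable names are ours):

from sympy import symbols, Rational, solve

T, F, M = symbols('T F M')
equations = [T + F + M - 70,                                   # (1)
             6*T - F,                                          # (2)
             2*(T + Rational(70, 3)) - (F + Rational(70, 3))]  # (3)
print(solve(equations, [T, F, M]))   # {T: 35/6, F: 35, M: 175/6}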

Jealous Couples
Describing the male econometricians as ABCD and the female econometricians as abcd, the 17 trips of the boat can readily be followed:

Shore       Island   Over
ABCDabcd    0        0
ABCDcd      0        ab
ABCDbcd     0        a
ABCDd       bc       a
ABCDcd      b        a
CDcd        b        ABa
BCDcd       b        Aa
BCD         bcd      Aa
BCDd        bc       Aa
Dd          bc       ABCa
Dd          abc      ABC
Dd          b        ABCac
BDd         b        ACac
d           b        ABCDac
d           bc       ABCDa
d           0        ABCDabc
cd          0        ABCDab
0           0        ABCDabcd

Making a perfect Square
The editors of Aenorm are very interested in origami, especially when it involves some kind of mathematical understanding. Suppose we have a piece of paper in the shape of a mitre. All econometricians like perfect squares, so we would like to convert this shape into one. We can use a pair of scissors, but the challenge is to cut the mitre-shaped piece of paper into the fewest possible number of pieces which will fit together and form a perfect square.

Six bottles of beer on the wall
At the University of Amsterdam, a professor decided to make some extra money by selling beer and wine to students. He started his new business with an odd lot of beer and wine. Of course the VSAE was down on it like a flash and bought 140 euro worth of beer and wine, paying twice as much per gallon for the wine as for the beer. In doing so, they left the professor with only one barrel. Now, see if you can work out what that barrel was worth.

Solutions
Solutions to the two puzzles above can be submitted up to September 1st. You can hand them in at the VSAE room (C6.06), mail them to info@vsae.nl, or send them to VSAE, for the attention of Aenorm puzzle 59, Roetersstraat 11, 1018 WB Amsterdam, Holland. Among the correct submissions, one book token will be awarded. Solutions may be submitted in either English or Dutch.



Facultive

University of Amsterdam

The largest event hosted by the VSAE during the last few months was the Econometric Game. On the 7th and 8th of April more than a hundred students from 22 different universities gathered in Amsterdam to compete in solving an econometric case. After two days of hard work, the jury pronounced the University of Cambridge the winner. The University Carlos III Madrid came second and the team of Copenhagen completed the podium.

Shortly after the Econometric Game, 24 students from the VSAE went on a trip to Mexico City. There they visited several companies and institutions to gather information for a case for Greenpeace Mexico. Two weeks after the journey, their final report was presented to Greenpeace, who were very pleased with the results.

Now that the holidays are approaching, most activities have come to a rest. After the summer break we will start by introducing the university and our study association to all the new students. The next major event on our agenda is the National Econometricians Soccer Tournament on the 2nd of October, when 200 students in econometrics, actuarial sciences and operational research from six different universities will compete in a soccer tournament.

Agenda
25-27 August: Introduction Days
9 September: Monthly drink
2 October: National Econometricians Soccer Tournament
7-8 October: Beroependagen (career event)


Free University Amsterdam

Vacation time has come: it is quite quiet in our board room. After the summer Pieter will take over and try to lead Kraket to even higher levels. This will be hard, because we can look back on a very good year. The Kraket weekend and the year closing in particular were successful activities; not that the other activities weren't, but these were the most recent ones and we are very proud to have organized them. After the summer break there will be the famous IDEE-week and the introduction weekend, and of course, after that, a whole series of brand-new activities. In closing, I wish the next board a very good year. I am sure that they will do well and will organize many successful activities. The best of luck and a good holiday to everyone!





