MET | Medium Econometrische Toepassingen
Volume 20 | Issue 1 | 2013

Contents

Gaining competitive advantage with news information on the stock exchange (Competitieve voordelen behalen met nieuwsinformatie op de beurs)
Viorel Milea and Frederik Hogenboom

Regret, rejoice and convex demand for quality: Paving the way for superstars
Caspar G. Chorus

The Driver Assignment Vehicle Routing Problem
Remy Spliet and Rommert Dekker

On Fit and Forecast of US Inflation: A Robustness Analysis using Extended Phillips Curve Models with non-filtered Data
Nalan Basturk, Cem Cakmakli, Pinar Ceyhan and Herman K. van Dijk

Index of Advertisers

Econometrie.com | Inside front cover
StudyStore | Inside back cover
PGGM | Back cover

Subscription

Higher year members of the Econometrisch Dispuut and its contributors are automatically subscribed to the Medium Econometrische Toepassingen. A stand-alone subscription costs €12 per year for students, €12 for private persons and €24 for companies and institutions. The subscription is automatically continued until cancellation. Unsubscribe by sending an email to the address of the editorial board.

Colophon

Medium Econometrische Toepassingen (MET) is the scientific journal of the Econometrisch Dispuut (ED), a faculty association for students of the Erasmus University Rotterdam. Website: www.met-online.nl
Editorial Board: Matthijs Aantjes, Marijn Waltman | Final Editor: Ruud van Luijk | Design: Haveka, de grafische partner, +31 - (0)78 - 691 23 23 | Circulation: 600 copies | ISSN: 1389-9244
Address Editorial Board: Erasmus University Rotterdam | Medium Econometrische Toepassingen | Room H11-02 | P.O. Box 1738 | 3000 DR Rotterdam | The Netherlands | met@ectrie.nl | Acquisition: Roman Gorlov | +31 - (0)10 - 408 14 39
©2013 - No portion of the content may be directly or indirectly copied, published, reproduced, modified, displayed, sold, transmitted, rewritten for publication or redistributed in any medium without permission of the editorial board.



Gaining competitive advantage with news information on the stock exchange (Competitieve voordelen behalen met nieuwsinformatie op de beurs)

Viorel Milea, Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam
Frederik Hogenboom, Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam

Communication has always played a crucial role in financial markets. More than three centuries ago, the horse was the linchpin of financial markets in this respect. Technological developments at the beginning of the twentieth century changed this drastically, and the telephone became the primary means of communication.

By the end of the twentieth century, financial markets had even become strongly dependent on information and communication technologies (ICT). At the same time, in 1971 the National Association of Securities Dealers Automated Quotation (NASDAQ) became an electronic market where quotes and orders could be placed and retrieved directly by computer. This development facilitated the automation of the order matching mechanism, which led to a decentralization of market access. Aided by new regulation, the electronic financial markets of the twenty-first century brought an increase in the availability and openness of equity trading, closely followed by growth in the number of market participants who managed to benefit from the lower transaction costs. The resulting peak in traded volume led to an increase in liquidity, which supported institutional traders in splitting large transactions into smaller orders with a lower market impact. These developments were also accompanied by increasing competition between exchanges. The large volume and wide availability of financial data - typical of today's developed financial markets - lead not only to increased transparency of these markets, but also to the development and use of increasingly complex models. Partly due to the exponential growth in computing power - available at ever lower relative cost - automated systems play an increasingly dominant role in financial trading. All of the aforementioned factors contribute to the need to radically adapt the business models of institutional intermediaries in the financial sector. At first sight, today's electronic financial market may seem only slightly changed, but it is in fact a completely new environment for all market participants. In 2006, more than a third of all equity trading in the United States was executed by automated trading algorithms. Recent estimates put the contribution of such


algorithmic trading to equity trading as high as 77% in the United States and between 30% and 50% in Europe. Machines clearly constitute a new type of trader. But how far does their influence reach? The world found out on May 6, 2010, when the Dow Jones Industrial Average plunged some 600 points in less than five minutes, after which the market recovered to its pre-crash level within twenty minutes. This event has gone down in history as the "Flash Crash" and is largely attributed to (poorly designed) automated trading algorithms. The rapid growth in the use of algorithmic trading and the advancement of such technologies contribute to the declining profitability of these trading algorithms, a trend that looks set to continue over the coming decade. Competitive advantages in financial markets will most likely originate from developments in News Analytics. The information sources usable for these automated techniques can be very diverse. We distinguish mainstream media, press releases from financial entities, technical reports from organizations and institutes charged with the supervision of financial markets, forums and social media, and finally a source that has not yet been used in academic work: dissident opinions. News Analytics encompasses both the (automated) collection and processing of news items and the use of the information contained in them. Such information is of great value in trading algorithms, since it allows action to be taken before the information has been incorporated into market prices. Given the potential competitive advantages to be gained from using news information in trading algorithms, it is desirable to stimulate interdisciplinary research at the intersection of language analysis, information systems, and economics. This can produce solid solutions that can be deployed in practice and that minimize as far as possible the usual problems related to complexity and speed.



Regret, rejoice and convex demand for quality: Paving the way for superstars

Caspar G. Chorus
Faculty of Technology, Policy and Management, Delft University of Technology

This paper presents a simple decision-making model that generates a convex demand for quality, and thereby indirectly predicts the existence of superstars. When compared to their competition, superstar artists, museums, or soccer players are much more popular than differences in quality would suggest at first sight. Although convexity of demand for quality is a known precondition for the existence of superstars, it remains unclear what mechanism might cause this imperfect substitution between different quality levels. The paper proposes a decision-making model where a decision-maker's anticipated satisfaction with a chosen alternative depends on the regret and rejoice associated with bilateral comparisons of quality levels between the alternative and all competing alternatives. Using an analytical example and numerical simulations the paper shows that the proposed model generates imperfect substitution between quality levels and the resulting convexity of demand for quality, thereby paving the way for the existence of superstars.


1. Introduction

The presence of so-called superstars has been observed in a variety of forms, including artists (Schulze, 2003), songs (Salganik et al., 2006), movies (Elberse, 2008), soccer players (Lucifora & Simmons, 2003), CEOs (Terviö, 2009), and museums (Frey, 1998).1 A common interpretation of the superstar phenomenon is that the difference in popularity between a superstar and its closest competitors is much larger than their underlying differences in quality: even when the quality of the second best 'star' is only slightly less than that of the superstar, the difference in popularity is often found to be very large. In the scholarly literature, the dominant explanation for this superstar phenomenon has been put forward in a seminal paper published some thirty years ago (Rosen, 1981). This paper suggests that superstars emerge due to the combination of a convex demand for quality (i.e., as quality becomes higher, quality differences result in increasingly and disproportionally large differences in demand) and economies of scale in production output (i.e., production costs do not rise in proportion to demand). The main theoretical argument put forward to support the convexity of demand functions is that "Lesser talent often is a poor substitute for greater talent. … hearing a succession of mediocre singers does not add up to a single outstanding performance" (Rosen, 1981, page 846). However, throughout the superstar literature, only little emphasis has been put on defining what exactly might cause this convexity in demand. Mainly for pragmatic reasons (see Rosen, 1981, page 847), it has been proposed that convexity may be assumed to be caused by a fixed cost of consumption per unit of quantity; this results in a competitive advantage for higher quality alternatives (resulting in convex demand) when consumers aim to minimize costs.

1 In the remainder of this paper the more general term 'alternative' is used to refer to superstars and their competition, to refrain from suggesting that the paper focuses on any particular type of choice context.


However, it seems unlikely from a behavioral viewpoint that the convexity of demand for quality is purely or even mainly caused by decision-makers' inclination to minimize consumption costs. Rather, it is to be expected that decision-makers' tastes for quality itself play an important role in determining the shape of the demand function as well. A natural taste-related assumption that would result in convex demand for quality is of course the assumption that decision-makers maximize utility and that utility itself is a convex function of quality. The assumption of a convex utility function would imply that as quality increases, an additional marginal increase in quality brings an increasingly higher amount of marginal utility. However, this assumption runs against intuition as well as against the empirically well-established Weber-Fechner law (e.g., Masin et al., 2009), which postulates that for a difference in a stimulus (such as quality) to be noticed, the difference must be bigger for bigger initial magnitudes of the stimulus (i.e., higher initial quality levels). This paper puts forward a potential explanation (or model) for the emergence of convex demand for quality which relies neither on assumptions regarding consumption costs2 nor on convex utility functions. The model builds on two assumptions regarding a decision-maker's choice behavior (see the next section for a more formal treatment): first, when evaluating an alternative from a choice set featuring alternatives with different quality levels, the model assumes that the individual uses the quality levels of competing alternatives as reference points. When the quality level of a competing alternative is higher than that of the considered alternative, the difference counts as (anticipated) regret; when the quality level of a competing alternative is lower than that of the considered alternative, the difference counts as (anticipated) rejoice. Second, the model assumes that the total anticipated satisfaction associated with a considered alternative equals the sum of all anticipated regrets and rejoices that are associated with comparing the considered alternative with each of its competitors.

2 In the remainder of this paper, consumption costs are assumed zero for reasons of clarity of exposition.


In other words, the model assumes that the anticipated satisfaction that is associated with a considered alternative increases when the number of worse-performing alternatives (and the extent to which they perform worse) increases, and decreases when the number of better-performing alternatives (and the extent to which they perform better) increases. This reference-dependent model of satisfaction implies that there is a satisfaction bonus associated with being the best of the set of alternatives (and increasingly so as the choice set gets larger), and a dissatisfaction penalty associated with not being the best. Especially in the context of entertainment-related goods (e.g., concerts, museums, spectator sports), which have traditionally been the main focus of superstar-related research, the notion that comparisons with competing alternatives drive satisfaction seems an intuitive conceptualization of behavior. Also note that the model's core premises build on empirical evidence, accumulated in the field of applied economics over the years (e.g., Simonson, 1989; Tversky & Simonson, 1993; Kivetz et al., 2004; Chorus, 2012), that decision-makers – when considering an alternative – use the quality levels of competing alternatives as reference points. Empirical evidence for the additive treatment in particular of reference-dependent tastes can be found in Chorus (2012).

2. An additive regret-rejoice model that generates convex demand for quality

In notation, let $S_{in}$ give the satisfaction which is perceived by decision-maker $n$ to be associated with a considered alternative $i$, for given perceived quality levels $q_i$ and $q_j$ of the considered alternative and its competing alternatives $j \neq i$, respectively, and the utilities $u_i$ and $u_j$ that are associated with the different quality levels (see further below for various assumptions that may be made concerning the shape of the utility function). Acknowledging that there may be unobserved and/or random variation in perceived satisfaction across alternatives and individuals, which may be represented by means of an additive error term $\varepsilon_{in}$, total satisfaction can be written as follows:

$$S_{in} = \sum_{j \neq i} (u_i - u_j) + \varepsilon_{in}.$$

When the decision-maker chooses by means of maximizing total satisfaction and when the error term $\varepsilon_{in}$ is assumed to be independently and identically distributed (Extreme Value Type I), the probability that individual $n$ chooses alternative $i$ takes the form of the canonical Multinomial Logit-function (McFadden, 1973):

$$P_n(i) = \frac{\exp\big(\sum_{j \neq i}(u_i - u_j)\big)}{\sum_{k=1}^{J} \exp\big(\sum_{j \neq k}(u_k - u_j)\big)}.$$

At first sight, there is an obvious connection between the formulation of satisfaction presented here and Regret Theory, a theory well known for its potential to explain decision-making in the context of risky quality levels (Loomes & Sugden, 1982). However, besides the obvious difference relating to the fact that the model presented here focuses on riskless (rather than risky) choice, there are two other important differences between the two approaches: first, the model presented here assigns equal weight to rejoice as it does to regret, whereas Regret Theory assumes that regret is weighted (much) more heavily than rejoice (usually this asymmetry is modeled by means of a convex regret function). Second, Regret Theory has originally been put forward for the study of choices from binary sets, whereas the model presented here focuses on (much) larger choice sets. Furthermore, the few studies that have extended Regret Theory towards non-binary choice sets have assumed that only the quality level of the best of the competing alternatives serves as a reference point (Quiggin, 1994), whereas the model presented here is 'additive' in the sense that it assumes that every competing alternative's quality level is used as a reference point and that regret and rejoice are summed over all bilateral comparisons with competing alternatives.

To see how the proposed model predicts a convex transformation of quality to demand, consider an individual (subscripts are omitted for readability) who faces a choice set of $J$ alternatives (e.g., records, museums, concerts, …), ordered in terms of quality so that alternative 1 has the highest perceived quality $q_1$, alternative 2 the second-highest quality $q_2$, and so forth. For the moment, it is assumed for reasons of clarity of exposition that utility equals quality (in notation: $u_i = q_i$). Assume that the quality of the highest-quality alternative 1 is only slightly higher than that of 'second-best' alternative 2. In notation, $q_1 = q_2 + \delta$, with $\delta$ being arbitrarily small. Given the model presented above, it is easily shown that the difference in satisfaction between alternatives 1 and 2 equals $J \cdot \delta$, which is a substantial amplification of the difference in quality, especially when the choice set size ($J$) increases. This satisfaction difference consists of three parts: first, alternative 1 generates a rejoice of magnitude $\delta$ due to the comparison with alternative 2; second, alternative 1 generates a rejoice associated with all bilateral comparisons with the $J-2$ other alternatives of lesser quality that is a magnitude $(J-2)\cdot\delta$ larger than the rejoice that is generated by comparing alternative 2 with the same alternatives; third, alternative 2 suffers a regret of magnitude $\delta$ due to the comparison with alternative 1. In combination, this implies that the small difference in quality $\delta$ between the highest quality alternative and the second best alternative is multiplied by $J$ during the transformation from quality to satisfaction. When the choice set is sufficiently large, the amplification effect (i.e., $J \cdot \delta$) results in large differences in satisfactions and resulting choice probabilities between alternatives 1 and 2, the former receiving very large demand at the expense of the popularity of the latter and of the other competing alternatives. An important implication of the model is that this effect becomes more pronounced as the choice set size increases (i.e., as $J$ becomes larger). In other words: the model predicts that demand for quality becomes more convex, and the existence of superstars becomes more likely, when choice sets are larger. This implication appears to be fully in line with the behavioral intuitions formulated in the introduction, and with the notion that previous studies have without exception identified superstars in contexts where there are great numbers of alternatives to choose from (see the references cited in the introduction).
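The amplification result can be verified in a few lines. A worked version of the derivation sketched above, under the stated assumptions ($u_i = q_i$ and $q_1 = q_2 + \delta$):

```latex
\begin{align*}
S_1 - S_2 &= \sum_{j \neq 1}(q_1 - q_j) \;-\; \sum_{j \neq 2}(q_2 - q_j)\\
          &= \underbrace{(q_1 - q_2)}_{\text{rejoice of 1 over 2}}
             \;-\; \underbrace{(q_2 - q_1)}_{\text{regret of 2 w.r.t.\ 1}}
             \;+\; \sum_{j=3}^{J}\underbrace{\big[(q_1 - q_j) - (q_2 - q_j)\big]}_{=\,\delta\ \text{per comparison}}\\
          &= \delta + \delta + (J-2)\,\delta \;=\; J\cdot\delta .
\end{align*}
```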



Figure 1a: demand for the highest quality alternative as a function of choice set size

Figure 1b: demand for the second-highest quality alternative as a function of choice set size

3. Numerical illustrations

To further illustrate how the presented model generates convex demand for quality (and hence predicts the existence of superstars), a Monte Carlo experiment is performed. Settings are as follows: a number of $J$ alternatives are generated; alternatives numbered $3, \ldots, J$ have an associated quality level drawn from a Uniform distribution3 between 0 and 0.9. Alternative 2 is assigned a quality level of 0.9, and alternative 1 is assigned a quality level of 1. Using these settings and by applying the model presented above, it is illustrated below how the small difference in quality between alternatives 1 and 2 leads to potentially very large differences in popularity (implying convexity of demand), especially when the choice set size increases. More specifically, each alternative's satisfaction and resulting choice probability are computed using the model presented above, given the assumption that utility equals quality, or in notation $u_i = q_i$ (see below for simulations given concave utility functions). This choice probability is then multiplied by $10^6$ to arrive at a measure of expected demand per alternative from a given population of 1 million decision-makers (consumers). Subsequently, the expected demand for the two most popular alternatives – which are by definition the highest-quality alternatives – is identified. Acknowledging that there is randomness in the assigned quality levels, this process is repeated 100 times and the average expected demand of the two highest quality alternatives is plotted as a function of choice set size ($J$), which is varied from 10 to 100. The resulting pattern presented in Figures 1a and 1b is unambiguous: when the choice set contains only 10 alternatives, the ±10% difference in quality already results in a ±300% difference in demand. Moreover, when the number of alternatives in the choice sets increases, the small quality difference between the two best performing alternatives results in increasingly large differences in terms of demand as the absolute demand for the highest quality alternative increases and that for the second-best alternative decreases.

3 Similar analyses (not presented here) using normally distributed quality levels gave the same results.
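To make the experiment concrete, the following minimal C++ sketch implements a single replication of the above setup with linear utility ($u_i = q_i$); all names and the fixed seed are our own illustration, not taken from the paper:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int J = 50;  // choice set size; the paper varies J from 10 to 100
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> unif(0.0, 0.9);

    // Quality levels as in the paper's settings: alternative 1 has quality 1,
    // alternative 2 has quality 0.9, the rest are Uniform(0, 0.9).
    std::vector<double> q(J);
    q[0] = 1.0;
    q[1] = 0.9;
    for (int i = 2; i < J; ++i) q[i] = unif(rng);

    // Additive regret-rejoice satisfaction with u_i = q_i:
    // S_i = sum_{j != i}(u_i - u_j) = J*u_i - sum_j u_j.
    double sumU = 0.0;
    for (double qi : q) sumU += qi;
    std::vector<double> S(J);
    for (int i = 0; i < J; ++i) S[i] = J * q[i] - sumU;

    // Multinomial Logit choice probabilities (shifted by the max for stability).
    const double maxS = *std::max_element(S.begin(), S.end());
    std::vector<double> expS(J);
    double denom = 0.0;
    for (int i = 0; i < J; ++i) { expS[i] = std::exp(S[i] - maxS); denom += expS[i]; }

    // Expected demand from a population of one million consumers.
    for (int i = 0; i < 2; ++i)
        std::printf("alternative %d: expected demand %.0f\n", i + 1,
                    1e6 * expS[i] / denom);
    return 0;
}
```

Averaging the printed demands over 100 such replications, with $J$ varied from 10 to 100, produces the kind of curves shown in Figures 1a and 1b.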



Figure 2a: demand for the highest quality alternative as a function of choice set size

Figure 2b: demand for the second-highest quality alternative as a function of choice set size

The result that the addition of relatively low-quality alternatives to the choice set leads to an increase in demand for the highest quality alternative in the set reflects the notion that the demand bonus associated with being the highest-quality alternative in the choice set increases as the choice set gets larger. This implication is in line with behavioral intuition and with the premises underlying the proposed decision-making model. All in all, the simulation clearly illustrates how the proposed model predicts (1) that demand for quality is convex and (2) that the extent to which demand for quality is convex increases as a function of choice set size. To study whether these results also hold in the context of concave (rather than linear) utility functions, the above analyses are repeated in the context of a logarithmic utility function:4 $u_i = \ln(q_i)$. Figures 2a and 2b show that even when the additional utility associated with a gain in quality decreases as a function of initial quality (implying concavity of the utility of quality), the small difference in quality between the two highest quality alternatives results in large differences in demand (implying convexity of demand for quality) – and increasingly so for larger choice sets.

4 Similar simulations (not presented here) using other concave utility functions gave the same results.

To contrast these results with a pattern generated by a standard model which does not feature additive regret and rejoice but rather assumes that satisfaction is only based on absolute quality levels, the exact same simulation as directly above is performed, but now based on the following utility function: $S_i = u_i = q_i$; in other words, satisfaction equals utility equals quality. Figures 3a and 3b show that in this case, as is to be expected, differences in average expected demand between the two highest quality alternatives are very small for all choice set sizes, implying non-convexity of demand for quality. As expected, the demand for both of the two highest-quality alternatives decreases when competing alternatives are added to the choice set. Finally, it may be noted that a mixture decision-making model can be constructed which assumes that satisfaction is to some extent driven by additive regret and rejoice and to some extent by absolute quality levels: $S_i = \alpha \sum_{j \neq i} (u_i - u_j) + (1 - \alpha)\, u_i$, with $\alpha \in [0,1]$



Figure 3a: demand for the highest quality alternative as a function of choice set size

Figure 3b: demand for the second-highest quality alternative as a function of choice set size

indicating the extent to which satisfaction is driven by relative comparisons with competing alternatives (rather than by absolute quality levels). Additional simulations, presented in Figures 4a and 4b, show that even when satisfaction is only to a small extent (i.e., for 10%) determined by relative comparisons, convex demand for quality arises – especially when the choice set size increases. More specifically, Figures 4a and 4b show that when choice sets are relatively small, the 90% weight on absolute (rather than relative) quality means that both the highest quality alternative and the second highest quality alternative lose demand when new alternatives are introduced to the set. As a result, the difference between the two alternatives in terms of demand remains relatively small, implying non-convexity of demand. However, when choice sets feature more than around 30 alternatives, the trend is reversed in the sense that the highest quality alternative now starts to gain demand from the introduction of new alternatives (i.e., the bonus associated with being the highest quality alternative in the set becomes increasingly noticeable) while the second-highest quality alternative loses demand simultaneously.

The result is that for these larger choice sets, the difference in demand between the two highest quality alternatives to an increasingly large extent starts to exceed the difference in quality, implying an increasingly convex demand for quality.
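Under our reconstructed form of this mixture model (the weight name $\alpha$ and the function below are our own illustration), the satisfaction computation changes in only one line relative to the purely additive model:

```cpp
#include <vector>

// Mixture satisfaction, assuming the reconstructed specification
// S_i = alpha * sum_{j != i}(u_i - u_j) + (1 - alpha) * u_i, alpha in [0, 1].
// With sumU the sum of all utilities, the comparison term equals J*u_i - sumU.
std::vector<double> mixtureSatisfaction(const std::vector<double>& u, double alpha) {
    const int J = static_cast<int>(u.size());
    double sumU = 0.0;
    for (double ui : u) sumU += ui;
    std::vector<double> S(J);
    for (int i = 0; i < J; ++i)
        S[i] = alpha * (J * u[i] - sumU) + (1.0 - alpha) * u[i];
    return S;
}
```

Setting alpha = 0.1 corresponds to the 10% case discussed above; alpha = 1 recovers the additive regret-rejoice model and alpha = 0 the absolute-quality model.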


Figure 4a: demand for the highest quality alternative as a function of choice set size

Figure 4b: demand for the second-highest quality alternative as a function of choice set size

4. Conclusion and discussion

This paper provides an explanation for the presence of convex demand for quality, which is an often-cited precondition for the existence of superstars. The proposed model of decision-making does not assume that convex demand – and as a result: the existence of superstars – is triggered by fixed consumption costs (which would likely be an incomplete representation of the behavioral premises underlying demand-convexity), nor does the model assume convex transformations of quality to utility (which would be at odds with empirical evidence that for a difference in a stimulus to be noticed, the difference must be bigger for bigger initial levels of the stimulus). Instead, the proposed model generates a convex transformation from quality to demand by assuming that decision-makers anticipate that the satisfaction associated with an alternative depends on quality comparisons between that alternative and each of the competing alternatives in the choice set (rather than depending on absolute quality levels). Even when quality differences are very small, the adding up of regret and rejoice that is associated with all these bilateral comparisons results in a demand bonus associated with being the best of a choice set, which increases as the choice set gets larger. Using an analytical example and numerical simulations the paper shows that the proposed decision-making model predicts convexity of demand for quality even when the transformation from quality to utility is concave and even when satisfaction is only to a small extent driven by the proposed model of relative quality comparisons. Clearly, the specific outcomes reported in the paper are to some extent the result of the particular settings adopted in the analytical example and in the Monte Carlo experiment. However, experimentation with a variety of different experimental settings suggests that the overall results presented above are robust (see also Footnotes 4 and 5). The finding that the proposed model predicts convex demand for quality without relying on

minimization of (fixed) consumption costs, does of course not imply that the proposed model is the most important (let alone, the only) explanation for the existence of convex demand for quality. Most likely, a combination of factors (including but not limited to the proposed model as well as minimization of consumption costs) jointly determines the existence of convex demand for quality and the associated existence of superstars. Determining the importance of each factor and of their mutual interactions is ultimately an empirical, not a theoretical challenge that may be addressed in future research.


Acknowledgement

Support from the Netherlands Organization for Scientific Research (NWO), in the form of VENI grant 451-10-001, is gratefully acknowledged.

References

Chorus, C.G., 2012. Random Regret Minimization: An overview of model properties and empirical evidence. Transport Reviews, 32(1), 75-92


Elberse, A., 2008. Should you invest in the long tail? Harvard Business Review, 86, 1-10
Frey, B., 1998. Superstar museums: An economic analysis. Journal of Cultural Economics, 22, 113-125
Kivetz, R., Netzer, O., Srinivasan, V., 2004. Alternative models for capturing the compromise effect. Journal of Marketing Research, 41, 237-257
Loomes, G., Sugden, R., 1982. Regret Theory: An alternative theory of rational choice under uncertainty. The Economic Journal, 92, 805-824
Lucifora, C., Simmons, R., 2003. Superstar effects in sports: Evidence from Italian soccer. Journal of Sports Economics, 4(1), 35-55
Masin, S.C., Zudini, V., Antonelli, M., 2009. Early alternative derivations of Fechner's law. Journal of the History of the Behavioral Sciences, 45(1), 56-65
McFadden, D., 1973. Conditional logit analysis of qualitative choice behaviour. In: Zarembka, P. (Ed.), Frontiers in Econometrics, Academic Press, New York
Quiggin, J., 1994. Regret theory with general choice sets. Journal of Risk and Uncertainty, 8(2), 153-165
Rosen, S., 1981. The economics of superstars. The American Economic Review, 71(5), 845-858
Salganik, M.J., Dodds, P.S., Watts, D.J., 2006. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311, 854-856
Schulze, G.G., 2003. Superstars. In: Towse, R., Khakee, A. (Eds.), Handbook of Cultural Economics, Edward Elgar, Cheltenham, UK, pp. 431-436
Simonson, I., 1989. Choice based on reasons: The case of attraction and compromise effects. Journal of Consumer Research, 16, 158-174
Terviö, M., 2009. Superstars and mediocrities: Market failure in the discovery of talent. Review of Economic Studies, 76(2), 829-850
Tversky, A., Simonson, I., 1993. Context-dependent preferences. Management Science, 39(10), 1179-1189



The Driver Assignment Vehicle Routing Problem

Remy Spliet and Rommert Dekker
Econometric Institute, Erasmus University Rotterdam

We introduce the driver assignment vehicle routing problem. In this problem customers are assigned to drivers before demand is known, and after demand is known a routing schedule has to be made such that every driver visits at least a fraction $\alpha$ of its assigned customers. We design a cluster first-route second algorithm to find good solutions to this problem. From our computational experiments we conclude that adhering to driver assignments can lead to an average increase of the expected transportation costs of 12.7%. Allowing a little flexibility, by choosing $\alpha = 0.9$, leads to an average increase in transportation costs of only 2.9%. Finally, for instances with $\alpha = 0.9$ we compare the expected transportation costs from only using backup drivers for customers that can not be visited by their assigned driver, to the costs from also trying to assign them to non-backup drivers with leftover capacity. The former leads to an average increase in expected transportation costs of 15.3%.

1. Introduction

The capacitated vehicle routing problem, CVRP, is the problem of designing routes for vehicles with limited capacity to deliver goods to customers in a distribution network, such that the total transportation costs are minimized. This is a well studied problem in the scientific literature; see Baldacci et al. (2012) and Laporte (2009), amongst others, for surveys on exact and heuristic methods to solve the CVRP. In distribution networks where each customer frequently receives a delivery, it is often desired that the same driver makes these deliveries. The quality of service benefits from regularity and personalization by having the same driver visit a customer, as is suggested by Bertsimas and Simchi-Levi (1996). Moreover, Groër et al. (2009) indicate that because drivers at UPS form a real bond with customers, they generate additional sales with a volume of over 60 million packages per year. In our studies we focus on distribution networks in which the driver is also responsible for unloading the shipment and placing it in the storage facility of the customer, e.g. as is the case for the service provided by TNT Innight. This requires the driver to carry a key or password to enter the storage facility, which increases the need of a customer to be visited by the same driver. Moreover, the security screening of drivers in this case further increases this need. In traditional CVRP models demand is assumed to be known and fixed, hence it is trivial to always have the same driver visit the same customer. However, in practice demand is typically unknown at the moment of assigning drivers to customers. Moreover, demand will fluctuate during the period in which this driver assignment is enforced. On some days customers will have low demand and on others the same customers will have high demand. This uncertainty of demand makes it difficult to assign drivers to customers while making sure that the resulting transportation costs are low. In this paper, we develop a model for assigning customers to drivers before the quantity to be delivered to these customers is known. We consider a set of demand scenarios, and for each scenario a delivery schedule has to be made that minimizes the transportation costs while satisfying the vehicle capacity constraints.



Furthermore, the delivery schedules per scenario should take the driver assignment into account. Because this increases transportation costs, we introduce a parameter $\alpha$ that gives the decision maker some flexibility in the degree to which these driver assignments should be satisfied. We impose that at least a fraction $\alpha$ of the customers that are assigned to a driver is actually visited by that driver. Observe that setting $\alpha = 1$ ensures that drivers always visit their assigned customers, but will yield high expected transportation costs. On the other hand, setting $\alpha < 1$ introduces flexibility yielding decreased expected transportation costs, but results in drivers not visiting all their assigned customers. The driver assignment vehicle routing problem, DAVRP, is to assign customers to drivers such that the expected transportation costs over all scenarios are minimized. The DAVRP is NP-hard as in the case of one scenario it reduces to the CVRP. The DAVRP is similar to the consistent vehicle routing problem, ConVRP, introduced by Groër et al. (2009). In the ConVRP each customer must always be visited by the same driver. However, it is additionally required that the time of delivery for a single customer cannot differ by more than a limited amount of time per scenario. In the DAVRP we do not consider the timing of deliveries as this is not relevant in the application on which we focus. Moreover, in the DAVRP case, the decision maker is allowed more flexibility by setting an appropriate $\alpha$. In another related study, Li et al. (2009) consider the rescheduling of bus trips in case of a disruption. In their model, they incorporate a penalty for assigning drivers to a trip they are unfamiliar with. The DAVRP is a relevant problem and is introduced by us in the scientific literature. Furthermore, we design a cluster first-route second heuristic and use it to find good solutions to the DAVRP for instances with up to 100 customers and instances with up to 100 scenarios. In our computational experiments we study the costs of adhering to the driver assignments. We compare the costs of always having a customer visited by the same driver with the costs of relaxing this requirement entirely. Such an analysis aids a policy maker in determining whether it is worthwhile to require customers to be visited by the same driver. Furthermore, using two variants of the cluster first-route second algorithm we study the increase in transportation costs of only using drivers that initially have no customers assigned to them to visit customers that can not be visited by their assigned drivers, instead of also trying to assign them to any driver with leftover capacity. This paper is based on our working paper bearing the same title. More details on the methodology and experiments can be found there.

2. Problem definition

Consider a complete graph $G = (V, E)$, where $V = \{0, 1, \ldots, n\}$ is a set of locations such that $0$ represents the depot and $1, \ldots, n$ are the customers. A route is a path in $G$ starting and ending at the depot. A routing schedule is a collection of routes such that each customer is visited exactly once. Let $c_e$ be the cost to travel along edge $e \in E$. Hence, the costs of a routing schedule are the sum of the costs of the edges that are used on the routes. The travel costs satisfy the triangle inequality. Let $K$ be the set of available vehicles, each having a capacity of $Q$. In our model there is no distinction between a driver and a vehicle. Each driver will drive at most one route. Note that not every driver necessarily has a customer assigned to it; we will refer to such drivers as backup drivers. Furthermore, a set $S$ of scenarios is given, where each scenario $s \in S$ is characterized by a realization of demand. Let the demand at location $i$ in scenario $s$ be given by the integer $d_i^s$. Let the probability that scenario $s$ occurs be $p_s$. A driver assignment is an assignment of every customer to a driver. Given a driver assignment, a routing schedule is considered feasible for scenario $s$ if for every driver at least a fraction $\alpha$ of the customers assigned to it is visited by that driver and additionally every route satisfies the vehicle capacity constraint. A driver assignment is considered feasible if for every scenario there exists at least one feasible routing schedule. The driver assignment vehicle routing problem, DAVRP, is to find a feasible driver assignment and a feasible routing schedule for every scenario such that the expected traveling costs over all scenarios are minimized.
Figure 1a: General clustering assignment.

Figure 1b: Clustering assignment in scenario $s$.

Note that preliminary experiments with a mixed integer programming formulation of the DAVRP and a mixed integer programming solver allowed us to solve instances with 10 customers and 3 scenarios in one hour of computation time. However, we could not solve all instances with 15 customers and 3 scenarios in one hour. Moreover, we could not solve any instance with 20 customers and 3 scenarios; in fact, not even a single integer feasible solution was identified. This perhaps illustrates the computational complexity of the DAVRP.

3. Solution method

To quickly find solutions to the DAVRP with a large number of customers and scenarios, we propose a heuristic. In this heuristic we decouple the driver assignment and the routing in each scenario. It is a two-phase approach that is similar to cluster first-route second heuristics, which are a well known family of heuristics for vehicle routing problems. In the first phase of cluster first-route second heuristics for the vehicle routing problem, customers are clustered and in the second phase a routing schedule is constructed based on these clusters. For the CVRP, one typically ensures for every cluster in the first phase that the total demand of the customers in a cluster does not exceed the capacity of a vehicle. This way, a feasible routing schedule can be obtained in the second phase by simply constructing a route for each cluster. Well known examples of cluster first-route second algorithms for the CVRP are provided by Fisher and Jaikumar (1981) and Bramel and Simchi-Levi (1995). Next, we describe a cluster first-route second algorithm for the DAVRP. First, we describe an algorithm used in the first phase to construct clusters. Following, we describe two algorithms that are used in the second phase to construct a routing schedule based on the clusters obtained in the first phase. In the first algorithm for the second phase, we allow customers that are not visited by their assigned driver to be assigned to any other driver with leftover capacity. In the second algorithm for the second phase, customers that are not visited by their assigned driver are only assigned to backup drivers.


3.1. Cluster first

In the first phase we construct clusters of customers. We require of every cluster that in each scenario at least one subset of customers, containing at least a fraction $\alpha$ of all customers in that cluster, has a total demand less than or equal to the vehicle capacity. This allows us to use the clusters of customers as driver assignments, i.e. a feasible driver assignment is obtained by assigning all customers in one cluster (and no other customers) to a single driver. Next, we introduce the clustering problem which we solve to construct clusters. Consider a set of potential cluster centers. When a cluster center is in use, costs are incurred equal to the traveling costs from the depot to the cluster center plus some penalty costs $f$. Furthermore, all customers are assigned to a cluster. In each scenario a decision is made whether a customer is skipped. If a customer is not skipped, costs are incurred equal to the traveling costs from that customer to its assigned cluster center; otherwise traveling costs to the depot are incurred. In every scenario, at least a fraction $\alpha$ of the customers in a cluster must not be skipped. Furthermore, in each scenario the capacity constraints must be satisfied by the locations in a cluster that are not skipped. The clustering problem is to select cluster centers, assign each customer to one of the selected cluster centers and select which customers to skip in each scenario, such that the total costs are minimized. Figure 1 shows an example of a clustering solution. In Figure 1a a general clustering assignment is shown. For the general clustering assignment of Figure 1a and a given value of $\alpha$, an example of a clustering assignment for some scenario $s$ is shown in Figure 1b. It shows three customers being skipped in their respective clusters. Costs equal to the traveling costs from these customers to the depot are incurred in this case. A solution of the clustering problem can directly be used as a feasible driver assignment. Moreover, note that the corresponding solution value (times 2) provides an upper bound on the solution value of any feasible solution to the DAVRP using this driver assignment. The optimal solution to the clustering problem minimizes this upper bound. Furthermore, the penalty $f$ is added to the costs of using a cluster center to discourage the use of too many cluster centers. To solve the clustering problem, we formulate it as a mixed integer programming problem and solve it using branch-and-bound. In our implementation, we obtain lower bounds by considering a Lagrangian relaxation that decomposes into knapsack problems and a single uncapacitated facility location problem. Hence, for given Lagrangian multipliers the value of the Lagrangian relaxation is computationally relatively easy to determine. Moreover, we optimize the Lagrangian multipliers using subgradient optimization. Furthermore, we obtain upper bounds in our branch-and-bound algorithm by applying a greedy Lagrangian heuristic to the infeasible solutions obtained by solving the Lagrangian relaxation.
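The multiplier update in such a scheme typically looks as follows; this is a generic projected subgradient step (all names, the step rule, and the stopping test are our own illustration, since the paper's relaxation is not spelled out here):

```cpp
#include <algorithm>
#include <vector>

// Value and subgradient of the Lagrangian dual at the current multipliers,
// obtained by solving the decomposed subproblems (knapsacks plus an
// uncapacitated facility location problem in the relaxation sketched above).
struct DualPoint {
    double value;
    std::vector<double> subgrad;
};

// Generic projected subgradient ascent on the Lagrangian dual, with a
// Polyak-type step size driven by a known primal upper bound.
template <typename EvalFn>
std::vector<double> subgradientOptimize(EvalFn evalLagrangian,
                                        std::vector<double> lambda,
                                        double upperBound, int maxIter) {
    for (int it = 0; it < maxIter; ++it) {
        DualPoint p = evalLagrangian(lambda);  // solve the subproblems
        double norm2 = 0.0;
        for (double g : p.subgrad) norm2 += g * g;
        if (norm2 < 1e-12) break;              // (near-)zero subgradient: stop
        const double step = (upperBound - p.value) / norm2;
        for (std::size_t i = 0; i < lambda.size(); ++i)
            // project onto lambda >= 0 (appropriate for dualized inequalities)
            lambda[i] = std::max(0.0, lambda[i] + step * p.subgrad[i]);
    }
    return lambda;
}
```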

3.2. Route second

In the routing phase of the cluster first-route second algorithm, the clusters obtained in the clustering phase are used as driver assignment. This driver assignment is used to construct a feasible routing schedule for every scenario. The routing problem is the problem of, given a feasible driver assignment, finding a feasible routing schedule for scenario $s$ that minimizes the traveling costs. We solve the routing problem to optimality for every scenario $s$ using a mixed integer programming formulation and a commercial mixed integer programming solver. We augment the solution process by adding valid inequalities that are known for the CVRP, which are separated using the software package developed by Lysgaard (2003). We will refer to this procedure as the exact routing algorithm. We also propose a heuristic algorithm to find a solution to the routing problem, referred to as the heuristic routing algorithm. In the heuristic routing algorithm, customers that are not visited by their assigned driver can only be assigned to backup drivers. The heuristic routing algorithm makes use of the solution to the clustering problem. In every scenario, a route is constructed for each cluster center using the customers that are not skipped. This is done by solving a traveling salesman problem, TSP. Furthermore, a CVRP is solved using the skipped customers of every cluster center. The routes obtained by solving the TSP for each cluster and solving the CVRP together form a feasible routing schedule. Note that there is a major difference between the exact and heuristic routing algorithms. In the heuristic algorithm customers that are not visited by their assigned driver are exclusively assigned to backup drivers. In particular, they are not assigned to the non-backup drivers even though these might still have some capacity left. This way, the heuristic routing algorithm is easier to put into practice, whereas the exact algorithm will yield routing schedules with lower transportation costs.
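The structure of the heuristic routing algorithm for a single scenario can be sketched as follows in C++; solveTSP and solveCVRP are stand-ins for any TSP and CVRP code, and all names are our own illustration:

```cpp
#include <utility>
#include <vector>

// Stubs for the subproblem solvers; any TSP/CVRP implementation can be
// plugged in here. (These signatures are illustrative, not the paper's.)
std::vector<int> solveTSP(const std::vector<int>& customers);
std::vector<std::vector<int>> solveCVRP(const std::vector<int>& customers,
                                        int capacity);

// Heuristic routing for one scenario: one TSP route per cluster over its
// non-skipped customers, plus a CVRP over all skipped customers, whose
// routes are driven by the backup drivers.
std::vector<std::vector<int>> heuristicRouting(
        const std::vector<std::vector<int>>& nonSkippedPerCluster,
        const std::vector<int>& skippedCustomers,
        int capacity) {
    std::vector<std::vector<int>> schedule;
    for (const auto& cluster : nonSkippedPerCluster)
        schedule.push_back(solveTSP(cluster));       // route for the assigned driver
    for (auto& route : solveCVRP(skippedCustomers, capacity))
        schedule.push_back(std::move(route));        // routes for backup drivers
    return schedule;
}
```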


Table 1: Solution values, cluster first-route second, for different values of $\alpha$.

4. Highlights of our computational results

In this section, we present the highlights of the results of the computational experiments for our study, in which instances1 of the DAVRP are solved. Unless stated otherwise we set $\alpha = 0.9$. Furthermore, we set the penalty costs $f$ larger than the maximum travel costs of each instance. All experiments are performed on an Intel(R) Core(TM) i5-2450M CPU 2.5 GHz processor. The algorithms were coded in C++ and the commercial mixed integer programming solver IBM ILOG CPLEX Optimizer, version 12.4, is used. We compare the cluster first-route second heuristic with the exact routing algorithm and with the heuristic routing algorithm. The variant with the exact routing algorithm provides on average 15% lower expected transportation costs than the variant with the heuristic routing algorithm. This shows the benefits of trying to assign customers that are not visited by their assigned driver to non-backup drivers that have some capacity left, instead of only using backup drivers. However, as solving the CVRP is computationally very demanding, so is solving the routing problem. Hence, we are not able to solve all instances with 40 customers and 3 scenarios within one hour with the cluster first-route second heuristic with exact routing. Using the heuristic routing variant, however, we can easily solve instances with up to 100 customers and up to 100 scenarios within one hour; the running time is on average even well within ten minutes. Larger instances are difficult to solve due to the memory requirements of the algorithm. Adhering to the driver assignments may be beneficial for business, but it decreases flexibility in transportation. Hence, the expected transportation costs increase. Next, we investigate the increase of expected transportation costs. In Table 1 the results are presented of an experiment in which twenty instances with 25 customers and 3 scenarios are solved. For each instance three variants are considered in which we set $\alpha = 0$, $\alpha = 0.9$, and $\alpha = 1$. For $\alpha = 0$, the instances are solved by solving a CVRP to optimality for every scenario to construct a routing schedule. It is equivalent to not imposing any driver assignment constraints. Hence, the lowest possible expected transportation costs are obtained for $\alpha = 0$. For $\alpha = 0.9$ and $\alpha = 1$ the instances are solved using the cluster first-route second heuristic using the exact routing algorithm. Table 1 shows the solution values of the obtained solutions. Next, we report the average percentage difference of the solution values obtained for the instances with different values of $\alpha$. For these instances, adhering to the driver assignments with $\alpha = 1$ increases the expected transportation costs by 12.7%. The highest increase of 21.0% is obtained for instance 5. Adhering to the driver assignments with $\alpha = 0.9$ increases the expected transportation costs by 2.9%. This can be considered a moderate increase. Having driver assignment requirements but allowing a little flexibility, i.e. using $\alpha = 0.9$ instead of $\alpha = 1$, decreases the expected transportation costs significantly.

1 Instances are randomly generated and available on request.



Finally, note that the value of the optimal solution to the DAVRP for instances with a specific value of $\alpha$ lies in between the solution values obtained for instances with $\alpha = 0$ and those obtained using the cluster first-route second algorithm. This allows us to deduce that for instances with $\alpha = 0.9$ the cluster first-route second heuristic produces solutions that are on average at most 2.9% more expensive than the optimal solution.

5. Conclusions

We introduce the DAVRP and develop a cluster first-route second heuristic to solve it. In the first phase, a solution to the clustering problem is found, and in the second phase, a solution to the DAVRP is constructed based on the clustering solution, which is used as a driver assignment. For the routing problem we designed an exact and a heuristic algorithm. In the latter, customers that are not visited by their assigned driver are used to construct new routes instead of trying to add them to drivers that already have customers assigned to them. Computational experiments show that the cluster first-route second heuristic is able to solve instances with up to 100 customers and up to 100 scenarios well within one hour. In our experiments, the cluster first-route second algorithm produces on average 15% more expensive solutions when the heuristic routing algorithm is used instead of the exact routing algorithm. This quantifies the increase in transportation costs from only using backup drivers for skipped customers instead of trying to assign them to non-backup drivers with leftover capacity. From experiments where we solve instances of the DAVRP with the cluster first-route second algorithm using the exact routing algorithm, we conclude that adhering to the driver assignments can lead to an increase in expected transportation costs of up to 21.0%. When setting $\alpha = 1$ the increase is on average 12.7%. However, when adhering to the driver constraints but allowing a little flexibility by using $\alpha = 0.9$, the increase in expected transportation costs is on average only 2.9%. These experiments also allow us to conclude that, even though we do not solve these instances to optimality, the solutions produced by the cluster first-route second algorithm for instances with $\alpha = 0.9$ are on average at most 2.9% more expensive than the optimal solutions.

About the authors

Remy Spliet is a Ph.D. student at the Econometric Institute of the Erasmus University Rotterdam under the supervision of Dr. Adriana F. Gabor and Prof. Dr. Ir. Rommert Dekker. His research topic is vehicle routing with uncertain demand. Rommert Dekker is professor of operations research and quantitative logistics at the Econometric Institute of the Erasmus University Rotterdam. His main research interests are service logistics, port logistics and transport optimization.

References

Baldacci, R., Mingozzi, A. and Roberti, R. 2012, 'Recent exact algorithms for solving the vehicle routing problem under capacity and time window constraints', European Journal of Operational Research, Vol. 218, No. 1, pp. 1-6.
Bertsimas, D.J. and Simchi-Levi, D. 1996, 'A New Generation of Vehicle Routing Research: Robust Algorithms, Addressing Uncertainty', Operations Research, Vol. 44, No. 2, pp. 286-304.
Bramel, J. and Simchi-Levi, D. 1995, 'A Location Based Heuristic for General Routing Problems', Operations Research, Vol. 43, No. 4, pp. 649-660.
Fisher, M.L. and Jaikumar, R. 1981, 'A generalized assignment heuristic for vehicle routing', Networks, Vol. 11, No. 2, pp. 109-124.
Groër, C., Golden, B. and Wasil, E. 2009, 'The Consistent Vehicle Routing Problem', Manufacturing & Service Operations Management, Vol. 11, No. 4, pp. 630-643.
Laporte, G. 2009, 'Fifty Years of Vehicle Routing', Transportation Science, Vol. 43, No. 4, pp. 408-416.
Li, J., Mirchandani, P.B. and Borenstein, D. 2009, 'A Lagrangian heuristic for the real-time vehicle rescheduling problem', Transportation Research Part E, Vol. 45, pp. 419-433.
Lysgaard, J. 2003, 'CVRPSEP: A package of separation routines for the capacitated vehicle routing problem', Working Paper 03-04, Department of Management Science and Logistics, Aarhus School of Business, Aarhus, Denmark.



On Fit and Forecast of US Inflation: A Robustness Analysis using Extended Phillips Curve Models with non-filtered Data

Nalan Basturk (Econometric Institute, Erasmus University Rotterdam; Tinbergen Institute)
Cem Cakmakli (Department of Quantitative Economics, University of Amsterdam)
Pinar Ceyhan (Econometric Institute, Erasmus University Rotterdam; Tinbergen Institute)
Herman K. van Dijk (Econometric Institute, Erasmus University Rotterdam; Tinbergen Institute; Department of Econometrics, VU University Amsterdam)

Basturk, Cakmakli, Ceyhan and Van Dijk (2013) (henceforth BCCVD) propose a set of extended Phillips Curve (PC) models to jointly estimate changing time series properties and the PC model parameters. Using a simulation based Bayesian approach, they show that the predictive performance of the conventional PC model is substantially improved using these extended models. In this paper we consider alternative PC model structures and analyze the robustness of the results in BCCVD. These alternative models also allow us to separate the predictive gains from each model extension proposed in BCCVD. Furthermore, we present a simple prior-predictive analysis to check whether the predictive gains presented in BCCVD stem from the prior density definition. Our results support the evidence on the extended PC models. The extended PC models are found to outperform several alternative models, and the predictive gains in these models are not found to be driven by the prior distributions.


1. Introduction

The relation between inflation and economic activity has attracted considerable attention in the literature. The standard approach to model this relationship is based on the Phillips Curve (PC), which considers the short-run fluctuations of inflation and economic activity. The PC structure defines a steady state relationship between these series. Therefore, the econometric analysis is often based on a priori demeaned and detrended data; see Galí and Gertler (1999) and Nason and Smith (2008), among others. Such a simple filtering and detrending method is questionable given the complex time series behavior documented for the inflation and economic activity data for the U.S. (McConnell and Perez-Quiros, 2000; Stock and Watson, 2008). Recently, Ferroni (2011) and Canova (2012) have shown that such time series behavior should instead be modelled jointly with the rest of the model parameters. Focusing on the U.S. inflation and economic activity data, BCCVD show that this conventional methodology of demeaning and detrending the data leads to unfavorable estimation results, particularly when the focus is on inflation predictions. They extend the PC model structure in three ways. First, they allow for time-varying levels in the inflation series. As an alternative, they define the inflation level as a regime switching process. For the marginal cost series, they allow for time-varying levels and trends. Finally, they extend this approach to a richer class of Hybrid PC (HPC) models and incorporate survey expectations in the HPC models. As shown in BCCVD, such joint modelling of the data filters and the PC model improves prediction results. In this paper, we analyze the robustness of the results in BCCVD and differentiate the predictive gains from each of the model extensions proposed. Our first robustness check is based on a prior-predictive analysis to determine the effects of the adopted priors on the prediction results. Due to the complex model structure and the limited amount of available data, the priors used in BCCVD are not fully uninformative. For this reason, the influence of the prior density on the predictive results can be substantial. Using the approach in Geweke (2010), we show that the reported predictive gains do not stem from the prior definitions. On the contrary, the




defined prior distribution clearly does not favor the ‘best performing model’ in BCCVD. Our second aim is to analyze each PC model extension in BCCVD separately, and to distinguish the predictive gains from each of these extensions. For this purpose we consider alternative models, with differing time series and model structures, which provide ‘intermediate’ models between the proposed models and the standard models. We show that the predictive gains from including the survey expectations alone are already substantial. Furthermore, incorporating the low and high frequency data movements in the model is crucial. Finally, once survey data and time variation are included in the model, the additional gains from the hybrid model extension are negligible in terms of the prediction results. From these results, we conclude that the increase in predictive performance through the proposed models in BCCVD is not based on a single one of these extensions, but rather on their combination.

The remainder of this paper is organized as follows: Section 2 outlines the standard PC model and the extensions proposed by BCCVD. Section 3 presents the prior-predictive analysis of the proposed models. Section 4 shows the prediction results for alternative PC structures together with those in BCCVD. Section 5 concludes.

2. Standard and extended Phillips Curve models

The standard PC is given by

$$\tilde{\pi}_t = \beta\, \mathrm{E}_t[\tilde{\pi}_{t+1}] + \gamma\, \tilde{x}_t + \varepsilon_t, \qquad \varepsilon_t \sim N(0, \sigma_\varepsilon^2), \qquad (1)$$

where $\tilde{\pi}_t = \pi_t - \mu_\pi$ and $\tilde{x}_t = x_t - \mu_x$ are the inflation and marginal cost series in deviation from their long-run levels $\mu_\pi$ and $\mu_x$, and standard stationarity restrictions hold for the parameters. Subtracting the long-run levels from the series corresponds to the demeaned and/or detrended series in standard PC models. The expectational term in the PC model can be removed by iterating the model forward; assuming autoregressive dynamics for the marginal cost deviations, $\tilde{x}_t = \rho\, \tilde{x}_{t-1} + \eta_t$, the resulting model is nonlinear in the parameters

$$\tilde{\pi}_t = \frac{\gamma}{1-\beta\rho}\, \tilde{x}_t + \varepsilon_t. \qquad (2)$$
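As a concrete illustration of the step from (1) to (2), the following sketch (hypothetical Python with assumed parameter values; not code from BCCVD) simulates data from the solved model, where the reduced-form slope $\gamma/(1-\beta\rho)$ shows how the structural parameters enter nonlinearly:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200                                # quarterly observations
beta, gamma, rho = 0.99, 0.10, 0.90    # assumed values: discount, PC slope, AR(1)
sigma_eta, sigma_eps = 0.5, 0.3        # assumed innovation standard deviations

# AR(1) marginal cost deviations: x_t = rho * x_{t-1} + eta_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + sigma_eta * rng.standard_normal()

# Solved PC, eq. (2): the slope gamma / (1 - beta * rho) is a nonlinear
# function of the structural parameters (beta, gamma, rho).
slope = gamma / (1.0 - beta * rho)
pi = slope * x + sigma_eps * rng.standard_normal(T)

print(f"reduced-form slope gamma/(1 - beta*rho) = {slope:.3f}")
```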

BCCVD redefine this model by explicitly modelling the long-run levels of the series,

$$\pi_t - \mu_{\pi,t} = \beta\, \mathrm{E}_t[\pi_{t+1} - \mu_{\pi,t+1}] + \gamma\, (x_t - \mu_{x,t}) + \varepsilon_t, \qquad (3)$$

where $\mu_{\pi,t}$ and $\mu_{x,t}$ are the time-varying inflation and marginal cost levels, respectively. The time variations in the levels defined in (3) are specified as follows:

$$\mu_{\pi,t} = \mu_{\pi,t-1} + \kappa_t\, \zeta_{\pi,t}, \qquad \zeta_{\pi,t} \sim N(0, \sigma_\pi^2),$$
$$\mu_{x,t} = \mu_{x,t-1} + \lambda_{t-1} + \zeta_{x,t}, \qquad \zeta_{x,t} \sim N(0, \sigma_x^2),$$
$$\lambda_t = \lambda_{t-1} + \zeta_{\lambda,t}, \qquad \zeta_{\lambda,t} \sim N(0, \sigma_\lambda^2),$$
$$\varepsilon_t \sim N(0, e^{h_t}), \qquad h_t = h_{t-1} + \zeta_{h,t}, \qquad (4)$$

where the last equation specifies a time-varying volatility for the inflation series, and $\kappa_t$ is a binary variable taking the value 1 with probability $p$ if there is a level shift and the value 0 with probability $1-p$ if the level does not change. Equation (4) summarizes three parsimonious models: the model is identical to the time-varying level model if $\kappa_t = 1$ for all $t$, while it defines permanent regime changes in inflation if $\kappa_t = 0$ for some $t$. The former case without stochastic volatility is abbreviated by PC-TV, the latter without stochastic volatility by PC-TV-LS, and the model with regime changes and stochastic volatility in inflation by PC-TV-LS-SV. Finally, BCCVD propose three additional models using the same time series patterns, but in the hybrid form of the HPC framework with forward and backward looking inflation expectations. These models are estimated using an iterative solution of the HPC model and by introducing survey data information in the model structure.
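To make the state equations in (4) concrete, the sketch below (hypothetical Python with assumed parameter values; it simulates the implied data-generating process, not the posterior sampler of BCCVD) generates an inflation level path with occasional shifts governed by the binary indicator $\kappa_t$, together with a random-walk log-volatility:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
p = 0.10                       # assumed probability of a level shift per quarter
sigma_mu, sigma_h = 0.4, 0.2   # assumed shift and log-volatility std deviations

mu_pi = np.zeros(T)            # time-varying inflation level mu_{pi,t}
h = np.zeros(T)                # log-volatility h_t of the inflation disturbance
for t in range(1, T):
    kappa = rng.random() < p   # binary indicator: 1 w.p. p, 0 w.p. 1 - p
    mu_pi[t] = mu_pi[t - 1] + kappa * sigma_mu * rng.standard_normal()
    h[t] = h[t - 1] + sigma_h * rng.standard_normal()

# kappa = 1 in every period recovers the time-varying level model (PC-TV);
# occasional shifts give PC-TV-LS; adding h_t gives the SV variants.
eps = np.exp(h / 2.0) * rng.standard_normal(T)
pi_level = mu_pi + eps         # level plus disturbance (PC gap term omitted)
```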



The corresponding extensions are denoted as HPC-TV, HPC-TV-LS and HPC-TV-LS-SV. A summary of the six proposed models and the baseline models with a priori demeaned and detrended series is given in Table 1. In BCCVD, these models are estimated for quarterly U.S. data on inflation and marginal cost for the period between 1962-I and 2012-I. Furthermore, the predictive performance of the models is compared using one and four period ahead predictive likelihoods and the Mean Squared Forecast Error (MSFE) for the forecast period covering 1973-II to 2012-I. In this paper we follow the same approach, but estimate the models with the log marginal cost series, since the series in natural logarithms is more often used in empirical analysis. These data are presented in Figure 1, together with the survey data on inflation expectations.

3. Prior predictive likelihoods of proposed models

The proposed models summarized in Table 1 are nonlinear in their parameters and, in BCCVD, they are estimated under non-flat priors for most parameters. Hence assessing the effect of the prior distribution is not trivial. We present a prior-predictive analysis as in Geweke (2010). For each of the extended PC and HPC models, we consider 1,000 parameter draws from the joint prior distribution of the model parameters and compute the predictive likelihoods for the period between 1973-II and 2012-I. The prior sampling algorithm for this analysis is a simplified version of the posterior sampling algorithm proposed in BCCVD, apart from the fact that the parameters' distribution is not updated with the data points. We perform this prior predictive analysis for each of the extended models.
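A minimal sketch of this prior-predictive calculation in the spirit of Geweke (2010) is given below; the observation model and prior are deliberately simplified placeholders (a toy AR(1) with assumed priors), not the BCCVD specification, and serve only to show the mechanics of drawing from the prior and averaging predictive densities:

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(2)
y = rng.standard_normal(150)           # placeholder series standing in for inflation
n_draws = 1000

# Placeholder prior for a toy AR(1) observation model (not the BCCVD priors):
# phi ~ N(0, 1); sigma ~ |N(0, 1)| + 0.1 to keep the scale positive.
log_dens = np.zeros((n_draws, len(y) - 1))
for i in range(n_draws):
    phi = rng.standard_normal()
    sigma = abs(rng.standard_normal()) + 0.1
    # one-step-ahead density of y_t given y_{t-1} under this prior draw
    log_dens[i] = norm.logpdf(y[1:], loc=phi * y[:-1], scale=sigma)

# Prior-predictive likelihood per observation: average densities over draws.
log_pred = logsumexp(log_dens, axis=0) - np.log(n_draws)
print(f"cumulative log prior-predictive likelihood: {log_pred.sum():.1f}")
print(f"average log prior-predictive likelihood:   {log_pred.mean():.3f}")
```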

Table 1: Standard and extended Phillips curve models

Note: PC-LT and PC-HP denote the standard PC models with a priori demeaning and detrending using a linear trend and an HP filter, respectively. These models are defined as the baseline models in BCCVD.

Figure 1: Inflation, inflation expectations and log real marginal cost (x100) series from the first quarter of 1960 to the first quarter of 2012




Table 2: Prior-predictive results for the proposed models

Note: The table reports the prior-predictive performances of all competing models for the prediction sample from the second quarter of 1973 to the first quarter of 2012. ‘(Log) Pred. Likelihood’ stands for the natural logarithm of the predictive likelihoods. Results are based on 1,000 simulations from the joint priors of the model parameters. Model abbreviations are as in Table 1.

Table 3: Standard and extended PC models

Note: The first two columns list the standard and extended (H)PC models presented in the main paper, for which the expectational mechanism is solved explicitly. The last two columns list alternative model structures for the (H)PC models; for these models we do not iterate the inflation expectations in the model, but instead replace them directly with survey data. PCS-LT (PCS-HP) refers to the PC model with survey data in which the real marginal cost series is detrended using a linear trend (Hodrick-Prescott filter). PCS-TV refers to the PC model with survey data and time-varying levels and trends; PCS-TV-LS adds level shifts in inflation; PCS-TV-LS-SV additionally includes stochastic volatility. HPCS-TV, HPCS-TV-LS and HPCS-TV-LS-SV refer to the corresponding hybrid PC models with survey data. * An iterative solution of these models without using the survey data does not exist.

Table 2 presents the average and cumulative prior predictive likelihoods for the forecast sample. The adopted prior distributions clearly favor the least parametrized model, PC-TV. Moreover, the priors clearly do not favor the ‘good performers’ in BCCVD, which are the models with stochastic volatility components. More importantly, the ‘best performing model’ according to the prediction results in BCCVD, HPC-TV-LS-SV, is the least favored one under the adopted prior distributions. We therefore conclude that data information is dominant, and that the superior predictive performance of the HPC-TV-LS-SV model in BCCVD is not driven by the prior distribution.



4. Posterior and predictive results from alternative models for robustness checks

The proposed PC and HPC models in BCCVD extend the standard models in several ways. First, unlike the


Table 4: Predictive performance of additional PC models

Note: The table reports the predictive performances of alternative models for the period between the second quarter of 1973 and the first quarter of 2012. ‘(Log) Marg. Likelihood’ stands for the natural logarithm of the marginal likelihoods. ‘MSFE’ stands for the Mean Squared Forecast Error. Marginal likelihood values in the first column are calculated as the sum of the predictive likelihood values in the prediction sample. Results are based on 10,000 simulations, of which the first 5,000 are discarded as burn-in. Model abbreviations are as in Table 3.

standard (H)PC models, they explicitly model the time variation in the long and short run dynamics of the inflation and marginal cost series. In addition, the iterative solution of the expectational mechanism and the survey data in the extended HPC models enable the use of more data information. Furthermore, the extended and standard HPC models use the additional information from a backward looking component for the inflation series compared to their PC counterparts. According to the predictive results in BCCVD, the most comprehensive model, HPC-TV-LS-SV, is also the best performing model. However, a deeper analysis is needed in order to see the added predictive gain from each of these extensions. In this section we consider several alternative models and their predictive performances to separately address the predictive gains from each of these extensions in the model structure. Table 3 presents all PC and HPC model structures we compare to differentiate these effects and

shows the classification of the proposed models based on the low and high frequency structures and the methodology used to specify the expectational mechanism.

The first set of alternative models we consider are the standard PC and HPC models combined with data on survey expectations. These models, given in the first two rows of the right panel of Table 3, do not allow for time variation in the low frequency structure of the data: they are defined for a priori filtered data, demeaning the inflation series and detrending the marginal cost series prior to the analysis. These models are abbreviated by PCS-LT, PCS-HP, HPCS-LT and HPCS-HP, according to linear detrending or HP detrending prior to the analysis. The improved predictive performances of the PCS-LT and PCS-HP models compared to their standard PC counterparts show the predictive gains from incorporating survey expectations in the models. Furthermore, comparing the predictive performances of the HPCS-LT and HPCS-HP models with the time-varying hybrid models, such as the HPC-TV or HPC-TV-LS models, shows the gains from incorporating time variation alone, since all these models use survey data and the backward looking component for inflation.

The second set of alternative models we consider, in the right panel of Table 3, are PC models with time-varying levels, where we incorporate the survey expectations in the model directly rather than solving the model iteratively. These models correspond to (1) with the expectation term replaced by survey expectations. We denote these models by PCS-TV, PCS-TV-LS and PCS-TV-LS-SV, for the time-varying levels, time-varying levels with regime shifts in inflation, and time-varying levels with regime shifts and a stochastic volatility component, respectively. Comparing the predictive results of these models to their HPC counterparts provides the predictive gains solely from the HPC extension, i.e. it separates the gains from incorporating the backward looking inflation component in the model from the other model extensions.

The final set of alternative models we consider are the HPC models using the survey expectations directly, without solving for the expectational mechanism. We denote these models by HPCS-TV, HPCS-TV-LS and HPCS-TV-LS-SV, for the time-varying levels, time-varying levels with regime shifts in inflation, and time-varying levels with regime shifts and a stochastic volatility component, respectively.
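Schematically, the direct-survey variants replace the conditional expectation in (3) with the observed survey series. A stylized form (our notation, hypothetical in its details, with $s_t$ denoting the survey expectation of next-quarter inflation) is

$$\pi_t - \mu_{\pi,t} = \beta\,(s_t - \mu_{\pi,t}) + \gamma\,(x_t - \mu_{x,t}) + \varepsilon_t.$$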





Comparing the predictive performance of these models with the proposed HPC models clarifies the predictive gains from solving for the inflation expectations iteratively in the hybrid models.

One period ahead MSFEs and log marginal likelihoods of these models, together with those of the standard (H)PC models and the models proposed in the paper, are given in Table 4. The prediction results are based on the forecast sample, which covers the period between the second quarter of 1973 and the first quarter of 2012. Comparing the first block and the first two rows of the second block of Table 4, we see that the gains from using survey data on inflation are substantial even in the standard PC models. In terms of predictive gains, the biggest improvements in the predictive likelihoods and the MSFE are achieved with this contribution to the models. However, the predictive performances of these improved models are still far from those of the more involved models. Hence the gains from the proposed models do not stem from the inclusion of the survey data information alone. We also report the predictive gains resulting solely from introducing time variation in the inflation and marginal cost series, by comparing the results of the HPCS-LT and HPCS-HP models with the HPC-TV or HPC-TV-LS models in the table. The more involved models with time variation clearly perform better according to the predictive results. The difference in the marginal likelihoods of these models in particular enables us to conclude that incorporating time variation in the data is also important.

As a third possibility for predictive gains, we focus on the models with backward looking components. One way to separate the added value of this component is to consider the second block of Table 4. The prediction results from the PC and HPC models in this block are very similar, with slight improvements in the hybrid models, where the backward looking component is incorporated. Another way to see the effect of the backward looking component is to compare the PCS-TV, PCS-TV-LS and PCS-TV-LS-SV models with the HPCS-TV, HPCS-TV-LS and HPCS-TV-LS-SV models,

respectively. In all these comparisons, the models without the backward looking component perform slightly better (worse) in terms of MSFE (marginal likelihood); hence the backward looking component does not seem to improve the predictive results in general, and the improvements in the hybrid models mainly stem from incorporating the survey expectations. Among the considered alternative models, the time-varying level models with a stochastic volatility component that use the survey data directly (PCS-TV-LS-SV and HPCS-TV-LS-SV) clearly perform best. In terms of the predictive likelihoods, these models are also comparable to the ‘best performing’ model proposed in BCCVD.

A final source of possible predictive gains in the proposed models is the iterative solution of the inflation expectations. This comparison is based on the models in the third (fourth) block and the fifth (sixth) block of Table 4, where only the third (fourth) block uses the iterative solution. In the PC models, the predictive results deteriorate slightly when we solve the system iteratively. For the HPC models, however, the prediction results are less clear: while the MSFEs favor the models with the iterative solution, the predictive likelihoods favor the models using the expectations data directly. Since the results on this issue are not clear-cut, a more detailed analysis of the effect of the iterative solution is needed; this topic is left for future work.
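The comparisons above rest on two statistics per model. The helper below (a hypothetical sketch; names and inputs are assumed) computes both the MSFE and the log marginal likelihood from one-step-ahead point forecasts and log predictive densities, mirroring how the marginal likelihood values in Table 4 are obtained as sums of predictive likelihoods:

```python
import numpy as np

def forecast_scores(y_true, y_forecast, log_pred_dens):
    """Return (MSFE, log marginal likelihood) for one competing model.

    log_pred_dens holds the log one-step-ahead predictive densities; their
    sum equals the log marginal likelihood over the prediction sample, as
    used in the first column of Table 4.
    """
    msfe = np.mean((np.asarray(y_true) - np.asarray(y_forecast)) ** 2)
    log_marg_lik = np.sum(log_pred_dens)
    return msfe, log_marg_lik

# Hypothetical usage with assumed forecast output for two specifications:
# msfe_a, lml_a = forecast_scores(y, fc_pcs_tv_ls_sv, lpd_pcs_tv_ls_sv)
# msfe_b, lml_b = forecast_scores(y, fc_hpc_tv_ls_sv, lpd_hpc_tv_ls_sv)
# A lower MSFE and a higher log marginal likelihood indicate better forecasts.
```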



5. Conclusion

In this paper we analyze the robustness of the results in Basturk, Cakmakli, Ceyhan and Van Dijk (2013) in detail. We consider alternative models, with differing time series and model structures, which provide ‘intermediate’ models between the proposed models in BCCVD and the standard models in the literature. Using these alternative models, we show that the results in BCCVD are not driven by a single one of the model extensions they propose, but rather by the combination of these extensions. Predictive performance gains from their first extension, including the survey expectations in the models, are found to be substantial. Their second extension, incorporating the low and high frequency data movements in the model, is also crucial for the prediction performance.


A third extension they propose, the hybrid model extension, is found to have a relatively small effect on the prediction results. Apart from these alternative model specifications, we employ a prior-predictive likelihood analysis of the BCCVD models, in order to see whether the superior results they present are driven by the prior distributions they define. This analysis shows that the reported ‘good performing’ models are not in line with the ‘good performing’ models based on the prior only. Hence the defined prior distributions do not seem to drive the results.


References

Basturk N, Cakmakli C, Ceyhan P, Van Dijk HK. 2013. Posterior-predictive evidence on US inflation using extended Phillips curve models with non-filtered data. Working Papers 13-090/III, Tinbergen Institute.

Canova F. 2012. Bridging DSGE models and the raw data. Working Papers 635, Barcelona Graduate School of Economics.

Ferroni F. 2011. Trend agnostic one-step estimation of DSGE models. The B.E. Journal of Macroeconomics 11: 1–36.

Galí J, Gertler M. 1999. Inflation dynamics: A structural econometric analysis. Journal of Monetary Economics 44: 195–222.

Geweke J. 2010. Complete and Incomplete Econometric Models (The Econometric and Tinbergen Institutes Lectures). Princeton University Press.

McConnell MM, Perez-Quiros G. 2000. Output fluctuations in the United States: What has changed since the early 1980's? American Economic Review 90: 1464–1476.

Nason JM, Smith GW. 2008. The New Keynesian Phillips curve: lessons from single-equation econometric estimation. Economic Quarterly 94: 361–395.

Stock JH, Watson MW. 2008. Phillips Curve inflation forecasts. Working Paper 14322, National Bureau of Economic Research.






