Aenorm 73
vol. 19, dec '11

This edition:

The Road to Solvency II: A UK Perspective
Basel III: The Price of a Stable Banking Sector
Understanding Financial Instability Through Complex Systems
Explaining Interest Rates in the Dutch Mortgage Market: A Time Series Analysis




Colofon

Chief Editor: Myrna Hennequin
Editorial Board: Myrna Hennequin
Editorial Staff: Daan Oosterbaan, Maarten van der Meij
Design: United Creations © 2009
Lay-out: Myrna Hennequin
Cover design: © Michael Groen, United Creations
Circulation: 2000

A free subscription can be obtained at www.aenorm.eu.

Advertisers: DNB, Mercer, NIBC, Towers Watson. Information about advertising can be obtained from Daan Oosterbaan at info@vsae.nl.

Insertion of an article does not mean that the opinion of the board of the VSAE, the board of Kraket or the editorial staff is expressed. Nothing from this magazine may be duplicated without permission of the VSAE or Kraket. No rights can be derived from the content of this magazine.

ISSN 1568-2188

Editorial staff addresses:
VSAE, Roetersstraat 11, E2.02, 1018 WB Amsterdam, tel. 020-5254134
Kraket, De Boelelaan 1105, 1A-19, 1081 HV Amsterdam, tel. 020-5986015

Risk and Stability

by: Bas Kunst

The articles in this Aenorm cover the subjects of risk and stability. The reader should be familiar with these subjects: over the past months, European leaders have publicly been trying to end the economic crisis, and in doing so they use the terms risk and stability all the time. It is quite imaginable that risk regulations for banks, insurance companies and even entire countries will become stricter. This means that we - (upcoming) econometricians and actuaries - will be in higher demand in the risk departments of the financial sector. That makes me wonder: why do I know so little about risk management? Since my graduation is approaching, I should at least have been confronted with risk management. The little experience I have with risk management was gained at the Risk Intelligence Competition, an event organized by the VSAE. It is great that the study association organizes such an event, although in my opinion the first acquaintance should take place at the University of Amsterdam (UvA). When one searches for risk management in the course catalogue of the UvA, only one course shows up. That course is a master's course, not required for the MSc Econometrics (it is required for the MSc Actuarial Sciences, though). Should the UvA respond more actively to the increasing demand for risk expertise? I think an extra course in the BSc, covering the basics of risk management, would be sufficient. However, adding courses to the BSc programs is not the top priority of the UvA. On the contrary, the UvA is bound to implement budget cuts. Last year, in the BSc and MSc Actuarial Sciences and Econometrics, courses were combined, dropped, or expanded with one or two ECTS credits. Not to mention the problems the UvA has with Operational Research & Management. The result: the number of courses decreased, while the number of ECTS credits per course increased. All this was implemented to decrease labour costs by dismissing teachers. There seems to be a paradox: the recession leads to an increase in the demand for risk management, while the same recession leads to a drop in the number of courses in the econometrics program. Is the level of our education dropping? Or, more importantly: does our current education connect sufficiently with our future jobs?



Understanding Financial Instability Through Complex Systems

04

by: Daan in 't Veld

This article presents an outline and some early results of the research project Understanding financial instability through complex systems. The economic models used by Central Banks for forecasts and policy choices do not perform well in exceptional circumstances, as has become evident during the global financial-economic crisis. Central bankers have started to appreciate the benefits of viewing the macroeconomic and financial system as a complex system. Sudden regime shifts such as crashes and economic decline are absent in the conventional models but typical of complex systems, and they have important implications for economic policy and crisis management.

Basel III: The Price of a Stable Banking Sector

09

Implications of Banking Regulation for Banks and Their Corporate Clients by: Charles Zondag

The BIS's new capital requirements for banks, also known as Basel III, draw the attention of various stakeholders. It is not only the banks that are keen to take note of these additions to the Basel II Accord of June 2006; their professional clients also want to understand the implications for them. This article provides some suggestions on how to cope with the consequences of Basel III.

The Road to Solvency II

12

A UK Perspective by: Servaas Houben

Solvency II will come into force on 1 January 2014. This has resulted in a dramatic increase in the demand for Solvency II experts, notably actuaries, within Europe. As the English Solvency I regime has several similarities to the proposed Solvency II framework, it is interesting to assess the underlying reasons for previous regime changes. Although Solvency II will replace the previous regulatory regime and create a level playing field for newcomers to the insurance industry, a good understanding of the historical developments that led to the Solvency II regime will enhance the understanding of the structure of the proposed framework.



BSc - Recommended for readers of Bachelor-level MSc - Recommended for readers of Master-level PhD - Recommended for readers of PhD-level

Risk Management with the Multivariate Generalized Hyperbolic Distribution

18

Calibrated by the Multi-Cycle EM Algorithm
by: Marcel Holtslag

In this article, the question is whether more complex models that capture the nature of the conditional density of returns are better suited than simpler but easier-to-use models. Gradually, the traditional Gaussian distribution for modelling financial returns has been replaced by several other viable distributions able to capture the empirically observed heavy-tail behavior, kurtosis and peakedness. This study contributes to the further development of the multivariate generalized hyperbolic distribution (MGHyp) as an effective tool for forecasting the possible next-day portfolio loss.

Imagination Driving Corporate Leadership in 2020

23

by: Carl Johan Lens

This article is written from the perspective of the 'head-hunter', since that is what the author does for a living: finding the best possible financial specialist in a given situation. He does this with the underlying purpose of facilitating growth and development for both the candidate and the hiring organisation. He argues that, in order to be a successful professional in 2020, you will need to integrate your personality into your job, and that imagination is an essential part of leadership.

Explaining Interest Rates in the Dutch Mortgage Market: A Time Series Analysis

26

by: Machiel Mulder and Mark Lengton

For a number of years, financial markets have been in turmoil, and the Dutch mortgage market is no exception. Because of a perceived high level of mortgage interest rates, a number of parties have complained about the functioning of the mortgage market in the Netherlands. In order to explain the development of mortgage interest rates in the Dutch market, the authors of this article conducted an in-depth analysis of the mortgage market, comprising two types of econometric analysis: a panel analysis on annual data per bank and a time-series analysis on monthly data at industry level. This article describes the time series analysis.

Puzzle

31

Facultive

32



Econometrics

Understanding Financial Instability Through Complex Systems
by: Daan in 't Veld

This article presents an outline and some early results of the research project Understanding financial instability through complex systems, supported by the Dutch Science Foundation NWO. The project is carried out by the Center for Nonlinear Dynamics in Economics and Finance (CeNDEF) at the University of Amsterdam in cooperation with the research department of De Nederlandsche Bank (DNB).

Introduction

The economic models used by Central Banks for forecasts and policy choices do not perform well in exceptional circumstances, as has become evident during the global financial-economic crisis. The president of the European Central Bank, Jean-Claude Trichet, stated on November 18, 2010: "In the face of the crisis, we felt abandoned by conventional tools". Traditional models are centered around a unique stable equilibrium of rational decisions by a single representative agent. In contrast, the present research project focuses on the interaction of heterogeneous agents with limited knowledge or abilities, and the resulting dynamic behaviour of the system. Central bankers have started to appreciate the benefits of viewing the macroeconomic and financial system as a complex system. In the first place, new means arise to describe a change in the type of dynamic behaviour, such as crashes and economic decline. Economic and financial crises may be understood as the outcomes of critical transitions between different regimes, triggered by small external shocks, amplified by positive feedback and non-linear effects, and finally leading to big changes in the economy.

Daan in 't Veld

In September 2010, Daan in 't Veld started his PhD research within the project Understanding financial instability through complex systems, under the supervision of Prof. Cars Hommes and Dr. Cees Diks. Additionally, the Center for Nonlinear Dynamics in Economics and Finance welcomed postdoctoral researcher Marco van der Leij for his expertise on economic networks.


Such sudden regime shifts are absent in the conventional models, but typical of complex systems, and have important implications for economic policy and crisis management. By departing from the assumption of one representative agent, there is also room to investigate the interactions of different agents by the use of a network structure. As is becoming ever clearer in the European banking and sovereign debt crisis, financial entities are strongly interconnected. The effects of the network of interlinkages on financial stability form a critical research area. In the remainder of this article, two promising applications of complex systems are presented. The first is concerned with hyperinflation, and the second with the network of banks. The two applications are united by their implications for Central Bank policy.

Hyperinflations under heterogeneous expectations

The primary objective of the Central Bank is to stabilise the general price level or, in other words, the purchasing power of money. In the period 1999-2011, inflation in the euro area stayed quite close to the fundamental rate of 2%, the target rate of the ECB. A completely different picture is found in South America in the 1980s, where many countries experienced several hyperinflations. There is agreement among economists that these hyperinflations were rooted in the lack of stringent monetary policy and high seignorage by governments to reduce fiscal deficits. However, it is more difficult to explain why the timing of the hyperinflations remained unexpected, i.e. uncorrelated with the timing of seignorage. In an important paper, Marcet and Nicolini (2003) explain the recurrent hyperinflations with an extensive model of money demand and supply. It will be shown that their results can be replicated by a simple feedback structure



if one allows for heterogeneity in expectations. Both models share one artificial feature: when a hyperinflation occurs, the only way to stop it is an exogenous action of the government that removes the expectation feedback. Following Marcet and Nicolini (2003), inflation will be tolerated up to 50 basis points. The implementation and costs of this action are left outside the model.

The issue of price stability is a good illustration of a complex system. The inflation rate is a mere one-dimensional outcome of comparing all prices set by individual firms. Each firm is affected (at least) by its direct competitors, by other firms within its production chain, and by its consumers. It is inconceivable that such pricing decisions are taken under perfect information about the future actions and plans of all other agents. And yet, the aggregate of these future actions has a big influence on present individual decisions. For example, if a firm foresees that the general price level will increase in the future, it has an incentive to raise its own price in advance. In other words, there is positive feedback of inflation expectations. This can be described by the following equation:

π_t − π* = F(π^e_{t+1} − π*) + ε_t    (1)

Current inflation π_t is a linear function of the expectation of future inflation π^e_{t+1} with the addition of a shock ε_t. For convenience, inflation is written in deviation from the constant fundamental value π*. The factor F ≥ 0 captures the expectation feedback. In fact, equation (1) can be derived from deeper models based on consumer, firm and Central Bank behaviour, such as New Keynesian DSGE models. The expectation feedback F is in these cases a function of deeper parameters related to microeconomic behaviour. Without being specific about the exact construction of F, the key principle is that the Central Bank has significant, though limited, power to control the expectation feedback F by monetary policy. For example, if the monetary authority increases the interest rate in response to high inflation, the pressure on current prices is released by higher incentives to save money for future consumption. The rational expectation for this system is

π^e_{t+1} = π*    (2)

where the prediction error is minimised to the shock ε_t. In words, if all agents believe that inflation equals the fundamental state every period, their actions will lead to inflation rates very close to this fundamental state. The assumption of rational expectations is particularly strong, because agents not only have to be certain about the inflation process given by equation (1), but also have to know that others know the process, along with all higher-order beliefs. Rationality in this case also requires common knowledge in a complex system. Although the belief in the fundamental state is a rational prediction of a representative agent, uncertainty about other agents' actions may prevent coordination.

Consider for simplicity that there are two groups of agents. The first group believes in the stability of the economy and predicts the fundamental inflation rate. A second group of agents fears hyperinflations and is eager to pick up trends. These agents use a simple extrapolation rule based on the last two observed inflation rates:

π^e_{t+1} = π_{t−1} + g(π_{t−1} − π_{t−2})    (3)

One specific value for g is of particular interest. Given the timing in the model, g = 2 could be called "linear extrapolation". In this case the agents conjecture that the last two observations prescribe a perfectly linear path of inflation (see Figure 1). An extrapolation factor between 0 and 2 corresponds to more conservative extrapolation. Note that the extrapolation rule performs equally well as the fundamental rule when both π_{t−1} and π_{t−1} − π_{t−2} are close to zero.

Figure 1. Expectations by linear extrapolation of the last two observations.

The trend extrapolation rule is, in contrast to the fundamental prediction, not consistent with the inflation equation (1). If agents had exact knowledge about the inflation process, they would know that there is no intrinsic dependence of inflation on its lagged values. However, within the complex systems paradigm, the inconsistency argument loses its value. Instead, realising that a vicious spiral of ever increasing inflation can happen might be a more pragmatic attitude for agents in a complex system. Many laboratory experiments with human subjects confirm that people use simple extrapolation rules for decision-making in uncertain environments with positive feedback. Even if including linear extrapolation in the set of beliefs is a reasonable assumption in the context of complex systems, it would be undesirable if agents followed an unreasonable strategy that performs very poorly. Along the lines of Brock and Hommes (1997), the fractions of the two prediction rules can be dynamically determined by a binomial logit model. This framework ensures that better performing predictors will attract more agents in the future, and disciplines the modeling of heterogeneous expectations. The exact formulation of the switching mechanism is not important in this




discussion. What matters is that all agents will abandon the extrapolation rule if it gives poor predictions, for example if F is close to 0. For higher levels of F, the combination of fundamental and trend extrapolating prediction rules generates behaviour that is impossible under homogeneous rules. Figure 2 presents a simulated time series for the parameters g = 2, F = 1 and σ²_ε = 1. If, under a certain combination of shocks, the extrapolation rule keeps on attracting a lot of agents, a hyperinflation can arise. The economy gets locked in because trend extrapolation keeps being the best prediction and generates a self-fulfilling hyperinflation. Note that the dynamics fluctuate for a long time within a small interval around the fundamental value. The plot indeed shows a long period of moderate inflation, where mild fluctuations are driven by shocks. The differences between the two prediction rules do not have a strong effect in this period: the time series does not deviate much from white noise, giving false evidence for the rational expectations hypothesis.

Figure 2. Simulated inflation in deviation from the fundamental value.
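To make the mechanics concrete, the following is a minimal simulation sketch of equations (1)-(3) with binomial logit switching in the spirit of Brock and Hommes (1997). The switching intensity beta, the one-step forecast timing, and the reset rule mimicking the exogenous government action are illustrative assumptions, not the calibration behind Figure 2.

```python
import numpy as np

rng = np.random.default_rng(1)
T, F, g, beta = 5000, 1.0, 2.0, 0.5   # expectation feedback F, extrapolation g; beta assumed
bound = 50.0                           # tolerance bound triggering the government action
pi = np.zeros(T)                       # inflation in deviation from the fundamental pi*
n_trend = 0.5                          # fraction of trend extrapolators

for t in range(2, T):
    f_fund = 0.0                                    # fundamental rule, eq. (2)
    f_trend = pi[t-1] + g * (pi[t-1] - pi[t-2])     # extrapolation rule, eq. (3)
    avg_exp = n_trend * f_trend + (1.0 - n_trend) * f_fund
    pi[t] = F * avg_exp + rng.normal()              # eq. (1)
    # Binomial logit switching on (negative) squared forecast errors.
    u_fund, u_trend = -(pi[t] - f_fund) ** 2, -(pi[t] - f_trend) ** 2
    x = np.clip(beta * (u_fund - u_trend), -50.0, 50.0)
    n_trend = 1.0 / (1.0 + np.exp(x))
    if abs(pi[t]) > bound:                          # exogenous action removing the feedback
        pi[t], n_trend = 0.0, 0.5
```

Depending on the draw of shocks, the series hovers near the fundamental for long stretches; once the trend rule attracts most agents it becomes self-fulfilling and the series races to the bound, after which the reset restores moderate inflation.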

An important lesson is that, even if inflation appears to be stable, under heterogeneous expectations it may have the dangerous potential to diverge. The simple model can explain why a high level of expectation feedback can lead to recurrent, yet unpredictable hyperinflations, as were observed in South America in the 1980s. The basic idea is that fear of another hyperinflation may at some point become dominant and thereby self-fulfilling. Restricting expectation feedback by monetary policy becomes a much more delicate matter after accepting the fact that expectations are heterogeneous.

The interbank network

A different but related objective of the Central Bank is to maintain a stable banking system. For example, any new institution that wants to pursue banking business has to


receive authorisation from the Central Bank. Furthermore, if active banks do not meet certain requirements, the Central Bank can interfere. However, the focus on supervision of individual banks has shown serious limitations during the financial-economic crisis. After the bankruptcy of Lehman Brothers on 15 September 2008, its creditors were facing huge losses. Fearing that other banks would have the same problems, the market for loans between banks effectively collapsed. Supervisors have come to realise that understanding complex interbank markets is crucial for managing financial stability.

In the literature, the interbank market was commonly viewed as a flat and dense playing field. Pairs of banks that would both benefit from a loan agreement were assumed to find each other independently within a very large number of possible counterparties. Only within this representation does the systemic importance of banks not depend on the exposures of their counterparties, and is it justified to look at balance sheet information only. Unfortunately, the assumption of a flat market may be too restrictive. Using data on the German interbank market, Craig and von Peter (2010) have found that some banks play a more important role in the interbank market than others. As it turns out, many banks have a fixed relationship with a small number of core banks, who intermediate in the interbank market to match demand and supply of the periphery. This particular network structure introduces higher-order effects of contagion by default, because banks become indirectly linked with one another via the core. Being in the core increases the systemic importance of banks, and therefore affects Central Bank supervision.

The question arises whether the core-periphery structure can also be observed in a country with a much smaller and more open banking system, such as the Netherlands. The number of active banks in the Netherlands is about one tenth of the number in Germany, and loans from foreign counterparties have a much larger share in the interbank market. This primary issue has been the starting point of the research. To test the concept of an interbank core-periphery structure in a quantitative way, Craig and von Peter (2010) introduce a strict definition of this kind of network. In a perfect core-periphery structure, it holds that:

1. core banks are all bilaterally linked with each other and both lend to and borrow from at least one periphery bank;
2. periphery banks are linked to core banks only and do not lend to each other.

These requirements are illustrated in Figure 3 with three core banks and five periphery banks. Naturally, there is generally no perfect division of core and periphery in the data, and any chosen set of core banks will produce errors through either (1) the absence of links between core banks or (2) the presence of links between periphery banks. A simple error score along these lines is sketched below.



Figure 3. Example of an interbank market with a perfect core-periphery structure.
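The following is a minimal sketch of such an error score for a candidate core set, given a directed adjacency matrix of interbank loans. It counts only the two violations listed above; the full algorithm of Craig and von Peter (2010) also scores the off-diagonal core-periphery blocks and searches over core sets, which is omitted here. The function name and interface are illustrative.

```python
import numpy as np

def core_periphery_errors(A, core):
    """Count violations of a perfect core-periphery structure.

    A    : (n, n) 0/1 matrix, A[i, j] = 1 if bank i lends to bank j.
    core : boolean array of length n marking the candidate core banks.
    """
    A = np.asarray(A, dtype=int)
    core = np.asarray(core, dtype=bool)
    c = np.where(core)[0]
    p = np.where(~core)[0]
    # Violation 1: missing links inside the core (off-diagonal zeros).
    cc = A[np.ix_(c, c)]
    missing_core = len(c) * (len(c) - 1) - (cc.sum() - np.trace(cc))
    # Violation 2: links inside the periphery (off-diagonal ones).
    pp = A[np.ix_(p, p)]
    extra_periphery = pp.sum() - np.trace(pp)
    return missing_core + extra_periphery
```

The best-fitting core is then the partition that minimises this score over candidate core sets; the core stability reported below refers to how persistent that optimal core is across periods.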

The analysis of Dutch data confirms the core-periphery structure, suggesting that it is a general feature of interbank markets. Using the algorithm of Craig and von Peter (2010), the optimal fit of the structure was found, indicating a core of a small number of Dutch banks. The fitted core-periphery structure gives a much better description than a flat structure in terms of errors, and moreover, the core has a stability of 83% between different time periods. Future research will focus on game-theoretical network formation that can explain the core-periphery structure by economic incentives, revealing the mechanisms that generate this important empirical feature. As in the first example, heterogeneity between economic agents is crucial in understanding complex financial systems.

References

Brock, William and Cars Hommes. "A rational route to randomness". Econometrica 65.5 (1997): 1059-1095.

Craig, Ben and Goetz von Peter. "Interbank tiering and money center banks". Deutsche Bundesbank Discussion Paper 12 (2010).

Marcet, Albert and Juan Pablo Nicolini. "Recurrent hyperinflations and learning". American Economic Review 93.5 (2003): 1476-1498.




Econometrics

Basel III: The Price of a Stable Banking Sector
Implications of Banking Regulation for Banks and Their Corporate Clients
by: Charles Zondag

The BIS's new capital requirements for banks, also known as Basel III¹, draw the attention of various stakeholders. It is not only the banks that are keen to take note of these additions to the Basel II Accord of June 2006; their professional clients also want to understand the implications for them. This article provides some suggestions on how to cope with the consequences of Basel III.

Fundamentals of Basel I, II and III

In 1988, the BIS published its first global capital requirements for banks, the Basel I Accord. These guidelines were simple and straightforward: a uniform and fixed capital requirement of 8% for most credit facilities granted by banks, while a lower requirement applied to a selection of asset classes. Additional regulation for market risk was published in 1996. Because Basel I could not accommodate the evolution of the risks of banks, a new accord, Basel II, was published in June 2006 and became effective in the European Union in January 2008. The aim of Basel II is to apply risk-sensitive capital requirements: in general, the higher the risk of a bank's business, the higher the capital requirements for the bank and the higher the pricing (and the reverse applies as well). The comprehensive view on banking regulation is expressed by the three pillars of Basel II: Minimum Capital Requirements (Pillar 1), Supervisory Review (Pillar 2) and Market Discipline (Pillar 3). Moreover, in addition to credit risk and market risk, operational risk is included.

Charles Zondag

Charles Zondag is associated with Zanders Treasury Management Consultants. He has extensive expertise in credit risk management (retail & corporate), Basel II and Basel III. Charles holds a doctorate in International Economics from the Erasmus University Rotterdam and can rely on 22 years of international banking experience and 5 years of advisory experience. This article was written with many thanks to Chantal Comanne for her meticulous assessment of the relevant regulation. Please contact Charles Zondag if you would like to receive more information: +31 35 692 89 89 or c.zondag@zanders.eu.

Subject to strict requirements, banks are allowed to use their own internal risk models for the fulfillment of these capital requirements. In Basel II, risks are expressed by means of Risk Weighted Assets (RWA); the minimum capital ratio is expressed as a percentage of these RWAs. Under Pillar 2, the requirements according to the Pillar 1 calculations are corrected for a variety of factors that are not included in Pillar 1, for instance concentration risk. This results in the final regulatory capital requirements. The financial crisis highlighted several shortcomings of Basel II, however. Basel III, in essence, focuses on additional requirements for the composition and quality of capital of banks, the liquidity position and the leverage. The calculation of RWA itself remains unaltered in most cases (except for counterparty risk of large financial institutions and the inclusion of mark-to-market counterparty risk losses).

Implications of Basel III

The (rather unique) combination of a recently implemented Basel II and an unprecedentedly adverse economic situation has already resulted in higher risk profiles of clients and facilities. Limitations in models and historical data, as well as a gradual inclusion of the economic downturn in the underlying data, pushed up the risk profiles of bank clients and credit facilities. Moreover, many bank clients were downgraded due to the economic situation, implying a double impact. On top of this, Basel III has several implications:

¹ In the case of Basel II and Basel III, we refer to various relevant and closely related guidelines by the BIS and the EU.




1. Tighter definition of 'real loss absorbing' capital;
2. Higher capital requirements;
3. Restrictions on leverage;
4. Stricter liquidity requirements.

We underline, however, the cumulative effect of these measures. The composition of capital is required to become more robust by means of stricter requirements for 'real loss absorbing' tier 1 and tier 2 capital. Capital instruments that do not meet these criteria, such as several types of mezzanine capital and tier 3 capital, will gradually be phased out of the calculation of regulatory capital. Next to this, deductions from capital will apply for certain unconsolidated investments in financial institutions, mortgage servicing rights and certain deferred taxes.

Minimum capital requirements for banks will increase from the current 8% to at least 10.50%, and even up to 13% in case of adverse economic circumstances (see Table 1). 'System banks' will potentially be confronted with additional requirements, which are still to be determined. On top of this, restrictions on remuneration will apply in case a bank hits the floor of the conservation buffer and, if applicable, the countercyclical buffer.

Table 1. New minimum capital ratios.

                                        | Common Equity  | Tier 1 Capital | Total Capital
                                        | Old   | New    | Old   | New    | Old   | New
Minimum                                 | 2%    | 4.50%  | 4%    | 6%     | 8%    | 8%
Conservation buffer (common equity)     | -     | 2.50%  | -     | 2.50%* | -     | 2.50%*
Sub-total                               | 2%    | 7%     | 4%    | 8.50%  | 8%    | 10.50%
Countercyclical buffer (national        | -     | 0-2.50%| -     | 0-2.50%| -     | 0-2.50%
discretion, full loss absorbing equity) |       |        |       |        |       |
Total bandwidth**                       | 2%    | 7-9.50%| 4%    | 8.50-11%| 8%   | 10.50-13%

* Included in the common equity figure.
** Excl. potentially additional requirements for 'System Banks' (SIFIs).

Under Basel III, non-eligible capital components should either be replaced by tier 1 or tier 2 capital, or the bank would have to reduce its risk weighted assets. Additionally, banks will need more capital to cover the same risks (apart from any change in risk profiles). This combination will put pressure on the banks' targets for risk adjusted return on risk adjusted capital and on the anticipated dividends. In other words, banks will need to meet the same dividend targets with a similar or even restricted product portfolio that faces higher capital requirements.

Restrictions on leverage apply by means of a leverage ratio that requires tier 1 capital to amount to at least 3% of total exposure. This leverage ratio applies to on-balance as well as off-balance sheet items (the latter with a specific credit conversion factor per product), while restrictions apply on netting. Although indicative, it will restrict banks' activities irrespective of the calculation of RWA, and it implies a restriction in meeting targeted dividend payments. When focusing solely on low-risk clients, the leverage ratio might become the overarching restriction; in case of high risks, the minimum capital ratio might be hit while the leverage ratio is not breached yet. As a result of the leverage ratio, disproportionate derivative positions in particular will be limited.

Following a period of abundant liquidity in the market, the financial and economic crisis underlined that financial institutions are extremely vulnerable to unexpected and major withdrawals of funds. Basel III addresses this with a Liquidity Coverage Ratio as well as a Net Stable Funding Ratio. Both the liquidity ratios and the additional Pillar 2 requirements of Basel III imply a stricter adherence to an overarching principle of (approximately) matched funding (tenors for credit facilities, cash flows in the case of derivatives, and FX positions as well). However, it is often a mismatch that generates attractive bank profits, even though it can put a bank at risk as well. The liquidity requirements apply in combination with the aforementioned capital requirements and leverage ratio.

The new capital requirements will be implemented gradually, starting in 2013 and scheduled to be fully implemented as of January 2019. This long transition period underlines the need for further fine-tuning. In any case, the introduction of Basel III makes clear that banks will generally need to meet stricter and higher capital requirements, while liquidity and leverage requirements form additional restrictions on a bank's business. By and large, banks will need to look for higher revenues and/or off-load assets. It depends on the situation in the global financial markets and on the strength and profitability of the bank to what extent banks will be able to get Basel III capital to the appropriate levels. The current low interest rates allow banks to build up retained earnings. However, as soon as market interest rates go up again, business cases that rely on bank financing will change, which will put margins for banks under pressure. A toy ratio check against the new minima is sketched below.
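As a toy illustration of Table 1, the sketch below checks a bank's capital ratios against the fully phased-in Basel III minima including the conservation buffer (7% / 8.5% / 10.5% of RWA). The balance sheet numbers and the function name are hypothetical; countercyclical and SIFI add-ons are ignored.

```python
def basel3_check(rwa, cet1, additional_tier1, tier2):
    """Compare capital levels (same currency unit as rwa) with the fully
    phased-in Basel III minima of Table 1, incl. conservation buffer."""
    ratios = {
        "Common Equity Tier 1": (cet1 / rwa, 0.070),
        "Tier 1": ((cet1 + additional_tier1) / rwa, 0.085),
        "Total capital": ((cet1 + additional_tier1 + tier2) / rwa, 0.105),
    }
    for name, (actual, minimum) in ratios.items():
        status = "OK" if actual >= minimum else "SHORTFALL"
        print(f"{name}: {actual:.1%} vs minimum {minimum:.1%} -> {status}")

# Hypothetical bank: EUR 100bn RWA, 6bn CET1, 2bn additional tier 1, 3bn tier 2.
basel3_check(100.0, 6.0, 2.0, 3.0)   # CET1 at 6.0% falls short of the 7% requirement
```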

Countering the implications of Basel II and III

Basel III will result in restrictions on the supply side of capital-bearing products offered by banks. Countering the implications differs between banks and their clients.

For banks:

• The financial turmoil as well as the stricter capital requirements underline the need for accuracy in models and relevant data. So far, Basel II implementation has not always been optimal, often resulting in over-reliance on models. Basel III is an excellent incentive to correct these shortcomings. High quality data and high quality models, combined with a fine balance in processes and systems, are essential for Basel III as well.
• The above-mentioned balance of models, processes and techniques is a prerequisite for optimizing the efficiency and the cost structure of a bank.
• Banks will need to balance capital-raising, off-loading assets, or a combination of both.
• The first banks have already started tapping the international financial markets by means of new forms of loss-bearing capital, though at significant prices. In-depth insight into the specific conditions of funding products (hybrid capital, professional funding, savings accounts, deposits) becomes highly relevant to meet the new capital and liquidity standards.
• Product innovation, for instance in working capital finance / supply-chain finance and in products relying on netting.

For (professional) bank clients:

• Given the exponential curve of rating versus price of credit facilities, a stable, sound financial status of any bank client becomes even more important than in the past. In anticipation of Basel III, margins on bank products have already increased significantly. As such, bank financing is and will be more expensive in any case, but especially for low-rated clients.
• Reduction of working capital pays off and can be achieved by means of:
  - fine-tuning the limit size of unused credit facilities, in view of the costs related to the underlying capital requirements for the offering bank;
  - keeping a close eye on cash pooling and netting to prevent unnecessary credit facilities (requiring product innovation by the banking sector);
  - active credit risk management of debtors: determining the risk profile of (prospective) clients provides the means for early payment incentives, late payment fees and limiting debtor outstandings per risk profile.
• A creative use of bank facilities with a lower risk profile, like trade-related facilities and non-credit-substituting bank guarantees (see also above, product innovation by the banking sector).
• Syndicated loans and/or club deals will be easier to place.
• Reduction of the tenor of facilities, provided this fits the financing needs.
• Financing based on tangible collateral: fixed assets have significantly more value than floating assets and work in progress. Financial covenants usually have a very low impact on pricing.
• In the case of multiple financial products acquired from the same bank, non-credit related fees might allow for compensation of a lower than (theoretically) required credit margin.

Next to the above, we expect the market to look for other alternatives, like private equity, bond issues and an enhancement of securitization practices.

Conclusion

The financial turmoil made clear that capital and liquidity can become scarce almost instantly, notwithstanding regulation and advanced modeling techniques. Basel III, on top of Basel II, provides the means for an institutionalized focus on the downside of risks. This will not protect the sector from bear markets, but it clearly incorporates warning signals for potential stress in the banking world, as well as caps and floors in the balance sheet. The trade-off for products and services offered by banks is less favorable: prices will rise in any case. Stability in the banking sector has a price, but this was significantly underestimated in the second half of the previous decade. Basel III puts the burden of a sound banking sector on the banks and their clients.



Actuarial Sciences

The Road to Solvency II
A UK Perspective
by: Servaas Houben

Solvency II will come into force on 1 January 2014. This has resulted in a dramatic increase in the demand for Solvency II experts, notably actuaries, within Europe, resulting in an influx of actuaries from overseas and actuaries starting their own one-person companies (contractors). As the English Solvency I regime has several similarities to the proposed Solvency II framework, it is interesting to assess the underlying reasons for previous regime changes. Although Solvency II will replace the previous regulatory regime and create a level playing field for newcomers to the insurance industry, a good understanding of the historical developments that led to the Solvency II regime will enhance the understanding of the structure of the proposed framework.

Figure 1. The Solvency I to Solvency II timeline: net premium approach (pre-2002), preliminary Peak 2 exercise (May 2002 - March 2003), annual ICA calculation under ICAS (Dec 2004), QIS exercises (2008-2011), Solvency II in force (2014).
Introduction

In this article, the following regimes are discussed:

• The current English Solvency I regime, consisting of:
  - Pillar 1: Peak 1 and Peak 2 regulatory valuations;
  - Pillar 2: the Individual Capital Assessment (ICA) under the Individual Capital Adequacy Standards (ICAS) regime.

Servaas Houben Servaas Houben studied econometrics at the University of Maastricht and economics at the University of Glasgow. Thereafter he studied actuarial sciences, CFA, and FRM. He has worked in the Netherlands for the first 4 years of his career and then 2 years in Ireland. He currently works in London. For any questions or information please contact servaashouben@gmail.com.


• The future Solvency II regime, consisting of:
  - Pillar 1: the regulatory capital requirement framework;
  - Pillar 2: a company-specific qualitative framework.

Please note that the pre-2002 regime, the net premium approach, falls outside the scope of this paper. Figure 1 shows a timeline of past and future developments.

Solvency I - Pillar 1 - Peak 1

The English Solvency I framework introduced a valuation on a regulatory basis (Peak 1) and on a realistic basis (Peak 2, only for larger with-profits (WP) firms). The overall framework is referred to as Twin Peaks. Peak 1 prescribes a prudent valuation method to ensure that future payments can be met. Furthermore, the framework is deterministic, as a flat discount rate is applied, independent of the timing of cash flows; deterministic computer programs can be used to calculate balance sheets and profit and loss. Assets are classified as either admissible or inadmissible (for the latter category no easy valuation is possible). Examples of inadmissible assets are deferred tax assets/liabilities, works of art, future profits on in-force business (VIF), and non-covered derivatives. Liabilities are valued on a prudent basis and discounted on a flat yield curve based on a maximum of 97.5% of the risk-adjusted yield on backing assets. This discount rate is calculated as the weighted average of the dividend yield on equities and the redemption yield on bonds, with adjustments for credit default risk. Furthermore, there is no allowance for future lapses or for negative reserves (e.g. products that will generate more income through management charges than costs through expenses). The total assets are then compared to the sum of the liabilities to determine the level of the remaining surplus. The LTICR is added to the liabilities such that another insurance company would be willing to take over these liabilities. The calculation of the LTICR is based on several components and differs between linked and non-linked liabilities, as shown in Table 1.

Table 1. LTICR calculation under the Peak 1 framework.

LTICR component               | Non-UL                                           | UL
Gross mathematical reserves   | 4% of gross mathematical reserves                | 1% of gross mathematical reserves if there are guarantees
Gross capital at risk         | 0.3% of gross capital at risk                    | 0.3% of gross capital at risk
Renewal/maintenance expenses  | 25% of last year's renewal/maintenance expenses  | 25% of last year's renewal/maintenance expenses

So far, the Peak 1 framework is based on prudent best estimate assumptions and expectations. To withstand more extreme scenarios, a resilience test is performed, leading to a resilience capital reserve (RCR, only for WP companies that have less than £500m in liabilities). In this test, assets are stressed under a single market scenario combining several shocks: equity and property prices fall, and interest rates rise or fall, depending on which is more onerous. Furthermore, a dampener effect is applicable for the equity and property stress: the level of the stresses depends on the economic cycle, e.g. when equity prices have dropped, the equity stress is adjusted downward. To assess the company's solvency, the Minimum Capital Requirement (MCR), which is Max(LTICR + RCR, BMCR), where BMCR is the industry-wide minimum capital requirement, is compared with the surplus assets. Solvency II has a similar MCR metric: once a company is unable to support its MCR with surplus assets, supervisory intervention will take place. Table 2 summarizes the strengths and weaknesses of the Peak 1 framework.

Table 2. Peak 1, strengths and weaknesses.

Strengths                                        | Weaknesses
Extra prudence in valuation leads to more certainty for policyholders. | Does not reflect shareholder value due to prudence (hence the development of EEV/MCEV).
Very prescriptive and therefore easy to perform. | Deterministic valuation does not capture the time value of options and guarantees.
The same rules apply throughout the entire industry: comparison between companies is easier. | Not a risk-based metric: interactions (i.e. correlations) between risks are not taken into account and not all sources of risk are included. The LTICR is mainly based on the prudent reserve instead of more extreme scenarios.
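As a worked sketch of Table 1 and the MCR formula above, the following computes the LTICR for a non-linked book and checks it against the BMCR floor. All input figures, and the function itself, are hypothetical.

```python
def ltic_r(gross_reserves, capital_at_risk, renewal_expenses,
           unit_linked=False, has_guarantees=True):
    """Long-Term Insurance Capital Requirement per Table 1 (sketch)."""
    if unit_linked:
        reserve_charge = 0.01 * gross_reserves if has_guarantees else 0.0
    else:
        reserve_charge = 0.04 * gross_reserves
    return reserve_charge + 0.003 * capital_at_risk + 0.25 * renewal_expenses

# Hypothetical non-linked book (amounts in GBP m).
ltcr = ltic_r(gross_reserves=1000.0, capital_at_risk=500.0, renewal_expenses=20.0)
rcr, bmcr = 30.0, 50.0                    # resilience reserve and industry floor, assumed
mcr = max(ltcr + rcr, bmcr)               # MCR = Max(LTICR + RCR, BMCR)
print(ltcr, mcr)                          # 46.5, 76.5
```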

Solvency I - Pillar 1 - Peak 2

As a result of these weaknesses, a Realistic Balance Sheet (RBS) approach was introduced for larger WP funds (>£500m) to better reflect the underlying risks. The time value of options and guarantees is either determined by stochastic simulation such as Monte Carlo (most common) or by a closed-formula approach. Moreover, additional assets and liabilities are taken into account, such as uncovered derivatives and the NP VIF written in the WP fund. The liability also includes particular WP balance sheet items such as planned enhancements (extra bonus) or deductions (reduction in bonus), and the costs of smoothing (the difference between actual and projected payouts).

On top of this reserve a "realistic resilience test" is performed, also known as the Risk Capital Margin (RCM). This test consists of a fall in equity and property prices, a widening of credit spreads, a parallel up and down shift in the yield curve, and a persistency shock. These scenarios happen instantaneously, and mitigating effects due to management actions (a change in asset mix, a change in expense policy) are allowed after the scenario has happened. Table 3 summarizes the strengths and weaknesses of the Peak 2 framework.

Table 3. Peak 2, strengths and weaknesses.

Strengths                                                | Weaknesses
Takes into account future bonuses on in-force business.  | Regulatory prescribed framework: little or no room for company-specific circumstances.
Measurement of the time value of options and guarantees. | RCM based on a single scenario, not taking into account the whole spectrum of possible risk drivers (e.g. operational risk, mortality/longevity, etc.).
Removes margin on margin by assuming prudent reserves and adding the LTICR in the Peak 1 framework. | Calculation of NP VIF based on different, prudent, EEV principles and therefore inconsistent with the realistic balance sheet approach.

The overall idea behind the framework is that the regulatory Peak sets the minimum capital requirement to ensure there is consistency within the industry, providing policyholders the same level of assurance. However, as there is a level of prudence within the framework, this methodology does not provide shareholders the best insight into the actual value of the company. As a result, first EEV and later MCEV were developed to remove the prudence from the calculations and to align calculations with the market valuation view of shareholders. The Twin Peaks framework is quite similar to the Prudential Filter framework in the Netherlands. Similar to pension funds, insurance companies will define their risk appetite (i.e. which and how much risk they are willing to take) and define a ladder of intervention that specifies which actions need to be taken depending on their solvency level.

Which Pillar bites?

Despite RBS, the overall framework remains prudent. In case the regulatory surplus exceeds the realistic surplus, an additional reserve is added to the regulatory balance sheet, also known as the WPICC, which equals:

WPICC = Max(Peak 1 surplus − Peak 2 surplus − future transfers, 0)    (1)

Either the realistic surplus exceeds the regulatory surplus, or the other way around.

1. When the realistic surplus exceeds the regulatory surplus, this can be represented as in Figure 2. When the regulatory surplus (10) is smaller than the realistic surplus (15), the regulatory balance sheet is "biting", and the company is required to steer on the regulatory surplus.

Figure 2. No WPICC when Peak 2 surplus exceeds Peak 1 surplus.

2. If we increase the RCM component from 12 to 20, the regulatory surplus (10) would exceed the realistic surplus (7). As a result, the WPICC balancing item is added to equalise the surplus levels of Peak 1 and Peak 2, as represented in Figure 3.

Figure 3. WPICC when Peak 1 surplus exceeds Peak 2 surplus.


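A one-line check of the WPICC formula against the two numerical cases above; future transfers are set to zero, which is an assumption of this sketch.

```python
def wpicc(peak1_surplus, peak2_surplus, future_transfers=0.0):
    # WPICC = Max(Peak 1 surplus - Peak 2 surplus - future transfers, 0)
    return max(peak1_surplus - peak2_surplus - future_transfers, 0.0)

print(wpicc(10, 15))   # case 1: 0.0 -> the regulatory peak bites, no WPICC
print(wpicc(10, 7))    # case 2: 3.0 -> the WPICC equalises the two surpluses
```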

Solvency I - Pillar 2

Consequently, a new regime was introduced which removed prudence by including future profits on current business (i.e. VIF) in the current surplus. This extra capital can either be counted as an increase in assets or as a decrease in reserves. Also, the extra prudence in the regulatory balance sheet, the LTICR, was removed. The capital requirement is calculated as a series of independent stresses which are then aggregated using a correlation matrix. However, there is an obligation to report the diversified and undiversified capital requirement for every risk driver (a risk driver is a homogeneous variable that requires the insurance company to hold capital, e.g. equity, property, mortality, etc.). This provides more insight into which risks are harder to diversify and therefore require closer monitoring. The capital requirement for risk driver i can be calculated as follows (C is the correlation matrix, z is the vector of capital requirements for the individual risk drivers):

Capital Requirement Risk Driver_i = z_i Σ_j C_ij z_j / √(zᵀCz)    (2)

There are several ways to assign diversification benefits arising at group level to the business unit level. A pro rata formula is one of these approaches and works as follows:

Capital Requirement_BU_i, diversified = (Capital Requirement_BU_i, stand-alone / Σ_j Capital Requirement_BU_j, stand-alone) × Group Diversified Capital    (3)

Furthermore, there is an option to run a combined scenario, in which all risk drivers are affected simultaneously at a combined 1-in-200 probability. The combined scenario takes into account the impact of the individual risk drivers, but also their correlations (more highly correlated risk drivers are more dangerous and therefore require more capital). To determine the level of the risk drivers in the combined scenario, the following steps are required:

1. Calculate the diversified capital requirement for risk driver i, i.e. the marginal contribution of risk driver i to the total required capital: Σ_j Capital Requirement_j · ρ_i,j;
2. Divide the result under 1 by the total capital requirement √(zᵀCz);
3. Multiply the ratio of 1 over 2 by the original stress level of risk driver i.

The combined scenario gives more insight into the risks the company faces when several risks happen simultaneously. As an example of a combined scenario, suppose we have the individual stresses and required capital for each of the risk drivers shown in Table 4, and the correlations between the risk drivers in Table 5.

Table 4. Undiversified capital requirement per risk driver.

Individual stresses | Stress level | Individual capital requirement
Equity              | 30%          | 10
Property            | 25%          | 5
Insurance           | 20%          | 10

Table 5. Correlation between risk drivers.

ρ-matrix   | Equity | Property | Insurance
Equity     | 1      | 0.75     | -0.5
Property   | 0.75   | 1        | -0.25
Insurance  | -0.5   | -0.25    | 1

Applying matrix multiplication to Tables 4 and 5 results in a diversified capital requirement of 13.23. Using Equation 2 then leads to the capital requirement for each risk driver, which can be represented as a percentage of the individual stress levels, as shown in Table 6.

Table 6. Diversified stress.

Combined scenario | Parameter | Shock
Equity            | 0.6614    | 20%
Property          | 0.7559    | 19%
Insurance         | 0.2835    | 6%

Although insurance risk has a high individual capital requirement, it is less material after taking diversification benefits into account. When the regulator is unsatisfied with either the level of the capital requirement for a risk driver (too low) or with the governance, an additional amount will be added to the capital requirement: the Individual Capital Guidance (ICG). This ICG is usually expressed as a percentage. A numerical sketch of this aggregation is given below.
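A minimal NumPy sketch reproducing the numbers of Tables 4-6; the variable names are illustrative.

```python
import numpy as np

z = np.array([10.0, 5.0, 10.0])          # individual capital requirements (Table 4)
s = np.array([0.30, 0.25, 0.20])         # individual stress levels (Table 4)
C = np.array([[ 1.00,  0.75, -0.50],     # correlation matrix (Table 5)
              [ 0.75,  1.00, -0.25],
              [-0.50, -0.25,  1.00]])

total = np.sqrt(z @ C @ z)               # diversified capital requirement: 13.23
diversified = z * (C @ z) / total        # marginal contributions, eq. (2); sums to `total`
param = (C @ z) / total                  # steps 1-2: 0.6614, 0.7559, 0.2835
combined_stress = param * s              # step 3: 20%, 19%, 6% (Table 6)

print(np.round(total, 2), np.round(param, 4), np.round(combined_stress, 2))
```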

Solvency II - Pillar 1

Pillar 1 under the Solvency II regime will be the regulatory framework under which insurance companies provide the regulator with a 1-year Value-at-Risk based on a 99.5% confidence level (the Solvency Capital Requirement, SCR). Furthermore, part of this SCR will be called the Minimum Capital Requirement (MCR). Within Pillar 1, there are 3 options:

1. Standard formula: the European regulator prescribes the level of the stresses and the correlations between the risk drivers. Stresses are performed individually, as in the ICA methodology;
2. Internal model: the insurance company develops its own model, with assumptions for each risk driver and correlation assumptions. The insurance company will make an application to its regulator to get approval for the use of the internal model (IMAP);
3. Partial internal model: some parts of the model will be company specific, while others will be based on the standard formula. Especially for risk drivers for which there is a lack of available data, such as operational risk, some insurance companies will not be able to model the risk yet. In the longer run, a partial internal model will develop into a full internal model.

A toy aggregation of module SCRs in the spirit of the standard formula is sketched below.
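This sketch shows only the square-root aggregation rule, SCR = √(Σ_ij ρ_ij SCR_i SCR_j); the module names, values and correlations are hypothetical, whereas the actual standard formula prescribes specific stress calibrations and correlation matrices.

```python
import numpy as np

def scr_standard_formula(scr, corr):
    """Aggregate per-risk-module SCRs with the square-root rule:
    SCR = sqrt(sum_ij rho_ij * SCR_i * SCR_j)."""
    scr = np.asarray(scr, dtype=float)
    return float(np.sqrt(scr @ np.asarray(corr) @ scr))

# Hypothetical module SCRs (market, life, default) and correlations.
scr = [40.0, 25.0, 10.0]
corr = [[1.00, 0.25, 0.25],
        [0.25, 1.00, 0.25],
        [0.25, 0.25, 1.00]]
print(round(scr_standard_formula(scr, corr), 1))  # 56.1 < 75.0 = undiversified sum
```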

Although the framework is new, it has several similarities with the Twin Peaks and ICA frameworks:

• On top of the best estimate liabilities, an extra measure of prudence is added, like the LTICR in Peak 1. This prudence represents the increase in liabilities that another insurance company would take into account when taking over the reserve. This risk margin only applies to non-hedgeable risks (risks which cannot be removed by market instruments).
• Furthermore, QIS5 prescribed the use of a single equivalent scenario in which several risk drivers were stressed simultaneously. This is comparable to the ICA combined scenario.
• During QIS5, the discount rate for valuing liability cash flows was derived by adding a liquidity premium depending on the nature of the assets backing these liabilities. Currently there are proposals to make the yield curve for discounting liabilities dependent on the return on assets (matching premium). This would actually imply a return to a Peak 1-like discount rate.
• Also, in this system a ladder of intervention will apply, such that intervention by the supervisor is required when capital is unable to sustain the MCR, and increased monitoring will take place when capital is unable to sustain the SCR.
• An equity dampener applies in the QIS5 standard formula; the Peak 1 resilience reserve also applies an equity dampener.
• Mitigation due to management actions happens after the stress scenario has taken place, in both QIS5 and the ICA.

Solvency II - Pillar 2

Pillar 2 is the core of the Solvency II framework, as it promotes the embedding of the Solvency II framework within the organisation. This involves governance, internal control and risk management. The Own Risk and Solvency Assessment (ORSA) is part of Pillar 2 and entails a process to reflect the firm's own view on the following aspects:

• A risk-based calculation based on the firm's own risk appetite: as firms have a different willingness to take risks and a different credit rating objective, the ORSA allows firms to perform a Pillar 1 calculation on a different VaR limit (e.g. 1 in 2000);
• It will provide the company with more information regarding sensitivities in the tail of the distribution. Also, a company can perform the calculation on a different metric than VaR, such as TailVaR;
• It provides flexibility regarding the frequency of the assessment, while the Pillar 1 frequency is prescribed;
• It includes some non-quantifiable risks, which will not be included in Pillar 1. Examples are strategic risk, reputational risk and liquidity risk;
• Business planning and scenario analysis (e.g. a market fall, closure to new business), which was previously done in the Financial Condition Report.

10 years later and how much progress has been made?

Solvency II will be introduced on 1 January 2014, about 10 years after the introduction of the English Twin Peaks approach. In those 10 years, an enormous amount of consultation has taken place between the European supervisors and the industry; nevertheless, the similarities between the old Solvency I and the new Solvency II regime are striking. A better understanding of the development of the English regime is hence beneficial for people working in the Dutch insurance industry. One can wonder, however, how much progress has truly been made in these past years: especially the inclusion of a discount rate which depends on the rate of return on assets feels like a U-turn back to the previous framework, and away from a truly market-based valuation methodology.

References

Dullaway, David and Peter Needleman. Realistic liabilities and risk capital margins for with-profits business. Institute of Actuaries, London, 2003.

Financial Services Authority. Enhanced capital requirements and individual capital assessments for life insurers. Consultation Paper 159, Financial Services Authority, London, 2003.

Financial Services Authority. "FSA Handbook". Financial Services Authority, November 2011. Web. November 2011.

Muir, Martin and Richard Waller. Twin Peaks – The Enhanced Capital Requirement for Realistic Basis Life Firms. Staple Inn Actuarial Society, London, 2003.

O'Keeffe et al. Current developments in embedded value reporting. Institute of Actuaries and Faculty of Actuaries, London, 2005.



Are you interested in being on the editorial staff and having your name in the colofon? If so, please send an e-mail to the chief editor at aenorm@vsae.nl. The staff of Aenorm is looking for people who would like to:
- find or write articles to publish in Aenorm;
- conduct interviews for Aenorm;
- write summaries of (in)famous articles;
- or maintain the Aenorm website.
To be on the editorial board, you do not necessarily have to live in the Netherlands.


Econometrics

Risk Management with the Multivariate Generalized Hyperbolic Distribution Calibrated by the Multi-Cycle EM Algorithm by: Marcel Holtslag

The question, raised by Pagan (1996), is whether the more complex models that capture the nature of the conditional density of returns are better suited than simpler but easier-to-use models. Gradually, the traditional Gaussian distribution for modelling financial returns has been replaced by several other viable distributions that capture the empirically observed heavy-tail behavior, kurtosis and peakedness. This study contributes to the further development of the multivariate generalized hyperbolic distribution (MGHyp) as a tool for forecasting the possible next-day portfolio loss. It differentiates itself by introducing an asset portfolio model via DCC-MGARCH and by trying to reduce the overall calibration time. By reviewing the subclasses following from the MGHyp class, the normal inverse Gaussian (NIG), the multivariate hyperbolic (Hyp) and the 10-dimensional multivariate hyperbolic distribution (KHyp), a recommendation is made about their overall performance.

DCC-Multivariate GARCH

The underlying portfolio model is based on the DCC(1,1)-MGARCH(1,1) of Engle (2002). It covers the time-dependent correlations among the assets and, as Laplante et al. (2008) mention, statistical evidence does not support an even more complicated model when dealing with financial returns. The model assumes log return series of $k$ assets that are conditionally multivariate normally distributed with constant mean $\mu$ and covariance matrix $H_t$, where the information set $\mathcal{I}_{t-1}$ consists of all known information up to time $t-1$:

$$r_t \mid \mathcal{I}_{t-1} \sim N(\mu, H_t), \qquad H_t = D_t R_t D_t,$$

where $D_t$ is a $k \times k$ diagonal matrix of time-varying standard deviations $\sqrt{h_{i,t}}$, specified by $k$ univariate GARCH(1,1) equations

$$h_{i,t} = \omega_i + \alpha_i (r_{i,t-1} - \mu_i)^2 + \beta_i h_{i,t-1}$$

for $i = 1, \dots, k$, with the restrictions $\alpha_i + \beta_i < 1$ and non-negative variances. The $k \times k$ dynamic correlation matrix $R_t$, with ones on the diagonal, is given by

$$R_t = \operatorname{diag}\{Q_t\}^{-1/2}\, Q_t\, \operatorname{diag}\{Q_t\}^{-1/2},$$

where $\operatorname{diag}\{Q_t\}$ denotes the diagonal matrix containing the diagonal elements of $Q_t$, and

$$Q_t = (1 - \alpha - \beta)\bar{Q} + \alpha\, \varepsilon_{t-1}\varepsilon_{t-1}' + \beta\, Q_{t-1}.$$

Lastly, the standardized residuals $\varepsilon_t$ are given by

$$\varepsilon_t = D_t^{-1}(r_t - \mu).$$

Marcel Holtslag

Marcel Holtslag obtained his Master's degree in Financial Econometrics at the University of Amsterdam in May 2011, and during his studies he contributed to several VSAE committees. This article is a brief summary of his Master's thesis, written under the supervision of Dr. Simon Broda and Prof. Dr. Peter Boswijk.


The unconditional covariance matrix $\bar{Q}$ is a $k \times k$ matrix estimated from the standardized residuals $\varepsilon_t$. A design feature of the DCC model makes it possible to estimate the model as a two-step optimization problem using the (quasi) maximum likelihood method. As long as the first-step parameter estimates are consistent, the second-step parameter estimates are consistent as well, assuming reasonable regularity conditions and a continuous likelihood function in the neighborhood of the true parameter values. After the estimation process, the standardized residuals are used to calibrate the multivariate generalized hyperbolic distribution.
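To make the second step concrete, the sketch below runs the correlation recursion on standardized residuals coming out of already-fitted univariate GARCH models. This is a minimal illustration, not the author's code; the function name and the use of the sample covariance of the residuals for $\bar{Q}$ are our own assumptions.

```python
import numpy as np

def dcc_correlations(eps, alpha, beta):
    """Run the DCC(1,1) recursion on T x k standardized residuals and
    return the dynamic correlation matrices R_t for t = 1, ..., T."""
    T, k = eps.shape
    Q_bar = eps.T @ eps / T              # unconditional covariance of eps
    Q = Q_bar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)        # diag(Q)^(-1/2) Q diag(Q)^(-1/2)
        e = eps[t][:, None]
        Q = (1 - alpha - beta) * Q_bar + alpha * (e @ e.T) + beta * Q
    return R
```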

Multivariate generalized hyperbolic distribution

Although a vast literature describes all sorts of heavy-tailed asymmetric distributions, it is the density of Barndorff-Nielsen (1977) that is of particular interest. While the original paper concentrates on modelling the mass-size distributions of aeolian sand deposits, the independent calibration of the third and fourth moments showed potential for modelling financial returns. Since 1992, three serious attempts have been made to implement the MGHyp density for financial problems; this paper uses the latest and most general version, covered by Protassov (2004) and McNeil et al. (2005), since it handles the unwieldy tractability issues. The derivation is not that complicated if one starts with the normal mean-variance mixture assumption

$$X \stackrel{d}{=} \mu + W\gamma + \sqrt{W}\,A Z \qquad (1)$$

with $Z \sim N_k(\mathbf{0}, I_k)$, $A$ a matrix with $AA' = \Sigma$ of dimension $k \times k$, and $\mu$ and $\gamma$ parameter vectors in $\mathbb{R}^k$. The remaining step to derive the MGHyp distribution is to assume that the mixture weight $W$ follows a Generalized Inverse Gaussian, or $N^-(\lambda, \chi, \psi)$, distribution. This results in the following MGHyp density expression:

$$f(x) = \frac{\exp\{(x-\mu)'\Sigma^{-1}\gamma\}\,\psi^{\lambda}\left(\sqrt{\chi\psi}\right)^{-\lambda}}{(2\pi)^{k/2}\,|\Sigma|^{1/2}\,K_{\lambda}\left(\sqrt{\chi\psi}\right)} \times \frac{\psi_*^{\,k/2-\lambda}\;K_{\lambda-k/2}\left(\sqrt{\chi_*\psi_*}\right)}{\left(\sqrt{\chi_*\psi_*}\right)^{k/2-\lambda}}$$

with $\chi_*$ and $\psi_*$ defined by

$$\chi_* = (x-\mu)'\Sigma^{-1}(x-\mu) + \chi, \qquad \psi_* = \gamma'\Sigma^{-1}\gamma + \psi,$$

and $K_{\lambda-k/2}$ the modified Bessel function of the third kind. Each parameter defines one particular part or shape of the density: $\lambda$ controls the shape, $\chi$ the peakedness, $\psi$ the difference between the statistical skewness and kurtosis estimates, $\mu$ is the location vector, $\Sigma$ the dispersion matrix and $\gamma$ the skewness vector. All six function arguments $(\lambda, \chi, \psi, \mu, \Sigma, \gamma)$ are in general unknown, hence some estimation procedure is necessary. $\lambda$ is considered to be predefined, since estimating its value takes considerable time while the difference between a predefined and an estimated $\lambda$ is negligible. The other five parameters are estimated by means of the Expectation-Maximization (EM) algorithm.
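Because the mixture representation (1) is constructive, simulating from the MGHyp distribution is straightforward once the GIG weight $W$ can be drawn. The sketch below is a minimal illustration of this, not part of the original paper; the function name is ours, and the $(\lambda, \chi, \psi)$ parameterization is mapped onto SciPy's geninvgauss via a scale factor.

```python
import numpy as np
from scipy.stats import geninvgauss

def sample_mghyp(n, lam, chi, psi, mu, Sigma, gamma, seed=None):
    """Draw n samples of X = mu + W*gamma + sqrt(W)*A*Z with AA' = Sigma,
    Z ~ N_k(0, I) and W ~ GIG(lam, chi, psi), i.e. the mixture in (1)."""
    rng = np.random.default_rng(seed)
    k = len(mu)
    # GIG(lam, chi, psi) = sqrt(chi/psi) * geninvgauss(p=lam, b=sqrt(chi*psi))
    w = np.sqrt(chi / psi) * geninvgauss.rvs(
        lam, np.sqrt(chi * psi), size=n, random_state=rng)
    A = np.linalg.cholesky(Sigma)                 # AA' = Sigma
    Z = rng.standard_normal((n, k))
    return mu + w[:, None] * gamma + np.sqrt(w)[:, None] * (Z @ A.T)
```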

Calibration by EM

The aim of the Expectation-Maximization algorithm is to maximize the conditional expectation of the full-model log-likelihood function, such that consistent parameters can still be estimated when the dataset is incomplete. Each iteration of the EM algorithm consists of two steps: the Expectation step, which deals with the missing values, and the Maximization step, which estimates the parameters.

Defining the E-step

Let $(p) \in \mathbb{N}$ denote the current EM cycle and let $\Theta^{(p)}$ denote the collection of parameters $\lambda, \mu, \Sigma, \chi, \psi$ and $\gamma$ at cycle $(p)$, such that the E-step is defined as

$$Q(\Theta \mid \Theta^{(p)}) = \mathbb{E}\left[\log f(x_{\text{complete}} \mid \Theta) \mid x_{\text{observed}}, \Theta^{(p)}\right].$$

Unfortunately, the complete-data specification depends not only on the observations $x$, but also on the missing variables $w$, since the observational data for the GIG distribution are unavailable. Estimating the joint density $f(x, w)$ is therefore quite difficult in its present form, but if it is somehow known that $f(w \mid \Theta)$ has been realized, this knowledge provides information on whether $f(x \mid w; \Theta)$ has also been realized. Assume $f(w \mid \Theta) > 0$, such that

$$f(x_{\text{complete}} \mid \Theta) = \prod_{i=1}^{T} f(x_i \mid w_i; \Theta)\, f(w_i \mid \Theta).$$

Let $x_i$ be a vector of dimension $k$ containing the standardized residuals of the DCC(1,1)-MGARCH(1,1) model of $k$ assets at time $i$, where $i \in [1, \dots, T]$; assume that all observation vectors $x_i$ are captured in a $1 \times kT$ vector $(x_1, \dots, x_i, \dots, x_T)$, and let the latent variables $w = (w_1, \dots, w_i, \dots, w_T)$ be drawn from $N^-(\lambda, \chi, \psi)$, given as

$$f(w_i; \lambda, \chi, \psi) = \frac{\chi^{-\lambda}\left(\sqrt{\chi\psi}\right)^{\lambda}}{2 K_{\lambda}\left(\sqrt{\chi\psi}\right)}\, w_i^{\lambda-1} \exp\!\left\{-\frac{1}{2}\left(\frac{\chi}{w_i} + \psi w_i\right)\right\}$$

with $K_{\lambda}(\sqrt{\chi\psi})$ the modified Bessel function of the third kind with index $\lambda$. The expression for the conditional density $f(x_i \mid w_i; \Theta)$ is derived using the normal mean-variance mixture (1):

$$f(x_i \mid w_i; \mu, \Sigma, \gamma) = \frac{1}{(2\pi)^{k/2}\,|\Sigma|^{1/2}\, w_i^{k/2}} \exp\!\left\{-\frac{(x_i-\mu)'\Sigma^{-1}(x_i-\mu)}{2 w_i}\right\} \exp\{(x_i-\mu)'\Sigma^{-1}\gamma\} \exp\!\left\{-\frac{w_i}{2}\,\gamma'\Sigma^{-1}\gamma\right\}.$$

By substituting the found density expressions, one eventually finds the completely defined E-step as

$$Q(\Theta \mid \Theta^{(p)}) = Q_1(x_i, \mu, \Sigma, \gamma) + Q_2(\lambda, \chi, \psi)$$

with

$$Q_1(\cdot) = -\frac{T}{2}\log|\Sigma| - \frac{k}{2}\sum_{i=1}^{T}\mathbb{E}[\log w_i \mid x_i, \Theta^{(p)}] + \sum_{i=1}^{T}(x_i-\mu)'\Sigma^{-1}\gamma - \frac{1}{2}\sum_{i=1}^{T}\mathbb{E}[w_i^{-1} \mid x_i, \Theta^{(p)}]\,(x_i-\mu)'\Sigma^{-1}(x_i-\mu) - \frac{1}{2}\gamma'\Sigma^{-1}\gamma\sum_{i=1}^{T}\mathbb{E}[w_i \mid x_i, \Theta^{(p)}] - \frac{Tk}{2}\log(2\pi)$$

and

$$Q_2(\cdot) = (\lambda-1)\sum_{i=1}^{T}\mathbb{E}[\log w_i \mid x_i, \Theta^{(p)}] - \frac{\chi}{2}\sum_{i=1}^{T}\mathbb{E}[w_i^{-1} \mid x_i, \Theta^{(p)}] - \frac{\psi}{2}\sum_{i=1}^{T}\mathbb{E}[w_i \mid x_i, \Theta^{(p)}] - \frac{\lambda T}{2}\log\chi + \frac{\lambda T}{2}\log\psi - T\log\left[2 K_{\lambda}\left(\sqrt{\chi\psi}\right)\right].$$

It can be shown that all three conditional expectations appearing in $Q_1(\cdot)$ and $Q_2(\cdot)$ are moments of the Generalized Inverse Gaussian distribution if one utilizes Bayes' rule, since the conditional distribution of $w_i$ given $x_i$ is again GIG.

Defining the M-step

The updated parameters $\Theta^{(p+1)}$ are found in the second step by separately maximizing $Q_1(x_i, \mu^{(p)}, \Sigma^{(p)}, \gamma^{(p)})$ and $Q_2(\chi^{(p)}, \psi^{(p)}, \lambda)$. Setting the derivatives of $Q_1(\cdot)$ with respect to $\mu^{(p)}$, $\gamma^{(p)}$ and $\Sigma^{(p)}$ equal to zero and solving the system of equations yields, with weights $\delta_i^{(p)} = \mathbb{E}[w_i^{-1} \mid x_i, \Theta^{(p)}]$, $\eta_i^{(p)} = \mathbb{E}[w_i \mid x_i, \Theta^{(p)}]$ and their averages $\bar{\delta}^{(p)}$ and $\bar{\eta}^{(p)}$,

$$\gamma^{(p+1)} = \frac{\frac{1}{T}\sum_{i=1}^{T}\delta_i^{(p)}(\bar{x} - x_i)}{\bar{\delta}^{(p)}\bar{\eta}^{(p)} - 1},$$

$$\mu^{(p+1)} = \frac{\frac{1}{T}\sum_{i=1}^{T}\delta_i^{(p)} x_i - \gamma^{(p+1)}}{\bar{\delta}^{(p)}},$$

$$\Sigma^{(p+1)} = \frac{1}{T}\sum_{i=1}^{T}\delta_i^{(p)}(x_i - \mu^{(p+1)})(x_i - \mu^{(p+1)})' - \bar{\eta}^{(p)}\gamma^{(p+1)}(\gamma^{(p+1)})'.$$

Maximizing the quasi log-likelihood function $Q_2(\cdot)$ with respect to $\chi$ and $\psi$ is performed by a numerical maximization method:

$$\max_{\chi,\psi}\; (\lambda-1)\sum_{i=1}^{T}\xi_i - \frac{\chi}{2}\sum_{i=1}^{T}\delta_i - \frac{\psi}{2}\sum_{i=1}^{T}\eta_i - \frac{\lambda T}{2}\log(\chi) + \frac{\lambda T}{2}\log(\psi) - T\log\left[2 K_{\lambda}\left(\sqrt{\chi\psi}\right)\right]$$

where $\xi_i^{(p)} = \mathbb{E}[\log w_i \mid x_i, \Theta^{(p)}]$.
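A sketch of how this numerical step might look, using SciPy's bounded optimizer; the function name, optimizer choice and starting values are our own assumptions, as the paper does not specify the method used.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import kv  # modified Bessel function K_nu

def update_chi_psi(lam, delta, eta, xi, start=(1.0, 1.0)):
    """Numerically maximize Q2 over (chi, psi) given the E-step weights."""
    T = len(delta)

    def neg_q2(params):
        chi, psi = params
        s = np.sqrt(chi * psi)
        return -((lam - 1) * xi.sum() - chi / 2 * delta.sum()
                 - psi / 2 * eta.sum() - lam * T / 2 * np.log(chi)
                 + lam * T / 2 * np.log(psi) - T * np.log(2 * kv(lam, s)))

    res = minimize(neg_q2, start, method="L-BFGS-B",
                   bounds=[(1e-8, None), (1e-8, None)])
    return res.x
```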

Since only $Q_2(\cdot)$ is optimized instead of the complete log-likelihood, it is required to recalculate the weights $\delta_i^{(p)}$, $\eta_i^{(p)}$ and $\xi_i^{(p)}$ using the updated parameters $\mu$, $\Sigma$ and $\gamma$ of the current cycle $(p)$. Furthermore, to address the near-singularity problem for $\Sigma$, the dispersion matrix is scaled using the determinant of the sample covariance matrix:

$$\Sigma^{(p+1)} = \left(\frac{|\operatorname{cov}(x)|}{|\Sigma^{(p+1)}|}\right)^{1/k} \Sigma^{(p+1)}.$$

The E- and M-steps keep updating the parameters until the difference between cycles $(p)$ and $(p-1)$ is negligible.
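Since $w_i \mid x_i$ is GIG with parameters $(\lambda - k/2, \chi_*, \psi_*)$, the three E-step weights reduce to Bessel-function ratios. The sketch below is one way to evaluate them; the numerical derivative for $\mathbb{E}[\log W]$ and the function name are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function K_nu

def gig_weights(lam, chi, psi, h=1e-5):
    """E-step weights for W ~ GIG(lam, chi, psi): eta = E[W],
    delta = E[1/W], xi = E[log W], using
    E[W^a] = (chi/psi)^(a/2) * K_{lam+a}(s) / K_lam(s), s = sqrt(chi*psi)."""
    s = np.sqrt(chi * psi)
    r = np.sqrt(chi / psi)
    eta = r * kv(lam + 1, s) / kv(lam, s)
    delta = kv(lam - 1, s) / (r * kv(lam, s))
    # E[log W] = log r + d/dnu log K_nu(s) at nu = lam (central difference)
    xi = np.log(r) + (np.log(kv(lam + h, s)) - np.log(kv(lam - h, s))) / (2 * h)
    return eta, delta, xi
```

Applied with $\lambda - k/2$, $\chi_*(x_i)$ and $\psi_*$, this yields $\eta_i$, $\delta_i$ and $\xi_i$ for each observation.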

Risk assessment

The one-day-ahead forecast portfolio return is estimated by the underlying DCC-MGARCH portfolio model with the residuals following the MGHyp distribution, calibrated on the previous 500 observations, using the conditional VaR approach with nominal coverage levels of 95% and 99%. A rolling window of the previous 1,000 observations has also been tested, but no significant differences compared with the window of 500 were noticeable. Let $x$ be the vector of asset weights, summing to one, and let $H_{t+1}$ be the forecast DCC-MGARCH covariance matrix, such that the forecast portfolio return is denoted by

$$x'r_{t+1} = x'\mu + x'H_{t+1}^{1/2}\varepsilon$$

with $\varepsilon$ following the MGHyp distribution. Estimating the conditional VaR using a multivariate density model is rather complicated due to the integrand, but by introducing weights the problem translates to an ordinary univariate case which we know how to solve. The proof is straightforward: assume a multivariate linear function $BX + b$, let the intercept vector $b = 0$ and let the matrix $B$ be a weighting vector $x \in \mathbb{R}^k$ such that the sum of the weights equals one. Finally, let $X$ denote the normal mean-variance mixture with $W$ distributed according to the GIG distribution, such that the multivariate case translates to the univariate one.
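Alternatively, the quantile can simply be estimated by Monte Carlo: simulate the MGHyp residual, map it through the forecast moments and read off the loss quantile. The sketch below assumes a sampler for the $k$-dimensional residual (such as the one given earlier); the function and argument names are hypothetical.

```python
import numpy as np

def portfolio_var(weights, mu, H_sqrt, sample_eps, n_sims=100_000, level=0.99):
    """One-day-ahead conditional VaR of x'r_{t+1} = x'mu + x'H^{1/2} eps,
    estimated from n_sims simulated MGHyp residuals (H_sqrt = H_{t+1}^{1/2})."""
    eps = sample_eps(n_sims)                     # n_sims x k residual draws
    returns = eps @ H_sqrt.T @ weights + weights @ mu
    return np.quantile(-returns, level)          # quantile of the loss
```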

Application and remarks

The equally weighted portfolio is constructed from the S&P 500 top ten constituents by market capitalization: Apple Inc (AAPL), Chevron Corp (CVX), General Electric (GE), Intl Business Machines Corp (IBM), JP Morgan Chase & Co (JPM), Microsoft Corp (MSFT), Procter and Gamble (PG), AT&T (T), Wells Fargo & Co (WFC) and Exxon Mobil Corp (XOM). The finite sample of T = 2,766 observations, for the period 01/01/2000 to 01/01/2011, is formed by taking daily negative log returns of the adjusted daily close price. It is apparent from the descriptive statistics that eight out of the ten assets endured a loss over the covered period due to the liquidity crisis. Furthermore, all ten asset return series exhibit heavy-tailed and asymmetric properties. Finally, Mardia's test of whether the portfolio is Gaussian distributed rejects normality at the 1% significance level, with a p-value of 0.0000.

Results

By reviewing the subclasses normal inverse Gaussian (NIG), multivariate hyperbolic (Hyp) and 10-dimensional multivariate hyperbolic (KHyp), a comparison is made as to which of the hyperbolic subclasses performs best in handling the observed heavy tails and asymmetry. Figure 1 shows two of the twelve outcomes of the time series analysis, illustrating the risk violations by + markings, while Tables 1 and 2 present the statistical data. It is apparent from Tables 1 and 2 that the Christoffersen unconditional coverage test indicates strong evidence to reject the null of correct coverage for all three symmetric distributions at the nominal 95% coverage level. Of these three distributions, only the NIG is found to be statistically significant for the Christoffersen conditional test. While the symmetric distributions all underestimate risk, no significant problems are found when estimating the risk using the asymmetric distributions. For the nominal 99% coverage level, no strong statistical significance of under- or overestimated risk is found for either the symmetric or the asymmetric distributions. It seems suspicious that all three symmetric distributions outperform the asymmetric distributions based on the MSE value. Firstly, the portfolio is heavily skewed according to Mardia's test. Secondly, the asymmetric distribution nests the symmetric one, such that one should expect at least one asymmetric subclass to outperform a symmetric subclass.
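For reference, the unconditional coverage statistics in the tables that follow can be reproduced from the violation count alone. The following sketch of Kupiec's proportion-of-failures LR test is our own illustration (it assumes 0 < violations < T):

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof(violations, T, p):
    """LR test of correct unconditional coverage: violations is the number
    of VaR exceedances over T days at nominal tail probability p."""
    x, phat = violations, violations / T
    ll_null = (T - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = (T - x) * np.log(1 - phat) + x * np.log(phat)
    lr = -2 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)   # statistic and asymptotic p-value
```

With T = 2,266 backtesting days and the roughly 81 violations implied by the symmetric NIG's coverage of 0.0358, this gives a statistic of about 10.7, in line with the 10.6869 reported in Table 1.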

Figure 1. The one-day-ahead conditional Value at Risk using the hyperbolic distribution for the nominal 95% (2nd line from top) and 99% (3rd line from top) coverage levels. Conditional VaR violations are indicated by + markings, with the 95% level shown below the 99% coverage level.

Table 1. Backtesting test statistics based on the 95% coverage level. The p-values are denoted between parentheses and * (**) indicates a rejection at the 1% (5%) significance level. All three symmetric subclasses are rejected by the Kupiec test at the 1% significance level.

                  β̂        Coverage             Independence        Conditional           MSE
asym NIG          0.0419    3.2674 (0.0707)      0.08577 (0.7696)    3.3532 (0.1870)       0.00167
asym HYP          0.0424    2.9100 (0.0880)      0.0880 (0.7668)     2.9980 (0.2234)       0.00172
asym 10dim Hyp    0.0442    1.6959 (0.1928)      0.1347 (0.7136)     1.8306 (0.4004)       0.00261
sym NIG           0.0358    10.6869 (0.0011)*    0.4789 (0.4890)     11.1658 (0.0038)*     0.00064
sym HYP           0.0371    8.7006 (0.0032)*     0.3249 (0.5685)     9.0256 (0.0110)**     0.00071
sym 10dim Hyp     0.0384    6.9369 (0.0084)*     0.2102 (0.6466)     7.1472 (0.0281)**     0.00177

Table 2. Backtesting test statistics based on the 99% coverage level. The p-values are denoted between parentheses and * (**) indicates a rejection at the 1% (5%) significance level.

                  β̂        Coverage            Independence       Conditional        MSE
asym NIG          0.0079    1.0373 (0.3085)     0.3045 (0.5812)    1.3417 (0.5113)    0.00442
asym HYP          0.0102    0.0054 (0.9412)     0.4925 (0.4828)    0.4980 (0.7796)    0.00507
asym 10dim Hyp    0.0126    1.1873 (0.2759)     0.7262 (0.3941)    1.9134 (0.3842)    0.01255
sym NIG           0.0128    1.6519 (0.1987)     0.7784 (0.3776)    2.4303 (0.2967)    0.01422
sym HYP           0.0143    0.2383 (0.6255)     0.5805 (0.4461)    0.8188 (0.6641)    0.01434
sym 10dim Hyp     0.0137    2.7884 (0.0949)     0.8883 (0.3459)    3.6768 (0.1591)    0.02164


Due to the parsimonious behavior of the MSE, more observations are explained by the simpler, less complex symmetric distribution. This results in lower MSE values and falsely indicates the better-suited model. This probably explains the odd ranking, and it is therefore advised not to rank the models based on the MSE value. If one only compares setups that share the same assumptions (e.g. symmetry and nominal coverage level), it follows from Tables 1 and 2 that in these four cases the NIG distribution outperforms. This result is not surprising and has been documented in other papers, for instance by Protassov (2004) and McNeil et al. (2005).

Calibration time improvement

Calibration of the MGHyp density by EM is in general a slow optimization process. Let the time to optimize one cycle of the backtesting analysis be five minutes; estimating the full backtesting sample (2,266 cycles)¹ then takes a shocking eight days to complete. For an empirical study with twelve different distributions this would simply take too much time, namely 96 days. This study therefore proposes the use of parallel processing on multi-core desktops. The effect is substantial: the running time drops by a factor of roughly 26, from about eight days to several hours. Using one of the symmetric distributions results in a running time of merely six hours, while the asymmetric distributions take seven hours to complete. The difference is perfectly explainable, because the symmetric case assumes γ = 0, which therefore does not have to be estimated by the EM algorithm.

¹ Full sample reduced by the first 500 calibration observations.
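Because the backtesting cycles are independent of one another, they parallelize trivially. A minimal sketch of the idea, with a hypothetical per-cycle calibration function passed in by the caller:

```python
import multiprocessing as mp

def run_backtest(calibrate_cycle, n_cycles):
    """Spread the independent backtesting cycles over all available cores.
    calibrate_cycle(t) is assumed to recalibrate the MGHyp density on the
    500-observation window ending at cycle t and return that day's VaR."""
    with mp.Pool() as pool:          # defaults to cpu_count() workers
        return pool.map(calibrate_cycle, range(n_cycles))
```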

References

Barndorff-Nielsen, O. (1977). "Exponentially decreasing distributions for the logarithm of particle size." Proceedings of the Royal Society of London A, 353:401-419.

Engle, R. (2002). "Dynamic conditional correlation: a simple class of multivariate GARCH models." Journal of Business & Economic Statistics, 20:339-350.

Laplante, J., Desrochers, J., and Prefontaine, J. (2008). "The GARCH(1,1) model as a risk predictor for international portfolios." International Business and Economic Research Journal, 7:23-34.

McNeil, A., Frey, R., and Embrechts, P. (2005). Quantitative Risk Management: Concepts, Techniques and Tools. Princeton University Press, Princeton, New Jersey.

Pagan, A. (1996). "The econometrics of financial markets." Journal of Empirical Finance, 3:15-102.

Protassov, R. (2004). "EM-based maximum likelihood parameter estimation for multivariate generalized hyperbolic distributions with fixed λ." Statistics and Computing, 14:67-77.



Career

Imagination Driving Corporate Leadership in 2020 by: Carl Johan Lens

This article is written from the perspective of the 'head-hunter', since that is what I do for a living: finding the best possible financial specialist in a given situation. I do this with the underlying purpose of facilitating growth and development for both the candidate and the hiring organisation. For sure, a cause worth working for. Below I will argue that in order to be a successful professional in 2020 you will need to integrate your personality into your job, and that imagination is an essential part of leadership. That leadership starts with leading one person: you. And you, as a professional, need to practice your imagination if you want to make a real contribution to the organisation you are devoting your time and energy to.

Personality makes the difference

In the daily practice of headhunting we introduce professionals to organisations. In the value chain of corporate life, the head-hunter works as an intermediary or broker. 'People brokerage' is basically about two things: competence and personality. Twenty years of experience in recruitment has taught me that personality is the key factor for successful performance in a position, and that competence is the 'conditio sine qua non': without it you can only fail. This is not hard to understand when we look at the position of prime minister of the Netherlands: it is the personality that makes someone right for the job. Rutte's optimism, his ambition to be successful and his ability to handle criticism like 'water off a duck's back' are personality traits that are key factors in his success. For us, the head-hunters, it is in the area of personality versus job requirements that one is most likely to fail or succeed. This area is full of uncertainty, and it takes a lot of wisdom and insight to make the perfect match. For you, the professional, the outcome of the selection process makes the difference between a brilliant fit with the job or a nagging and disturbing sensation of mismatch between the requirements of the position and

Carl Johan Lens

Carl Johan Lens is Managing Partner at Lens & Partners Executive Search & Coaching. After having been International Director for an English executive search firm, Carl co-founded and managed an Internet start-up company before establishing Lens & Partners. Carl Johan also conducts individual career development programs and executive coaching programs for employees of companies and institutes, and for private individuals. He also chairs the MachaWorks Foundation (www.machaworks.org), an organization aimed at the development of rural Africa through the introduction of the internet. Comments, questions: cl@lensandpartners.com

who you are as a person. One thing is certain here: your job satisfaction will be directly related to how much of your personality you can express in your work. In other words, can you take the ‘whole you’ to work or do you have to leave your heart outside the office door?

The future belongs to the Integrated Professional How does one do that: take the ‘whole self’ to work and have your organisation and yourself benefit from your complete presence? It all starts with the foundation of leadership: knowing yourself. You need to know yourself in order to be able to manage your weaknesses and efficiently deploy your strengths. For example, hire a strong secretary with wonderful organisational skills if you are disorganized. Don’t waste your time on improving skills you will never excel in. On the strengths side, do not be afraid to take authority in areas where you know yourself to be excellent. To know yourself, have “meetings with yourself” on a regular basis and make sure that the most important relationship you have, the one with yourself, is thriving. I know leaders who retreat in monasteries on a regular basis, Helmut Kohl did it every year for three whole weeks, taking time to get back in shape and defining goals in his professional and personal life. By knowing yourself you cannot avoid seeing how your personal life influences your professional life and vice versa. You will see that there is no real separation between the two worlds and that personality and profession come together in what I would call ‘the Integrated Professional’. Integrated, because personality and profession are of equal importance and cannot be seen as separate entities. They influence each other profoundly. It is your personality that will drive your professional behaviour to higher grounds. The leaders who have integrated personality and profession make the difference. Think




of Nelson Mandela if you need an icon of Integrated Professionalism, or of Richard Branson, who never leaves his personality out of his work. For us, the lesser gods, these leaders provide an image of a preferred style of working. They show us how you can bring yourself to work as you are, including your ethics and preferences, and still be successful and fulfilled. My point is that you will never be able to develop your professional performance to outstanding levels without integrating your profession and your personality. Knowing yourself is the foundation of true leadership.

Personality needs Leadership

So leadership and knowing yourself are intertwined. This is important, since failure in leadership rarely has to do with a lack of information; it is much more often caused by a lack of willingness to look at one's self. Think of the CEO of a bank that was once a national pride if you need an example. Our culture just doesn't promote self-assessment, yet we need it if we want to improve our effectiveness and impact. For a leader to be really effective and sustainable, he or she needs to spend time reflecting on his or her actions and plans. Since narcissism and, to a lesser extent, arrogance and sheer stupidity prevent someone from getting in touch with 'what is really going on' and from getting on a par with reality, you as a leader need to organise your own safeguard by taking time to be alone in silence every day. By doing so, you will get in touch with the values important to you. By 'standing still' you can see more clearly how these values can be integrated into your professional life. You will find your own 'voice', your specific contribution to the organisation you are working for. C.K. Prahalad, the late, highly esteemed business thinker, put it like this: "Leadership is about self-awareness, recognizing your failings, and developing modesty, humility, and humanity." Being connected with your own self in this way will give a boost to your leadership, since you will be more centred, more self-assured without being arrogant, and you will have a better concept of how you can serve the people around you. By seeing more clearly what it is that drives you, you will stay true to yourself, and in this way you will be a stable, consistent and reliable leader for the people surrounding you (including your family, not unimportant). In other words, you will be trustworthy. And trust is something the world is crying out for, as we all know.

Leadership needs Imagination

Trust is hard to find. The connection between the lack of trust and the present crisis in leadership is not difficult to explain: it is hard to put trust in leaders who don't know where they are going and who give no purpose to the difficulties we all have to face when dealing with the present problems in the global economy.


It seems we are stuck: no leadership for lack of vision, and no willingness to deal with reality. No way out. Luckily there are people who keep their head and heart in balance, Prahalad being a very good example. To illustrate this I recall the lecture that Prahalad held at Nijenrode in 2009, where he came up with the following: "I have always thought that the main factor in corporate success was analysis. At present I think that of the seven factors a corporation needs to have, analysis is number seven: important, not crucial. I have shifted my thinking towards a more anthropocentric approach to corporate life. At this moment I believe that the human imagination is the number one need for corporate success." Imagination, who would have thought of that? It seems so intangible, so 'New Age'. But then, if you come to realise that everything that has been built has first been conceived, and thus everything we see around us had its roots in someone's head before we could see it, it doesn't sound that far-fetched: imagination at number one in the list of corporate needs. Before something sees the light of day, it has been imagined. This is what Stephen Covey calls 'the Second Creation'. Like Prahalad, Covey encourages managers and leaders to imagine the future and then start building it. Both state that imagination is a necessity for improvement. And we know how true this is: think of Martin Luther King's "I have a dream", and I don't need to say more. Funny how imagination is not a subject in high school, except maybe when it comes to drawing. If we look at the present stagnation in the world, we come to realize that an MBA from Harvard is not enough to conquer the present crisis, which is 'ad fundum' a crisis in leadership: we need much more than technology. We need imagination to design and then build a world in which we really want to live and where we can see our children grow up in peace and wellbeing. A task for which we need both our 'hard skills' and our 'soft skills': a complete human being, an Integrated Professional.

Are you an Integrated Professional in 2020?

I believe we need leaders with imaginative power, and that you as a professional need to practice your imagination in order to create your own professional future. In our headhunting practice I see a shift in demand from organisations. The shift in the coming decade is from technology, things outside of you, towards a centred, Integrated Professionalism that is based on values inside you. These values are largely based on the power of imagination: a new skill for the new professional in a new time with new demands. The new econometrician or actuary in the year 2020 is an Integrated Professional who dedicates both personality and skills to an organisation. By giving room to your imagination and by reflecting on your own actions, you will be able to create a vision for the future and give guidance in shaping it. I hope to have met you by then!


What if they have no confidence in the economy? Who knows what the future will bring? And money can only be spent once. So if people have somewhat less confidence in the economy, they will sooner save than spend. Maintaining that confidence is then essential. That is why De Nederlandsche Bank (DNB) supervises the soundness of financial institutions. We set requirements for banks, insurers and pension funds, and we keep a finger on the pulse. Furthermore, as part of the European System of Central Banks, we contribute to a solid monetary policy and to payments that are as smooth and secure as possible. In this way we work for the financial stability of the Netherlands, because confidence in our financial system is the precondition for prosperity and a healthy economy. Do you want to contribute to that? Then visit www.werkenbijdnb.nl

Working on confidence

| Lawyers | Accountants | Actuaries | Economists | EDP auditors


Econometrics

Explaining Interest Rates in the Dutch Mortgage Market: A Time Series Analysis by: Machiel Mulder and Mark Lengton

In order to explain the development of mortgage interest rates in the Dutch market, we conducted a time-series analysis using monthly data. Because of non-stationarity of these data, we estimated the model in both first and second differences. By adding the lagged dependent variable to these models we removed the remaining serial correlation. It appears that the lagged dependent variable is the most important explanatory variable, while we do not find a significant effect of mortgage costs, risks in the money market or market structure. We conclude that in the short term the mortgage interest rates are mainly determined by factors having a prolonged influence, while the immediate influences appear to be relatively small.

Introduction


For a number of years, financial markets have been stirred up, and the Dutch mortgage market is no exception. Because of a perceived high level of mortgage interest rates, a number of parties have complained about the functioning of the mortgage market in the Netherlands. In their view, the market for mortgages is not functioning competitively, as financing costs for banks have fallen drastically over the last two years, while the mortgage interest rates for consumers have been fairly stable (see Figure 1). In order to get a better insight into the factors driving mortgage interest rates, the NMa conducted an in-depth analysis of the mortgage market (NMa, 2011). This analysis included two types of econometric analysis: a panel analysis on annual data per bank and a time-series analysis on monthly data at the industry level (see Mulder and Lengton, 2011). In this paper, we describe the time-series analysis.

Literature

The pioneering study in the field of bank interest margins is Ho and Saunders (1981). These authors find that there are two major factors driving the interest margins: the degree of competition and the interest rate risk banks are exposed to. Maudos and Guevara (2004), using data on the banking markets in the European Union, find significant and positive effects of a number of factors, including market power, credit risk and operating costs. These results were confirmed by Hawtrey and Liang (2008). Titman and Tompaidis (2005) conclude, on the basis of 26,000 individual mortgages from 1992 to 2002, that higher default risk also contributes to higher margins. Dietrich and Wunderlin (2010), however, found significant negative effects of market concentration (measured by the HHI) for the Swiss market.

Machiel Mulder and Mark Lengton

Machiel Mulder is deputy Chief Economist of the Netherlands Competition Authority, visiting researcher at the Department of Economics and Business of the University of Groningen and extramural fellow at the Tilburg Law and Economics Centre. Mark Lengton is a freelance employee at the Office of the Chief Economist of the Netherlands Competition Authority and an MSc Economics student at Tilburg University.


Model

In a time-series analysis at the industry level, we use monthly data to assess the effect of a number of variables on the average mortgage interest rate in the Dutch market. The dependent variable is the average monthly mortgage interest rate (I), while the explanatory variables refer to costs (C), risks (R) and market structure (S):

$$I_t = \beta_0 + \beta_1 C_t + \beta_2 R_t + \beta_3 S_t + \varepsilon_t \qquad (1)$$

The mortgage costs (C) indicate the price banks have to pay for financial capital. These costs are based on Euribor rates, swap rates, interest rates on deposits, interest rates with residential mortgage-backed securities (RMBS) premiums and interest rates with credit default swap (CDS) premiums (see NMa, 2011). We expect these costs to be positively related to the mortgage interest rates. Although these costs also include premiums to reduce risks, they do not cover all risks which banks face. If the durations of funding contracts do not fully match the durations of the mortgage contracts, banks face a money market risk (see e.g. Maudos and Guevara, 2004; Hawtrey and Liang, 2008).¹ In addition, banks are subject to default risk, which may require higher interest rates as coverage (see e.g. Titman and Tompaidis, 2005). In this time-series model at the industry level it is not possible to include the latter risk, as this risk differs among banks. The money market risk can, however, be measured by the so-called normalised swaption volatility. If this volatility goes up, we expect mortgage interest rates to go up. The influence of market structure on interest rates is measured by the C3, which is the share of the three largest suppliers in the total market. High levels of C3 suggest that a small number of banks may have market power that they can leverage to increase prices and earn higher profits. We therefore expect C3 to be positively related to the mortgage interest rates (see e.g. Maudos and Guevara, 2004; Hawtrey and Liang, 2008).

Data

The characteristics of the data used are described in Tables 1 and 2. It appears that a fairly strong correlation exists between the mortgage interest rate on the one hand and the mortgage costs and the normalised swaption volatility on the other hand.

The mortgage interest rate and the mortgage costs are depicted in Figure 1. From that figure we learn that the mortgage interest rate steadily rose from the end of 2005 until 2009. After that, the interest rate declined by approximately 1 percentage point. From the figure we also learn that the mortgage costs increased slightly from 2005 until mid-2008, before decreasing sharply in the second half of 2008.

As a measure for risk in the money market we use the normalised swaption volatility. A higher volatility indicates higher uncertainty about the future cost of interbank lending. Figure 2 clearly shows that the risk strongly increased at the end of 2008, but reduced to normal levels afterwards. Figure 3, depicting the development in C3 and HHI, shows that the Dutch mortgage market became more concentrated during the years 2004 - 2010, in particular in the most recent years.

On the basis of the mortgage interest rate and sourcing costs, we are able to calculate the marginal profit, which is the coverage for non-financial costs as well as for fixed costs. An indicator to measure the marginal profit is the Lerner index, which is the difference between the mortgage interest rate (I) and the sourcing costs (C), scaled by the mortgage interest rate: (It - Ct) / It. For example, at the sample means of Table 1 the Lerner index equals (4.63 - 3.67) / 4.63 ≈ 0.21. It appears that the marginal profits at the industry level slightly decreased from 2004 until the end of 2008 (see Figure 4). Afterwards, the profits increased strongly. This holds true for the various types of mortgages, although in some cases (such as "variable interest") this pattern is more explicit than in others (such as "10 years fixed interest").

Figure 1. Mortgage interest rate and mortgage costs (8/2004 - 10/2010). Source: see NMa (2011).
Figure 2. Normalised swaption volatility (8/2004 - 10/2010). Source: Interbancaire data/Super Derivatives (R).
Figure 3. Market structure of the Dutch mortgage market, measured by C3 (monthly averages, 2004 - 2010). Source: Kadaster.
Figure 4. Profitability per type of mortgage, measured by the Lerner index (1/2004 - 10/2010).

¹ Imperfections in the matching of the funding might be due to pipeline risk or prepayment risk (see NMa, 2011).

Table 1. Description of variables (8/2004 - 10/2010) (n=75).

Variable                              Mean    Standard deviation    Minimum    Maximum
Mortgage interest rate (I)            4.63    .52                   3.64       5.61
Mortgage costs (C)                    3.67    .84                   2.50       5.35
Normalised swaption volatility (R)    .78     .27                   .47        1.85
C3 (S)                                .73     .03                   .68        .80

Sources: Moneyview (I), DNB (C), Interbancaire data/Super Derivatives (R), Kadaster (C3); note that both R and S are divided by 100 for scaling reasons.

Table 2. Correlation matrix of variables (8/2004 - 10/2010) (n=75).

Variable                              (I)     (C)     (R)     (S)
Mortgage interest rate (I)            1.00
Mortgage costs (C)                    .84     1.00
Normalised swaption volatility (R)    .63     .64     1.00
C3 (S)                                .19     -.09    .40     1.00

Results

In a time-series analysis on high-frequency data, non-stationarity and autocorrelation may seriously distort the analysis. Although theoretically one can doubt why interest rates would be non-stationary, in practice non-stationarity is often the case (End, 2011).² Testing our data for non-stationarity with the augmented Dickey-Fuller test, we indeed cannot reject the null hypothesis of non-stationarity (unit root) (see Table 3). As the residuals of an OLS regression appear to be stationary, we can conclude that our data are cointegrated. In order to correct for non-stationarity, we estimate the model in both first and second differences. Both the model in first differences and the model in second differences show autocorrelation, as follows from the Breusch-Godfrey test (see Table 4). In order to correct for autocorrelation, we add the lagged dependent variable to these models as an explanatory variable. The Breusch-Godfrey test shows that this specification solves the autocorrelation problem in the second differences model, but not in the first differences one. Therefore we also present the Newey-West t-statistics, which correct for the impact of autocorrelation.

Table 3. Augmented Dickey-Fuller test for unit root.

Variable                          Level     1st Difference    2nd Difference
Mortgage interest rate            -.916     -4.339            -4.764
Mortgage costs                    -1.430    -3.305            -6.071
Normalised swaption volatility    -1.908    -4.135            -6.067
C3                                -2.141    -6.505            -8.497

Note: The critical values are -4.108 (1%), -3.481 (5%) and -3.169 (10%).

² Interest rates do not follow a structurally rising pattern, as for instance macroeconomic quantities do. In the long term, these data have a kind of "anchor", which creates stationarity.




Table 4. Results of time-series analysis in different model specifications.

Mortgage interest rate            1st differences      1st differences       2nd differences       2nd differences
                                                       plus lagged dep.                            plus lagged dep.
Mortgage costs                    .11 (1.56/1.79)      .06 (1.01/1.39)       -.09 (-1.69/-1.98)    -.09 (-1.77/-1.98)
Normalised swaption volatility    .03 (.26/.34)        -.12 (-1.19/-1.62)    -.07 (-.98/-1.52)     -.03 (-.40/-.60)
C3                                -.67 (-.78/-.91)     -.28 (-.38/-.50)      .22 (.31/.50)         -.07 (-.10/-.12)
Mortgage interest rate (-1)                            .54 (4.92/3.91)                             -.36 (-3.16/-2.48)
Constant                          .00 (.02/.01)        -.00 (-.03/-.03)      -.00 (-.05/-.07)      .00 (.01/.01)
Adj. R²                           1%                   25%                   3%                    15%
Breusch-Godfrey (P>chi2)          .00                  .00                   .00                   .97

Note: t-statistics in parentheses (normal/Newey-West).
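As an illustration of the Table 4 specification, the sketch below estimates the first-difference model with a lagged dependent variable and Newey-West (HAC) t-statistics using statsmodels. The column names and the lag choice are our own assumptions, not the NMa's code.

```python
import statsmodels.api as sm

def diff_model(df, maxlags=1):
    """OLS of the first difference of I on the differences of C, R, S
    and the lagged difference of I, with HAC (Newey-West) standard errors.
    df is assumed to hold the monthly series in columns I, C, R and S."""
    d = df[["I", "C", "R", "S"]].diff()
    d["I_lag"] = d["I"].shift(1)
    d = d.dropna()
    X = sm.add_constant(d[["C", "R", "S", "I_lag"]])
    return sm.OLS(d["I"], X).fit(cov_type="HAC", cov_kwds={"maxlags": maxlags})
```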

Conclusion

We do not find a significant effect of mortgage costs, risks in the money market or market structure on the monthly mortgage interest rate. It appears that the lagged dependent variable is the only statistically significant explanatory variable. Hence, we conclude that in the short term the mortgage interest rates are mainly determined by factors that have a prolonged influence, while the immediate effects appear to be relatively small. In order to explain the mortgage interest rate, lower-frequency data have to be used. In a panel analysis on annual data, we were indeed able to determine the contribution of these factors to the changes in the mortgage interest rate (see Mulder and Lengton, 2011).

References

Dietrich, A. and C. Wunderlin. What drives the margins of mortgage loans? Institute of Financial Services IFZ, Lucerne University, Switzerland, 2010.

Hawtrey, K. and H. Liang. "Bank interest margins in OECD countries." North American Journal of Economics and Finance 19 (2008): 249-260.

Ho, T.S.Y. and A. Saunders. "The determinants of bank interest margins: theory and empirical evidence." Journal of Financial and Quantitative Analysis 16 (1981): 581-600.

Maudos, J. and J.F. Fernández de Guevara. "Factors explaining the interest margin in the banking sectors of the European Union." Journal of Banking & Finance 28 (2004): 2259-2281.


Mulder, M. and M. Lengton. Competition and interest rates in the Dutch mortgage market: an econometric analysis over 2004-2010. NMa Working Paper 5, NMa, The Hague, 2011.

NMa. Sectorstudie Hypotheekmarkt; een onderzoek naar de concurrentieomstandigheden op de Nederlandse Hypotheekmarkt. The Hague, 2011.

Titman, S. and S. Tompaidis. "Determinants of credit spreads in commercial mortgages." Real Estate Economics 33.4 (2005): 711-738.


Puzzle

On this page you will find a few challenging puzzles. Try to solve them and compete for a prize! But first we provide you with the answers to the puzzles of the last edition.

Solutions

Answer to "The Bitterbal Problem"

Any desired number that is divisible by 3 can easily be made with portions of 6 and 9 bitterballs, except the number 3 itself: if the tourists can't achieve their desired number with portions of 6 only, then they can use one portion of 9 and order the rest in portions of 6. If the number they want is not divisible by 3, then they can order one portion of 20. If the remaining number is divisible by 3, they can use the above method for the rest. If the remaining number still isn't divisible by 3, they should order a second portion of 20; the remainder must then be divisible by 3, and they can order portions of 6 and 9 as above. The largest impossible number is therefore such that 20 has to be subtracted twice and the remainder is 3 itself, which cannot be made with portions of 6 and 9. So the largest impossible number is 2 · 20 + 3 = 43.
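The reasoning is easy to verify by brute force; a small sketch (ours, not part of the original puzzle):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def representable(n):
    """Can exactly n bitterballen be ordered in portions of 6, 9 and 20?"""
    if n == 0:
        return True
    return any(n >= p and representable(n - p) for p in (6, 9, 20))

assert not representable(43)                          # 43 is impossible
assert all(representable(n) for n in range(44, 500))  # everything above works
```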

Answer to "Hannah's House Number"

Hannah sometimes lies about her house number, but Jake does not know that. He reasons as if Hannah speaks the truth. After his first three questions, Jake still needs to choose between two numbers, one of which starts with a 3. A number that starts with a 3 must be smaller than 50, so Hannah's (lied) answer to Jake's first question was "No". Now there are four possibilities:
• Hannah's house number is a square and a multiple of 4: 16 or 36;
• Her house number is a square and not a multiple of 4: 9, 25 or 49;
• Her house number is not a square and a multiple of 4: 8, 12, 20, and more;
• Her house number is not a square and not a multiple of 4: 10, 11, 13, and more.
Only the combination "her house number is a multiple of 4" and "her house number is a square" results in two numbers, of which one starts with a 3. Hannah's (lied) answer to Jake's second question therefore was "Yes", and Hannah's (true) answer to Jake's third question was also "Yes". In reality, Hannah's number is larger than 50, not a multiple of 4, and a square. Of the squares larger than 50 and at most 100 (these are 64, 81, and 100), this only holds for 81. So Hannah's real house number is 81.

Math Maze In the math crossword puzzle above, each row and each column is a math equation. Use the numbers 1 through 36 to complete the equations. Each number is only used once.


Solutions to the puzzle above can be submitted up to February 1st, 2012. You can hand them in at the VSAE room (E2.02/04), mail them to aenorm@vsae.nl or send them to VSAE, for the attention of Aenorm puzzle 73, Roetersstraat 11, 1018 WB Amsterdam, the Netherlands. Among the correct submissions, one book token will be awarded. Solutions can be submitted in either English or Dutch.



From the beginning of the academic year, there were a lot of activities and events for our members to attend. The first-year students had a great time during the VSAE Introduction Days, where they were motivated to join one of our committees. They organized our classic opening activity, the pool tournament, in November. The turnout was high and we had a great evening. The Actuarial Congress and the Beroependagen were both highly successful, with a record number of visitors. The National Econometricians Soccer Tournament (LEVT) in Tilburg was also a victorious day for the VSAE: our noted soccer team Flex proved to be in top form and won the cup! From the 18th to the 20th of November, 52 VSAE members visited Maastricht for the one and only VSAE weekend. We visited the marl caves, took a walking tour of the city, went out for dinner and, of course, hit the bars. For the VSAE board, the year is already coming to an end. The new board will be announced at the General Members Meeting on the 12th of December. On the 23rd of December, we want to thank all our active members for their efforts with a special dinner. Our monthly drink starts right after dinner and has a cocktail chic theme. All our members and alumni are invited, so we hope everyone will come and celebrate the end of 2011 with us!

Agenda VSAE

• 12 December: General Members Meeting
• 23 December: Active Members Dinner
• 23 December: Monthly drink (cocktail chic party with members and alumni)
• 3 - 12 February: Skiing trip to Saint Sorlin d'Arves

The first few months of the new academic year have been a busy period for Kraket. We organized numerous activities, started planning our biennial study trip, and saw our membership increase substantially. At the time of writing we are just wrapping up our annual case day, which was held this year at the prestigious IGC business club on Dam Square. Over fifty students participated, and their boundless enthusiasm and teamwork made the day a great success. To start 2012 off right, we have planned a week of skiing in Les Deux Alpes in January, from which we will hopefully return relaxed and in top form for LED 2012, a project which we are again organizing in cooperation with the VSAE, and which demonstrates how well our two organizations work together. Looking at the short term: the next couple of weeks are packed with informal activities such as our monthly get-together, our freshman party, and our Sinterklaas celebration. In other words, there is much to look forward to, and we hope to see you at one of our events soon!

Agenda Kraket

• 23 December: General Members Meeting
• 10 January: Monthly drink
• 14 - 21 January: Skiing trip to Les Deux Alpes
• 28 February: National Econometricians Day (LED)


<On confidence in teamwork>

We consider teamwork the cornerstone of our business approach. Teamwork allows us to capture opportunities for the group as a whole, and in doing so to move beyond our individual boundaries. If you see yourself as an ambitious team player, we would like to hear from you. For our Analyst Program, NIBC is looking for university graduates who share our enthusiasm for teamwork. Personal and professional development are the key elements of the Program: in-company training in co-operation with the Amsterdam Institute of Finance, and working side-by-side with professionals at all levels and in every financial discipline as part of learning on the job. We employ top talent from diverse university backgrounds, ranging from economics and business administration to law and technology. If you have just graduated with above-average grades and think you belong to that exceptional class of top talent, apply today. Joining NIBC's Analyst Program might be the most important career decision you ever make! Want to know more? Surf to www.careeratnibc.com.

Interested? Please contact us: NIBC Human Resources, Frouke Röben, recruitment@nibc.com. For further information see www.careeratnibc.com. NIBC is a Dutch bank that offers integrated solutions to mid-market clients in the Benelux and Germany. We believe ambition, teamwork, and professionalism are important assets in everything we do. THE HAGUE | LONDON | BRUSSELS | FRANKFURT | NEW YORK | SINGAPORE | WWW.NIBC.COM


Welcome to the world of consulting

You are a consultant through and through. You want to do something with your mathematical background. And you find it interesting to be in contact with clients and with colleagues all over the world. Then Towers Watson is the right place for you!

Scan this QR code with your smartphone.

Benefits | Risk and Financial Services | Talent and Rewards
werkenbijtowerswatson.nl

