INTELLIGENT RISK knowledge for the PRMIA community
July, 2015 ©2015 - All Rights Reserved Professional Risk Managers’ International Association
PROFESSIONAL RISK MANAGERS' INTERNATIONAL ASSOCIATION
EDITORIAL BOARD
EXECUTIVE EDITOR Justin McCarthy editor@prmia.org
PRODUCTION EDITOR Andy Condurache andy.condurache@prmia.org
INSIDE THIS ISSUE
003 Letter from Justin McCarthy, Chair PRMIA Board
004 Letter from the Executive Director
006 Are regulators missing the point? by Marcus Cree
010 ORM Conference
014 Be prepared for ECB's analytical credit dataset (AnaCredit) by Bogdan Teleuca & Armando Cortez
024 The continued rise of Biosimilars: Trouble ahead for Big Pharma? by Will Stow
028 Backtesting Value-at-Risk: how good is the model? by Philippe Cogneau
036 C-Suite Events: London: Model Risk Management and Black Swans / New York City: Best Practices in Model Risk Management
038 Correlation Risk – Models and Management by Gunter Meissner
043 University event: ICBS & HKU
SPONSORED BY
Thanks to our sponsor, the exclusive content of Intelligent Risk is freely distributed worldwide. If you would like more information about sponsorship opportunities contact cheryl.buck@prmia.org.
SPONSORED BY
Thanks to our sponsor, the exclusive content of Intelligent Risk is freely distributed worldwide. If you would like more information about sponsorship opportunities contact cheryl.buck@prmia.org.
Adis is a leading medical publisher with more than 40 years' experience in delivering independent drug evaluations. The new AdisInsight platform brings together essential scientific and commercial information to support financial evaluations of the pharmaceutical industry. It contains scientifically sound data on drugs, trials, deals and safety, all of which has been reviewed, verified and consolidated by a team of scientists from multiple sources.
Visit adis.com/finance for more information, or preview the database at adisinsight.springer.com
FIND US ON @prmia | prmia.org/irisk
letter from the chair
Justin M. McCarthy

PRMIA: Certification, Education & Training
An industry standard for an Association's future

The Association takes great pride in the recent completion and introduction of the newly updated Professional Risk Manager (PRM™) designation exam. With this completion, as well as a new offering of our Operational Risk Manager (ORM) certificate and the forthcoming relaunch of the Associate Professional Risk Manager (Associate PRM) certificate, PRMIA has set a standard in certification, continuing education and training offerings that is unmatched in the industry. This direction is unanimously supported by PRMIA's Global Board, the product of months of strategic planning. PRMIA's broad continuing development and focus on education, along with our commitment to meeting the diverse needs of our members, will continue to strengthen the Association's position as a leading provider throughout financial services.

However, one of the most important contributors to that strategic planning exercise will now be leaving PRMIA. Our Executive Director, Kevin Cuff, is leaving PRMIA to take on a role as one of the chief bank regulators in the Commonwealth of Massachusetts. Kevin has been a great asset to PRMIA and our members throughout his time with us, and he has helped us focus on a strategy to deliver these new programs. We wish Kevin all the best in his new role, and we expect to stay in close contact with him. On a personal note, I truly enjoyed working with Kevin, and I thank him for all the work he has done with me as chair of our Global Board.

So, PRMIA has a strategy and a plan, and we have already seen the fruits of its efforts through a 35% increase in both PRM and Associate PRM offerings, further supporting the broad-based demand for exam- and training-based opportunities. As an example, we are pleased to announce that our newly introduced ORM offering has seen unprecedented demand, surpassing our expectations. PRMIA's reputational and financial future continues to look bright as a prominent provider of risk-based life-long learning.

Justin McCarthy
Chair, PRMIA
letter from the executive director
Kevin Cuff
It's education, stupid!

In 1992, President Bill Clinton's political strategists James Carville and George Stephanopoulos coined the phrase "It's the economy, stupid!" to emphasize the simplicity of the mission before them in that year's presidential campaign; my title is a takeoff on it. PRMIA is at a similar crossroads. Over the course of the past year or more, your Association has aggressively reviewed its missions, objectives and strategic directions. The end result is echoed in my opening line: it's all about the education!

If your Association delivers the certification, education and training objectives clearly outlined by the Board of Directors over the past 12 months, now taken up by the new Director of Certification & Education, Agnes Amos-Coleman, then it can fulfill its members' mandate for greater access to continuing development. PRMIA therefore has a tremendous opportunity in front of us all: to develop an ambitious risk management training catalog that will serve the needs of the risk industry for the foreseeable future.

As many of you may now know, I conclude my service to the Association with this edition, as I resign to become one of the chief bank regulators in the Commonwealth of Massachusetts (USA). I am hopeful that I may continue to serve the Association in fulfilling this ambitious and needed agenda. Thank you very much for the support.
Kevin M. Cuff Executive Director, PRMIA
PRMIA RESOURCES: A LIBRARY OF RISK INFORMATION AT YOUR FINGERTIPS Through articles, papers, publications, periodicals, webinars and surveys on a variety of financial risk management topics, PRMIA is dedicated to providing thought leadership resources. PRMIA'S RESOURCES INCLUDE AN EVER-INCREASING LIBRARY OF: Case Studies | White Papers | Survey Results | Videos Webinars | Risk Casts | Speaker Presentations Online Courses | Risk University Courses | Intelligent Risk Issues | Publications by Various Publishers | Blogs
Become a PRMIA Member today to keep up to date on the latest risk resources.
Are regulators missing the point?
by Marcus Cree

Regulation may be the main driver behind most efforts to improve global limit management, but financial institutions that focus solely on compliance are missing the opportunity to improve their firm's bottom line. Marcus Cree, Risk Specialist at Misys, explores.
Question 01
Misys and PRMIA recently jointly released the results of a survey which highlighted a growing need for global risk limits management. What is the main driver behind internal initiatives to improve global limit management for banks today?
Marcus
There are two main drivers behind the increasing globalisation of limit management. The first is regulatory: more and more controls are being specified in terms of exposure to external counterparties, and official limits are being put on the larger counterparties in just about every regime. The second driver is strategic, in that improved limit management has clear business benefits: the more you globalise and share single limits across the business, the closer you get to being able to issue general directives concerning the direction of the business. Unfortunately, when requirements or changes are mandated by regulation they tend to be done in isolation, and the implementation of risk limits can become a tick-box exercise. In fact, our research shows that the majority of banks can leverage only 25-50% of their regulatory budget towards enhancing their own internal risk management. That's a missed opportunity, because there is a lot of upside on the table, such as making a firm more risk-centric in general, improved monitoring of defaults and internal ratings on exposures, and managing the business according to that data. It is much easier to do risk monitoring from a global limit management perspective than from a siloed one.
Question 02
When speaking to firms about how they view their own limit management capabilities, were there any surprising trends that you spotted in this survey?

Marcus
I think what is surprising is that the regulations are viewed as an administrative overhead rather than a definite boon to the business. Looking at the percentages, I expected a minority to say "it's an admin overhead with little business benefit", but it was a much greater number than should really be the case. One of the trends we're seeing is that risk in and of itself is still not central to strategic planning. Many firms will say that it's an aim (and I'm sure that it is), but there are overriding priorities preventing the strategic requirement to put risk at the centre of business decisions from becoming the main driver.
Question 03
So why are firms not including risk management as part of wider strategic plans?
Marcus
Obviously a lot of this is a best guess, because most people in the market will say that risk is key to what they do, but I think that's the expected response in 2015, given the financial crisis and the new regulation of recent years. The truth, though, is that for firms and their business planning to be risk-centric, they need to think beyond regulatory compliance and consciously put a huge amount of emphasis on getting the risk results. That push shouldn't come from IT but from the core business, because risk management and limit management are clearly a pragmatic way to influence day-to-day tactics and overall strategy. Risk decisions have to be based on the right data, and that should become an imperative within the organisation.
Question 04
Do you think that firms will be making changes to support regulatory compliance solely? Or are many also looking to make these changes to improve limit oversight and risk management practices?

Marcus
This is really the core question, and an interesting one. In my view, every time a decision is made in a financial firm, it's a risk management decision. Unfortunately, from a nomenclature perspective, risk management appears to be this group that sits separately and tells senior management and the rest of the world what risks are being taken, in a kind of post-event way. What makes it difficult is that regulatory rules are being introduced which have wide-reaching requirements on counterparty and liquidity management, and thus require, on a practical basis, a good level of data aggregation; yet many firms just view this from a compliance perspective, as if, as long as the procedures are somehow put together, it's fine. If you take a wider view and look at BCBS papers 239 and 294, which give guidance on how financial institutions should set up systems that put risk at the centre and make risk management processes robust, then limit oversight should become a natural function of that. There also needs to be better communication between the risk department and senior management about what they want their risk appetite to be. Specifically, firms should be able to set limits as a way of expressing risk appetite, possibly with minimums as well as maximums, and those limits should be understandable and actionable by the people in the front office who actually have to make the decisions.
Question 05
What are the long-term implications of the structure you've just described, where the response to making limit management changes is compliance-focused only? Are there negative impacts on a firm that misses the opportunity to improve its procedures overall to support risk management practices?

Marcus
From a firm's perspective, focusing on tick-boxes for compliance is definitely a huge missed opportunity. If one can take this area of oversight and transform it into one which takes a risk-based view of how and with whom to do business, as well as of the overall buffers and Tier 1 capital you want to put on the funding block, that is an incredible business advantage if you can get it all tied together. For a Tier 1 firm, it's entirely understandable that this is a far easier thing to discuss than to implement. But for a raft of smaller firms who don't have dozens of input systems, just knowing exactly where they are and what they want strategically will put them in a position of relative strength. It is a huge advantage.
Question 06
What is the biggest operational challenge that firms face in terms of improving their enterprise information to support more effective limits monitoring and management?

Marcus
I'd say the main challenge is the current setup of the organisation. If systems are not designed on common platforms, databases and structures, it becomes quite difficult to combine, net down and get a good view of what is what. What we're really talking about there is the quality and aggregation of the data. There are many mechanisms to clean that data, and thought has to be put into the technology behind the aggregation from several different database stores, results and trade repositories. Processes also need to be robust, because firms need to run calculations quite quickly: if the data is going to be meaningful, it has to be intraday, if not real time. And so, while there are a lot of operational and technological challenges, it all comes down to data quality, both external and internal.
Question 07
My final question: what recommendations would you offer to risk management readers as to how they should take their practices beyond compliance and improve limit management going forward?

Marcus
Overall, my advice would be for firms to take a step back and adopt a holistic and strategic view. To get to this strategic view, firms should ask questions around what decisions need to be made, how these decisions can be expressed at Board level, and what that would actually look like. Firms should also focus on how risk management can transform the way the business works, and whether this particular project (in this case, limits) will be instrumental in taking the next step where a decision is needed and made, or whether all the work is a dead end that simply has to be put in place.
As long as a firm can clearly see how the project is transformative and enables a risk conversation from the top of the shop to the bottom, then there is a clear understanding of how decisions are made. This could involve looking at different technologies and ways of aggregating data, rather than just asking what the quickest and cheapest way of doing it is: if the quickest and cheapest way doesn't push the firm towards the final transformative goal, then it is probably looking at the wrong solution. Organisations like Misys have been offering risk data aggregation solutions for a long time, rather than starting as if nothing existed before and building everything up from scratch. The best way forward is to look at the kind of technology that can reuse what is already working at the risk calculation and data storage level, without duplicating processes or data, and then quickly aggregate the data to make it meaningful and useful throughout the day, satisfying compliance requirements and boosting business profitability.

To access the PRMIA research study, please click here.
author

Marcus Cree, Global Risk Specialist, Misys
Marcus Cree has worked in financial risk management for over 20 years, spanning asset management and tier 1 banking, as a practitioner as well as in implementation and consultancy roles. Covering market, credit and liquidity risk, Marcus has worked through Basel 2 and 3, as well as Dodd-Frank implementations. He is currently Global Risk Specialist for Misys Financial Services.
PRMIA’s Operational Risk Conference – May, 2015
summarized by Allan D. Grody

The conference was held in New York City on May 6-8 and coincided with the launch of PRMIA's groundbreaking handbook "Risk Management Frameworks and Operational Risk Management" and its companion Operational Risk Manager Certificate program. The Handbook was authored by a prominent team of PRMIA member volunteers: Penny Cagan, David Coleman, Julian Fisher, Jonathan Howitt (who also edited the document), Mikhail Marakov and Barham Mizair. Our grateful thanks to these volunteers, who gave their time and expertise to educate us all.

The conference was preceded by a full-day workshop summarizing the content of the handbook. The presenter, Mario Mosse, led the workshop aided by PRMIA's Board Chairman, Justin McCarthy. Hats off to Mario for having distilled a complex work of nearly 300 pages into a concise one-day presentation. Mario comes to the task with his own prodigious knowledge and three decades of experience, having recently retired from Prudential Financial as Head of Operational Risk and, before that, having led the development of JPMorgan Chase's operational risk program. Later in the conference, Mario led a discussion on scenario analysis. Scenario analysis has the ability to educate the entire organization on vulnerabilities in controls, an important aspect of the exercise that complements the capital losses predicted from the scenarios.

The main two-day conference began with Robin Phillips, former Head of Operational Risk at JPMorgan Chase, who more recently held the same position at Bank of America. Robin gave us a tour of the various BIS Operational Risk ("OpRisk") papers, from the first separate treatment of OpRisk in 1998 to the most recent Review of Sound Management Principles for Operational Risk, released in October 2014.

Lori Gattinella, AVP of Operational Risk Management at The Hartford, described the differing balance sheet attributes of insurance companies vs. banks. She also described the implementation of the "three lines of defense" concept of operational risk at The Hartford: first-line defense provided by operating management, then the formal risk management function and, finally, internal audit.

Bob Lewis, former CRO of AIG, discussed conduct and culture risk and the implications of the too many compensation plans that were in place at AIG. These created an impenetrable labyrinth of incentive schemes that proved unmanageable as a means of steering employee behavior.
Michael Smith, SVP and Chief Innovation Executive at AccuWeather, presented weather forecasting as a proxy for financial institutions' risk management environment. The startling conclusion was that weather prediction, as refined as it appears to be and with all its historical data, is not very good at predicting the severity of storms. The implication for predicting the severity of losses with the risk management methods of financial institutions was an obvious metaphor.

Joe Sabatini, former OpRisk Head at JPMorgan Chase and Chair of the ORX Consortium (the group of banks reporting OpRisk losses to a collective database), shared his decade-and-a-half perspective on the evolution of OpRisk regulation. He also conducted an informal survey on the current state of OpRisk for this event; its highlights are shared later in this summary.

Justin McCarthy, Board Chair of PRMIA, discussed the Basel Committee revisions to the non-AMA approaches to calculating OpRisk capital: the Basic Indicator Approach, the Standardised Approach and the Alternative Standardised Approach. He noted that while these methods produced lower capital requirements, OpRisk losses were still increasing. In a later session he also discussed risk appetite and the lack of standard metrics to measure such risk.

John Simone, former OpRisk officer at both JPMorgan Chase and AIG, discussed reputational risk and the potential to observe its emergence through a set of indicators. Jaidev Iyer, a former Managing Director and Global Head of OpRisk at both UBS and Citibank, discussed the impediments to an effective OpRisk management system. One of the key impediments was an inability to "join the dots" across the many business silos that comprise large financial institutions.

Julian Fisher, Executive Director of Crest Rider and Professor of Finance at Mountbatten Institute, discussed the Standards of Practice (SOP) initiative being developed by PRMIA to allow the industry to govern itself. A first approach is to focus on OpRisk through the development of a scoring mechanism and risk weighting of KRIs (Key Risk Indicators).

A presentation on cyber risk by Jody Westby, CEO of Global Cyber Risk and Adjunct Professor at Georgia Institute of Technology, focused on the organizational governance structure and processes needed to address and respond to cyber vulnerabilities and incidents. The OCC's Deputy Comptroller for OpRisk, Bethany Duggan, presented the OCC's perspective on OpRisk and the fact that many of the issues of the financial crisis were OpRisk failures. Accenture's Bjorn Pettersen, Jonathan Freider and Greg Ross discussed reputational risk as a consequence of adverse commentary on social media; an approach to monitoring and scoring such social media risk was presented.
Rob DuBois, CEO of Seal of Peace Consulting, described in his presentation the parallels between the threats monitored by the Navy SEALs and the threats financial institutions face from their operational risk exposures. James Tunkey, a former Kroll executive, shared his view that risk managers need to pay attention to events that are routine, as executives at companies will not pay attention to risks that are far from probable. Other key takeaways from the presentations noted above follow:
• Operational risk management should be seen as complementary to capital calculations rather than a consequence of that activity
• Operational loss history is a backward-looking metric
• For OpRisk, a discussion around a one-in-one-hundred-year scenario occurrence is more valuable than the Basel-prescribed 99.9%, one-in-one-thousand-year occurrence
• Business process mapping is seen as an effective way of exploring, at a granular level, all aspects of control weaknesses and sources of operational risk. However, the documentation of business processes gets out of date quickly, suggesting ongoing process mapping efforts should concentrate on significant risks only
• OpRisk should be process-focused and integrated with risk appetite
• Data consistency and data standards are key missing components of OpRisk; BCBS's risk data aggregation mandate should help focus the industry on standards
• An overall forward-looking metric for OpRisk is still elusive
• Industry-wide initiatives led by a few thought-leader organizations will drive consistency and a rethink of the OpRisk agenda
• Overall, banks have made insufficient progress in implementing Basel's principles for the sound management of operational risk
• Basel's sound OpRisk principles should align with a firm's risk profile (risk appetite and risk tolerances) rather than with its capital calculations
• In an informal survey conducted by one presenter, the thought leaders contacted described the LDA (Loss Distribution Approach)/AMA (Advanced Measurement Approach) as unworkable; large losses were found to distort the methodology
• The same survey concluded that the OpRisk effort is considered a cost to be minimized rather than an investment in improved performance and customer service
• While Risk Control Self-Assessments (RCSAs) are critical for OpRisk, there are no standards, nor is there any way of aggregating the key components of such assessments, the KRIs (Key Risk Indicators), up to higher levels
• Basel's newly "simplified" OpRisk methods based on sound business practices still leave unresolved how to link OpRisk capital to operational risk
• In scenario analysis, if all controls fail, the fallback is the default of the values associated with inherent risk, absent any mitigating effect of those controls
• Scenario analysis complements the risk assessment process, which often overlooks material events that happen only infrequently. The value of scenario tests is that they get the organization to think through control vulnerabilities
• Incentive compensation risks, when multiple schemes are in place, need to be analyzed one by one. This is necessary to ensure that adverse consequences for the organization are not being incented in favor of an individual or group of individuals. Incentive systems must be aligned with expected behaviors
• A reputational risk box score should be developed to include such metrics as stock price volatility, the volume of negative and positive news on social media, inputs from employee and shareholder surveys, regulatory findings, industry studies, etc. (a sketch of the idea follows this list)
• PRMIA's in-development Standards of Practice will serve to assure regulatory authorities that they can depend on the risk management profession to act effectively in the public interest
• Corporate governance is the interlinked and efficient interaction of risk management and strategic planning with the organization's culture and risk appetite
• Cybersecurity should be positioned beyond preventing "breaches", to cover events that involve targeted attacks, sophisticated insertion of malware, "botnet" activities, hacking and cyber espionage by nation states, and insider actions
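As an illustration of how such a box score might be assembled, here is a minimal sketch; every metric name, scale and weight in it is hypothetical rather than taken from any presenter's method.

```python
# Hypothetical sketch of a reputational risk "box score": each input is
# normalized to a 0-100 scale (higher = worse) and combined with weights.
# All metric names and weights below are illustrative, not a prescribed standard.

def reputational_box_score(readings, weights):
    """Weighted average of normalized metric readings."""
    total = sum(weights.values())
    return sum(readings[name] * w for name, w in weights.items()) / total

weights = {
    "stock_price_volatility": 0.25,
    "negative_social_media_volume": 0.25,
    "employee_survey_sentiment": 0.15,
    "shareholder_survey_sentiment": 0.10,
    "regulatory_findings": 0.25,
}
readings = {  # this month's normalized readings (illustrative numbers)
    "stock_price_volatility": 62.0,
    "negative_social_media_volume": 48.0,
    "employee_survey_sentiment": 35.0,
    "shareholder_survey_sentiment": 20.0,
    "regulatory_findings": 55.0,
}
print(f"reputational box score: {reputational_box_score(readings, weights):.1f}")
```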
*Compiled and summarized by Allan D. Grody, President, Financial InterGroup, Member - PRMIA Blue Ribbon Advisory Panel
be prepared for ECB’s analytical credit dataset (AnaCredit)
by Bogdan Teleuca & Armando Cortez

abstract

Recent developments in the financial sector and its regulation have highlighted that more granular, frequent and standardized credit and credit risk data are needed within the European System of Central Banks (ESCB) to drive a more efficient and timely set of monetary policy, financial stability and counter-systemic measures grounded in accurate statistics. In this context, the Central Credit Registers (CCRs) operated by several National Central Banks (NCBs) in the EU appear as the principal data providers. This article goes through the Analytical Credit Dataset (AnaCredit) data requirements, analyses how far these requirements are covered by the current reporting made at national level by the CCRs, states the possible impact on current reporting applications, highlights some of the challenges and, finally, provides a provisional timeline for implementation.
National Central Credit Registers as a starting point

Today, the Central Credit Registers (CCRs) already collect information from reporting credit institutions. In the future, the CCRs will have to send more granular credit datasets to the European Central Bank (ECB). In 2010, the ECB issued a Memorandum of Understanding on the exchange of information among national Central Credit Registers.1 The memorandum lists, in its Annex 1, the main features of the CCR reporting made in 10 European countries:2
• Reporting institutions and types in scope
• Borrowers and counterparts covered
• Loans and exposures reported
• Identification of borrower and counterpart
• Reporting thresholds, frequency and deadlines
1 / European Central Bank: Memorandum of Understanding on the exchange of information among National Central Credit Registers, published in April 2010 2 / Austria, Belgium, Czech Republic, Germany, Spain, France, Portugal and Romania
On this occasion, it became clear that the reporting to the CCRs in each member state is far from homogeneous. In fact, a reader of the Memorandum will get the impression that similarities are the exception rather than the rule. Since then, a number of initiatives have been launched at national level to harmonize the reporting to the CCRs or to close some obvious gaps. We believe that this trend will continue and pick up speed over the coming months; as a consequence, a credit institution must be able to steer its compliance efforts effectively. Despite the different initiatives launched, the AnaCredit requirements3 are broader than the current scope of reporting. Let us look at what is reported today versus what will be in the scope of AnaCredit.
what is reported today - The Belgian Case

To give concrete examples, let us take the case of credit institutions in Belgium and the reporting made to the National Bank of Belgium. In Belgium, the NBB operates two CCRs, one for each borrower type:
• The Central Credit Register Reporting for Enterprises (CKO2/CCE2): credit institutions have the obligation to report, on a borrower-by-borrower basis, every month, the credits granted to legal persons (residents and non-residents) and individual entrepreneurs. The Central Credit Register Reporting for Enterprises 2 (CKO2) was launched soon after the memorandum, as a second, more granular version of the initial CKO reporting.
• The Central Credit Register Reporting for Private Persons (CKP/CCP): credit institutions have the obligation to report, on a loan-by-loan basis, the regulated credits granted to natural persons resident in Belgium within two working days, and the defaults on regulated and non-regulated credits within 8 working days. Some credit institutions in Belgium have chosen to report on a daily basis.
Let us look more closely at the AnaCredit reporting requirements vs. the credit data reported today: firstly for enterprises, secondly for private persons. Thirdly, we will analyze the way borrowers are identified today by the CCRs.
3 / Official Journal of the European Union (2014/192/EU): Decision of the European Central Bank of 24th of February 2014 on the preparatory measures for the collection of granular credit data by the European System of Central Banks (EUROPEAN CENTRAL BANK/2014/6)
Central Credit Register reporting for enterprises

Looking at Exhibit 1, the AnaCredit requirements seem to be cast in the same mould as the CCR for enterprises. In terms of priorities, credits to enterprises over a certain threshold rank higher. It seems that the ECB has chosen to keep the requirements as close as possible to what is already reported at national level, probably in an attempt to keep implementation costs and time to an absolute minimum. As a novelty, reverse repurchase loans (repos) are added to the scope of reporting, as well as a list of Basel II credit risk indicators: the Loss Given Default (LGD) and the Risk Weighted Assets (RWA). Unfortunately, as the data requirements are not yet fully detailed by the ESCB, the comparison stops here.

Exhibit 1 - Belgian Central Credit Register for enterprises: reporting requirements

Institutions
• AnaCredit: credit institutions
• CKO2/CCE2: credit institutions; insurers (guarantee insurers, credit insurers); factoring companies; leasing companies

Credit type
• AnaCredit: reverse repurchase loans; overdrafts (cash facilities); financial leasing; mortgage loans; instalment loans; credit card debts; commercial bills (trade receivables); other term loans
• CKO2/CCE2: cash credits (authorized credits with mixed uses, fixed-term credits, overdrafts, financial leasing & similar transactions, mortgage loans, non-mortgage instalment loans, own acceptances, commercial bills, other cash credits); commitment credits (guarantees other than those referred to below, guarantee-substitute credits, letters of credit, non-traded acceptances)

Threshold
• AnaCredit: 2016: >50,000 EUR; 2018: >25,000 EUR
• CKO2/CCE2: none

Scope
• AnaCredit: 2016: granted in home country; 2018: granted by foreign branches
• CKO2/CCE2: granted in BE or by branches abroad; extended in BE

Credit contract details
• AnaCredit: credit details, maturity, currency; authorized amount & used amount; collateral type & value; PD (only for institutions applying IRBA); non-performing status; LGD (only for institutions applying IRBA); risk-weighted assets; annual interest rate; syndicated loan, subordinated debt
• CKO2/CCE2: credit details, maturity, amounts, currency, country; type of credit: authorized & used; collateral (haircuts, outstanding); one-year probability of default (in %, Basel 2); "90 days past due" default details

Borrowers
• AnaCredit: legal entities (residents and non-residents); individual entrepreneurs
• CKO2/CCE2: legal entities (residents and non-residents); entrepreneurs
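To make the AnaCredit column of Exhibit 1 concrete, here is a minimal sketch of what a loan-level record carrying those attributes could look like. Every field name is our own illustration; the official AnaCredit templates were still being defined by the ECB at the time of writing.

```python
# Hypothetical loan-level record covering the AnaCredit attributes listed in
# Exhibit 1. Field names are illustrative only, not the official schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AnaCreditEnterpriseLoan:
    borrower_lei: str              # Legal Entity Identifier (see RIAD/LEI below)
    credit_type: str               # e.g. "reverse_repo", "mortgage", "overdraft"
    currency: str                  # ISO 4217 code, e.g. "EUR"
    maturity_date: date
    authorized_amount: float
    used_amount: float
    collateral_type: Optional[str]
    collateral_value: Optional[float]
    non_performing: bool
    annual_interest_rate: float
    syndicated: bool
    subordinated: bool
    pd: Optional[float] = None     # only for institutions applying IRBA
    lgd: Optional[float] = None    # only for institutions applying IRBA
    rwa: Optional[float] = None    # risk-weighted assets

    def reportable(self, threshold_eur: float = 50_000.0) -> bool:
        """Apply the 2016 threshold (> EUR 50,000; > EUR 25,000 from 2018)."""
        return self.authorized_amount > threshold_eur
```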
Central Credit Register reporting for private persons

Exhibit 2 - Belgian Central Credit Register for private individuals: reporting requirements

Credit type
• AnaCredit: instalment loans; financial leasing; overdrafts (cash facilities); credit card debts; mortgage loans; other term loans
• CKP/CCP: cash credits (instalment sales/loans, financial leasing, opening of credit for cash facilities/cards, overtaking of cash account, i.e. unauthorized overdrafts); mortgage loans

Threshold
• AnaCredit: 2016: >50,000 EUR; 2018: >25,000 EUR
• CKP/CCP: 200 EUR for the positive central, 25 EUR for the negative central

Scope
• AnaCredit: 2016: granted in home country; 2018: granted by foreign branches; performing and non-performing credits
• CKP/CCP: granted in BE; positive central and negative central

Credit contract details
• AnaCredit: credit details, maturity, currency; type; non-performing status; collateral type & value; PD and LGD (only for institutions applying IRBA); risk-weighted assets; annual interest rate
• CKP/CCP: credit details, maturity, amounts, currency, repayment details; type of credit; default date, claimability, default amount, regularization

Borrowers
• AnaCredit: natural persons, residents
• CKP/CCP: natural persons, residents
Although not as high a priority as the reporting of credits for enterprises, AnaCredit will include the credits of private persons. The ECB might be interested in statistical data such as:
• The balance of mortgage loans, which in most of Western Europe can be a significant amount, compared with the national debt for example
• The evolution of indebtedness and default rates at national level, by product, socio-demographic group and financial institution
borrower identification and information

Exhibit 3 - Borrower identification and details reported to the Belgian Central Credit Registers

First of all, the clear distinction between resident and non-resident borrowers will continue to be made, as is the case today with the reporting to the CCRs. For legal persons, information about headquarters and foreign branches might be required. Secondly, the borrower identifier for AnaCredit will be the Legal Entity Identifier from the Register of Institutions and Assets Database (see the next section). Regarding the direct identification of the borrower, the ECB is not directly interested in the details (perhaps because privacy regulations are still being discussed among the ESCB members). What the ECB cares about is mostly statistical data, such as the activity code, the country of origin and the size of the enterprise.
AnaCredit specific requirements

RIAD - REGISTER OF INSTITUTIONS AND ASSETS DATABASE

RIAD ("Register of Institutions and Assets Database") is an application operated by the ECB that collects business information from individual ESCB members and then makes the data available to the entire ESCB and to the public. The main objective of maintaining these lists is to facilitate the production of comprehensive and consistent euro area financial statistics by ensuring that the statistical reporting population is complete, accurate and homogeneously defined. In substance, the "national segments" in RIAD are updated by the NCB of the respective EU Member State, via "internal, external, commercial sources"4 and information derived from national supervisory authorities. The RIAD system should be fed overall on a host-country principle,5 meaning that the EU country where the exposure originated and where the borrower is resident determines which NCB is responsible. However, the home-country approach would be used for foreign branches and subsidiaries outside the euro area or the EU (e.g. the exposures of a Belgian bank in Switzerland would be reported by the Belgian parent institution). As regards non-resident borrowers, there will be further discussions on how to ensure data quality management; the ECB will probably act as a hub and ensure coordination for borderline cases.
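A toy sketch of the allocation rule just described. The euro-area set is deliberately abbreviated, and the function covers only the two cases named in the text (host-country within the euro area, home-country for branches outside it), so it illustrates the principle rather than the actual AnaCredit rules.

```python
# Illustrative host-country vs. home-country allocation, under stated
# assumptions: EURO_AREA is a reduced, illustrative set of country codes.
EURO_AREA = {"AT", "BE", "DE", "ES", "FI", "FR", "IE", "IT", "NL", "PT"}

def responsible_ncb(exposure_country: str, borrower_residence: str,
                    parent_country: str) -> str:
    """Return the country whose NCB reports the exposure.

    Host-country principle: the EU country where the exposure originated and
    the borrower is resident reports. Home-country principle: for branches or
    subsidiaries outside the euro area, the parent institution's NCB reports.
    """
    if exposure_country in EURO_AREA and borrower_residence in EURO_AREA:
        return exposure_country      # host-country principle
    return parent_country            # home-country fallback

print(responsible_ncb("BE", "BE", "BE"))  # -> BE
print(responsible_ncb("CH", "CH", "BE"))  # Belgian bank in Switzerland -> BE
```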
LEGAL ENTITY IDENTIFIER

The main attribute uniquely pointing to a credit institution in RIAD is called the LEI (Legal Entity Identifier).6 The LEI is a unique 20-character alphanumeric code, defined by ISO 17442, that would be assigned to all entities that are counterparties to financial transactions. The LEI itself is neutral, with no embedded intelligence or country codes, which would create unnecessary complexity for users. Every credit institution communicating with the NCB is obliged to apply for a Legal Entity Identifier. It is not excluded that, in the future, the LEI will be required by the CCRs in the interfaces for reporting enterprise or private client credits.
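To make the check-digit mechanics concrete, here is a short validation sketch. It assumes the ISO/IEC 7064 MOD 97-10 scheme used for the LEI's final two characters (the same arithmetic as IBAN check digits); the 18-character base in the example is made up.

```python
# LEI check-digit validation sketch (ISO 17442 / ISO-IEC 7064 MOD 97-10):
# letters map to 10..35, digits stay as-is, and the resulting integer
# modulo 97 must equal 1 for a valid code.
def lei_is_valid(lei: str) -> bool:
    lei = lei.strip().upper()
    if len(lei) != 20 or not lei.isalnum():
        return False
    numeric = "".join(str(int(c, 36)) for c in lei)  # A=10 ... Z=35
    return int(numeric) % 97 == 1

def lei_check_digits(base18: str) -> str:
    """Compute the two MOD 97-10 check digits for an 18-character base."""
    numeric = "".join(str(int(c, 36)) for c in (base18.upper() + "00"))
    return f"{98 - int(numeric) % 97:02d}"

base = "529900ABCDEF12345X"          # hypothetical 18-character base
lei = base + lei_check_digits(base)
print(lei, lei_is_valid(lei))        # -> True
```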
4 / Source: European Central Bank, Monetary and Financial Institutions on RIAD 5 / Bank of International Settlements, Irving Fisher Committee on Central Bank Statistics: Standardized granular credit and credit risk data, Violetta Damia and Jean-Marc Israël 6 / Febelfin, Belgian Federation of the Financial Sector, LEI Practical Aspects
BORROWER GROUP STRUCTURE AND AGGREGATION OF CREDIT EXPOSURES

In a recent paper published by the Bank for International Settlements (BIS) Irving Fisher Committee on Central Bank Statistics, "Standardised granular credit and credit risk data", Violetta Damia and Jean-Marc Israël affirm that "the reporting is foreseen to be established on a solo basis. However, as institutions may be part of more complex group structures, it is currently being investigated by the ECB [as] an appropriate approach to provide information on entities at group level, important from a financial stability and supervisory viewpoint." As a side note, the type of group structure being sought is the notion of a legal group.

The two authors then highlight that two main approaches were considered to allow for the analysis of credit exposures at group level: "the direct reporting on a group basis (e.g. reporting of exposures at the consolidated banking group level), or additional data on group structures." The former approach is considered more costly but provides more accurate information on exposures at group level. The latter method only uses the information reported on a solo basis and, therefore, is less costly for reporting agents; however, it requires retrieving the relevant information on group structures from other (supervisory) reports or from business registers.

We foresee quite a challenge ahead for the NCBs and the ECB, and it is very likely that the group structure will before long have to be provided by the reporting credit institutions to their CCRs. Our experience with enriching legal customers' data from independent business registers is that the reliability and freshness of the information provided are sometimes questionable. Moreover, the sources of data are non-standard and the level of detail is quite often very disparate. But let us wait and see.
DATA QUALITY CONSIDERATIONS

While AnaCredit reporting will be a great leap forward in understanding the exposures and the types of credits granted or defaulted at a European level, one has to take into consideration the following data quality dimensions:
• Standardization and interpretability: differences between member states in the interpretation of types of credits, and diverging legal frameworks.
• Completeness: credits reported today via the CCRs represent a fraction of the total credit facilities granted by banks within an economy. If statistics or risk indicators are produced at a European level for only a fraction of the total exposures, how reliable will those indicators be?
• Scope: credits granted to non-residents are today reported for enterprises, but not for private persons. This can distort the statistics, especially for small countries with a strong international presence.
• Timeliness: quarterly reported data limits how reactive the ECB can be. By the time an emerging systemic phenomenon shows up in the data, it will already be obvious, and the markets will normally have reacted already.
PROVISIONAL TIMELINE

Exhibit 4 details the expectations at the ECB:
• The framework is expected roughly at the end of May 2015.
• Implementation by the Central Credit Registers is expected by the end of 2017,7 initially foreseen for the end of 2016.
• Based on instructions from the NCB CCRs, the reporting credit institutions are expected to close the gaps well before that.
Exhibit 4 - AnaCredit implementation timeline (May 2015: ECB regulation, AnaCredit framework; Jun-Jul 2015: approval by the ECB's council; end 2017: start of implementation; further phases at mid 2018 and end 2019). The timeline phases in: who reports (credit institutions, then other credit institutions and foreign branches); borrower attributes (borrower identifier with country and institutional sector; type of counterparty; sector of activity and size; covering the public sector, non-finance companies, individual entrepreneurs and, ultimately, all legal entities); credit variables (loan identifiers, currency, maturity original and residual; credit drawn and undrawn, arrears; collateral type and value, non-performing status, interest rate; syndicated loans, subordinated debt); reporting thresholds (non-performing credits; performing credits > EUR 50,000, then performing credits > EUR 25,000); credit types (repos, other term loans, financial leasing, instalment loans, mortgages; overdrafts (cash facilities), credit cards; trade receivables (commercial bills); housing loans and other loans for households); and credit measures (RWA, credit adjustments (provisions), LGD, PD).
7 / Source: National Bank of Finland: Analytical Credit Database
CONCLUSIONS

A recent letter from the European Banking Federation to the European Central Bank8 highlights the fact that, "especially over the last two months, AnaCredit has become more and more complex... e.g. extended number of attributes", and that there are concerns about how the collected data supports the purposes of AnaCredit, which is "not merely intended to serve statistical, monetary and macro-prudential purposes but is also meant to support micro-prudential supervision". Although the ECB started to build on existing grounds with a view to minimizing costs and implementation time, various shortcomings have perhaps led the ECB to add extra layers. In consequence, the banking sector is starting to treat the topic seriously and believes that the implications of AnaCredit will be far-reaching. We have shown the obvious impact on the current reporting applications and the data collection process, but as the ECB refines its requirements, only time will tell the full extent of the changes introduced by AnaCredit and their cost for the credit institutions.
references 1. Banking HUB: Analytical credit dataset of the ECB 2. Bank of International Settlements, Irving Fisher Committee on Central Bank Statistics: Standardized granular credit and credit risk data, Violetta Damia and Jean-Marc Israël 3. Bruegel Contributions, Capital markets union: a vision for the long term 4. European Banking Federation, Letter to ECB Concerns in respect of AnaCredit 5. European Central Bank, Chapter 2, Further progress in the implementation of Banking Union 6. European Central Bank, Recommendation of the EUROPEAN CENTRAL BANK of 24 February 2014 on the organisation of preparatory measures for the collection of granular credit data by the European System of Central Banks (EUROPEAN CENTRAL BANK/2014/7), OJ C 103, 8.4.2014, p. 1 7. European Central Bank: Memorandum of Understanding on the exchange of information among National Central Credit Registers, published in April 2010 8. European Central Bank, Monetary and Financial Institutions on RIAD 9. Febelfin, Belgian Federation of the Financial Sector, LEI Practical Aspects 10. Jentzsch & Rientra: Information Sharing and Its Implications for Consumer Credit Markets: United States vs. Europe 2003 (Credit bureaus US & EU)
8 / European Banking Federation, Letter to ECB Concerns in respect of AnaCredit
11. Official Journal of the European Union (2014/192/EU): Decision of the European Central Bank of 24th of February 2014 on the preparatory measures for the collection of granular credit data by the European System of Central Banks (EUROPEAN CENTRAL BANK/2014/6) 12. National Bank of Belgium, CKO2 Presentation for participants, business aspects. Published 1st of August 2010 13. National Bank of Finland: Analytical Credit Database 14. Rothemund & Gerhardt: An analysis of a survey of credit bureaus in Europe 2011
authors

Bogdan Teleuca
Bogdan Teleuca is a Senior Risk Consultant at Business & Decision in Brussels, where he has focused for the past eight years on credit risk, regulatory reporting and risk information management. He is also a steering committee member of the PRMIA Brussels Chapter. He holds a Master in Financial Risk Management from FUSL Brussels and the FRM certification.
Armando Cortez
Armando Cortez is a Risk Consultant at Business & Decision in Brussels. He has acquired over four years' practical experience in the management of banking projects from regulatory, practitioner and consumer perspectives. The projects on which he has worked cover the fields of credit, operational, market and liquidity risk. Armando holds three master's degrees, in the fields of Financial Risk Management, Projects and Microfinance.
The continued rise of Biosimilars: Trouble ahead for Big Pharma?
by Will Stow

A biosimilar (or follow-on biologic) is a biologic medical product which is an exact (or as close as possible) copy of an original drug, manufactured and marketed by a company other than the "innovator". Biosimilars are officially approved versions of the original drug, and can only be manufactured when the original product's patent has expired. (1) Unlike the more common small-molecule drugs, biologics are generally much more complex, and therefore may be very sensitive to changes in manufacturing processes. Follow-on manufacturers only have access to the commercialized innovator product, not the processes, so proving their biosimilars are comparable can be a much more difficult and expensive process than the one traditional generics manufacturers face. Despite these challenges, the biologics market is growing rapidly, especially compared with the small-molecule chemical market, whose revenues actually decreased in 2012. The future for many pharmaceutical firms is in biologics. Several biologics have sales of more than $1 billion annually. For example, in 2014, global sales were $12.9 billion for Humira (adalimumab), $7.16 billion for Remicade (infliximab) and $7.04 billion for Avastin (bevacizumab). Many of these
very profitable drugs are scheduled to come off patent over the coming years, providing the opportunity and incentive for biosimilar entry. (2) Health policy makers tend to see the potential of biosimilars as a force for reducing prices and expenditures while creating broader access to treatments, because some drugs might be funded for indications which are presently not covered due to a high cost-effectiveness ratio. This view is mostly based on experience of the market dynamics of conventional small-molecule medicines, where the entry of generics (when exclusivity rights expire, and depending on the policies applied) can reduce a drug's price by up to 80% of the innovator's pre-expiry price. (3) However, biosimilars will encounter substantial barriers in their efforts to compete with branded biologics. These obstacles are more substantial than those encountered by small-molecule generics. Specifically, biosimilars have to overcome the particular barriers associated with manufacturing, marketing, storage and other distribution issues, delivery devices, immunogenicity, clinical trials and special requirements for pharmacovigilance (i.e. post-sale monitoring). (4)
Despite this, a recent report by Citi analyst Andrew Baum indicates that innovator pharma companies are likely to lose vast amounts of money in the coming years as more and more biosimilars come to market. Baum believed that "the market materially underestimates the magnitude and the timing of the impact on exposed innovator companies as well as the commercial opportunity for the biosimilar sponsors." The analyst estimated that "despite the multiple defensive tactics, innovators will likely lose an aggregate >$360 billion in revenues over the next ten years, with c.$110 billion captured by biosimilar sponsors and the residual helping to alleviate the high and growing cost pressures for both patients and payors associated with introduction of premium-priced specialty drugs, most notably immunotherapy." (5)

Some companies are already suffering from upcoming biosimilar launches. For example, Dermira Inc was recently downgraded from a "buy" rating to "neutral" as the company faces price erosion from future biosimilar launches. It was predicted that Dermira would face a 70 percent loss of market share in the first 7 years following biosimilar launches in 2018, and that branded drugs in the class would need to reduce prices by a third in order to compete. (5)

The biosimilars market is already large and growing fast. As of May 2015, AdisInsight listed 123 marketed biosimilars globally. With a further 43 in the registration and pre-registration stages and 69 in phase III, this will bring the number closer to 200 in the coming years. AdisInsight also describes over 60 programmes in early clinical testing and over 140 in the preclinical and research stages. (3) India is the largest player in the biosimilars space, with well over 30 marketed agents and over 120 biosimilars being actively developed in the region. The USA comes in second, with over 80 programmes having some
kind of development, but very few are marketed. Germany, China and South Korea make up the rest of the top 5 most active regions in terms of development. (2) In terms of indications, rheumatoid arthritis is the largest, with 81 biosimilars marketed or under development for the disease. This is followed closely by oncology, with neutropenia and anaemia coming in third. Other notable indications include hepatitis C, multiple sclerosis and female infertility. The market could become crowded quickly, however, with many companies vying to be first to market with their biosimilar. For example, AdisInsight has profiles for 28 biosimilars of etanercept, 22 of abatacept, 20 each of bevacizumab and epoetin alfa, and similarly large numbers for other blockbuster drugs such as filgrastim, infliximab, rituximab, trastuzumab and various interferons and insulins. (3) Interestingly, some of the traditional innovator pharma companies are also developing biosimilars, with Merck & Co and Novartis both featuring in the list of the top 20 companies by number of biosimilar development programs. But these seem to be smaller players compared with the likes of Nanogen, BioXpress and Zydus Cadila, which have the highest numbers of development programmes. (2) It is expected that the majority of biosimilar development and marketing will continue in emerging markets, where barriers to entry are lower. However, once these products have an established record of effectiveness, migration into the more established markets is inevitable. Although the biosimilars market has not grown at the expected rate, it is expected to grow substantially as the patent cliff wears on and mandates to decrease healthcare costs continue. Experience in the EU shows that biosimilars have proved to be safe. As the US market opens up, biosimilars are likely to become increasingly prominent, but this will take considerable time and effort.
references 1. FDA public hearing to obtain input on specific issues and challenges associated with the implementation of the Biologics Price Competition and Innovation Act of 2009 (BPCI Act). November 2-3, 2010. Link here. 2. AdisInsight. Springer Science+Business Media, n.d. Web. 10 May 2015. Link here. 3. The impact of biosimilars’ entry in the EU market. Rovira J et al. Andalusian School of Public Health. Jan 2011 4. Blackstone EA, Fuhr JP. The future of competition in the biologics market. Temple J Sci Technol Environ Law. 2012; 31: 1–30 5. Citi Research, The Atlantic 2015. Link here.
author Will Stow Custom BI Product Specialist for Adis Will has been working as the Custom BI Product Specialist at Adis (part of Springer Science+Business Media) for more than 7 years and is based in Auckland, New Zealand. Prior to his current role, Will worked as an Ongoing Trials Coordinator and Outsource QA Editor for the Adis Clinical Trials Insight database. He has also worked as a university research assistant at the University of Auckland School of Medicine, where he completed his Bachelor of Science degree majoring in Pharmacology and Physiology. Will is a member of the Society of Competitive Intelligence Professionals (SCIP) and holds a Lean Six Sigma Green Belt among other industry qualifications.
PRMIA’S
C-SUITE MEMBERSHIP Our new peer-to-peer program is designed to ensure that C-Suite members are connecting to others that are truly their counterparts, while also gaining access to valuable content, resources, and opportunities. Because the issues faced by our C-Suite members are at the highest level, our programs are matched to their needs and moderated by those of similar responsibility, skill and experience, within three levels of membership: C-Suite | C-Suite Select | C-Suite Pinnacle
For more information and to apply for C-Suite Membership, click here.
Backtesting Value-at-Risk: how good is the model?
by Philippe Cogneau

abstract

Basel regulation provides a framework that links regulatory capital to Value-at-Risk (VaR). Banks have thus been obliged to implement VaR models predicting future risks, and these models have to be regularly backtested to prove their accuracy. A large variety of testing methods has been proposed in the scientific literature. The first goal of this article is to provide a survey of them and discuss their strengths and weaknesses. It then offers some recommendations for an optimal testing methodology.
regulatory framework

Since 1998, Basel regulation has forced banks to reserve a certain amount of capital to cover potential future portfolio losses due to market risk. A measure of the portfolio risk is used to determine the level of the market risk capital requirement. The selected risk measure is the Value-at-Risk (VaR) at a confidence threshold α = 99% and a time horizon of 10 days (Basel Committee, 2006):1 this quantity represents the expected maximum loss of the portfolio with 99% certainty during the 10 following days. For instance, a reported value of 5,000,000 € means that the Bank expects to realize a loss higher than 5,000,000 € during the next 10 days only 1% of the time. If during the next 10 days the loss of the portfolio goes at least once above 5,000,000 €, we say that there is a violation (or failure) of the VaR.

Over the years, banks have been developing different models to assess the VaR as precisely as possible. Concretely, the required market risk capital is computed as a function of the VaR99% over the next 10 trading days and a multiplication factor St, which depends crucially on the backtesting results of the VaR. Indeed, St is determined by the number N of VaR99% violations during the previous 250 days, according to the following formula:

$$S_t = \begin{cases} 3 & \text{if } N \le 4 \\ 3.40,\ 3.50,\ 3.65,\ 3.75,\ 3.85 & \text{if } N = 5, 6, 7, 8, 9 \text{ respectively} \\ 4 & \text{if } N \ge 10 \end{cases}$$
This classification is often referred to as the “traffic light”, the first zone being “green”, the second zone “yellow” and the last “red”. The higher the number of violations of the VaR, the larger the required capital. Furthermore, if there are 10 or more failures, the VaR model is considered inaccurate, and immediate measures must be taken by the bank to review the VaR computation. In the current regulatory framework, this “traffic light” approach is the only prescribed assessment of VaR model accuracy. However, the methodology suffers from a number of shortcomings. First, it does not take into account dependence between the violations. If the model is correct, the violations should be evenly spread over time: for instance, if there are four violations on days 112, 113, 114 and 115, the model will be classified in the “green zone” even though there is clearly a problem with it. Second, it is lacking in precision. In reality, two kinds of errors are possible when we perform a test to accept or reject a model: • Type 1 error: reject a correct model • Type 2 error: accept an incorrect model A powerful test should minimize the probabilities of these two errors. If we put the cut-off level at 5 violations (the first level outside the “green zone”) in a sample of 250 values,2 the probability of a type 1 error is 10.8%, which is above conventional thresholds for statistical tests. On the other hand, if we face an inaccurate model which computes the VaR at a threshold of 98% instead of 99%, the probability of a type 2 error is 43.9%. This is also clearly too high.
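These two probabilities follow directly from the binomial distribution of the violation count. A minimal sketch, assuming independent daily violations and using scipy (the 250-day window, the 99% VaR level and the cut-off at 5 violations are the figures from the text), reproduces them:

```python
from scipy.stats import binom

n = 250       # backtesting window (trading days)
cutoff = 5    # first violation count outside the "green zone"

# Type 1 error: a correct 99% VaR model (violation probability 1%)
# is rejected because it produces `cutoff` or more violations by chance.
type1 = binom.sf(cutoff - 1, n, 0.01)   # P(N >= 5 | p = 0.01)

# Type 2 error: an incorrect model that really computes a 98% VaR
# (violation probability 2%) is accepted because it stays below the cut-off.
type2 = binom.cdf(cutoff - 1, n, 0.02)  # P(N <= 4 | p = 0.02)

print(f"Type 1 error: {type1:.1%}")     # ~10.8%
print(f"Type 2 error: {type2:.1%}")     # ~43.9%
```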
new backtesting methods
Different approaches have been proposed by researchers to circumvent these weaknesses. Historically, the seminal paper by Christoffersen (1998) is often considered the foundation. It points out that VaRα forecasts are valid if and only if the violation process satisfies the following two assumptions: 1. Unconditional coverage (UC) hypothesis: the unconditional probability of a violation must be equal to 1 − α, the nominal coverage rate. 2. Independence (IND) hypothesis: violations observed at two different dates must be independently distributed.
1 / It is worth noting that, at the time of writing, the Basel Committee intends to replace it with the Conditional Value-at-Risk (or Expected Shortfall) at a confidence level of 97.5%. We discuss this development in the recommendations. 2 / The probabilities reported here come from tables provided in a document published by the Basel Committee in 1996
A common way to express these assumptions in mathematical terms relies on the definition of a “hit” function, representative of the violations. If we compute VaRα(t), the value-at-risk of the portfolio at time t with threshold α, and compare it to xt+1, the profit or loss of the portfolio over the required time interval, we introduce the binary function:

It+1 = 1 if xt+1 < −VaRα(t), and It+1 = 0 otherwise.

The UC hypothesis can then be written:

Pr[ It+1 = 1 ] = E[ It+1 ] = 1 − α

The IND hypothesis is:

It and Is are independent for all t ≠ s; in particular, Pr[ It+1 = 1 | It, It−1, … ] = Pr[ It+1 = 1 ]
Different unconditional coverage tests have been proposed in the literature. As outlined by Jorion (2007), the basis is a simple Z statistic on the number of failures; Haas (2001) considers instead the proportion of violations, while Kupiec (1995) proposes a likelihood ratio test. The same author introduces a test on the time until the first failure occurs. While in line with the regulatory “traffic light”, these tests suffer from weak discriminatory power, especially if the number of observations is reduced to one year. More interesting are the tests that add a severity dimension. Colletaz, Hurlin and Pérignon (2013) propose applying a similar standard backtesting procedure to a second threshold α’, higher than α (for instance, α = 99% and α’ = 99.75%): the violations at the level α’ are called “super-exceptions”. If their frequency is too high, it means that the magnitude of the losses exceeding the α-level VaR is too large. This leads to a map like the one in Exhibit 1, which reports traffic lights as a function of the number of violations (y axis) and super-exceptions (x axis). Exhibit 1. Map of the rejections at the confidence levels of 95% (yellow) and 99% (red) for 250 observations, when the VaR thresholds are 99% (vertical axis) and 99.75% (horizontal axis).
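As an illustration of this unconditional coverage machinery, here is a minimal sketch of the hit function and Kupiec’s proportion-of-failures test. The function names are mine, and `pnl` / `var` are assumed daily series, with the VaR reported as a positive loss amount as in the text:

```python
import numpy as np
from scipy.stats import chi2

def hits(pnl, var):
    # Hit series: 1 on days where the loss is worse than the reported VaR
    # (the VaR is taken as a positive loss amount).
    return (np.asarray(pnl) < -np.asarray(var)).astype(int)

def kupiec_pof(hit, alpha=0.99):
    # Kupiec (1995) proportion-of-failures likelihood-ratio test of UC.
    n, x = len(hit), int(np.sum(hit))
    p = 1.0 - alpha               # expected violation rate (1% for 99% VaR)
    pi = x / n                    # observed violation rate
    if x in (0, n):               # degenerate cases: the MLE term vanishes
        lr = -2.0 * ((n - x) * np.log(1 - p) + x * np.log(p))
    else:
        lr = -2.0 * ((n - x) * np.log((1 - p) / (1 - pi))
                     + x * np.log(p / pi))
    return lr, chi2.sf(lr, df=1)  # reject UC when the p-value is small
```

Usage is simply `lr, pval = kupiec_pof(hits(pnl, var))`; under the null hypothesis the statistic is asymptotically chi-squared with one degree of freedom.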
Lopez (1998, 1999) proposes another way to control for the magnitude of the exceedances. He suggests considering loss functions for which the larger the violation, the higher the value. This procedure is highly flexible, but relies strongly on the choice of a benchmark: above which level should the value of the loss function be considered too high? The first independence test was proposed by Christoffersen (1998). It is a Markovian model based on the state (violation / no violation) at time t as a function of the state at time t−1. Unfortunately, this only tests for one-period dependence, not dependence in general. To tackle this issue, Haas (2001) suggests generalizing the Kupiec test of the time until the first violation (see above) and defines a similar statistic for the time νi between violations i−1 and i. As this test takes no account of the number of failures, a model that presents a large number of violations is not penalized, provided those violations are separated by a sufficient number of days. A few attempts have been made to combine unconditional coverage and independence. The basic test of Christoffersen (see above) can be extended in this way, but this does not solve the issue that only one-period dependencies are considered; furthermore, the gain in completeness of the test is paid for by a loss of power. Other avenues have been explored. Engle and Manganelli (2004) propose considering a linear regression model; Patton (2002), followed by Dumitrescu, Hurlin and Pham (2012), suggests instead a logistic model, while Berkowitz, Christoffersen and Pelletier (2011) prefer the use of autocorrelation. All these tests turn out to be very severe, and reject a lot of models when less than 2 or 3 years of data are available. More classically, Haas (2001) proposes combining two previously introduced tests into one statistic: Kupiec’s test of the proportion of violations with the generalized test of frequencies. Finally, researchers3 have introduced tests on the whole distribution of predicted losses. Each day, the model forecasts a probability density for the returns of the portfolio. On the next day, one records the percentile of this density in which the true return falls. If the model is correct, each percentile should occur with the same probability, and the recorded percentiles should be independent. Different tests can then be applied to check these conditions. Besides being harder to implement, as the whole forecast P&L distribution must be recorded, this test is also severe. Indeed, it is possible that a model that is adequate for the tails fails the test because it does not respect the two properties for central thresholds.
3 / Crnkovic and Drachman (1996), Berkowitz (2001)
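For concreteness, a minimal sketch of Christoffersen’s Markov independence test described above follows. The two-state transition counts and the chi-squared reference distribution are the standard formulation; the guard clauses for empty states are my own simplification:

```python
import numpy as np
from scipy.stats import chi2

def christoffersen_ind(hit):
    # Christoffersen (1998) Markov test: compares the violation probability
    # conditional on yesterday's state (violation / no violation).
    hit = np.asarray(hit)
    prev, curr = hit[:-1], hit[1:]
    n00 = int(np.sum((prev == 0) & (curr == 0)))
    n01 = int(np.sum((prev == 0) & (curr == 1)))
    n10 = int(np.sum((prev == 1) & (curr == 0)))
    n11 = int(np.sum((prev == 1) & (curr == 1)))
    pi01 = n01 / (n00 + n01) if (n00 + n01) > 0 else 0.0
    pi11 = n11 / (n10 + n11) if (n10 + n11) > 0 else 0.0
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)  # unconditional violation rate

    def ll(q, a, b):
        # binomial log-likelihood a*log(1-q) + b*log(q), with 0*log(0) = 0
        return (a * np.log(1 - q) if a and q < 1 else 0.0) + \
               (b * np.log(q) if b and q > 0 else 0.0)

    lr = -2.0 * (ll(pi, n00 + n10, n01 + n11)
                 - ll(pi01, n00, n01) - ll(pi11, n10, n11))
    return lr, chi2.sf(lr, df=1)  # small p-value: violations cluster in time
```

Applied to the four-consecutive-violations example given earlier, this test rejects a model whose green-zone violation count hides obvious clustering.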
practical checks, observations and recommendations
We built a sample of 16 banks presenting various profiles of VaR violations during one year, and we applied the backtesting methods listed above: Exhibit 2 reports these banks, their violation profiles at a 99% threshold and the conclusions of the different backtests. This table brings to light some very interesting observations.
Exhibit 2. Rejections for backtesting 16 banks according to the methods described in this paper. The background of each bank shows the colour of the traffic light from the Basel regulatory framework. First, we see how disparate the results are: every column of the table presents a different pattern of rejections. In particular, the classical test of frequency (or percentage of failures), widely used due to its connection with the regulation, presents only a partial view of the true picture. That is an important concern, as this is often the only test that is applied. Our main recommendation is therefore to multiply the tests performed. A good understanding of their individual specificities is needed to analyse their rejections. No test is exhaustive4; each of them can only capture one or two potential misspecifications. Conversely, we point out that a test which tries to capture two misspecifications loses discriminatory power.
4 / For instance, there is no test that covers both severity and independence
Second, we see that even within a category of tests that are supposed to be similar, the rejections are not the same. This is partially due to differences in the power of the tests. To tackle this issue, it is often recommended to maximize the sample size. Working with two years of data instead of one can already significantly boost the power of some of the tests, and of course the longer the better. It is also useful to consider different confidence thresholds for rejections. The traffic lights framework introduced by the regulatory authorities implicitly includes such a series of thresholds. We have observed that considering distinct thresholds often leads to extremely different results. The evolution of the regulation must also be taken into consideration. For the trading portfolio, the Basel Committee intends to replace the VaR at the 99% threshold by the Conditional VaR (or Expected Shortfall) at the 97.5% threshold. A new series of backtests will have to be developed, but it is likely that the current tests that consider the severity of the violations will be at the core of the process. Finally, we would emphasize the old principle “garbage in, garbage out”. Using poor data to compute the VaR will produce bad results, even if the model is correctly built. Similarly, backtesting the VaR with poor P&L data will always lead to backtesting errors. The quality of daily P&L data must be assessed: indeed, banks often publish a complete P&L only once a month, and the daily output is less precise.
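The power gain from a longer sample can be illustrated with the same binomial machinery used earlier. This sketch assumes a one-sided unconditional coverage test at the 5% level and the same 98%-instead-of-99% misspecification as in the earlier example:

```python
from scipy.stats import binom

p0, p1, level = 0.01, 0.02, 0.05   # null rate, misspecified rate, test size

for n in (250, 500, 750):          # one, two, three years of daily data
    # smallest violation count k with P(N >= k | null rate) <= 5%
    k = int(binom.ppf(1 - level, n, p0)) + 1
    power = binom.sf(k - 1, n, p1)  # P(reject | true rate is 2%)
    print(f"n={n}: reject if N >= {k}, power = {power:.1%}")
```

With these assumptions, the power against the 2% alternative rises from roughly 38% at one year to roughly 54% at two years, which is the boost referred to above.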
author Philippe Cogneau Senior consultant in finance, Business & Decision Benelux Philippe Cogneau has been working for eight years as a senior consultant in finance at Business & Decision Benelux. He specializes in financial risk management and in derivatives accounting. Previously, Philippe worked at BBL / ING Belgium: first in the IT Department, where he was responsible for accounting applications, then in the Credit Risk Department, modelling retail customers. Philippe is also an external professor at EDHEC (Lille), at the Francisco Ferrer High School (Brussels), at IESEG (Lille) and at the University of El Jadida (Morocco). Philippe holds a PhD in Economics and Finance (2013) from the University of Liège, where he is also a researcher. Before that, he graduated in Mathematics (magna cum laude, University of Liège, 1986), in Computer Science (magna cum laude, University of Liège, 1988), and in Management of Financial Risks (summa cum laude, F.U. Saint Louis – Brussels, 2006).
references 1. BASEL COMMITTEE ON BANKING SUPERVISION (1996), “Supervisory Framework for the Use of ‘Backtesting’ in Conjunction with the Internal Models Approach to Market Risk Capital Requirements”, available here. 2. BASEL COMMITTEE ON BANKING SUPERVISION (2006), “International Convergence of Capital Measurement and Capital Standards: A Revised Framework - Comprehensive Version”, available here. 3. BERKOWITZ, Jeremy (2001), “Testing Density Forecasts with Applications to Risk Management”, Journal of Business and Economic Statistics, vol. 19, n° 4, pp. 465-474. 4. BERKOWITZ, Jeremy, CHRISTOFFERSEN, Peter F., and PELLETIER, Denis (2011), “Evaluating Value-at-Risk Models with Desk-Level Data”, Management Science, vol. 57, n° 12, pp. 2213-2227. 5. CHRISTOFFERSEN, Peter F. (1998), “Evaluating Interval Forecasts”, International Economic Review, vol. 39, n° 4, pp. 841-862. 6. COLLETAZ, Gilbert, HURLIN, Christophe, and PERIGNON, Christophe (2013), “The Risk Map: a New Tool for Risk Management”, Journal of Banking and Finance, vol. 37, n° 10, pp. 3843-3854. 7. CRNKOVIC, C., and DRACHMAN, J. (1996), “Quality Control in VaR: Understanding and Applying Value-at-Risk”, Risk, vol. 9, pp. 139-143. 8. DUMITRESCU, Elena-Ivona, HURLIN, Christophe, and PHAM, Vinson (2012), “Backtesting Value-at-Risk: From Dynamic Quantile to Dynamic Binary Tests”, Finance, vol. 33, pp. 79-111. 9. ENGLE, Robert F., and MANGANELLI, Simone (2004), “CAViaR: Conditional Autoregressive Value at Risk by Regression Quantiles”, Journal of Business and Economic Statistics, vol. 22, pp. 367-381. 10. HAAS, Marcus (2001), “New Methods in Backtesting”, Working Paper, Financial Engineering Research Center Caesar, Bonn, DE. 11. JORION, Philippe (2007), “Value at Risk: The New Benchmark for Managing Financial Risk”, 3rd Edition, McGraw Hill. 12. KUPIEC, Paul H. (1995), “Techniques for Verifying the Accuracy of Risk Measurement Models”, Journal of Derivatives, vol. 3, n° 2, pp. 73-84. 13. LOPEZ, Jose A. (1998), “Methods for Evaluating Value-at-Risk Estimates”, Federal Reserve Bank of New York Economic Policy Review, October 1998, pp. 119-124. 14. LOPEZ, Jose A. (1999), “Regulatory Evaluation of Value-at-Risk Models”, Journal of Risk, vol. 1, pp. 37-64. 15. PATTON, A. J. (2002), “Application of Copula Theory in Financial Econometrics”, Ph.D. Dissertation, University of California, San Diego, US.
GAIN PERSPECTIVE
JOIN PRMIA TODAY
Events | Training | Certifications | Networking | Webinars | Discounts
PRMIA members include the best Risk Managers in the world. They are THE LEADERS because of the perspective they gain with their PRMIA colleagues. Become a member and experience the benefits for yourself.
PRMIA.org/membership
PRMIA C-Suite Discussion Series
June 11, 2015 – London, UK
June 23, 2015 – New York City, USA
PRMIA continues to lead in the C-Suite through a generous grant from Numerix. PRMIA hosted back-to-back C-Suite events in London and in New York City.
London: Model Risk Management and Black Swans David Rowe, Sr. Strategist for Risk & Regulation at Misys and a senior domain expert, volunteer and leader at PRMIA, moderated a spirited discussion at the historic Vintners’ Hall in central London. David centered the discussion on areas highlighted by the events of the 2008 financial crisis. Today, risk managers face ever-increasing demand for appropriate models to detect short-term volatility and potential structural shifts outside the range of typical market fluctuations. The recent economic and regulatory environment has heightened the need to manage model risk without falling into the trap of thinking that proper quantitative modeling offers a complete answer to all of the challenges that risk managers face.
David delved into more specific regulatory demands and whether or not they have improved confidence in models or just increased the regulatory and cost burdens. In addition, attendees considered the areas that pose the most catastrophic risk, in the form of “black / gray” swans, and whether today’s models are robust enough to be predictive and reliable. Finally, David ventured into some of the more topical geopolitical events of the period, including the future of the Eurozone and the potential consequences surrounding a Greek default and Grexit. The London event drew 20 senior professional risk managers from across the greater European financial services sector.
New York City: Best Practices in Model Risk Management Numerix also supported a spirited C-Suite event at the Yale Club in the heart of New York City, moderated by Cindy Williams, CRO Regulatory Coordinator for the Americas at Credit Suisse and Director of the PRMIA NY Chapter Steering Committee. A special guest presentation was made by Karen Schneck of the Federal Reserve Bank of New York. The New York event centered on best practices in model risk management and was very consistent in tone and tenor with the London event. Financial model complexity adds a level of uncertainty and a lack of predictability, and is met with higher levels of scrutiny. Time-consuming processes demand more manpower and expense and are highly prone to disputes among the various stakeholders. Cindy and Karen led a competent discussion and exchange of ideas regarding all-around best practices and the development of a universal industry standard. Nearly 30 senior professional risk managers engaged in the event. PRMIA is extremely appreciative of the continued support through program and educational grants made available through a strategic partnership with Numerix. Interested in learning more about becoming a PRMIA C-Suite member? Visit http://www.prmia.org/csuite-membership or contact membership@prmia.org.
Correlation Risk – Models and Management
by Gunter Meissner
1. Correlation and Finance
Correlations are ubiquitous in finance. Correlations are applied in risk management as an input for VaR (Value at Risk), ES (Expected Shortfall), ERM (Enterprise Risk Management) or EVT (Extreme Value Theory). The higher the correlation between the assets in the portfolio, the higher the risk measures, since high correlation implies a high probability of many assets declining jointly. Correlations also play a central role in investment analysis. In MPT (Modern Portfolio Theory) the correlation coefficient matrix serves as an input to derive the portfolio variance, which is interpreted as portfolio risk (see the sketch below). In the seminal CAPM (Capital Asset Pricing Model), the covariance is the numerator of the famous β, the measure of the sensitivity of an asset to non-diversifiable systematic risk. In addition, correlations are applied in numerous correlation trading strategies. Autocorrelations of indices or stocks can be calculated to find the degree of trending, which has decreased in recent years. Standard multi-asset options derive their value from the correlation coefficient between the underlying assets. In addition, dispersion trading applies the correlation parameter as an input to evaluate the relationship between index volatility derived from the index components and actual index volatility. For details on correlation trading strategies, see Meissner (2015a). Correlation risk was highlighted in the global financial crisis of 2007 – 2009. In a crisis, typically many types of correlations increase. Stock correlations increase, since many stocks decline jointly. The same logic applies to bonds and bond correlations, as well as to correlations between stocks and bonds. Hence VaR, ES, and ERM numbers of stock and bond portfolios typically increase in a recession. In 2008, CDO spreads increased dramatically, since default correlations and default probabilities increased sharply. In particular, correlations between the tranches of the CDOs increased during the crisis. This had a devastating effect on the super-senior tranches. In normal times, these tranches were considered extremely safe since a) they were AAA rated and b) they were protected by the lower tranches. But with the increased tranche correlation and the generally deteriorating credit market, these super-senior tranches were suddenly considered risky and lost up to 20% of their value.
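A minimal sketch of this portfolio-variance calculation follows; the weights, volatilities and correlations are illustrative assumptions, not values from the article:

```python
import numpy as np

# Illustrative inputs: portfolio weights, annualized volatilities,
# and a correlation matrix for three assets
w = np.array([0.5, 0.3, 0.2])
vol = np.array([0.20, 0.15, 0.10])
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])

# Covariance matrix: cov[i, j] = vol[i] * vol[j] * corr[i, j]
cov = np.outer(vol, vol) * corr

# Portfolio variance is the quadratic form w' * cov * w;
# raising any off-diagonal correlation raises this risk measure.
var_p = w @ cov @ w
print(f"portfolio volatility: {np.sqrt(var_p):.2%}")
```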
To make things worse, many investors had leveraged the super-senior tranches, termed LSS (leveraged super-senior) tranches, to receive a higher spread. This leverage was typically 10 or 20 times, meaning an investor paid $10,000,000 but had a risk exposure of $100,000,000 or $200,000,000. What made things technically even worse was that these LSSs came with an option for the investors to unwind the super-senior tranche if the spread had widened (increased). So many investors exercised this option at very high spread levels, realizing a loss and increasing the LSS tranche spread even further. Correlation is naturally a critical part of regulation, especially credit risk regulation. Basel II (and most likely Basel III) applies a slightly modified version of the simplistic OFGC (one-factor Gaussian copula) model to derive the default correlation risk of a portfolio. The default correlation between the debtors, ρ, is a decreasing function of the probability of default, PD. The logic is that highly rated companies with a low default probability have a higher default correlation, since they are mostly exposed to systematic factors such as a recession, in which they default together. Companies with a high default probability, however, are more affected by their own idiosyncratic factors and less by systematic risk, and hence are assumed to be less correlated. The Basel II Accord (updated in Basel III) addresses correlation risk such as specific and general wrong-way risk: general wrong-way risk exists when the probability of default of a counterparty is positively correlated with general market risk factors.2 An example would simply be a long bond position: if interest rates decrease in a recession, bond prices will increase, leading to higher credit exposure. At the same time, a recession typically leads to higher default probabilities, hence higher credit risk with respect to the bond issuer. So we have credit exposure and credit risk increasing together, constituting wrong-way risk. A bank is exposed to specific wrong-way risk if future exposure to a specific counterparty is positively correlated with the counterparty’s probability of default.3 An example would be a bond hedged with a CDS, where the default correlation between the bond issuer and the CDS seller is positive: if the credit quality of the bond issuer deteriorates, constituting higher credit risk, so does the credit exposure, since the present value of the CDS increases with decreasing credit quality. Hence, again, we have higher credit risk together with higher credit exposure. If specific or general wrong-way risk is present, the Basel Accord applies a multiplier of 20% for the IRB approach and 40% for the standardized approach to the credit risk capital charge, to account for the higher credit risk exposure. More sophisticated approaches to model wrong-way risk are currently being developed; see for example Hull and White 2006. The Basel Accords also recognize the credit risk reduction when a CDS is used as a hedge, applying a ‘substitution approach’ or a ‘double default approach’. Critical in reducing hedged credit risk exposure is the correlation between the obligor (the debtor) and the guarantor (the CDS seller).
2 / BCBS, “Annex (to Basel II)”, http://www.bis.org/bcbs/cp3annex.pdf, p. 211; BCBS, “Basel III: A global regulatory framework for more resilient banks and banking systems”, http://www.bis.org/publ/bcbs189.pdf, p. 38. 3 / See BCBS, “Basel III: A global regulatory framework for more resilient banks and banking systems”, http://www.bis.org/publ/bcbs189.pdf, p. 45.
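For illustration, the unmodified Basel II IRB asset correlation formula for corporate exposures exhibits exactly this decreasing relationship between ρ and PD; the sample PD values below are my own:

```python
import math

def basel_corporate_corr(pd_):
    # Basel II IRB asset correlation for corporate exposures:
    # an exponentially weighted interpolation between 0.24 (low PD)
    # and 0.12 (high PD), so correlation falls as PD rises.
    k = (1 - math.exp(-50 * pd_)) / (1 - math.exp(-50))
    return 0.12 * k + 0.24 * (1 - k)

for pd_ in (0.0003, 0.01, 0.05, 0.20):
    print(f"PD = {pd_:.2%}  ->  correlation = {basel_corporate_corr(pd_):.3f}")
```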
The higher the default correlation, the lower the regulatory capital reduction, since there is a high probability of the obligor and the guarantor defaulting together; see Brigo et al 2008 and Meissner et al 2013. The Fed and the IMF also require stress testing of correlations, recognizing that “historical statistical relationships, such as correlations, proved to be unreliable once actual events [such as a severe shock] started to unfold”.4
2. How to measure correlations?
There is no shortage of correlation measures in statistics and finance. Figure 1 gives an overview:
Figure 1: Statistical and financial correlation models

Statistical correlation models:
• Pearson’s ρ (1900)
• Spearman’s rank correlation (1904)
• Kendall’s τ (1938)
• Cointegration (Engle, Granger 1987)

Deterministic financial correlation models, bottom-up:
• Correlating Brownian motions (Heston 1993)
• Binomial correlations (Lucas 1995)
• Copulas: one-factor Gaussian copula (Vasicek 1987), applied in Basel II; multivariate copula (Li 2000)
• Contagion models: Davis and Lo (2001), Jarrow and Yu (2001)

Deterministic financial correlation models, top-down:
• Modeling transition rates (Schönbucher 2006)
• Modeling stochastic time change (Hurd and Kuznetsov 2006)
• Contagion default modeling (Giesecke et al 2009)

Stochastic financial correlation models:
• Dynamic conditional correlations (Engle 2002)
• Heston model with stochastic correlation (Buraschi et al 2010)
• Correlating stochastic volatility and stochastic correlation (Lu and Meissner 2013)

4 / BCBS, “Principles of sound stress testing practices and supervision”, May 2009
Which correlation model to apply depends on the complexity of the problem. By far the most popular correlation model in finance is the simple Pearson correlation measure. However, the Pearson model should only be applied if the data set is linear, sufficient data with few outliers are present, different correlation regimes have been scrutinized, and the causality has been exogenously analysed and accepted; see Meissner 2015 for details. For simple homogeneous portfolios, the OFGC (one-factor Gaussian copula) model applied in Basel II may serve as a first approximation for portfolio credit risk (see the simulation sketch at the end of this section). However, richer copulas (Li 2000) with correlation matrices and time-dependent default probability functions seem to be the better choice. In addition, recent developments such as dynamic copulas (Albanese et al 2007, 2011) and optimized copulas (Hull 2006) are better suited to capture complex credit risk. A new, promising and rigorous approach is stochastic correlation models, which allow a high degree of realism and versatility; see, for example, Da Fonseca et al 2007, Buraschi et al 2010, and Lu et al 2013. In conclusion, correlations are critical in many areas of finance, such as risk management, investments, and trading. Unfortunately, correlations typically increase in a crisis, which can have devastating effects, as seen in the global financial crisis of 2007 – 2009. Numerous models exist, and advanced correlation models are being developed to address correlation risk. Regulators have incorporated basic correlation measures to reduce correlation risk. In addition, regulators require the stress testing of correlations applying Pearson correlations. In the future, more advanced and rigorous correlation concepts may find their way into correlation regulation.
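As referenced above, a minimal Monte Carlo sketch of the OFGC default mechanics closes this section; the portfolio size, PD and ρ below are illustrative assumptions, not values from the article:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n_names, n_sims = 100, 50_000   # illustrative portfolio and sample sizes
pd_, rho = 0.02, 0.15           # assumed default probability / asset correlation

# One-factor Gaussian copula: asset value = sqrt(rho)*M + sqrt(1-rho)*Z_i,
# with one common systematic factor M and idiosyncratic shocks Z_i;
# name i defaults when its asset value falls below the PD quantile.
m = rng.standard_normal((n_sims, 1))          # systematic factor
z = rng.standard_normal((n_sims, n_names))    # idiosyncratic factors
assets = np.sqrt(rho) * m + np.sqrt(1 - rho) * z
defaults = (assets < norm.ppf(pd_)).sum(axis=1)

# Correlation shows up in the tail: the mean stays near PD * n_names, but
# scenarios with many joint defaults are far likelier than under independence.
print(f"mean defaults: {defaults.mean():.2f}")
print(f"99.9% quantile of defaults: {np.quantile(defaults, 0.999):.0f}")
```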
author Gunter Meissner After a lectureship in mathematics and statistics at the Economic Academy Kiel, Gunter Meissner PhD joined Deutsche Bank in 1990, trading interest rate futures, swaps, and options in Frankfurt and New York. He became Head of Product Development in 1994, responsible for originating algorithms for new derivatives products, which at the time were Index Amortizing Swaps, Lookback Options, Quanto Options, and Bermuda Swaptions. In 1995/1996 Gunter was Head of Options at Deutsche Bank Tokyo. From 1997 to 2007 he was Professor of Finance at Hawaii Pacific University and from 2008 to 2013 Director of the financial engineering program at the University of Hawaii. Currently, Gunter is President of Derivatives Software (www.dersoft.com), Founder and CEO of Cassandra Capital Management (www.cassandracm.com), and Adjunct Professor of Mathematical Finance at NYU-Courant. Gunter Meissner has published numerous papers on derivatives and is a frequent speaker at conferences and seminars. He is the author of 5 books, including his 2014 book “Correlation Risk Modeling and Management – An Applied Guide including the Basel III Correlation Framework” (John Wiley). He can be reached at gunter@dersoft.com.
references 1. Albanese, C., and A. Vidler (2007), “Dynamic Conditioning and Credit Correlation Baskets”, working paper 2. Albanese, C., T. Bellaj, G. Gimonet and G. Pietronero (2010), “Coherent Global Market Simulations and Securitization Measures for Counterparty Credit Risk”, Quantitative Finance, January 2011 3. Basel Committee on Banking Supervision (Basel II), “International Convergence of Capital Measurement and Capital Standards, A Revised Framework”, November 2005. 4. Basel Committee on Banking Supervision, “Principles of sound stress testing practices and supervision”, May 2009, p. 10; document here. 5. Basel Committee on Banking Supervision, “Basel III: A global regulatory framework for more resilient banks and banking systems”, June 2011; link here. 6. Brigo, D., and K. Chourdakis (2008), “Counterparty risk for credit default swaps”, International Journal of Theoretical and Applied Finance, Vol. 12, No. 7, November 2009, pp. 1007-1026 7. Buraschi, A., P. Porchia, and F. Trojani (2010), “Correlation Risk and Optimal Portfolio Choice”, Journal of Finance, Vol. 65, pp. 393-420 8. Da Fonseca, J., Grasselli, M., and Ielpo, F. (2007), “Estimating the Wishart Affine Stochastic Correlation Model Using the Empirical Characteristic Function”, SSRN-1054721 9. Hull, J., and A. White (2006), “Valuing Credit Derivatives Using an Implied Copula Approach”, Journal of Derivatives, Winter 2006, Vol. 14, No. 2, pp. 8-28 10. Li, D. (2000), “On default correlation: a copula approach”, Journal of Fixed Income, 9, 119-149 11. Lu, X., and G. Meissner (2013), “Asset modeling, stochastic volatility and stochastic correlation”, University of Hawaii working paper 12. Meissner, G., S. Rooder, and K. Fan (2013), “The Impact of different Correlation Approaches on Valuing CDSs with Counterparty Risk”, Quantitative Finance, March 2013 13. Meissner, G. (2014), “Correlation Risk Modeling and Management – An Applied Guide including the Basel III Correlation Framework”, John Wiley 14. Meissner, G. (2015a), “Correlation Trading Strategies – Opportunities and Limitations”, working paper 15. Meissner, G. (2015), “The Pearson Correlation Model – Work of the Devil?”, working paper 16. Pearson, K. (1900), “On the Criterion that a given System of Deviations from the Probable in the Case of a Correlated System of Variables is such that it can be reasonably supposed to have arisen from Random Sampling”, Philosophical Magazine Series 5, 50 (302): 157-175 17. Vasicek, O. (1987), “Probability of Loss on a Loan Portfolio”, KMV working paper. Results published in RISK magazine under the title “Loan Portfolio Value”, December 2002
university event ICBS & HKU On June 2, 2015, a mini-conference co-sponsored by PRMIA and Imperial College Business School (ICBS), London, was organized for graduate students from ICBS and the visiting BSc Quantitative Finance students from the Faculty of Business and Economics, the University of Hong Kong. Three prominent experts from the market were invited to speak on timely topics related to the practice of risk management. Programme: Seminar 1: “The Role of Treasury and the Asset and Liability Committee in Banking: Risk Management and Strategy”, by Professor Moorad Choudhry, FCSI FIFS FIoD, Founder, Certificate of Bank Treasury Risk Management Seminar 2: “Securitization: the Process of Regulatory Policy Development”, by Mr. Ramnik Ahuja, Senior Manager, Structural Funding and Securitization, Prudential Policy Authority, Bank of England Seminar 3: “Lessons from the Credit Crisis” and “Getting a Job in Risk”, by Mr. Markus Krebs, Chief Risk Officer and seasoned risk specialist In his speech, Professor Choudhry presented his models of asset and liability management in commercial banks within the context of the four principal responsibilities of the treasury, i.e. liquidity, capital, funding, and risk management. He also compared treasury strategy in the pre- and post-financial crisis eras and offered some guidelines on the best practices of the asset and liability management committee. Ramnik’s talk focused on the structure and process of securitization. He provided an overview of the regulatory framework, at the international and EU levels, within which securitization laws are formulated, and compared the differences between securitization in the EU and US markets. His account of the behind-the-scenes negotiations and the compromises among various stakeholders in the securitization process enriched students’ understanding of this very important intermediary function of the financial market. What should one make of the ratings issued by credit rating agencies? Mr. Krebs addressed the interpretation, the uses, and the misuses of credit ratings by practitioners in the financial market. His talk exemplified the value of bridging the gap between theory and practice. While finance textbooks usually allocate no more than a couple of pages to credit ratings, and limit the discussion to the meaning of the various rating designators, Markus’ presentation provided a detailed account of how ratings are determined and how they “should” be used by market practitioners. In the second part of his presentation, Markus offered his advice, in a very light-hearted and lively way (with a lot of amusing photos), on how to pursue and enjoy a job in the financial market. Overall, the mini-conference was well received by participants. PRMIA will continue to organize similar cross-cultural activities in the future, in accordance with its commitment to promoting risk management education in the tertiary sector.
learning opportunities In the current environment, risk education is not just a choice, it is a necessity. PRMIA provides open enrollment and customized classroom training, using leading academic instructors and practitioners who are experts in their fields. Please contact us if you are interested in learning more about our customized classroom training programs.
UPCOMING CLASSROOM COURSES ARE LISTED FOR YOU BELOW. MANAGING ENTERPRISE RISK IN THE NEW ENVIRONMENT Led by Russell Walker Chicago / Oct 1-2, 2015, 8:30 A.M. - 4:30 P.M. More details here. OPERATIONAL RISK MASTER CLASS: MEASUREMENT, MANAGEMENT, AND LEADERSHIP A two-day course led by Russell Walker New York / December 7-8, 2015, 9:00 A.M. - 5:00 P.M. More details here.
submission guidelines CALL FOR ARTICLES Article submissions for future issues of Intelligent Risk are actively invited. Articles should be approximately 1,000–1,500 words, single spaced, and cover a topic of interest to PRMIA members. Please consult the submission guidelines located at the end of the publication prior to submitting your article. Please send all article submissions that you wish to be considered for publication to iRisk@prmia.org. Chosen pieces will be featured in future issues of iRisk, archived on PRMIA.org, and promoted throughout the PRMIA community.
I-RISK SUBMISSION GUIDELINES Follow these instructions regarding the format of your articles and references. Article Submission - Please send all article submissions that you wish to be considered for publication to iRisk@prmia.org File Format - Please prepare your work using Microsoft Word, with any images inserted as objects into the document prior to submission. Abstract - Please present a brief summary or abstract of the paper on the page following the title page. Author Biography - Please include a biography, not exceeding 150 words, for each of the contributing authors listed. All biographies must be included at the end of the article. Author Photo - Please provide a professional photograph to be included with your article. The photo must be submitted as a separate file in jpeg or tiff format. Exhibits - Remember to attach all elements relevant to the paper (tables, graphs, charts and photos) on separate and individual pages at the end of the article. Please denote all tabular and graphical materials as Exhibits, and designate them using Arabic numerals, successively in order of appearance in the text. Exhibit Presentation - Please ensure that tables and other supplementary materials are organized and presented consistently throughout the paper, because they will be published as is. You may submit exhibits produced either in color or black and white. Use the exact same language in consecutive appearances; indicate all bold-faced or italicized entries in exhibits; arrange numbers consistently by decimal points; use the same number of decimal points for the same
types of numbers; center headings, columns, and numbers correctly; and incorporate any source notes when required. Consistency of fonts, capitalization, and abbreviations in graphs throughout the paper is required, and all axes and lines in graphs must be labeled in a consistent and coherent manner. Paste all graphs into Word documents as objects, and not as images, allowing access to the original graph. Please supply source materials for graphs, such as Excel files. Equations - Please present equations on separate lines. All equations must be aligned with the paragraph indents, but not followed by any punctuation. Use Arabic numerals at the right-hand margin to number equations consecutively throughout the article. Use brackets to indicate all operation signs, Greek letters, or other such notations that may be ambiguous. Reference Citations - In-text citations of authors and works must be represented as: Smith (2000). Use parentheses for the year, not brackets. Similarly, references within parentheses must be represented as: “(see also Smith, 2000).” References List - A reference is a source that is actually cited in the text. Please formally list only articles previously cited, using a separate alphabetical references list at the end of the article. Author Guidelines - PRMIA categorically values literary excellence in selecting articles for publication. To enhance clarity and coherence, we urge the use of simple sentences composed of words with a minimal number of syllables.
Please follow these recommendations in the interests of meeting PRMIA’s publication standards, and to accelerate both the evaluation and editorial process. The review process will take up to 4-8 weeks. Articles requiring revision will be returned to the author, as will articles which, while accepted, depart in large part from these guidelines. Finally, PRMIA reserves the right to return to an author, for reformatting purposes, any article accepted for publication that deviates from the aforementioned standards. The editors always reserve the right to make further changes to your work for consistency and coherence.
INTELLIGENT RISK knowledge for the PRMIA community ©2015 - All Rights Reserved Professional Risk Managers’ International Association