PRMIA Intelligent Risk - February, 2023

INTELLIGENT RISK
knowledge for the PRMIA community

©2023 - All Rights Reserved
Professional Risk Managers’ International Association
February 2023

FIND US ON prmia.org/irisk @prmia

Thanks to our sponsors, the exclusive content of Intelligent Risk is freely distributed worldwide. If you would like more information about sponsorship opportunities, contact sponsorship@prmia.org.

CONTENT EDITORS

Steve Lindo, Principal, SRL Advisory Services and Lecturer at Columbia University
Carl Densem, Risk Manager, Community Savings Credit Union

INSIDE THIS ISSUE

03 Editor introduction
05 A physicist reflects on market risk measurement - by Paul Nigel Shotton
10 The FTX implosion - by Dan diBartolomeo
13 Does your risk management pass a Turing test? - by Rick Nason
16 Systemic risk mitigation should be front of mind in technological innovation - by Michael Leibrock
20 Shepherd or the sheep: introducing climate risk frameworks - by John Thackeray
24 Peer-to-peer lending: financial inclusion or debt trap? - by Barbara Dömötör
28 Machine learning and credit risk: challenges and insights - by Dr. Nawaf Almaskati
31 Digging into project risks aimed at climate change - by Ganesh Melatur
35 Mitigating cybersecurity risk through effective management of application programming interfaces (APIs) - by Preet Makkar
39 Fraud risk management: an assurance approach - by Dami Osunro
43 Knowing the unknown – the stain greenwashing leaves on financial institutions - by Ina Dimitrieva
48 Today’s inflation risk: new approaches - by Maya Katenova

editor introduction

When Warren Buffett said, “Only when the tide goes out do you discover who’s been swimming naked,” he could have been talking about 2022, when that tide – of low interest rates and timid inflation – went out faster than anyone thought possible. With higher interest rates still bringing inflation under control in much of the world, 2023 may be recessionary for many countries and turn more attention to risk management’s ‘bread and butter’: measuring risk, distilling cause and effect, and structuring responses to uncertainty.

Complex Adaptive System is a term which describes many social, environmental and biological phenomena, such as climate, cities, governments, industries, power grids, traffic flows, animal swarms, insect colonies, our brain and immune systems. It also describes financial markets where, as our capstone article by Paul Shotton demonstrates, established risk measurement practices have yet to fully incorporate their adaptivity and complexity.

Also in this edition of Intelligent Risk, Professor Rick Nason asks whether your risk management could pass the famous Turing Test: handling the particularly human domain of complexity beyond that which algorithms can solve. Closely related, Dr. Nawaf Almaskati looks at the rise of machine learning in credit risk and what the latest research can teach about how best to implement these models. Not to miss, Dan DiBartolomeo analyzes the implosion of the crypto-darling FTX.

Climate risk remains a top concern for risk managers and, in this edition, John Thackeray threads together the forces at play into a more coherent approach, while Ina Dimitrieva looks at the unseen risks of greenwashing and puts the responsibility back on corporations to go above regulatory requirements.

Other February articles tackle a variety of perennial thorns in the side: cybersecurity, peer-to-peer lending, fraud and inflation.

The articles selected for this issue represent another step towards our goal of fostering a dynamic exchange of views on high-priority and fast-moving risk issues among PRMIA’s community. To that end, we welcome all and any feedback, either by email to iriskeditors@prmia.org or by posting a blog or discussion contribution on PRMIA’s Intelligent Risk Community webpage.


our sponsor

With over 45 years of experience, DTCC is the premier post-trade market infrastructure for the global financial services industry. From 21 locations around the world, DTCC, through its subsidiaries, automates, centralizes, and standardizes the processing of financial transactions, mitigating risk, increasing transparency and driving efficiency for thousands of broker/dealers, custodian banks and asset managers. Industry owned and governed, the firm simplifies the complexities of clearing, settlement, asset servicing, data management, data reporting and information services across asset classes, bringing increased security and soundness to financial markets. In 2021, DTCC’s subsidiaries processed securities transactions valued at nearly U.S. $2.4 quadrillion. Its depository provides custody and asset servicing for securities issues from 177 countries and territories valued at U.S. $87.1 trillion. DTCC’s Global Trade Repository (GTR) service, through locally registered, licensed, or approved trade repositories, processes 16 billion messages annually.


Synopsis

Market risk measurement is founded on the precepts of classical physics, yet financial market behavior is much more akin to quantum physics. In this article, the author explains how established practices of market risk measurement ignore this inconvenient truth.

a physicist reflects on market risk measurement

When I left academia as a CERN physicist more than thirty years ago to join Wall Street as a trader, my physics knowledge was a nice but superfluous skill. Back then, risk management as a distinct and quantitative discipline did not exist. Of course, bankers were held accountable by their employers for the quality of the deals they brought into the portfolio, but risk management as practiced by banks and investment banks such as Goldman Sachs, where I worked (still then a private partnership), was done wholly by the front office, or what is now referred to as the “first line of defense”, and the bank’s managing partners.

In the intervening years, the quantification of risk has developed into a pillar of the financial industry, based primarily on the precepts of classical physics. Yet the degree of success of this practice can best be described as partial because financial market behavior is much more akin to the world of quantum mechanics than it is to the world of classical physics.

the evolution of financial market risk measurement

Although the Basel Committee on Banking Supervision (BCBS) was set up in 1974, it wasn’t until 1988 that the Committee, under the chairmanship of Peter Cooke of the Bank of England, first prescribed the minimum amount of capital which international banks should hold on the basis of their risk-weighted assets. In what became known as the Basel Accord, assets were sorted into five categories of riskiness, with a 100% risk weight yielding a requirement to hold a minimum of 8% of the asset value as capital: the so-called Cooke ratio.
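The arithmetic of the Cooke ratio can be sketched in a few lines; the portfolio figures below and the subset of risk-weight categories shown are hypothetical, for illustration only:

```python
# Illustrative sketch of the original Basel Accord's 8% minimum: each
# exposure is scaled by a risk weight, and the bank must hold at least
# 8% of the risk-weighted total as capital. Weights shown are a sample
# of the Accord's broad categories, not a complete list.

RISK_WEIGHTS = {
    "sovereign": 0.0,
    "bank": 0.20,
    "residential_mortgage": 0.50,
    "corporate": 1.00,
}

def minimum_capital(exposures, ratio=0.08):
    """exposures: list of (category, amount) pairs."""
    rwa = sum(RISK_WEIGHTS[cat] * amt for cat, amt in exposures)
    return ratio * rwa

book = [("sovereign", 500.0), ("residential_mortgage", 200.0), ("corporate", 300.0)]
print(minimum_capital(book))  # 8% of (0 + 100 + 300) = 32.0
```

A 1,000-unit book thus needs only 32 units of capital here, because risk weighting shrinks the base before the 8% ratio is applied.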

Elsewhere, Dennis Weatherstone, Chairman of JP Morgan, was frustrated by the plethora of different metrics used to report the disparate risk types (FX, rate futures, government bonds, corporate bonds, equities, etc.) held by the JP Morgan trading desks to the daily 4:15pm Treasury Committee meeting which he chaired, and asked that a means be devised to aggregate these various risk types into a single number.


So was born Value-at-Risk, the classic statistical method to aggregate market risk, later adopted by the Basel Committee via the 1996 Amendment to the original Basel Accord, as a means by which sophisticated banks (those allowed to use their internal models) could calculate the amount of capital required to underpin market risk in their trading book, which generally produced a smaller capital requirement than did the standardized approach.
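The core of historical-simulation VaR, the simplest of the internal-model approaches, can be sketched briefly. The P&L series below is hypothetical, and production implementations add far longer histories, weighting and scaling:

```python
# A minimal sketch of one-day historical-simulation Value-at-Risk:
# the loss threshold exceeded on only (1 - confidence) of past days.
def historical_var(pnl_history, confidence=0.99):
    """pnl_history: daily P&L figures (negative = loss)."""
    losses = sorted(-p for p in pnl_history)       # losses as positive numbers
    index = int(confidence * len(losses))          # position of the quantile
    return losses[min(index, len(losses) - 1)]

pnl = [-5.0, 2.0, 1.5, -1.0, 3.0, -2.5, 0.5, -0.5, 4.0, -3.0]
print(historical_var(pnl, confidence=0.9))  # 5.0: the worst loss in this tiny sample
```

Note that the entire calculation rests on the historical window being representative of the future, the assumption examined later in this article.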

Since its first formulation, the Basel Accord has evolved considerably, as supervisors sought to refine the approach in order to address criticism on both sides: on the one hand that the Basel capital prescription was too onerous and risk-insensitive, resulting in reduced lending to support economic growth; and on the other that it was too lenient and allowed the buildup of dangerous levels of risk which could threaten financial markets and the economy.

In response to these criticisms, and to various market crises, in particular the Global Financial Crisis (GFC) of 2008, quantitative approaches to risk measurement gradually became more codified and more prescriptive, and risk management became established as a distinct discipline. After a decade as a trader, I was at the forefront of what became a trend of having former traders move to the independent risk oversight function, now known as the second line of defense, and contributed to the development of many of what are now standard techniques in the risk manager’s toolbox, such as VaR, Expected Shortfall (ES), Stress Testing and Scenario Analysis.

complex adaptive systems

However, like many frameworks which mediate interactions between people, financial markets are examples of Complex Adaptive Systems (CAS). CAS are characterized by highly non-linear behavior, following power laws, which means that small changes in input parameters can have highly magnified impacts on outcomes. They have deep interconnectivity between different parts of the system, which means that the classic techniques of Stoic philosophers - of taking a complicated problem and breaking it down into its component parts; having experts solve each component according to their specialization; and then reconstitute the whole - do not work.

As I mentioned above, financial market behavior is much more akin to the world of quantum mechanics than it is to the world of classical physics. In classical physics there is complete independence between the observer and the system under observation. Betting on a horse race, for example, is akin to classical physics, because of this independence. Whilst actions in the betting market change the odds for which horse is favored to win, they don’t impact the outcome of the event, which is rather determined by the best horse on the day.

In the realm of quantum mechanics, however, the systems under observation are so small that the act of observation disturbs the system itself, described by Heisenberg’s Uncertainty Principle. Betting in financial markets is like this world of quantum mechanics, because in financial markets the actions of market players are not separate from market outcomes; rather it’s the actions of the market players which produce the market outcomes, a process which George Soros refers to as “reflexivity”.


It’s this interaction between financial markets and participants which results in markets being adaptive. Markets continually evolve over time and adapt to the actions of market players, including changes in market conditions imposed by supervisory bodies like the Federal Reserve and the BCBS.

invalid assumptions

Because of this adaptivity, using historical timeseries data to estimate market risk (through Value-at-Risk and Expected Shortfall modeling) and credit risk (through Default Probability and Loss Given Default modeling) is problematic. Implicitly, when using these models, we make the assumption that historical data are a good representation of the distribution from which future events will be drawn, but this assumption is justified only if the timeseries exhibit stationarity, or, in other words, if the statistical properties of the underlying processes generating the timeseries do not change. Since markets are adaptive, this assumption is not valid.
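The stationarity assumption can be probed informally: if the generating process were stationary, volatility estimated in different sub-windows of the history would agree up to sampling noise. A small synthetic illustration, using made-up data with a calm regime followed by a stressed one:

```python
# Informal stationarity check: compare sample volatility across
# non-overlapping windows. The series is synthetic (hypothetical),
# deliberately built from two regimes.
import random, statistics

random.seed(42)
calm     = [random.gauss(0, 0.5) for _ in range(250)]   # low-volatility regime
stressed = [random.gauss(0, 2.0) for _ in range(250)]   # high-volatility regime
returns  = calm + stressed

def window_vols(series, window=125):
    """Sample volatility in consecutive non-overlapping windows."""
    return [statistics.stdev(series[i:i + window])
            for i in range(0, len(series), window)]

print([round(v, 2) for v in window_vols(returns)])
# The first two windows cluster near 0.5 and the last two near 2.0: a model
# calibrated on the calm history badly understates risk in the stressed regime.
```

Real markets shift regimes without announcing it, which is exactly why a fixed historical window can quietly stop representing the distribution it is supposed to proxy.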

Given these concerns, consider the framework the BCBS uses to determine the capital requirements for sophisticated banks under the Pillar I Risk Weighted Asset formulations. The Committee’s approach is to perform separate calculations of capital required to underpin respectively credit-, market- and operational risks, and the results are then summed to produce the total capital requirement. But as we have discussed, the interconnections between risk types mean that the separation is invalid, and the linear sum of the three components cannot be assumed to produce a prudent level of capital.
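Even a stylized two-risk example shows why the linear sum is not automatically prudent: Value-at-Risk is not subadditive, so two risks that each look harmless at the 99% level can jointly breach it. The figures below are illustrative, and the two risks are even independent; the interconnections discussed above only worsen the effect.

```python
# Stylized illustration that summing per-risk VaR-based capital can
# understate joint risk (VaR is not subadditive). Numbers are invented.
def var(loss_dist, confidence=0.99):
    """Exact VaR of a discrete loss distribution given as (loss, prob) pairs."""
    cum = 0.0
    for loss, prob in sorted(loss_dist):
        cum += prob
        if cum >= confidence:
            return loss
    return max(loss for loss, _ in loss_dist)

# Each risk: a loss of 10 with probability 0.8%, zero otherwise.
single = [(0.0, 0.992), (10.0, 0.008)]
# Exact distribution of the sum of two independent copies:
total = [(0.0, 0.992 ** 2), (10.0, 2 * 0.992 * 0.008), (20.0, 0.008 ** 2)]

print(var(single))  # 0.0  -> each risk alone looks capital-free at 99%
print(var(total))   # 10.0 -> jointly, the 99% loss is 10; the sum (0 + 0) understates it
```

The probability that at least one risk fires is about 1.6%, which pushes the joint 99% quantile to 10 even though each standalone 99% quantile is zero.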

interconnected risk types

As an example of the interconnection between risk types, consider the operational risk losses borne by banks and other financial market participants arising from the frauds conducted by Bernie Madoff. Madoff’s frauds had been underway for a long time and the suspicions of some participants had been aroused by the uncannily smooth financial performance of his funds, but what really brought the frauds into the open were the liquidity crisis and market risks arising from the GFC. It should not be surprising that risk of financial fraud may be heightened during stressful market environments, so clearly modeling operational and market risk capital requirements independently is inappropriate.

A forward-looking example of the link between market and credit risk might be the heightened risk of default of so-called zombie companies which have been kept on life support by the extremely low level of interest rates following the GFC and the pandemic. As interest rates are now rising and economic conditions worsening, failing companies will be increasingly unlikely to be able to refinance their debt or grow their businesses, so are at increased risk of default. Moreover, the fact that interest rates have been so low for so long has enabled weak companies to continue to operate far longer than they would have been able to historically, a factor which is likely a driver of the lower productivity experienced in many economies. It is likely that such companies will have been run farther into the ground than would have been the case historically, so that Loss Given Default estimates based upon historical loss data may well prove to be too optimistic, and the credit risk capital so calculated prove to be insufficient.


risk versus uncertainty

Most of the time, markets exhibit “normal” behavior and the inaccuracies induced by the invalid assumptions are not too damaging; but then, most of the time, a bank’s expected losses are easily absorbed by its earnings from fees, commissions and bid-offer spread. The only times when a bank needs capital to absorb losses are during extremely stressful market periods, when markets are moving with extreme volatility and counterparties and borrowers may be at heightened risk of default; but these are the very times when the assumptions underpinning the capital models break down most egregiously. Thus, the regulatory capital models truly are classic umbrellas which work only whilst the sun shines, yet the BCBS persists in using these flawed models.

It’s worth remembering how the economist Frank Knight described the difference between “risk” and “uncertainty”: risk is the potential for adverse outcomes drawn from a distribution which is known, whilst uncertainty is the potential for adverse outcomes drawn from a distribution which is not known, the latter being what Nassim Nicholas Taleb refers to as “Black Swan” events.

In using historical observations to inform their view of potential adverse future outcomes, risk managers treat financial risks as “risks” (in a Knightian sense), in that they treat them as adverse events drawn from a known distribution. Risk managers do this because it is the only way to make risk quantification tractable. The truth, however, is that due to the CAS nature of markets the future is fundamentally unknowable, and in reality what risk managers are dealing with is uncertainty. As risk managers, we are like the man who is looking for his lost key under a street light on a dark night. He is joined in the hunt by a friend, who, after some time of fruitless searching, asks: “Are you sure this is where you lost your key?”, to which the man replies: “Oh no, I lost it over there!” “Well”, his friend says, “if you know you lost it over there, then why are you looking here?”, to which the man replies: “Because this is the only place where I can see!” We call ourselves “risk managers” and treat the uncertainties we are dealing with as risks, because doing so gives us access to the only tools we have, and although they are not the right tools, they are better than nothing.

closing thoughts

Does all this mean that I have turned my back on risk management and the tools I used and helped develop during my career? Absolutely not! Whilst I don’t believe that statistical metrics such as VaR and ES are appropriate for determining capital underpinning, nevertheless they should be a part of the risk manager’s armory. For one thing, actual stress events are mercifully rare, and so although scenario analysis may be a better tool for determining minimum bank capital, we rarely get the opportunity to test those models. But if we use the same P&L representation (which tells us how the value of a position changes with changes in the underlying market levels) and risk representation (which we use to express changes in the market risk factors feeding into the P&L representation) to calculate VaR/ES as we use to perform scenario analysis, then VaR/ES give us a relatively high-frequency measure of the adequacy of our P&L and risk representations through the daily performance of granular P&L predict-and-explain and granular back-testing.
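The daily back-testing loop the author describes reduces to counting exception days, those on which the realized loss breaches the prior day's VaR forecast. A minimal sketch with hypothetical data (at 99%, roughly two to three exceptions per 250 trading days are expected; materially more suggests the P&L and risk representations are inadequate):

```python
# Minimal VaR back-testing sketch: count days on which actual P&L
# breaches the VaR forecast made for that day. Data are hypothetical.
def count_exceptions(var_forecasts, actual_pnl):
    """var_forecasts: positive loss thresholds; actual_pnl: realized P&L."""
    return sum(1 for var, pnl in zip(var_forecasts, actual_pnl) if -pnl > var)

forecasts = [3.0, 3.1, 2.9, 3.0, 3.2]
realized  = [-1.0, -3.5, 0.8, -2.9, -4.0]
print(count_exceptions(forecasts, realized))  # 2 breaches (-3.5 and -4.0)
```

Run daily at a granular level, this exception count is the high-frequency feedback on model adequacy that rare stress events cannot provide.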


The fact that markets are CAS, and that what we as risk managers are really dealing with is uncertainty, is why there can never be a silver bullet for risk quantification. And that is why, as a physicist and a risk manager for more than thirty years, I believe that, for all the developments in quantitative risk metrics and the formalization of risk management during the course of my career, risk management will forever remain as much an art as it is a science.

Paul Shotton

Paul Shotton is a physics PhD with more than 30 years of practice in financial market risk analytics and executive leadership. His current roles include CEO of Tachyon Aerospace, an aerospace technology company, and chairman and CEO of White Diamond Risk Advisory, which advises CEOs, boards and startup companies in the finance and technology sectors. Paul developed his knowledge of markets and honed his insights in high-level trading and risk management positions at financial institutions in major metropolitan hubs, first in fixed-income trading positions at Goldman Sachs and Deutsche Bank in London, and subsequently, in New York, as global head of market risk management at Lehman Brothers and deputy head of group risk control and methodology at UBS. Paul is a frequent publisher of articles on economics, corporate governance and risk management.


Synopsis

FTX’s unravelling and ongoing bankruptcy proceedings have led to the uncovering of potentially criminal self-dealing, regulatory laxity and misplaced optimism in crypto to self-regulate. As risk professionals, now is an opportunity to step back, observe historical precedence and notice where regulation (already in place for banks) may have made the difference. In doing so, both crypto’s risks and the likely path of future rules are made clearer.

the FTX implosion

The bankruptcy of FTX, a large international cryptocurrency dealer, will no doubt have serious repercussions in the cryptocurrency world. The investigation into what went on at FTX could take months or years to complete. Since the major business unit was legally located in the Bahamas, known for accommodative regulations and incentives for foreign businesses, there were probably few regulations that could have been broken. Most regulations that do exist relate to the privacy of financial transactions, which will slow down any judicial proceeding. There were obviously some rules in place, as FTX continues to allow withdrawals from accounts of Bahamian citizens while non-local accounts have been frozen. This kind of “don’t mess with the locals” rule is common in tax havens like the Cayman Islands, Panama, etc. Financial firms in these jurisdictions can expect criminal charges for fraud if local citizens lose out, while prosecution for losses suffered by external participants is extremely rare. Mr. Sam Bankman-Fried, CEO of FTX, has been criminally charged in the United States with multiple counts of fraud. The legal process of extradition is moving forward at this time.

FTX’s makeup and regulation

FTX was organized with three key corporate units (and numerous subsidiaries), one of which, Alameda, undertook proprietary trading with the firm’s capital. The main business was the crypto equivalent of a securities broker-dealer, but broker-dealers are heavily regulated in the US under the Securities Exchange Act of 1934, Rule 15c3-1, known as the “net capital rule.” All the technicalities around this rule run to about 300 pages. Most other countries with organized financial markets have similar rules. If FTX had been regulated as a broker-dealer, then all customer funds and assets for which the investor had fully paid would have to be kept segregated from any funds available to the firm. If a customer has a margin account with a broker, then assets of the margin account act as collateral for loans from the broker to the customer.


The broker can then pledge those same assets for loans to the broker from an external lender (e.g. a bank). The net capital rule regulates how these collateral relationships are managed and how much firm capital and liquidity the broker must maintain.

If crypto was regulated as a commodity (as the US IRS treats it for tax purposes), then FTX would have been treated as a “futures commission merchant”, or FCM, by the CFTC. FCMs are subject to a similar net capital rule, CFTC 1.17(a)1(i), which requires net capital of $1 million or more based on the volume of business. If the FCM deals in over the counter (OTC) transactions, the minimum capital is $20 million.1 Given FTX’s “total assets” of tens of billions, the capital requirement would obviously have been substantial.

what happened at FTX?

From early reports, it appears that FTX lent $10 billion of customer funds/assets to their own trading affiliate (Alameda) without collateral. Initial reports suggest that around $1.7 billion is now missing, most probably through trading losses at Alameda. There have also been news reports of a separate $473 million in suspicious transactions that could represent further losses. Some industry sources have speculated that these transactions were related to the aforementioned actions of Bahamian regulators to mitigate risk to local citizens. It should be noted that most trading in fiat currencies (FX) and physical commodities (e.g. gold bars) is also unregulated in the US and many large countries. However, most currency trading is done by banks, so the activity is often indirectly regulated.

Near the end of November, US-based cryptocurrency lender BlockFI declared bankruptcy, citing exposure to the FTX implosion. Interestingly, BlockFI named FTX affiliate Alameda as an entity to which it lent while also citing another FTX unit as a creditor. While such “both way” counterparty risk situations are common among financial institutions, it also raises the possibility that FTX was using outside firms such as BlockFI as a conduit to disguise movements of funds among FTX units. The extent of further fallout in terms of contagion effects from the FTX collapse is currently unknown.

investor risks in wake of FTX’s implosion

From a theoretical perspective, investors are dealing with both market risk and operational risk, which are jointly important but hard to assess. Our research article published in the Global Commodities Applied Research Digest (JPMorgan/University of Colorado) provides a unique framework for that problem.2 A less formal version of the same article appeared in PRMIA Intelligent Risk’s November 2022 edition. Northfield’s Peter Horne has also published two related articles in the Journal of Performance Measurement on how the evolution of cryptocurrencies and “decentralized finance” will impact institutional investors. At least one institutional investor, the Ontario Teachers’ Pension Plan, has indicated investments of about CAD $95 million related to FTX.


the future of crypto regulation

The “Wild West of crypto” in the US seems to have arisen from the inability of the SEC, CFTC, IRS, and the Federal Reserve to coordinate on how to approach crypto. The FTX situation is somewhat akin to earlier losses associated with “over the counter” swaps. Initially, swaps were explicitly exempted from most regulation by prior Federal laws, so regulators could claim their hands were tied. Rules for reporting and clearing swaps were enacted after the Global Financial Crisis, but some aspects of individual swap transactions (e.g. margin requirements) are still generally treated as private business contracts. In the US, the CFTC has primary regulatory power on swaps, but the SEC can get involved in cases where swap transactions are based on securities that the SEC would normally regulate.

A clear regulatory picture in the US would improve the ability of the G20 countries to collaborate on sensible guidelines, leaving unregulated crypto activities to operate only in small countries with economic incentives for liberal engagement with cryptocurrencies (e.g. El Salvador).

references

1 // Code of Federal Regulations, 17 CFR 1.17.

https://www.govinfo.gov/content/pkg/CFR-2021-title17-vol1/xml/CFR-2021-title17-vol1-sec1-17.xml

2 // Blackburn, T., DiBartolomeo, D., Zieff, W. Global Commodities Applied Research Digest. Published September 1, 2022: https://www.jpmcc-gcard.com/gcard-article/assessment-of-cryptocurrency-risk-for-institutional-investors/

peer-reviewed by

Carl Densem

author

Dan diBartolomeo

Dan diBartolomeo is founder and president of Northfield Information Services, Inc. He serves as PRMIA Regional Director for Boston, as well as on boards for several financial industry associations including IAQF, QWAFAFEW, BEC and CQA. Dan spent eight years as a Visiting Professor in the risk research center at Brunel University in London. In 2010, he was awarded the Tech 40 award by Institutional Investor magazine for his analysis that contributed to the discovery of the Madoff hedge fund fraud. He is currently the co-editor of the Journal of Asset Management and has authored nearly fifty research studies in peer-reviewed publications.


The Turing Test can be used to tell apart two important types of challenge facing risk managers today: complex and complicated issues. While complicated issues can be dealt with by algorithmic, rules-based systems, and are likely better run by computers, complex issues exhibit qualities that befuddle such programmatic thinking and can only be handled by human ingenuity.

does your risk management pass a turing test?

The English mathematician Alan Turing, who was portrayed in the movie “The Imitation Game,” is widely considered to be the grandfather of the computer. As part of his theorizing about computers, Turing devised a thought experiment which is now called the Turing Test. In one version of a Turing Test, you are to imagine two curtains. Behind one curtain is a human, and behind the other curtain is a computer. The Turing Test is to have an observer ask questions or pose problems to each of the “curtains” and, from the responses, decide whether they are coming from a computer or a human. If a computer is indistinguishable from a human, then it is said to have “passed” the Turing Test.

You might be asking, “what does this have to do with risk management?”, and the answer is: everything. In essence, the Turing Test answers the question of whether your risk management is robotic in nature or human in nature. More importantly, it answers the question of whether your risk management can deal with adaptive complexity (for which you need human responses), or solely with complicated risk issues (for which a computer solution will suffice).

which is which: complicated vs. complex

Complicated

In Systems Thinking there are two main types of systems: complicated and complex. Complicated systems work based on defined rules or laws whose trajectories can be calculated and thus predicted. This makes managing complicated issues relatively straightforward: checklists, procedures, and automation are effective at managing complicated risks. Gravity is one such example of a complicated system: if you hold your coffee mug up in the air and let go of it, it will fall to the ground.

Complexity

By contrast, complex systems are those which are unpredictable, and furthermore are not reproducible. Financial market bubbles and crashes are examples of complex issues. Perhaps a quirkier example is the unprecedented demand for toilet paper at the beginning of the COVID crisis. Who could have predicted that toilet paper would be the item in greatest demand? Furthermore, it is likely that a totally different “fad issue” will arise with the next crisis.

Complexity arises when three conditions are present:

1. There are agents (think individuals, groups of individuals, companies, economies etc.),

2. Who can interact (through discussions, media reports, social media, business competition) and

3. Who can adapt (change their minds, opinions, strategies, or their desires).

Complex situations exhibit a host of properties that create special challenges for risk management. One specific property of complex systems is that of emergence. A particular example of emergence is that of a murmuration of starlings, for which videos are easily found on social media. In a murmuration, a flock of birds collectively create beautiful patterns of flight, but the patterns are constantly changing and never repeating. Furthermore, the patterns of the flock are completely unpredictable. Charts of the stock market are another example of emergence where there are obvious patterns in the movement of the market averages, but unpredictable breaks and departures as new patterns form.
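The three conditions above can be made concrete with a toy agent-based sketch (entirely illustrative, not a market model): agents interact by observing everyone's last choice, and adapt by mostly copying the majority while occasionally changing their minds at random. Which "fad" emerges, and how quickly, differs from run to run and cannot be read off the individual rule.

```python
# Toy illustration of emergence: identical simple rules at the agent
# level, yet the aggregate path depends unpredictably on chance.
import random

def run_fad_model(n_agents=100, steps=50, noise=0.1, seed=None):
    """Each step, every agent copies the current majority choice with
    probability (1 - noise), otherwise picks 0 or 1 at random."""
    rng = random.Random(seed)
    choices = [rng.choice([0, 1]) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        majority = 1 if 2 * sum(choices) >= n_agents else 0
        choices = [majority if rng.random() > noise else rng.choice([0, 1])
                   for _ in choices]
        history.append(sum(choices))   # number of agents holding choice 1
    return history

print(run_fad_model(seed=1)[-5:])  # tail of one run's fad sizes
print(run_fad_model(seed=2)[-5:])  # a different seed, a different path
```

Nothing in the per-agent rule mentions a fad, yet one reliably emerges; that gap between micro rules and macro behavior is the property the starlings and the stock charts share.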

how are complex and complicated risks managed?

Complexity imposes special demands on the risk manager, and complex risks need to be managed very differently than traditional complicated issues. To begin, a risk manager must first determine whether the risk is complicated or complex. Secondly, a complex situation cannot be solved – it can only be managed. Thirdly, a “try, learn, adapt” approach is needed to deal with the ever-changing and non-reproducible aspects of complexity. Each complex situation must be treated as unique; what was best practice and worked last time will not necessarily work the next time the risk arises.

Complexity requires flexibility and adaptability. Rigid rules and processes designed for complicated situations will be ineffective against complex ones, and indeed will often be more harmful than helpful. This is why the Turing Test for your risk management system becomes relevant.


passing the turing test

A risk management system solely designed for complicated situations will definitely pass a Turing Test. The human manager will not be distinguished from the computer. A risk management system that relies solely on rigid rules and procedures can (and probably should) be replaced by a computer or a bot. However, a computer or a bot (or other form of artificial intelligence) cannot deal with complex situations.

Complex risks require judgment, intuition and context sensing – all characteristics that are unique to the human mind. Processes and algorithms that can be programmed do not have the required flexibility or adaptability of the human manager. While risk managers, as individuals or as departments, also have their flaws, errors and biases, these shortcomings are not as bad as assuming that all risks are complicated.

So, take a minute and consider the most critical risks that your organization is facing. Do most of these risks follow well-defined rules like the laws of physics, or do they rest on the underpinnings of complexity: agents who can interact and adapt? Are your most critical risks complicated or complex? Does your risk management pass a Turing Test?

As a follow-up to this article, join us for a PRMIA thought leadership webinar on March 29, where Rick Nason will sit down with Carl Densem, Intelligent Risk editor, to discuss VUCA, AI, and the future of risk management.

Rick Nason, PhD, CFA, is an Associate Professor of Finance at Dalhousie University, where he has received numerous awards for teaching excellence. His academic work includes research on corporate risk management and complexity in business. A former capital markets and corporate learning professional, he remains active in consulting and corporate training, specializing in applications of complexity science and risk management.

He is the author or co-author of seven books and textbooks, including It's Not Complicated: The Art and Science of Complexity in Business, published by University of Toronto Press, and Rethinking Risk Management, published by Business Expert Press.

author

Synopsis

Technological innovation has already made strides in advancing parts of the financial system, notably the payments sector, increasing access for customers who now have more choice. At the same time, there is growing awareness of the risks stemming from rapid technological advancement, and regulatory scrutiny is increasing too. There are ways, however, for financial firms to continue to innovate without compromising systemic risk mitigation efforts.

systemic risk mitigation should be front of mind in technological innovation

introduction

Emerging technologies hold considerable promise and benefits for the global financial system. Distributed ledgers, for example, could be used to validate and track transactions on a distributed and decentralized platform, providing a compelling alternative to today's centralized payment infrastructures. More specifically, blockchain, a type of distributed ledger technology (DLT), could improve the efficiency of existing international payments systems and infrastructures. The payments sector is a great example of how technology has helped increase client access to the financial system while enhancing the user experience through innovative payment tools.

After all, new technologies are significantly changing the way society and the financial services industry conduct business. Initially, many fintech developments focused on enhancing existing capabilities through innovative technologies, such as the cloud, Application Programming Interfaces (APIs) and machine learning. Today, fintech applications are looking to fundamentally transform the way counterparties interact with one another, for example by using distributed ledgers, smart contracts and digital assets.

addressing new risks

Despite the multiple current and potential use cases, no new technology application comes without risks. First and foremost, the interdependency or interconnectedness of the global financial marketplace should be front of mind when considering new technology implementations. Recent events in the crypto industry highlighted the increased risk of contagion to the financial sector and the real economy.


Indeed, a recent report by the Financial Stability Board (FSB) has shown that correlations between crypto asset prices and mainstream equity indices have been steadily increasing. Meanwhile, the use of DLT, while minimizing some risks and providing efficiencies, simultaneously increases the number of potential points of failure, as well as the risk of data breaches, hacking and other types of third-party risk. According to DTCC's most recent Systemic Risk Barometer survey, cyber risk is considered one of the top three risks in financial services; it has been consistently ranked among the top risks since the survey first launched in 2013, and against the backdrop of increasing adoption of new technology solutions, this trend is only likely to accelerate in coming years.

There are also a number of other influential factors that should be addressed. For example, models, including AI-based models, have contributed to faster and often better decision-making. However, over-reliance on models calibrated largely on historical data and events can be less effective when unprecedented, exogenous shocks occur. For example, the Covid-19 pandemic led to widespread deficiencies in credit rating models, as the dislocation of historically stable fundamental credit risk factors and macro-economic variables produced unreliable probability of default estimates. Additionally, over-reliance on high-speed processing and automated decision-making can lead to errors, reduce transparency, and embed unintended biases. Furthermore, conflicting national priorities can impede cross-border data sharing, compromising efforts to tackle cyber terrorism and financial crime, both of which are on the rise. Finally, social media platforms and online forums can amplify market volatility and risk: frictionless retail participation on low-cost digital platforms and brokerages, for example, contributed to the 2021 meme stock event.

increasing regulatory focus

Regulators and policymakers globally are increasingly cognizant of technology-driven risks to market resilience and have been implementing measures to enhance the industry's focus on model risk management and to encourage industry collaboration. In the U.S., for example, the White House executive order on responsible development of digital assets, issued in March 2022, focused on consumer and investor protection and ensuring financial stability. This was followed by multiple related reports issued by the U.S. Department of the Treasury, including the Financial Stability Oversight Council (FSOC) report on digital asset financial stability risks and regulation, which focused on crypto asset risk and outlined regulatory gaps and market risks that could pose threats to stability. Given significant risk events in the crypto markets of late, it is likely that legislative activity will accelerate this year.

ensuring thoughtful innovation

As we look ahead, the risks that come with any new technology implementation should not deter the innovation strategies of global financial services firms. Instead, firms should be encouraged to adhere to three guiding principles as they embark on their innovation journeys, in order to ensure that their services continue to provide optimal value and maximum benefit to clients.


First and foremost, any development of digital solutions, including those covering digital assets, must begin with a strong client and industry-centric approach. Client benefits must be clear and tangible, backed by customer engagement, industry support and proven hypotheses. New or enhanced solutions should allow the industry to optimize the full value chain, or key components of it, to achieve cost and operational efficiency.

Second, any new technology initiative should provide equal or greater resilience than existing infrastructures and solutions.

And finally, given that the financial markets ecosystem will continue to change at a rapid pace, all participants and market providers must be prepared to adjust early and often, and focus on creating the most flexible long-term solution, not only a short-term one.

conclusion

Without doubt, emerging technologies offer many benefits, leading to greater access to products and services along with lower costs of participation and other efficiencies for both consumers and institutions. However, today, many of these benefits have yet to be fully realized despite great promise, and we have already seen some significant, unexpected risks arise as a result of new technology implementation. Evidence to date indicates that the balance between prudent innovation and systemic risk considerations has not yet been achieved. Policymakers and regulators should act swiftly to promote the implementation of appropriate guardrails and best practices globally, and to ensure that any new technology initiative provides equal or greater resilience than existing infrastructures. The industry, market participants and regulators should continue to work together to avoid a scenario whereby in the pursuit of rapid innovation, and the desire for speed and convenience, we undermine the progress in systemic risk mitigation achieved since the last financial crisis.

peer-reviewed by Carl Densem

Michael Leibrock

Michael Leibrock is a Managing Director in the Depository Trust & Clearing Corporation’s (DTCC) Financial and Operational Risk Management division, with primary oversight for the Counterparty Credit Risk and Systemic Risk functions. He is responsible for the analysis, approval and ongoing credit surveillance for all members of DTCC’s clearing agencies. Michael is also responsible for the identification and monitoring of potential systemic threats to DTCC and the securities industry, actively engaging with DTCC clients and regulators on systemic risk topics and producing periodic thought leadership content. In addition, Michael serves as co-chair of the Systemic Risk Council and is a member of the Management Risk Committee and Model Risk Governance Committee.

Prior to joining DTCC in 2011, Michael’s career in risk management spanned over 20 years during which he held senior risk management roles at several firms, including Chief Counterparty Risk Officer at Fannie Mae and North American Head of Financial Institutions Credit for Commerzbank A.G.

Michael holds a Doctorate in Finance and International Economics from Pace University's Lubin School of Business and an M.B.A. in Finance from Fordham University. He is a Lecturer in Columbia University's M.S. program in Enterprise Risk Management and co-author of a 2017 book titled "Understanding Systemic Risk in Global Financial Markets" (Wiley Finance).

author

Synopsis

The COVID-19 pandemic has ushered in a heightened focus on climate change risk and resultant exposures across the corporate world. Disparate political views and evolving regulatory requirements create uncertainty, requiring boards to establish the internal standards necessary to safeguard their companies, enhance corporate culture, and serve as a moral compass on the march toward a stable and sustainable future.

shepherd or the sheep: introducing climate risk frameworks

introduction

Climate risk is the defining issue of our generation, but it is the velocity of climate change which will have the greatest and most profound impact upon our lives. Climate risk is a long-term, science-based, non-diversifiable risk, affecting all industries. To the corporate world, climate risk is a balance sheet risk, a profit and loss risk and, more importantly, a reputational risk. Indeed, such is the influence of this exogenous source that reputational risk has been elevated in short order, and banks in particular have had to up their game in terms of their narrative, marketing, and branding.

The risk requires firms to think about assigning a set of comprehensive roles, social responsibilities, and values that can be measured and accounted for in order to mitigate the challenges posed. Through climate change, one sees the interconnectedness of emerging risks (with the current pandemic a manifestation of both the velocity and acceleration of climate risk) ushering in a transformation of thought, word, and deed.

how Covid-19 has informed stress testing models for climate change

COVID-19 is a dress rehearsal for climate change, a harbinger and abstract of what is in store down the road. The current pandemic is an agent of change, causing disruption and requiring firms to adapt their business models to accommodate evolving circumstances. Such a change can be seen in the realm of stress testing and scenario design, the common tools used to identify, measure, quantify, and review both known and unknown enterprise risks. The data provided by COVID-19 has resulted in more realistic testing parameters with which to also assess the emergence of climate change.

Stress testing and scenario design have been around for a while, but what has changed in this pandemic is the need to repurpose and enhance existing models while overlaying and incorporating new thinking and parameters. Common issues such as data quality, information technology, and risk management have had to be structurally addressed to ensure that the output from these tests is both meaningful and transparent to its many users. The pandemic has raised questions regarding the gathering, collection, and frequency of the data utilized, requiring enhanced data sets which can be introduced and populated for climate change modelling. In particular, the data has been distorted, normal channels have been compromised, and there are more outliers than before, with trends harder to discern.

how diverging political goals complicate corporate efforts on climate change

COVID-19 has both ignited and obfuscated the current climate change debate, with politicians from all walks of life trying to distract and manipulate the debate for short-term considerations. This political uncertainty is amplified by the fact that firms operating in multiple jurisdictions will have to cater to the differing lenses and perspectives of constantly changing government policies. Corporate leadership needs to be strong, since it will have to navigate conflicting political and regulatory minefields. Strong governance can only be sustained by a united board willing to take a more active role in driving resiliency in the face of competing conflicts that would otherwise steer the company off course. If the way out of the current pandemic comes in the form of green shoots, it may be that the corporate world needs to take the initiative, strive to do what is right, and lead all to greener pastures.

governance

Firms have now been forced to plan to include climate risk within their portfolio of risks, under a corporate framework that includes five key pillars: governance, risk management, strategy, pricing, and metrics. The linchpin of this framework is governance, evidenced by company boards (rather than senior management) driving and dictating change in order to set a whole new sense of social and commercial responsibility, the likes of which have never been seen before. Indeed, some boards have insisted that environmental due diligence be incorporated into their firms' credit processes by requiring an independent evaluation of counterparties' environmental history and track record. Boards are having to be educated in short order, with many turning to colleagues from the insurance industry, who have a wealth of experience in forecasting and dealing with the long-term implications of climate, in order to expedite their thinking around strategy, pricing, and all-important metrics. It is becoming self-evident to boards that their firms' very existence is wrapped up in the way they approach climate risk and the efforts needed to improve their continuing viability, sustainability, and resilience. For the few executive leaders who walk this path, this climate change DNA is invoking a strong culture of compliance and governance focused on quality returns and a maximized customer experience.

disclosures

Many firms have been forced to disclose their efforts by means of a written narrative in their financial reporting. For example, the European Commission's Corporate Sustainability Reporting Directive (CSRD) amends and significantly expands the existing requirements for sustainability reporting. The quality of firms' disclosures will be under the microscope, with stakeholders looking to see that words have been translated into concrete actions. Boards and senior management will be watched with eagle eyes on how they behave, with any discrepancies possibly becoming the subject of litigation and social media scrutiny.

The messaging will be all important and could become part of the sustainable company brand. The requirements to comply with regulatory measures are exacting: financial institutions now need to monitor their customers' green efforts and behavior by means of covenants and warranties. These covenants and warranties are far-reaching and extend to both funding and investment decisions, in terms of research and development and capital expenditure. Lending firms are becoming more closely identified with their customers and their borrowing policies, a reflection of their climate corporate governance. This gives lending firms the opportunity to position themselves as "green" role models and brand their lending accordingly. The more astute firms will take this onboard, with performance tables being designed and produced to indicate the applicability of climate risk standards, enabling corporations to benchmark against one another.

conclusion

Climate risk can now be seen as representing a commercial risk and, as such, institutions must both acknowledge its existence and treat the resulting exposure. Institutions that can see climate risk as an opportunity rather than a threat can strengthen and align corporate values, enhancing reputation and business resiliency. Those institutions that adopt a higher purpose with a climate moral compass are likely to experience a more coherent and collective culture. This culture change can be a competitive advantage, but it must be remembered that there are costs associated with the introduction of this climate change vision. Embedding this transformation requires a sustainable reengineering of business practices and models, demanding investment in different skillsets and training.

Mankind is watching, and the question for the corporate world is: do you want to be the shepherd or the sheep? The choice is yours.

John Thackeray

John Thackeray is a risk and compliance practitioner and an acknowledged writer. As a former senior risk executive at Citigroup, Deutsche Bank AG and Société Générale, he has had first-hand engagement with US and European regulators. John holds an MBA from the Chartered Institute of Bankers and was a Lecturer in Banking, Economics and Law.

He is a frequent contributor, thought leader and speaker on risk industry insights and has published risk articles and white papers for the Professional Risk Managers’ International Association, the Global Association of Risk Professionals, the Risk Management Association, the Association of Certified Fraud Examiners, the Association of Certified Anti-Money Laundering Specialists, and the Chief Financial Officers University.

author

Synopsis

P2P lending is not about to disintermediate established banks, mainly due to the higher risk profile of typical P2P borrowers and the returns sought by investors on these platforms. Platforms also differ from banks in several other important functions; nevertheless, they serve an important niche market avoided by banks.

peer-to-peer lending: financial inclusion or debt trap?

introduction

The process of disintermediation is one of the most important innovations of the last decade, affecting the traditional core banking activity of financial intermediation. Alternative financing solutions are emerging, targeting individual or business customers and offering cost efficiencies by removing the intermediary layer. In the case of peer-to-peer (P2P) lending, an online platform directly connects customers seeking credit with investors, promising lower costs for borrowers and higher returns for investors compared to bank lending.

After a rapid increase in the volume invested from 2005 onwards, the market share of P2P investments had declined by 2022. A number of fintech companies have applied for and been granted banking licences, suggesting that alternative finance can serve as a market entry point. Future market developments depend on a number of factors: high inflation encourages market participants to seek high-yield investment opportunities, while declining risk appetite may keep investors away from the segment. The rationale for the business model of P2P platforms is far from obvious: the development of the technology alone does not seem to justify making the intermediary role of banks obsolete. Empirical research in the field shows that platform clients are mainly high-risk, bank-ineligible borrowers (De Roure et al., 2016) and that activity on platforms is higher in areas underserved by banks (Jagtiani and Lemieux, 2018). A popular argument in favor of P2P lending emphasizes the platforms' contribution to financial inclusion by offering financing to the subprime segment. On the other hand, the higher ex-ante probability of default, even when the lender is compensated by extremely high interest rates, results in very high ex-post default frequency. Financial inclusion that promotes the indebtedness of lower-income individuals can therefore create a debt trap for them.

In the following, I compare platform lending with traditional bank financing based on the four main functions described by Freixas and Rochet (2008). Thereafter, I introduce the portfolio characteristics of the Estonian platform Bondora and conclude that platform lending is an alternative to loan sharks rather than to bank lending.

function 1: asset transformation

An important element of traditional financial intermediation is the transformation of assets by size, maturity and quality. The majority of deposits collected by banks are short-term, while borrowers typically seek long-term funding. The size and risk profile also differ: investors require high quality, low risk deposits, while borrowers represent larger and significantly riskier loans.

Peer-to-peer platforms do not provide traditional asset transformation, as they directly connect investors and borrowers. The maturity of investments is the same for lenders and borrowers, typically 3-5 years. However, by diversifying the investment portfolio, financing only a fraction of each of several loans, it is possible to resize exposures and even change their risk characteristics.
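
The effect of fractional investing can be illustrated with a small simulation. The numbers below (default probability, interest rate, portfolio size) are assumptions chosen for illustration, not platform data; the point is only that splitting one unit of capital across many loans leaves the expected return unchanged while shrinking its dispersion.

```python
# Illustrative only: compare funding one whole loan versus 1/n fractions of
# n loans with identical default probability. All parameters are assumptions.
import random

random.seed(42)
P_DEFAULT, RATE, N_LOANS, TRIALS = 0.10, 0.15, 50, 10_000

def portfolio_return(n_loans: int) -> float:
    """Return on 1 unit spread equally across n loans; a defaulted loan repays 0."""
    paid = sum(1 for _ in range(n_loans) if random.random() > P_DEFAULT)
    return (paid / n_loans) * (1 + RATE) - 1

single = [portfolio_return(1) for _ in range(TRIALS)]
diversified = [portfolio_return(N_LOANS) for _ in range(TRIALS)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"single loan:   mean {sum(single)/TRIALS:+.3f}, sd {sd(single):.3f}")
print(f"50-loan slice: mean {sum(diversified)/TRIALS:+.3f}, sd {sd(diversified):.3f}")
```

The two portfolios have the same expected return, but the diversified one's standard deviation falls by roughly the square root of the number of loans, which is the "changed risk characteristics" referred to above.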

function 2: risk sharing and risk management

Banks' risk management is comprehensive and strictly regulated. Banks measure their credit, market, liquidity and operational risk and must meet strict capital adequacy, risk calculation and reporting requirements, monitored by banking supervisors. In addition to banking regulation, deposit insurance strengthens investor protection.

Online platforms, although assisting in risk assessment by providing proprietary rating systems and automatic rejection below a certain rating, pass the credit and liquidity risk entirely to investors. Some platforms operate a secondary market where investors can sell the financed loan, or even offer a buy-back guarantee, but these models are less common. Online intermediaries also face very significant operational risk, being highly exposed to cyber-attacks.

Online intermediaries are not subject to the Basel framework, as they do not collect or create money. The regulation of fintech providers is an ongoing issue, but to date it consists only of transparency requirements and some restrictions to protect investors.

function 3: liquidity and payment services

Banks provide a range of payment services that facilitate financial transactions and contribute to the financial liquidity of the economy.

Online platforms do not provide any additional payment services, only the settlement of the cash flows related to the credit transaction. There is no possibility of early redemption, but if the platform operates a secondary market, investors can try to sell their portfolio at a cost depending on the liquidity of that market.


function 4: monitoring and information processing

Banks play a very important role in reducing the information asymmetry between borrowers and investors. It is more cost-effective for the intermediary to gather information, and its own exposure is an indication of the debtor’s creditworthiness (Leland and Pyle, 1977). The sophisticated rating models and the available information, such as the customer’s account history, reduce information asymmetry.

Platforms also analyze customers' sociodemographic and income data, although they have less access to information on previous financial behavior. However, soft information voluntarily provided by customers can be an alternative source.

Online intermediaries usually started as technology start-ups, so they rely heavily on innovative digital solutions such as machine learning or artificial intelligence. These solutions can improve the process of gathering and processing information, and thus the efficiency of the credit assessment.

what then is the role of P2P lenders?

Table 1 below summarizes the main characteristics of the loan portfolio of Bondora as of October 2020. The majority of the loans are rated C or worse and the average default rate is 40%, which is extremely high. The default frequency is in line with the ex-ante rating of applications. Our ongoing research (with Ferenc Illés and Tímea Ölvedi) on the performance of P2P lending shows that ratings are based more on hard information, with no convincing evidence that platforms can benefit from alternative information and reduce information asymmetry more than banks.

The loan amounts are small on average, and the interest rates are high, making P2P loans similar to loan shark creditors.


In the above-mentioned research, we also investigate the performance of P2P investments and how investors benefit from P2P participation. We find that although the default frequency is high, a large fraction of the defaulted loans is recovered later, and fewer than 10% of investors realize a negative rate of return.
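
The arithmetic behind this finding can be sketched as a back-of-the-envelope expected-return calculation. The figures below use the article's ballpark numbers (roughly 40% interest and a 40% default frequency) together with a hypothetical recovery rate on defaulted principal; they are illustrative only, not Bondora statistics, and the model simplifies by assuming all interest is lost on default.

```python
# Back-of-the-envelope expected return on a one-year P2P loan.
# rate and p_default echo the article's ballpark figures; recovery is a
# hypothetical share of principal recouped after default.

def expected_return(rate: float, p_default: float, recovery: float) -> float:
    """E[return] = (1 - p)*(1 + rate) + p*recovery - 1."""
    return (1 - p_default) * (1 + rate) + p_default * recovery - 1

for rec in (0.0, 0.5, 0.8):
    r = expected_return(rate=0.40, p_default=0.40, recovery=rec)
    print(f"recovery {rec:.0%}: expected return {r:+.1%}")
# recovery 0%:  -16.0%
# recovery 50%:  +4.0%
# recovery 80%: +16.0%
```

Even a moderate recovery rate flips the expected return positive, which is consistent with the observation that, despite heavy defaults, fewer than 10% of investors end up with a negative return.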

Based on the above, peer-to-peer lending is not comparable to bank lending, as such high default probabilities are unacceptable for traditional financial institutions due to the high capital burden and reputational risk. Thus, peer-to-peer platforms seem to operate in the part of the credit market which banks avoid. It could be argued that platforms serve as a legal alternative to loan sharks and allow financial inclusion of the bank-ineligible segment. On the other hand, this inclusion costs an interest rate of about 40% on average and leads to a default rate of up to 75%. Di Maggio and Yao (2021) found that borrowers on P2P platforms became more indebted, counting their original bank loans, shortly after origination.

references

De Roure, C., Pelizzon, L., & Tasca, P. (2016). How does P2P lending fit into the consumer credit market?

Di Maggio, M., & Yao, V. (2021). FinTech borrowers: Lax screening or cream-skimming? The Review of Financial Studies, 34(10), 4565-4618.

Dömötör, B., & Ölvedi, T. (2021). A személyközi hitelezés létjogosultsága a pénzügyi közvetítésben. Közgazdasági Szemle, 68(7-8), 773-793.

Dömötör, B., & Ölvedi, T. (2021). The Financial Intermediary Role of Peer-To-Peer Lenders. In Innovations in Social Finance (pp. 391-413). Palgrave Macmillan, Cham.

Freixas, X., & Rochet, J. C. (2008). Microeconomics of banking. MIT Press.

Jagtiani, J., & Lemieux, C. (2018). Do fintech lenders penetrate areas that are underserved by traditional banks? Journal of Economics and Business, June 2018.

Leland, H. E., & Pyle, D. H. (1977). Informational asymmetries, financial structure, and financial intermediation. The Journal of Finance, 32(2), 371-387.

peer-reviewed by

Carl Densem

author

Barbara Dömötör

Barbara Dömötör is an Associate Professor of the Department of Finance at Corvinus University of Budapest. She received her PhD in 2014 for her thesis modelling corporate hedging behavior. Her research interest focuses on financial markets, financial risk management and financial regulation. She is the regional director of the Hungarian Chapter of Professional Risk Managers’ International Association.


Synopsis

The rise of machine learning algorithms, especially in credit scoring and bankruptcy prediction, has of late overshadowed traditional methods while simultaneously raising challenges and attracting academic research. From this emerging body of research come best practices and guardrails that encourage better use of ML algorithms and awareness of their downsides.

machine learning and credit risk: challenges and insights

The vast proliferation of machine learning (ML) applications in recent decades, driven by the surge in available computing power, the introduction of several new ML algorithms, and their relative commercial success in tasks such as credit scoring and bankruptcy prediction, has contributed to the growth of various research strands examining ML applications in the field. ML algorithms have been popular in bankruptcy prediction and credit scoring, showing superior classification performance compared to classic methods such as logistic regression and discriminant analysis.1

The popularity of ML algorithms in this field arises from the fact that they tend to perform better on imbalanced and smaller datasets, an important advantage in bankruptcy and credit scoring applications.2 Results from current research and applications suggest that the selection of the estimation method is as important as the selection of the input variables in determining the performance of credit and bankruptcy prediction models. Barboza et al. (2017) find that using ML algorithms instead of traditional models leads to an average improvement in classification performance of 10%.
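
The imbalance point (see footnote 2) can be seen with a few lines of arithmetic. In the synthetic example below, where the 95/5 split is an assumption for illustration, a "model" that simply predicts every firm to be healthy achieves 95% accuracy while detecting no bankruptcies at all, which is why classification performance on imbalanced data must be judged by more than raw accuracy.

```python
# Why imbalance matters: on a 95/5 dataset, a classifier that always predicts
# "non-bankrupt" scores high accuracy while catching zero bankruptcies.
# Synthetic counts, for illustration only.
n_healthy, n_bankrupt = 950, 50

true_labels = [0] * n_healthy + [1] * n_bankrupt   # 1 = bankrupt
predictions = [0] * (n_healthy + n_bankrupt)       # majority-class "model"

accuracy = sum(t == p for t, p in zip(true_labels, predictions)) / len(true_labels)
recall_bankrupt = sum(1 for t, p in zip(true_labels, predictions)
                      if t == 1 and p == 1) / n_bankrupt

print(f"accuracy: {accuracy:.0%}")                # 95% -- looks impressive
print(f"bankrupt recall: {recall_bankrupt:.0%}")  # 0% -- useless in practice
```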

challenges

1 / The classification performance is a statistical concept that measures the accuracy (actual versus forecast) of the model when assigning the different cases to the different groups (e.g., bankrupt versus non-bankrupt).

2 / Imbalanced datasets are sets where some groups are much smaller than others as is the case with bankrupt versus non-bankrupt firms, where the former group is much smaller than the latter. Classic methods such as logistic regression and discriminant analysis do not perform well when applied to such datasets due to the lack of sufficient observations in one group.

The increasing popularity of ML techniques among academics and practitioners alike brings with it several questions and challenges. Perhaps the most important challenge any operator of ML algorithms faces is choosing among the many available ML algorithms.

This challenge, and other related questions regarding the required complexity and potential applicability or suitability of the increasing number of available ML algorithms, makes the task of applying ML algorithms to real life problems both interesting and difficult.

For instance, should one forgo the relative simplicity and structure offered by standard regression models in favour of more complex, 'black-box like' algorithms such as neural networks or gradient boosting? Also, is there always an additional benefit from adding complexity to the estimation process by utilizing more complex models or combining estimates from multiple ML algorithms? Perhaps the simplest answer to these questions is that there is no clear rule on what one should do: just as each dataset or situation is different, each ML algorithm is different too.

Therefore, the job of any ML user is to use the available tools and metrics to find the ML algorithm that provides the best answer to the question being researched while keeping an eye on any algorithm’s pitfalls or shortcomings. In their no free lunch theorem, Wolpert and Macready (1995) state that if algorithm A beats algorithm B in some cases, then it is likely that there is a similar number of cases where algorithm B beats algorithm A, which suggests that there is ‘no free lunch’ or ‘one-size-fits-all’ when it comes to applying ML algorithms.
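One pragmatic response to the no-free-lunch result is to let held-out data arbitrate between candidate algorithms. A minimal, illustrative k-fold cross-validation harness is sketched below; the `fit`/`predict` function interface is an assumption for illustration, not any particular library's API:

```python
def k_fold_accuracy(fit, predict, X, y, k=5):
    """Estimate out-of-sample accuracy of one algorithm via k-fold
    cross-validation. Comparing this score across candidate algorithms
    is one practical way to choose among them for a given dataset."""
    n = len(X)
    # Assign every k-th observation to the same fold (simple striding).
    folds = [list(range(i, n, k)) for i in range(k)]
    scores = []
    for test_idx in folds:
        test = set(test_idx)
        X_train = [X[i] for i in range(n) if i not in test]
        y_train = [y[i] for i in range(n) if i not in test]
        model = fit(X_train, y_train)
        correct = sum(predict(model, X[i]) == y[i] for i in test_idx)
        scores.append(correct / len(test_idx))
    return sum(scores) / k
```

The same harness can score a logistic regression, a gradient-boosted model, and a neural network on identical folds, so the comparison reflects the data rather than the modeller's prior preference.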

insights and trends

There are a few observations and trends that arise from current ML applications.

First, while the majority of past applications have focused on analysing and comparing the performance of individual ML algorithms, several studies have advocated a hybrid approach where one ML algorithm is used for feature selection, followed by another ML algorithm to address the classification or regression problem. Advocates of this approach argue that using an ML algorithm to select the most important features from the input variables makes the model more robust and mitigates the issue of overfitting3. This approach also reduces the time and computing power needed to run the model and allows for the inclusion of more input variables.
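A two-stage pipeline of the kind described can be sketched as follows; the correlation-based ranking below stands in for the first-stage ML feature selector purely for illustration, with the second-stage classifier then fit on the reduced feature set:

```python
import statistics

def select_top_features(X, y, k):
    """Stage one of a hybrid pipeline: rank input variables by absolute
    correlation with the label and keep the k strongest, before a
    second algorithm fits the prediction model on the reduced set."""
    n_features = len(X[0])

    def abs_corr(j):
        col = [row[j] for row in X]
        mean_c, mean_y = statistics.mean(col), statistics.mean(y)
        cov = sum((a - mean_c) * (b - mean_y) for a, b in zip(col, y))
        var_c = sum((a - mean_c) ** 2 for a in col)
        var_y = sum((b - mean_y) ** 2 for b in y)
        denom = (var_c * var_y) ** 0.5
        return abs(cov / denom) if denom else 0.0

    ranked = sorted(range(n_features), key=abs_corr, reverse=True)
    keep = sorted(ranked[:k])
    return keep, [[row[j] for j in keep] for row in X]
```

Dropping uninformative inputs before estimation is what delivers the robustness, overfitting, and computing-time benefits the paragraph above describes.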

Second, current research also appears to suggest employing recommendations from multiple ML algorithms rather than a single one (Tsai et al., 2014; West, 2000; Zhang et al., 1999). This approach is similar to relying on a group of analysts with different approaches (i.e. algorithms) instead of making decisions based on the recommendation of one analyst. It produces more robust estimates and reduces volatility in the final recommendation.
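The simplest way to combine the 'analysts' is a majority vote over their classifications; averaging predicted probabilities is a common variant. A minimal sketch:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class labels from several algorithms by majority vote.

    `predictions` is a list of per-algorithm prediction lists, one label
    per case; the output is one combined label per case."""
    combined = []
    for votes in zip(*predictions):  # votes for one case, across models
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined
```

With three models voting on three cases, each final label is whichever class at least two models agree on, which dampens the idiosyncrasies of any single algorithm.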

Third, ML algorithms also allow the analysis of text-based sources such as management discussion and analysis (MD&A) and other regular disclosures which brings more value and insights into the prediction process. The systematic analysis of MD&As using ML algorithms will allow more robust capturing of management’s sentiment and views about the company’s performance, industry trends and financial health.
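Such text analysis is often bootstrapped with dictionary-based tone scoring in the spirit of the Loughran-McDonald financial word lists; the tiny word lists below are illustrative placeholders, not the real lexicon:

```python
# Illustrative word lists only; production work would use a full
# financial sentiment lexicon such as Loughran-McDonald.
NEGATIVE = {"loss", "impairment", "litigation", "default", "decline"}
POSITIVE = {"growth", "improved", "record", "strong", "profitable"}

def tone_score(mdna_text):
    """Toy tone score for an MD&A passage: (positive - negative) word
    counts, normalised to [-1, 1]; 0.0 when no sentiment words occur."""
    words = [w.strip(".,;:()").lower() for w in mdna_text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return (pos - neg) / total if total else 0.0
```

Scores like this become one more input variable to the credit model, letting management's own language about performance and industry trends inform the prediction.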

3 / In statistics, overfitting is defined as the estimation of a model that fits the particular dataset being used too closely, and may therefore fail to provide reliable estimates when applied to a different dataset.

Last, more research needs to be done on the ability of ML algorithms to detect the impact of ‘black-swan’ events on credit conditions. Most of the research done so far focuses on the performance of ML algorithms across time with no attention being given to the fact that the performance of such algorithms may change significantly during periods of high stress or unforeseen events.

concluding remarks

While ML techniques bring numerous opportunities to the credit risk field, there are still several difficulties with their implementation in practice. First, discussion regarding the period and frequency (e.g. monthly, quarterly, annually, etc.) to be used for the calibration of the model is still inconclusive. Second, concerns about the ‘black-box’ nature of many of the ML algorithms which prevent proper auditing of the calibration process may continue to pose challenges to the wider utilization of such algorithms in the field. Last, but not least, some of the algorithms may require massive computing power to run, especially when using a large dataset or a large number of inputs. This puts restrictions on the ability to recalibrate the model frequently in order to incorporate changes in the underlying patterns or trends.

references

Barboza, F., Kimura, H., & Altman, E. (2017). Machine learning models and bankruptcy prediction. Expert Systems with Applications, 83, 405-417.

Tsai, C.-F., Hsu, Y.-F., & Yen, D. C. (2014). A comparative study of classifier ensembles for bankruptcy prediction. Applied Soft Computing, 24, 977-984.

West, D. (2000). Neural network credit scoring models. Computers & Operations Research, 27(11), 1131-1152.

Wolpert, D. H., & Macready, W. G. (1995). No free lunch theorems for search. Working paper, Santa Fe Institute.

Zhang, G., Hu, M. Y., Patuwo, B. E., & Indro, D. C. (1999). Artificial neural networks in bankruptcy prediction: General framework and cross-validation analysis. European Journal of Operational Research, 116(1), 16-32.

peer-reviewed by

Dan diBartolomeo, Carl Densem

author

Nawaf Almaskati

Dr. Nawaf Almaskati is an active researcher in machine learning and artificial intelligence, financial markets and ESG with several high impact publications and working papers in the field. Dr. Almaskati’s current research focuses on utilizing machine learning and advanced econometrics methods to address the challenges facing the financial industry.


Synopsis

Projects geared at climate change and energy risk must undergo clear and precise project risk management to attain their stated goals. By capturing individual risk categories in a stage-wise manner and treating these independently with established methods, risk management can better assure shareholder value and project completion. This important step cannot be overlooked.

digging into project risks aimed at climate change

introduction

Several organisations are currently initiating and planning significant projects to address climate and energy risks, to reduce their carbon emissions and water consumption, or to switch partly or fully to renewable energy sources. However, a common failing observed in the preliminary evaluation of such projects is the muddled identification and assessment of the different kinds of risks impacting project success.

It is important that risk identification and assessment for this type of project, which usually requires large initial investments for feeble returns and carries a high chance of failure, be carried out with attention to the various stages at which these risks arise. Such stage-wise differentiation enables a better understanding of the risks themselves, permitting the definition of appropriate risk responses so that threats can be effectively eliminated within each stage and the overall chances of project success enhanced.

categories of project risk

It is possible to obtain a holistic and exhaustive view of all potential project risks during preliminary project evaluation by proceeding sequentially to identify and categorise such risks as pertaining to one of: modelling, financing, or schedule realisation of investments and returns. This categorisation reflects the project evaluation and selection process, which is itself performed sequentially: project modelling first, choices for project financing second, followed by finalisation of the dates when investments are to be made and returns programmed. Though these three risk types are separate and distinct, their simultaneous impacts on each other (e.g. financing risks can impact both the developed model and its realisation) cause confusion that can be resolved only by identifying and assessing each risk individually and evolving appropriate responses at each stage.


This article refers as example to projects that are currently being planned by businesses in non-EU countries for exporting carbon-intensive products (such as iron and steel, cement, fertilisers, etc.) to the EU whilst adhering to the EU CBAM (Carbon Border Adjustment Mechanism).

1. Modelling risks relate to the project idea, which is the asset being created, in the form of benefits that will be realised less their costs for realisation. The project idea is securely established and realised when project planning, control and execution are appropriately exercised. At the modelling stage however, it is only the value of the asset that is being evaluated, independent of either how the asset construction is financed or when value flows, in and out, are scheduled.

Projects that do not provide intrinsic value do not merit further evaluation, but for those that do, the risks that can diminish their intrinsic value need to be fully identified and assessed at this stage for mitigation by suitable means. Thus, the quanta of value flows, their certainty and controllability, are the important conditions to be captured in the model at this stage, whilst guarding against excessive optimism or pessimism and remaining mindful that the value flows arise from the asset itself and not from conditions surrounding it.

For example, lower taxes or grants from the government accruing to a climate risk addressing asset are relevant value inflows for its model, but the financing assistance that is available in the form of a lower cost debt instrument for constructing the above asset, belongs rightfully to the next stage. Apart from building a realistic model, sufficient attention has to be paid to embedding control points within the project for safeguarding the value created by the project. Furthermore, the opportunities provided by the modelling stage for programming proper risk response actions to protect appraised value should be fully utilised.

In the case of non-EU businesses exporting carbon-intensive products to the EU, modelling for CBAM involves appropriately capturing a carbon charge linked to the EU’s carbon market price, currently around €90/tonne as against the global average of $2/tonne. On the other hand, modelling for exporting to the US market may mean factoring in tax credits for supporting the development of green technologies. Moreover, as such tax credits are conditioned on local content requirements and may conflict with WTO rules, they are likely to be replaced, meaning the latter project model may have to be rebuilt in the future.

2. Financing risks are linked to how the appraised, intrinsic project value is expected to be conveyed to the financiers of the project. This is the financing model for building the asset, and a project that is financed using incorrect instruments or employing inappropriate proportions of equity and debt is certain to reduce the project’s intrinsic value delivered to shareholders.

Assessing a project’s delivered value on the sole basis of a low-cost (green) debt instrument used to partly finance it may satisfy the returns demanded by the debt financiers, but it will be destructive of overall shareholder value if the benefits delivered are lower than overall project costs. Whilst it is conceivable that such projects are deemed strategic and are hence undertaken despite


their negative net present value (NPV), the project’s negative value add should be closely controlled during project execution. Further, and more importantly, the cost of capital used in evaluating a project is usually different from the actual achieved cost, and hence risk responses should be geared towards their equalisation.

Financing risks are unique and hence not controllable through the standard project management techniques of planning, execution and monitoring. To develop optimal responses to fluctuations in the cost of capital, detailed sensitivity analysis should be carried out via simulations in which each iteration varies the proportions of equity and debt in combination with their individual component costs. The resulting optimal ranges, both for component costs and financing proportions, should be adhered to in financing the project, and suitable financial risk responses should be programmed before project initiation to ensure control over financing costs.
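The simulation described above can be sketched as a simple grid search over financing mixes and component costs; the tax rate and input ranges below are illustrative assumptions, not recommendations:

```python
def wacc_grid(cost_equity_range, cost_debt_range, debt_weights,
              tax_rate=0.25):
    """Enumerate the weighted average cost of capital (WACC) across
    candidate financing mixes and component costs, returning the
    lowest-WACC combination alongside the full grid for sensitivity
    review. Tax rate and ranges are illustrative assumptions."""
    results = []
    for w_d in debt_weights:                      # proportion of debt
        for r_e in cost_equity_range:             # cost of equity
            for r_d in cost_debt_range:           # pre-tax cost of debt
                wacc = (1 - w_d) * r_e + w_d * r_d * (1 - tax_rate)
                results.append({"debt_weight": w_d, "cost_equity": r_e,
                                "cost_debt": r_d, "wacc": wacc})
    best = min(results, key=lambda r: r["wacc"])
    return best, results
```

Running the grid before project initiation yields the optimal ranges for component costs and financing proportions that the financing plan should then be held to.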

For non-EU businesses exporting carbon-intensive products to the EU, a favourable interest rate environment and high investor demand are currently permitting the issue of sustainable debt. Such debt was earlier mainly in the form of green bonds, but of late so-called “brown” or “transition” bonds, which encourage the transition to net-zero emissions, have become increasingly popular. Whilst green bonds provide the lowest-cost financing for a compliant business, brown bonds start costlier but feature a step-down mechanism that lowers their cost to the level of green bonds once the business becomes net-zero. More importantly, brown bonds may include “step-ups”: coupon increases that penalise the business for failing to achieve pre-agreed emission targets linked to key performance indicators (KPIs). It is necessary to capture these targets and their associated costs adequately and realistically in the financing model, within the context of a risk response structure, for use during the subsequent and ultimate stage of the project.

3. Schedule realisation risks are linked both to project financing and to the project model and its execution. They fall into an entirely different category because the planned time schedule of investments and returns provides an estimate of the intrinsic value, whereas the actually realised time schedule underlines the delivered value of the project to its stakeholders.

Given that a discounted cash flow (DCF) model based on a well-controlled cost of capital provides the most appropriate value for the project, any uncontrolled variances between planned and actual inflows and outflows will cause project values to fluctuate from the plan. The use of project management techniques to control both flow and timing variances is necessary at this stage, which is therefore different from the two preceding it. Control over risks arising at this stage is essential to ensure full realisation of the planned value of the project.
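The DCF valuation underlying this stage reduces to a one-line present-value calculation, and comparing a planned schedule against a delayed one makes the timing risk concrete; the cash flows and discount rate below are illustrative:

```python
def npv(cash_flows, discount_rate):
    """Discounted cash flow value of a schedule of net flows, where
    cash_flows[t] is the net in/outflow at the end of year t (t=0 is
    today). Shifting an inflow later in the schedule lowers its
    present value, which is exactly the schedule realisation risk."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))
```

For example, `npv([-100, 60, 60], 0.08)` values the planned schedule, while `npv([-100, 0, 60, 60], 0.08)` shows how a one-year delay in returns erodes delivered value even though the nominal flows are unchanged.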


conclusion

The stage-wise identification and categorisation of risks on climate change and renewable energy projects, as pertaining to one of modelling, financing, or schedule realisation of investments and returns, helps in improved risk assessments and also permits the evolution of distinctive responses to the risks at their very origin.

At the modelling stage, risks to intrinsic value should be identified for mitigation. During the financing stage, risks that are linked to how the intrinsic project value is delivered to the financiers of the project should be identified for mitigation. Schedule realisation of a project leads to actual value delivery to stakeholders and control over risks at this stage is essential to ensure full planned value realisation from the project. As the risks at each of the three stages are separate and distinct, a stage-wise differentiation enables better understanding and allows the definition of targeted risk responses.

peer-reviewed by

Carl Densem

author

Ganesh Melatur

Ganesh Melatur is a Chartered Manager and an experienced strategic management professional. He is Fellow member of the Chartered Institute of Management Accountants (CIMA) and Fellow member of the Association of Corporate Treasurers (ACT). Possessing considerable experience in the planning, controlling and execution of projects as a Systems Engineer, he is a Project Management Professional (PMP) from the Project Management Institute (PMI) and Senior Member of the Institute of Electrical and Electronics Engineers (IEEE). Ganesh completed his MBA from the Alliance Manchester Business School, University of Manchester.


Synopsis

Rapid adoption of APIs is making application endpoints an increasingly attractive target for cyber criminals. API calls from within an app can provide a blueprint for critical backend infrastructure, making it vulnerable to malicious attacks. This calls for a security roadmap beyond the traditional focus on the network layer.

mitigating cybersecurity risk through effective management of application programming interfaces (APIs)

International Data Corporation (IDC) projects global digital transformation spending will reach $3.4 trillion in 2026. Post-pandemic, many businesses are focused on enhancing customer experiences and personalized customer journeys through newer and improved digital engagements, products, and services. Implementing technology solutions that leverage robotic process automation (RPA), artificial intelligence and machine learning (AI/ML), real-time data processing and the power of cloud computing can increase the operational resiliency of businesses against market disruptions.

A key capability enabling such solutions is the development of microservices-based, API-first modular application architectures. With businesses recognizing APIs as a critical component in driving digital transformation use cases, adoption of APIs has increased exponentially in recent years. See Figures 1 and 2:

Figure 1: “To what extent do you agree or disagree with the following statement? ‘APIs are an essential part of an organization’s digital transformation’” — 98% of enterprise leaders agree. Source: 2021 RapidAPI State of APIs.

Figure 2: “Do you expect to rely on APIs more, less, or about the same in 2022?” Source: 2021 RapidAPI State of APIs.

1 / Worldwide Digital Business Spending Guide (2022)
2 / The State of Enterprise APIs (2021)
3 / State of APIs Developer Survey (2021)

While APIs have become integral to the cloud native application design patterns and for interoperability in a microservices architecture, managing API-related security risks has become paramount. According to a report by Imperva, unsecure APIs can significantly increase the cost of business operations attributed to cyber losses.4 See Figure 3:

Unsecure APIs can provide threat actors an attack route for backend systems and underlying infrastructure leading to breaches involving sensitive data. Increasing usage of cloud-based services and applications can inherently extend the attack surface through the use of managed infrastructure (virtual machines, storage, cryptographic systems etc.) as well as the availability of descriptive API documentation that can provide hackers with a better understanding of the functional layers of a web application. While cloud service providers build controls such as metering and tracking of API usage to address API security for managed services, organizations must understand the lifecycle of APIs they implement across their distributed hybrid cloud and multi-cloud IT environments in order to effectively manage risks and better govern their digital footprint.

API risk scenarios

Some common risk scenarios involving APIs include:

Incomplete API visibility: Time-to-market pressure and evolving application architectures and DevOps processes can render even well-designed API development, deployment and documentation practices inefficient. Incomplete visibility into the API inventory can perpetuate the growth of shadow and zombie APIs, which can undermine security and monitoring practices and increase the risk of cyberattacks. Hidden APIs can be a breeding ground for SQL injection attacks, which use malicious SQL code to access backend systems. A recent survey5 by Noname Security indicates that 74% of companies do not have a full API inventory, impeding their ability to manage API-related risks.

Unsecure API Endpoints: Threat actors can exploit API endpoints to retrieve Personally Identifiable Information (PII), e.g. names, addresses, social security numbers, credit card information, etc., by invoking API calls using basic parameters such as a mobile number, or through enumeration of user IDs.

Figure 3: Estimated total cyber loss represents any damage, loss, claim or cost directly or indirectly attributed to a cybersecurity incident. Source: 2022 Imperva Quantifying the Cost of API Insecurity Report.
4 / Quantifying the Cost of API Insecurity (2022)
5 / The API Security Disconnect

Broken user- and object-level authentication should be addressed through robust implementation of authentication and authorization protocols, as well as testing of the business logic layer of the APIs. A recent data breach at Optus6 appears to have involved an unauthenticated, publicly accessible API endpoint, which resulted in the exposure of PII of over 2 million customers.

Insufficient API Testing: Lack of sufficient testing of the API codebase and user interface (UI) integration layer during development can trigger significant remediation activities later in the production environment, delaying overall API release timelines. In 2019, the Centers for Medicare and Medicaid Services (CMS) experienced a coding error in their Blue Button 2.0 API7, used to provide access to beneficiary Medicare claims data. The issue persisted in the production environment and was determined to have been caused by a poor code quality review and validation process. The coding error could have led to inadvertent sharing of Personal Health Information (PHI) with unintended recipients. Mock API endpoints can be used to accelerate the application development lifecycle by enabling front-end developers to build UI features and run integration tests against the mock endpoints, resolving bugs and defects earlier in the API lifecycle.
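A mock endpoint of the kind mentioned can be very small; the sketch below serves a canned claims payload over HTTP so front-end tests can run before the real backend exists. The path, field names, and payload are invented for illustration:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockClaimsAPI(BaseHTTPRequestHandler):
    """Mock endpoint returning canned claims data, standing in for the
    real backend during front-end and integration testing."""

    def do_GET(self):
        body = json.dumps({"beneficiary": "test-id", "claims": []}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


def serve_once(port=0):
    """Bind the mock server and handle a single request in a background
    thread; returns the port the OS assigned."""
    server = HTTPServer(("127.0.0.1", port), MockClaimsAPI)
    threading.Thread(target=server.handle_request, daemon=True).start()
    return server.server_address[1]
```

A UI test can then point its HTTP client at `http://127.0.0.1:<port>/claims` and assert on the canned response, catching integration defects well before production.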

actions to manage APIs and mitigate risk

The following considerations are crucial for managing risk across the API lifecycle:

Automate API footprint discovery process to facilitate cataloging and risk classification of APIs, including internal and external APIs. Integrate the discovery process with security monitoring to manage unexpected API activity by leveraging log and traffic data from API gateways, cloud provider logs, content delivery networks, and other orchestration tools.
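A toy version of such log-driven discovery is sketched below; the one-record-per-line "METHOD PATH" log format is a simplifying assumption, since real gateway logs vary by vendor:

```python
from collections import Counter

def discover_endpoints(gateway_log_lines, documented_paths):
    """Build an endpoint inventory from API-gateway access logs and
    flag 'shadow' endpoints that receive traffic but are missing from
    the documented catalog."""
    seen = Counter()
    for line in gateway_log_lines:
        parts = line.split()
        if len(parts) >= 2:
            method, path = parts[0], parts[1]
            seen[(method, path)] += 1
    shadow = {endpoint: count for endpoint, count in seen.items()
              if endpoint[1] not in documented_paths}
    return seen, shadow
```

Feeding the shadow-endpoint list into security monitoring closes exactly the visibility gap described above: traffic to an undocumented path becomes an alert rather than an invisible risk.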

Deploy real-time threat detection and prevention countermeasures that continuously scan incoming data for vulnerabilities and business logic abuse, paired with real-time blocking and alerting. Integrate security monitoring with API testing during the development phase to conduct in-depth codebase reviews pre-production.

Safeguard secrets (e.g., API keys, tokens, passwords, certificates, etc.) from code leaks by implementing source code hygiene training for relevant stakeholders (developers, testers, system administrators, etc.). Implement a peer review process that incorporates automated review of sensitive data in addition to bugs and styling issues before committing any proposed code changes.
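An automated secrets check in such a review gate can start from a handful of regular expressions; the two patterns below are illustrative only, and real scanners ship far broader rule sets:

```python
import re

# Illustrative detection rules; production scanners use many more.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*['\"][^'\"]+['\"]"),
]

def scan_for_secrets(diff_text):
    """Flag likely hard-coded secrets in a proposed change, the kind of
    automated check a peer-review gate can run before code is merged.
    Returns (line number, offending line) pairs."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, line.strip()))
                break
    return findings
```

Wired into the peer review process, any finding blocks the commit until the secret is moved to a vault or environment configuration.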

Leverage API schemas to identify and mitigate unexpected API usage patterns related to client interactions with API endpoints. Deploy API discovery tools with schema ingestion capabilities to enable API traffic pattern analysis by understanding methods and data components. The analysis can feed the implementation of business logic rule-based workflows for accessing API endpoints (e.g., blocking unused HTTP methods, implementing rate limiting to mitigate DDoS attacks, etc.) that can mitigate malicious attacks.
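The rate limiting mentioned above is commonly implemented as a token bucket; a minimal per-client sketch, with an injectable clock so the behaviour is testable:

```python
import time

class TokenBucket:
    """Per-client token bucket: each request spends one token, and
    tokens refill at `rate` per second up to `capacity`. Requests that
    find the bucket empty are rejected, throttling abusive traffic such
    as the volumetric patterns behind DDoS attempts."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens = capacity
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

An API gateway keeps one bucket per client key, so a burst from one caller is rejected without penalising well-behaved clients.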

6 / Optus data breach: everything we know so far about what happened
7 / CMS Blue Button 2.0 Coding Bug Exposed PHI of 10,000 Medicare Beneficiaries

conclusion

Strategic management of APIs can scale and effectively monetize APIs while mitigating the risks associated with API security and implementation deficiencies. Implementing a well-designed API governance model can enable uniformity and compliance with standards covering the API lifecycle. In addition, it can provide clarity on roles and responsibilities to key stakeholders (e.g., product owners, developers, testers, and security teams) that can supplement data privacy and information security compliance across the organization’s API sprawl. Further, by automating key API-related processes, such as API documentation, testing, and security assessments, an organization can gain a more timely view of its API inventory and identify potential security vulnerabilities and business logic flaws. This allows for quicker triage and remediation of events such as account takeovers, fake account creation, or scraping by malicious actors.

Disclaimer: All views expressed in this article are the author’s own, based on his practical client-side and consulting experience in the data analytics field. These views do not represent the opinion of any entity the author has been or is associated with.

peer-reviewed by

Carl Densem

author

Preet Makkar

Preet Makkar is a Data Analytics Subject Matter Professional at U.S. Bank with a broad experience in data governance, data management, analytics and technology domains. As a Principal, Data Analytics at Cuda Data Analytics, he has authored several thought leadership articles covering topics such as Cloud Data Governance, Data Privacy and Open Source Technologies. He previously worked at KPMG, where he oversaw the delivery of large-scale transformation projects aimed at leveraging data and analytical solutions to improve the strategic execution of risk-driven processes for his clients. Over his consulting career, he has served more than a dozen global and US domestic financial institutions in managing core data analytics initiatives, covering strategy, design, implementation and risk management. Preet holds an MBA in Business Analytics from Bentley University and a B.S. in Business from University of Technology, Sydney.


Synopsis

Fraud costs organizations dearly and, unlike some other risks, is present across countries and industries. The repercussions of ignoring it are well-documented in case studies. Still, better and worse approaches make a difference to the magnitude of fraud losses, as well as reassuring management and the Board that everything is being done to identify and address fraud wherever it may pop up next.

fraud risk management: an assurance approach

introduction

Given the reality that all companies face the risk of fraud, this article seeks to make a case for an assurance approach to fraud risk management rather than a siloed approach. A siloed approach is where just one assurance unit takes responsibility for fraud risk management, while an assurance approach suggests an active coalition of various assurance units in the management of fraud risk.

the reality of fraud

Agnes has been running the fraud risk program of her company single-handedly for the last 5 years. She is an Internal Control officer, so management thought it best to have her manage the fraud risk program, since she would be able to institute appropriate controls directly based on whatever she finds.

“Corruption, embezzlement, fraud, these are all characteristics which exist everywhere. It is regrettably the way human nature functions, whether we like it or not. What successful economies do is keep them to a minimum. No one has ever eliminated any of that stuff”

The problem, however, is that the company has had at least one major fraud incident in the last 3 years, the last one almost taking the company out. Agnes is feeling the heat, and everyone fears when the next one might hit.

While the story above is an adapted version of a true situation, its reality stares many companies in the face. The Association of Certified Fraud Examiners (ACFE) Report to the Nations for the year 2022 (The ACFE, 2022) estimates that organizations lose 5% of revenue to fraud each year. It can’t be said that Agnes is not trying all she can, but she can only see as far as she knows. When interviewed, it was discovered that Agnes had been running the fraud program independently, just as company policy stipulated. Herein lies the pitfall.

how do you gauge the susceptibility of your company to fraud?

Many companies are in the same situation as Agnes’ company, while some are in even worse situations. A Director at Agnes’ company began asking questions to determine the company’s fraud risk management maturity. These included: when was the last time the company carried out a fraud risk assessment? Does the company have a fraud risk program? If so, how robust is it compared to best practices? Then comes the question: is the fraud program operating a standalone approach or an assurance approach? This last question exposed a major issue with the company’s fraud risk management program. It was operating a standalone approach, where only the internal control team was involved in managing the fraud risk program.

Figure 1: ACFE Key Findings (2022), from Occupational Fraud 2022: A Report to the Nations. Among the key findings: the study covered 2,110 cases from 133 countries causing total losses of more than $3.6 billion; the median loss per case was $117,000 and the average loss $1,783,000; a typical fraud case causes a loss of $8,300 per month and lasts 12 months before detection; 42% of frauds were detected by tips, more than half of which came from employees; and organizations with fraud hotlines detect fraud more quickly and suffer lower losses than those without.

the assurance approach to managing fraud

An assurance approach is where the assurance teams come together in the running of the fraud risk program of a company. Just as fraudulent activities cut across different departments of a company, assurance teams also need to come together to pool their knowledge in curbing them. The ACFE Report to the Nations notes that nearly half of all occupational fraud comes from these four departments: Operations 15%; Accounting 12%; Executive/Upper Management 11% and Sales 11% (The ACFE, 2022). This spread further justifies the need for an assurance approach to the fraud risk management program.

While the Internal Control team or the Operational Risk team might take the lead on managing the program, other assurance teams such as internal audit, cyber security and even compliance need to be involved, starting as early as the fraud risk assessment. For a robust fraud risk assessment, these teams, which have different views of the company’s control gateways, should contribute where and what to check, where and what to ask, what to expect, and what the company might be most susceptible to. This sometimes comes not from their experience within the company but from interactions with peers and professional colleagues. The pooling of these perspectives translates into a more robust fraud risk management program and controls.

a note of caution

Figure 2: ACFE Key Findings (2022)
There is a firm line to be drawn, though. The third line of defence, Internal Audit, should not institute and implement controls. Its role must not go beyond contributing to the Fraud Risk Assessment (FRA) and, at most, making non-committal suggestions on controls.

Reports and reviews should, however, be shared across these teams, with briefings and suggestions passed around as needed. This should not wait for fixed reporting dates; it should be instituted as an ongoing, as-needed exercise. The Institute of Internal Auditors (IIA) auditing standards (The IIA, 2022) have also recently started advising that internal audit teams always consider fraud risks in their reviews.

No fraud risk management approach is foolproof or guarantees zero fraud cases. The assurance approach to fraud risk management, however, gives far more confidence of coverage than a standalone or siloed approach.

references

“Occupational Fraud 2022: A Report to the Nations.” Association of Certified Fraud Examiners. https://legacy.acfe.com/report-to-the-nations/2022/

“The IIA Releases Updated Practice Guide on Assessing Fraud Risk Governance and Management.” The Institute of Internal Auditors. May 2022: https://www.theiia.org/en/content/communications/2022/may/the-iia-releases-updated-practice-guide-on-assessing-fraud-risk-governance-and-management/

peer-reviewed by

Carl Densem

author

Dami Osunro

Dami Osunro is a risk management and audit professional with over ten years of experience spanning external audit, internal audit, investigations, and risk management, gained at banks, consulting firms, and investment managers.

Dami holds the Certified Fraud Examiner (CFE), Certified in Risk and Information Systems Control (CRISC), Certified Information Systems Auditor (CISA), and Certified Business Continuity Professional (CBCP) designations. He holds an MSc in Corporate Finance and a BSc in Economics.

He is currently a Senior Analyst, Operational Risk Management with the Enterprise Risk Management team at Alberta Investment Management Corporation (AIMCo).


knowing the unknown – the stain greenwashing leaves on financial institutions

Synopsis

Greenwashing presents a tempting alternative for companies struggling to meet evolving ESG regulatory standards and consumer preferences. This article examines the pitfalls of Greenwashing and the very real regulatory, legal, financial, and reputational consequences organizations may face if they fail to safeguard against unsound or half-measured ESG practices.

introduction

The word Greenwashing is omnipresent. In almost every discussion of sustainable finance and related ESG risks it is treated as an important issue, and its prevention is prioritized by financial regulators such as the European Securities and Markets Authority (ESMA) and the Securities and Exchange Commission (SEC). Recognized as an emerging risk by the European Banking Authority (EBA)1, Greenwashing and its prevention will be a key objective of the EBA's sustainable finance roadmap in the coming years.

It is becoming increasingly obvious that Greenwashing poses a huge risk to the efficiency and integrity of efforts to achieve sustainability goals. It appears to be an external phenomenon, floating above everything yet deep in the background at the same time. The term is, however, so generic and abstract that it is barely included in the definitions of regulatory guidelines or technical standards, nor is it structurally included in the classification and methodological concepts for measuring and monitoring ESG risks.

how are we going to prevent it, if we do not define it?

In simple words, Greenwashing means presenting activities as green and sustainable when they are not. In this sense, Greenwashing can be intentional or unintentional. Typical examples of Greenwashing in the financial industry are:

• Making misleading claims on green bond investments through self-labeling, with no guarantee that proceeds meet sustainability-eligible criteria

• Granting green mortgages to finance buildings with low energy efficiency

• Touting long-term commitments to climate goals (e.g. achieving Net Zero) while at the same time sustaining the short-term continuation of business as usual by lending to the fossil fuel sector and related industries

Other, less widely used terms related to Greenwashing are Bluewashing, which describes misleading claims by companies on social matters (e.g. human rights), and SDG-washing, which refers to businesses that claim to be aligned with the United Nations Sustainable Development Goals (SDGs) without taking meaningful action toward achieving them. This becomes apparent when companies focus on goals that are easily achievable rather than prioritizing goals with high impact, or when they highlight only positive impacts and neglect related trade-offs2.

The preconditions and incentives for Greenwashing are fostered by growing expectations that companies communicate about sustainability issues. Staying silent is not an option either: that practice, known as Greenhushing, is even less transparent and more harmful than Greenwashing3, which makes Greenwashing all the more probable an outcome.

From regulatory and risk management perspectives, Greenwashing falls into the same category as misconduct, fraud, corruption, tax evasion, and money laundering. Its distinguishing characteristic, however, is that it is foundational to all ESG risks rather than merely one of them. The negative effects of Greenwashing go beyond legal risk and the associated litigation costs, especially when the concept of Double Materiality is applied. Its transmission channels are multidimensional and affect all risk types.

operational risk

Greenwashing risk emerges within the scope of operational risk when practices to identify it are absent, or inadequate due to lack of knowledge or misconduct. When relying on false or poor-quality data provided by external counterparties, financial institutions unconsciously run the risk of making misleading claims about their sustainable activities. A lack of internal control mechanisms around Greenwashing prevention also allows for intentional misconduct and fraud by employees, for example where their remuneration is linked to “green” market activities (a link encouraged by regulators). Investors can likewise be misled when sustainable investments fail to meet the eligibility criteria set out in pre-contractual disclosures.

credit risk

Greenwashing can lead to underestimated ESG risks. A distorted picture of the environmental impact of investments increases unexpected losses if transition and physical risks materialize. Counterparties' financial situations, and even their solvency, can be significantly harmed, which should be reflected in the estimation of credit risk parameters such as Probability of Default (PD) and Loss Given Default (LGD).


The potential threat of Greenwashing also widens market credit spreads and should be considered in Credit Valuation Adjustments (CVAs).
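
The effect on credit parameters can be illustrated with the standard expected-loss identity, EL = PD x LGD x EAD. The figures below are purely hypothetical and show how a greenwashing revelation that re-rates both PD and LGD inflates expected loss:

```python
def expected_loss(pd, lgd, ead):
    """Standard expected-loss identity: EL = PD x LGD x EAD."""
    return pd * lgd * ead

ead = 10_000_000  # exposure at default, in dollars (hypothetical)

# Baseline parameters as reported vs. parameters after a greenwashing
# revelation re-rates the counterparty (both scenarios are illustrative).
base_el     = expected_loss(pd=0.010, lgd=0.40, ead=ead)
stressed_el = expected_loss(pd=0.025, lgd=0.45, ead=ead)

print(f"baseline EL: ${base_el:,.0f}")   # $40,000
print(f"stressed EL: ${stressed_el:,.0f}")  # $112,500
```

Even a modest re-rating of PD and LGD nearly triples expected loss here, which is why distorted ESG data feeds directly into credit parameter estimates.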

market risk

Similar to the credit risk channel, assets affected by Greenwashing can lose significant market value and even become stranded, so that they cannot be sold. This uncertainty should be reflected in market risk parameters such as Value-at-Risk (VaR). For example, DWS, the second-largest asset management company in Europe, saw its share price fall 13.7% after accusations of misleading claims about its ESG-related Assets under Management (AuM), which also dragged down the share price of its parent company, Deutsche Bank4. Investments in companies or bonds that are uncertain to meet sustainability-eligible criteria might significantly impact the current market value of asset portfolios.

liquidity risk

If Greenwashing risk materialises (i.e. activities claimed as sustainable are suspected not to be), some investors may withdraw their funds, which could lead to a liquidity shortage and increased funding costs. Companies involved in Greenwashing will not be able to attract funds easily, which will be reflected in higher liquidity spreads on their capital market issuances and may spill over to the liquidity of related companies in the same market and industry. On the money market, securing roll-overs of continuous short-term funding might also be impacted in both price and quantity.

On the asset side of liquidity, stranded assets impacted by Greenwashing become less liquid on the market. The respective effect on liquidity buffers and liquidity ratios, such as Liquidity Coverage Ratios (LCRs) and Net Stable Funding Ratios (NSFRs), should not be underestimated as the ability of banks to meet regulatory requirements might be impacted as well. Becoming entangled in such a course of events can lead to further bad press for a company’s standing.
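
To put the market risk channel in concrete terms, the sketch below computes a one-day historical-simulation VaR from a hypothetical return series and compares it with a single-day shock of the magnitude seen in the DWS case (13.7%); all returns are invented for illustration:

```python
def historical_var(returns, confidence=0.99):
    """One-day historical-simulation VaR: the loss at the given percentile
    of the sorted return history (returned as a positive number)."""
    ordered = sorted(returns)
    idx = int((1 - confidence) * len(ordered))
    return -ordered[idx]

# Hypothetical daily return history for an ESG-labelled equity position.
returns = [0.002, -0.004, 0.001, -0.012, 0.003, -0.007, 0.005,
           -0.002, 0.004, -0.009, 0.001, -0.003, 0.006, -0.005] * 20  # 280 obs

var99 = historical_var(returns)
shock = 0.137  # one-day greenwashing shock of the DWS magnitude
print(f"99% 1-day VaR: {var99:.1%}, greenwashing shock: {shock:.1%}")
```

A shock of this size sits far outside the routine return history, which is the point of the passage above: VaR calibrated on ordinary data will not capture a greenwashing event unless it is added as an explicit stress scenario.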

reputation risk

A financial institution's reputation may be harmed by Greenwashing through the negative image it creates among stakeholders. The effects are not linear and can carry significant costs of rebuilding the brand and its standing in the market. A prominent case is the Volkswagen emissions-cheating scandal: the brand has still not fully recovered from the damage done in 20155. With awareness of and commitment to sustainability growing within the financial services industry, Greenwashing practices that fall short of public expectations, or even mere accusations of them, will become ever riskier for a company's reputation.


legal risk

When a Greenwashing practice is identified, financial institutions may face legal claims from investors and environmental activists, and the risk then materializes through increased litigation costs. Even without direct criminalisation of Greenwashing under current European law, regulations on consumer protection and fair competition allow legal action against misleading sustainability claims. In the United States, the Federal Trade Commission can investigate companies' environmental marketing claims under the so-called “Green Guides.” Settlement costs usually run between 1% and 15% of a company's profits, and several non-financial corporates have already paid millions of dollars, one of the highest settlements being $21 million by Fairlife LLC6.

regulatory risk

Greenwashing is a huge threat to fulfilling regulatory expectations on the management of ESG risks and meeting sustainability goals. The risk of becoming non-compliant brings possible regulatory sanctions such as penalizing factors or increased capital requirements. Financial regulators around the world have strengthened requirements on sustainability disclosures by market participants, which increases the regulatory risk of Greenwashing given the obligatory nature of external reporting.

With the Sustainable Finance Disclosure Regulation (SFDR) and the Corporate Sustainability Reporting Directive (CSRD), European regulatory authorities are leading the way in the supervision of Greenwashing activities. Upon request by the European Commission (EC), the European Supervisory Authorities (ESAs) have launched an industry-wide survey on key aspects and risk exposures associated with Greenwashing and related market practices7.

The European regulators are thus assessing the need for amendments to the EU supervisory framework and the EU single rulebook8, with the main purpose of identifying, preventing, investigating, sanctioning, and remediating Greenwashing in the financial market. New supervisory measures are therefore expected, including assessments of mandatory disclosures, investigations, and formal enforcement actions such as sanctions or other administrative measures.

conclusion

Given the multidimensional and complex nature of Greenwashing, it cannot be left on the sidelines of internal risk management. Underestimating its potential impact on the major risk types leads directly to undercapitalization of those risks.

Moreover, the responsibility for tackling Greenwashing should not rest solely with regulatory authorities. It should primarily be prevented and managed by financial institutions and their customers.

To increase awareness and accountability, Greenwashing should be integrated as a separate risk category or risk factor, included on a stand-alone basis in the risk management and prudential regulation framework, and explicitly considered in all three Pillars of the Basel framework as a minimum standard. Once this bottom line is safeguarded, the learning curve in ESG risk management will become clearer, and risk mitigation and steering more effective.

references

1. EBA. “The EBA publishes its roadmap on sustainable finance”, Press release, 13 December 2022, available at https://www.eba.europa.eu/eba-publishes-its-roadmap-sustainable-finance

2. Dinter, L. “5 Tips to avoid SDG washing”, 7 October 2021, https://sustainlab.co/blog/check-out-our-five-tips-to-avoid-sdg-washing-and-make-sure-your-company-achievesa-real-positive-sustainability-impact

3. World Economic Forum. “What is ‘greenhushing’ and is it really a cause for concern?”, 18 November 2022, available at https://www.weforum.org/agenda/2022/11/what-is-greenhushing-and-is-it-really-a-cause-for-concern/

4. Azzouz, M. “Greenwashing allegations are jolting the financial industry: heightened needs for cautiousness, integrity and guidance”, 30 September 2021, available at https://gsh.cib.natixis.com/our-center-of-expertise/articles/greenwashing-allegations-are-jolting-the-financial-industry-heightened-needs-for-cautiousness-integrity-and-guidance; see also Schiftler, A. “DWS and the Global Crackdown on Greenwashing”, Morningstar, 19 September 2022, available at https://www.morningstar.co.uk/uk/news/226564/dws-and-the-global-crackdown-on-greenwashing.aspx

5. Learnsignal. “The Volkswagen Emissions Scandal: A Comprehensive Overview”, 2020, https://www.learnsignal.com/blog/volkswagen-emissions-scandal-overview/#:~:text=The%20Volkswagen%20emissions%20scandal%20had,reputation%20for%20honesty%20and%20integrity

6. Washington Legal Foundation. “Greenwashing litigation disincentivizes adoption of ESG values”, 10 August 2022, available at https://www.wlf.org/2022/08/10/wlf-legal-pulse/greenwashing-litigation-disincentivizes-adoption-of-esg-values/

7. EBA. “ESAs launch joint Call for Evidence on greenwashing”, Press release, 15 November 2022, available at https://www.eba.europa.eu/esas-launch-joint-call-evidence-greenwashing

8. EBA. “The EBA publishes its roadmap on sustainable finance”, Press release, 13 December 2022, available at https://www.eba.europa.eu/eba-publishes-its-roadmap-sustainable-finance

author

Ina Dimitrieva

Ina Dimitrieva is a business advisor in ESG & Sustainability, partnering with financial institutions, public organizations, and management consultancies to accelerate the transition towards a green economy. She provides state-of-the-art solutions for financial market participants, enabling them to be forerunners in sustainability and effective managers of ESG risks. She is based in Vienna, Austria, where she graduated from the University of Economics and Business Administration.

peer-reviewed by

Elisabeth Wilson

today's inflation risk: new approaches

Synopsis

Inflation has multiple ramifications that create downside risks to growth. The traditional tools for controlling it are to raise rates and absorb excess liquidity, thereby diminishing the consumer demand propelled by rising prices. When demand comes down, the inflation trajectory can correct itself and eventually stabilise prices. New thinking on product pricing strategies, however, offers alternate ways to douse inflation.

introduction

Inflation is a hot topic that economists across the world are discussing, and it currently affects everyone indiscriminately. What shall we do? How should we react? The classic corporate response to inflation is to select one of three unattractive traditional options: managers can upset their customers by raising prices, upset their investors by cutting margins, or upset practically everyone by cutting corners in order to cut costs. The most popular option is the first.

What this overlooks, however, is that these three options are shortsighted tactical relics of earlier eras. In the 1970s, when “stagflation” gripped major economies, managers lacked the technology, the data, and in many cases even the notion to do anything bolder or more strategic. When inflation hit during the Great Recession of 2008-09, managers still saw themselves as constrained within the same old trilemma, asking which of the three evils to choose.

Inflation in 2022 is a different story. Managers now enjoy a level of market visibility and agility that their predecessors could hardly have imagined even one generation ago. It is an ideal time for them to treat inflation as a strategic opportunity rather than a tactical challenge, and to choose from a better set of options. Instead of worrying about how much more to charge their customers, they should devote their resources to figuring out how and why they should be charging them, bringing innovation to the management of inflation risks.

new options in the inflation fight

Managers should consider three strategic options, especially if inflation persists: recalibrate and clean up the product portfolio, reposition the brand, or replace the price model. These options are not mutually exclusive, so managers can also pursue a combination of them; nothing need be given up, and all can be implemented concurrently.

Seen this way, inflation is less a problem than an opportunity for managers. Take the first option, recalibrating the product portfolio: companies have several ways to implement it. They can bundle or unbundle existing products, either to create new value propositions or to expose customers to lower price points for the disaggregated goods and services they want to buy. They can draw on insights from behavioral economics to change price gaps and steer customers toward more profitable offerings. Depending on what they have in their R&D pipelines, or how flexible their production capacity is, they can also introduce less expensive alternatives or higher-end products that make the existing product line appear more affordable.

addressing customers’ real sensitivities

When thinking about inflation, managers tend to overemphasize price sensitivity as the key factor in determining how customers will respond to any changes they see. But they should keep in mind that customers are also quantity sensitive and quality sensitive.

If customers are more price sensitive than quantity sensitive, as research shows they are1, they are less likely to notice a price increase that takes the form of a smaller quantity at a constant price. Quality sensitivity comes down to the features customers could live without or accept at a lower level. If a product or service has several such features, managers can consider whether removing or adjusting them creates opportunities for new versions with fewer features at a lower price point. The converse also holds true: slight improvements in quality can unlock a much greater willingness to pay without a significant increase in costs, allowing the company to establish new offerings at the higher end of the market.
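
The quantity-sensitivity point is simple arithmetic: shrinking the pack at a constant shelf price is a hidden unit-price increase. A small illustration, with hypothetical pack sizes:

```python
def effective_price_increase(old_qty, new_qty):
    """Unit-price increase implied by shrinking the pack at a constant
    shelf price: (price / new_qty) relative to (price / old_qty)."""
    return old_qty / new_qty - 1

# A 500g pack quietly becomes 450g at the same shelf price:
increase = effective_price_increase(500, 450)
print(f"effective unit-price increase: {increase:.1%}")  # 11.1%
```

A 10% cut in quantity thus raises the effective unit price by about 11%, typically with far less customer pushback than an explicit 11% price rise.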

At any given time, most offerings are either overpriced or underpriced — in some cases significantly — relative to the value they deliver. A wave of inflation offers managers an opportunity to correct these misalignments in their product positioning.

Let’s start with overpriced products. When a company is investing large amounts in marketing to maintain or prop up a value proposition that is becoming increasingly tenuous, a price cut can make sense. That can happen when an offering is losing its competitive edge or was priced too high from the outset. Inflation makes it riskier to maintain that position. The company can solve this problem by reducing marketing expenditures and lowering the price at the same time to support a more realistic positioning. Depending on the magnitude of those changes, the moves could even lead to higher profits.


The more common situation, however, is that a product is underpriced relative to the value that customers derive. In that case, the uncertainty surrounding inflation, combined with customers’ expectations that they might need to pay more, provides an opportunity to change communication and position a product in a higher price tier. That opportunity is especially promising when the company has relied on low prices as a source of competitive advantage.

Intrigued by the success of subscriptions and “my-product-as-a-service” models, many companies have already considered adopting new price models. The immediate need to respond to inflation gives them a compelling reason to implement those plans now and avoid settling for the least of the three evils in the classic trilemma.

One drawback to the “raise your prices” answer to inflation is that you might lose customers who can’t bear the higher costs. But is it worth losing any customers when they might not be “bad” customers (whatever that means), but victims of an outdated transactional model that restricts their ability to try and use your product? Access, consumption, and outcome-based price models allow more customers to buy and use what they need from you, when they need it, rather than tying up their money in expensive assets or foregoing the inputs they need to develop better products at their own speed and scale.

New pricing models often sacrifice the upfront revenue impact of big-ticket sales, but they generally make up for it with higher recurring revenue over a customer's lifetime. Investors tend to find these revenue streams more attractive because they are predictable and because they spread risk over a broader base of customers.
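
The trade-off between upfront and recurring revenue can be sketched with hypothetical figures, comparing a one-off license price against an annual subscription fee for a customer who stays five years:

```python
def cumulative_revenue_perpetual(license_price, years):
    """One-off sale: all revenue lands up front, so the cumulative
    total is flat after year 1."""
    return [license_price] * years

def cumulative_revenue_subscription(annual_fee, years):
    """Recurring model: revenue accrues for every year the customer stays."""
    return [annual_fee * (y + 1) for y in range(years)]

# Hypothetical figures: a $600 perpetual license vs. a $240/year subscription.
perpetual = cumulative_revenue_perpetual(600, 5)
subscription = cumulative_revenue_subscription(240, 5)

# First year in which cumulative subscription revenue overtakes the one-off sale.
crossover = next(y + 1 for y, (p, s) in enumerate(zip(perpetual, subscription))
                 if s >= p)
print(f"subscription overtakes the one-off sale in year {crossover}")
```

Under these invented numbers the recurring model catches up within a few years and keeps growing afterwards, which is the trade investors reward with more predictable revenue.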

conclusion

That final point is what makes new price models a strategic decision rather than a tactical response. In 2013, Adobe switched from selling customers perpetual licenses on plastic discs in boxes to selling software access via a monthly subscription to its Creative Cloud. During the adjustment period the company absorbed slight declines in revenue and profit, but it has achieved strong growth ever since. Opportunities for change exist: the food industry was hit hard by inflation in 2022, a year in which world inflation reached 8.8%.

With today's data resources and analytical power, there is no reason why corporations should not explore an attractive strategic response to inflation rather than choosing between tactical price increases, margin hits, or reductions in quality. If companies do not want to make a wholesale change to a new model, they can let the new and old models coexist and allow customers to self-select. Managers across businesses should also collaborate in the world arena to find ways to solve the inflation problem and douse its risks. Inflation can then be treated as a strategic opportunity for innovation in risk management.


references

1. Gourville, J. & Koehler, J. “Downsizing Price Increases: A Greater Sensitivity to Price than Quantity in Consumer Markets.” January 2004. SSRN Electronic Journal. https://www.researchgate.net/publication/228149855_Downsizing_Price_Increases_A_Greater_Sensitivity_to_Price_than_Quantity_in_Consumer_Markets

peer-reviewed by

Dr. K. Srinivasa Rao

author

Dr. Maya Katenova, Assistant Professor of Finance, KIMEP University. Maya teaches bachelor’s students as well as master’s students including Executive MBA students. She received a Teaching Excellence Award in 2017. Courses in her teaching portfolio include Financial Institutions Management, Ethics in Finance, Financial Institutions and Markets, Principles of Finance, Corporate Finance, and Personal Finance.

She has supervised several students' master's theses and has numerous publications in a range of journals, including high-quality outlets. Her research interests relate mostly to corporate social responsibility and global ethics.

Maya holds the Professional Risk Manager designation and is planning to teach Risk Management in the future. Her future career is strongly connected with Risk Management conferences, symposiums, and workshops.

Maya Katenova, DBA, PRM, CMA