CONNECT MAGAZINE - ISSUE 2



On the Horizon: Chris Smigielski explores the themes that will dominate risk management over the next 5 years

The Power of Risk Psychology: Patrick Fagan looks at how banks can use psychology to help manage customer risk behaviors

How Much AI is Too Much AI?: Guest editor Chandrakant Maheshwari reviews key themes from our AI in Financial Services conference

Model Risk Best Practice: Givi Kupatadze examines the way in which evolving regulatory and governance demands are changing Model Risk Management

www.cefpro.com/magazine

The views and opinions expressed in this publication are those of the thought leader as an individual, and are not attributed to CeFPro or any particular organization.

FOREWORD

The Future is Here

Guest Editor Chandrakant Maheshwari welcomes you to Connect and looks at how the convergence of a new political and technological era will impact the future strategy of risk management

BEYOND THE NUMBERS: HOW BEHAVIORAL SCIENCE IS SHAPING THE FUTURE OF BANKING RISK

Behavioral scientist Patrick Fagan looks at how banks can reduce consumer risk by better prediction of customers’ financial actions through personalized data-driven insights

Patrick Fagan is co-founder of Capuchin Behavioural Science and The Factory AI, and a former lead psychologist at Cambridge Analytica

ON THE HORIZON: INSIGHT FROM THE INSIDE

NAVIGATING THE FUTURE OF RISK MANAGEMENT IN FINANCE – AN EDITORIAL PERSPECTIVE

Guest editor Chandrakant Maheshwari looks back at CeFPro’s AI in Financial Services conference and explores the importance of balancing dynamic risk management skills and AI tools with human collaboration.

Chandrakant Maheshwari is FVP, Lead Model Validator at Flagstar Bank, NY


THE CASE FOR DATA IN SUSTAINABLE FINANCE: HOW DO WE DRIVE REAL IMPACT?

CeFPro’s Thea Holland shares research suggesting that data quality could be key to driving growth in a sector expected to be valued at over $6.6 trillion in 2024

Thea Holland is an Event Producer at CeFPro


EXPLORING THE GROWING CHALLENGE OF LARGE LANGUAGE MODEL VALIDATION

Large Language Models (LLMs) have become powerful tools in AI and risk management – but as their popularity grows, so do the challenges of validating them.

An interview with Indra Reddy Mallela, VP-Model Risk Manager at MUFG Bank


HOW MUCH AI IS ENOUGH? BALANCING THE FUTURE OF LARGE LANGUAGE MODELS

Chandrakant Maheshwari asks whether, in the rush to integrate AI into financial services risk management, there’s a danger we’re seeking complexity for complexity’s sake.

Chandrakant Maheshwari is FVP, Lead Model Validator at Flagstar Bank, NY


THE GHOST EFFECT IN AI – WHAT IS IT AND WHY SHOULD WE CARE ABOUT IT?

Guest editor Chandrakant Maheshwari looks at how the ghosts of past data impact current LLM modelling, and the challenges of tackling them.

Chandrakant Maheshwari is FVP, Lead Model Validator at Flagstar Bank, NY

A new regular feature that looks beyond the immediate risk environment and offers industry experts the opportunity to share their take on emerging trends and challenges over the next 3 to 5 years. This month, Chris Smigielski, Director of Model Risk Management at Arvest Bank, looks at the long-term regulatory impact of AI and cybersecurity risk

BEST PRACTICE IN MODEL RISK MANAGEMENT FRAMEWORKS FOR BANKS

Givi Kupatadze, Head of Model Risk Management at TBC Bank, explains why risk managers should reassess their MRM strategies and examines how best practice in the discipline is changing over time

Givi Kupatadze, PhD is Head of Model Risk Management, TBC Bank

The Future is Here

Chandrakant Maheshwari FVP, Lead Model Validator Flagstar Bank

In this issue of Connect, we explore the critical themes surrounding the future of risk management, building on the valuable insights shared at CeFPro’s AI in Financial Services conference, held in October.

The articles in this edition of Connect Magazine provide fascinating food for thought as we embark on a journey to a destination that is only partially known.

I encourage you to embrace the culture of future exploration and collaboration represented by the experts and contributors whose views are reflected here. Together, we can navigate the complexities of an AI-driven world, ensuring that we remain connected to both the digital landscape and each other.


As we advance, let us commit to fostering effective risk management practices that will serve us well in the dynamic financial landscape ahead. Continuous learning, verification, and validation will be the guiding principles that drive our success in this new era.

In this edition, you will also be able to gain a fascinating insight into how consumer behavior in finance is often shaped by irrationality and biases, as seen in trends like NFTs, where emotion and heuristics (e.g., the bandwagon effect) drive decisions.

Financial institutions are now using behavioral science to reduce risk, prevent fraud, and improve customer outcomes through personalized nudges based on data and psychology. For instance, timely reminders lower loan defaults, while personality-based messaging aids fraud prevention.

Behavioral scientist Patrick Fagan, who has helped some of the world’s largest brands to leverage data psychology and influence consumer purchasing decisions, looks at how financial institutions can reverse-engineer that thinking to better guide customers toward positive financial habits and security.

I hope you enjoy this edition of Connect Magazine. If you’d like to contribute to Connect, guest edit a future edition, or explore advertising or advertorial opportunities for your organization, please contact the editorial team (all our relevant contact details are on the page opposite).

Chandrakant Maheshwari

Beyond the Numbers: How Behavioral Science is Shaping the Future of Banking Risk

Would you like to buy a JPG of a monkey? It can be yours for £50,000.

At least, that’s what Justin Bieber’s Bored Ape NFT (Non-Fungible Token) was last valued at. It might sound like a lot, but it was priced at over a million pounds in 2022.

The NFT craze is a great example of how consumer behavior – even in financial services, where you might think (or hope) that people are ruled by logic and spreadsheets – is influenced by irrationality, emotion, and bias. In this case, heuristics like the bandwagon effect and scarcity played a big role.

Behavioral science is vital for survival in financial services. Without it, you risk falling prey to customers’ biases and flaws. Over £1.2 billion is lost to fraud alone every year, and the vast majority of this is due to human error rather than technical weaknesses. All the security protocols and penetration testing in the world do little to stop a person from sharing a picture of their credit card on Instagram, for example; and the most common password in the world is 123456.

The fact is, all of us – including your customers – are cognitive misers. That means we have very limited brainpower for paying attention to the world and for making decisions. It’s impossible to put a number on it, but one guess based on sensory neurons firing in the brain is that we’re consciously aware of only 0.0004% of everything the brain is processing at any one time.

We can’t think through all of our decisions carefully, so we have to rely on quick shortcuts called heuristics. If one bank has good reviews and one bank has bad reviews, which will you choose? It’s an immediate gut response with little careful thought – depending in this case on a bias called social proof.

This reliance on heuristics is true even when you’d think people would be careful and logical – like financial services. For example, one study found that logins to an investment app correlated with the performance of the stock market: the worse the market was doing, the less likely people were to log in. When the news was bad, they didn’t want to know. It’s called the ostrich effect.

Other examples include fluency, where people are put off by overwhelming information (one study found that every ten funds added to a pension plan reduced participation by 1.5-2.0%); or the default effect, where we tend to go with the status quo, which is why the UK’s auto-enrollment pension scheme is projected to increase yearly contributions by £33 billion within ten years.

Yet the application of nudges is not just about gains. More importantly, it prevents losses and reduces risk. Take the case of fraud prevention. Some banks have implemented a simple yet effective nudge: a popup message asking customers to pause and reflect before making large transfers. This brief moment of contemplation has been shown to significantly reduce the incidence of fraud, as it interrupts the automatic thinking that scammers often exploit. It takes people from a ‘hot’ state into a ‘cold’ one. It’s the same thinking behind that social media alert asking if you’re sure you want to repost that article without reading it.

In the realm of loan repayments, timely reminders have proven to be a powerful nudge, using the principle of saliency. We only act upon what is front of mind. A study by the Financial Conduct Authority found that sending personalized text message reminders to customers a few days before their loan repayment was due reduced default rates by up to 28%.

Patrick Fagan is a Behavioral Scientist with expertise in nudging, comms and data psychology. He is a Sunday Times bestselling author, a guest lecturer at UCL and Lecturer in Consumer Psychology at University of the Arts London and is a former lead psychologist at Cambridge Analytica.

Crucially, however, nudges are not one-size-fits-all. What works for some audiences, or contexts, may not work for others. For example, saying a product is bought by every household in the country wouldn’t work for a luxury handbag; likewise, few people are scrambling for limited edition cans of baked beans.

One study sent over 50,000 letters advertising loans to households in South Africa. The researchers investigated the effect of various nudges on loan uptake – for example, simplifying the information in the letter had a significant effect. They also found that adding a picture of a smiling woman increased uptake – but only among men. The nudge worked, but only for a particular target group.

In banking, one paper concluded that the effectiveness of default options in retirement savings plans varied based on individuals’ financial literacy levels, wherein those with lower financial literacy were more likely to stick with the default option. Other studies have found that the default effect is more effective for anxious people.

It’s not just the message that can be targeted to reduce risk, but also how it is communicated. Research has shown, for example, that extroverts will respond better to bright colors, social imagery, and casual language. This kind of personality-based targeting increased the conversion rates of Facebook ads by up to 50% in one study. It hasn’t been tested in areas like fraud or defaults yet, but the potential is enormous.

The key here is that personality traits – that is, underlying behavioral dispositions – predict outcomes consistently across multiple contexts. Not only can personality predict nudge susceptibilities and messaging preferences, it can also predict banking behavior (both adverse and otherwise) – for example, impulsiveness has been linked to loan default, disagreeable ‘dark traits’ to financial dishonesty, neuroticism and disorganization to poor financial planning, and anxiety, impulsiveness, and agreeableness to scam susceptibility. If you understand a customer’s personality, you can predict their propensity towards certain behaviors, and you know the best way to message and nudge them into, or out of, acting a given way.

Crucially, since personality is simply consistent behavior, it can be predicted from the data points people leave behind – their digital footprints. An analysis of financial transaction data found that people who are less conscientious, for example (and thus more likely to default), take more cash out and spend more on takeaways, while those who are more conscientious put more money into savings accounts. People who are more disagreeable (and thus perhaps likelier to be less honest with their bank) spend more money on investments and legal fees.

This kind of ‘data psychology’ approach can also be used contextually. Companies are, for example, able to predict loan defaults based on smartphone metadata – where users are more likely to default if they take more photos at night (suggesting a degree of impulsive sensation-seeking), have no fitness apps installed (suggesting low conscientiousness), or have many finance apps installed (suggesting a tendency to seek credit). Elsewhere, research suggests that deception and fraud can be predicted based on how quickly or slowly people answer questions in online forms.

Overall, there is significant potential to combine data and psychology to reduce risk. It could look like this: a user’s financial footprint suggests they are high in impulsiveness and low in conscientiousness (for example, spending a lot on taxis and nights out, and less on savings). That profile suggests a higher risk of loan default, precipitating a ‘just in time’ reminder of their payment, delivered as a fear-of-missing-out nudge in an urgent and exciting tone of voice.
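To make that pipeline concrete, here is a minimal sketch in Python of how such a scoring-and-nudge flow might look. The feature names, weights, and threshold are hypothetical choices made for illustration only; a real system would estimate them from validated behavioral data.

```python
# Hypothetical sketch: inferring a default-risk signal from spending
# behavior and choosing a matching nudge. Feature names, the 0.6
# threshold, and the messages are illustrative, not real estimates.

def impulsiveness_score(txns: dict) -> float:
    """Crude proxy: share of spend on taxis/nights out vs. savings."""
    discretionary = txns.get("taxis", 0) + txns.get("nights_out", 0)
    savings = txns.get("savings_deposits", 0)
    total = discretionary + savings
    return discretionary / total if total else 0.0

def pick_nudge(txns: dict) -> str:
    """Map the inferred trait to a message style (assumed mapping)."""
    if impulsiveness_score(txns) > 0.6:
        # High impulsiveness: salient, urgent, fear-of-missing-out framing.
        return "Don't miss tomorrow's payment - keep your perks!"
    # Lower-risk profile: a plain, informational reminder suffices.
    return "Reminder: your loan payment is due tomorrow."

customer = {"taxis": 420.0, "nights_out": 310.0, "savings_deposits": 90.0}
print(pick_nudge(customer))  # urgent framing for this spending pattern
```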

This kind of targeted psychology has been used to nudge people away from adverse behaviors in other sectors like public health – yet in financial services it is currently underexploited.

The future of banking lies not just in crunching numbers, but in understanding people. By embracing behavioral science, banks can evolve from mere money managers to true partners in their customers’ financial journeys. In this brave new world of behavioral banking, success will come not just from predicting behavior, but from nurturing it—gently nudging customers towards a brighter financial future.

Choose confidence, Choose LSEG World-Check

Access accurate and structured information to help you meet your KYC and third-party due diligence screening obligations.

Navigating the Future of Risk Management in Finance – An Editorial Perspective

Chandrakant is First Vice President, Lead Model Validator at Flagstar Bank, New York. He has more than 15 years’ experience in Financial Risk Management (Market and Credit risk) and has previously worked with business consulting firm Genpact.

As the guest editor of the November edition of Connect Magazine, I am very excited to offer key ideas and insights that emerged from CeFPro’s AI in Financial Services conference in New York last month.

This gathering brought together thought leaders, industry experts, and practitioners to explore the transformative potential of artificial intelligence in the financial sector. The discussions highlighted the pressing need for an evolving skill set in risk management as we navigate the complexities introduced by AI and digital technologies.

Embracing Dynamism in Risk Management

In today’s fast-paced financial environment, risk management is no longer a static field. Professionals must cultivate a dynamic mindset, characterized by a relentless curiosity to explore and verify information.

As artificial intelligence (AI) tools become more prevalent, the ability to adapt to new methodologies and technologies is paramount. It is no longer sufficient to have established opinions or practices; successful risk managers must actively engage in continuous learning and remain open to change.

For more than 30 years, continuous learning has been a cornerstone of formal risk management frameworks. However, in an era marked by rapid technological advancements, the need for continuous verification and validation has never been more critical.

Risk managers must routinely assess their strategies and approaches, ensuring they align with the latest developments in AI and risk assessment methodologies.

Those who thrive will be those who seek to validate their beliefs, challenge the status quo, and embrace innovative solutions. This adaptability will enable professionals to address the complexities and uncertainties inherent in modern finance.

The Critical Role of Social Skills

As technology advances, the importance of social skills within risk management cannot be overstated. The rise of AI and machine learning has enhanced data analysis capabilities, but it has also led to a tendency for professionals to operate within their ‘bubbles’.

While technology can provide quick insights, the richness of human interaction remains essential for holistic risk assessment.

At the conference, delegates witnessed firsthand how collaboration and networking can lead to fresh insights and innovative solutions.

Historically, effective risk managers stepped outside their comfort zones to gather information and collaborate with colleagues. Today, reliance on automated systems can unintentionally diminish critical thinking and collaborative skills.

To counter this trend, professionals must engage with one another, share insights, and challenge different viewpoints. This collaborative environment fosters a deeper understanding of risks and facilitates better decision-making.

Leveraging Collective Knowledge

In the realm of risk management, the collective wisdom of experienced professionals is a goldmine of knowledge. Each individual’s unique insights contribute to a broader understanding of risk dynamics.

While automated systems can streamline data access, the best strategies often emerge from meaningful discussions and collaborative efforts. As we face challenges in data interpretation and automated analysis, the need for critical thinking and interpersonal skills becomes even more pronounced.

The discussions at the conference reinforced the idea that professionals who actively engage with their peers will find that shared knowledge and diverse perspectives lead to more robust risk management practices.

A Balanced Approach to Risk Management

To effectively navigate the complexities of an AI-driven world, financial institutions must embrace a balanced approach that combines advanced analytical tools with human interaction and collaboration.

Organizations should foster environments that encourage exploration, critical thinking, and teamwork among risk management teams.

By promoting open dialogue and collaborative problem-solving, firms can prepare their employees to meet the challenges of a rapidly evolving landscape. This proactive approach will ultimately lead to enhanced risk management strategies and more resilient financial institutions.

Conclusion

As we look to the future of risk management in finance, it is clear that the evolving skill set required for success hinges on a combination of curiosity, adaptability, and strong social skills. While technology provides unprecedented access to data and insights, the invaluable benefits of human interaction remain irreplaceable.

The Ghost Effect in AI – What is it and why should we care about it?

Chandrakant is First Vice President, Lead Model Validator at Flagstar Bank, New York. He has more than 15 years’ experience in Financial Risk Management (Market and Credit Risk) and has previously worked with business consulting firm Genpact. He is this month’s guest editor of Connect Magazine

In the world of data science and econometrics, one of the most perplexing challenges is the ‘ghost effect’. This concept refers to the lingering impact of past data values on current model parameters, even when those values may no longer be relevant.

The ghost effect poses a significant dilemma: while certain historical data points might not accurately reflect current realities, they can still influence the performance of predictive models.

As a result, these seemingly outdated values can lead to severe miscalculations and inaccuracies in model outputs. This issue has long troubled econometricians and continues to provoke thought among data scientists and analysts alike.

The ghost effect is particularly insidious because it often cannot be simply discarded. Business considerations, regulatory requirements, or even legacy systems may require that certain historical data points remain in the dataset.

Thus, the challenge becomes one of managing these ghosts—acknowledging their presence while attempting to mitigate their negative effects on model performance. The implications of the ghost effect stretch far beyond mere statistical models; they delve into the realms of decision-making and predictive accuracy in various industries.
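A toy numerical example makes the mechanism visible. The sketch below uses nothing more than an exponentially weighted average, with an invented smoothing constant and data series; the point is simply that one outdated shock keeps pulling the current estimate away from present reality, which is the same dynamic that troubles more elaborate time-series models.

```python
# Toy illustration of the ghost effect: one outdated spike keeps
# pulling an exponentially weighted estimate away from current reality.

def ewma(values, alpha=0.1):
    """Exponentially weighted moving average (alpha = weight on new data)."""
    estimate = values[0]
    for v in values[1:]:
        estimate = alpha * v + (1 - alpha) * estimate
    return estimate

# A series that is steady at 100, except for one old spike to 500.
with_ghost = [100.0] * 5 + [500.0] + [100.0] * 10
without    = [100.0] * 16

print(round(ewma(with_ghost), 2))  # ~113.95: still inflated ten steps later
print(round(ewma(without), 2))     # 100.0
```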

As we witness the rapid advancement of large language models (LLMs) and generative AI (GenAI), the ghost effect takes on a new dimension. At CeFPro’s recent AI in Financial Services conference in New York, experts emphasized how GenAI tools are becoming increasingly adept at human-like interactions.

One speaker even claimed that in tests conducted within medical contexts, patients reported that interactions with GenAI felt more empathetic and human-like than those with actual healthcare providers.

While this progression marks a significant leap in AI technology, it also raises the question of how ghosting— rooted in past data—will manifest in these sophisticated systems.

Consider how LLMs operate: at their core, these models are predictive machines, designed to generate responses based on patterns learned from vast datasets.

As these models become more advanced and human-like, the implications of the ghost effect will likely become more pronounced. For instance, if a language model interacts with a user who previously exhibited certain behaviors or attitudes, those past interactions could unduly influence the model’s current responses.

The challenge lies in understanding the extent to which these ‘ghosts’ shape the behavior of LLMs, particularly as they increasingly enter roles that require empathy, understanding, and nuanced communication.

Moreover, the ghost effect doesn’t just reside within the algorithms; it also mirrors human behavior in fascinating ways. In our daily lives, we often encounter situations where we have to decide how to treat individuals based on their past behaviors.

Imagine a customer who has previously behaved poorly; even if they’ve changed their attitude, we may hesitate to extend them the same privileges or discounts granted to more trustworthy customers. However, this response is often tempered by our human capacity for discretion and forgiveness.

We may choose to ‘forget’ past missteps while still retaining the memory of those actions. This human tendency to selectively recall past behaviors is deeply ingrained, shaped by our experiences, education, and the environments in which we were raised.

The complexities of human interactions highlight the unique challenges facing LLMs and GenAI as they evolve. Unlike humans, who navigate their relationships with a blend of emotional intelligence, social understanding, and instinct, LLMs are bound by their programming and the data they have been trained on.

As they become more adept at mimicking human conversation, the potential risks associated with ghosting become increasingly concerning. If an LLM continuously relies on outdated or negative patterns from previous interactions, it may fail to recognize and adapt to positive changes in user behavior, thereby perpetuating biases and inaccuracies.

Furthermore, the ramifications of this ghost effect could extend into critical domains such as healthcare, finance, and customer service.

For example, a language model providing medical advice might cling to outdated patient data, inadvertently influencing its recommendations in ways that could harm patient care.

Similarly, in financial services, an LLM might make decisions based on historical customer interactions that no longer reflect their current circumstances, potentially leading to unfair treatment or financial miscalculations.

The challenge of ghosting in LLMs raises broader questions about trust and accountability in AI systems. As these models take on increasingly human-like roles, users must grapple with the implications of their interactions.

Will individuals trust a system that might be swayed by irrelevant past behaviors? How will organizations ensure that their AI tools are equipped to recognize and adapt to changes in user behavior over time? These are complex issues that deserve careful consideration, particularly as the reliance on AI in sensitive areas grows.

As LLMs become more integrated into our daily lives and workplaces, the ghost effect may lead to unintended consequences that warrant serious reflection.

While data-driven predictions can enhance our decision-making processes, we must remain vigilant about the limitations of these predictive machines.

The nuances of human behavior, shaped by personal experiences and emotional intelligence, cannot be fully replicated by algorithms. As such, the growing reliance on LLMs to facilitate critical interactions raises important ethical and practical questions that the industry must address.

In conclusion, the ghost effect presents a formidable challenge in the realm of time series analysis and predictive modeling, with implications that extend into the future of large language models and generative AI.

As we explore this multifaceted issue, we are left to ponder the risks associated with AI systems that may not fully grasp the complexities of human behavior.

The question remains: as we integrate increasingly sophisticated predictive models into our lives, how do we ensure that these systems recognize the ghosts of past data without allowing them to dictate our present and future interactions?

The answers to these questions may shape the landscape of AI and its role in our society for years to come.

Insights from the Inside

Chris Smigielski has more than 30 years’ experience in the financial services industry. He is currently Director of Model Risk Management at Arvest Bank, and was previously Vice President, Director of Model Risk Management at TIAA Bank.

On the Horizon is a new regular feature in which industry thought leaders take a long view on the emerging trends that they believe will be front of mind for risk professionals over the next three to five years.

As technological advancements and global dynamics evolve, the financial industry faces a range of risk challenges that will require proactive management. This series of interviews with industry leaders aims to explore individual perspectives on key challenges—from artificial intelligence and machine learning impacts to regulatory changes and geopolitical tensions.

In our inaugural interview, Chris Smigielski shares his thoughts on the trajectory of non-financial risk and the issues that he believes will start to demand increasing attention in the medium to long term.

His experience also includes leadership roles at Diebold and Fiserv, as well as Asset/Liability Management and quantitative analysis at HSBC and First Niagara Banks.

What do you see as the most significant risks facing the financial industry over the next 3-5 years, and how should institutions prepare for them?

The significant risks facing the financial industry in the next three to five years include sophisticated cyber threats, data breaches, and AI-enabled fraud. Additionally, AI introduces internal model risks related to bias, decision-making integrity, and transparency, which could negatively impact consumers.

To prepare, institutions should enhance their risk management frameworks, implementing robust governance structures like SR 11-7. The NIST AI & GenAI Risk Management Frameworks provide valuable guidance for governance and controls. Proactive testing, performance monitoring, and strong internal controls, along with cybersecurity and fraud monitoring, are essential to mitigate these risks.

What are some best practices you can suggest for addressing biases in AI models? How can organizations ensure integrity and transparency in decision-making processes?

Addressing biases in AI models requires a comprehensive approach beyond traditional validation techniques. Best practices include:

• Thorough Data Analysis: Examine data representativeness to identify and mitigate biases at the data collection stage.

• Fairness and Bias Assessment: Implement fairness-testing techniques throughout the AI model lifecycle, creating test cases to evaluate model performance across various subpopulations (a minimal sketch follows this list).

• Methodology Review: Analyze algorithms to ensure they do not favor certain outcomes or groups due to design biases.

• Governance Frameworks: Establish frameworks to enhance transparency and accountability, documenting key decisions like data selection, algorithm choice, and performance metrics.

• Independent Audits: Conduct regular monitoring and independent audits of AI models to prevent biases from emerging as models train on new data.

By combining these practices, organizations can proactively ensure that AI systems remain transparent, accountable, and equitable in decision-making.
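As a concrete illustration of the subpopulation testing described above, the sketch below compares a model’s approval rates across two groups and computes a disparity ratio. The groups, the data, and the 0.8 review threshold are hypothetical choices for illustration, not a prescribed standard.

```python
# Hypothetical fairness check: compare a model's outcomes across
# subpopulations. Data and the disparity threshold are illustrative.
from collections import defaultdict

def rates_by_group(records):
    """Approval rate per group from (group, prediction) pairs."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, pred in records:
        total[group] += 1
        approved[group] += pred  # pred is 1 (approve) or 0 (decline)
    return {g: approved[g] / total[g] for g in total}

# (group, model decision) - toy scoring outcomes, not real data.
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

rates = rates_by_group(records)
ratio = min(rates.values()) / max(rates.values())
print(rates)                            # {'A': 0.75, 'B': 0.25}
print(f"disparity ratio: {ratio:.2f}")  # 0.33 - flags a review if < 0.8
```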

With the rapid evolution of technology in finance, such as AI and blockchain, how do you think the traditional banking and finance landscape will change in the next decade?

AI and blockchain are poised to significantly transform the traditional banking landscape over the next decade.

AI will enhance data analytics, provide personalized services, and automate processes such as credit underwriting and fraud detection. However, it also introduces risks like bias and model drift, necessitating rigorous governance to ensure transparency and explainability.

Blockchain technology is set to improve transaction processing efficiency and security in payments, smart contracts, and digital identity verification.

The growing demand for transparency, consumer protection, and accountability will drive an increased focus on ethical AI usage and stringent regulatory oversight.

Could you elaborate on what effective governance structures look like for AI and innovative technologies? Are there specific frameworks or strategies that you find particularly effective?

Effective governance structures for AI and innovative technologies must address the unique risks and challenges these systems pose while ensuring accountability, transparency, and alignment with the institution’s risk management goals.

There are several elements to creating a robust AI governance strategy.

One is Model Inventory and Risk Assessment. Similar to Model Risk Management (MRM) programs, governance structures should maintain an inventory of AI systems, assess their risk profiles, and continuously monitor their performance, focusing on transparency, explainability, and fairness.

It’s also important to address AI-specific risks. AI models introduce challenges like model drift, data bias, and lack of explainability. Governance should incorporate traditional validation techniques and enhancements such as fairness assessments, explainability analysis, and regular audits to detect drift.

Establishing an AI lifecycle framework that includes stages like model development, deployment, and ongoing monitoring is critical for compliance and effectiveness, but it’s also important to utilize established frameworks. Resources like the NIST AI Risk Management Framework provide valuable governance standards for AI technologies.

Finally, there needs to be centralized oversight. The creation of bodies such as an AI steering committee or an AI center of excellence can enhance oversight and ensure alignment with organizational goals and ethical standards.

Integrating AI governance with existing MRM frameworks while tailoring them to the specific risks of AI creates a comprehensive strategy for overseeing both model and non-model AI tools.

What emerging regulatory challenges do you believe financial institutions should prioritize, and how can they best adapt to an increasingly stringent regulatory environment?

There are so many emerging regulatory challenges that financial institutions will need to prioritize.

AI and machine learning risks are obviously key considerations. Regulators are focusing on the fairness, transparency, and compliance of AI models, necessitating robust governance frameworks like the NIST AI and GenAI Risk Management Frameworks for proper monitoring and auditing.

Managing cybersecurity risk will be critical. As cyberattacks become more frequent and sophisticated, enhancing cybersecurity defenses to meet those challenges is crucial.

And then we need to think about data security and privacy. Institutions must address concerns surrounding data ethics and privacy in their AI applications.

To adapt to this increasingly stringent regulatory environment, banks should foster cross-functional collaboration among risk management, compliance, and technology teams and also continuously update their control frameworks to ensure they meet regulatory expectations.

In your experience, what role do you think collaboration between fintech startups and established financial institutions will play in shaping the future of finance?

Fintech startups offer agility and cutting-edge solutions like digital payments, AI analytics, and blockchain applications. By partnering with these startups, traditional banks can enhance customer experience, streamline operations, and introduce new products.

However, this collaboration requires careful management, particularly regarding governance, risk, and compliance. Established institutions must mitigate risks associated with third-party partnerships—such as data security, model explainability, regulatory compliance, and operational resilience—through strong oversight and robust contractual agreements.

What safeguards do you believe are most critical for traditional banks adopting new technologies? How can they balance innovation with compliance and security?

In the highly regulated banking industry, adopting new technologies requires balancing innovation with compliance and security.

Critical safeguards include a robust risk assessment framework. Banks should evaluate the potential risks and benefits of new technologies, ensuring they align with existing infrastructure and regulatory requirements, such as the Fair Credit Reporting Act (FCRA).

As reliance on technologies like cloud computing increases, banks must also implement comprehensive cybersecurity protocols, including encryption, access controls, and continuous monitoring of third-party risks. Regular security audits and penetration testing are essential to identify vulnerabilities.

And then financial institutions will need to look to establish governance frameworks that prioritize transparency, accountability, and ethical considerations. This includes documenting AI-driven decision-making processes, maintaining audit trails, and conducting fairness assessments to prevent bias and ensure regulatory compliance.

By integrating these safeguards, banks can effectively embrace innovation while ensuring compliance and security are upheld.

What trends or developments in the global economy do you believe will have the most profound impact on the financial sector in the near future, and why?

Several global trends are poised to profoundly impact the financial sector, including increasing geopolitical tensions, which drive market volatility, disrupt global supply chains, and create financial uncertainty.

Economic decoupling between major economies may result in new trade barriers, economic sanctions, and shifts in global financial flows.

Finally, the ongoing transition to sustainable finance, driven by climate change concerns and ESG regulations, will compel financial institutions to adapt their portfolios and risk assessments to align with long-term sustainability goals. Institutions that effectively manage these risks will gain a competitive edge in the evolving landscape.

What steps should financial institutions take to align their portfolios with sustainability goals? Are there specific challenges they face in this transition?

Aligning financial portfolios with sustainability goals requires a multifaceted approach tailored to each institution’s geographic footprint, strategic objectives, and regulatory environment.

Key steps include establishing clear, measurable sustainability targets—such as reducing carbon emissions or increasing financing for renewable energy—that are integrated into the broader business strategy.

A significant challenge in this transition is the need for accurate data and reliable metrics to assess the sustainability impact of investments, necessitating the adoption of standardized Environmental, Social, and Governance (ESG) criteria despite inconsistent reporting standards.

Additionally, financial institutions must navigate potential trade-offs between sustainability and profitability, as sustainable investments may involve longer payback periods or lower short-term returns.

To address this, they can develop innovative financial products like green bonds or sustainability-linked loans that align profitability with environmental and social outcomes.

Ultimately, achieving sustainability goals demands strong governance, transparency, and adaptability to meet regulatory requirements and societal expectations.

The Case for Data in Sustainable Finance: How Do We Drive Real Impact?

Sustainable finance has quickly become one of the most dynamic sectors in the global economy. Valued at $5.4 trillion in 2023, the market is expected to grow to $6.61 trillion by 2024, underscoring the sector’s importance as financial institutions around the world adopt greener strategies and set ambitious sustainability targets.

Yet, as the pressure to drive environmental impact increases, so do the obstacles—from regulatory demands to data quality and the emerging field of biodiversity investment.

To better understand the complexities of this evolving landscape, CeFPro conducted in-depth interviews with 30 sustainability experts across banks, insurance, and investment firms.

This research reveals several of the most pressing challenges facing financial institutions today. Here’s a closer look at these key findings and what they mean for the future of sustainable finance.

The Data Dilemma: Challenges in Accessing Quality Sustainability Data

As sustainable finance strategies expand, so does the need for quality data. To make informed decisions, financial institutions require accurate data on greenhouse gas (GHG) emissions, climate scenario analysis, and know-your-customer (KYC) information, among others.

However, sourcing reliable and standardized data remains a significant hurdle. Without consistent data metrics, comparing and assessing sustainability impact can be challenging, leading to unreliable benchmarks that undermine progress.

Current market research shows that, while the role of data in sustainability is growing, the path to obtaining high-quality information is still obstructed by inconsistencies.

A lack of unified reporting standards complicates comparisons across sectors and regions. Moreover, financial institutions often rely on third-party providers for non-financial data, which can introduce gaps in accuracy and timeliness.

Could AI Be the Answer?

Artificial intelligence (AI) is increasingly viewed as a potential solution for managing vast and complex datasets.

By automating data collection and analysis, AI can help reduce dependency on external providers, thus enhancing the accuracy of sustainability metrics. Financial institutions may need to evaluate both the opportunities and the risks associated with transitioning to AI-driven methods.

Integrating AI could improve transparency, streamline reporting, and bring consistency to sustainability data, paving the way for more reliable impact measurement.

Regulatory Shifts and Sustainability: Preparing for the Corporate Sustainability Due Diligence Directive

Navigating the ever-evolving regulatory landscape is another major challenge in sustainable finance. As governments and international bodies set ambitious sustainability goals, financial institutions must adapt to new regulations designed to enforce accountability.

One key example is the EU’s Corporate Sustainability Due Diligence Directive (CSDDD), which will take effect at the national level in 2026.

This directive aims to hold companies operating within the EU more accountable for their environmental and social impact. However, preparing for CSDDD presents its own set of challenges.

Among these challenges is the task of gathering accurate, high-quality data for compliance. To meet CSDDD’s standards, companies must establish strong supply chain transparency, implement continuous environmental and social impact monitoring, and create comprehensive reporting systems.

For many, this means reevaluating their existing data collection practices and investing in new technologies to ensure compliance.

The implications of CSDDD are far-reaching. Companies that successfully adhere to the directive could benefit from increased investor confidence, enhanced corporate reputation, and a competitive edge in the market.

Conversely, those that struggle to meet its demands may face financial penalties and reputational risks. As sustainability regulations continue to tighten, financial institutions must prioritize robust compliance strategies to stay ahead of regulatory expectations.

Beyond Climate: Biodiversity and Nature as Emerging Investment Frontiers

While climate change has long dominated the sustainable finance conversation, biodiversity and nature are beginning to gain traction as vital areas for investment.

Experts argue that integrating biodiversity into sustainable finance is crucial for mitigating environmental risks and creating new opportunities. However, as biodiversity-focused finance remains relatively new, both investors and institutions are still exploring the best ways to identify and support nature-positive initiatives.

Biodiversity loss poses significant financial risks, as industries such as agriculture, fisheries, and pharmaceuticals are highly dependent on natural ecosystems.

Financial institutions are therefore recognizing the need to develop nature-based investment strategies that contribute to biodiversity preservation. Yet, unlike climate metrics, there are fewer standardized methods for measuring biodiversity impact, making it difficult to quantify and track progress.

Thea Holland is an Event Producer at CeFPro

From Risk to Opportunity: Unlocking the Potential of Nature-Based Investments

Despite these challenges, there is growing interest in turning biodiversity risk into investment opportunity.

As financial institutions seek ways to quantify the economic benefits of biodiversity, a range of new metrics and frameworks are emerging. These include innovative tools for assessing ecosystem services, natural capital accounting, and green bonds specifically linked to biodiversity outcomes.

To unlock the full potential of nature-based finance, the industry will need to establish standards that allow for consistent measurement and reporting, thereby building investor confidence.

The Role of Greenwashing Regulations and Consumer Transparency

In response to rising concerns about greenwashing, regulatory bodies worldwide are ramping up transparency requirements for consumer-facing disclosures.

From sustainability product labels to public sustainability reports, financial institutions are under pressure to back their claims with concrete data.

The EU’s taxonomy for sustainable activities, for instance, is setting a framework for what qualifies as a truly sustainable investment. Additionally, the introduction of new greenwashing laws could hold companies more accountable for misleading claims.

With these regulations, companies are required to provide more transparent and accurate disclosures about the sustainability impact of their investments.

Financial institutions, in turn, are tasked with ensuring that their offerings meet these standards, avoiding reputational risks and fostering greater trust among consumers.

Leveraging Behavioral Science for Sustainability

Interestingly, behavioral science is increasingly being employed in the financial sector to nudge customers towards more sustainable choices.

Financial institutions are exploring ways to incentivize environmentally friendly spending habits and sustainable investment decisions.

For instance, some banks have introduced “green nudges” that encourage customers to support ecofriendly businesses or opt for paperless statements. Such initiatives are proving effective at raising awareness and fostering sustainable consumer behavior.

Behavioral science also plays a role in encouraging corporate clients to adopt greener practices. Simple prompts, such as messages reminding companies of the benefits of sustainability for both the environment and their bottom line, have shown to influence decision-making.

By tapping into behavioral insights, financial institutions can promote more sustainable actions across their client base, ultimately contributing to broader environmental goals.

A Look Ahead: The Path to Sustainable Finance

As the sustainable finance sector continues to grow, the challenges are significant—but so are the opportunities.

Financial institutions are increasingly aware that their role extends beyond profit-making; they are key players in driving a global transition to a greener economy.

From harnessing AI for data quality improvement to adapting to regulatory demands and exploring biodiversity investment, the industry must navigate a complex landscape with agility and foresight.

To stay at the forefront of sustainable finance, institutions must foster innovation, build resilience, and prioritize transparency.

By embracing these principles, they can not only overcome the challenges of today but also seize the opportunities of tomorrow. With sustainable finance predicted to reach new heights, the financial sector is poised to make a meaningful impact on global sustainability efforts.

Want to learn more about the latest trends in sustainable finance? Join us at the Sustainable Finance Europe Summit on February 25-26th in London. Network with leading experts, gain insights into cutting-edge strategies, and explore how your organization can drive meaningful change.

February 25-26, 2025

Why attend?

From innovative trends to practical solutions, our research-driven agenda delivers actionable insights to keep you at the forefront of sustainable progress.

Explore new strategies for sustainability progression and get one step ahead of upcoming regulations. Save over 40% if you secure your place before December 13!

Build your network

Connect with industry leaders, peers, and future partners who will propel your career in sustainable finance forward and leave your mark in this progressive sector.

Exploring the Growing Challenge of Large Language Model Validation

Indra is VP-Model Risk Manager at MUFG Bank. He has previously held senior roles at New York Community Bank (NYCB) and GE Capital. He specializes in Quantitative Analytics, Data Science, Model Validation and Data Analytics and has a strong understanding of Machine Learning and Statistical Modeling.

In the world of artificial intelligence (AI), Large Language Models (LLMs) have emerged as powerful tools capable of understanding and generating human-like text. However, validating these models presents unique challenges that differ significantly from traditional machine learning (ML) models.

Indra Reddy Mallela is VP-Model Risk Manager at MUFG Bank. We caught up with him at our recent AI in Financial Services conference in New York to get his views on this emerging field, and the future challenges it will bring.

Fundamental Differences in Validation Requirements

One of the primary distinctions between traditional ML models and LLMs lies in the type of data they utilize and how they are validated. Traditional ML models typically work with structured data, such as numbers and categories, and their validation often involves standard methods like cross-validation. Metrics such as accuracy, precision, recall, and F1 score are commonly used to evaluate their performance.
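For contrast with the LLM discussion that follows, a minimal sketch of that traditional workflow might look like this, using scikit-learn with a synthetic dataset standing in for a real use case:

```python
# Traditional ML validation: cross-validation plus standard metrics.
# Synthetic data and a simple classifier stand in for a real model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(model, X_tr, y_tr, cv=5).mean())

model.fit(X_tr, y_tr)
pred = model.predict(X_te)
for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(name, round(fn(y_te, pred), 3))
```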

In contrast, LLMs operate on unstructured data, primarily text. This introduces complexities in validation that require a different approach.

According to Indra, validation for LLMs involves assessing the quality of generated content, coherence, and alignment with human-like reasoning. Validators must evaluate outputs for ambiguity, contextual correctness, and creativity, which requires more subjective and context-aware assessments.

Unique Challenges in Validating LLMs

Validating LLMs comes with several challenges, Indra says, particularly concerning data quality and model interpretability:

1. Data Quality: LLMs are trained on vast and diverse datasets that can include biases, inaccuracies, and outdated information. Ensuring that training data is high quality and representative of diverse perspectives is crucial for effective validation.

2. Model Interpretability: Unlike traditional models, where decision-making processes are often transparent, LLMs can be considered “black boxes.” Understanding why a model generates specific outputs can be challenging, making debugging and refinement difficult.

3. Generalization and Alignment: LLMs are known to produce plausible-sounding but incorrect outputs, often referred to as “hallucinations.” Ensuring that these models generalize well to new contexts while maintaining accuracy poses a significant challenge for validators.

Real-World Applications and Improved Outcomes

Indra highlighted several examples where effective LLM validation has led to improved outcomes in real-world applications:

• Risk assessment: In financial institutions, LLMs are utilized to assess Anti-Money Laundering (AML) suspicious-activity risks by analyzing extensive datasets of transaction records and customer interactions. By validating these models through stress tests and real-world simulations, organizations have improved their predictive accuracy, reducing false positives and enhancing the detection of suspicious activities.

• Customer engagement: Companies like OpenAI have implemented LLMs to enhance chatbot interactions in customer service. Through rigorous validation processes that include human feedback and continuous training on real customer conversations, LLMs have been fine-tuned to provide more relevant and personalized responses, resulting in increased customer satisfaction and engagement.

Emerging Industry Standards for Performance Metrics

As the field of AI evolves, several emerging industry standards for performance metrics are shaping best practices in LLM validation (a simplified scoring harness follows the list):

• HELM (Holistic Evaluation of Language Models): This framework focuses on evaluating the overall quality, robustness, fairness, and efficiency of language models.

• GLUE (General Language Understanding Evaluation): This widely used benchmark assesses LLMs on various language understanding tasks.

• MMLU (Massive Multitask Language Understanding): This metric measures a model’s ability to generalize across diverse domains, influencing future best practices by encouraging models that excel in fairness, efficiency, and contextual accuracy.
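Benchmarks of this kind ultimately reduce to scoring model answers against references, task by task. A deliberately stripped-down harness might look like the sketch below, where the two sample items and the model_answer stub are placeholders for a real question set and a real model call.

```python
# Stripped-down, MMLU-style evaluation harness: per-domain accuracy on
# multiple-choice items. Questions and the model stub are placeholders.
from collections import defaultdict

def model_answer(question: str, choices: list[str]) -> str:
    """Stand-in for a real LLM call; always picks the first choice."""
    return choices[0]

items = [
    {"domain": "finance", "q": "What does VaR measure?",
     "choices": ["Potential loss", "Interest income"], "answer": "Potential loss"},
    {"domain": "law", "q": "Who bears the burden of proof in a civil case?",
     "choices": ["Defendant", "Plaintiff"], "answer": "Plaintiff"},
]

correct, total = defaultdict(int), defaultdict(int)
for item in items:
    total[item["domain"]] += 1
    if model_answer(item["q"], item["choices"]) == item["answer"]:
        correct[item["domain"]] += 1

for domain in total:
    print(domain, f"{correct[domain] / total[domain]:.0%}")
# finance 100%, law 0% - the stub gets only the first item right
```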

Leveraging Performance Metrics for Strategic Decision-Making

Organizations can harness performance metrics to align LLMs with business objectives, particularly in high-stakes environments such as healthcare and finance. Indra emphasized that metrics can help:

• Identify areas for improvement: Metrics can pinpoint where models need refinement, such as reducing bias or enhancing accuracy.

• Inform risk management: By assessing a model’s performance in handling edge cases or critical failure scenarios, organizations can better manage risks associated with AI deployment.

• Guide investment decisions: Organizations can use performance metrics to identify which models yield the best return on investment through increased accuracy, customer satisfaction, or cost savings.

Ethical Considerations in LLM Validation

The validation of LLMs also brings ethical considerations to the forefront:

• Bias and Fairness: Given that LLMs can inherit biases present in their training data, organizations must rigorously validate these models for fairness, ensuring outputs do not discriminate based on gender, race, or socioeconomic status.

• Accountability and Transparency: As LLMs generate human-like text, they can inadvertently spread misinformation or inappropriate content. Ensuring accountability in validation and transparency in model outputs is essential for ethical deployment.

• Data Privacy: When LLMs are trained on sensitive or personal data, careful validation is necessary to ensure they do not generate sensitive information in their outputs, adhering to privacy laws like GDPR.

The Future of LLM Validation and Performance Metrics

Looking ahead, Indra sees several trends in the evolution of LLM validation and performance metrics:

• Improved Explainability: We can expect the emergence of more tools and methods aimed at enhancing the explainability of LLMs, allowing stakeholders to better understand how and why models generate specific outputs.

• Adaptive Metrics: As LLMs are applied across various fields, performance metrics may evolve to address domain-specific challenges, such as accuracy in highly regulated industries or ethical considerations in social applications.

• AI Co-Pilots: Innovations in human-AI collaboration will likely become more prominent, with humans working alongside LLMs to enhance validation processes and ensure that AI supports rather than replaces decision-making.

• Real-Time Validation: Organizations may develop mechanisms for real-time validation of LLM outputs, ensuring models continue to perform optimally as new data becomes available, thereby maintaining accurate and relevant outputs.

Conclusion

The validation of Large Language Models presents a complex yet fascinating landscape that differs significantly from traditional machine learning models. Indra’s insight illuminates the unique validation requirements, challenges, and ethical considerations that organizations must navigate.

As LLMs continue to gain traction in various industries, understanding these dynamics will be critical. By leveraging emerging standards and performance metrics, organizations can enhance their AI applications while ensuring accountability, fairness, and alignment with their strategic goals.

As we look to the future, ongoing innovations in explainability, adaptability, and human-AI collaboration will shape the effective deployment of LLMs, ultimately transforming the way we interact with technology.

How Much AI is Enough? Balancing the Future of Large Language Models

The extraordinary rate of proliferation of AI technology within the financial services sector raises an interesting question: just how much AI is enough?

It’s a question that was very much to the fore at CeFPro’s AI in Financial Services conference, held in New York during October, as attendees debated the rapid development of large language models (LLMs) and questioned whether the financial industry truly needs increasingly advanced AI.

High on the list of concerns expressed in the various sessions and more informal discussions at the event were those relating to the balance of technological sophistication with practical applications.

The pace at which LLMs are being developed is staggering. Each day, new models are unveiled, boasting improved speed and efficiency, often claiming to be multiple times better than their predecessors.

Chandrakant is First Vice President, Lead Model Validator at Flagstar Bank, New York. He has more than 15 years’ experience in Financial Risk Management (Market and Credit Risk) and has previously worked with business consulting firm Genpact.

Balancing Technological Sophistication with Practical Needs

It’s remarkable to reflect on the genius required to create the first generations of LLMs. Initially, these models represented a groundbreaking achievement in natural language processing, setting the stage for a new era in AI.

However, as technology companies refine their processes and algorithms, it appears that a wide range of organizations are now capable of producing their own versions of LLMs. This proliferation of models has ignited a competitive race, leaving us to wonder about the implications for the financial services sector.

At the heart of this discussion lies the crucial question of how much AI we are truly ready for. The financial industry is characterized by its complexity and regulatory requirements, making the integration of AI both promising and challenging. While there is no doubt that LLMs have the potential to revolutionize areas like customer service, compliance, and risk assessment, it is essential to consider whether the level of sophistication offered by these models aligns with the actual needs of the industry.

Smartphone Analogy: The Risk of Over-Engineering

A salient analogy that emerged during the conference was that of smartphones. The smartphone revolution transformed how we communicate and access information. Initially, smartphones offered groundbreaking features that dramatically enhanced the user experience. However, as technology progressed, manufacturers began releasing new models packed with features that often exceeded the requirements of the average consumer.

While features like ultra-high-resolution cameras, multiple lenses, and advanced AI capabilities can certainly enrich the user experience, most individuals primarily use their phones for basic tasks such as texting, calling, and browsing social media.

This smartphone analogy serves as a cautionary tale. Just as consumers do not necessarily need the latest smartphone equipped with every conceivable feature to stay connected, the rapid advancement of LLMs may not align with the actual needs of the financial sector. In many cases, a simpler model could effectively meet the requirements without overwhelming users with unnecessary complexity or capabilities.

Take, for instance, the case of a fifth grader needing assistance with math. The obvious choice for a tutor might be a PhD holder or a seasoned educator.

However, a bright tenth grader could effectively fulfill that role, demonstrating that the level of expertise required often depends on the task at hand.

Similarly, when considering AI applications in financial services, it is crucial to assess whether the sophistication of the models being developed is truly necessary for the specific use cases being addressed.

Moreover, the financial industry faces unique challenges when integrating AI technology. Issues such as data privacy, security, and regulatory compliance demand a thoughtful approach.

The introduction of overly complex AI models can exacerbate these challenges, potentially leading to unintended consequences. As financial institutions navigate the complex regulatory landscape, the focus should be on developing AI solutions that enhance operational efficiency without compromising compliance or security.

Innovation vs. Practicality: Avoiding Complexity for Complexity’s Sake

During the conference, experts highlighted the importance of striking a balance between innovation and practicality. While it is essential to leverage advanced AI technologies to drive efficiency and improve customer experiences, there is a danger in pursuing complexity for complexity’s sake.

Financial institutions should focus on identifying specific pain points and determining whether advanced LLMs are truly the best solution. In many cases, simpler models that are tailored to the task at hand can deliver significant value without introducing unnecessary risks.

The race to develop ever more sophisticated LLMs has also raised questions about the long-term implications for the workforce in financial services. As AI technology continues to evolve, there is a fear that many jobs may be rendered obsolete.

AI – Enhancing Rather than Replacing Human Intelligence

However, the consensus among conference attendees was that AI should not be viewed as a replacement for human intelligence but rather as a tool to augment and enhance human capabilities. The financial sector will always require human oversight and expertise, particularly in areas that demand critical thinking, ethical decision-making, and emotional intelligence.

As we look to the future of AI in financial services, it is essential to foster a culture of collaboration between technologists and domain experts. By working together, these groups can ensure that AI solutions are designed with a clear understanding of industry requirements and challenges. This collaboration will help create AI applications that not only leverage advanced technology but also align with the practical needs of the industry.

One of the key takeaways from the conference was the importance of responsible AI development. As the financial services sector embraces AI, it must prioritize transparency, accountability, and ethical considerations. This means establishing guidelines for the ethical use of AI, ensuring that models are developed and deployed in a way that aligns with societal values and legal requirements.

How Best to Harness the True Power of AI for the Benefit of All

The dialogue around LLMs in the financial sector is only just beginning. As more organizations enter the race to develop these technologies, it is crucial for industry leaders to engage in thoughtful discussions about the implications of AI on their operations, workforce, and society as a whole.

The ultimate goal should be to harness the power of AI in a way that enhances the human experience, drives efficiency, and addresses real-world challenges.

In conclusion, the rapid evolution of LLMs presents both opportunities and challenges for the financial services industry.

While the development of increasingly complex models may be enticing, it is essential to consider the practical needs of the sector and strike a balance between innovation and functionality.

As we move forward, let us embrace the potential of AI while remaining mindful of its implications, ensuring that the technologies we adopt serve to enhance, rather than complicate, our financial landscape.

Best Practice in Model Risk Management Frameworks for Banks

Givi Kupatadze, PhD is Head of Model Risk Management, TBC Bank. He is also a Lecturer in Data Science at Caucasus University in Tbilisi, Georgia.

In the era of digital transformation, banks increasingly rely on sophisticated models powered by big data and complex algorithms. However, these same models, designed to enhance risk management and profitability, have themselves become sources of a relatively new threat: model risk. This article aims to elucidate key aspects of model risk management (MRM) and highlight best practices employed by banks to mitigate this risk amid increasing regulatory scrutiny. The article focuses on three critical components of the MRM framework: model identification, model risk tiering, and model risk appetite.

Model Risk

Model risk can arise from several sources: a) Incorrect model development; b) Correct development but flawed implementation; and c) Correct development and implementation but inappropriate model use.

The Supervisory Guidance on Model Risk Management (FRB/OCC, SR 11-7, 2011), the first comprehensive regulatory framework for MRM, defines model risk as: “The potential for adverse consequences from decisions based on incorrect or misused model outputs and reports. Model risk can lead to financial loss, poor business and strategic decision-making, or damage to a banking organization’s reputation.”

MRM Framework

Recognizing the importance of model risk, forward-thinking banks are implementing robust Model Risk Management (MRM) frameworks. A comprehensive MRM framework consists of several key components, each playing a crucial role in effectively managing model risk. Table 1 below summarizes these components and provides examples of their practical implementation within banks.

Table 1: Components of Robust MRM Framework

• Model Risk Appetite Statement – to facilitate efficient management and monitoring of model risk

• Roles and Responsibilities – to ensure accountability across all levels of the organization; for example: Supervisory Board, Board of Directors, Model Developers, Model Implementation, Model Owners, Model Validators, Model Risk Governance, Audit

Table 1 provides a clear overview of the essential elements that constitute a robust MRM framework. Each component contributes to the overall effectiveness of model risk management within the organization. By implementing these components, banks can create a comprehensive approach to managing model risk, aligning with regulatory expectations and industry best practices. What follows is a discussion of the practical aspects of selected key elements of the MRM framework.

Model Identification

To effectively manage model risk, simple and clear guidance about what constitutes a model is a critical first step, serving as the foundation for subsequent risk assessment, validation, and governance processes. Regulatory frameworks have evolved to provide comprehensive definitions of models. SR 11-7 defines a model as:

“A quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.” This definition encompasses “quantitative approaches whose inputs are partially or wholly qualitative or based on expert judgment, provided that the output is quantitative in nature.”

The Model Risk Management Principles for Banks (Bank of England, SS1/23, 2023), the most recent comprehensive regulatory framework, further refines the concept of a model by distinguishing between quantitative methods and models: “Models are a subset of quantitative methods. The outputs of models are estimates, predictions, or projections, which ... are inherently uncertain.” This definition clearly delineates the difference between models and quantitative methods by emphasizing the presence of uncertainty in model outcomes.

Synthesizing the guidance from SR 11-7 and SS1/23 gives banks the clarity to differentiate between models and non-models, such as calculators. Specifically, the presence of the four components described in Table 2 below typically qualifies a quantitative process as a model.

Table 2: Four Components of Models

• Input data and assumptions – emphasizes the complexity and variety of inputs that models typically require

• Quantitative process – highlights the sophisticated mathematical or statistical techniques often employed in models

• Uncertain model outcome – underscores the inherent uncertainty in model outputs, a crucial characteristic distinguishing a model from a calculator

• Use of model outcome in decision-making – stresses the importance of how model results are applied, particularly in business-critical or regulatory contexts

To operationalize the concept of uncertainty, one approach is to examine whether changes in input data or assumptions alter the formula used to calculate the final output. For instance, in a multiple linear regression model, modifications to the input data result in changes to the model coefficients. This, in turn, alters the formula used to calculate the final output, thus demonstrating the inherent uncertainty in the model’s predictions.
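
As a toy illustration of this test (my own sketch, using simulated data), refitting a simple linear regression on perturbed inputs changes the fitted coefficients, and therefore the formula used to produce the output:

```python
import numpy as np

# Simulated example: the fitted formula of a regression changes when the
# training data changes, which is the hallmark of an uncertain model outcome.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=200)
slope_a, intercept_a = np.polyfit(x, y, deg=1)

# Refit on a perturbed sample, e.g. data from a new reporting period.
y_new = y + rng.normal(scale=0.5, size=200)
slope_b, intercept_b = np.polyfit(x, y_new, deg=1)

print(f"fit A: y = {slope_a:.3f}x + {intercept_a:.3f}")
print(f"fit B: y = {slope_b:.3f}x + {intercept_b:.3f}")  # coefficients differ
```

A fixed calculator, by contrast, would apply the same formula regardless of the data fed into it.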

By assessing a quantitative process against these criteria, banks can more accurately determine whether it qualifies as a model. This approach aligns with regulatory guidance and provides a practical tool for model risk management, ensuring appropriate oversight and controls are applied where necessary.

Model Tiering

Model risk tiering is a crucial component of model risk management, serving two primary purposes: optimizing the allocation of model validation resources and establishing foundations for model risk appetite statements and metrics. Model risk tier assessment typically employs a scorecard approach based on two primary dimensions: Model Materiality and Model Riskiness. The common classification of model risk tiering levels comprises four distinct tiers: Critical (tier 1), High (tier 2), Medium (tier 3), and Low (tier 4).

For each dimension, relevant components are selected for assessment. In a simplified example, Model Materiality can be measured by a Monetary Impact component and Model Riskiness by an Input Data Quality component. Standardized criteria are used to evaluate each component across the four tiering levels, from Critical to Low. Based on the assessment of each component, an overall rating for Model Materiality and Model Riskiness is determined. The final model risk tier is then determined by cross-referencing the materiality and riskiness ratings in the model tiering matrix, which provides a consistent approach to model risk classification across the bank.
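
A minimal sketch of such a matrix lookup appears below; the cell values are illustrative assumptions, since each bank calibrates its own matrix:

```python
# Illustrative tiering matrix: TIERING_MATRIX[materiality][riskiness] gives
# the final model risk tier (1 = Critical ... 4 = Low). Cell values are
# assumptions for demonstration, not a regulatory prescription.
TIERING_MATRIX = {
    "Critical": {"Critical": 1, "High": 1, "Medium": 2, "Low": 2},
    "High":     {"Critical": 1, "High": 2, "Medium": 2, "Low": 3},
    "Medium":   {"Critical": 2, "High": 2, "Medium": 3, "Low": 3},
    "Low":      {"Critical": 2, "High": 3, "Medium": 3, "Low": 4},
}

def model_risk_tier(materiality: str, riskiness: str) -> int:
    """Cross-reference the two ratings to obtain the final tier."""
    return TIERING_MATRIX[materiality][riskiness]

print(model_risk_tier("High", "Medium"))  # tier 2
```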

Model Risk Appetite Statement

Banks assume risk by using models designed to enhance risk management and profitability. However, without appropriate limitations, model risk has the potential to jeopardize the bank’s strategic objectives and overall business strategy. The model risk appetite statement, metrics and limits are used to establish comprehensive oversight and maintain robust model risk control at the enterprise level.

A practical and effective approach to defining model risk appetite involves analyzing the distribution of models across risk tiers in the bank’s model inventory. This framework establishes clear thresholds for model risk assessment and monitoring at an aggregate level:

• Red Zone (High Risk) – triggered when tier 1 models constitute 20% or more of the total model inventory.

• Yellow Zone (Medium Risk) – triggered when tier 1 models represent less than 20% and tier 2 models account for 40% or more of the inventory.

• Green Zone (Low Risk) – maintained when the conditions for the red and yellow zones are not met.

This tiered approach enables systematic monitoring of the bank’s aggregate model risk exposure and facilitates timely implementation of necessary risk mitigation measures when predefined thresholds are breached.
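
Expressed as a short sketch (the function name and example inventory are illustrative; the thresholds follow the zones described above), the aggregate check reduces to counting tiers in the model inventory:

```python
from collections import Counter

def appetite_zone(model_tiers: list[int]) -> str:
    """Map a model inventory's tier distribution to a risk appetite zone."""
    counts = Counter(model_tiers)
    total = len(model_tiers)
    if counts[1] / total >= 0.20:   # tier 1 models >= 20% of inventory
        return "Red"
    if counts[2] / total >= 0.40:   # tier 1 < 20% but tier 2 >= 40%
        return "Yellow"
    return "Green"

# Hypothetical inventory of ten models: one tier 1, four tier 2, and so on.
inventory = [1, 2, 2, 2, 2, 3, 3, 3, 4, 4]
print(appetite_zone(inventory))  # Yellow: tier 1 = 10%, tier 2 = 40%
```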

As operational inefficiencies persist, are you feeling the pressure from regulators? Have your say: join us in our essential research on the challenges and priorities of financial reporting in banking. Your insights will help identify key pain points and opportunities for improvement, ultimately contributing to more effective and compliant financial reporting practices across the industry.

Event: SUSTAINABLE FINANCE EUROPE – London, United Kingdom, 25-26 February
www.cefpro.events/sustainable-finance-europe

Event: ADVANCED MODEL RISK USA – NYC, United States of America, 4-5 March
www.cefpro.events/advanced-model-risk-usa

Event: TREASURY & ALM USA – NYC, United States of America, 25-26 March
www.cefpro.events/treasury-alm-usa

Event: AI IN INSURANCE EUROPE – London, United Kingdom, 4-5 June
www.cefpro.events/ai-insurance-europe

To view our full upcoming events calendar, visit www.connect.cefpro.com/upcoming/events
