• Has your organization implemented any AI solutions for risk management purposes?
• If yes, or planning to implement in the next 12 months, please provide any detail on where you plan to leverage AI for risk management
• How familiar are you with AI technologies in the context of risk management?
• What do you perceive as the primary benefits of integrating AI into risk management processes?
• How do you think AI can assist in identifying and mitigating emerging risks within the financial industry?
• In which area of risk management do you see the most significant opportunities for leveraging AI?
• What do you perceive as the 3 key benefits of leveraging AI for risk management within financial services?
• Can you provide any context – how can AI impact this area?
• What are the main challenges hindering AI adoption in risk management within your organization?
• What is the biggest regulatory or compliance challenge you foresee in the adoption of AI for risk management in the financial sector?
• In your opinion, what are the 3 primary risks associated with implementing AI in risk management processes?
• How do you expect AI adoption to impact job roles and responsibilities within risk management teams over the next decade?
• What role do you see human expertise playing in conjunction with AI-powered risk management systems?
Executive summary
This report aims to provide an in-depth analysis of artificial intelligence (AI) adoption for financial services organizations. A survey was conducted to explore the perspectives of key industry stakeholders in relation to AI, upcoming trends, and service delivery. The results provide insight into the current level of implementation across the industry, perceived benefits and challenges, and future investment priorities. Some key findings of the data include:
AI adoption status and growth trends:
• Current implementation: 32% of surveyed financial institutions have already implemented AI solutions for risk management, while 30% plan to implement AI in the next 12 months. However, 33% of surveyed organizations have not yet adopted AI, indicating a mix of early adoption and cautious approaches within the industry.
• Emerging use cases: AI is being applied across a diverse range of risk management activities, including fraud detection, liquidity risk monitoring, credit analysis, and automated risk assessment; these applications demonstrate the expanding role of the technology beyond traditional applications.
Strategic benefits of AI integration:
• Operational efficiency: 66% of respondents cited improved accuracy and efficiency as the primary benefit of AI, followed by enhanced risk assessment capabilities (64%) and cost reduction (48%). These findings underscore AI’s potential to streamline processes and reduce operational costs.
Key challenges to AI adoption:
• Skill gaps and data quality: the most significant barriers to AI adoption were cited as data privacy and security concerns (57%), data quality and infrastructure issues (50%), and a lack of skilled personnel (46%). These challenges highlight the need for targeted investment in talent development and data management strategies.
• Regulatory compliance and integration: concerns around data privacy, security, and integration with existing systems were also noted as major obstacles, indicating a need for robust governance frameworks and strategic planning for successful and effective AI integration.
Workforce impact and organizational changes:
• Job roles and responsibilities: 62% of respondents believe that AI will augment existing roles by enhancing productivity, while 50% anticipate automation of routine tasks, potentially leading to job displacement. This suggests a transformative impact on job roles, necessitating a focus on workforce upskilling and reskilling.
• Human expertise in AI systems: despite advancements in AI, human oversight remains crucial, particularly in interpreting results and making strategic decisions. This hybrid approach is seen as essential for effective AI deployment in risk management and financial services organizations.
Investment trends and strategic priorities:
• Investment levels: 60% of organizations reported spending less than $1,000,000 on AI initiatives in the past fiscal year (2023–2024), reflecting a cautious yet strategic approach to AI investment. Future spending is expected to focus on areas such as real-time risk monitoring, fraud detection, and compliance.
• Priority areas for AI deployment: financial institutions are reviewing AI deployment across a diverse range of areas. Survey respondents reported prioritizing investments in fraud detection (37%), compliance (29%), and operational risk management (22%). These areas are seen as offering the highest potential for AI to enhance risk management processes.
Recommendations:
• Strategic integration: financial institutions should adopt a phased approach to AI implementation, focusing on high-impact areas and ensuring strong data governance frameworks are in place.
• Upskilling and workforce development: investment in training programs to develop AI and data management skills across the organization will be critical to overcoming talent shortages and ensuring successful AI integration.
• Regulatory preparedness: organizations must proactively engage with regulators and develop internal policies to navigate the evolving regulatory landscape for AI.
Introduction and methodology
With the use of artificial intelligence (AI) and other technology advancements increasing within financial services, it is vital that organizations have a deep understanding of the status of the industry and the perceived risks and opportunities. CeFPro sought to explore the direction of AI within the financial services sector and gain a broad insight on where investments are being focused within a risk management context. This report looks across the financial services sector as a whole, while future reports will explore the more intricate nuances across the banking and insurance sectors.
By adopting a broad approach, the data offered additional opportunities to understand the general direction that the industry is heading in with regard to AI adoption; it also provided an opportunity to examine where risk management processes can be enhanced. The survey focused specifically on risk management, identifying where AI is advancing it, and enhancing organizational efficiency and accuracy.
CeFPro independently conducted the survey with its global financial services audience, with a specific focus on risk management. While the research was open to a global cohort of finance professionals, the survey respondents were primarily from North America, Europe, and the UK. The survey ran from July 3, 2024, through to August 23, 2024, and received 250 responses. The scope of the respondent cohort was varied, with key stakeholders from the insurance, banking, asset management, consultancy, and other sectors
contributing to the final data. The second phase of research consisted of 1-on-1 interviews with industry thought leaders to better understand the results and provide industry examples and case studies. These interviews are referred to throughout as ‘additional research’.
The adoption of AI is rapidly transforming the financial services industry, and risk management in particular. As organizations explore opportunities to harness the power of AI to improve efficiency, accuracy, and risk assessment, there remain many unknown risks, challenges, and opportunities. Based on a comprehensive global survey and detailed discussions with industry leaders, this report aims to provide an in-depth analysis of the current state of AI integration in risk management across the financial services sector.
The survey findings reveal that, despite the growing interest in the field, many firms are still only in the early stages of AI adoption, with a surprising number yet to implement AI-driven solutions. While operational risk areas, such as fraud detection and financial crime, are leading the way, barriers such as data quality, regulatory compliance, and executive buy-in continue to hinder broader adoption.
This report delves into these key themes, highlighting the benefits and challenges associated with AI adoption, the functional use cases gaining traction, and the strategic considerations for future development.
Current state of AI adoption in risk management
Figure A: Has your organization implemented any AI solutions for risk management purposes?
The adoption of AI in risk management is in its earliest stages for many financial institutions. While interest in AI-driven solutions is high, implementation remains limited, with most organizations focusing on pilot projects or operational risk areas like fraud detection. This section examines the current landscape of AI adoption, highlighting the progress, challenges, and varying maturity levels across different sectors within the industry.
Surveyed organizations were largely split regarding the implementation of AI solutions for risk management purposes (Figure A). 33% of respondents reported that they had not yet implemented any AI solutions, while a further 5% were not planning to implement AI for risk management at all. These findings contrasted with the more positive position of the majority, who had either already implemented solutions (32%) or were planning to do so in the next 12 months (30%).
In additional research, multiple industry experts were surprised at the high percentage of respondents who were not currently implementing AI solutions in risk management, or not planning to at all. Given the extensive industry discussion of, and interest in, AI over recent years, the finding that only approximately a third had begun any level of implementation suggests that many efforts remain immature.
In additional research, several use cases were highlighted as areas where organizations are exploring AI-related opportunities. Many centered on operational risks, such as fraud and financial crime. Some respondents also reported exploring AI opportunities in credit risk, human resources (HR), and legal and compliance. Almost all programs were examining how to drive efficiency across teams.
From the initial findings, it appears firms are cautiously exploring AI, with the primary focus being directed towards operational efficiency, likely until the longer-term implications for the financial sector, including regulation, are understood.
Regulatory concerns, and the need for robust data quality and controls, remain major barriers to entry for AI implementation. Further challenges surround executive buy-in, which seemed harder to secure in smaller organizations due to the perceived high costs of implementation and the largely unproven use cases and benefits. The barriers to adoption and key challenges are addressed in more detail in section 5 of this report.
Those who had already implemented AI solutions in risk management were asked to provide specific use cases. A condensed version of the range provided is presented in Figure B.
Figure B:
If yes, or planning to implement in the next 12 months, please provide any detail on where you plan to leverage AI for risk management
Risk profiling and scoring
IT security, including cybersecurity threat prevention
AI lending models for underbanked populations
KYC
Operational risk assessments
Intelligent agents for investment risk assessment
Document review automation
Use of external sources
Automation in claims, policy processing, and underwriting
Policy gap analysis
Customer complaint processing
Systematic risk monitoring
Regulatory research and documentation analysis
Loan processing document analysis
Fraud mitigation and prevention
Log analysis and transaction monitoring
Regulatory compliance monitoring
Data extraction from customer documents
Creditworthiness assessment
AI for customer service automation
Credit risk scoring and evaluation
Internal fraud, insurance, supply chain, and ESG fraud
Scenario planning and stress testing
Contract and third-party due diligence
Compliance review and categorization
Respondents were also questioned on their familiarity with AI technologies in the context of risk management; 59% stated that they were ‘somewhat familiar’ (Figure C). This aligned with the expectations of the industry experts and additional research, given that the field is still emerging and carries a high level of uncertainty. While there is general awareness across the industry, deep expertise is limited to certain departments and areas, including data science. The depth of familiarity varies significantly across roles, which could also contribute to the high number of respondents who are not ‘very familiar’ with AI; for example, those working in risk management may have less need to learn about AI than those working in governance, who are traditionally more focused on oversight. There appears to be an upward trend in hiring individuals with computational finance and mathematics backgrounds who may have exposure to AI. However, few educational courses on offer focus exclusively on AI in a risk management context.
Figure C: How familiar are you with AI technologies in the context of risk management?
*Data may not add up to 100% due to rounding
There is an expectation that focus and investment in AI will continue on an upward trajectory, with respondents stating that they expect AI to become more integrated into business processes. With this comes greater demand for subject matter experts within risk management.
Benefits of AI in risk management
This section explores how AI is helping organizations optimize risk assessment processes, reduce operational costs, and make more informed decisions, ultimately enabling them to navigate an increasingly complex financial landscape with greater agility and precision.
Range of benefits
The survey examined the perceived primary benefits of integrating AI into risk management processes (Figure D). The leading areas highlighted by respondents were better detection of anomalies and fraud (66%), improved accuracy and efficiency (66%), and enhanced risk assessment capabilities (64%). These areas were also highlighted by the industry experts interviewed in additional research as the most immediate and impactful current applications. Additional research also highlighted the role of AI in streamlining internal processes, with the organization of data, compliance management, and credit risk cited as key applications. In credit risk, for example, AI can draw on alternative data to assess loan applications.
Less frequently cited benefits were cost reduction (48%) and enhanced customer experience (30%). Both were considered secondary benefits of AI adoption, with an expectation that they will grow in relevance as the industry matures.
Respondents also offered free-text responses listing other perceived benefits, including:
• ‘Decrease in handling times if some of the work can be done by AI’
• ‘Accelerate evaluation process of clients at potential risk’
• ‘Cyber security and threat prevention’
• ‘Forecasting and early warning’
• ‘Upskilling of employees by augmenting AI capabilities’
• ‘Increased revenue with reduction in false positives’
Mitigating risk
The survey also asked respondents to describe how they felt AI could be used to identify and mitigate emerging risks in the financial industry (Figure E). As the early detection of anomalies can help organizations to identify and take a proactive approach to emerging threats, it was unsurprising that 74% of respondents highlighted it as a key application. Again, a case study of fraud detection was given, with the opportunity for AI models to conduct real-time fraud detection in areas such as check deposits and transactions, where AI can identify unusual patterns that may indicate fraudulent activity. Detection capabilities and the ability to monitor internal and external data sources in real time contribute to disruption avoidance, enabling organizations to proactively manage risks before they impact business operations.
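To make the idea of flagging unusual transaction patterns concrete, the sketch below applies a simple z-score rule to transaction amounts. This is a minimal illustration only: the threshold, the single feature (amount), and the synthetic data are assumptions for demonstration, not an approach reported by survey respondents, and production fraud systems use far richer models.

```python
# Minimal anomaly-detection sketch: flag transactions whose amount
# deviates sharply from the historical mean. Threshold is illustrative.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions whose amount deviates from the
    mean by more than `threshold` sample standard deviations."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

# A run of routine payments with one outsized transaction at the end
history = [120.0, 95.5, 130.25, 110.0, 99.9, 125.0, 105.0, 9800.0]
print(flag_anomalies(history))  # → [7]
```

In practice such a rule would run continuously over incoming transactions, with flagged items routed to investigators rather than blocked outright.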
Another area highlighted by respondents was scenario analysis and stress testing (56%). By simulating various risk scenarios, AI can provide a more dynamic and comprehensive assessment, enabling organizations to better prepare for potential crises; in particular, AI’s ability to model complex scenarios in operational risk can help improve the accuracy and speed of stress testing, which can further assist efforts in regulatory compliance. The use of AI in stress testing was also highlighted by industry experts in the additional research phase. Despite the benefits mentioned, challenges remain in integrating AI for real-time monitoring and scenario analysis. Data quality and the ability to process unstructured data were highlighted as major barriers in additional research, with participants noting that without high-quality data and robust infrastructure, AI models may produce unreliable or inconsistent results. Repeated inconsistencies could undermine the operational effectiveness of organizations, as well as their reputation in the eyes of key stakeholders.
Figure F: In which area of risk management do you see the most significant opportunities for leveraging AI?
*Data may not add up to 100% due to rounding
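The scenario-simulation approach described above can be illustrated with a minimal Monte Carlo sketch that estimates how often losses breach a stress threshold. The loss distribution, baseline, and threshold below are assumptions chosen purely for demonstration, not figures from the survey.

```python
# Minimal Monte Carlo stress-testing sketch: estimate the probability
# that simulated aggregate losses exceed a stress threshold.
import random

def stress_test(n_scenarios=10_000, threshold=250.0, seed=42):
    """Simulate aggregate losses under random shocks and return the
    fraction of scenarios that breach the stress threshold."""
    rng = random.Random(seed)  # seeded for reproducibility
    breaches = 0
    for _ in range(n_scenarios):
        # Aggregate loss: assumed baseline plus a normal shock
        loss = 100.0 + rng.gauss(0.0, 75.0)
        if loss > threshold:
            breaches += 1
    return breaches / n_scenarios

print(f"Breach probability: {stress_test():.2%}")
```

Real stress-testing models would replace the single normal shock with correlated risk factors and scenario narratives, but the structure of the computation is the same.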
Leverage opportunities
The survey also explored where respondents felt the most significant opportunities to leverage AI in risk management lay (Figure F). Operational risk was ranked as the most significant (32%), followed by credit risk (26%) and compliance/regulation risk (25%). As outlined throughout this report, operational risks are largely seen as the most accessible opportunities for AI development, with work underway in many organizations to develop their use of AI in fraud detection, financial crime, third-party risk, cybersecurity, and model risk.
Credit risk, while not reported as the ‘most significant’ opportunity, remains a subject of great interest, with many industry experts viewing it as a ‘next-phase opportunity’ that will rise in prominence as the industry develops maturity and understanding within operational risk areas. While areas such as credit scoring, regulatory reporting, and compliance monitoring all hold potential for future implementation, efforts in these fields are more cautious due to the complex regulatory landscape and potential for broader impacts.
Participants were asked to list, in text form, the three key benefits of leveraging AI for risk management within financial services. Figure G summarizes the key areas.
Figure G: What do you perceive as the 3 key benefits of leveraging AI for risk management within financial services?
Ranked as #1:
• Improved accuracy
• Cost reduction
• Better detection of anomalies and fraud
• Early detection of risks
• Enhanced data analysis
• Automation of decision-making
• Enhanced risk detection and mitigation
• Improved efficiency
• Real-time monitoring
• Enhanced predictive analytics
Ranked as #2:
• Efficiency improvements
• Cost efficiency
• Better trend analysis
• Faster data collection and analysis
• Enhanced customer outcomes
• Early warning indicators
• Enhanced compliance and fraud detection
• Predictive analytics for risk mitigation
• Improved decision-making
• Better quality and quantity of analysis
Ranked as #3:
• Better customer experience
• Consistency in processes
• Enhanced regulatory compliance
• Improved monitoring
• Better risk management and control
• Enhanced productivity
• Enhanced data usage
• Faster detection of fraud
• More comprehensive risk assessments
• Proactive risk management
Moving forwards
The need for a strong governance framework to oversee AI applications in risk identification was emphasized in CeFPro’s additional research. Organizations must ensure that AI tools are used responsibly and in compliance with regulatory requirements.
For now, the applications of AI within operational risk are visible and appear highly effective. However, its application to financial risk areas, including credit and market risk, is still evolving, and will need to be continually monitored to ensure best practices develop. The complexity of financial models means the consequences of errors are potentially higher in financial risk than in non-financial risk disciplines. Once technology and data frameworks are more mature, AI has huge potential in financial risk, enabling more accurate modeling and risk forecasting.
Figure H: Can you provide any context – how can AI impact this area?
Main uses of AI in risk management
• Fraud detection: Detecting and preventing fraud
• Data aggregation: Aggregating data sets from multiple sources
• Predictive analytics: Predictive modeling, early warnings
Key challenges of AI in risk management
• Bias: Potential for algorithmic bias in decision-making
• Data quality: Inconsistent or poor-quality data can lead to inaccurate outcomes
• Resource requirements: High demand for skilled personnel and expertise
• Privacy concerns: Data privacy and security issues, especially with sensitive financial data
• Transparency: Difficulty in explaining and understanding AI’s “black box” processes
• Integration: Challenges in integrating AI with existing systems and workflows
Examples of AI implementation
• Fraud detection and prevention: Identifying fraudulent activities by analyzing transactional patterns and behaviors
• Transaction monitoring: AML, Anti-Fraud systems
• Third-party monitoring: Due diligence, third-party assessments
• Anomaly detection: Identifying unusual patterns in real-time to mitigate potential risks
• Cybersecurity threat detection: Real-time monitoring and detection of security breaches
Challenges and barriers to AI adoption
Despite the numerous potential benefits that AI could bring to risk management, there remain significant hurdles to its widespread adoption. Financial institutions must address complex issues such as data quality, regulatory compliance, and integration with existing systems. Additionally, a perceived lack of skilled personnel and executive buy-in can further impede progress. This section explores the primary challenges organizations encounter when implementing AI solutions.
Data use
While there are a wide range of challenges hindering AI adoption in risk management within organizations, concerns surrounding the use and protection of data appear to be the most prevalent (Figure I). A total of 57% of respondents rated ‘data privacy and security concerns’ as a major issue hindering adoption, while 50% of the same cohort also highlighted ‘data quality and infrastructure’ as a concern.
A significant barrier to AI adoption is compliance with data privacy regulations such as Europe’s GDPR and California’s CCPA. By their nature, financial institutions manage vast amounts of sensitive data, which these regulations seek to protect from unauthorized access and breaches. In additional research, industry experts highlighted that the stringent requirements for data handling and processing can slow down AI implementation and limit the types of data that can be used for model training and risk assessment.
Additional research also highlighted concerns around the potential for malicious actors to exploit weaknesses in AI models and cloud-based data storage platforms. The reputational and financial damage from such a breach could be significant. Ensuring robust cybersecurity measures and data encryption is an essential aspect of AI implementation. However, these aspects also add to the complexity of widespread
implementation, and can ultimately exacerbate the cost and timescale of effective AI deployment. As outlined, organizations need strong data governance frameworks to monitor and control how AI systems access and use sensitive data, with policies covering data usage, retention, and sharing.
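One common control for the privacy concerns discussed above is pseudonymizing direct identifiers before data enters a model training pipeline. The sketch below is a simplified illustration of that idea; the salt handling, token length, and field name are assumptions for demonstration, and real deployments would manage and rotate salts through a dedicated key-management process.

```python
# Minimal pseudonymization sketch: replace a direct customer identifier
# with a salted, truncated SHA-256 digest before model training.
import hashlib

def pseudonymize(customer_id: str, salt: str = "example-salt") -> str:
    """Return a deterministic, non-reversible token for an identifier.
    The same (salt, id) pair always maps to the same token, so joins
    across datasets still work without exposing the raw identifier."""
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode("utf-8"))
    return digest.hexdigest()[:16]

token = pseudonymize("CUST-000123")
print(len(token))  # → 16
```

Because the mapping is deterministic for a given salt, analysts can still link records belonging to the same customer while the raw identifier never reaches the model.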
Data infrastructure and quality
For AI models to be effective, they require high-quality, consistent, and complete data sets; incomplete or inaccurate data can undermine their validity. In additional research, industry experts highlighted the struggles that many organizations have with inconsistent data formats and incomplete datasets. Poor data quality can lead to inaccurate assessments, reduce the effectiveness of risk assessments, and increase false positives.
Many financial institutions are complex organizations, operating on multiple layers of legacy infrastructure. As a result, integrating AI models with legacy systems and databases can be a major challenge. As AI adoption grows, so does the need for scalable data infrastructure. In additional research, industry experts mentioned that organizations are not equipped to manage the volumes of data required for AI models. Although investing in modern data platforms and cloud solutions is a necessity, it can add to the challenge of securing buy-in from senior management.
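The data quality concerns above are often addressed with a validation gate that rejects malformed records before they reach a model. The sketch below is a minimal illustration of that pattern; the record schema and the specific rules are assumptions for demonstration only.

```python
# Minimal data quality gate: reject records with missing fields or
# malformed amounts before they enter a model pipeline.
REQUIRED_FIELDS = ("account_id", "date", "amount")

def validate_record(record):
    """Return a list of data quality issues for one record (empty = clean)."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS
              if record.get(f) in (None, "")]
    amount = record.get("amount")
    if amount is not None:
        try:
            float(amount)
        except (TypeError, ValueError):
            issues.append(f"non-numeric amount: {amount!r}")
    return issues

records = [
    {"account_id": "A1", "date": "2024-07-03", "amount": "120.50"},
    {"account_id": "A2", "date": "", "amount": "n/a"},
]
clean = [r for r in records if not validate_record(r)]
print(len(clean))  # → 1, only the well-formed record survives
```

Capturing the issues as descriptive strings, rather than silently dropping records, also gives data teams an audit trail of why inputs were excluded.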
Figure I: What are the main challenges hindering AI adoption in risk management within your organization? (Select all that apply)
*Data may not add up to 100% due to rounding
Regulation
Another challenge to adoption is uncertainty around the future of regulation in the industry. As global regulations continue to develop, future requirements and supervisory expectations remain somewhat uncertain. Recent developments include the proposed AI Act in Europe and the need to align data protection standards such as GDPR with AI deployment. While the US has limited federal regulation, key regulators are expected to draft guidelines for AI use in the near future, and data standards such as the CCPA are also key considerations for those affected. Other jurisdictions, such as the UK, are developing their own regulatory frameworks; China has been vocal about its focus on ‘ethical AI development’, and Canada, Australia, and Singapore are all advancing AI regulations.
As a result, it was not surprising to see a ‘lack of clear regulatory guidelines’ highlighted by 35% of survey respondents as the most significant regulatory or compliance challenge in the adoption of AI for risk management in the financial sector (Figure J). As many jurisdictions are only in the process of developing, or beginning to formulate, comprehensive AI regulations, it is likely that the exact nature of these regulations, as well as how they interact on an international level, will remain unclear for the immediate future. Financial institutions remain uncertain as to the future of these evolving standards, which could be hindering confident AI implementation.
The second largest regulatory/compliance challenge impacting the adoption of AI for risk management reported in the survey was ‘data privacy and security concerns’, which was selected by 33% of the respondents. These findings did not come as a surprise to the industry experts interviewed in additional research,
as the stringent nature of data privacy regulations is well known across the industry. Protecting sensitive information is a key regulatory obligation for financial institutions. Regulations like GDPR and CCPA impose stringent data protection measures, aligning with the high levels of concern over data privacy and security in AI applications.
Not all respondents chose from the options provided in the survey; 4% selected ‘Other’. Although not all of the alternatives highlighted related to regulation or compliance, several raise interesting points and reflect the range of opinions held by key stakeholders:
• ‘The regulatory landscape around AI ethics hasn’t been defined’
• ‘Immature model risk guidance’
• ‘Inability to explain why the AI has detected what it has as it learns beyond our capability’
• ‘How to incorporate the use of external data – e.g. adverse media reports’
• ‘AI does not address the challenge of managing risk in real time, it is a solution looking for a problem’
• ‘Lack of skilled personnel and clear vision for use case’
• ‘Presence of poor and unverified data inputs driving poor quality AI outputs’
• ‘Methods often cannot easily be tailor-made to fit the process they are modeling’
Figure J: What is the biggest regulatory or compliance challenge you foresee in the adoption of AI for risk management in the financial sector?
Risks
The survey also asked respondents to rank the top 3 risks they associated with implementing AI in risk management processes (Figure K). From 250 text responses, below is a summary of the top 3 in each ranking:
Figure K:
In your opinion, what are the 3 primary risks associated with implementing AI in risk management processes?
Ranked as #1:
• Data privacy and security: Concerns about safeguarding sensitive data, preventing breaches, and ensuring compliance with data protection regulations like GDPR.
• Algorithmic bias: There is significant concern that AI models may introduce bias due to flawed or unrepresentative training data, which could lead to unfair or discriminatory outcomes, particularly in lending or credit risk decisions.
• Model accuracy and reliability: Institutions are worried about the accuracy of AI predictions and decisions. The potential for AI to generate incorrect or unreliable outputs poses a substantial risk to decision-making in critical areas like credit scoring and risk assessment.
Ranked as #2:
• Regulatory compliance and uncertainty: Financial institutions are grappling with the challenge of aligning their AI systems with existing and emerging regulatory frameworks, which are often unclear or evolving, especially across multiple jurisdictions.
• Lack of skilled personnel: A shortage of expertise in AI development, implementation, and monitoring is a major barrier to adoption, making it difficult for organizations to fully leverage AI technologies.
• System integration and compatibility: Difficulty in integrating AI technologies with existing legacy systems and ensuring seamless operation across platforms is a significant operational challenge.
Ranked as #3:
• Transparency and explainability issues: The ‘black box’ nature of some AI models makes it difficult for institutions to explain decision-making processes to regulators, customers, and internal stakeholders, leading to a lack of trust.
• Cybersecurity threats: AI systems are increasingly becoming targets for cyberattacks. The vulnerability of AI systems to data manipulation or breaches poses a serious threat to the security and integrity of financial operations.
• Overreliance on AI: Some institutions are concerned about becoming overly dependent on AI for decision-making, potentially overlooking the need for human oversight and judgment in critical risk scenarios.
Impact on workforce and organizational structure
The integration of AI in risk management is transforming not only technological processes, but also the workforce and organizational structure of financial institutions. As AI takes on more complex tasks, institutions are rethinking roles and upskilling employees. This section explores how AI is impacting job functions, creating new demands for specialized skills, and driving organizational changes to accommodate human and AI collaboration.
Productivity and role remit
When asked how they expect AI adoption to impact job roles and responsibilities within risk management teams over the next decade, 62% of respondents answered: ‘Augment existing roles, enhancing productivity’ (Figure L). Many respondents in the survey and additional research expected AI to automate routine tasks such as data collection, report generation, and certain risk monitoring activities. There is debate about the future role of the risk manager alongside AI, and whether AI will make certain roles redundant. In CeFPro’s additional research, industry experts highlighted that AI could be used to upskill teams in their current roles and increase efficiency. Automating manual tasks allows humans to focus on analytical and strategic decision-making, and on oversight of AI and its outputs. The impact of AI in this field is widely believed to be substantial; only 1% of survey respondents expected AI adoption to have no significant impact on job roles within risk teams over the next decade.
Survey respondents (50%) also expected role automation to contribute to job displacement. Industry experts held a positive outlook on this shift, with many seeing little negative impact and viewing more enhanced processes as a positive by-product. There is also an expectation that the workforce will evolve to focus on more specialist roles. As AI becomes more integral to an organization and its risk management practices, additional opportunities are expected to be created to ensure the fairness, accuracy, and long-term impact of AI applications. 43% of survey respondents saw AI adoption as a key way to create new job opportunities within risk management teams over the next decade.
Organizations will need to invest in upskilling their workforce and hiring talent proficient in AI and data science. This is particularly the case for risk management services, where the pool of specialized AI talent remains limited. As implementation continues, cross-functional collaboration will become increasingly important, requiring risk management, data science, and IT teams to work together to ensure a holistic assessment of risk and opportunity.
How do you expect AI adoption to impact job roles and responsibilities within risk management teams over the next decade? (Select all that apply)
Figure L.
Figure M: What role do you see human expertise playing in conjunction with AI-powered risk management systems?
Human expertise
The survey also explored the role of human expertise in the use of AI-powered risk management systems. Respondents highlighted a range of potential roles, the most popular being the interpretation of results and decision-making (70%), closely followed by the oversight and validation of AI outputs (68%) (Figure M). This aligns closely with the data presented in Figure L, where participants in additional research focused heavily on the opportunities for AI to do much of the ‘heavy lifting’, leaving humans to focus on strategy and decision-making informed by AI-generated output. These changes may affect the structure of risk management teams, creating more specialized and streamlined organizational structures. Organizational hierarchies may also shift as AI automates data-driven processes: as routine processes become more efficient, the role of mid-level management may adapt to focus on further upskilling teams, building out specialized departments, and strengthening cross-department collaboration.
Industry experts also highlighted in additional research that the demand for cross-functional collaboration is expected to increase. Given the technology’s current level of maturity, concerns remain around AI performance, particularly in customer-facing applications, and collaboration will be an essential tool to ensure the accuracy and completeness of projects. Underpinning all these areas is data. Without high-quality data and plans to integrate with existing systems, AI models produce unreliable results that undermine their value and the output of risk management teams.
Impact
The adoption of AI will undoubtedly have an impact on the workforce and organizational structure of risk management teams. The survey asked respondents to select all the changes they felt would need to be implemented to ensure that teams could keep up with the evolution of AI (Figure N). Several prominent themes emerged, including: the need to invest in employee training and upskilling (selected by 62%); the creation of clear governance and oversight mechanisms (63%); the need to collaborate with technology vendors (40%); and the replacement of legacy systems (38%).
In the long term, the adoption of AI in risk management teams is expected to lead to a shift in roles, rather than role elimination. As AI takes over the automation of routine tasks, risk professionals have the opportunity to move from operational to strategic roles. This integration looks set to reshape the workforce into one that is strategy-focused and informed by data-driven insights. Demand for AI expertise is on the rise; risk managers will have to adapt, embrace new technologies, and consistently upskill their staff to continue to deliver in the changing environment.
Figure N.
Investment trends and strategic priorities
The final section of this report explores investment in AI initiatives and industry priorities moving forward. Respondents were asked how much their organizations had spent on AI initiatives in the past fiscal year (Figure O): 60% stated they had invested less than $1,000,000, while 22% had invested between $1,000,000 and $5,000,000. In additional research, industry experts voiced surprise that 60% of respondents had invested less than $1,000,000, given the increased focus on AI and recognition of its potential in the field. This could indicate that many organizations are still in the exploratory phase, and builds on the data presented in Figure A, where 38% of respondents either had not started an AI program or were not planning to. Smaller and mid-sized organizations remain cautious about large investments due to uncertainty over full implementation costs, ROI, and whether future compliance requirements will disrupt implementation. Budgets for AI initiatives may remain limited until strong industry use cases emerge and the regulatory landscape becomes clearer.
Industry experts were less surprised that 22% of respondents reported investing between $1,000,000 and $5,000,000. This demonstrates that a large portion of organizations are committing resources to AI projects, and that medium-sized organizations may be investing more than their smaller counterparts; that said, this investment is likely to be concentrated primarily in areas such as fraud detection.
At the other end of the scale, 5% of respondents reported investing over $25,000,000. These respondents are likely representative of global financial institutions that are fully integrating AI into core operations and exploring its full potential. Such firms are likely to incorporate AI into their long-term strategy and be at the forefront of innovation.
Survey respondents reported that their organizations were investing in a range of areas, the largest being fraud detection (37%), followed by regulation/compliance (29%), financial crime detection and surveillance (27%), and real-time risk monitoring (26%), among others (Figure P). It was unsurprising to see fraud detection, financial crime-related monitoring, and surveillance as the most significant use cases described by respondents. Figure P demonstrates the diversity of application areas within risk management; the findings suggest that organizations are investing in AI across a wide range of functions.
Figure O.
Approximately how much did your organization spend on AI initiatives for risk management in the past fiscal year?
Figure P:
Overall, the findings suggest that organizations have a diverse range of priorities, with focus spanning operational and non-financial risks, regulation/compliance, and credit risk, alongside more administrative tasks such as document summarization and report writing. For those that are investing, the figures remain for the most part relatively conservative, with only 5% investing substantially and demonstrating a long-term strategic commitment. While the majority of firms are in the early stages of exploration and adoption, a clear commitment to fraud detection, risk monitoring, and compliance is indicated throughout the survey. AI has the potential to become a critical tool for financial institutions to enhance their risk management capabilities in these areas and far beyond.
Conclusion
The integration of AI into risk management presents, as yet, an unrealized opportunity for financial institutions. AI can be an essential tool to enhance capabilities in fraud detection, operational efficiency, and risk assessment, among others. However, the journey towards full-scale adoption is far from straightforward. The survey data and in-depth interviews with industry leaders reveal that the industry is moving forward with cautious optimism, with many firms still grappling with foundational challenges and strategic uncertainties.
While operational risk areas such as fraud detection and financial crime have seen more advanced AI implementation, broader adoption across enterprise risk management remains limited. The primary barriers include data quality issues, regulatory concerns, and the need for robust governance frameworks. Executive hesitancy, especially in medium and smaller firms, further complicates the adoption process, as AI's high costs and unproven short-term benefits lead to a conservative approach.
Despite these hurdles, there is visible momentum towards realizing the potential of AI. Organizations are beginning to understand the strategic value of AI, not just in risk management but also in improving internal processes and enhancing decision-making capabilities. As the technology matures and regulatory frameworks evolve, continued investment and focus on data infrastructure, governance, and collaboration with education institutions will be needed to develop subject matter expertise.
This report highlights the need for a cautious and balanced approach, one that embraces change and innovation, while maintaining stringent risk management practices.
About CeFPro
The Center for Financial Professionals (CeFPro) is an international research organization and the focal point for the global community of finance, technology, risk, and compliance professionals from across the financial services industry. CeFPro is driven by high-quality, reliable, primary market research. It has developed a comprehensive methodology that incorporates data from its global community that has been validated by an international team of independent experts.
Examples of some of CeFPro’s research include:
To find out more and access our full collection of market intelligence reports, visit www.cefpro.com/research
More about the Center for Financial Professionals
Each step of the way, CeFPro is here to support industry professionals around the world with cutting-edge insights and market intelligence that will enhance professional understanding and accelerate personal development in unprecedented ways.
Navigating the intricacies of risk management is no small feat, and we recognize the unique challenges our industry faces in this ever-evolving landscape. We understand the pain points that can be a hindrance to success, and we’re here to offer tailored solutions through our extensive range of offerings:
No part of the AI in Financial Services: Risk Management publication, or other material associated with CeFPro®, may be reproduced, adapted, stored in a retrieval system or transmitted in any form by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of Center for Financial Professionals Limited, trading as the Center for Financial Professionals or CeFPro®.
The facts of the AI in Financial Services: Risk Management report are believed to be correct at the time of publication but cannot be guaranteed. Please note that the findings, conclusions and recommendations that CeFPro® delivers are based on information gathered in good faith, whose accuracy we cannot guarantee. CeFPro® acknowledges the guidance and input of the Advisory Board, though all views expressed are those of the Center for Financial Professionals, and CeFPro® accepts no liability whatsoever for actions taken based on any information that may subsequently prove to be incorrect, or for errors in our analysis. For further information, contact CeFPro®.
CeFPro®, Fintech Leaders™ and Non-Financial Risk Leaders™ are either Registered or Trade Marks of the Center for Financial Professionals Limited.
Unauthorized use of the Center for Financial Professionals Limited, or CeFPro®, name and trademarks is strictly prohibited and subject to legal penalties.