
The Right Balance in AI Governance

Perspectives from the financial services industry

BY STEPHANIE WAKE

FEW TECHNOLOGIES IN RECENT HISTORY have been hyped as much as ChatGPT and other generative artificial intelligence (AI) applications—perhaps only the internet compares. With this hype comes justified interest from policymakers, who feel the need to get a handle on this technology without inhibiting its positive potential.

Generative AI undoubtedly will be a disruptive technology, with the potential to change businesses and industries of all sizes and across sectors. As with all rapidly evolving technologies, risks and challenges are likely to emerge alongside the promising new innovation.

BANKS' USE OF AI

While definitions vary, AI is an umbrella term for techniques that enable computers to mimic human intelligence. AI systems, which include machine learning (ML) techniques, have traditionally been used to analyze data and make predictions.

In the financial sector, banks have been using AI and ML techniques for years to drive better client experience, enhance operational efficiencies, and manage risk. Far from an exhaustive list, a few core uses include:

Fraud Detection and Prevention: Banks use AI-driven pattern recognition techniques to find anomalies in transactions and identify fraudulent activity before it impacts customers.

Customer Service: Banks currently use AI-driven chatbots to provide customers with more and faster support without having to escalate to human operators. Additional tools can present customers with useful products or services and create more personalized digital interfaces based on their individual profile.

Cybersecurity: The financial services sector is a primary target for cyberattacks. AI models can detect and respond to attacks more quickly and efficiently than human analysts alone.

Anti-Money Laundering: AI models have the potential to improve detection of suspicious activity because they can learn complex patterns in data, surfacing unusual activity while reducing false positives.
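The fraud and anti-money-laundering use cases above rest on the same core idea: flag transactions that deviate sharply from an account's normal pattern. A minimal sketch of that idea, using a simple standard-deviation rule on hypothetical transaction amounts (production systems use far richer models and features):

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag amounts more than `threshold` standard deviations from
    the mean -- a toy stand-in for the pattern-based detection models
    described above. Values and threshold are illustrative only."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) / sigma > threshold]

# Mostly routine card purchases, plus one outsized transfer.
transactions = [25.0, 40.0, 32.0, 28.0, 35.0, 30.0,
                27.0, 22.0, 38.0, 29.0, 24.0, 9500.0]
print(flag_anomalies(transactions))  # -> [9500.0]
```

In practice the appeal of ML here is exactly what the article notes: a learned model can capture interactions across many features (merchant, time, geography, device) that a fixed statistical rule like this one cannot, which is what drives down false positives.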

Generative AI, a new type of AI, differs from traditional AI in that it enables machines to create new content. With recent advancements, banks are looking to leverage this technology in a variety of ways. For example, banks are evaluating how generative AI may help software developers write code, or facilitate information retrieval by parsing and summarizing insights from multiple sources much faster.

Banks deploy AI for these use cases responsibly: only after rigorous model risk management procedures, within control frameworks that ensure appropriate development, use, validation, and governance of AI/ML models, and against a backdrop of extensive laws and regulations, such as fair lending and data privacy requirements.

LOOKING AHEAD

Some policymakers are considering new AI-specific laws and regulations, a trend that has accelerated with the advancement of generative AI and greater public awareness. The direction and scope of AI policy should emerge from a deliberate and transparent process with broad stakeholder input.

Businesses currently using or likely to integrate this rapidly evolving technology will want to monitor these developments closely and look for opportunities to engage in the dialogue, supporting a balanced, risk-based approach that fosters, rather than stifles, innovation.

Stephanie Wake is a senior vice president in Citi’s Office of the Chief Technology Officer, which is responsible for regulatory strategy and advocacy for emerging technologies.
