
The legal implications of AI for insurers

By Katie Simmonds, Managing Associate at Womble Bond Dickinson

“One of the issues with using AI is that it is opaque, meaning that sometimes we cannot actually explain how the AI system works. This creates several potential risks. The key danger of becoming overly reliant on these technologies is that you can no longer understand how you are using individuals' personal data, or verify that an answer or response is 'right'. Even where the answer is 'right', there is a risk of carrying historic biases and discrimination forward into future decisions. For example, in health insurance, an individual may be unfairly blocked from certain policies. This could have knock-on effects for that individual, who may need that health insurance to secure a mortgage, potentially removing housing opportunities.


“For businesses, over-reliance on location data could mean higher rates of rejection based on historical crime rates or antisocial behaviour. This postcode lottery begs the question of how any area that meets these parameters could ever realistically level up, even with sensible mitigations. Whether it’s a third-party application or a bespoke system, you will need a complete view and understanding of what data goes in and what process produces what comes out.

“Such is the nature of AI that it will be constantly learning, so this understanding must remain agile. AI can only do exactly what you tell it to do, and poor instruction or understanding will not earn a pass if the machine behaves in a way that is illegal. For this reason, we’re likely to see more appointments of a Chief AI Officer, or a similar role, to bridge the understanding between tech, ethics and the legalities.

“Predictive AI will be the next step once AI becomes more mainstream, but this area is still in its infancy. Once AI is more widely adopted and models have captured sufficient levels of data, we will start to see real-world applications of predictive AI.”

When we think about AI, one of the main insurance-led applications that comes to mind is customer service. The technology can be used to automate the first point of contact for customer enquiries, freeing up human agents to handle more complex queries or to work on other tasks within the business that require judgement or discretion. According to Peppercorn’s Nigel Lombard, this represents a shift away from customer service interfaces that work for the insurer and towards customer service that works for the customer, improving overall satisfaction.

Fujitsu’s Meghana Nile elaborates:

“Customers want an omnichannel experience, which is much more achievable with the help of AI. It makes self-service claims processing much easier, dramatically improving the customer experience. But insurance can feel like quite a personal experience to many, and there are times, such as with more complex claims, when customers expect the ‘human touch’.

“According to HubSpot, 40% of customers who couldn’t find someone to help them with their problem are still having issues with the product or service. So, it’s clear that when implementing AI, insurers must strike a balance between digital and human interaction; not everything should be done by a machine.

“Most important, however, is that AI in insurance is ethical. To be beneficial to both customers and insurers, AI models have to be fair, transparent, and explainable. As AI evolves and becomes more complex, the companies that develop and provide the technology – and all stakeholders involved in AI – must practise ethics in each process.

“If insurers aren’t careful, unconscious bias will creep into AI if the algorithms are set up by a narrow group of people. If there’s a lack of diversity among data scientists – the experts who develop and test these AI models – then they’ll only further reinforce unconscious bias. And that is why we must consciously build solutions that constantly look out for these biases, preventing them from manifesting and causing harm.”

How important is data input for predictive modelling?

Those well-versed in AI will be familiar with the acronym ‘GIGO’, which stands for ‘garbage in, garbage out’. This refers to the principle that, if your AI algorithm is using poor data, it will return poor results. For example, if an insurer is using AI to identify problematic patterns of behaviour as part of its fraud prevention strategy, then bad data will diminish the algorithm’s ability to effectively spot fraud. This speaks to a much broader theme of bias within AI.
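To make the GIGO principle concrete, here is a minimal Python sketch on entirely synthetic data. It trains the same simple fraud classifier twice – once on intact features and once on a copy where half the columns have been overwritten with noise, simulating a broken data pipeline – and shows how garbage input weakens fraud detection. The dataset, model choice, and noise level are illustrative assumptions, not any insurer's actual pipeline.

```python
# A minimal 'garbage in, garbage out' sketch using synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "claims": a rare fraud class, with signal spread over 8 features.
X, y = make_classification(n_samples=20_000, n_features=12, n_informative=8,
                           weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# "Garbage" training set: six columns replaced with pure noise,
# simulating mis-keyed or unparsed fields in the data pipeline.
X_garbage = X_train.copy()
X_garbage[:, :6] = rng.normal(size=(len(X_garbage), 6))

for name, features in [("clean features", X_train),
                       ("garbage features", X_garbage)]:
    model = LogisticRegression(max_iter=1000).fit(features, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")  # garbage input -> weaker detection
```

On this toy setup the model trained on corrupted columns scores a visibly lower AUC on clean test data: the algorithm itself is unchanged, only the quality of what went in.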

Lombard says: “Currently, risk analysis is a linear experience; it’s a one-size-fits-all approach designed to favour the provider. AI, on the other hand, can collate volumes of data and identify behavioural patterns and trends, allowing providers to listen and react to their customers. In practice, this could mean tweaking the way a provider speaks to a customer based on their mood, or creating new products following feedback. Predictive modelling can take this one step further, but it is entirely dependent on the quality of the data fed into the models.”

Meghana Nile adds: “While AI carries potential ethical risks if not used correctly, applied right it can be exceptionally powerful. AI can address potential bias in underwriting by identifying and eliminating decision-making disparities due to race, gender, age, or ethnicity, and that’s what can make for fairer pricing.

“Another positive impact AI will have on premiums is its ability to detect fraud and identify high-risk customers. This enhances risk monitoring and, in turn, reduces pricing. With regulations like the Financial Conduct Authority’s (FCA) Consumer Duty, this will steer the industry towards a more holistic and analytical approach to pricing. Because AI can play a big role in estimating equitable and fair premiums, we’re likely to see its presence in insurance increase massively.”
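As an illustration of the kind of disparity check Nile describes, here is a toy Python sketch that compares a model’s approval rates across a protected attribute and flags the gap (the demographic parity difference). The data, group labels, and 0.10 tolerance are hypothetical assumptions for illustration, not a regulatory threshold or Fujitsu’s actual method.

```python
# A toy disparity check on underwriting decisions: compute per-group
# approval rates and flag the gap (demographic parity difference).
# The data and the 0.10 tolerance below are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per protected group.
rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates.to_string())
print(f"approval-rate gap: {gap:.2f}")

# Flag the model for human review if the gap exceeds the chosen tolerance.
if gap > 0.10:
    print("Gap exceeds tolerance - refer model for fairness review")
```

In practice this kind of check would run over a model’s live decisions per protected attribute, with any breach routed to human reviewers rather than resolved automatically.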

Should insurance technology translate into lower premiums?

When insurers implement new technologies like AI, customer buy-in is extremely important.

Yet, no matter how useful AI may be for an insurer – or even how much simpler it makes the customer experience – AI will have to lower premiums before customers fully embrace it. That is the cost of change, even change for the better: customers want to see actual financial gain. That gain may come from reducing the incidence of fraud, and so minimising losses for the insurer, or it might come from somewhere else.

Meghana Nile explains that insurance customers often misjudge the amount of cover they need, taking out the wrong policy and leaving themselves under-insured or over-insured. This is one way that AI can help achieve savings.

“There is an opportunity for conversational AI to right this wrong,” she says. “By putting customers in control of the conversation, they’re able to ask the right questions, and AI can pick up on verbal triggers to ensure they have the right cover in place. This can result in fairer pricing.

“Furthermore, by focusing on creating efficiencies, AI can also result in leaner operational costs and lower expense ratios, which can ultimately be passed back to customers.”
