Artificial intelligence (AI) is already part of the insurance industry, used for tasks like claims processing, underwriting and fraud detection. Traditional insurance companies have started to embrace the technology and plenty of insurance-tech startups have sprung up around it, buoyed by investors attracted to AI’s real potential as well as its buzz.

Daniel Schwarcz, professor of insurance law and regulation at the University of Minnesota, said AI is one of the most significant technologies currently impacting the insurance industry. But he pointed out that the difference between AI and simply collecting a lot of data is sometimes lost in the excitement. Some companies market products as AI-powered when they are really just collecting data and feeding it into hand-built models. What defines AI is how the data is processed after collection, using machine-learning algorithms.

That’s not to say artificial intelligence is inherently superior to other innovations; if anything, data collection itself has changed the insurance industry more over the past few years. Car insurance companies, especially ones that offer pay-per-mile plans like Metromile, have begun using data on customers’ driving habits to set rates. A driver installs an app on their phone that takes measurements such as driving speed, time and location, which are then sent to the company.

But all of that is data collection. Even companies that gather driver behavior data may still determine insurance risk using traditional top-down methods: humans decide which factors should determine insurability, then collect data from customers to feed into those models.

“Traditional underwriting and rating models, they start from certain assumptions about what does and doesn’t matter,” Schwarcz said. “And then they test them and try to find correlations.”
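To make the contrast concrete, here is a minimal sketch of what such a top-down model can look like in code. Every factor name and multiplier below is an illustrative assumption, not a real insurer’s rating table: humans choose the inputs and weights up front, then test those assumptions against claims data.

```python
# A minimal sketch of a hand-built, top-down rating model.
# The factors and multipliers are illustrative assumptions only.

BASE_ANNUAL_PREMIUM = 1000.0  # hypothetical base rate in dollars

# Human-chosen rating factors and their multipliers (assumed values).
FACTOR_TABLES = {
    "age_band": {"16-24": 1.60, "25-64": 1.00, "65+": 1.15},
    "annual_mileage": {"low": 0.90, "medium": 1.00, "high": 1.25},
    "prior_claims": {0: 1.00, 1: 1.20, 2: 1.50},
}

def quote_premium(driver: dict) -> float:
    """Multiply the base rate by each human-chosen factor."""
    premium = BASE_ANNUAL_PREMIUM
    for factor, table in FACTOR_TABLES.items():
        premium *= table[driver[factor]]
    return round(premium, 2)

print(quote_premium({"age_band": "16-24", "annual_mileage": "high", "prior_claims": 1}))
# 2400.0, i.e. 1000 * 1.6 * 1.25 * 1.2
```

Every line of reasoning in a model like this is visible: an analyst decided that age, mileage and claims history matter, and by how much.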

He said traditional, human-led risk guidelines are based on intuition about causation, while AI, through techniques such as neural nets, instead uncovers correlations from the data in a bottom-up approach.

“AI essentially uses data to come up with correlations,” he said. “But you have no idea about why those correlations exist and what the causal pathways are, at least with most traditional usages of AI.”

Depending on how AI techniques are used, humans may be shut out of the process entirely, even when it comes to fully understanding how the resulting model determines risk.
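As a rough illustration of that bottom-up approach, the sketch below fits a small neural net (scikit-learn’s MLPClassifier, our choice of tool, on synthetic data) without telling it which features matter. The model recovers the correlation on its own, and what it learns is stored as weight matrices, not human-readable rules.

```python
# A minimal sketch of bottom-up correlation finding with a small
# neural net. The data is synthetic and the setup is illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))  # 10 unlabeled driver features

# Hidden rule nobody tells the model: only features 3 and 7 matter.
y = ((X[:, 3] + 0.5 * X[:, 7]) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, y)

# The net finds the pattern without any stated assumptions about
# causation, but its "knowledge" is just arrays of weights.
print(model.score(X, y))                # high training accuracy
print([w.shape for w in model.coefs_])  # [(10, 16), (16, 1)]
```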

AI Can Speed Up Insurance Processes

Considering AI’s track record of real-world abuses, including over-surveillance through facial recognition and its tendency to absorb human biases when building models, it may be tempting to oppose mixing AI with insurance outright. But despite the potential problems, Schwarcz doesn’t think it’s advisable to write AI out of the insurance industry’s future.

“If there weren’t potential, the solution would be easy: just ban AI insurance,” he said. “But I certainly don’t think that’s a good idea. ... I think there’s a tremendous potential for AI to improve virtually every element of the insurance process.”

Schwarcz said AI is already used to help insurers find evidence of potentially fraudulent claims and to speed up underwriting, the process by which insurance companies evaluate potential customers to determine their risk. Customers deemed riskier pay higher premiums.

AI can do these tasks faster and more cheaply: models are trained on historical data, then used to automatically process new customers and claims.

“It’s better at finding patterns between all sorts of random data points,” he said. “It can find all sorts of correlations that no person would ever sift between for factors of fraud.”
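A hedged sketch of that workflow: train a classifier on historical claims labeled as fraudulent or legitimate, then score incoming claims automatically. The claim attributes, the random labels and the gradient-boosting model below are all illustrative stand-ins, so the scores mean nothing here; the point is the shape of the pipeline.

```python
# A minimal sketch of automated fraud scoring. Features and labels
# are synthetic stand-ins; only the workflow is the point.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000
# Synthetic stand-ins for attributes an insurer might log per claim.
claims = np.column_stack([
    rng.exponential(2000, n),  # claimed amount
    rng.integers(0, 365, n),   # days since policy start
    rng.integers(0, 5, n),     # prior claims on the policy
])
is_fraud = (rng.random(n) < 0.02).astype(int)  # rare positive class, random here

X_hist, X_new, y_hist, _ = train_test_split(claims, is_fraud, test_size=0.2, random_state=1)
model = GradientBoostingClassifier(random_state=1).fit(X_hist, y_hist)

# Score incoming claims; route the highest-risk ones to human review.
fraud_scores = model.predict_proba(X_new)[:, 1]
flagged = np.argsort(fraud_scores)[-10:]  # indices of the 10 most suspicious
print(fraud_scores[flagged])
```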

It’s possible for AI to help reduce bias as well. For instance, rates for car insurance are traditionally determined by a buyer’s personal factors, such as credit score, income, education level, occupation, and marital and homeowner status. But these factors penalize low-income buyers and aren’t directly related to a driver’s likelihood of getting into collisions. Companies using AI to build models can reduce these biases by actively excluding these factors during the training process.

“Things that are not relevant to the inquiry that you’re not asking the AI to engage in, it’s not going to look at,” Schwarcz said. “If race or sex are not relevant in the historic data that it’s looking at, it won’t take into account, as opposed to people who may.”
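In code, that exclusion can be as simple as dropping the sensitive columns before training. The sketch below uses synthetic data and made-up column names; the point is only that the model never sees the dropped fields, so it cannot weight them directly.

```python
# A minimal sketch of actively excluding personal factors before
# training. Column names and data are illustrative and synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 5_000
history = pd.DataFrame({
    "credit_score": rng.integers(300, 850, n),
    "income": rng.normal(60_000, 20_000, n),
    "annual_mileage": rng.normal(12_000, 4_000, n),
    "prior_claims": rng.integers(0, 4, n),
    "had_collision": rng.integers(0, 2, n),  # random stand-in label
})

# Factors the insurer chooses not to let the model see.
EXCLUDED = ["credit_score", "income"]

X = history.drop(columns=EXCLUDED + ["had_collision"])
y = history["had_collision"]

model = LogisticRegression(max_iter=1000).fit(X, y)
print(list(X.columns))  # ['annual_mileage', 'prior_claims']
```

As the article discusses below, this only removes the direct channel: the remaining columns can still act as proxies for the excluded ones.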

Problems With Transparency

Car insurance works because the amount of money spent paying for members’ occasional collisions is smaller than the sum of the regular payments members make. But Schwarcz said that’s a simplified view of how insurance really works: it doesn’t account for the many insurance companies competing for customers, or for buyers shopping around for the best coverage at the best price.

Companies jostle for the least risky customers so they can offer lower rates and attract more business. As part of that, they charge different rates to buyers deemed to have different risk levels; for car insurance, those who are more likely to have collisions and file claims pay higher premiums. Insurance companies are allowed to charge customers differently based on risk factors, although many states regulate which factors companies can consider.

“Is that discriminatory? Well, it’s discriminatory in the sense that it’s discriminating on the basis of your perceived risk — but that’s OK discrimination,” Schwarcz said. “Insurance is built on discrimination, but it’s built on discrimination that is permissible, and there are certain impermissible types of discrimination. The question is, what is an insurer doing? And to what extent is its process potentially producing the unfair type of discrimination?”

Schwarcz laid out a hypothetical: If a traditional car insurance company found that low-income drivers were more likely to make claims, perhaps because they can’t afford to pay for minor repairs out of pocket, the company might want to discriminate based on income, but it shouldn’t be allowed to.

“We say they still can’t,” Schwarcz said. “At least that’s the norm — they shouldn’t be able to do that, even if there’s more of a likelihood of a claim, because there’s a countervailing social interest in making sure that people are not charged more who are least able to afford it.”

But when risk models are built using AI, it may be much harder to pin down what insurance companies are basing higher premiums on, he said. For instance, if companies use neural nets, an AI technique that’s the basis for deep learning, the resulting model is essentially an opaque box. An insurance company would know which factors were used to train its model, and evaluating new customers would be as simple as feeding the model the same types of inputs. But the company wouldn’t know how the model internally relates those factors to risk, or which inputs matter most.

“If you’re using an AI, there’s no way of knowing how the AI is using that data,” Schwarcz said. “We don’t even know what ways the AI model is creating linkages between the input data and the output predictions.”
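A small, self-contained illustration of that opacity, on assumed synthetic data: even with full access to a fitted neural net, all anyone can inspect is raw weight matrices.

```python
# A minimal sketch of the opacity problem: a trained neural net
# exposes only raw parameters, so full access to the model object
# is not the same as understanding it. Data is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 8))  # 8 anonymous rating inputs
# The true risk depends on an interaction of inputs 2 and 5.
risk = X[:, 2] * X[:, 5] + rng.normal(scale=0.1, size=2000)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=2).fit(X, risk)

# This is everything a regulator with full model access would see:
for i, w in enumerate(model.coefs_):
    print(f"layer {i}: weights with shape {w.shape}")
# layer 0: weights with shape (8, 32)
# layer 1: weights with shape (32, 32)
# layer 2: weights with shape (32, 1)
# Nothing here states that the model keys on inputs 2 and 5;
# that fact would have to be reverse-engineered.
```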

As a result, AI insurance models may be harder to regulate than traditional ones, he said. Regulators can request the guidelines traditional insurance companies use to determine risk and bar companies from using discriminatory factors. But if insurers use an AI model, regulators who gain access to it would have the same visibility into how rates are determined as the insurance company itself, which could be no visibility at all. AI models could effectively make it harder to hold insurance companies accountable for unfair discrimination.

Beware of ‘Proxy’ Factors

And even if companies avoid training an AI model on personal data such as race or income, the model may still incorporate those factors through “proxy” variables. For instance, if the time of day a customer drives is taken into account when building a car insurance model, that could be a proxy for income level, Schwarcz said.

“If people who drive at a certain time of night are more likely to have claims, an insurer might say we should charge them more,” Schwarcz said, continuing with his hypothetical. “But it may be that the reason there’s a correlation is not because driving at that time is riskier, but because people with lower income drive at the time of night, and people with lower income are more likely to make claims.”

Basically, even if companies withhold data about factors like gender, race and income, the AI can still find other factors that stand in for that data, with effectively the same outcome.
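The sketch below demonstrates that proxy effect on synthetic data built to match Schwarcz’s hypothetical: income is excluded from training, but because late-night driving is made to correlate with income, a model trained only on driving time still ends up pricing on income.

```python
# A minimal sketch of proxy leakage. In this toy world, claims are
# driven by income alone; driving time is merely correlated with it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
income = rng.normal(size=n)  # the excluded, sensitive factor
night_driving = -0.8 * income + rng.normal(scale=0.6, size=n)  # the proxy

# Claim probability depends only on income (lower income, more claims).
makes_claim = (rng.random(n) < 1 / (1 + np.exp(2 * income))).astype(int)

# Train WITHOUT income; driving time is the only feature the model sees.
model = LogisticRegression().fit(night_driving.reshape(-1, 1), makes_claim)
print(model.coef_)  # clearly positive: night drivers get charged more,
                    # which in this toy world just means low-income drivers do
```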

“We don’t want insurers to be using race. We don’t want insurers to be using income. We don’t want insurers to be using gender,” Schwarcz said. “But if you have an AI that’s processing the type of data you described, it’s going to end up creating linkages between the data it has and claims, in part because of the correlation between that data and income.”

Going forward, the industry will have to balance developing AI’s benefits against addressing its challenges, especially its potential to perpetuate harm against historically oppressed groups. Part of that will involve standards and regulations catching up to a quickly changing industry, he said.

“AI raises all these difficult questions that insurance regulation hasn’t really figured out yet,” Schwarcz said. “The future is really unclear. Right now, insurance regulation is really behind, both in modernizing to allow AI where it’s beneficial, while at the same time safeguarding against some of these risks.”
