Tuesday, 8 September 2020

How insurers can use data to beat adverse selection and moral hazard

This week, my ECONS102 class has been covering the economics of information. In particular, we focus on the problems of information asymmetry, and we spend a fair amount of time talking through problems of adverse selection. Adverse selection arises when one of the parties to an agreement (the informed party) has private information that is relevant to the agreement, and they use that private information to their own advantage at the expense of the uninformed party.

A classic example of adverse selection, which I've blogged about many times, occurs in the market for insurance (regardless of whether we are discussing home insurance, car insurance, health insurance, or even life insurance). The insured person knows whether they are high risk or low risk, but the insurer doesn't know - risk is private information. Since the insurer doesn't know how risky any person applying for insurance is, their best option is to assume that everyone is high risk. We refer to this as a pooling equilibrium - all insurance applicants are pooled together as if they are the same risk. The insurer then sets the insurance premium on the basis of the risk pool they think they have (high risk). The low risk people will (rightly) identify that the insurance premium is too high for them, and they drop out of the market, leaving only high risk people buying insurance. The insurance market for low risk people fails - they can't buy insurance if they can't credibly convince the insurer that they are low risk. This problem is referred to as adverse selection, because the people who select into applying for insurance are the people that the insurer least wants to insure!
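To see how this unravelling works, consider a stylised numerical sketch in Python (every number below is invented purely for illustration):

# Stylised adverse selection example - all numbers invented for illustration
loss = 10_000                 # each applicant faces a possible $10,000 loss
p_high, p_low = 0.10, 0.02    # annual claim probability for each risk type
n_high, n_low = 50, 50        # equal numbers of high risk and low risk applicants

# Pooling equilibrium: the insurer cannot tell the types apart, so it
# prices everyone at the average risk of the pool
pooled_premium = ((n_high * p_high + n_low * p_low) / (n_high + n_low)) * loss
print(f"Pooled premium: ${pooled_premium:,.0f}")                      # $600

# A low risk person's expected loss is only $200, so (unless they are very
# risk averse) the pooled premium is a bad deal, and they drop out...
print(f"Low risk expected loss: ${p_low * loss:,.0f}")                # $200

# ...leaving only high risk buyers, so the premium must rise to cover them
print(f"Premium once only high risk remain: ${p_high * loss:,.0f}")   # $1,000

Notice that the pooled premium is actuarially fair for the pool as a whole, but not for either type individually, and that gap is exactly what drives the low risk people out of the market.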

As you know, we do have insurance markets that cater to low risk people, so the markets must have adapted to deal with this adverse selection problem. This involves the private information (about the level of risk) being credibly revealed to the uninformed party (the insurer). If the insurer tries to uncover the private information, or tries to induce the informed party (the person applying for insurance) to reveal it, this is referred to as screening.

Insurers can screen applicants on the basis of their demographic and other information that they provide when they apply for insurance, their insurance history or credit history, and details about what they are insuring (house, car, health, life, etc.). However, insurers are increasingly using online data to screen applicants and determine their risk. Take this example, from The Wall Street Journal (gated) last year:

"Did you document your hair-raising rock-climbing trip on Instagram? Post happy-hour photos on Facebook? Or chime in on Twitter about riding a motorcycle with no helmet? One day, such sharing could push up your life insurance premiums.

In January, New York became the first state to provide guidance for how life insurers may use algorithms to comb through social media posts—as well as data such as credit scores and home-ownership records—to size up an applicant’s risk. The guidance comes amid expectations that within years, social media may be among the data reviewed before issuing life insurance as well as policies for cars and property."

If you're not thinking about how much information you reveal on social media, perhaps you should be, now that it might cost you a higher insurance premium (on the other hand, if you are a low risk person, then perhaps your social media posts will earn you a lower premium). However, that isn't the end of insurance companies' use of data.

Another information asymmetry problem in insurance happens after the insurance contract is agreed. This is the problem referred to as moral hazard - this problem arises when one of the parties to an agreement has an incentive, after the agreement is made, to act differently than they would have acted without the agreement. In the case of insurance, the insured party might act in a more risky manner when they are insured than they would have acted without insurance. They can do this because they have passed some of the (financial) risk of their actions onto the insurer.
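A quick back-of-the-envelope sketch shows how being insured changes the incentive (again, every number here is invented for illustration):

# Stylised moral hazard example - all numbers invented for illustration
# Suppose careless behaviour saves the insured $150 of effort per year,
# but raises the probability of a $10,000 loss from 2% to 10%
loss = 10_000
effort_saved = 150
p_careful, p_careless = 0.02, 0.10

# Uninsured: carelessness adds $800 of expected loss to save $150 - not worth it
extra_expected_loss = (p_careless - p_careful) * loss
print(f"Uninsured: carelessness costs ${extra_expected_loss:,.0f}, saves ${effort_saved}")

# Fully insured at a fixed premium: the extra expected loss falls on the
# insurer, so carelessness costs the insured nothing and still saves $150
print(f"Insured: carelessness costs the insured $0 extra, saves ${effort_saved}")

The agreement itself changes the insured's best course of action, which is why the problem only appears after the contract is signed.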

One solution to moral hazard problems is for the uninformed party (the insurer) to monitor the actions of the informed party (the insured) more closely. And, you guessed it - insurers are looking at data to deal with moral hazard problems. As one example, Sven Tuzovic (Queensland University of Technology) wrote in The Conversation last year that:
...wearable devices are not only being embraced by consumers, but also across insurance industries. Health and life insurance companies collect data from fitness trackers with the goal of improving business decisions.

Currently, these business models work as a “carrot” incentive. That means consumers can benefit from discounts and cheaper premiums if they are willing to share their Fitbit data.

But we could see voluntary participation become mandatory, shifting the incentive from carrot to stick. John Hancock, one of the largest life insurance companies in the United States, has added fitness tracking with wearable devices to all of its policies. Though customers can opt out of the program, some industry experts argue that this “raises ethical questions around privacy and equality in leaving the traditional life insurance model behind”.

In terms of moral hazard, the insured is less likely to engage in risky behaviour if they know that their insurer is watching their every move. Insurers can use the data they collect from devices like Fitbit to not only monitor the insured, but also to determine their risk and adjust future premiums. It potentially solves both moral hazard and adverse selection problems at the same time.
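As a rough illustration of how that feedback loop might work, here is a minimal sketch of a premium adjustment rule based on tracked activity. The thresholds, discounts, and loadings are entirely hypothetical - this is not any insurer's actual formula:

# Hypothetical premium adjustment based on fitness tracker data - the
# thresholds and adjustment factors are invented for illustration
def adjusted_premium(base_premium: float, daily_steps: list[float]) -> float:
    """Discount (or load) the premium based on observed activity."""
    avg_steps = sum(daily_steps) / len(daily_steps)
    if avg_steps >= 10_000:    # active: treat as lower risk
        return base_premium * 0.85
    elif avg_steps >= 5_000:   # moderately active: no adjustment
        return base_premium
    else:                      # sedentary: treat as higher risk
        return base_premium * 1.15

print(adjusted_premium(1_000, [12_000, 9_500, 11_000]))  # 850.0 (discount)
print(adjusted_premium(1_000, [3_000, 4_000, 2_500]))    # 1150.0 (loading)

Any real insurer's model would be far more sophisticated, but the basic feedback loop - observed behaviour feeding directly into the price - is the same.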

And this is just the beginning. Insurers may turn to more sophisticated artificial intelligence tools in the near future, as David Tuffley (Griffith University) wrote in The Conversation last year:

Then you have a car accident. You phone your insurance company. Your call is answered immediately. The voice on the other end knows your name and amiably chats to you about your pet cat and how your favourite football team did on the weekend.

You’re talking to a chat-bot. The reason it “knows” so much about you is because the insurance company is using artificial intelligence to scrape information about you from social media. It knows a lot more besides, because you’ve agreed to let it monitor your personal devices in exchange for cheaper insurance premiums.

This isn’t science fiction. More than three-quarters of insurance executives believe artificial intelligence will revolutionise the industry within a few years. By 2030, according to McKinsey futurists, artificial intelligence will mean your car and life insurance premiums could change based on whether you decide to take one route or another.

If you're starting to think that there is nowhere to hide, you're right. Even if you refuse to let your insurer access your data, you're simply suggesting to the insurer that you are high risk. The insurer may frame it as if those agreeing to share data are receiving a discount, but really it is applying a higher premium to the high risk people, who are the least likely to want to share their data.

Should we be worried? Arguably no, unless we are high risk people wanting to pass ourselves off to insurers as low risk. Otherwise, we get insurance priced at premiums that are actuarially fair and accurately reflect our level of risk. However, as Tuffley notes, we should be concerned about what happens to the data that insurers collect about us:

An insurer might also be tempted to use the data for purposes other than assessing risk. Given its value, the data might be sold to third parties for various purposes to offset the cost of collecting it. Advertisers, marketers, lobbyists and political parties are all insatiably hungry for detailed demographic data.

It pays to read the fine print on contracts, and if insurers are going to collect much more data about us in the future, we should at least be aware of what might happen to that data.

[HT: The Dangerous Economist last year, for the Wall Street Journal article]
