From time to time, somebody considering a world with lots of self-driving cars suggests that human driving will become discouraged, “because insurance rates will go through the roof.” The idea is that if the self-driving cars are much safer than humans (true by definition, because they won’t see wide deployment until they are), human drivers will look so risky in comparison that it will cost too much to insure them.
This goes against the normal rules of insurance. Normally, insurance is priced by taking a pool of drivers, looking at the total cost of accidents caused by drivers in that pool, and dividing it by the number of drivers. A bit more is added to cover expenses. Most auto insurers don’t actually make a profit on the underwriting itself; instead, they make their profit because they collect billions of dollars at the start of the year and pay it out slowly over the course of the year, earning income by investing the float. Sales costs are low for auto insurance because it’s compulsory: it’s easy to sell a product customers are legally forced to buy!
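That pool-based pricing can be sketched in a few lines. The figures below are made up purely for illustration, not real insurance data:

```python
# Pool-based premium pricing: spread the pool's total losses across
# all drivers, then add a load for expenses. All numbers hypothetical.
total_losses = 500_000_000   # total accident payouts for the pool, in dollars
num_drivers = 1_000_000      # drivers sharing the risk in the pool
expense_load = 0.15          # fraction added on top to cover expenses

pure_premium = total_losses / num_drivers     # loss cost per driver: 500.0
premium = pure_premium * (1 + expense_load)   # what each driver actually pays

print(pure_premium, premium)
```

The key point of the arithmetic: each driver’s price depends only on the losses of the pool they are in, not on how anyone outside the pool performs.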
At the basic level, your insurance would only go up if human drivers like you were having more accidents. It doesn’t matter if the robots are having fewer. That’s not likely to happen; in fact, it’s very likely those human drivers, in new cars equipped with a wide variety of accident-avoidance technologies (some developed to make the robots), will be having far fewer accidents. Their insurance will get cheaper, not more expensive. In addition, with more reliable robot drivers on the road, once people get used to them, accidents where both drivers were partly at fault should also decline.
This could change if the cost of each individual accident went up. But the vast majority of car accidents involve property damage only, and that cost is not going to rise much. Again, if anything, collision-warning systems and automatic braking will make the damage smaller.
It could be that in injury accidents, in the tiny fraction that go to court, somebody might argue that the human driver who injured (or killed) the victim was negligent simply for choosing to take the risk of driving themselves rather than riding as a robocar passenger, and might try to assign higher damages due to that special negligence. The insurance companies will use their considerable weight to fight this. They already keep awards down to much less than people think they should be, and will keep doing so. Generally, awards tend to match how much insurance the defendant has, unless the defendant is particularly wealthy and worth going after. This puts such an outcome fairly far in the future, if it happens at all. Perhaps in the very distant day when almost everybody rides as a passenger and driving manually is an affectation as rare as riding a horse, this type of thinking might come into play, mostly for those wishing to drive an old-school car without advanced crash-protection systems for human drivers.
For the robocar companies
The forecast for insurance on the robocars themselves is quite different. Today, insurance is priced by putting drivers into risk groups, and rooms full of actuaries quantify the risk of their driving. Those actuaries have no knowledge of the accident risk of a self-driving car; it is the engineers building the car who will be doing extensive study of that risk, and quantifying it far better. While actuaries study the patterns of human driving, robocars will not have the same sort of patterns. In fact, if a robocar ever causes an accident, the bug that caused it will be fixed, and no car in the fleet (or any other fleet) will ever make that mistake again! Each accident will be unique, which is both good and bad. The good part is that they will be rare. The bad part is that, at first, our legal system won’t know how to deal with them efficiently.
The car accident is, by far, the most common large tort in the world, with 6 to 25 million happening each year in the USA, depending on severity and how you count. (Most are small bumps the police and insurance companies never hear of.) In spite of that, it almost never ends up in court. One of the biggest roles of the insurance system is to make resolution much more efficient. If every accident ended up in court, accidents would cost vastly more. Some argue the industry is too good at this: with their own incentive to keep awards low, insurers pay out about $200B in damages in the USA, while NHTSA estimates there are around $870B in real damages, quite a difference. But it would be even larger if every case went to court. Lawyers don’t have a knack for making things cheaper to resolve. With each accident being different, costs could get very high.
The developers will always delve into the cause of any of their accidents in depth, so they can fix the problem, and they will be required to provide the results of that investigation in any legal conflict. As yet, though, we have no way to streamline that process; we need to find one.
The role of insurance companies will change. When it comes to the basic insurance product, it makes much more sense for fleet operators like Waymo to self-insure. They know the risk far better, and they are already pooling the risk. In the case of Waymo (owned by Alphabet) and Amazon’s Zoox, the parent companies have pockets deep enough to absorb the risk themselves.
This won’t happen at first. Nobody wants it to be efficient at first. In fact, it’s strange that we’ve managed to make tragic injury accidents into something “efficient.” At first, the public reaction to people being harmed by machines will be to rebel against efforts to make resolving such cases efficient. Or so one would predict; in the only example we have to date, when an Uber test vehicle killed a pedestrian in Arizona in 2018, the matter was instead settled quickly and quietly out of court.
Even so, the efficiency is necessary. After all, if robocars have 1/5th the accidents of humans, that’s a great boon for society. But if going to court over unique accidents with deep-pocketed defendants makes each one cost six times as much to resolve, society pays 1.2 times as much in total: an overall loss and the wrong outcome. Companies that did great good would be punished for it. If each accident costs 60 times as much to resolve, then there isn’t a business at all, in spite of all the lives being saved.
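The break-even arithmetic can be made explicit. With the human totals normalized to 1, and the 1/5th accident rate from above (the cost multipliers are hypothetical):

```python
# Normalized comparison: robocars have 1/5th the accidents, but each one
# may cost a multiple of a human accident to resolve. Multipliers hypothetical.
human_total = 1.0                # human accidents x cost per accident, normalized
robocar_accidents = 1.0 / 5      # one-fifth the accident count

for cost_multiplier in (1, 5, 6, 60):
    robocar_total = robocar_accidents * cost_multiplier
    print(cost_multiplier, robocar_total)

# At 5x the resolution cost, totals break even (0.2 * 5 = 1.0);
# at 6x, society pays about 1.2x as much overall; at 60x, about 12x.
```

The break-even point is a cost multiplier equal to the safety improvement factor; anything above it turns a safety win into a net economic loss.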