OTA updates won't save buggy autonomous vehicle software

There is a feeling that it's OK for software to ship with questionable quality if you have the ability to send out updates quickly. You might be able to get away with this for human-driven vehicles, but for autonomous vehicles (with no human driver responsible for safety) this strategy might collapse.

Right now, companies are all pushing hard to do quick-turn Over The Air (OTA) software updates, with Tesla being the poster child of both shipping dodgy software and pushing out quick updates (not all of which actually solve the problem as intended). There is a moral hazard that comes with the ability to do quick OTAs in that you might not spend much time on quality since you know you can just send another update if the first one doesn't turn out as you hoped.

"There's definitely the mindset that you can fix fast so you can take a higher risk," Florian Rohde, a former Tesla validation manager   (https://www.reuters.com/article/tesla-recalls-idTRNIKBN2KN171)

For now, companies across an increasing number of industries have been getting away with shipping lower quality software, and the ability to do internet-based updates has let them get by with that strategy. The practice is so prevalent that the typical troubleshooting step for any software, right after "is the power turned on," has become "have you downloaded the latest updates."

But the reason this approach works (after a fashion) is that there is a human user or vehicle operator present to recognize something is wrong, work around the problem, and participate in the troubleshooting. In a fully automated vehicle, that human isn't going to be there to save the day.

What happens when there is no human to counteract defective, potentially fatally dangerous software behavior? The biggest weakness of any automation is typically that it is not "smart" enough to know when something is going wrong that is not supposed to happen. People are pretty good at this, which is why even for very serious software defects in cars we often see huge numbers of complaints compared to few actual instances of harm -- because human drivers have compensated for the bad software behavior.

Here's a concrete example of a surprising software defect, pulled from my extensive list of problematic automotive software defects -- NHTSA Recall 14V-204:

  • Due to a software calibration error, the vehicle may display "drive" but actually engage "reverse" for 1.5 seconds.

If a human driver notices the vehicle going the wrong direction they'll stop accelerating pretty quickly. They might hit something at slow speed during the reaction time, but they'll realize something is wrong without needing explicit instructions for that particular failure scenario. In contrast, a computer-based system that has been taught the car always moves in the direction shown by the transmission display might not even realize something is wrong, and accelerate right into a collision.
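As a thought experiment, here is roughly what such a defensive cross-check might look like: a runtime plausibility monitor that compares the displayed gear against the vehicle's actual direction of motion and inhibits acceleration on a mismatch. This is a minimal sketch; the signal names, thresholds, and reaction are all illustrative assumptions, not any real vehicle API.

```python
# Hypothetical plausibility monitor: cross-check the displayed gear against
# actual vehicle motion. All names and thresholds are illustrative
# assumptions, not any production system's interface.

DRIVE, REVERSE = "D", "R"
SPEED_DEADBAND_MPS = 0.3  # ignore sensor noise near standstill (assumed value)

def gear_motion_plausible(displayed_gear: str, longitudinal_speed_mps: float) -> bool:
    """Return False if the car is clearly moving opposite to the displayed gear."""
    if abs(longitudinal_speed_mps) < SPEED_DEADBAND_MPS:
        return True  # too slow to judge direction reliably
    if displayed_gear == DRIVE and longitudinal_speed_mps < 0:
        return False  # displaying "drive" but moving backward
    if displayed_gear == REVERSE and longitudinal_speed_mps > 0:
        return False  # displaying "reverse" but moving forward
    return True

def control_step(displayed_gear: str, longitudinal_speed_mps: float,
                 requested_accel: float) -> float:
    """Inhibit acceleration when gear and motion disagree."""
    if not gear_motion_plausible(displayed_gear, longitudinal_speed_mps):
        # Defensive reaction: stop accelerating rather than trust the display.
        return 0.0
    return requested_accel
```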

Obviously a computer can be programmed to deal with such a situation if it has been thought of at design time. But the whole point here is that this is something that isn't supposed to happen -- so why would you waste time programming a computer to handle an "impossible" event? Safety engineering uses hazard analysis to mitigate low-probability risks, but even that often overlooks "impossible" events until after they've occurred. Sure, you can send an OTA update after the crash -- but that doesn't bring crash victims back to life.

In practice, the justification for shipping less-than-perfect automotive software has been that human drivers can compensate for problems. (In the ISO 26262 functional safety standard, one takes credit for "controllability" in reducing the assessed risk of a potential defect.) When there is no human driver, that credit evaporates, and shipping defective software is more likely to result in harm to a vehicle occupant or other road user before anyone even notices there is a problem for an OTA update to correct.
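To make that controllability credit concrete: ISO 26262-3 rates each hazard by severity (S1-S3), exposure (E1-E4), and controllability (C1-C3), and its ASIL determination table effectively drops the integrity level one step for each step of controllability credit taken. Here is a rough sketch of that table logic, using the commonly cited reduction of the table to a simple sum rule:

```python
# Sketch of the ISO 26262-3 ASIL determination table, which collapses to a
# sum rule: S (severity, 1-3) + E (exposure, 1-4) + C (controllability, 1-3)
# of 10 -> ASIL D, 9 -> C, 8 -> B, 7 -> A, 6 or less -> QM (no ASIL).

def asil(s: int, e: int, c: int) -> str:
    """Return the ASIL for a hazard rated S1-S3, E1-E4, C1-C3."""
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(s + e + c, "QM")

# The same severe, frequently encountered hazard under two controllability claims:
print(asil(3, 4, 3))  # C3: "difficult to control or uncontrollable" -> ASIL D
print(asil(3, 4, 1))  # C1: most drivers can simply compensate      -> ASIL B
```

With no human driver in the loop, it is hard to justify claiming anything better than C3, so the two-level discount in this example disappears.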

Right now, a significant risk with OTA updates is the moral hazard that software will be a bit more dangerous than it should be because developers push the boundaries of human drivers' ability to compensate for defects. With fully automated vehicles that compensating ability falls off a cliff, and even small OTA update defects could result in large numbers of crashes across a fleet before there is time to correct the problem. (If you push a bad update to millions of cars, you can have a lot of crashes even in a single day for a defect that affects a common driving situation.)
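A back-of-envelope calculation shows how fast that exposure accumulates. Every number below is a made-up assumption chosen purely for illustration:

```python
# Back-of-envelope fleet exposure from a bad OTA update.
# Every number here is an illustrative assumption, not real data.

fleet_size = 1_000_000          # vehicles that received the update
trips_per_vehicle_per_day = 2.5
defect_trigger_rate = 1e-4      # fraction of trips hitting the bad scenario
crash_given_trigger = 0.05      # no human driver to catch the fault
days_to_detect_and_fix = 1      # even with a fast OTA turnaround

expected_crashes = (fleet_size * trips_per_vehicle_per_day
                    * defect_trigger_rate * crash_given_trigger
                    * days_to_detect_and_fix)
print(f"Expected crashes before the fix lands: {expected_crashes:.0f}")
# 2.5M trips/day * 1e-4 = 250 trigger events/day -> ~12 crashes per day
# from a single defect; a more common driving situation scales this up fast.
```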

The industry is going all-in on fast-and-loose OTAs to be more "agile" and iterate software changes more quickly without worrying as much about quality. But I think they're heading straight into a proverbial brick wall that will be hit when the human drivers are taken out of the loop. Getting software quality right will become more important than ever for fully autonomous vehicles.