This article was co-authored by Caden Rosenbaum, Senior Policy Analyst at Libertas Institute.
The recent fatal crash in Draper involving a Tesla on Autopilot and a motorcyclist has sparked debate about autonomous vehicles (AVs). In this case, the accident occurred when the motorcyclist made an illegal lane change, crossing double white lines into the HOV lane. That was a human error, and precisely the kind of error autonomous systems are designed to reduce, if not eliminate.
The Tesla’s Autopilot system detected the motorcycle and attempted to brake, as did the driver, but the collision still occurred, raising concerns about the technology’s readiness as it is deployed onto public roads. With Tesla’s latest announcement of fully autonomous vehicles rolling out in California and Texas, that question is on everyone’s mind. But let’s be clear: this isolated incident shouldn’t overshadow the broader and undeniable benefits AVs are bringing to road safety. It’s important to consider the alternative, that a human driver might have failed to react at all, as is the case all too often.
When deciding not to file charges, District Attorney Sim Gill noted that investigators learned the Tesla registered the motorcycle and tried to stop nearly a second before the driver pressed the brakes himself, but it couldn’t stop fast enough to avoid the collision. “If anything,” District Attorney Gill said, “the system did what it was supposed to do. It reacted to the hazard sooner than a human being could have reacted.”
The outcome was unfortunate, but it’s critical to ask: would the accident have been avoided if the driver had been operating the car without Autopilot? In this case, the answer is no.
Incidents like this shouldn’t derail progress on AV technology. Consider the early days of public air travel, when fatalities were a tragic part of advancing the technology. The crash of Air Union’s “Farman Goliath” in 1923, and others like it, stemmed from the limitations of early aircraft and the absence of safety systems. In the case of airships, which would have been a much cheaper and more efficient means of transporting cargo, famous accidents like the Hindenburg disaster put an end to nearly all attempts to innovate for the better part of a century.
Those accidents were the result of the technology itself not being fully developed. But today, with autonomous vehicles, the story is different: most fatalities involving AVs, like the Draper crash, are caused not by the AV systems themselves but by human error, which accounts for roughly 90% of all crashes. AVs are an upgrade to the status quo, not a new form of travel with a new set of dangers to contend with.
The Draper accident should not derail progress on AV technology either. It should prompt us to focus on improving these systems through further testing and innovation. History has shown that technological advancements, such as early air travel, were not without risks, but those risks paved the way for safer, more reliable systems in the long run. The reality is that autonomous vehicles have already begun to reduce the dangers we’ve come to accept on the road. While this crash is heartbreaking, it should serve as a reminder of the primary goal of autonomous travel: minimizing human error to save lives. That is a goal we can only achieve if we allow innovators to keep working on the problem.