Unless you’ve been living deep in the wilderness, you’ve been hearing that self-driving cars are just around the corner—for at least a dozen years now. Every six months would bring more fevered proclamations by automakers, tech startups, OEMs, industry pundits, and mainstream media that autonomous vehicles, or “AVs,” would make the very act of driving obsolete any day now.
We were also promised that AVs would utterly transform our lives. They’d make our roads safer by eliminating human driver error, improve our environment because fewer vehicles would be needed to meet our transportation needs, return valuable productive time to us during long commutes, and cut congestion.
But about two years ago, the pace of these proclamations began slowing faster than a Tesla regeneratively braking up a hill. In fact, some experts now admit that fully self-driving cars that we can privately own and operate are still twenty to thirty years away—or may never arrive at all. The stunning collapse of Argo AI, jointly funded by Ford and Volkswagen, is the latest casualty, as frustrated investors, now bereft of billions of dollars with no viable ROI in sight, close up shop.
So, just what the heck happened? What are the technical, operational, and regulatory reasons why AVs are taking so long? Are they still coming at all, and if so, when? And will we really be safer when they do arrive, and we’re not the ones operating them?
There’s no doubt that AVs represent the biggest leap forward in transportation since the automobile itself was invented in 1885 by Karl Friedrich Benz. A century later, the first AVs appeared—in 1984 through a national defense funding project in the United States, and again in 1987 through a joint European collaboration (which happened to include Mercedes-Benz, the descendant of the original automobile inventor’s company).
Over the past twenty years, AV technology has progressed at warp speed, made possible by exponential advancements in chip and software technology, radar, lidar, intensive navigation mapping, artificial intelligence, and machine learning. These technologies are progressing so swiftly that even two years make a huge difference.
But, as of February 2023, here’s where we are: (1) We are still not even close to having self-driving vehicles that the general public can buy and operate; (2) AV research, development, testing, and infrastructure construction are continuing to progress, especially in California, Israel, Europe, and China; (3) at least 100 billion dollars have been invested; (4) almost all new vehicles are equipped with advanced driver-assistance systems (ADAS), which are the building blocks of AVs; and (5) many consumers mistakenly believe that some current vehicles equipped with ADAS are able to drive themselves (more on that later).
Before we continue, we need to define exactly what an autonomous vehicle is. The terms “autonomous,” “automated,” “self-driving,” and “driverless” are often used interchangeably, and their subtle but important differences are argued even within the industry.
To reduce confusion, the US-based Society of Automotive Engineers International (SAE) proposed six levels of driving automation in 2014, which are now widely accepted globally and continuously updated. Although SAE J3016 is often defined in terms of how much humans are involved in the driving task, it’s far more accurate to measure the levels by how much automation technology supervises and accomplishes it; it’s a terminology standard, not a safety standard.
Level 0 represents no driving automation at all, while Level 5 is fully automated, requiring no human input and operating in all conditions. Levels 1, 2, and 3 use increasing amounts of ADAS, such as lane-centering and adaptive cruise control, but they operate under limited conditions, and humans are still required to monitor the technology and override it when it makes mistakes or a system failure occurs. Level 4 vehicles intervene on their own when things go wrong, though a human can still manually override them. They operate only in limited areas on preprogrammed routes (a practice known as “geofencing”).
The highest level available on the vehicles we buy and drive today is still only Level 2, while robo-taxi services such as Waymo and Cruise operate at Level 4. For the purposes of this article, I will refer only to vehicles with Level 4 or 5 automation as “AVs.”
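For readers who like things concrete, the taxonomy above can be sketched as a small lookup table. The wording below paraphrases the levels as described in this article for illustration only; it is not the official SAE J3016 text:

```python
# Paraphrased summary of the SAE J3016 levels discussed above
# (illustrative wording, not the official standard text).
SAE_LEVELS = {
    0: ("No Driving Automation", "human does all driving"),
    1: ("Driver Assistance", "single ADAS feature, e.g. adaptive cruise control"),
    2: ("Partial Automation", "steering plus speed control; human must monitor"),
    3: ("Conditional Automation", "drives in limited conditions; human takes over on request"),
    4: ("High Automation", "handles failures itself, but only within a geofenced area"),
    5: ("Full Automation", "no human input needed, in all conditions"),
}

def is_true_av(level: int) -> bool:
    """Per this article's usage, only Levels 4 and 5 count as 'AVs'."""
    return level >= 4

print(is_true_av(2))  # False: a Level 2 consumer car is not an AV
print(is_true_av(4))  # True: a geofenced robo-taxi is
```

The helper function simply encodes the article’s own cutoff: anything below Level 4 still depends on a human monitor and, by this definition, isn’t an AV.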
It’s often said that in any major undertaking, the last miles are the most elusive. With autonomous vehicles, that’s literally been true. While many pieces of self-driving technology are ready for widespread use, their inventors have been moving far faster than bureaucratic government agencies can regulate them, especially in the United States, where few uniform federal standards exist and policy is left up to individual states.
This hobbles the entire industry, which desperately needs government to uniformly implement unambiguous laws and support around testing, liability, and infrastructure—yesterday. In fact, developing AVs is so complicated that in 2020, the World Economic Forum created the Safe Drive Initiative to offer an effective framework for regulators and companies worldwide to collaborate, develop, and deploy AVs on public streets in the safest ways possible.
Liability, of course, is the biggest elephant in the room. If a human driver crashes a car today, it’s pretty easy to figure out who caused it. But an AV, with no apparent human operator, is trickier. Is it the automaker? The third-party software company responsible for the car’s electronic brains and algorithms? If the vehicle uses wireless networks, roadway sensors, and lane markings to navigate, could a communications provider or even a local transportation department be at fault? And, unless the vehicle is fully driverless, to what extent does some blame remain with the human operator?
Accounting for thousands of unpredictable real-world scenarios and conditions amid very litigious landscapes is giving automakers and governments plenty of reasons to slow down. This is one reason why, for instance, Audi decided not to roll out the Level 3 conditional automation on its flagship A8 in the US in 2019—it was just too risky.
And then there is the insurance industry. While insurers have started to talk a good game, the industry has so far been one of the slowest to deliver meaningful digital transformation or simplify user experiences. Allocating liability and settling AV-related claims will require motor insurers to access and process complex data from multiple third parties, all in an ever-thornier data-sharing environment: think GDPR, who owns the data, whether there is a legal basis for processing it, and how to secure the necessary consents while preserving data subjects’ ability to enforce their rights.
To navigate its world, an autonomous vehicle needs to measure its position in relation to other vehicles, people, signs, animals, buildings, poles, curbs, and of course, the road itself. To do this, it uses built-in cameras, sensors, lidar, and radar that process staggering amounts of information to form “pictures” of its environment that help it determine what to do—or avoid—next.
This can’t happen on just any road, however. Lane and shoulder markings must be consistently detectable, even when they’re covered by snow, ice, leaves, and debris. Roadside sensors need to be installed on sidewalks, curbs, and lanes to foresee emerging dangers and transmit that information to the AV. And signage needs to be machine-readable, not just legible to human eyes.
In addition, AVs process too much data to fit on their onboard microchip brains, so they need to stay connected to cloud-based platforms for geofenced mapping, vehicle-to-vehicle communications, remote monitoring, and other functions—and this connection needs to be completely dependable with no signal dropouts, and robust backup systems at the ready. This may mean installing transmitters at specific intervals along the roadway, especially in dense urban areas with heavy usage and many obstacles, or where difficult topography like mountains and cliffs can block traditional signals.
Given these requirements, most of the world’s roads just aren’t ready for AVs in any large numbers yet. An exception is forward-thinking China, where so many towns and cities are sprouting at lightning speed that new highways are already being built with dedicated lanes for AVs. In other countries where the urban landscape is already developed, a wholesale upgrade of existing infrastructure is simply not viable given the scale and cost of comparable projects. As an example, in the UK the cost of building the “High Speed 2” rail links has soared to over £100bn and will take until at least 2045 to complete.
One of the biggest obstacles to AV acceptance is public opinion. Our unreasonable expectations of automated driving mean that we hold it to incredibly unfair double standards.
According to the US federal traffic agency NHTSA, 5.25 million police-reported motor vehicle crashes took place in 2020—that means an average of 6,252 people were injured and 106 people died every day that year, with relatively little public attention. By contrast, in an eleven-month period from 2021 to 2022, true AVs were reported to be involved in 130 crashes. None were serious, and they were mostly caused by regular vehicles rear-ending them and not by the AVs themselves. There were zero deaths. In fact, just one fatality has happened in the US among true AVs (when an Uber human safety driver wasn’t paying attention), plus five reported fatalities among vehicles with partially automated driving technology (specifically Teslas) in the entire past seven years.
Because of societal fear of the unknown and media hype, incidents involving self-driving cars tend to be highly publicized, while the huge loss of life with vehicles with no automation, to which society is tragically accustomed, is overlooked. On the one hand, it is only natural that a new technology should be thoroughly vetted from a safety perspective. However, all this plays into the worst fears of a general public conditioned to remember and fear the novel, the negative, and the unknown. And that doesn’t bode well for consumer acceptance.
Despite the famous 2015 US Department of Transportation brief stating that human error and bad choices cause 94 percent of fatal vehicle crashes, the US collision rate for 30- to 80-year-old drivers is fewer than 330 crashes per 100 million miles driven. So the challenge for the AV industry is to deliver vehicles that are even safer than this, and creating a foolproof machine driver is forcing us to confront the complexities of driving as never before.
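To make that benchmark concrete, here is a quick back-of-the-envelope calculation using only the figure quoted above, plus an assumed 12,000 miles driven per year (a commonly cited US average, not a number from this article):

```python
# What "fewer than 330 crashes per 100 million miles" means in practice.
crashes_per_100m_miles = 330                      # upper bound quoted above
miles_per_crash = 100_000_000 / crashes_per_100m_miles

# Assumption for illustration: ~12,000 miles driven per year.
annual_miles = 12_000
years_per_crash = miles_per_crash / annual_miles

print(round(miles_per_crash))    # roughly 303,000 miles between crashes
print(round(years_per_crash))    # roughly 25 years of typical driving per crash
```

In other words, an average driver goes on the order of a quarter century between police-reportable crashes, which is the bar a "safer than human" AV has to clear.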
We tend to believe in the myth that driving is a simple task because we do it regularly with little effort and mostly without incident. But as engineers struggle to design AVs that behave safely and predictably in countless driving situations, they’re finding out that humans are often much better at sorting out information, prioritizing, making judgments, and adjusting to quickly changing events than the most robust software algorithms and sensors. “Humans are really, really good drivers—absurdly good,” George Hotz, founder of self-driving startup Comma.ai, told Bloomberg News.
With our current technology, AVs must make binary decisions on how to handle gray situations (called “edge cases”), like a bird standing in the roadway that will likely fly off before the car reaches it, or when it’s safe to make an unprotected left turn. An AV will detect a pedestrian standing by a curb and stop, thinking that person will step off into its path, but it may not “see” that person waving their hand to gesture for the vehicle to proceed. Humans can discern this body language instantly, but training AVs to recognize their meaning is challenging.
Don’t worry, AVs are still coming—just not in your own private garage any time soon. Given the immense liability and safety obstacles outlined above, AVs do best in situations where the route is predictable and environmental variables are limited. We’ve already been seeing them for years in the form of shuttle services (so-called “robo-taxis”), which run at low speeds on predetermined geofenced routes in many cities around the world.
AVs are especially useful in hazardous environments such as mining and construction, where their deployment can keep humans out of harm’s way and increase worker safety. Agriculture is also ideal for AV deployment—think of a tractor or combine that needs to do repetitive plowing, fertilizing, planting, and harvesting, and can do so 24 hours a day during a time-critical season.
Much of what drives these alternate uses for automation, literally, is a severe labor shortage, especially in commercial trucking. In the US alone, a shortfall of 160,000 human drivers is expected by 2030, with a profound effect on the nation’s entire economy if goods can’t get where they need to go. Fully autonomous trucks face the same infrastructure and edge-case issues as their smaller counterparts, but implementing and scaling up at least partial automation is easier, since freight trucks and their drivers are already subject to heavy federal regulation and oversight. The commercial benefits are also more clear-cut, especially since automation can help eliminate the safety issues related to drowsy and distracted driving and accomplish more trips in less time.
For now, don’t expect too many fully autonomous trucks on a road near you next year, but you will be seeing trucks running with partial automation and people on board (after all, a real human is still needed to help with loading operations at the destination).
And where won’t we be seeing AVs? It’s safe to say that the more complex and less predictable the road and supporting infrastructure, the greater the challenge will be in training a machine to replace a driver. So, think highways over country lanes, AV-only urban centers over mixed-AV or non-AV urban driving, developed world over undeveloped nations, simple over complex weather patterns, and so on.
So much attention has been focused on future AVs that we’ve overlooked the far more serious problem staring at us when we get into a modern vehicle made in the last five years—the advanced driver-assistance systems that can give us the illusion that the car can drive itself.
Before I continue, I must point out a critical distinction between true AVs and what consumers often believe are AVs. No current Tesla is an AV—even those with the egregiously named Autopilot and Full Self-Driving features, which are merely very advanced ADAS. Current Teslas should never be mistaken for true AVs like those developed by Waymo, Cruise, Zoox, and other manufacturers.
The irony is that, despite widespread consumer hesitation about feeling safe riding in a fully autonomous car, we’ve gotten pretty comfortable letting technology take over many driving tasks for us already. Just because a vehicle can steer, accelerate, brake, and make some lane-changing decisions on its own doesn’t mean it can drive itself, but it’s easy to forget that and think we can turn our attention to our phones or even take a nap instead. In fact, this complacency has become such a problem that automakers have developed countermeasures such as the driver-monitoring system in General Motors’s Super Cruise, which tracks eye movements to make sure the driver keeps paying attention.
The truth is that this period of partial but not full automation is probably the most dangerous time on this road to autonomous driving. It simply isn’t realistic to tell humans they can take their hands off the wheel or don’t need to fully operate their machines, and yet expect them to snap to instant attention and awareness if something goes wrong. Given that it looks like we’ll still be at least partly driving ourselves for a long time, that’s all the more reason to keep up our driving skills and approach driving with the same level of caution we always have.
About the author
Mi Ae Lipe
Mi Ae Lipe is a freelance editor and graphic designer in Seattle, Washington, who lives another life as a traffic safety advocate. She blogs at Driving in the Real World, tweets daily driving news links and tips on Twitter at @DrivingReal, and writes a regular column on street driving for BMW CCA’s Roundel magazine. She is a past recipient of the NHTSA Award for Public Service for her work in driver training in Washington State, and she is also a member of the Washington State Transportation Commission’s Autonomous Vehicle Work Group’s Safety Subcommittee.