After Cruise Ban, What’s Next for Autonomous Vehicles?

Fully autonomous vehicles encountered another roadblock after Cruise’s ban in San Francisco. Here’s what that could mean for the future of AVs and a potential path forward. 

Written by K. Scott Griffith
Published on Nov. 09, 2023

The future of autonomous vehicles is at a crossroads. On Oct. 24, Cruise, a robotaxi company that transports passengers across San Francisco in electric Chevy Bolts outfitted with autonomous gear, was ordered to halt operations in the city immediately. The order came after a woman crossing a downtown street was struck by a hit-and-run driver and then pinned beneath one of Cruise’s autonomous vehicles. 

The California Department of Motor Vehicles suspended Cruise’s permit to operate, stating that its vehicles weren’t safe for public operation and that the company had misrepresented safety information. Days after the California ruling, Cruise announced that its fleets in other U.S. cities would operate with human oversight.

This follows a troubling trend regarding autonomous vehicles. 

2 Types of Autonomous Vehicles

  1. Driver-assisted: These vehicles deploy autonomous technology to support the driver through features like backup cameras, adaptive cruise control and lane-keeping assistance. 
  2. Fully autonomous: These are driverless vehicles that don’t use steering wheels or braking and acceleration pedals and are controlled solely by autonomous vehicle technology. 

But for every autonomous vehicle death, there are thousands more caused by our driving behaviors. The reality is that by changing our mindset about the development and oversight of driverless cars, we can dramatically reduce the overall number of deaths caused by traffic accidents.


What’s Preventing Autonomous Vehicle Growth?

We see the world through the lenses of our experiences. Usually, we get to where we’re going without accidents or close calls, even when we engage in risky behaviors like speeding or texting. Yes, it’s risky, but it’s the devil we know, and we’ve grown dependent on its benefits. Driving falls within our societal risk tolerance.

Statistics tell a different story. The National Highway Traffic Safety Administration estimated that 42,795 people died in motor vehicle traffic crashes in 2022, according to its latest report. When it comes to driving safely, we simply aren’t that reliable.

Conversely, we’re often disproportionately frightened by crashes involving autonomous vehicles. We fear the unknown and don’t understand the algorithms and decision-making capabilities of these “black boxes.”

Yet fully autonomous vehicles, or those without a steering wheel or braking and acceleration pedals, have the potential to save thousands of lives. With full implementation of driverless vehicles, some researchers estimate deaths could ultimately fall to 1 percent of current rates.

However, lawsuits and regulatory restrictions, such as the California DMV’s San Francisco prohibition, threaten to curtail these benefits by causing manufacturers to slow driverless development in favor of driver-assisted technology. That puts responsibility on humans rather than automation, even though human error demonstrably causes far more deaths than driverless vehicles do.


Why Driver-Assisted Vehicles Aren’t the Answer

The allure of driver-assisted technology is understandable. Driver-assist provides the gentle nudges we can appreciate, without giving up our sense of control. Few of us like to parallel park, so we gladly surrender that task. 

But driver-assisted AVs pose dynamic challenges, often causing the driver to over- or under-react. Under-reaction can happen through complacency: the driver trusts that the autopilot “knows what it’s doing” and doesn’t intervene until it’s too late, such as when adaptive cruise control malfunctions.

Overreaction happens when a driver-assist nudge surprises the driver, who then tries to bring the vehicle back to its intended course. Consider a driver who wants to change lanes but fails to signal. As the driver steers into the adjacent lane, the automation gently beeps or vibrates and directs the vehicle back into the original lane. The driver, unfamiliar with the technology, turns the wheel even more aggressively until the input overrides the automation feature. Depending on what level of override the software allows, a human-automation tug-of-war may result. The struggle can be physical or cognitive, leading to a “What’s it doing now?” phenomenon. 

Driver-assisted autonomous vehicles aren’t a completely new phenomenon; they parallel the rise of autopilot in airplanes. The key to that evolution was proficiency-based training, which all pilots are required to complete, along with two reporting programs. Without fail, pilots must pass annual exams that demonstrate not only knowledge, skills and abilities, but also proficiency in using the automation under different flight situations.

Meanwhile, the Aviation Safety Action Program and the Flight Operations Quality Assurance also collect both crew-informed and digital data to monitor trends and correct errors. Combined, those programs have amassed petabytes of data illuminating risk — both technological and human — in everyday flight operations, enabling manufacturers, airlines, and regulators to improve public safety through collaboration. Today, autopilot is treated as the norm, not an exception.

But the parallel between aviation and the roadways shows a distinct difference: U.S. drivers are not trained to proficiency. The last time you demonstrated your ability to drive a vehicle was likely as a teenager during your initial driving test at the DMV. 

This lack of proficiency, especially because driver-assisted technology has not been standardized across manufacturers, makes the driver-automation coupling very risky. For driver-assisted vehicles to work, drivers would need to be trained to proficiency, and a system for tracking data and reporting errors would need to be put in place. For this reason, the sooner we switch to full autonomy, the more lives we can save.


Path Forward for a Safer Autonomous Vehicle Future

As the recent suspension of driverless vehicles demonstrates, government oversight of AVs hasn’t kept pace with the advancing technology. As in all industries reliant on emerging technologies, most of the expertise lies in the private sector, where companies compete financially for engineering talent. 

But when regulators overreact, it disincentivizes manufacturers to self-report the full set of data that could otherwise be used to guide technology development. Government overreaction has a chilling effect on manufacturer transparency.

In addition to the challenge of improving regulatory oversight, there are legal roadblocks to AV technology development. The U.S. tort system is not designed to manage risk. Its purpose is to compensate people who have already been harmed by “shifting the cost of harm to another person or entity who has erred in some legally cognizable way.” 

This approach means manufacturers will simply shift the technology from driverless to driver-assisted operations to shield themselves from liability. 

Instead, we need to see and understand the emerging risks of autonomous vehicles and manage them accordingly. Regulators such as the U.S. Department of Transportation, the National Highway Traffic Safety Administration and your local DMV should transition to risk-based regulation strategies, similar to the way the FAA embraced Safety Management Systems in the 1990s. To do this, regulators must partner with industry and labor associations. 

An alternative to the U.S. tort injury compensation model has been proposed, modeled after government action to incentivize vaccine development. In 1986, Congress created the Vaccine Injury Compensation Program in response to lawsuits against vaccine manufacturers when people suffered harm. The program compensates people injured by certain vaccines while limiting manufacturer liability, which stimulates vaccine development. 

The rationale: The private sector would not pursue vaccine research and development if the liabilities outweighed the potential returns on investment.

The clear, collaborative path forward is a national alternative death and injury compensation program, combined with collaborative public-private partnerships in the auto manufacturing and rideshare industries. 

Much like in other industries, these actions would accelerate research and development and make U.S. roads far safer. If autonomous vehicles can drastically reduce the number of deaths and injuries on our roads, failing to embrace these measures would leave more Americans at risk of dying at the hands of human drivers. 

While driver-assist technology provides short-term, incremental benefits, delaying full autonomy will cost thousands, if not tens of thousands, of lives. We have a once-in-a-generation opportunity to save them.
