Whether we like it or not, self-driving cars are coming. In fact, they are already here. Although completely autonomous cars are not widely available yet, most new cars offer a number of driver-assist or autonomous features. The thought of vehicles making maneuvers without input from the driver is frightening to many of us. However, given that many serious and deadly car accidents are caused by driver error, autonomous cars offer the potential to reduce highway deaths. So which is it? Are self-driving cars a good thing or a bad thing? We take a look at the issue.
What Is an Autonomous Car?
Companies like Tesla claim that we are only two years away from “sleeping in our cars” as they drive. However, it is more likely that fully autonomous cars will not be generally available until 2020. Analysts predict that 95 percent of cars sold by 2040 will be completely self-driving. The U.S. Department of Transportation (DOT) classifies autonomous technology into six levels, as follows:
- Level 0: No automation. The driver performs all driving tasks.
- Level 1: Driver assistance. Some driver-assist features are included in the vehicle design, such as cruise control and warning signals.
- Level 2: Partial automation. The car has some automated functions, such as braking and steering, but the driver remains in control the entire time.
- Level 3: Conditional automation. The driver is still a necessity but is not required to monitor the environment. The driver must be ready to take control of the vehicle when the system requests it.
- Level 4: High automation. The vehicle is capable of full automation under certain conditions, but the driver has the option of controlling the vehicle.
- Level 5: Full automation. The vehicle is fully automated under all conditions. There may still be a driver override option.
Many new cars sold today offer partial automation, and questions of liability are already being raised. If a driver claims that the car accelerated on its own and caused an accident, where does the fault lie? This is a question that will have to be answered as the technology continues to advance.
Government Regulation of Autonomous Cars
Under the Obama administration, the DOT issued its first set of guidelines for autonomous vehicles. Many saw the recommendations as too lenient, especially because they were not mandatory. However, the current administration has revised these guidelines and made them even more lenient. While the guidelines suggest that autonomous carmakers consider the following areas, they are not required by law to meet any particular standards:
- Safety. Design decisions should take all aspects of safety into consideration.
- Operational domain. Acceptable roadway types, geographic areas, speed ranges, and weather conditions for a particular vehicle should be defined and documented.
- Object and event detection and response. The vehicle should be capable of addressing a wide variety of foreseeable encounters.
- Fallback. A clear and effective plan for autonomous system failure should be in place.
- Human interface. Potential problems with the human interface should be anticipated and addressed.
- Cybersecurity. Autonomous vehicles collect a large amount of data; both the security of this data and the privacy of the driver should be considered.
- Crashworthiness. Autonomous carmakers should consider the vehicle’s crashworthiness just as they would for a non-autonomous car.
- Data recording. It is recommended that autonomous cars have data recording capability so that the cause of a crash can be determined.
- Consumer education and training. Buyers of autonomous cars should be educated about the vehicle’s features, fallback procedures, and potential problems before they drive home.
Again, these DOT guidelines do not impose mandatory standards. While it is hoped that carmakers will produce autonomous vehicles that are safe for the driver and for others on the road, they will not be forced by the DOT to do so. It is still possible, however, that Congress will issue regulations for the protection of the general public.
Where Will Liability Fall After a Fatal Self-Driving Car Accident?
Based on the one fatal crash involving an autonomous vehicle that has occurred so far, it looks as if liability for self-driving crashes will be determined on a case-by-case basis. In the 2016 crash of a Tesla, the driver was killed. The National Transportation Safety Board determined a year later that, while Tesla’s Autopilot system worked as designed, it failed to prevent the driver from using it improperly. The crash was caused by a combination of human error and a lack of sufficient system controls. This may indicate that, in future cases, liability will be shared between the driver and the automaker.