“Why We Must Teach Self-Driving Cars How to Crash,” Law360, June 27, 2017.

Extracted from Law360

The National Highway Traffic Safety Administration’s (NHTSA) policy on automated vehicles has sparked debate on a number of issues, with one notable exception: how should self-driving cars, or highly automated vehicles (HAVs), make ethical decisions when an accident is unavoidable?

For example, should an HAV crash into a wall and kill the lone passenger inside in order to avoid killing a large group of pedestrians? A recent study shows that people’s ethics on this topic change based on where they are standing at the time of the accident.[1]

As pedestrians, they want the HAV to crash into a wall in order to save the larger group (i.e., follow a utilitarian philosophy of saving the most lives possible). As passengers, they want the exact opposite — the HAV should protect itself and (more importantly) them.

The study considered regulation as a possible answer to this social dilemma. However, the research showed that consumers would be less willing to buy an HAV if such regulations were in place. This means that requiring all HAVs to follow a utilitarian philosophy may reach a very nonutilitarian result: it could increase the amount of harm by delaying the adoption of HAV technology.

The NHTSA’s policy acknowledges this problem but does not say how the NHTSA will solve it. Instead, the NHTSA invites input from manufacturers, consumers and local governments.[2] The few comments on the NHTSA’s policy that actually addressed the issue of ethics agreed that this topic should be dealt with in the future.[3]

Yet the data shows that how moral algorithms are regulated (or not) could impact the acceptance of HAVs. So we should be talking about this now.

As a starting point, just having this debate shows how much safer HAVs will be. The test you took for your driver’s license did not have an ethics portion that asked what you would do when faced with the sort of “trolley car” problem we all remember from Philosophy 101.[4] And we (human) drivers certainly don’t have time to internally debate the merits of utilitarianism just before an accident occurs.

In that split second we are usually pretty frantic and saying (or at least thinking) things that in no way advance the field of normative ethics. So once again, we are talking about just how much more we will require of HAVs than we already do of ourselves.

This is where the NHTSA’s policy comes in. If we decide to regulate moral algorithms for HAVs, will the NHTSA set a uniform nationwide standard or leave the matter to the states as part of their product liability laws? There are problems with either approach.

If the NHTSA requires a single rule such as utilitarianism to govern the ethical decisions HAVs will make, it would effectively force some consumers to choose a philosophy they do not agree with. For example, the recent study showed that most people would not buy HAVs that were programmed with utilitarian algorithms.

It is easy to see why — what parent would sacrifice their children in order to save the lives of strangers who did not have enough sense to stay out of the road? So this sort of regulation could delay the acceptance of HAVs. And fewer HAVs on the road means more lives will continue to be lost through human error.

On the other hand, letting the states write their own rules does not seem to be a good answer, either. Different states could have different rules, or no rules at all. And while it may be possible to program HAVs to change their moral algorithms as they cross state lines, this would create a number of other problems.

For example, what sort of meaningful warning could an HAV give to its passengers that the car’s moral algorithms are about to switch to utilitarian mode?[5] Things would be even worse in states that do not adopt a rule. Whether the HAV is programmed to protect the passenger or the pedestrian, the “other one” will be a plaintiff.

That plaintiff will argue that the HAV’s moral algorithm is “defective” because it followed a philosophy that caused him harm. Because strict liability laws are not the best rubric for debating philosophy, this legal threat could reduce the number of people who are willing to make or buy HAVs.

So what other options are there? The NHTSA could approve different forms of moral algorithm and let consumers choose (e.g., between a utilitarian algorithm that protects the greatest number of people and a “self-protective” one that guards the HAV’s passengers above all else). This would address one of the problems noted in the study: consumers do not want the government to make that choice for them through regulation.

However, this option would leave customers and manufacturers open to attack when plaintiffs and their lawyers come calling over the consequences of the chosen moral algorithm. That legal risk could delay the wide-scale deployment of HAVs, which will cause additional lives to be lost.

On the other hand, it may be possible to eliminate that legal risk. The NHTSA could approve utilitarian and self-protective algorithms, and Congress could insulate the manufacturer and customer from liability for choosing one of those approved algorithms. This seems to be the best result because it recognizes that there is no answer to this moral koan; the NHTSA is being asked to do the impossible.

The data from the recent study confirms this. Consumers want other people to have HAVs with utilitarian algorithms, but do not want to ride in one. So maybe before we ask whether the NHTSA or the states should solve this problem, we ought to ask whether anyone actually could.

There is another reason why we should require consumers to choose the moral algorithm that will guide their HAV: it forces them to think about the issue before an accident occurs. As with so many other things related to HAVs, this is an improvement over the way things are now. I’m not sure exactly how many people currently give detailed thought to the ethics of unavoidable accidents before driving a new car off the lot, but I do know I’ve never met any of them.

As the study also acknowledges, the “trolley car” situations we are talking about hardly ever happen. On top of that, the whole scenario seems to assume the sort of perfect information that we rarely have in real life (e.g., that death will be certain for the passenger as well as for every one of the pedestrians).

This is all the more reason why this ethical issue should not delay the deployment or acceptance of HAVs. The very low probability of moral algorithms ever being triggered does not justify all of the lives that will be lost by waiting — especially since there is nothing that could be gained from this delay.

Philosophers have been debating the duty to protect others as opposed to the self for thousands of years. Do we really expect a government agency to suddenly solve this puzzle, as an afterthought, just because we have invented a new kind of technology?

[1] Bonnefon et al., “The Social Dilemma of Autonomous Vehicles,” Science, Vol. 352, No. 6293 (June 24, 2016).

[2] Federal Automated Vehicles Policy (Sept. 23, 2016), at pp. 26-27 and p. 106 n. 31.

[3] See, e.g., Comments of the Association of Global Automakers Inc., at pp. 15-16, § 1.3.11 (observing that because morality is subjective, it may not be possible to provide both moral and legal certainty).

[4] A runaway trolley is barreling down the tracks, headed toward five people who will not be able to get out of the way. You are standing next to a lever that can switch the trolley to another track where there is only one person (who also will not be able to get out of the way). Do you pull the lever?

[5] To be effective, the warning would probably have to be some sort of short course in philosophy — explaining the tenets of utilitarianism, the most popular criticisms of that theory, the choices that different philosophical doctrines would call for in certain scenarios, and so on. All of this becomes even trickier when you consider the possibility that HAVs will change the nature of car ownership. If ride-sharing becomes the new model, then every time you get into an HAV you would have to audit this same Philosophy 101 seminar, which would probably get very old very fast. (And exactly how the HAV would effectively warn pedestrians about its moral algorithm is anyone’s guess.)
