A Global Standard for Autonomous Vehicle Moral Dilemmas?

By Clark A. Belanger

The Problem:


As early as 1905, moral psychologists began contemplating the ethical dilemma now known as the “trolley problem.”[1] The trolley problem is a thought experiment in ethics that asks an individual to choose between two courses of action in a no-win situation. The problem has attracted renewed attention as autonomous vehicle manufacturers consider how to program their vehicles to make such ethical judgments. Consider the following situation:


The driver of a car is driving along a road on a hillside. The highly automated car detects several children playing on the road. The driver of a manual vehicle would now have the choice of taking his own life by driving over the cliff or risking the death of the children by heading towards the children playing in the road environment. In the case of a highly automated car, the programmer or the self-learning machine would have to decide what should be done in this situation.[2]


How autonomous vehicle manufacturers choose to address this problem will certainly expose them to liability. To avoid a finding of a design defect, a manufacturer must show that there was no safer alternative design for the ethical programming of the vehicle, which raises the question: safer for whom?

Answering this question will require extensive debate among politicians, manufacturers, motorists, and pedestrians.


The Solution:


In June 2017, Germany’s Ethics Commission on Automated Driving published a report containing guidelines for the programming of automated driving systems. Generally, these guidelines favored the protection of individuals over all other utilitarian concerns.[3]


In hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests. Thus, within the constraints of what is technologically feasible, the systems must be programmed to accept damage to animals or property in a conflict if this means that personal injury can be prevented.[4]


Further, the guidelines prohibit any distinction based on personal features, such as age, gender, or race, when weighing one human life against another.[5] However, factors concerning individuals’ present activities may be considered.[6] For example, in the event of an unavoidable accident, the vehicle may target an individual who is crossing the street illegally rather than a pedestrian who is behaving legally.
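To make the shape of such a rule concrete, below is a minimal Python sketch of a collision-choice function consistent with the guidelines as summarized above. All names (Obstacle, harm_score, choose_path) are hypothetical, the scoring is drastically simplified, and nothing here reflects any manufacturer’s actual decision logic.

```python
from dataclasses import dataclass


@dataclass
class Obstacle:
    kind: str              # "human", "animal", or "property"
    acting_lawfully: bool  # e.g., crossing with the signal
    # Protected personal features (age, gender, race) are deliberately
    # absent: the guidelines prohibit considering them.


def harm_score(obstacle: Obstacle) -> tuple:
    """Lower scores mark preferred targets when a collision is unavoidable."""
    # Human life enjoys top priority: damage to property or animals is
    # always accepted over any personal injury.
    category_rank = {"property": 0, "animal": 1, "human": 2}[obstacle.kind]
    # Among humans, present activity (lawful vs. unlawful behavior)
    # may be considered, per the interpretation described above.
    lawfulness_rank = 1 if obstacle.acting_lawfully else 0
    return (category_rank, lawfulness_rank)


def choose_path(options: list[Obstacle]) -> Obstacle:
    """Pick the unavoidable-collision option with the lowest harm score."""
    return min(options, key=harm_score)
```

Under this sketch, a choice between a parked car and any human resolves toward the parked car, and a choice between a lawful pedestrian and a jaywalker resolves toward the jaywalker, mirroring the two guideline points above.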

While these guidelines provide a rough framework for resolving potential trolley problem situations, they permit a wide range of constructions. Thus, developing a global answer to this question may prove impossible.[7] Because moral values differ from nation to nation, a design that is legally permissible in one country may be deemed defective in another.[8]

In October 2018, a study titled The Moral Machine Experiment was published in Nature, analyzing a survey of 2.3 million people worldwide.[9] The survey asked respondents several questions about the moral principles that would guide their driving decisions.[10] The authors of the study analyzed the results from the 130 countries with at least 100 respondents, finding that the countries could be divided into three categories: Western Nations, Eastern Nations, and Southern Nations.[11]

The Western Nations category includes the United States, Canada, and several European nations where Christianity has historically been the dominant religion.[12] Respondents in this region showed a stronger preference for sacrificing older lives to save younger ones than did respondents in the Eastern region.[13]

The Eastern Nations category includes countries in the Middle East and Asia, such as Japan, Indonesia, and Pakistan, where strong Confucian and Islamic traditions predominate.[14] Respondents in these nations favored sparing pedestrians more than respondents in the other two regions did.[15] They also showed a much stronger preference for sparing individuals who were behaving lawfully.[16]

The Southern Nations category includes Central and South American countries, as well as France and former French colonies.[17] This region showed stronger preferences for sparing individuals who are younger, female, more physically and mentally fit, and of higher societal status.[18]

Given the differences among these regions, as well as among the countries within each region, the global adoption of a common moral algorithm seems unworkable.[19] This discrepancy will likely cause confusion and hardship for autonomous vehicle manufacturers, who will be required to produce different products to comply not only with the general morals of each region but also with the specific guidelines of each nation.[20]
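One way to picture that compliance burden is as a per-region configuration a manufacturer would have to maintain and localize. The Python sketch below is purely illustrative: the region keys and preference labels paraphrase the qualitative survey findings summarized above (the study itself reports continuous preference scores, not discrete labels), and build_ethics_profile is a hypothetical name.

```python
# Hypothetical per-region preference table paraphrasing the Moral Machine
# findings described above; the labels are qualitative stand-ins, not data.
REGIONAL_PREFERENCES: dict[str, dict[str, str]] = {
    "Western": {
        "spare_younger_over_older": "stronger than the Eastern region",
    },
    "Eastern": {
        "spare_pedestrians": "strongest of the three regions",
        "spare_lawful_individuals": "much stronger than elsewhere",
    },
    "Southern": {
        "spare_younger": "stronger",
        "spare_female": "stronger",
        "spare_fit": "stronger",
        "spare_higher_status": "stronger",
    },
}


def build_ethics_profile(region: str) -> dict[str, str]:
    """Look up the preference emphases a vehicle shipped to a given region
    might need to accommodate; empty if the region is unknown."""
    return REGIONAL_PREFERENCES.get(region, {})
```

Even with such a regional profile, nation-specific rules, such as Germany’s prohibition on weighing personal features, would still have to be layered on top.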



[1] Frank Chapman Sharp, A Study of the Influence of Custom on the Moral Judgment, Bulletin of the University of Wisconsin No. 236, at 138 (1908).

[2] Ethics Commission, Automated and Connected Driving, Federal Ministry of Transport and Digital Infrastructure 16 (June 2017), https://www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile (hereinafter “Ethics Commission”).

[3] Ethics Commission, supra note 2, at 10.

[4] Id. at 11.

[5] Id.

[6] Id.

[7] Tim Worstall, When Should Your Driverless Car From Google Be Allowed To Kill You?, Forbes (June 18, 2014).

[8] Id.

[9] Amy Maxmen, Self-driving car dilemmas reveal that moral choices are not universal, Nature (Oct. 24, 2018), https://www.nature.com/articles/d41586-018-07135-0.

[10] Id.

[11] Id.

[12] Edmond Awad et al., The Moral Machine Experiment, Nature (Oct. 24, 2018), https://www.nature.com/articles/s41586-018-0637-6.

[13] Id.

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] Id.

[19] Worstall, supra note 7.

[20] Id.