
Defective functioning and liability

Updated: May 26, 2021

The article discusses the liability and ethical issues involved in driverless systems.



In the future, the admission of autonomous vehicles onto public roads is expected to bring a series of benefits, ranging from improved safety to reduced CO2 emissions and traffic congestion. The number of road accidents and deaths can be expected to decrease significantly, since their main root cause, human error, will largely be removed. Safety issues will then mostly be related to software bugs or mechanical malfunctions. In this scenario, where humans no longer directly control the movement of vehicles, new laws need to be established to deal correctly with liability.


Although at present the regulation of road traffic liability varies from country to country, it can broadly be classified into two opposite legislative approaches. In the first, adopted by several European countries, the holder of the vehicle, who operates the car at their own expense, bears strict liability for all damages arising from the operational risk, including automation. This means that technical defects, as well as the driver's improper management of the automation system, fall exclusively under the responsibility of the car holder.


In the second type of jurisdiction, the liability of the driver, who is not necessarily the holder of the vehicle, is based on fault: it extends only as far as the driver directly influenced the occurrence of the accident. Since in fully autonomous cars the driver is not expected to pay attention or intervene, they should not be held responsible after a road accident.


A solution that could reasonably be adopted in the future lies between these two approaches. Specifically, liability would remain strict, but holders could always take recourse against the relevant vehicle manufacturer. Under this strategy, car manufacturers would be neither overexposed nor underexposed to liability: the former would discourage production and innovation, while the latter would lead to fewer resources being devoted to safety research.


Ethical Dilemmas


Although self-driving cars are expected to be much safer than manually controlled vehicles, their size and weight, especially at high speed, still limit their manoeuvrability. As a result, if an obstacle suddenly appears near an autonomous car, the vehicle may not have enough time or space to avoid a collision. The unpredictable moving object may be a pedestrian, a cyclist or wildlife; moreover, since the introduction of self-driving cars will be gradual, human-driven vehicles need to be taken into account as well. Additionally, as explained in the first part of the article, software bugs or mechanical defects could also cause road crashes. In summary, autonomous vehicles cannot be made completely safe and must therefore be programmed to handle unavoidable traffic collisions.


However, this entails serious ethical dilemmas and inevitably raises questions. What goal should cars be programmed towards? Should they ensure the safety of their passengers regardless of the cost? Or should other considerations be prioritized? In any case, driverless systems implicitly place machines in the position of making decisions with life-or-death implications.


In recent years, several philosophers and journalists have compared accident management to the so-called trolley problem. In its "switch" variant, a runaway trolley is hurtling towards five people stuck in the middle of the track. All of them will be killed unless the vehicle is diverted onto another track by pulling a switch. That manoeuvre, however, will inevitably cause a collision with a person standing on the adjacent track, and hence their death. Discussions of the trolley problem also often concern how people's judgements change with the variant considered and with the characteristics of the person to be sacrificed: whereas in some cases killing one person seems morally permissible, in other circumstances the opposite is true.


Overall, the reaction a driver might have to an unpredictable, sudden change of direction by an entity on the road remains different from how a driverless system handles the same situation. Unlike the former case, in the latter the decision is made not in the moment but in advance, by a group of individuals who have a much larger number of situational features at their disposal.


Moral Machine


To further investigate the moral dilemmas concerning autonomous vehicles, Awad et al. designed a web platform, the Moral Machine, where users can put themselves in the role of programmers. The website presents a series of scenarios that recall the trolley problem. An example of the proposed questions is displayed in the picture below.

The authors obtained the following results on the basis of over 40 million decisions gathered from users in 233 countries and territories.

As the figure shows, the quantity delta Pr, the difference between the probability of sparing a given character type and the probability of sparing an adult man or woman, depends remarkably on which types of characters are in danger.
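As a rough illustration of how such a preference score can be read, the sketch below computes a difference in sparing probabilities against the adult baseline. The character types, counts, and the simple frequency estimate are made-up assumptions for illustration only; the actual study estimates these effects with conjoint analysis, not raw frequencies.

```python
# Illustrative "delta Pr"-style computation with invented counts.
# These numbers are NOT data from the Moral Machine study.

def sparing_probability(spared, total):
    """Fraction of scenarios in which a character type was spared."""
    return spared / total

# Hypothetical outcomes: (times spared, times at risk) per character type
outcomes = {
    "adult": (5000, 10000),   # baseline: adult man or woman
    "child": (7500, 10000),
    "dog":   (3500, 10000),
}

baseline = sparing_probability(*outcomes["adult"])
for character, (spared, total) in outcomes.items():
    delta_p = sparing_probability(spared, total) - baseline
    print(f"{character}: delta Pr = {delta_p:+.2f}")
```

A positive delta Pr means users spared that character type more often than the adult baseline, a negative value less often, which is how the figure's bars are interpreted.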




References

  1. Nyholm, Sven, and Jilles Smids. "The ethics of accident-algorithms for self-driving cars: An applied trolley problem?." Ethical theory and moral practice 19.5 (2016): 1275-1289.

  2. Holstein, Tobias, Gordana Dodig-Crnkovic, and Patrizio Pelliccione. "Ethical and social aspects of self-driving cars." arXiv preprint arXiv:1802.04103 (2018).

  3. "Moral Machine." www.moralmachine.net.

  4. Awad, Edmond, et al. "The moral machine experiment." Nature 563.7729 (2018): 59-64.

  5. Lohmann, Melinda Florina. "Liability issues concerning self-driving vehicles." Eur. J. Risk Reg. 7 (2016): 335.


The images in the blog are either copyright free or designed from scratch. Some figures presented in this article are extracted and modified from the following sources:

  • Moral Machine website: "https://www.moralmachine.net/"

  • Awad, Edmond, et al. "The moral machine experiment." Nature 563.7729 (2018): 59-64.
