
Civil and Criminal Liability and Autonomous Robotic Surgery

Is the legal world prepared for AI medical malpractice?

Konstantinos Apostolos, National Researcher of ELSA Greece, ILRG: Human Rights and Technology

The rapid development of modern medicine entails, inter alia, the ever-increasing use of AI and robots. Medical robots are a decisive factor in high-accuracy surgery and may improve rehabilitation outcomes, while their use also helps reduce healthcare costs: they enable medical professionals to shift their focus from treatment to prevention and free up budgetary resources for better adjustment to the plethora of patients' needs, for life-long training of healthcare experts and for research.1 However, in contrast to the traditional definition of medical malpractice, the framework on civil and criminal liability for medical errors involving AI and robots remains vague. On 21 April 2021, the European Commission took the initiative globally by proposing a holistic legal framework for the regulation of artificial intelligence.2 The framework follows a risk-based approach and categorises the possible uses of AI according to the risk they pose to the public interest or of interference with fundamental rights, forming a four-pillar system: minimal risk, limited risk, high risk and de facto ban. Although the proposal addresses a variety of issues, such as the development, distribution and use of AI, a wide range of dilemmas remains unsolved, spearheaded by civil and criminal liability for the use of AI, especially in the field of robotic surgery. A study carried out for the European Parliament can shed some light on this labyrinth. Where the doctor acts lege artis but the system is wrong, two different sets of liability rules apply: product liability rules, concerning the liability of the manufacturer,

1 Resolution 2018/C 252/25 of the European Parliament with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), 16 February 2017.

2 Proposal COM/2021/206 for a Regulation of the European Parliament and of the Council <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:52021PC0206> accessed 5 October 2021.

and liability based on medical law, under which proceedings are brought against the practitioner and/or the medical structure where the operation took place. Normally, the patient will sue the doctor and/or the hospital for having followed instructions that were incorrect. If held liable, the latter may then sue the producer in recourse. Where, by contrast, the error lies with the practitioner rather than the system, only medical liability rules apply, mainly governed by the general principles of negligence.3 Still, numerous issues remain to be solved. Firstly, the causal link between a malfunction or error in the system and the damage is not always obvious, given that the system is normally not responsible for the final decision but merely provides an analysis on which the doctor may rely. On the one hand, even where the doctor is considered to have relied on the system, the final choice arguably remains the doctor's. On the other hand, the doctor or hospital could sue the manufacturer under contract law, on the basis that the system does not perform as promised and that there is therefore a lack of conformity or a breach.4 But what happens in the futuristic possibility of a robot deciding? The actions of an autonomous robot, self-learning and adaptive to the conditions around it, could be damaging. Such damage would not be the result of its programming, and its acts would therefore not be controlled or directed by any human.5 Moreover, if robotic surgery is eventually proved to be more efficient than conventional surgery, this could affect medical malpractice standards. In this regard, the question is: should surgeons be held accountable for non-robotic surgery where it performs worse, or should they be considered responsible for any kind of surgery they perform?
Similarly, it is unclear whether the same professional standards that apply to human surgeons are applicable, or whether higher standards should be imposed.6 Additionally, since the liability of a robot or AI program for its own acts cannot be established, liability for the malpractice of autonomous robots is necessarily passed on to the people who manufacture, distribute, own and operate them. Consequently, only a human being, and not the surgical robot itself, can be regarded as criminally guilty for an autonomous robot's fault. However, the circle of potentially liable people remains uncertain. Where the robot is operated remotely, the surgeon should be held liable, unless the death of the patient results from a malfunction of the machine, in which case criminal liability logically attaches to the manufacturer. But in the scenario of signal loss during telesurgery, for example, who will be held liable?7 In conclusion, in light of even these few dilemmas in medical malpractice involving AI and robotic surgery, one can hardly argue that humanity is indeed ready for the digital era.

3 Andrea Bertolini, 'Artificial Intelligence and Civil Liability' (2020) Policy Department for Citizens' Rights and Constitutional Affairs, European Parliament.
4 Ibid 115.
5 Andreas Matthias, 'The responsibility gap: ascribing responsibility for the actions of learning automata' (2004) Vol. 6, Iss. 3, Ethics and Information Technology 181-183.
6 Shane O'Sullivan et al., 'Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery' (2019) Vol. 15, Iss. 1, International Journal of Medical Robotics and Computer Assisted Surgery e1977.
7 Ibid e1975.
