THE EUROPEAN – SECURITY AND DEFENCE UNION
Unmanned systems – ethics and international law

Responsibility must remain in human hands

by Dr Michael Stehr, Advocate, Board Member of EuroDefense, Bonn, Germany

A cultural bias seems to dominate debates on “autonomous systems”, with perceptions and expectations shaped by an independently acting “Terminator” or “Skynet”: a superior and amoral artificial intelligence (AI) system with nearly unlimited faculties.

The mundane reality

The reality is that there is a variety of unmanned systems in the air, on land and at sea: tanks, flying drones, mine-hunting subsystems, ships and submarines. Today, UxV1 technologies are developed and designed to hit targets when remotely controlled by soldiers. They can act independently only during take-off and landing, during automatically controlled cruising and in “lost-link mode”. So far, no system is steered by a tactical computer, although scientists are developing computers with advanced, AI-driven capabilities. The technological outcome is not predictable.

In consideration of human responsibility, there is a need today, and even more so in the future, to differentiate between two categories of systems, automated systems and autonomous systems, as defined by the UK Ministry of Defence2:

“An automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome. Knowing the set of rules under which it is operating means that its output is predictable.”

“An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.”

Violation of international law?

International law is binding on belligerent parties and their personnel in international armed conflicts. One has to consider the possibility that using automated or autonomous systems could be in violation of international law. Three major rules from the “Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I)”3 can be taken as examples:

Rule 1: Article 35
1. In any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited.
2. It is prohibited to employ weapons, projectiles and material and methods of warfare of a nature to cause superfluous injury or unnecessary suffering (…)

Prohibited weapons are, for example, poison gas, lasers designed to blind permanently, and ammunition that is invisible to x-ray devices. Methods of warfare that exceed the purpose of incapacitating or killing enemy combatants are also prohibited. Consequently, the operation of remote-controlled or