EDITORIAL
from ADBR May-June 2020
Initial Point
THE ETHICS OF AI
By Andrew McLaughlin
Unmanned aerial system, unmanned aerial vehicle, uninhabited vehicle, autonomous system, remotely piloted aircraft system, drone … these are just some of the names and variations of what is essentially the same thing – a vehicle that flies, walks, rolls, or swims without requiring an onboard operator.
There are several ways of controlling these vehicles. Their missions and courses may be pre-programmed, or they may be flown by a ground-based, ship-based, or airborne controller via a remote-control link. But it is becoming increasingly common for the vehicle to possess a degree of artificial intelligence (AI) that allows it to adapt to mission scenarios as they occur, without the intervention of a human-in-the-loop. This is the goal of Boeing’s Airpower Teaming System (ATS), and of similar unmanned air combat programs being run in the US (Skyborg) and Europe (FCAS).
It’s likely these systems will retain a pre-programmed ‘point-and-click’ type of control for tasks such as strike or electronic warfare (EW). But for the Loyal Wingman mission, where the UAS conducts high-value asset (HVA) escort, or for offensive and defensive counter-air missions, the UAS will need the ability to independently adapt to the task as it evolves, and this requires a degree of AI.
In recent years, potential adversaries have placed increased emphasis on developing capabilities to counter HVAs operating at stand-off ranges, as part of a growing anti-access/area-denial (A2AD) philosophy.
So, for the Loyal Wingman role, which is one of many for which the RAAF is initially looking to develop the ATS, the air vehicle would be tasked with protecting HVAs such as the E-7A, KC-30A, P-8A, and MC-55A, all of which are crucial to the success of an expeditionary or defensive air operation.
The loss of just a couple of HVAs would be devastating for a small air force such as the RAAF. So, it’s easy to imagine the primary role of the Loyal Wingman UAS being to escort RAAF HVAs in a high-intensity conflict.
Imagine a scenario where an E-7A and a KC-30A are operating in a region supporting a strike mission by F-35s and EA-18Gs. Despite each of these HVAs being several hundred kilometres to the rear of the strike package, the adversary has supersonic low-observable fighters with long-range air-to-air missiles. Therefore, each HVA is escorted by two or more Loyal Wingmen flying several kilometres ahead of and above it.
Notionally, one of the escorting UASs would be equipped with a comprehensive EW/ESM suite to bolster the HVA’s own systems, while another could carry a load of expendables – either decoys, projectiles or, in the future, lasers – to defeat incoming threats, or to at least divert them away from the HVA.
If these measures are not successful, near the end game a UAS could put itself between the threat and its HVA, making itself a target and possibly sacrificing itself. This is where the term ‘attritable’, sometimes used for some of the more affordable systems, comes from. All of these scenarios require the air vehicle, or a system of air vehicles, to be able to network with onboard and offboard sensors, to rapidly determine the best course of action to protect the HVA, and to react accordingly without the need for human intervention. They need almost instant situational awareness of not just the threat but also the HVA, and they need to react in a way that gives the HVA the best chance of survival.
The mainstream media will inevitably draw unqualified parallels with movies like The Terminator or Stealth whenever discussion of integrating artificial intelligence with ‘drones’ arises, so it’s essential the ADF leans forward early and often to control the narrative on the issue.
As UASs proliferate in its ranks, the ADF needs to assure the public that Australia operates under strict rules of engagement that align with or exceed United Nations conventions, that there will always be a human-in-the-loop in case of a systems failure, and that ethicists and lawyers have been involved in developing the concepts of operations for these systems.