Ethical AI for defence forces
As militaries the world over grapple with the pros and cons of automating warfighting capabilities, the US Department of Defense has adopted a list of five ethical principles to govern the use of AI by US armed forces. The guidelines, developed following 15 months of consultation with AI experts, aim to fulfil the department’s purported objective of ensuring the US military leads the way in the development of AI ethics and the lawful use of AI systems.
According to the principles, the use of AI in warfare and national defence should be responsible, equitable, traceable, reliable and governable.
In practical terms, this will involve taking conscious steps to minimise unintended bias in AI capabilities, setting explicit, well-defined uses for AI, and engineering AI capabilities to avoid unintended consequences.
This includes “the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour” — meaning a hypothetical “Skynet” style AI could be shut off before going rogue and ushering in the apocalypse.
As part of the initiative, the department’s Joint Artificial Intelligence Center will coordinate the implementation of the AI ethical principles within the defence forces. The centre is already hosting working groups to solicit input from the services and from AI and technology experts throughout the department.
The US Secretary of Defense, Dr Mark Esper, who adopted the recommendations on which the principles are based, called on US allies — including Australia — to accelerate the adoption of AI in defence. “The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields and safeguard the rules-based international order,” he said.
“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behaviour.”
AI AND AUSTRALIA
Australia’s defence sector is grappling with similar issues surrounding the ethical use of AI. Last year in Canberra, the Defence Science and Technology Group jointly led an Ethical AI for Defence workshop to address ethics across a range of military applications for AI.
The workshop was also led by Plan Jericho and the Trusted Autonomous Systems Defence Cooperative Research Centre, and included representatives from the ADF, the Centre for Defence Leadership and Ethics, and industry, universities and institutes from Australia and overseas.
The workshop was one of the first steps towards the development of Defence’s own ethical principles for the use of AI, as well as a roadmap for ethical AI use in the future.
In a recent blog post, Australian Army Major Daniel Lee said it will be important for the defence sector to build an interdisciplinary understanding of three aspects of the deployment and use of AI.
The first of these is the technology itself, which will require developing a common lexicon for the nascent field. The second is strategy, which will involve understanding potential current and future uses of AI and autonomous systems. The third is ethics, which will require understanding whether individual strategies for the use of AI should be pursued.
“A broad understanding of the technological, strategic and ethical principles relevant to the potential employment of AI and autonomous systems will set the foundations for an informed discussion not only within Army, but also within the wider Australian Defence Force, society and parliament,” Lee said.
“Only once Army understands what AI and autonomous systems are, and why and how they may be used, can we begin to discuss if they should be used in a military context at all.”