Insights from the GC REAIM South Africa Symposium exploring Responsible AI in the Military
The global conversation surrounding the ethical integration of Artificial Intelligence (AI) in military operations took a significant step forward at the Symposium on Responsible AI in the Military Domain, held in Stellenbosch, South Africa, on 14 November. The event, co-hosted by the Global Commission on Responsible AI in the Military Domain (GC REAIM) and the Defence AI Research Unit (DAIRU) of Stellenbosch University, gathered military practitioners, academics, policymakers and civil society leaders to exchange interdisciplinary perspectives on AI’s role in defence.
The Symposium was an extension of the GC REAIM conference, which had run for three days prior, delving into the intersection of AI and defence systems. This particular day focused on the ethical challenges and opportunities AI presents within the military context, addressing everything from technological foundations to the implications for peace, security and governance.
The initiative began with the first summit in February 2023; the Commission aims to raise awareness, foster understanding and deliver a comprehensive guidance report on responsible military AI. Key issues under discussion include addressing rapid technological advancements, ensuring AI use aligns with peace and security goals, embedding ethics in decision-making and developing regulatory frameworks grounded in international law.
The highlight of the event was the keynote address by South African Deputy Minister of Defence, Major General (Retired) Bantu Holomisa. Holomisa discussed the transformative potential of AI in enhancing military capabilities, with particular emphasis on its application in border monitoring and counteracting illicit activities. He underscored that while AI presents substantial opportunities for improving security, efficiency and decision-making, it also brings with it ethical responsibilities.
“Artificial intelligence is not merely a tool for military advancement; it is an opportunity to reinforce our commitments to peace and stability,” Holomisa stated.
His remarks were rooted in South Africa’s broader strategy for responsible AI deployment, which includes the National Artificial Intelligence Policy Framework and the establishment of DAIRU, designed to promote local research and ensure the ethical use of AI in defence systems.
Holomisa also highlighted the challenges posed by low-cost, off-the-shelf technologies, which can undermine advanced defence systems. He emphasised the need for adaptive procurement strategies and robust oversight mechanisms to protect national security interests.
Referencing British politician Tom Tugendhat, Holomisa stated: “The economies of war have changed and low-cost technology is now able to undermine advanced equipment. Defence leaders therefore must adapt swiftly, enhancing oversight and regulation of artificial intelligence to ensure national security keeps pace with technological shifts.”
“Reforming procurement systems is essential to address these emerging challenges,” he added.
He emphasised that AI should be used to bridge digital divides and foster equity, stressing the importance of international collaboration for successful AI integration in the military.
The symposium featured several discussions on the ethical implications of integrating AI into defence systems. AI’s ability to enhance decision-making, predictive analytics and autonomous operations in military contexts brings numerous advantages. However, experts raised critical concerns about the misuse of autonomous weapons and the ethical challenges of entrusting life-or-death decisions to machines.
A panel of experts explored these concerns, discussing the need for human oversight in autonomous systems, especially those involved in lethal actions. The consensus was clear: AI should assist, not replace, human decision-making in situations where moral judgments are essential. Furthermore, there was a call for the development of international governance frameworks to regulate AI use in military settings, ensuring that its deployment aligns with humanitarian and legal principles.
One of the central ethical concerns discussed was the potential use of lethal autonomous weapons, which has sparked global debate. While AI can significantly improve targeting precision and reduce collateral damage, its deployment in combat raises moral questions about accountability, transparency and the risks of unforeseen consequences. Several speakers emphasised the need for clear guidelines and regulatory frameworks to govern AI’s use in warfare, underscoring the importance of ethical frameworks in mitigating such risks.
The issue of governance was also a major theme at the Symposium. The dual-use nature of AI (where the same technology can be applied to both civilian and military domains) complicates efforts to regulate it effectively. Experts discussed the need for international cooperation to create comprehensive governance structures that balance innovation with the responsible use of AI in defence.
Proposed solutions included the establishment of global AI conventions, similar to frameworks for nuclear or chemical weapons, to set clear guidelines and protocols. Additionally, the role of accountability across the entire AI value chain, from developers to end-users, was seen as critical to ensuring AI systems are used ethically and in accordance with international law.
However, the potential consequences of a State or non-State actor’s deviation from ethical AI development standards are yet to be fully understood.
The Symposium served as a crucial platform for shaping the future of AI in defence, where discussions on ethical considerations, governance and international collaboration will continue to drive the global dialogue on AI’s role in military applications.
Future plans include institutionalising these efforts and presenting a draft Strategic Guidance Report at the NATO summit in June 2025. The Strategic Guidance Report will contain short- and long-term recommendations on norm development for the responsible design and use of AI in the military domain. The ultimate goal is to submit the report to the UN General Assembly, advocating for actionable frameworks to ensure responsible AI integration in military contexts.