
AI in DoD Requires Security, Education, and ATO

The INSIDE TRACK

By Allen Badeau, Chief Technology Officer at Empower AI

When deployed military personnel undertake sensitive field missions, operations centers at the Department of Defense (DoD) are tasked with making real-time decisions, such as where to go to avoid hostile forces or when to deploy reinforcements, based on constantly shifting information.

These high-pressure decisions often require a significant amount of background information to accurately assess the situation. Ordinarily, artificial intelligence (AI) would be an easy way to streamline the necessary data and support the decision-making process. But because DoD routinely works with highly classified data, standard AI solutions are not viable.

The Pentagon must implement AI that can access and organize classified data. To do that, it needs to understand how AI fits into high-security environments.

Privileged AI: Easier said than done

Often, people associate AI with one-off unattended bots that automate tasks such as budget management or desk work. But AI within DoD is different for one critical reason: it must be able to work with the same authority as privileged accounts.

A privileged account is authorized to perform high-security functions that everyday users cannot perform. A system administrator overseeing 5,000 user accounts with sensitive information needs support, but any AI that works within those parameters would need advanced privileges.

This authorization is especially crucial for AI operating at the tactical edge. If DoD urgently needs to know the locations of all the fighter jets within a particular region but receives conflicting information from its numerous classified databases, standard AI programs are not authorized to look through and deconflict the information. The time it takes to find the correct information without AI assistance may risk the lives of warfighters or citizens.
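As a rough illustration, consider the Python sketch below. It is not a real DoD system; every name, privilege label, and data structure is hypothetical. It simply models the workflow described above: a privileged AI assistant first confirms that its service account is authorized to read classified track data, then deconflicts overlapping reports by keeping the most recent position for each aircraft.

# Hypothetical sketch: a privileged AI assistant deconflicting aircraft
# position reports pulled from several classified sources. All names,
# privilege labels, and data structures are illustrative, not a real system.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TrackReport:
    aircraft_id: str
    region: str
    position: tuple          # (latitude, longitude)
    reported_at: datetime
    source: str

REQUIRED_PRIVILEGE = "READ_CLASSIFIED_TRACKS"   # assumed privilege label

def deconflict(reports, service_privileges, region):
    """Return one report per aircraft in the region: the most recent one.

    Raises PermissionError if the AI's service account lacks the privilege
    needed to read the underlying classified sources.
    """
    if REQUIRED_PRIVILEGE not in service_privileges:
        raise PermissionError("AI service account is not authorized for classified tracks")

    latest = {}
    for report in reports:
        if report.region != region:
            continue
        current = latest.get(report.aircraft_id)
        if current is None or report.reported_at > current.reported_at:
            latest[report.aircraft_id] = report
    return list(latest.values())

# Example: two sources disagree on one fighter's position; the newer report wins.
reports = [
    TrackReport("F16-021", "REGION-A", (33.3, 44.4), datetime(2024, 5, 1, 10, 0, tzinfo=timezone.utc), "db_alpha"),
    TrackReport("F16-021", "REGION-A", (33.9, 44.1), datetime(2024, 5, 1, 10, 7, tzinfo=timezone.utc), "db_bravo"),
]
print(deconflict(reports, {"READ_CLASSIFIED_TRACKS"}, "REGION-A"))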

To help people make informed decisions more quickly, everyone involved, from the human making the decision to the automation gathering the information, must have the capability to see all relevant information. While defense agencies understand this, there are several challenges associated with integrating privileged AI into a network.

Education and authorization are the best approaches

The most common challenge associated with AI in DoD is a general lack of trust and understanding of how secure AI would operate. While standard AI often consists of unmanaged automation conducting repetitive tasks, advanced AI is closely managed by the people using it. It would not make decisions such as where to send reinforcements or when to fire missiles; rather, it would merely provide the relevant information to the person making that decision.
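To make that division of labor concrete, the hypothetical Python sketch below shows decision support rather than decision making: the AI ranks courses of action by a confidence score, but nothing executes until a person approves it. The function and field names are invented for illustration.

# Hypothetical sketch of the decision-support pattern described above: the AI
# ranks courses of action, but nothing executes without explicit human approval.
def recommend(options):
    """Return courses of action sorted by an AI-assigned confidence score, best first."""
    return sorted(options, key=lambda o: o["confidence"], reverse=True)

def execute(action, human_approved):
    # The AI only recommends; a person must explicitly approve any action.
    if not human_approved:
        return "Recommendation logged; awaiting human decision."
    return "Executing human-approved action: " + action

options = [
    {"action": "hold position", "confidence": 0.62},
    {"action": "send reinforcements", "confidence": 0.87},
]
best = recommend(options)[0]
print(best["action"], "->", execute(best["action"], human_approved=False))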

One way to combat the misconception that AI would oversee critical decisions is by implementing standard AI tools for humans to interact with. By allowing people to be directly involved in dictating and monitoring an AI’s responsibilities, they become familiar with the limitations and benefits of AI, making them less distrustful and resistant to the technology.

Change management is another part of the education process. Ensuring that each agency meets the baseline requirements needed to fully understand and realize all the potential impacts of implementing AI into a secure network makes for a smooth undertaking.

Another obstacle is the need for a standardized authority to operate (ATO) process for AI. ATO is an accreditation and certification process that allows the government to evaluate applications and ensure they are safe to run on critical systems. While the ATO itself does not affect the privileges or credentials associated with automation, it cannot be used as intended – to confirm the AI solution is secure – when each agency has its own requirements.

While this is not an issue easily solved, one thing agencies can do is ensure they have the baseline capabilities that lend themselves to clean AI software. Ensuring that AI runs within cleared environments, on software made in the U.S. by cleared developers, is a great starting point when looking to standardize ATO and enhance inter-agency collaboration.
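One way to picture these baseline capabilities is as a simple pre-ATO checklist, sketched below in hypothetical Python. The criteria shown are only the three starting points mentioned above; real ATO requirements differ by agency and go far deeper.

# Hypothetical sketch of the baseline checks named above as a simple pre-ATO
# checklist. Field names and criteria are illustrative only.
BASELINE_CHECKS = {
    "runs_in_cleared_environment": lambda pkg: pkg["environment"] == "cleared",
    "software_built_in_us": lambda pkg: pkg["country_of_origin"] == "US",
    "developers_cleared": lambda pkg: all(dev["cleared"] for dev in pkg["developers"]),
}

def pre_ato_review(package):
    """Return the baseline checks the AI package fails (an empty list means it is ready)."""
    return [name for name, check in BASELINE_CHECKS.items() if not check(package)]

package = {
    "environment": "cleared",
    "country_of_origin": "US",
    "developers": [{"name": "dev-a", "cleared": True}, {"name": "dev-b", "cleared": True}],
}
print(pre_ato_review(package) or "Baseline met; proceed to full ATO evaluation")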

Another way to streamline the ATO process is to use the AI-as-a-Service (AIaaS) model. Historically, deploying AI has required different vendors for each step of the implementation process (purchase, installation, and maintenance). These vendors have different standards, often leading to ineffective deployment and unfocused goals, making it difficult to secure an ATO. AIaaS streamlines the buying, installation, and maintenance process, ensuring that AI is focused on mission-relevant capabilities.

Education and authorization need to be critical pieces of DoD’s AI strategy when planning for the next few years. Without AI that can operate at a high-security level, the government is putting the lives of warfighters and citizens at risk. With international threats becoming more prevalent, the government must utilize every available asset to stay ahead of its adversaries. Using AI in secure networks can provide previously unattainable insight into the information needed to ensure that the well-being of everyone, from the citizen to the warfighter, is protected.
