Political Crossfire: In Warfare, the Future is Now

By David Ignatius

We’re standing outside an empty brick warehouse in Alexandria, Va., but it could just as easily be a hidden command center for hostile forces in Iraq, Afghanistan or some battlefield of the future. Our challenge is: How are we going to find out who’s inside without exposing ourselves to gunfire?

An operator named Jack Ambridge removes a small quadcopter drone, less than a foot square, from his backpack and soon its tiny rotors are buzzing. “Nova 1,” as the drone is called, ascends to the nearest open window and surveys the warehouse, room by room, using artificial intelligence software called Hivemind that’s embedded in the drone. It doesn’t need to connect with a server at headquarters; it’s fully autonomous.

The tiny drone emerges from the building several minutes later with a detailed map of the structure and imaging that shows it’s empty. Mission accomplished – a job that, in a real-life combat situation, could get soldiers killed.

Welcome to the rapidly advancing world of autonomous weapons – the cheap, highly effective systems that are revolutionizing militaries around the world. These new unmanned platforms can make U.S. forces much safer, at far lower cost than aircraft carriers and fighter jets. But beware: They’re being deployed by our potential adversaries faster than the Pentagon can keep up, and they increase the risk of conflict by making it easier and less bloody for the attacker.

Nova 1 was created by a high-tech start-up called Shield AI, which was co-founded by ex-Navy SEAL Brandon Tseng after he returned from Afghanistan in 2015. Tseng’s unit had suffered casualties in an operation in Uruzgan province when he couldn’t target a hostile building because he didn’t know if civilians were inside. Tseng knew that AI could solve this problem. He got a degree from Harvard Business School, grew his company, and hired Ambridge, a former Air Force special tactics officer, and others.

Shield AI’s systems are now deployed in combat locations abroad. The real breakthrough is that its AI brain is at the “edge,” in the quadcopter itself. It doesn’t have to communicate with a server back at headquarters – a link that would probably be jammed in a real conflict.

Let’s take another real-world military problem: force protection and perimeter defense. Soldiers regularly get killed manning checkpoints and scouting potential threats “outside the wire.” I recently watched a demonstration of a high-tech solution to that one, too.

We’re in an office building in downtown Washington, D.C., but it could be a command post anywhere. We’re worried about the security of potentially hostile territory. Christian Brose, the chief strategy officer of a start-up called Anduril Industries, hands me some virtual reality goggles. In their 3-D images, I can see the terrain, in real time, through the fusion of different sensors mounted on autonomous systems in the target zone.

I focus on a suspicious object and query the AI-enabled operating system, called Lattice: Where was that object 30 minutes ago? Two hours ago? Based on AI predictions, where is it going next? The system shows me, with earlier imagery and future plotting.

The live feed I’m watching comes from Anduril’s test range in Southern California, but the technology has already been sold to the Pentagon and deployed to operational zones for force protection, and to Customs and Border Protection for monitoring U.S. frontiers. Anduril was founded in 2017 by Palmer Luckey, the California entrepreneur who created Oculus Rift, the headset credited with revolutionizing virtual reality displays.

Brose, who spent years campaigning for military reform as staff director of the Senate Armed Services Committee for the late Sen. John McCain, R-Ariz., is now “walking the talk” at Anduril. The key advantage of the autonomous AI systems, he explains, is that “rather than lots of humans operating one system, we have one human operating many systems.” In other words, rather than having a big, vulnerable aircraft carrier, we have swarms of hard-to-target drones.

For a final real-world problem, think about a Marine Corps squad out in the desert. It’s a tiny unit, just 12 Marines and a squad leader. But a new Defense Advanced Research Projects Agency program called Squad X is experimenting with ways to use autonomous ground and air vehicles to augment the team’s situational awareness, reach and impact. Defense giant Lockheed Martin is prime contractor for the program, and a company called BAE Systems is creating an AI system to fuse data from the sensors and allow quicker, better decisions by the squad on the ground. (Disclosure: My wife, a BAE Systems software engineer, was part of the Squad X project.)

Wars of the future may look like video games, as operators control faraway swarms of autonomous systems, but the lethality on the ground will be devastating. What’s encouraging is that people like Tseng and Brose are taking their frustration with the human cost of the wars in Iraq and Afghanistan and turning that knowledge into new systems that will keep U.S. troops safer, at lower cost – even as they combat future adversaries.