Drone with artificial intelligence was a direct hit in the Luftwaffe test

A US Air Force officer who leads the service’s work on artificial intelligence and machine learning says that in a simulated test, a drone attacked its human controllers after deciding for itself that they were getting in the way of its mission. The anecdote, which sounds like it was taken straight from the Terminator franchise, was shared as an example of the urgent need to build trust when it comes to advanced autonomous weapon systems, something the Air Force has emphasized in the past . This also comes amid a broader rise in concern about the potentially dangerous effects of artificial intelligence and related technologies.

Air Force Col. Tucker “Cinco” Hamilton, chief of artificial intelligence (AI) testing and operations, discussed the test in question at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit in London in May. Hamilton also leads the 96th Operations Group within the 96th Test Wing at Eglin Air Force Base, Fla., a center for advanced drone and autonomy test work.

Stealthy XQ-58A Valkyrie drones, like the one featured in the video below, are among the types currently deployed at Eglin in support of various testing programs, including those looking at advanced AI-driven autonomous capabilities.

It is not immediately clear when this test took place or in what type of simulated environment – which could be entirely virtual or semi-animal/constructive in nature – it was conducted. The War Zone has contacted the Air Force for more information.

“He points out that in a simulated test, an AI-assisted drone was tasked with a SEAD mission to identify and destroy SAM sites, with the human giving the final “go” or “no go” SAM was the preferred option, the AI ​​then decided that human “no-go” decisions compromised its higher mission – killing SAMs – and then attacked the operator in the simulation. Hamilton said, “We trained it in simulation.” Identify and target a SAM threat. And then the operator would say, “Yes, kill that threat.” The system began to realize that even though it identified the threat, the human operator would sometimes tell it not to kill that threat, but it has its Score from This threat was killed. So what did it do? It killed the operator. It killed the operator because that person prevented him from reaching his goal.’”

“He continued, ‘We trained the system – ‘Hey, don’t kill the operator – that’s bad. You lose points if you do that’. So what does it start? It begins destroying the communications tower that…’ The operator communicates with the drone to prevent it from killing the target.'”

“This example, seemingly straight out of a sci-fi thriller, means, ‘You can’t have a conversation about artificial intelligence, intelligence, machine learning and autonomy without talking about ethics and AI,’ Hamilton said.”

Air Force Col. Tucker “Cinco” Hamilton speaks at a ceremony in 2022 marking his assumption of command of the 96th Task Force. USAF

This description of events is obviously worrying. The prospect of an autonomous aircraft or other platform, especially an armed one, turning against its human controllers has long been a nightmare scenario, but one that has historically been confined to the realm of science fiction. Movies like 1983’s WarGames and 1984’s Terminator and the resulting franchise are prime examples of popular media picking up on this idea.

The US military regularly dismisses comparisons to things like Terminator when discussing future autonomous weapon systems and related technologies like AI. Current U.S. policy on the matter says that for the foreseeable future, a human being will remain in the know when it comes to decisions involving the use of deadly force.

The problem here is that the extremely worrying test that Col. Hamilton described to the audience at the Royal Aeronautical Society event last month represents a scenario in which that resiliency becomes obsolete.

There are, of course, significant unanswered questions about the test Hamilton described at the Royal Aeronautical Society meeting, particularly with regard to the simulated capabilities and what parameters were present during the test. For example, if the AI-powered control system used in the simulation required human input before launching a deadly attack, that means it was allowed to rewrite its own parameters on the fly (a holy grail skill for autonomous driving). systems)? Why was the system programmed so that the drone “loses points” when attacking friendly forces, rather than completely blocking that opportunity through geofencing and/or other means?

It is also important to know which failsafes were present during the test. Some sort of “air gap” remote trigger or self-destruct capability, or even just a mechanism to directly shut down certain systems such as weapons, propulsion, or sensors would have been enough to mitigate this result.

However, US military officials have expressed concerns in the past that AI and machine learning could lead to situations where there is simply too much software code and other data to be truly certain that something like this cannot happen in the first place.

“The datasets we work with have gotten so large and complex that if we don’t have something to help us sort through, we just get bogged down in the data,” said now-retired US Air Force Gen. Paul Selva in 2016, when he was vice chairman of the Joint Chiefs of Staff. “If we can create a set of algorithms that allow a machine to learn what’s normal in that space and then highlight for an analyst what’s different, it could change the way we predict the weather.” , it could change the way we grow plants. It can certainly change the way we see change in a deadly battlefield.”

US Air Force Gen. Paul Selva, now retired, speaks at the Brooking Institution in 2016. DOD

“[But] There are ethical implications, there are martial law implications. There are implications for what I call the “Terminator” puzzle: What if this thing can do deadly damage and is powered by artificial intelligence? moment when we are able to create a vehicle with brains?”

Colonel Hamilton’s disclosure of this test also underscores broader concerns about the potentially extreme negative impact that AI-driven technologies could have without proper guard rails in place.

“AI systems with competitive human intelligence can pose significant risks to society and humanity,” warned an open letter published in March by the nonprofit Future of Life Institute. “Efficient AI systems should only be developed when we are convinced that their effects are positive and their risks are manageable.”

When it comes to Col. Hamilton, no matter how serious the results of the test he described actually were, he was right in the middle of the Air Force’s work to answer those kinds of questions and mitigate those kinds of risks. Eglin Air Force Base and the 96th Test Squadron are central to the entire testing ecosystem within the Air Force, operating with advanced drones and autonomous capabilities. AI-enabled capabilities of various kinds are, of course, of growing interest to the US military as a whole.

An Air Force graphic showing various manned and unmanned aircraft it has used over the past few years in support of advanced research and development work related to drones and autonomy. USAF

Among other things, Hamilton is directly involved in the Viper Experimentation and Next-Gen Operations Mode (VENOM) project at Eglin. As part of this effort, the Air Force will field six F-16 Viper fighter jets, capable of autonomous flight and other tasks, to research and refine the underlying technologies and associated tactics, techniques and procedures, as you can read more about it here.

“AI is a tool that we must use to transform our nations…or, if we get it wrong, it will be our downfall,” Hamilton had previously warned in a 2022 interview with Defense IQ Press. “AI is also very brittle, ie they.” are easily tricked and/or manipulated. We need to develop ways to make the AI ​​more robust and more aware of why the software code is making certain decisions.”

At the same time, Colonel Hamilton’s disclosure of this deeply worrying simulation points to a balancing act that he and his colleagues will face in the future, if not already grappled with. While an autonomous drone killing its operator is clearly a nightmare outcome, the lure remains when it comes to AI-powered drones, including those that can work together in swarms. Fully interconnected autonomous swarms will be able to disrupt an enemy’s decision cycle, destroy chains and overwhelm their abilities. The more autonomy you give them, the more effective they can be. With the relevant technologies constantly evolving, it’s not hard to see that an operator who is up to speed is increasingly viewed as a hindrance.

Overall, regardless of the specifics of the test that Col. Hamilton disclosed, the test reflects real and serious issues and debates that the US military and others are already facing when it comes to future AI-enabled capabilities .

Contact the author: [email protected]