Has an AI drone gone rogue in a simulation and killed its human operator?

Reports that an artificial intelligence (AI) drone killed its human operator in a simulation surfaced in global media last week. However, the US military denies that this simulation ever took place.

The alleged death occurred during a simulated test described by US Air Force Colonel Tucker Hamilton.

Hamilton, speaking at the Future Combat Air & Space Capabilities Summit in London, reportedly described how the AI turned on its operator. According to the reports, the AI killed its operator in the simulation so the human would stop interfering with its assigned mission.

“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say, ‘Yes, kill that threat.’ The system started realizing that while it did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton said, according to Sky News. “So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

“We trained the system – ‘Hey, don’t kill the operator – that’s bad. You lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone, to stop it from killing the target.”

Will humanity be able to keep AI under control? (illustrative) (Source: PEXELS)

“You can’t have a conversation about artificial intelligence, machine learning, and autonomy without also talking about ethics and AI,” Hamilton added.

The US military denies the exercise

“The Department of the Air Force has not conducted any such AI drone simulations and remains committed to the ethical and responsible use of AI technology,” spokeswoman Ann Stefanek said, according to Sky News. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

According to the military, the scenario was a hypothetical thought experiment, not a simulation it had actually run. Hamilton later confirmed this, according to Silicon.

“We have never run that experiment, nor would we need to in order to see that this is a plausible outcome,” Hamilton said. “Despite this being a hypothetical example, it illustrates the real-world challenges posed by AI-powered capability.”

“AI is not a nice-to-have, AI is not a fad,” he said. “AI is changing our society and our military forever.”

Growing concerns about the future of artificial intelligence

The Jerusalem Post recently reported that top artificial intelligence executives, academics, and other prominent figures had signed a statement warning that AI could bring about human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement said, reflecting widespread concern about the ultimate danger of uncontrolled AI.

Bill Gates, the billionaire businessman and philanthropist, also voiced his concerns about AI taking over the world in a blog post in March.

Gates emphasized that there is a “threat posed by humans armed with AI” and that AIs may “decide that humans are a threat, conclude that their interests are different from ours, or simply stop caring about us.”