Artificial intelligence shouldn’t control the use of nuclear weapons, officials say

According to a new US proposal on the military applications of the nascent technology, artificial intelligence systems should not control “actions” crucial to the use of nuclear weapons.

“States should maintain human control and participation in all actions critical to informing and executing sovereign decisions about the use of nuclear weapons,” the State Department said on Thursday.

This statement was a key proposal in a policy declaration presented by Secretary of State Antony Blinken’s team at a conference on the military implications of artificial intelligence in The Hague. Dutch and South Korean officials hosted the conference amid the launch of AI-enabled chatbots, which have fueled new unease about the specter of military conflicts being waged with weapons systems capable of operating independently of humans.

“AI is everywhere. On our kids’ phones, ChatGPT is their new best friend when it comes to homework,” said Dutch Foreign Minister Wopke Hoekstra on Wednesday at the start of the conference. “Nevertheless, AI also has the potential to destroy in a matter of seconds. And that’s worrisome, considering that only caution has prevented a nuclear escalation for the past few decades. How will that evolve with technology that can make decisions faster than any of us can think?”


Cold War history underscores the importance of human judgment in averting nuclear war. In a famous 1983 incident, a false alarm from Soviet early-warning systems appeared to show an incoming attack from the United States. The Soviet officer on duty correctly suspected that the detection system had malfunctioned and delayed reporting the alarm to his superiors.


“There was no rule about how long we were allowed to think before reporting a strike. But we knew that every second of procrastination took away valuable time; that the Soviet Union’s military and political leadership needed to be informed without delay,” officer Stanislav Petrov told the BBC in 2013. “Twenty-three minutes later I realized that nothing had happened. If there had been a real strike, I would already know about it. It was such a relief.”

Research on artificial intelligence could open the door to weapons systems that bypass such human reasoning, dozens of countries have agreed.

“We recognize that AI can be used to shape and influence decision-making, and we will work to keep people responsible and accountable for decisions when using AI in the military arena,” more than 50 states agreed in a “call to action” released this week at the summit on Responsible AI in the Military Domain. “We recognize that untimely adoption of AI can result in military disadvantage, while premature adoption without adequate research, testing and security can result in unintended harm. We see a need to increase the sharing of experiences related to risk mitigation practices and procedures.”

Signatories to this broad declaration included four of the five veto-wielding nuclear powers on the UN Security Council: the US, China, France and the UK. The fifth, Russia, was not invited to the conference because of its invasion of Ukraine.

The US offered a more specific statement with 12 proposals aimed at putting guardrails around the military use of artificial intelligence, notably by maintaining a “responsible human chain of command and control” over AI-powered weapons.


“States should design and develop military AI capabilities so that they are able to detect and avoid unintended consequences, and to disengage or deactivate deployed systems that exhibit unintended behavior,” the US policy statement said. “States should also take other appropriate safeguards to mitigate the risk of serious failures.”

But the voluntary political declaration is a far cry from a treaty that could restrict militaries around the world.


“The goal of the declaration is to respond to rapid advances in technology by initiating a process to build an international consensus on responsible behavior and to guide states’ development, deployment and use of military AI,” State Department deputy spokesman Vedant Patel said on Thursday. “We encourage other states to join us in building an international consensus on the principles we have articulated in our political declaration.”