A Machiavellian machine raises ethical questions about AI

The author is a science commentator

I remember my daughter’s first lie. She was standing with her back to the living room wall, crayon in hand, trying to hide a rambling scribble. Her explanation was as creative as her handiwork: “Daddy do it.”

Deception is a milestone in cognitive development because it requires an understanding of how others might think and act. A limited form of this ability has now been demonstrated by Cicero, an artificial intelligence system designed to play Diplomacy, a war strategy game in which players negotiate, form alliances, bluff, withhold information and sometimes mislead. Developed by Meta and named after the famous Roman orator, Cicero pitted its artificial mind against human players online, and outperformed most of them.

The arrival of an AI that can play the game as proficiently as humans, revealed last week in the journal Science, opens the door to more sophisticated human-AI interactions, such as better chatbots and optimal problem-solving where compromise is essential. But since Cicero demonstrates that AI can, if necessary, employ underhanded tactics to achieve particular goals, the creation of a Machiavellian machine also raises the question of how much agency we should outsource to algorithms, and whether similar technology should ever be used in real-world diplomacy.

Last year, the EU commissioned a study on the use of AI in diplomacy and its likely impact on geopolitics. “We humans aren’t always good at resolving conflicts,” says Huma Shah, an AI ethicist at Coventry University in the UK. “If AI could complement human negotiation and prevent what is happening in Ukraine, then why not?”

Like chess, Diplomacy can be played on a board or online. Up to seven players compete for control of different European territories. In a first round of actual diplomacy, players can strike alliances or agreements to hold their positions or move troops, including to attack an opponent or defend an ally.

The game is regarded as a grand challenge in AI because players must understand other players’ motivations as well as strategy. It involves both cooperation and competition, with betrayal an ever-present risk.

Unlike chess or Go, Diplomacy hinges on communication with the other players. Cicero therefore combines the strategic reasoning of traditional game-playing systems with natural language processing. During a game, the AI calculates how the other players are likely to behave in negotiations. It then generates appropriately worded messages to persuade, coax or coerce the others into partnerships or concessions that advance its own game plan. Meta’s scientists trained Cicero on data from about 40,000 online games, including 13 million in-game messages.
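To make that division of labour concrete, here is a minimal, hypothetical sketch of the “plan, then talk” loop described above. Every name in it is invented for illustration; this is not Meta’s code, and in the real system each placeholder stub is a trained model rather than a rule.

```python
# Hypothetical "plan, then talk" agent loop. All names are invented for
# illustration; this sketches the architecture, not Meta's Cicero code.
import random

def predict_intents(state, opponents):
    # Stub for the strategic model: guess each opponent's likely behaviour.
    return {p: random.choice(["attack", "hold", "support"]) for p in opponents}

def choose_plan(state, intents):
    # Stub strategy: ally with anyone predicted to support us.
    allies = [p for p, intent in intents.items() if intent == "support"]
    return {"allies": allies, "orders": ["advance"] if allies else ["hold"]}

def draft_message(recipient, plan):
    # Stub for the language model: word a proposal that serves the plan.
    if recipient in plan["allies"]:
        return f"{recipient}: support my advance and we both gain ground."
    return f"{recipient}: I propose we hold our borders this turn."

def play_turn(state, opponents):
    intents = predict_intents(state, opponents)                # model the others
    plan = choose_plan(state, intents)                         # pick a strategy
    messages = {p: draft_message(p, plan) for p in opponents}  # negotiate
    return plan, messages

plan, messages = play_turn(state={}, opponents=["England", "Russia"])
print(plan)
print(messages)
```

In the real system the stubs are learned models: the intent predictor and planner belong to the strategic-reasoning side, while the message drafter is a language model that words its proposals to serve the chosen plan.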

After playing 40 games in an anonymous online league against 82 people, Cicero ranked in the top 10 percent of contestants who had played more than one game. There were hiccups: it sometimes spat out contradictory messages about invasion plans, confusing participants. Even so, only one opponent suspected that Cicero might be a bot (all was revealed afterwards).

Professor David Leslie, an AI ethicist at Queen Mary University of London and the Alan Turing Institute, describes Cicero as a “very tech-savvy Frankenstein”: an impressive marriage of multiple technologies, but also a window into an unsettling future. He points to a 2018 UK parliamentary committee report warning that AI should never be endowed with “the autonomous power to harm, destroy or deceive people”.

Leslie’s first concern is anthropomorphic deception: a person mistakenly believing, as Cicero’s opponents did, that there is another human behind the screen. This can pave the way for people to be manipulated by technology.

His second concern is an AI endowed with cunning but lacking any grasp of basic moral concepts such as honesty, duty, rights and obligations. “A system is endowed with the ability to deceive, but it doesn’t work in the moral life of our community,” says Leslie. “To state the obvious, an AI system is fundamentally amoral.” Cicero-like intelligence, he says, is best applied to difficult scientific problems such as weather analysis, not thorny geopolitical issues.

Interestingly, Cicero’s makers claim that its messages, filtered for toxic language, were “largely honest and helpful” to other players, and speculate that its success may have stemmed from suggesting and explaining mutually beneficial moves. Perhaps, instead of marveling at how well Cicero plays Diplomacy against humans, we should despair at how badly humans play diplomacy in real life.