In 2016, a computer program called AlphaGo made headlines when it defeated then world champion Lee Sedol at the ancient, popular strategy game Go. The “superhuman” artificial intelligence, developed by Google DeepMind, lost just one of the five games against Sedol, drawing comparisons to Garry Kasparov’s 1997 chess loss to IBM’s Deep Blue. Go, in which players compete by placing black and white pieces, called stones, to occupy territory on the game board, was seen as a more formidable challenge for a machine opponent than chess.
AlphaGo’s victory was followed by much hand-wringing over the threat posed by AI to human ingenuity and livelihood, not unlike what is happening right now with ChatGPT and its kin. However, in a 2016 press conference after the defeat, a subdued Sedol made a comment with a core of positivity. “The style was different and it was such an unusual experience that it took me some time to get used to it,” he said. “AlphaGo made me realize that I need to study Go more.”
At the time, European Go champion Fan Hui, who had lost a private five-game match to AlphaGo months earlier, told Wired that the matches made him see the game “completely differently.” This improved his play so much that, according to Wired, his world ranking “skyrocketed.”
It can be difficult to formally trace the chaotic process of human decision-making. But decades of records of professional Go players’ moves gave researchers a way to assess the strategic human response to an AI provocation. Now a new study confirms that Fan Hui’s improvement after the AlphaGo challenge was no coincidence. In 2017, following the humiliating AI victory of 2016, human Go players gained access to data detailing the AI system’s moves and, in a very human way, developed new strategies that led to better-quality decisions in their games. Confirmation of the changes in human gameplay appears in results published March 13 in the Proceedings of the National Academy of Sciences USA.
“It’s amazing to see that human players have adapted so quickly to incorporate these new discoveries into their own play,” said David Silver, senior research scientist at DeepMind and leader of the AlphaGo project, who was not involved in the new study. “These results suggest that people will adapt to these discoveries and build on them to massively increase their potential.”
To find out whether the advent of superhuman AI prompted humans to devise new game strategies, Minkyu Shin, an assistant professor in the Department of Marketing at City University of Hong Kong, and his colleagues used a database of 5.8 million moves recorded during games played from 1950 to 2021. This record, maintained in the Games of Go on Disc (GoGoD) database, documents the moves of tournament Go games dating as far back as the 19th century. The researchers began their analysis with games from the 1950s, when modern Go rules were introduced.
To comb through the massive record of 5.8 million moves, the team first developed a method to assess the quality of decision-making for each move. To develop this index, the researchers used another AI system, KataGo, to compare the win rates of each human decision to those of AI decisions. This extensive analysis involved simulating 10,000 ways the game might play out after each of the 5.8 million human decisions.
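The core idea of comparing a human move’s estimated win rate against the engine’s preferred continuation can be sketched in a few lines. This is a minimal illustration, not the paper’s exact index: the function name and the scoring convention (quality as the change in the mover’s estimated win rate) are assumptions, and in the study the win-rate estimates came from KataGo simulations rather than hand-supplied numbers.

```python
def decision_quality(winrate_before: float, winrate_after: float) -> float:
    """Score a single move as the change in the mover's estimated win rate.

    winrate_before: the engine's win-rate estimate for the player to move,
                    evaluated just before the move is played.
    winrate_after:  the engine's win-rate estimate for that same player,
                    evaluated after the move is played.
    An engine-optimal move leaves the win rate essentially unchanged
    (score near 0); a mistake produces a negative score.
    """
    return winrate_after - winrate_before


# Toy usage: a move that drops the player's estimated win rate
# from 0.55 to 0.48 scores -0.07, i.e. a small mistake.
score = decision_quality(0.55, 0.48)
print(round(score, 2))
```

Averaging such per-move scores over a game, or over all games in a year, then gives a single quality figure whose trend over time can be tracked, which is the spirit of the analysis described above.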
Using a quality score for each of the human decisions at hand, the researchers then developed a means to pinpoint when a human decision during a game was novel, meaning it had never been recorded before in the game’s history. Chess players have long used a similar approach to determine when a new strategy emerges in the game.
In the novelty analysis, the researchers mapped up to 60 moves for each game and marked when a new move was introduced. If a novel move appears on, say, move 9 in one game but not until move 15 in another, then the former game has a higher novelty index value than the latter. Shin and his colleagues found that after 2017, most of the moves that the team defined as novel occurred by move 35.
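The novelty bookkeeping described above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the study’s implementation: the corpus of “seen” opening prefixes, the move coordinates, and the normalization of the index are all made up for the example; only the two ideas from the text are preserved (a move is novel if its opening sequence has never been recorded before, and an earlier first novel move yields a higher novelty index).

```python
MAX_MOVES = 60  # the study mapped up to 60 moves per game


def first_novel_move(game: list, seen_prefixes: set) -> int:
    """Return the 1-indexed move at which this game's opening sequence
    first departs from every previously recorded sequence, or 0 if the
    whole mapped prefix has been seen before."""
    for i in range(1, min(len(game), MAX_MOVES) + 1):
        if tuple(game[:i]) not in seen_prefixes:
            return i
    return 0


def novelty_index(move_number: int) -> float:
    """Earlier novelty -> higher index, normalized into (0, 1]."""
    return (MAX_MOVES - move_number + 1) / MAX_MOVES


# Toy corpus: record every opening prefix of two prior games
# (the board coordinates here are invented for illustration).
seen = set()
for prior in [["dp", "pd", "dd"], ["dp", "pd", "pp"]]:
    for i in range(1, len(prior) + 1):
        seen.add(tuple(prior[:i]))

game = ["dp", "pd", "cc"]  # departs from the corpus at move 3
m = first_novel_move(game, seen)
print(m, round(novelty_index(m), 3))
```

A game whose first novel move lands on move 9 would thus score higher than one whose first novel move lands on move 15, matching the ordering described in the text.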
The researchers then examined whether the timing of novel moves in the game was associated with increased decision quality – whether making such moves actually improved a player’s edge on the board and likelihood of winning. They particularly wanted to see what happened to decision quality after AlphaGo defeated its human challenger Sedol in 2016 and another set of human challengers in 2017.
The team found that human decision quality levels remained fairly constant for 66 years before AI defeated human Go champions. After that fateful period of 2016-2017, decision quality scores began to increase. Humans made better game decisions – maybe not enough to consistently beat superhuman AIs, but still better.
After 2016–2017, novelty values also skyrocketed as players introduced new moves earlier in the game sequence. And in their assessment of the association between novel moves and better-quality decisions, Shin and his colleagues found that before AlphaGo’s success against human players, humans’ novel moves contributed less, on average, to decision quality than non-novel moves did. After these landmark AI victories, the novel moves humans introduced into games contributed more, on average, to decision quality than previously known moves.
One possible explanation for these improvements is that people simply memorized new sequences of moves. In the study, Shin and his colleagues also assessed how much memorization might explain decision quality. The researchers found that memorization could not fully explain the improvements in decision quality and was “unlikely” to underlie the increased novelty observed after 2016–2017.
Murat Kantarcioglu, a professor of computer science at the University of Texas at Dallas, says these results, along with work he and others have done, show that “AI can clearly help improve human decision-making.” Kantarcioglu, who was not involved in the current study, says the AI’s ability to process vast search spaces, such as all possible moves in a complex game like Go, means the AI can “find new solutions and approaches to problems.” For example, an AI that flags a medical image as indicating cancer could prompt a clinician to look more closely than before. “This, in turn, will make the person a better doctor and prevent such mistakes in the future,” he says.
One problem, as the world is seeing with ChatGPT right now, is making AI more trustworthy, Kantarcioglu adds. “I think that’s the biggest challenge,” he says.
In this new phase of concerns about ChatGPT and other AIs, the findings offer “a hopeful perspective” on AI’s potential to be an ally rather than a “potential enemy in our journey of progress and improvement,” Shin and his co-authors wrote in an email to Scientific American.
“My co-authors and I are currently running online lab experiments to study how people can improve their prompts and get better results from these programs,” says Shin. “Rather than seeing AI as a threat to human intelligence, we should embrace it as a valuable tool that can enhance our capabilities.”