Twitter welcome page is seen on a digital device on April 25, 2022 in San Diego. (AP Photo/Gregory Bull, file)
WASHINGTON (AP) — Over the past 11 months, someone has created thousands of fake, automated Twitter accounts — maybe hundreds of thousands — to praise Donald Trump.
The fake accounts not only published admiring words about the former president, but also mocked Trump’s critics from both parties and attacked Nikki Haley, the former South Carolina governor and UN ambassador who is challenging her former boss for the 2024 Republican presidential nomination.
When it came to Ron DeSantis, the bots aggressively suggested that the Florida governor couldn’t beat Trump but would be a great running mate.
As Republican voters weigh their 2024 candidates, whoever created the bot network is attempting to put a thumb on the scale, using online manipulation techniques pioneered by the Kremlin to sway the conversation about candidates on the platform while exploiting Twitter’s algorithms to maximize the accounts’ reach.
The sprawling bot network was uncovered by researchers at Cyabra, an Israeli tech company, who shared their findings with The Associated Press. While the identities of the people behind the network of fake accounts are unknown, Cyabra analysts determined that it was likely created in the United States.
To identify a bot, researchers look for patterns in an account’s profile, its followers list, and the content it posts. Human users typically post on a variety of topics with a mix of original and reposted material, but bots often post repetitive content on the same topics.
This was true of many of the bots identified by Cyabra.
“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who first discovered the network. “These voices are not people. In the interests of democracy, I want people to know this is happening.”
Bots, as they are commonly known, are fake, automated accounts that became notorious after Russia used them to meddle in the 2016 elections. While big tech companies have improved their detection of fake accounts, the network identified by Cyabra shows that they remain a powerful force in shaping online political discussion.
The new pro-Trump network actually consists of three different networks of Twitter accounts, all created in bulk in April, October and November 2022. Overall, researchers believe hundreds of thousands of accounts could be involved.
The accounts all contain personal photos of the alleged account owner along with a name. Some of the accounts have posted their own content, often in response to real users, while others have reposted content from real users to further amplify it.
“McConnell…Traitor!” wrote one of the accounts in response to an article in a conservative publication about GOP Senate leader Mitch McConnell, one of several Republican critics of Trump targeted by the network.
One way to gauge the impact of bots is to look at the percentage of posts on a given topic that come from accounts that appear to be fake. For typical online debates, that figure is often in the low single digits. Twitter itself has said that fewer than 5% of its daily active users are fake or spam accounts.
However, when Cyabra researchers examined negative posts about certain Trump critics, they found far higher levels of inauthenticity. Almost three quarters of the negative posts about Haley, for example, came from fake accounts.
The network also helped publicize a call for DeSantis to join Trump as a vice presidential nominee — a result that would serve Trump well and allow him to avoid a potentially bitter matchup if DeSantis enters the race.
The same network of accounts shared overwhelmingly positive content about Trump and contributed to an overall misrepresentation of his support online, researchers found.
“Our understanding of mainstream Republican sentiment for 2024 is being manipulated by the proliferation of bots across the internet,” the Cyabra researchers concluded.
The three networks were discovered after Gross analyzed tweets about various national political figures and found that many of the accounts posting the content were created on the same day. Most of the accounts remain active, though they have relatively modest numbers of followers.
A message left with a spokesman for Trump’s campaign was not immediately returned.
Most bots aren’t designed to persuade people, but rather to amplify certain content so more people see it, according to Samuel Woolley, a University of Texas professor and misinformation researcher whose latest book focuses on automated propaganda.
When a human user sees a hashtag or piece of content from a bot and reposts it, they are doing the network’s work for it, and also sending a signal to Twitter’s algorithms to spread the content further.
Bots can also succeed in convincing people that a candidate or idea is more or less popular than it really is, he said. More pro-Trump bots, for example, can lead people to overestimate his overall popularity.
“Bots are absolutely affecting the flow of information,” Woolley said. “They were built to give the illusion of popularity. Repetition is the core weapon of propaganda, and bots are really good at it. They’re really good at getting information in front of people.”
Until recently, most bots were easily identifiable thanks to their clumsy writing or account names containing nonsensical words or long strings of random numbers. As social media platforms became better at recognizing these accounts, the bots became more sophisticated.
So-called cyborg accounts are one example: bots that are regularly taken over by a human user, allowing them to post original content and respond to other users in a human-like way that makes them much harder to detect.
Thanks to advances in artificial intelligence, bots could soon become much sneakier. New AI programs can create lifelike profile photos and posts that sound far more authentic. Bots that sound like real people and deploy deepfake video technology could challenge platforms and users alike in new ways, according to Katie Harbath, a fellow at the Bipartisan Policy Center and former director of public policy at Facebook.
“Platforms have gotten so much better at fighting bots since 2016,” Harbath said. “But the kinds we’re starting to see now can create fake people with AI. Fake videos.”
These technological advances are likely to ensure that bots have a long future in American politics – as digital foot soldiers in online election campaigns and as potential problems for voters and candidates trying to defend themselves against anonymous online attacks.
“There’s never been more noise online,” said Tyler Brown, a policy adviser and former digital director of the Republican National Committee. “How much of it is maliciously or even unintentionally untrue? It’s easy to imagine people being able to manipulate that.”