Can advanced chatbots wreak havoc on social media?

ChatGPT is powered by sophisticated language processing AI

Whether it’s cooking tips or help with a speech, ChatGPT has been many people’s first opportunity to play with an artificial intelligence (AI) system.

ChatGPT is built on advanced language processing technology developed by OpenAI.

The AI was trained using text databases from the internet, including books, magazines and Wikipedia entries. In total, 300 billion words were fed into the system.

The end result is a chatbot that can seem eerily human, yet is armed with encyclopedic knowledge.

Tell ChatGPT what you have in your kitchen cupboard and you will get a recipe. Need a crisp introduction for a big presentation? No problem.

But is it too good? Its compelling approximation of human responses could be a powerful tool for those up to no good.

Academics, cybersecurity researchers, and AI experts warn that ChatGPT could be used by bad actors to sow dissent and spread propaganda on social media.

Until now, spreading misinformation has required a lot of human work. However, according to a report published in January by Georgetown University, the Stanford Internet Observatory and OpenAI, an AI like ChatGPT would make it much easier for so-called troll armies to expand their operations.

Sophisticated language processing systems such as ChatGPT could supercharge so-called influence campaigns on social media.

Such campaigns aim to deflect criticism and cast a ruling party or politician in a positive light, and they may also advocate for or against policies. They also spread misinformation on social media using fake accounts.

An official report revealed that thousands of social media posts from Russia were aimed at disrupting Hillary Clinton’s presidential bid in 2016

One such campaign was launched in the run-up to the 2016 US election.

Thousands of Twitter, Facebook, Instagram and YouTube accounts created by the St. Petersburg-based Internet Research Agency were focused on harming Hillary Clinton’s campaign and supporting Donald Trump, the Senate Intelligence Committee concluded in 2019.

But future elections may have to contend with an even greater barrage of misinformation.

“The potential for language models to compete with human-written content at low cost suggests that, like any powerful technology, these models can offer distinct advantages to propagandists who choose to use them,” reads the report published in January.

“These advantages could expand access to a larger number of actors, enable new tactics of influence, and make a campaign’s messaging much more tailored and potentially more effective,” the report warns.

It’s not just the amount of misinformation that could go up, but the quality as well.

AI systems could improve the persuasive quality of content and make those messages harder for ordinary internet users to spot as part of coordinated disinformation campaigns, says Josh Goldstein, co-author of the paper and a research fellow at the Georgetown Center for Security and Emerging Technology, where he works on the CyberAI Project.

“Generative language models could produce a large amount of content that is original every time… and allow each propagandist not to rely on copying and pasting the same text across social media accounts or news sites,” he says.

Mr. Goldstein goes on to say that when a platform is flooded with untrue information or propaganda, it becomes harder for the public to discern the truth. That, he says, is often precisely the goal of the bad actors behind influence operations.

His report also notes that access to these systems may not remain the domain of a few organizations.

“Right now, a small number of companies or governments have best-in-class language models that are limited in the tasks they can reliably perform and the languages they output.

“As more actors invest in cutting-edge generative models, it could increase the chances of propagandists gaining access to them,” his report says.

Nefarious groups could distribute AI-written content in much the same way as spam, says Gary Marcus, an AI specialist and founder of Geometric Intelligence, an AI company acquired by Uber in 2016.

“People who spread spam rely on the most gullible people to click on their links, using this spray-and-pray approach to reach as many people as possible. But with AI, this squirt gun can become the biggest Super Soaker ever.”

Even if platforms like Twitter and Facebook take down three quarters of what these perpetrators post on their networks, “there is still at least ten times as much content as before aiming to mislead people online,” Mr. Marcus says. The arithmetic is stark: if AI multiplies the volume of such posts forty-fold, say, removing three quarters of it still leaves ten times the original amount.

The tide of fake social media accounts has become a thorn in the side of Twitter and Facebook, and the rapid maturing of today’s language model systems will only flood those platforms with more fake profiles.

“Something like ChatGPT can scale this proliferation of fake accounts to levels we’ve never seen before,” says Vincent Conitzer, a professor of computer science at Carnegie Mellon University, “and it will become more difficult to distinguish each of these accounts from human beings.”

Fake accounts using technology like ChatGPT will be difficult to distinguish from humans, says Vincent Conitzer

Both the January 2023 paper co-authored by Mr. Goldstein and a similar report from security firm WithSecure Intelligence warn that generative language models can quickly and efficiently create fake news articles that could be spread across social media, further fueling the spate of hoax narratives that could sway voters ahead of a crucial election.

But if misinformation and fake news become an even greater threat because of AI systems like ChatGPT, shouldn’t social media platforms be more proactive in tackling them? Some experts think they will be lax in policing such posts.

“Facebook and other platforms should flag fake content, but Facebook spectacularly failed that test,” said Luís A. Nunes Amaral, co-director of the Northwestern Institute on Complex Systems.

“The reasons for this inaction are the cost of monitoring every single post and the realization that these fake posts are meant to anger and divide people, which drives engagement. This is beneficial for Facebook.”