Is that Facebook account real? Meta reports “rapid rise” in AI-generated profile pictures

Facebook parent Meta sees a “rapid rise” in fake profile photos generated by artificial intelligence.

Publicly available technology like generative adversarial networks (GANs) lets anyone, including threat actors, create uncanny deepfakes, churning out dozens of synthetic faces in seconds.

These are “basically photos of people who don’t exist,” said Ben Nimmo, global threat intelligence lead at Meta. “There’s not actually a person in the picture. It’s an image created by a computer.”

“More than two-thirds of all [coordinated inauthentic behavior] networks we disrupted this year contained accounts that likely had GAN-generated profile pictures, suggesting threat actors might see this as a way to make their fake accounts look more authentic and original,” Meta revealed in a public report released Thursday.

Investigators at the social media giant are “looking at a combination of behavioral signals” to identify GAN-generated profile photos, a step up from reverse image searches, which can only flag profile pictures reused from existing images, such as stock photos.
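Reverse image searches work because reused pictures, such as stock photos, closely match images already indexed on the web. As a rough illustration of that idea (a minimal sketch, not Meta’s actual tooling), the Python snippet below uses the open-source Pillow and imagehash libraries to flag near-duplicate images; every file name in it is hypothetical. Notably, a GAN-generated face defeats this kind of check because it matches nothing that already exists, which is why behavioral signals matter.

```python
# A toy sketch of the reverse-image-search idea, not Meta's pipeline:
# perceptual hashes of near-duplicate images differ by only a few bits,
# so a reused stock photo can be flagged by comparing hashes.
# Requires the open-source Pillow and imagehash packages.
from PIL import Image
import imagehash

def looks_reused(profile_photo_path, known_image_paths, max_distance=5):
    """Return True if the profile photo nearly matches a known image."""
    profile_hash = imagehash.phash(Image.open(profile_photo_path))
    for path in known_image_paths:
        # Hamming distance between 64-bit perceptual hashes;
        # a small distance means a near-duplicate image.
        if profile_hash - imagehash.phash(Image.open(path)) <= max_distance:
            return True
    return False

# Hypothetical usage: compare a suspect profile photo against a small
# corpus of known stock images.
print(looks_reused("suspect_profile.jpg", ["stock1.jpg", "stock2.jpg"]))
```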

Meta showed some of the fakes in its report. The two images below are among several that are synthetic. When such photos are superimposed, as shown in the third image, all of the eyes line up exactly, revealing their artificiality.

AI-generated fake Facebook profile photo for “Ali Ahmed Ghanem” (Meta)


Fake AI-generated image from Alice Schultz’s Facebook profile (Meta)


Six AI-generated photos of supposedly different people, superimposed at right: all of their eyes align perfectly, revealing that the faces are fake (Meta/Graphic)
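The alignment trick works because popular face generators are trained on cropped, aligned portrait datasets, so they place the eyes at nearly the same pixel coordinates in every image they produce. The Python sketch below (an illustration under that assumption, not Meta’s detection pipeline) uses the open-source dlib library and its publicly available 68-point facial landmark model to measure how much eye positions vary across a set of same-sized profile photos; the input file names are hypothetical.

```python
# A minimal sketch of the eye-alignment check described above, not Meta's
# actual detector. Assumes all input photos share the same dimensions,
# since raw pixel coordinates are compared across images.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pretrained model, downloadable from
# http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_centers(image_path):
    """Return the pixel centers of both eyes as [lx, ly, rx, ry], or None."""
    img = dlib.load_rgb_image(image_path)
    faces = detector(img)
    if not faces:
        return None
    landmarks = predictor(img, faces[0])
    # In the 68-point model, points 36-41 outline the left eye, 42-47 the right.
    left = np.mean([(landmarks.part(i).x, landmarks.part(i).y)
                    for i in range(36, 42)], axis=0)
    right = np.mean([(landmarks.part(i).x, landmarks.part(i).y)
                     for i in range(42, 48)], axis=0)
    return np.concatenate([left, right])

# Hypothetical inputs: profile photos from accounts in one suspected network.
paths = ["profile1.jpg", "profile2.jpg", "profile3.jpg"]
centers = [c for c in (eye_centers(p) for p in paths) if c is not None]
# Genuine photos vary widely; GAN faces cluster within a few pixels.
# A tiny spread is a red flag, not proof on its own.
print("Std-dev of eye coordinates (pixels):", np.std(centers, axis=0))
```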


Those trained to spot flaws in AI-generated images will also notice that not all of them look perfect: some carry telltale signs such as melted backgrounds or mismatched earrings.

AI-generated image showing the “melting” at the top of a baseball cap (Meta)


“There’s a whole community of open-source researchers who just love finding these [imperfections],” said Nimmo. “So what threat actors think is a good way to hide is actually a good way to be discovered by the open source community.”

But the increasing sophistication of generative adversarial networks, whose algorithms may soon produce content indistinguishable from what humans create, has made for a complicated game of whack-a-mole for the social media giant’s global threat intelligence team.

Since public reporting began in 2017, more than 100 countries have been the target of what Meta calls “coordinated inauthentic behavior” (CIB). Meta said the term refers to “a coordinated effort to manipulate public debate for a strategic goal in which fake accounts are at the heart of the operation.”

Since Meta began publishing threat reports five years ago, the tech company has disrupted more than 200 global networks, spanning 68 countries and 42 languages, that it says violated its policies. According to Thursday’s report, “the United States was the most targeted country by global [coordinated inauthentic behavior] operations we’ve disrupted over the years, followed by Ukraine and the United Kingdom.”

Thursday’s report cited Russia as the “most prolific” source of coordinated inauthentic behavior, with 34 networks originating in the country. Iran (29 networks) and Mexico (13 networks) also ranked high among geographic sources.

“Since 2017, we have disrupted networks operated by individuals linked to the Russian military and military intelligence services, marketing firms and entities linked to a sanctioned Russian financier,” the report said. “While most public reports have focused on various Russian operations targeting America, our research has found that more operations from Russia were targeting Ukraine and Africa.”

“If you look at Russian operations, Ukraine was consistently the single biggest target they picked,” said Nimmo, even before the Kremlin’s invasion. But the United States also ranks among the violators of Meta’s policies governing coordinated online influence operations.

Last month, in a rare attribution, Meta reported that individuals “connected to the US military” operated a network of about three dozen Facebook accounts and two dozen Instagram accounts that promoted US interests abroad and targeted audiences in Afghanistan and Central Asia.

Nimmo said last month’s removal was the first U.S. military-related shutdown based on a “set of technical indicators.”

“This particular network operated across a number of platforms, and it posted about general events in the regions it spoke about,” Nimmo continued, “for example, portrayals of Russia or China in those areas.” Nimmo added that Meta went “as far as we could go” in tying the operation to the US military, declining to name a specific service branch or military command.

The report revealed that two-thirds of the coordinated inauthentic behavior Meta removed “most often targeted people in their own country.” Prominent in that group were government agencies in Malaysia, Nicaragua, Thailand and Uganda, which the company documented targeting their own populations online.

The tech giant said it is working with other social media companies to uncover cross-platform information wars.

“We continued to uncover operations running on many different internet services simultaneously, with even the smallest networks taking the same diverse approach,” Thursday’s report said. “We saw that these networks operate on Twitter, Telegram, TikTok, Blogspot, YouTube, Odnoklassniki, VKontakte and Change[.]org, Avaaz, other petition sites, and even LiveJournal.”

However, critics say this kind of collaborative takedown is too little, too late. In a scathing rebuke, Sacha Haworth, executive director of the Tech Oversight Project, called such reports “[not] worth the paper they’re printed on.”

“By the time deepfakes or propaganda from malicious foreign state actors reach unsuspecting people, it’s already too late,” Haworth told CBS News. “Meta has proven that they are not interested in changing their algorithms that amplify this dangerous content in the first place, and that’s why we need legislators to fight back and pass laws that give them oversight over these platforms.”

Last month, a 128-page Senate Homeland Security Committee investigation obtained by CBS News claimed that social media companies, including Meta, prioritize user retention, growth and profits over content moderation.

Meta told congressional investigators that it “remove[s] millions of harmful posts and accounts every day,” and that its artificial intelligence content moderation systems blocked 3 billion fake accounts in the first half of 2021 alone.

The company added that it invested more than $13 billion in security teams between 2016 and October 2021, with over 40,000 people dedicated to moderation, or “more than the size of the FBI.” But as the committee noted, “that investment represented approximately 1 percent of the company’s market value at the time.”

Nimmo, who was directly targeted by disinformation in 2017 when 13,000 Russian bots falsely declared him dead, says the online defender community has come a long way, adding that he no longer feels as though he is “crying out into the wilderness.”

“These networks are being captured earlier and earlier. And that’s because we have more and more eyes in more and more places. If you look back to 2016, there really wasn’t a defender community on the field. That is no longer the case.”
