This article is a preview of The Tech Friend newsletter. Sign up here to get it in your inbox every Tuesday and Friday.
For the past week or so, people experimenting with artificial intelligence software have been churning out fake images of Donald Trump being arrested and Pope Francis strutting around in a puffy coat.
And AI-generated images showed what appeared to be French President Emmanuel Macron caught in a riot.
Technology powered by AI helps people create false images, videos and audio that are increasingly difficult to distinguish from reality.
I have five clues you can look for to spot AI-generated imagery, including hands, backgrounds and inanimate objects that often don't look quite right.
Whenever technology advances, we must learn to face new challenges. Picking up the skills of an AI detective is empowering. And the skepticism that greeted the Trump fakes shows we are not AI suckers. (A grand jury in Manhattan voted Thursday to indict Trump.)
But I’ll tell you the truth: I feel uncomfortable writing this newsletter. I don’t want to exaggerate your fears about AI fakes, which is risky in and of itself. And the focus on AI forensics can also distract us from a deeper reason why fakes are alluring.
We have 15 years of social media history – and centuries of conspiracy theories – showing that the sophistication of the “evidence” is not what makes false information credible. We fall for fakes when we want to believe the reality they represent.
5 clues that an image might be an AI-generated fake
1. Consider the hands. AI software has historically drawn human hands with too many fingers or other oddities. The technology is now starting to get hands right, but glitches are still common.
In the fake picture of Pope Francis, for example, his right hand and the take-out coffee cup he is clutching look distorted. Even at a glance, those details don't seem quite right.
2. Watch for inanimate objects that look off: AI software, including Midjourney (which was used to create the pope's puffy coat and the fake Trump arrest pics), can generate objects that defy reality.
To see this, focus on objects in an image, such as glasses, fences, or bicycles.
Some eagle-eyed people noticed that in the fake picture of Pope Francis, the traditional pectoral cross around his neck appeared to hang from only one side of its chain.
Computer-generated characters may be missing an earring, or the arms of their glasses may not match. These flaws were more obvious in earlier generations of AI imaging software, but such distortions still show up.
Machines can also trip up AI. Journalist Luke Bailey tweeted AI-generated images of unicycles that got the mechanics ridiculously wrong.
3. Is there garbled text? If you’re wondering if an image was created by AI, look for lettering on objects like street signs or billboards.
Bailey also showed an AI-generated image of Prince Harry clutching a bag of McDonald’s food. The restaurant chain’s logo looked realistic, but the text on the bag was gibberish.
4. Scan the background. AI-generated images may be blurry or distorted, especially in the background.
In one of the fake Trump pictures, police officers' faces appeared blurred or misshapen. In another, the eyes of the AI-generated officers appeared to be looking the wrong way.
5. Are the images overly glossy or artistic looking? Some AI-generated images of real people appear garishly stylized or show people with plastic-looking faces.
The face of the AI-generated Pope Francis has an “aesthetic glow,” said Henry Ajder, a specialist in manipulated or artificially generated media. “AI software smooths them a bit too much and makes them look too shiny.”
As AI technology advances, it becomes more difficult for you to recognize AI-generated people or manipulated images. Ajder warned that these clues to detecting AI images could soon become obsolete. “In weeks, these errors can be trained out of these models,” he said.
The big picture: Treating AI fakes as doomsday is irresponsible
Fake pictures are not new. For example, a fake image of a shark allegedly swimming in flooded city streets during hurricanes and other storms has repeatedly circulated for more than a decade.
But it’s scary that AI software gives almost anyone the ability to create compelling-looking images in minutes.
Our challenge is to treat the risks of AI fakery neither with too little concern nor with so much concern that we create a self-fulfilling panic.
Researchers speak of a phenomenon known as the “liar’s dividend”: the more we believe that what we see and hear is fake, the more we run the risk of doubting the authenticity of anything. Authoritarian governments love this Orwellian distrust. You and I must resist this.
It’s also important to realize that fakes and hoaxes have always been a part of our lives. They are in part a symptom of our mistrust of one another and our fears.
I’ve also fallen for fakes. At the start of the 2020 coronavirus outbreak, I saw a viral tweet with a picture of Tom Hanks apparently hiding in a hospital room with Wilson, the volleyball from his movie Cast Away.
The image was a photoshopped fake from a satirical Australian news publication, but I retweeted it without thinking. I was scared of the pandemic and that moment of lightness felt like a relief. I wanted it to be true.
Claire Wardle, a co-founder of the Information Futures Lab at Brown University, told me she was heartened that relatively few people seemed to believe the AI-generated Trump images were real.
She said it shows that many of us have learned to be critical of what we see online and to seek verification. Wardle said she has seen comments on Twitter from people saying that if the images of Trump's arrest were real, they would have been published on traditional news websites.
“It’s easy to walk the doomsday path, but actually I think we’re smarter than we think we are,” Wardle said.
One of the biggest problems Help Desk readers face is when a hacker takes over a Facebook account.
And Facebook is notoriously bad at helping people recover a hacked account. My colleague Heather Kelly has suggestions on how to avoid a Facebook account takeover in the first place.
If you do only one thing, enable two-factor authentication: an extra login step, such as a one-time code, required in addition to your password. To turn it on:
Tap the three lines in the top-right corner (Android app) or the bottom-right corner (iPhone app) → Scroll down to Settings & Privacy → Settings → Meta Accounts Center at the top of the screen → Password and security → Two-factor authentication → Tap "Edit" and enter your Facebook password.
You will see three options to choose from. Most people should choose one of these two:
Text message (SMS): Facebook will text a code to your phone that you must enter, after your password, when you log in to the website or the Facebook app.
Authenticator app: This works similarly to the text option, but instead of a text message you open a third-party app to get the numeric code. We recommend Twilio's Authy or Google Authenticator (iOS, Android).
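If you're curious what an authenticator app actually does, here is a minimal sketch of the time-based one-time password (TOTP) scheme those apps generally implement. The secret string in the example is made up for illustration; a real app stores the secret Facebook generates when you set up the authenticator option.

```python
# Minimal sketch of a time-based one-time password (TOTP, RFC 6238),
# the scheme authenticator apps typically use. Illustration only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Hypothetical secret, for demonstration only.
    print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends only on the shared secret and the current 30-second window, the service can compute the same number on its end and check for a match, with nothing sent over SMS.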
Bored with me recommending two-factor authentication? Too bad. I'll keep going until our whole stupid system of passwords is blown to dust.