Training AI with social media trends

Key Findings: Any reason to worry about the increase in “Post a picture of you” tweets? In an age of rapid innovation in artificial intelligence (AI), some believe these tweets could be a well-planned strategy to gather training data for AI. We take a look at past social media trends, their links to AI, and why this should or shouldn’t be a cause for concern.

Post a picture of you wearing red. Post a picture of yourself now and one from when you were 18. Is it a coincidence that social media trends like this are on the rise when artificial intelligence is making waves around the world?

Twitter user CoriAgain2 isn’t taking any chances, and I wonder whether her concerns are warranted.

“The upsurge in Twitter promoting tweets asking to see pictures of us along with personal information as AI continues to rise hasn’t escaped my notice. Personally, I will refrain from engaging in these image growing tactics/threads.”

Is Twitter really promoting tweets asking to see our pictures?

A quick search for “quote tweet with picture” will bring up many tweets, from those requesting a picture of you with glasses to one of you aged 18.

Yes, there have been a lot of these tweets lately, enough to push “quote tweet with a picture of” into the trending topics. But is Twitter promoting them because it wants to encourage users to post more pictures of themselves?

I don’t want to read too much into Elon Musk’s Twitter, but promoting tweets with pictures of people in order to collect data about our appearance is most likely not what’s going on.

Tweets with photos generally get more engagement.

After examining four million tweets, Stone Temple Consulting found that tweets with images received more than twice as many likes and retweets as tweets without images.

Buffer’s study also found that tweets with images received 150% more retweets.

Images and videos add context to the text, and people are more likely to pay attention to tweets that include them.

Visual content is often claimed to be processed as much as 60,000 times faster than text, so it makes sense for social media sites to give image-heavy content an extra push.

There is cause for concern

Although the popularity of “quote tweet with a picture of yourself” threads is likely down to people’s fondness for images, the photos you post on social media could still end up as training material for AI.

It’s fine to doubt my claim, so let’s hear it straight from the horse’s mouth.

“Images on social media are often used to train artificial intelligence (AI) systems. Social media platforms like Instagram, Facebook, and Twitter are rich sources of user-generated visual content, and these images can be valuable for training AI models in various applications.” – ChatGPT

In particular, trends that encourage people to post pictures of themselves at different ages could help AI understand how people age.

While Facebook (now Meta) denied it helped the 2019 10-year challenge go viral, experts like NYU professor Amy Webb believe it was a “perfect machine learning storm.”

Although countless pictures are uploaded to Facebook every day, Kate O’Neill, author of “Tech Humanist: How You Can Make Technology Better for Business and Better for Humans,” believes the 10-Year Challenge could hand a facial recognition algorithm “a clean, simple, helpfully labeled set of then and now photos.”

The same applies to the Twitter trend. A tweet like “post a picture of you aged 31” makes the images easier for an algorithm to sort. The pictures come pre-labeled, so a model could learn what someone who was 18 might look like at 31.
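To see why that labeling matters, here is a minimal, purely illustrative sketch of what a “then and now” thread hands an algorithm. Every name and file path below is hypothetical; the point is only that the age labels come for free from the wording of the prompt.

```python
from dataclasses import dataclass


@dataclass
class ThenAndNowPair:
    """One user's reply to a 'post a picture of yourself now and at 18' prompt."""
    user: str
    photo_then: str  # hypothetical file name for the "me at 18" picture
    photo_now: str   # hypothetical file name for the "me now" picture
    age_then: int
    age_now: int


# Hypothetical replies to such a thread: the labels are implied by the prompt itself
dataset = [
    ThenAndNowPair("user_001", "then_001.jpg", "now_001.jpg", 18, 31),
    ThenAndNowPair("user_002", "then_002.jpg", "now_002.jpg", 18, 31),
]

# An ageing model gets clean supervision without anyone annotating a thing
for pair in dataset:
    print(f"{pair.user}: {pair.age_now - pair.age_then} years between photos")
```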

It’s not just a harmless trend

Meta’s defense in 2019 read, “This is a user-generated meme that went viral on its own. Facebook didn’t start that trend.” But O’Neill thinks that while the Facebook meme may have gone viral on its own, there are social media games and memes designed to extract data.

While there is plenty of speculation about whether the 10-year challenge was used to train AI, at least one social media trend has demonstrably been used to teach machines how depth works.

The Mannequin Challenge went viral in November 2016. According to CNN, it was started by students at Edward H. White High School in Jacksonville, Florida.

In just a few days, the #MannequinChallenge was used 60,000 times by people from different parts of the world.

The challenge required a group of people to strike various interesting poses and then hold completely still until the end of the video, usually while someone moved through the scene filming them.

Interestingly, the more interesting the poses people struck in a video, the better it tended to perform.

However, a group of researchers at Google realized that the Mannequin Challenge was a good way to teach machines to estimate depth in all sorts of scenes.

Here’s a no-fuss way to think about the concept.

A monocular camera is a single-lens camera, the kind typically used in self-driving cars, and it relies on software to estimate how far away or how close each object in its view is. Those estimates become confused, however, when people are moving in the shot.

In 2019, the researchers developed a method for predicting depth in scenes where both the camera and the people move freely. Earlier approaches struggled with moving subjects, and their results were rough guesses at best.

Using thousands of Mannequin Challenge videos from the internet, the researchers built training data with a technique called multi-view stereo reconstruction: because the people in each clip stay frozen while the camera moves, the many viewpoints can be combined to recover the scene’s depth accurately.

When they tested it on real videos of people doing different things, it worked better than other methods and was able to create cool 3D effects.
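To make the idea of per-frame depth estimation concrete, here is a minimal sketch using the publicly available MiDaS model loaded through torch.hub. To be clear, this is not the Google team’s Mannequin Challenge model, just an off-the-shelf monocular depth estimator, and “frame.jpg” is a placeholder for whatever video frame you want to try it on.

```python
import cv2
import torch

# Load a small, publicly available monocular depth model and its input transforms
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform

# "frame.jpg" is a placeholder for any single video frame
img = cv2.imread("frame.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
input_batch = transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    # Resize the prediction back to the original frame size
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

# A relative depth map: the model's guess at which parts of the frame are near or far
depth_map = prediction.cpu().numpy()
```

The output is only a relative depth map for one frame, but it illustrates the kind of per-pixel “how far away is this?” prediction the researchers trained their model to make for scenes full of moving people.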

Is it bad that your image could be used to train AI?

A picture of your face being used to train AI isn’t necessarily a bad thing. In fact, O’Neill thinks it can be a good thing.

It can improve facial recognition technology’s ability to predict how faces age. So if a person has been missing for years, AI could estimate how they have aged and give searchers a better idea of what to look for.

There may be some downsides, but that’s a matter of perspective.

For example, advertisers using cameras or sensors may target you based on your age. A face can reveal a lot about you, and ads could be tailored to those facial features.

Privacy is also an issue that cannot be overlooked. Identity theft is a possibility in a world where data breaches are becoming more common. Now that there are AI models trained to sound like you, there may well be models trained to look like you too.

AI will continue to advance, but the regulations that come with it will determine whether we should be concerned. The question, however, is whether regulators have our best interests at heart.