Instagram is adding more kindness nudges as part of its plan to combat harassment

It’s no secret that Instagram has major problems with harassment and bullying on its platform. A recent example: a report that Instagram failed to act on 90 percent of more than 8,700 abusive direct messages sent to several high-profile women, including actress Amber Heard.

To try to make its app a more hospitable place, Instagram is rolling out features that remind people to be respectful in two scenarios: when you first message a creator (Instagram defines a creator as someone with more than 10,000 followers or a user who sets up a “creator” account) and when you reply to an offensive comment thread. In both cases, Instagram will display a message at the bottom of your screen asking you to be respectful.

These gentle reminders are part of a broader strategy called “nudging,” which aims to positively influence people’s behavior online by encouraging — rather than forcing — them to change their actions. It’s an idea rooted in behavioral science theory that Instagram and other social media companies have embraced in recent years.

While nudging alone won’t solve Instagram’s problems with harassment and bullying, Instagram’s research has shown that this type of subtle intervention can curb some social media users’ cruelest instincts. Last year, Instagram’s parent company, Meta, said that after it began warning users before they posted a potentially offensive comment, about 50 percent of people edited or deleted that comment. Instagram told Recode that similar alerts have proven effective for private messages as well. For example, in an internal study of 70,000 users whose results were shared with Recode for the first time, 30 percent of users sent fewer messages to creators with large followings after seeing the kindness reminder.


Screenshot showing Instagram’s new “kindness reminder” nudge, which urges people to be respectful when they first message creators, who face disproportionate harassment on social media. The reminder appears at the bottom of the screen.

Nudging has shown enough promise that other social media apps with their own bullying and harassment problems — like Twitter, YouTube, and TikTok — are also using the tactic to encourage more positive social interactions.

“The reason we’re so committed to this investment is because we see through the data and user feedback that these interventions actually work,” said Francesco Fogu, a product designer on Instagram’s Well-Being team, which focuses on ensuring that the time people spend on the app is supportive and meaningful.

Instagram first introduced nudges in 2019 to influence how people comment. The reminders urged users to reconsider posting comments that fall into a gray area — ones that don’t outright violate Instagram’s guidelines on harmful language, which would get them automatically removed, but that come close to that line. (Instagram uses machine learning models to flag potentially objectionable content.)

The initial offensive comment warnings were subtle in wording and design, simply asking users, “Are you sure you want to post this?” Over time, Fogu says, Instagram has made the nudges more overt: people now have to click a button to override the warning and post their potentially offensive comment, and the nudges state more clearly when a comment may violate Instagram’s Community Guidelines. As the warnings became more direct, Instagram said, 50 percent of people edited or deleted their comments.

The effects of nudging can also be long-lasting, says Instagram. The company told Recode that it conducted research on so-called “repeated hurtful commenters” — people who leave multiple offensive comments within a given time window — and found that nudging had a positive long-term effect, reducing both the number and the proportion of hurtful comments these people made over time.

Starting Thursday, Instagram’s new nudging feature will apply this warning not only to people who post an offensive comment, but also to users who are thinking of replying to one. The idea is to get people to reconsider whether they want to “dive into a thread that’s spiraling out of control,” said Liz Arcamona, Instagram’s global head of product policy. That holds even if their individual reply doesn’t contain problematic language — which makes sense considering many replies to mean comment threads are simply thumbs-up or tears-of-joy emojis or “haha.” For now, the feature will roll out over the next few weeks to Instagram users whose language is set to English, Portuguese, Spanish, French, Chinese, or Arabic.

One of the overarching theories behind Instagram’s nudging features is the “online disinhibition effect,” the idea that people feel less social restraint when interacting with others online than they do in real life — which can make it easier for them to express negative feelings in an unfiltered way.

The goal of many of Instagram’s nudging features is to curb this online disinhibition and remind people in non-judgmental language that their words have a real impact on others.

“When you’re in an offline interaction, you see people’s responses, you sort of read the room. You feel their emotions. I think you often lose a lot of that in the online context,” Arcamona said. “And so we’re trying to bring that offline experience into the online experience so that people pause and say, ‘Wait a minute, there’s a human on the other side of this interaction, and I should think about it.'”

That’s another reason Instagram is aiming its nudges at creators: people can forget that real human emotions are at stake when they message someone they don’t know personally.

About 95 percent of social media creators recently surveyed by the Association for Computing Machinery have faced hate or harassment during their careers. The problem can be particularly acute for creators who are women or people of color. Public figures on social media, from Bachelorette stars and contestants to international soccer players, have made headlines as targets of racist and sexist abuse on Instagram, in many cases in the form of unwanted comments and DMs. Instagram said it’s limiting its kindness reminders to people messaging creator accounts for now, but may expand them to more users in the future.

Of course, aside from creators, another group of people particularly vulnerable to negative interactions on social media is teenagers. Facebook whistleblower Frances Haugen revealed internal documents in October 2021 showing that Instagram’s own research indicated a significant percentage of teens felt worse about their body image and mental health after using the app. The company then faced intense scrutiny over whether it was doing enough to protect younger users from unhealthy content. A few months after Haugen’s leak, in December 2021, Instagram announced it would nudge teens away from types of content they’ve been scrolling through continuously for too long, such as posts related to body image. The feature launched this June. Instagram said a week-long internal study found that one in five teenagers switched to a different topic after seeing the nudge.

Screenshot showing Instagram’s new comment warning, which appears at the bottom of the screen when people try to reply to an offensive comment thread.

While nudging seems to encourage a majority of social media users to adopt healthier behaviors, not everyone wants Instagram to remind them to be nice or to stop scrolling. Many users feel censored by major social media platforms, which could make some resistant to these features. And some studies have shown that too many nudges to stop staring at your screen can prompt users to abandon an app or to ignore the messages altogether.

But Instagram said users can still post something if they don’t agree with a nudge.

“What I find offensive might be a joke to you. So it’s very important for us not to call you out,” Fogu said. “At the end of the day, you’re in the driver’s seat.”

Several outside social media experts Recode spoke to saw Instagram’s new features as a step in the right direction, although they pointed to some areas for further improvement.

“That kind of thinking really excites me,” said Evelyn Douek, a Stanford law professor who studies social media content moderation. For too long, the only way social media apps dealt with objectionable content was to remove it after it had already been posted, a whack-a-mole approach that left no room for nuance. But in recent years, Douek said, “platforms are getting more creative” about building a healthier environment for speech.

For the public to really gauge how well nudging works, Douek says, social media apps like Instagram should publish more research or, better yet, let independent researchers verify their effectiveness. It would also help if Instagram shared examples of interventions it has experimented with that were less effective, “so it’s not always positive or enthusiastic reviews of their own work,” Douek said.

Another data point that might help put these new features into perspective: How many people experience unwanted social interactions in the first place? Instagram declined to tell Recode what percentage of creators receive unwanted DMs overall, for example. So while we may know how much nudging can reduce unwanted direct messages to creators, we don’t have a full picture of the magnitude of the underlying problem.

Given the massive size of Instagram’s estimated 1.4 billion-plus user base, it’s inevitable that nudges, no matter how effective, won’t spare everyone from harassment or bullying on the app. There is also debate over the extent to which the underlying design of social media, optimized for engagement, encourages inflammatory conversations in the first place. For now, though, subtle reminders are perhaps some of the most useful tools available for the seemingly unsolvable problem of how to stop people from misbehaving online.

“I don’t think there’s a single solution, but I think nudging looks really promising,” Arcamona said. “We’re optimistic it can be a really important piece of the puzzle.”