Twitter Faces Advertiser Boycott Due to Failures to Police Child Abuse Material

Twitter’s no good, very bad year continues, with the company this week being forced to inform some advertisers that their ads had been displayed in the app alongside Tweets soliciting child pornography and other abuse material.

As reported by Reuters:

“Brands ranging from Walt Disney, NBCUniversal and Coca-Cola to a children’s hospital were among the 30 or so advertisers who have appeared on Twitter account profile pages offering links to the exploitative material.”

The discovery was made by cybersecurity group Ghost Data, which worked with Reuters to uncover concerns over ad placement, dealing another major blow to the app’s ongoing business prospects.

Already in a state of confusion amid Elon Musk’s ongoing takeover saga, and following recent revelations from its former security chief that it is lax on data security and other measures, Twitter is now also facing an exodus of advertisers, with big brands like Dyson, Mazda and Ecolab suspending their Twitter campaigns in response.

Which is actually the least worrying element of the discovery, with the Ghost Data report also identifying more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period.

Ghost Data says that Twitter failed to remove more than 70% of these accounts during the period of the study.

The findings raise further questions about Twitter’s inability, or unwillingness, to deal with potentially harmful material, with The Verge reporting late last month that Twitter “cannot accurately detect child sexual exploitation and non-consensual nudity at scale.”

This finding came from an examination of Twitter’s proposed plan to give adult content creators the ability to sell OnlyFans-style paid subscriptions in the app.

Instead of working to address the abundance of pornographic material on the platform, Twitter considered embracing it, which would no doubt increase the risk for advertisers who don’t want their promotions appearing alongside potentially offensive tweets.

Which is likely happening on an even larger scale than this new report suggests, as Twitter’s own internal investigation into its OnlyFans-style proposal found that:

“Twitter couldn’t allow adult creators to sell subscriptions because the company didn’t — and still doesn’t — effectively monitor harmful sexual content on the platform.”

In other words, Twitter couldn’t risk facilitating the monetization of exploitative material in the app, and since it has no effective way of policing such content, it had to scrap the proposal before it gained any momentum.

Given that, these new findings come as no surprise — but again, advertiser backlash is likely to be significant, which could force Twitter to launch a new crackdown one way or another.

For its part, Twitter says that it is investing more resources in child safety, “including hiring for new positions to write policies and implement solutions.”

Which is great, and it’s good that Twitter is now taking action. But these reports, based on examinations of Twitter’s own research, show that Twitter has been aware of this potential problem for some time – not child exploitation specifically, but concerns around adult content that it has no effective way to monitor.

In fact, Twitter itself helps to promote adult content, albeit inadvertently. For example, in the For You section of my Explore tab (i.e. the Explore home page in the app), Twitter consistently recommends that I follow “Facebook” as a topic, based on my Tweets and the people I follow in the app.

Here are the tweets that were highlighted as some of the top tweets for “Facebook” just yesterday:

Twitter Topic Recommendations

It’s not pornographic material as such, but I’m guessing that I’d find some pretty quickly by clicking through to any of these profiles. And again, these tweets are highlighted by Twitter’s own topic algorithms, which surface recent tweets based on engagement with posts that mention the topic term. These completely unrelated, off-topic tweets are then amplified by Twitter itself, to users who have expressed no interest in adult content.
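To see why that failure mode occurs: if a topic timeline is assembled by simply ranking recent tweets that contain the topic term by raw engagement, with no check that the tweet is actually about the topic, then any high-engagement tweet that name-drops the term will surface. Here’s a minimal illustrative sketch of that logic in Python – the `Tweet` class and `topic_timeline` function are hypothetical, not Twitter’s actual ranking code:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    likes: int
    retweets: int

def topic_timeline(tweets: list[Tweet], topic: str, limit: int = 5) -> list[Tweet]:
    """Naive topic feed: any tweet mentioning the term, ranked by raw engagement.

    Note there is no check that a tweet is actually *about* the topic, nor any
    content-appropriateness filter, so off-topic spam that merely mentions
    'Facebook' can dominate the topic's top Tweets if it gets enough engagement.
    """
    matching = [t for t in tweets if topic.lower() in t.text.lower()]
    return sorted(matching, key=lambda t: t.likes + t.retweets, reverse=True)[:limit]
```

Under this kind of purely engagement-driven selection, the only defense against inappropriate recommendations is downstream content moderation, which is exactly where Twitter appears to fall short.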

Based on all the evidence available, it’s clear that Twitter has a porn problem and is doing little to address it.

Adult content distributors view Twitter as the best social network for promotion, because it’s less restrictive than Facebook and offers far broader reach than niche adult sites, while Twitter gains the usage and engagement benefits of hosting material that other social platforms simply wouldn’t allow.

That’s probably why it’s been willing to turn a blind eye to it for so long, to the point that it’s now being highlighted as a much bigger problem.

However, it is important to note that adult content is not in itself problematic, at least not among consenting adult users. It’s Twitter’s approach to child abuse and exploitative content that is the real problem.

And Twitter’s systems are reportedly “woefully inadequate” in this regard.

As reported by The Verge:

“A 2021 report found that the processes Twitter uses to identify and remove child sexual exploitation material are woefully inadequate – largely manual at a time when larger companies are increasingly turning to automated systems capable of catching material not tagged by PhotoDNA. According to the report, Twitter’s primary enforcement software is “an outdated, unsupported tool” called RedPanda. “RedPanda is by far one of the most vulnerable, inefficient and unsupported tools that we offer,” said an engineer cited in the report.”

In fact, additional analysis of Twitter’s CSE detection systems found that of the 1 million reports submitted each month, 84% contained newly discovered material – “none of which would be flagged by Twitter’s systems”.
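That statistic follows directly from how hash-matching detection works: systems like PhotoDNA can only flag an image whose fingerprint already exists in a shared database of previously reported material, so newly created content has, by definition, nothing to match against. Here’s a minimal sketch of that limitation – `KNOWN_HASHES` and `is_known_material` are hypothetical names, and SHA-256 stands in for PhotoDNA’s proprietary perceptual hash (which, unlike a cryptographic hash, tolerates resizing and re-encoding):

```python
import hashlib

# Hypothetical database of fingerprints of known, previously reported material,
# contributed by prior reports and industry hash-sharing programs.
KNOWN_HASHES: set[str] = {
    # "a3f5...",  # entries added as material is reported and catalogued
}

def is_known_material(image_bytes: bytes) -> bool:
    """Flag an image only if its hash matches a previously catalogued one."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# Newly created material has no entry in KNOWN_HASHES by definition, so a
# purely hash-based pipeline will never flag it. That's why the 84% of
# reports containing newly discovered material would slip past such systems,
# and why larger platforms layer on classifiers and human review.
```

This is why a detection strategy built primarily on hash matching, without automated classifiers or adequate manual review capacity, is structurally blind to exactly the category of material that made up the bulk of those monthly reports.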

So while it’s advertisers who are once again putting pressure on the company in this case, it’s clear that Twitter’s problems extend far beyond ad placement.

Hitting Twitter’s bottom line, however, may be the only way to compel the platform to act, though it’ll be interesting to see how willing, and able, Twitter is to enact a broader plan to tackle this amid its ongoing ownership battle.

There is a provision in its acquisition agreement with Elon Musk that states that Twitter must:

“Use its commercially reasonable efforts to maintain the essential components of its current business organization substantially intact.”

In other words, Twitter can’t make any significant changes to its operational structure while the deal remains in its transition phase, which is currently in dispute as it heads toward a court battle with Musk.

Would initiating a significant update to its CSE detection models be considered a material change, significant enough to alter the company’s operating structure as it stood at the time of the original agreement?

Essentially, Twitter probably doesn’t want to make big changes right now. But it may have no choice, especially if more advertisers join this new boycott and press the company to take immediate action.

It’s likely to be a mess either way. However, this is a major problem for Twitter, and one for which it should rightly be held accountable, given its systemic failures in this regard.