In combating the spread of misinformation, social media platforms typically place most users in the passenger seat. Platforms often use machine learning algorithms or human fact-checkers to flag false or misleading content for users.
“Just because this is the status quo doesn’t mean it’s the right way or the only way,” says Farnaz Jahanbakhsh, a graduate student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
She and her collaborators conducted a study in which they put that power in the hands of social media users instead.
They first surveyed people to learn how they avoid or filter misinformation on social media. Using their findings, the researchers developed a prototype platform that allows users to rate content accuracy, indicate which users they trust for accuracy ratings, and filter posts that appear in their feed based on those ratings.
Through a field study, they found that users were able to effectively rate misleading posts without prior training. Additionally, users appreciated being able to rate posts and view ratings in a structured way. The researchers also saw that participants used the content filters differently — for example, some blocked all misinforming content, while others used the filters to seek out such articles.
This work shows that a decentralized moderation approach can lead to higher reliability of content on social media, says Jahanbakhsh. This approach is also more efficient and scalable than centralized moderation systems and could appeal to users who distrust platforms, she adds.
“Much of the research on misinformation assumes that users cannot decide what is true and what is not, and so we need to help them. We didn’t see that at all. We’ve seen that people actually scrutinize content and try to help each other too. But these efforts are not currently supported by the platforms,” she says.
Jahanbakhsh co-wrote the work with Amy Zhang, an assistant professor at the University of Washington's Allen School of Computer Science and Engineering, and senior author David Karger, professor of computer science at CSAIL. The research will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing.
Combating Misinformation
The spread of misinformation online is a pervasive problem. However, the current methods social media platforms use to flag or remove misinforming content have drawbacks. For example, when platforms use algorithms or fact-checkers to flag posts, that can create tension among users, who interpret those efforts as, among other things, infringing on freedom of expression.
“Sometimes users want misinformation to appear in their feed because they want to know what their friends or family are being exposed to so they know when and how to talk to them about it,” adds Jahanbakhsh.
Users often try to rate and flag misinformation themselves, and they try to support one another by asking friends and experts to help them make sense of what they are reading. But these efforts can backfire because platforms do not support them. A user might leave a comment on a misleading post or react with an angry emoji, but most platforms treat those actions as signs of engagement. On Facebook, for instance, that could mean the misinforming content gets shown to more people, including the user's friends and followers – the exact opposite of what the user wanted.
To overcome these problems and pitfalls, the researchers wanted to create a platform that gives users the ability to provide and view structured accuracy ratings on posts, specify other people they trust to rate posts, and use filters to control the content displayed in their feed. Ultimately, the researchers' goal is to make it easier for users to help each other evaluate misinformation on social media, reducing the workload for everyone.
The researchers began by surveying 192 people, recruited through Facebook and a mailing list, to see whether users would value these features. The survey revealed that users are highly aware of misinformation and try to track and report it, but fear their assessments could be misinterpreted. They are skeptical of platforms' efforts to rate content for them. And while they would like filters that block unreliable content, they would not trust filters operated by a platform.
Using these findings, the researchers built a Facebook-like prototype platform called Trustnet. On Trustnet, users post and share actual, full news articles and can follow each other to see content that others post. But before a user can post any content on Trustnet, they must rate that content as accurate or inaccurate, or ask about its accuracy; that rating is visible to others.
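As a rough illustration of that posting flow, here is a minimal sketch in Python; the Rating, Post, and share names are hypothetical stand-ins rather than Trustnet's actual code, and the logic simply captures the rule that a post cannot be published without an accompanying accuracy rating.

```python
# Illustrative sketch only; these names are hypothetical, not Trustnet's real API.
from dataclasses import dataclass
from enum import Enum


class Rating(Enum):
    """Structured accuracy ratings a poster must attach before sharing."""
    ACCURATE = "accurate"
    INACCURATE = "inaccurate"
    QUESTION = "question"  # the poster asks others about the article's accuracy


@dataclass
class Post:
    author: str
    article_url: str
    rating: Rating  # shown to other users alongside the post


def share(author: str, article_url: str, rating: Rating | None) -> Post:
    """Refuse to publish a post unless the author has rated the content."""
    if rating is None:
        raise ValueError("Rate the article (or ask about its accuracy) before sharing.")
    return Post(author, article_url, rating)
```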
“The reason people share misinformation usually isn’t that they don’t know what is true and what is false. Rather, at the time of sharing, their attention is misdirected to other things. If you ask them to rate the content before sharing it, it helps them be more discerning,” she says.
Users can also select trusted people whose content ratings they will see. They do this privately, in case they follow someone they are connected to socially (perhaps a friend or family member) but whom they would not trust to rate content. The platform also offers filters that let users configure their feed based on how posts have been rated and by whom.
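Building on the hypothetical Post and Rating types above, the sketch below suggests how such a feed filter might combine these pieces under one simple assumption (the real platform's logic may differ): only ratings from people the reader has privately marked as trusted are consulted, and the filter either hides flagged posts or surfaces only those posts.

```python
# Illustrative sketch only; filter modes and names are assumptions, not Trustnet's real API.
def filter_feed(posts, ratings, trusted_raters, mode="hide_flagged"):
    """Filter a feed using only ratings from raters the reader trusts.

    posts          -- iterable of Post objects in the reader's feed
    ratings        -- dict mapping article_url -> list of (rater_name, Rating) pairs
    trusted_raters -- set of rater names the reader has privately chosen to trust
    mode           -- "hide_flagged" drops posts a trusted rater marked inaccurate;
                      "only_flagged" keeps just those posts (some study participants
                      wanted to see what misinformation was circulating)
    """
    kept = []
    for post in posts:
        trusted_ratings = [r for rater, r in ratings.get(post.article_url, [])
                           if rater in trusted_raters]
        flagged = any(r is Rating.INACCURATE for r in trusted_ratings)
        if (mode == "hide_flagged" and flagged) or (mode == "only_flagged" and not flagged):
            continue
        kept.append(post)
    return kept
```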
Testing Trustnet
Once the prototype was complete, the researchers ran a study in which 14 people used the platform for a week. They found that users were able to effectively rate content, often drawing on their own expertise, the content's source, or the logic of an article, even though they received no training. They were also able to use filters to manage their feeds, although they used the filters differently.
“Even with such a small sample, it was interesting to see that not everyone wanted to read their news the same way. Sometimes people wanted misleading posts in their feeds because they saw a benefit in seeing them. This points to the fact that this agency is now missing from social media platforms, and it should be given back to users,” she says.
Users sometimes had trouble rating content when it contained multiple claims, some of which were true and some of which were false, or when a headline and article were disjointed. This shows the need to give users more rating options — perhaps by noting that an article is true but misleading, or that it contains political bias, she says.
Since Trustnet users sometimes had difficulty rating articles whose content did not match the headline, Jahanbakhsh started another research project to create a browser extension that would allow users to change headlines to better match the article’s content.
While these findings show that users can take a more active role in fighting misinformation, Jahanbakhsh warns that giving users this power is not a panacea. For one, this approach could create situations where users only see information from like-minded sources. However, filters and structured ratings could be reconfigured to mitigate this problem, she says.
Besides researching improvements to Trustnet, Jahanbakhsh wants to explore methods that could encourage people to read content ratings from people with different viewpoints, perhaps through gamification. And since social media platforms may be reluctant to make changes, she is also developing techniques that let users post and view content ratings through regular web browsing, rather than on a platform.
This work was supported in part by the National Science Foundation.