Let’s start cleaning the internet

In January, Meta will take its place as executive chair of the cross-industry counterterrorism organization Global Internet Forum to Counter Terrorism (GIFCT). The upcoming appointment has kicked Meta's content moderation policy work into high gear as the company strives to live up to the position.

As one of the founding members of GIFCT, Meta will share data with other companies to help keep the internet free of violent imagery, terrorist content and human trafficking.

(Image: Meta works with other companies to monitor terrorist content online. Photo credit: Freepik)

Meta's Counterterrorism Content Moderation

More recently, Meta's growth has been hampered by inflation and lawsuits, as governments question its content moderation and data policies.

As part of Meta’s commitment to protecting people from harmful content, the company is launching a new free tool to help platforms identify and remove violent content.

Meta's Hasher-Matcher-Actioner (HMA) will be a free, open-source content moderation software tool that "will help platforms identify copies of images or videos and take action en masse," Nick Clegg, Meta's President of Global Affairs, said in a release.

HMA is being adopted by various companies to stop the spread of terrorist content on their platforms. It is especially useful for smaller organizations that lack the resources available to large companies.

It's a valuable tool for companies that don't have the in-house capacity to moderate content at scale. GIFCT member companies will use HMA to monitor their networks and keep their platforms free of harmful and exploitative content.
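The announcement gives few technical details about HMA, but the hash-match-action pattern its name describes is a standard one. Below is a minimal sketch of that flow, assuming exact SHA-256 hashes stand in for the perceptual hashes (such as Meta's PDQ) that a production matcher would use to catch near-duplicates; the banned-hash set and function names here are hypothetical placeholders, not HMA's actual API.

```python
import hashlib
from pathlib import Path

# Hypothetical hash database: in practice, digests would come from a shared
# industry list (e.g., GIFCT's hash-sharing database) rather than being
# hard-coded, and a real matcher would use perceptual hashes.
BANNED_HASHES: set[str] = set()  # placeholder; populate from a shared list

def hash_file(path: Path) -> str:
    """Hasher step: compute the SHA-256 hex digest of the file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def moderate_upload(path: Path) -> str:
    """Matcher + actioner steps: compare against the list, then act."""
    if hash_file(path) in BANNED_HASHES:
        # The "action" could be blocking, takedown, or human-review queueing.
        return "blocked"
    return "allowed"
```

Exact hashing only catches byte-identical copies; the appeal of a tool like HMA for smaller platforms is that the hashing, the matching against shared industry lists, and the bulk actioning are all handled for them.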

Meta is estimated to have spent over $5 billion on safety and security worldwide in 2021, with more than 40,000 people working in this area.


Meta's content moderation push aims to tackle terrorist content as part of its larger plan to protect users from harmful material. The California-headquartered tech giant also uses AI to help moderate and remove harmful content.

The company also announced that its content moderation tools have significantly reduced the visibility of hate speech and that it regularly blocks fake accounts to curb the spread of misinformation.

Matthew Schmidt, associate professor of national security, international affairs and political science at the University of New Haven, told ABC News that most organizing of terrorist attacks or human trafficking takes place on the dark web.

Schmidt acknowledged that open-source software is key to stopping these bad actors from wreaking havoc on society, since it limits their reach. He also noted that most content moderation efforts come from private companies rather than the government.

Content Moderation Policy

On September 13, 2022, California enacted a comprehensive social media transparency law (AB 587), requiring social media companies to file their terms of service with the California Attorney General's Office and to submit semi-annual reports.

The legislation applies to social media companies with more than $100 million in revenue in the previous year. The law does not define whether or how those companies must moderate content.

For now, the law simply requires social media companies to submit their current terms of service and semi-annual content moderation reports to the Attorney General's office.

Content moderation and privacy issues have been hotly debated in recent years. Both federal and state agencies have attempted to implement policies that protect users while curbing hate speech.


Previously, Florida and Texas had passed content moderation laws in hopes of bringing some order to what is shared online. Florida's law limited internet services' ability to moderate content and imposed certain disclosure requirements.

Texas's law, on the other hand, prohibits social media platforms from "censor[ing]" users or content based on the user's viewpoint or geographic location in the state. It does not prevent companies from moderating content involving unlawful statements or specific discriminatory threats of violence.

As nations realize the power of online platforms, social media companies are beginning to feel the pressure of stricter laws designed to ensure they do not indirectly encourage unlawful activity.