At the end of 2022, it sometimes seems that social media rules the world of internet business. The headlines are bursting with baby blue bird icons as Elon “Megabucks” Musk tries to figure out how to run Twitter.
It’s tempting for old-school techies to archive all social media under “teen angst” — a plethora of platforms often used by teens glued to their phone screens. But do phone zombies matter when it comes to technical problems in the real world?
Yes, because social media is at the forefront of government action to limit perceived social harm. And recently we’ve seen a renewed effort from governments in different regions.
Chief Digital Officers (CDOs) must take this into account. Various government actions indicate a serious level of disapproval regarding information and opinions communicated by these platforms. And subsequent lawsuits can serve as precedents.
The situation is fluid and ongoing. Let’s take a look at current lawsuits, counterclaims, and proposals.
The Southeast Asian city-state of Singapore is known for draconian measures. In practice, however, the Singapore government is often an example of appropriate government restraint in censorship.
A new law, known as the Online Safety (Miscellaneous Amendments) Bill, follows the Protection from Online Falsehoods and Manipulation Act (POFMA) of 2019. It “authorizes IMDA to deal with harmful online content that users in Singapore can access, regardless of where that content is hosted or initiated,” according to a report by Channel News Asia.
“Social media sites must shut down access to harmful content within hours after Parliament passed legislation to strengthen online safety on Wednesday (November 9),” CNA said. “If an online platform refuses to remove harmful content, the Infocomm Media Development Authority (IMDA) may issue an order to internet access service providers to block access for users in Singapore.”
The new regulation follows a global trend in legislation: emphasis on local culture and customs.
Individual European countries and the EU have been levying fines on social media platforms for years. As CDOTrends reported in May this year, Margrethe Vestager, Executive Vice-President of the European Commission (the executive branch of the EU), said at the International Competition Network conference in Berlin: “The DMA (Digital Markets Act) will come into force next spring, and we are preparing for enforcement as soon as the first notifications are received.”
The DMA is still pending, as is another EU regulation, the Digital Services Act. “The DSA will be directly applicable across the EU and will apply 15 months after entry into force or from 1 January 2024, whichever is later,” the European Commission said in a statement.
The DSA is “an EU regulation to modernize the E-Commerce Directive [of 2000] regarding illegal content, transparent advertising and disinformation.” The upcoming regulation is also specifically Eurocentric, according to the European Commission website: “The responsibilities of users, platforms and public authorities will be rebalanced according to European values, with citizens to be put in the spotlight.”
Among other things, the regulation promises:
“For society as a whole:
• Greater democratic control and oversight of systemic platforms
• Reduction of systemic risks such as manipulation or disinformation”
A new definition is also on the agenda: “Very large online platforms [which] pose particular risks in the distribution of illegal content and societal harm.” The statement adds a metric: “Special rules are foreseen for platforms that reach more than 10% of 450 million consumers in Europe.”
It’s not hard to predict which platforms fall into this particular category.
Definitions and Metrics
Unboxing the EU regulation reveals concrete metrics that may or may not exist in other jurisdictions. Among other things:
“• Measures taken to combat illegal goods, services or content on the internet
• New traceability obligations for business users on online marketplaces
• Effective safeguards for users, including the ability to challenge platforms’ content moderation decisions
• Ban on certain types of targeted advertising on online platforms (when they target children or when they use special categories of personal data such as ethnicity, political opinions and sexual orientation)
• Transparency measures for online platforms on a variety of issues, including the algorithms used for recommendations
• Obligations for very large platforms and online search engines to prevent abuse of their systems through risk-based measures and through independent audits of their risk management systems
• Access for researchers to key data from the major platforms and search engines to understand how online risks are evolving”
How and when this list of measures might be enforced is speculative, but even the proposal to ban targeted advertising should attract some attention from “very large platforms”.
Non-US companies can expect anti-tech rhetoric in the US, especially during election season. One such effort was recently promoted by two US politicians representing states in the north and south of the nation.
In an op-ed published by the Washington Post, Senator Marco Rubio (Florida) and Representative Mike Gallagher (Wisconsin) targeted a popular phone application. “The app can track mobile phone users’ locations and collect Internet browsing data,” the politicians wrote, “even when users visit unrelated websites.”
Unsurprisingly, the app in question isn’t based in the US. It is the popular short-form video hosting service from Chinese company ByteDance.
Rubio and Gallagher have an ally in Brendan Carr, a commissioner at the Federal Communications Commission. In June Carr tweeted: “I have urged @Apple & @Google to remove TikTok from their app stores for its pattern of stealthy data practices.”
“The US government should ban TikTok rather than enter into a national security agreement with the social media app that could allow it to continue operating in the United States, according to Carr,” CNN said.
“The Committee on Foreign Investments in the United States, a multi-agency government body tasked with reviewing foreign-owned business deals, has been negotiating with TikTok for months over a proposal to address concerns that Chinese government agencies might seek access to the data TikTok holds on US citizens,” CNN said.
Carr sharpened his comments in a TV interview in mid-November, Yahoo reported. “‘Ultimately, TikTok is China’s digital fentanyl,’ Carr said Friday in [a television] appearance,” the report said. “Again, it’s not the videos, but it pulls everything from search and browsing history, possibly keystroke patterns, biometrics, including faceprints and voiceprints.”
It’s worth noting that the use (and potential abuse) of biometrics cited by FCC Commissioner Carr isn’t exclusive to TikTok.
Key Takeaways
Media and governments have been discussing bans on social media for years. It’s tempting for CDOs to pay little heed.
But it seems that in late 2022, as the post-pandemic world slowly reopens to all businesses, attention must be paid to pending legislation. Watch this space.
Stefan Hammond is a contributing editor at CDOTrends. Best practices, IoT, payment gateways, robotics and the ongoing fight against cyber pirates arouse his interest. You can reach him at [email protected].
Photo credit: iStockphoto/wildpixel