‘Harmful’ and ‘likely’: one-third of UK children have adult social media accounts

Children seem unfazed by regulation, technology and age-appropriate design codes, simply misstating their age to open online accounts. A study commissioned by the UK’s telecoms regulator, Ofcom, found that 32 per cent of UK children aged eight to 17 lie to create an account with an adult user age, and that 47 per cent of children aged eight to 15 have a user age of 16 or over, reports the BBC.

The news comes as the country’s draft online safety law turns its attention to children, and as the Age-Appropriate Design Code gains prominence.

Instagram is testing Yoti’s face-based age verification for US users trying to change their age to adult, alongside vouching by existing adult users. British children, meanwhile, can simply enter a date of birth of their choosing: 60 per cent of children under 13 who have social media accounts use their own profiles, rather than, say, an older relative’s, a figure that rises to 77 per cent among children aged eight to 17. Two-thirds had help from a parent or guardian, reports TechCrunch.

‘Harmful’

Ofcom’s research comes as Britain’s online safety law, five years in the making, undergoes fresh “tweaks”, with new Prime Minister Liz Truss declaring: “What I want to make sure is that we protect those under 18 from harm, but we also make sure free speech is allowed, so some adjustments may be needed.” The changes follow the addition of new offences and politicians’ attempts to add a new category of ‘legal but harmful’ content.

The minister most recently responsible for the bill has said the ‘legal but harmful’ provision will not apply to adults, as it could affect freedom of expression, reports TechCrunch, meaning the bill will increasingly focus on making the internet safer for children rather than for all users.

‘Likely’

The bill is firmly in the spotlight as it follows the UK Children’s Code (or the Age-Appropriate Design Code), which is being amended and adopted around the world. Its language encompassing all websites “likely to be accessed by children” is proving seismic.

“The fact that the definition of ‘likely’ is vague does not mean that the need for regulation is vague,” Professor Sonia Livingstone of the London School of Economics and Political Science said in the Privacy, Children and Access to Services session at Indian media company MediaNama’s PrivacyNama Summit.

“Children are likely to access Instagram, and Instagram clearly does not collect their data in a fully privacy-compliant manner, nor is it always secure.”

Livingstone noted that it is “still in its infancy” for the code, yet major platforms have already made changes, taking measures to keep children safe that would previously have seemed outlandish: “they took the code as their moment to act.”

“Even a law that sounds very positive means little if it is not implemented,” says the academic and advocate.

“It has shifted attention from services that are aimed directly at children to services that are ‘likely to be accessed by children’, and that language, ‘likely to be accessed’, is very powerful.”

The code and the Online Safety Bill have attracted the attention of civil society in India. Aparajita Bharati of YLAC and The Quantum Hub said India relies too heavily on parental consent. She believes policymakers should start working on a children’s bill now, so that the focus can shift to protecting children once the country’s data protection regulator is in place.

Nivedita Krishna of Pacta reminded participants that most children in India access the internet through their parents’ devices, echoing earlier incarnations of the UK bill: “It is important to first ensure that the internet is safe for everyone to use, and then come to talk about the safety of children. If the internet isn’t safe for everyone, then children certainly aren’t safe.”

Livingstone, a member of the euCONSENT project on online age verification technologies, hopes that trusted, non-commercial intermediaries will carry out age verification rather than the platforms themselves, and that scrutiny will fall on how platforms distribute content to children, not just on the content itself.

‘Invasive’

Meanwhile, in California, concerns are being raised about the word “likely” after Gov. Gavin Newsom enacted Assembly Bill (AB) 2273 (or the California Age-Appropriate Design Code).

Critics there fear the vague language could lead to “invasive” age verification requirements for a large number of websites, reports Reason, the self-described “nation’s leading libertarian magazine”.

California’s law applies to any “business that provides an online service, product, or feature likely to be accessed by children,” and requires businesses to complete a data protection impact assessment gauging whether their products could harm children. They must also estimate the age of users and, for those determined to be underage, apply a list of protections, such as not collecting or sharing their data.

Reason cites organizations struggling to interpret “children’s best interests”.

It notes that age verification or estimation approaches like Instagram’s are invasive: “Ironically, online businesses may soon be enacting invasive age verification requirements to comply with what is ostensibly a digital privacy law.”
