Instagram makes all teen accounts private amid highly critical scrutiny over child safety

Instagram on Tuesday announced a series of changes that will make the accounts of millions of teens private, tighten parental controls and make posting restrictions the default, in an effort to protect children from harm.

Meta said users under 16 will now need parental approval to change the restricted default settings, known as “Teen Accounts,” which filter out offensive words and limit who can contact them.

Why now

“It addresses the same three concerns we hear from parents about unwanted contact, inappropriate content and wasted time,” Naomi Gleit, Meta’s chief product officer, said in an interview with NPR.

With all teen accounts now private, teens will only be able to receive messages or be tagged by people they follow. Content from accounts they don’t follow will be placed in the most restrictive setting, and the app will deliver periodic screen time reminders under a revamped “take a break” feature.

Instagram, used by more than 2 billion people worldwide, has come under increasing scrutiny for its failure to address a wide range of harms, including its role in fueling the youth mental health crisis and promoting the sexualization of children.

States have sued Meta over Instagram’s “dopamine-manipulating” features, which authorities say have addicted an entire generation to the app.

In January, Meta CEO Mark Zuckerberg stood up during a congressional hearing and apologized to the parents of children who had died from social media-related harms, including some who died by suicide after online harassment, a dramatic moment that highlighted the mounting pressure on the CEO over concerns about child safety.

The new features announced on Tuesday follow other child safety measures Meta has recently released, including a January announcement that content related to self-harm, eating disorders and nudity would be blocked for teen users.

Changes come as federal bill stalls

Meta’s move comes as Congress hesitates over passing the Kids Online Safety Act (KOSA), a bill that would require social media companies to do more to prevent bullying, sexual exploitation and the spread of harmful content about eating disorders and substance abuse.

The measure passed the Senate but stalled in the House of Representatives over concerns that the regulation would undermine young people’s freedom of speech. Child safety advocates, however, support the bill.

If passed, KOSA would be the first new legislation from Congress to protect children online since the 1990s. Meta has opposed parts of the bill.

According to Jason Kelley of the Electronic Frontier Foundation, the new Instagram rules appear intended to prevent the introduction of additional regulations as bipartisan support grows for holding Big Tech accountable.

“This change says, ‘We’re already doing a lot of the things that KOSA would require,’” Kelley said. “Often a company like Meta is doing the requirements of the legislation itself, so they wouldn’t be legally required to do that.”

Includes new systems to detect teens who lie about their age

Meta requires users to be at least 13 years old to create an account. However, social media researchers have long noted that young people may lie about their age to get on the platform and maintain multiple fake accounts, known as “finstas,” to avoid detection by their parents.

Meta officials say they have developed new artificial intelligence systems to detect teenagers who lie about their age.

This builds on Meta’s collaboration with the British company Yoti, whose technology analyzes photos of a person’s face to estimate their age. Meta has been working with the company since 2022.

Since then, Meta has required teens to prove their age by submitting a video selfie or some form of identification. Now, Meta says, if a young person tries to sign up for a new account with an adult birthdate, they will be placed in protected teen settings.

In January, The Washington Post reported that Meta’s own internal research showed few parents use parental controls. Fewer than 10 percent of teens on Instagram use the feature.

Child safety advocates have long criticized parental controls, which are also available on other platforms like TikTok, Snapchat and Google, because they place responsibility for keeping kids safe on parents rather than on the companies.

While Instagram’s parental controls still require both teen and parent to give permission, the new policies add a feature that lets parents see who their teens have recently messaged (but not the content of the messages) and what topics they discuss on the app.

Attempt to balance parental control with teens’ freedom of expression

Meta hopes to prevent a worrying scenario: someone who isn’t a parent finding a way to monitor a teen’s account.

“If we determine that a parent or guardian is ineligible, he or she will be excluded from supervision,” Meta wrote in a white paper on the new child safety measures released Tuesday.

But abuse is still possible among legitimate parents, said Kelley of the Electronic Frontier Foundation. He said that if parents are abusive or try to prevent their children from seeking information about their political beliefs, religion or sexual identity, more options for snooping could cause problems.

“I think it can certainly lead to a lot of problems, especially for young people in abusive homes who may be requiring them to have these parental control accounts, and young people who are investigating their identity,” Kelley said. “In already problematic situations, it can increase the risk for young people.”

Meta points out that parents will be limited to viewing about three dozen topics that their teens are interested in, including things like outdoor activities, animals and music. Meta says that topic viewing is less about parents keeping tabs on kids and more about learning about a child’s curiosity.

Still, some of Instagram’s new features for teens will focus on filtering sensitive content from the app’s Explore page and on Reels, the app’s short-form video service.

Teens have long figured out ways to avoid detection by algorithms. Kelley points out that many use what’s known as “algospeak,” or ways to evade automated takedown systems, such as writing “unalive” to refer to a death, or “corn” to talk about pornography.

“Kids are smart and algospeak will continue to evolve,” Kelley said. “It will be a never-ending game of cat and mouse.”

Copyright 2024 NPR
