Making social media safer requires meaningful transparency

In 2021, Frances Haugen – the former Facebook employee who testified before the US Senate – shared internal documents showing that Instagram knew it was negatively impacting millions of teenagers. Two years later, Wall Street Journal reporters Jeff Horwitz and Katherine Blunt and researchers from Stanford University and the University of Massachusetts Amherst reported on vast networks of people using Instagram to share child sexual abuse material (CSAM). More than a year later, Instagram launched “Teen Accounts,” which aims to give parents more insight into their teens’ use of the platform and to limit adults’ ability to contact teens. Still, many experts are skeptical about how much this launch accomplishes. For example, Zvika Krieger, former director of Meta’s Responsible Innovation team, said, “I don’t want to say it’s worthless or cosmetic, but I do think it doesn’t solve all the problems.”

Unfortunately, it is virtually impossible for outside researchers to assess the effectiveness of Teen Accounts or of any other launch from Meta and other social media companies. Social media platforms currently make it easier than traditional media for pedophiles to share child sexual abuse material, for extremists to incite violence, and for governments to carry out genocide. At the same time, the way these platforms operate makes it difficult to assess whether they are making progress in limiting these harms.

But it doesn’t have to be this way. It is possible to change the way social media companies make decisions. The first step toward a safer social internet is to require meaningful transparency to incentivize companies to design safer products and create accountability when they fall short.

Social media companies have integrity teams and trust and safety experts working to make their platforms safer. From inside the companies, they protect users and societies against foreign interference, scams, and illegal content. They see the causes of online harm and the impact that corporate decisions have on the safety of platforms. And they understand how these companies make decisions.

The Integrity Institute, a professional community and think tank composed of more than 400 trust and safety experts, found that 77% of these experts view transparency about the extent and causes of harm as the most critical public policy step toward a safer social internet. This may seem surprising: why don’t experts want policymakers to simply mandate safer design practices?

The reason experts favor transparency over mandated design practices is that the safest design choices depend on the nature of the platform and the context in which harms manifest. Sometimes a chronologically ranked feed is safer than an algorithmically ranked one. And sometimes it is not. What we need is for companies to be incentivized to make safer, more responsible design choices and to deploy harm reduction efforts in the right way, empowering the people who work on platforms to find the right solutions.

Some companies proactively publish their own “transparency centers.” This is progress, but no company currently shares enough data to verify that its platforms are safe or to monitor harmful and illegal activity. The limited data that social media companies do share helps them make three claims that downplay the risks on their platforms.

One claim is that the prevalence of harmful content is minuscule. Instagram claims that the prevalence of suicide and self-harm content is below 0.05% – a number meant to signal that its content moderation is effective. However, a typical Instagram user can view thousands of pieces of content every month, and the platform has hundreds of millions of users. A very small prevalence can still translate into hundreds of millions of exposures to harmful content.
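A rough back-of-envelope calculation shows why. The per-user view count and user base below are illustrative assumptions, not figures disclosed by the company:

0.05% prevalence × 2,000 pieces of content viewed per user per month ≈ 1 harmful piece seen per user per month
1 exposure per user per month × 500 million monthly users ≈ 500 million exposures per month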

Meaningful transparency requires companies to disclose total exposure to violating content, not just its prevalence. Societies have a right to know the full extent of the risks these platforms create.

Another claim is that the companies remove massive amounts of harmful content. For example, Meta reports that it removed 49.2 million pieces of child sexual exploitation content from Facebook in 2023. Companies seem to want people to conflate removing large amounts of content with an effective harm reduction strategy. However, an effective harm reduction strategy ensures that few people are exposed to harmful content, regardless of how much content needs to be removed. It is possible that those 49.2 million Facebook posts were seen by a negligible number of people, or that they were viewed by all 2 billion users. The truth lies somewhere in between, but an uncertainty range spanning 2 billion people is unacceptably wide.

Meaningful transparency requires companies to disclose how many people were exposed to harmful content on the platform and how many people were exposed to high levels of harmful content.

The last claim highlights the large amount of money these companies say they spend on protecting users. TikTok recently announced a $2 billion investment in trust and safety in 2024. U.S. Senator Lindsey Graham (R-SC) quipped that such numbers are meaningless without context, saying, “$2 billion sounds like a lot, unless you consider $100 billion.”

Spending figures also say nothing about why users are exposed to harmful content in the first place. If most exposure comes from the platform’s own algorithmic recommendations, it matters little that the company is investing billions in mitigation efforts. This is like an arsonist bragging about how much money he spent repairing the buildings he set on fire.

Until we have meaningful transparency, independent research is our best tool. For example, the Neely Social Media Index surveys adults, using standardized questions about their experiences on different platforms, to reveal whether harmful experiences on specific platforms are decreasing (or increasing). These independent efforts are important but insufficient, because they cannot see how harmful experiences arise inside the platforms and because they lack crucial platform data.

Meaningful transparency connects the dots.
