Assumptions about censorship in the digital domain do not always hold

SYNOPSIS

Despite their public reputations as libertarian bastions of freedom of expression, major private online communications platforms do not necessarily uphold the principles underlying freedom of information, especially when it comes to the public interest.

COMMENTARY

While voices against the moderation of online content are mainly limited to a minority of so-called “free speech absolutists,” the recent arrest of Telegram chief Pavel Durov has raised a few more public eyebrows. The unease over Durov’s detention by French prosecutors came not only from high-profile anti-censorship advocates, but also from communities that rely on Telegram for vital, unfiltered information. A similar situation exists in Russia, where the messaging app is widely used by both the government and its opponents.

French authorities deny any political motivation, pointing instead to Telegram’s lack of appropriate moderation and its resulting complicity in cybercrime, including child sexual exploitation, drug-related crimes and other illegal content shared on the app. Durov’s arrest evokes familiar clichés such as “suppression of expression by the state,” “invasion of the private sphere by public entities” and debates over the exercise of regulatory control by governments.

However, this framing insinuates that censorship is the exclusive preserve of states and governments. Because they tend to focus on state restrictions on online content, many passionate defenders of free speech overlook one of the biggest potential restrictors of private (and public) discourse: the private sector itself.

Who uses the power to censor?

In the digital world, content moderation and censorship are sometimes distinguished by intent; at other times, the two blur together. The heart of the matter, however, may be relative reach and influence.

The idea of censorship imposed by a higher authority is often associated with powerful state bodies that maintain a “monopoly on control” (echoing the Weberian concept of the state’s monopoly on violence). However, newer theories of structural power in political economy emphasize the immense and still growing role of large corporations and, consequently, the direct competition between private companies and states across multiple domains. This also applies to the communications sector, where the power of large media companies over information flows and the dissemination of narratives can easily rival (or even surpass) that of states. The enormous influence of these companies likely gives them the ability to censor.

The evolution of major online platforms from the late 20th century to the present reflects a similarly transformative influence. On the one hand, these platforms have long relied on standard legal instruments (such as terms of service and privacy policies), which initially governed how users interacted with their content while upholding the principles of “open” Internet access. On the other hand, platform owners can now exert ever greater control over user content, including by suppressing material they deem unwanted.

Who wants the power to censor?

Durov’s arrest sets a precedent: it attaches criminal liability to the owners of online platforms for the ways their platforms are used (and misused). To date, few (if any) owners of major online communications or social media platforms have been held criminally liable for the user-generated activities and content their platforms host.

On the surface, Durov’s anti-establishment ethos, his public commitment to user privacy and encryption, and the app’s popularity among opposition movements in authoritarian states imply that Telegram’s owners are not engaged in censorship, nor do they intend to do so. Telegram’s laissez-faire philosophy on content moderation has even earned it the title of a “go-to app for troublemakers.”

Yet the reactions to Durov’s arrest from his most ardent supporters are revealing and sometimes even contradictory. X owner Elon Musk, who was quick to repeat his characterization of content moderation as a propaganda term for censorship, is himself seen as a willing (and prolific) remover of content on his own platform. Decisions to suppress content on X are often shaped by Musk’s personal views, and there is even evidence that the platform deliberately restricts users’ access to politically opposing sources.

Conservative American political commentator Tucker Carlson also called Durov “a living warning to any platform owner who refuses to censor the truth at the behest of governments.” In a conversation with Carlson last April, Durov emphasized his reluctance to comply with government directives restricting access to certain forms of content, stating that he would not entertain requests deemed to infringe on Telegram’s values of freedom of expression and privacy.

However, Durov’s stance can also be interpreted as a subjective and discriminatory judgment about acceptable content, based on a platform owner’s personal beliefs. Furthermore, even if one assumes a hands-off approach to content regulation by default, Telegram, contrary to its pro-privacy stance, does not automatically enable end-to-end encryption for most user conversations, which strongly suggests that it can monitor private communications far more closely than its reputation implies.

Where else might we identify ‘corporate censorship’?

The above examples illustrate the willingness of major online platforms to scrutinize users’ speech and expression in line with the interests of their owners. However, personal objectives can also be strongly intertwined with commercial priorities. One example is Google’s internal content moderation policy regarding the Israel-Hamas conflict, which is tied to its contract to provide Israel with cloud computing services.

That said, corporate censorship does not necessarily pit commercial or private interests against state or public agendas. In China, where private sector companies have significant ties to the state, the state can use influential private companies as tools to achieve its political objectives. This has recently led to speculation about politically motivated censorship on private Chinese gaming platforms, such as the instructions given to players of the newly released game Black Myth: Wukong to avoid discussing specific political issues and COVID-19-related content.

Final takeaways

Despite regulatory efforts by authorities to rein in the outsized influence of large private companies and their online platforms, some members of the public, especially those who prefer certain apps and platforms for personal or political reasons, may remain suspicious of the intentions of the state.

To improve transparency and increase public trust, governments can complement ongoing regulatory efforts with initiatives that increase public involvement in the regulatory process. For example, the European Union’s Digital Services Act includes mechanisms that grant public-interest researchers access to internal platform data, appoint trusted independent flaggers to detect illegal content, and allow whistleblowers with insider knowledge of platforms to notify authorities of potentially illegal activities.

In addition, existing laws that ostensibly protect the interests of major online platforms may instead be interpreted in ways that limit their outsized influence. A current example is Section 230 of the Communications Decency Act in the United States, which generally exempts major online platforms from liability for user-generated content and activities. Because the law also shields platforms from liability for removing objectionable content, some lawmakers are considering whether Section 230 could instead empower users to remove or edit content on online platforms at their own discretion rather than at the discretion of powerful companies.

Most importantly, the multifaceted and complicated nature of corporate censorship not only marks it as a worrying problem in its own right, but also highlights the need to avoid worn-out assumptions and clichés about the moderation of online content.

About the authors

Sean Tan and Tan E-Reng are Senior Analyst and Research Analyst respectively at the Center of Excellence for National Security (CENS) at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.
