Elon Musk’s X says it’s monitoring harmful content as scrutiny of platform increases

Amid growing concerns that X has become less safe under billionaire Elon Musk, the platform formerly known as Twitter is trying to convince advertisers and critics that it still has policies against harassment, hate speech and other objectionable content.

From January to June, X suspended 5.3 million accounts and removed or labeled 10.7 million posts for violating its rules against the posting of child abuse material, harassment and other harmful content, the company said in a 15-page transparency report released Wednesday. X said it received more than 224 million user reports during the first six months of this year.

It’s the first time X has published a formal global transparency report since Musk completed his takeover of Twitter in 2022. The company said last year it was evaluating its approach to transparency reporting, but has continued to publish data on how many accounts and how much content have been removed.

Safety issues have long plagued the social media platform, which has faced criticism from advocacy groups, regulators and others that the company isn't doing enough to moderate harmful content. But those concerns have intensified since Musk took over Twitter and laid off more than 6,000 people at the company.

The release of X’s transparency report also comes as advertisers plan to cut their spending on the platform next year and the company ramps up its battle with regulators. This year, X’s CEO, Linda Yaccarino, told U.S. lawmakers that the company was restructuring its trust and safety teams and building a trust and safety center in Austin, Texas.

Musk, who last year said advertisers who boycotted his platform could “go to hell,” has also softened his tone. At this year’s Cannes Lions International Festival of Creativity, he said that “advertisers have the right to appear next to content that they find compatible with their brands.”

When Musk took over Twitter, several changes he implemented raised alarm bells among security experts. X reinstated previously suspended accounts, including those of white nationalists, stopped enforcing its policies against COVID-19 misinformation, and abruptly disbanded its Trust and Safety Council, an advisory group made up of human rights advocates, child safety organizations and other experts.

X also struggles with criticism that it has become less transparent under Musk’s leadership. The company, once publicly traded, went private after Musk bought it for $44 billion.

The change meant the social media platform no longer made its quarterly user numbers and revenue public. Last year, X began charging for access to its data, making it harder for researchers to conduct studies on the platform.

Concerns about the lack of moderation on X are also threatening its ad business. The World Bank halted paid advertising on the platform in September after the ads appeared under a racist post. About 25% of advertisers expect to reduce their spending on X next year, and only 4% of advertisers believe the platform’s ads provide brand safety, according to a survey by market research firm Kantar.

The platform’s transparency report found that some of the biggest issues users reported on X involved posts that allegedly violated the platform’s rules on harassment, violent content, and hateful conduct.

Musk, a self-described “free speech absolutist,” has said on X that his approach to enforcing the platform’s rules is to limit the reach of potentially offensive posts rather than remove them. Citing free speech concerns, he also sued California last year over a state law that lawmakers said was meant to make social networks more transparent.

X’s transparency report found that nearly 2.8 million accounts were suspended for violating the platform’s rules against child sexual exploitation. This is more than half of the 5.3 million accounts that were removed.

However, the report also found that in some cases X labeled user content instead of deleting or suspending accounts.

X applied 5.4 million labels to content reported for abuse, harassment and hateful conduct, relying heavily on automated technology. About 2.2 million pieces of content were removed for violating those rules.

The platform’s rules state that the site does not allow media depicting hateful imagery, such as the Nazi swastika, in live videos, account bios, profiles, or header images. Other instances must be flagged as sensitive media. This week, X also made changes to its blocking feature: people who have been blocked by a user will now be able to see that user’s posts, but not interact with them.

X also suspended nearly 464 million accounts for violating its rules against platform manipulation and spam. Musk vowed to crack down on spambots on Twitter before taking over the platform. The company’s report also included a metric called the “post violation rate,” which showed that users are unlikely to encounter content that violates the site’s rules.

Meanwhile, X continues to face legal challenges in several countries, including Brazil, where the Supreme Court blocked the site after Musk failed to comply with court orders to suspend certain accounts for posting hate speech. The company this week bowed to legal demands in an attempt to be reinstated. It has also reported content moderation data to regulators in jurisdictions including the European Union and India.

The report included the number of requests X receives from government and law enforcement agencies. The company received 18,737 government requests for user account information, and it released information in about 53% of those cases.

Twitter began publicly reporting the number of government requests it received for user information and content removals in 2012. The company’s first transparency report, which also included data on copyright takedown notices, came after Google began releasing that data in 2010.

After revelations emerged in 2013 that the National Security Agency had access to user data from Apple, Google, Facebook and other tech giants, more and more online platforms began releasing more information about requests they received from government and law enforcement.
