Because the EU requires companies to produce such reports, Elon Musk's X has sucked it up and produced a transparency report — the first since Musk took over the company formerly known as Twitter in 2022. And it shows that the company still does a fair bit of removal of content for violations of policies, "free speech absolutism" be damned.

The company says in an introduction to the report, which is available here, that its policies are "grounded in human rights," and that it has been "taking an extensive and holistic approach towards freedom of expression by investing in developing a broader range of remediations."

While X, under Musk, has been widely criticized for rolling back policies such as those defining hate speech, and for allowing more chaos and harassment to proliferate on the platform, this report seeks to say otherwise, though you'll note the de-emphasis on hate speech.

"Our mission at X is to promote and protect the public conversation," the intro to the report begins. "We believe X users have the right to express their opinions and ideas without fear of censorship. We also believe it is our responsibility to keep users on our platform safe from content that violates our Rules. Violence, harassment, and other similar types of behavior discourage people from expressing themselves, and ultimately diminish the value of global public conversation."

And blatant hate speech and transphobia don't?

As Wired explains, the new report can't really be compared in an apples-to-apples way to the last report, which covered the second half of 2021 and was produced when the company had a more robust Trust & Safety department and content moderation team.

For example, the new report doesn't distinguish between reports on accounts that are in violation of policies and reports on individual posts. In the 2021 data, 11.6 million accounts were the subject of reports, out of which 4.3 million were "actioned" and 1.3 million were suspended. But in the new 2024 report, the company says it received a whopping 224 million reports, covering both accounts and posts, and 5.2 million accounts ended up suspended in the first half of the year.

The most telling figure has to do with hate speech, with Elon Musk having been very vocal, especially in the last year, about his view that hate speech is subjective and too hard to define. Still, the company draws some lines, and it "actioned" 2,361 accounts for posting hateful content in the first half of 2024. This is a stark decline given that hate speech accounted for about half of all reports in 2021, and a quarter of all the accounts "actioned."

X says that many of its enforcement actions now happen algorithmically, through machine learning, but there is not a lot of transparency around how often those decisions are subject to human review.

Theodora Skeadas, who formerly worked on Twitter's public policy team and helped create its Moderation Research Consortium, tells Wired that the transparency report can't really explain "changes in the quality of experience" on the platform, since much content that used to be considered violative no longer is under the loosened rules.

Also, X's active user base has dropped significantly since Musk's takeover, by some estimates around one-fifth. Newer data also suggests sharp declines in active users, especially in the US and UK.

"If you account for changing usage, is it a lower number?" Skeadas asks rhetorically, referring to the number of accounts "actioned."

Also, Skeadas tells Wired, you have to wonder how much human review is actually happening. "They might have enforced a certain amount of content. But if a capacity has changed, the numbers might be understating the severity of impact because of reduced capacity for manual review," Skeadas says. And, she adds, with fewer staff, "automated systems might not be audited as regularly as they should."

This transparency report comes, ironically, just two weeks after the Ninth Circuit Court of Appeals sided with Musk in a lawsuit objecting to a recent content-moderation transparency law passed in California. The three judges on the panel, two of them Republican appointees, agreed with X's attorneys that the California law oversteps in its requirements around moderating hate speech and explaining what a platform's policies are. The ruling said that the law requires a company to "recast its content-moderation practices in language prescribed by the State, implicitly opining on whether and how certain controversial categories of content should be moderated."

The EU's Digital Services Act has a similar requirement around producing transparency reports, also requiring companies to disclose how and when they have complied with government requests, and thus we have X starting to do this again.

Related: Ninth Circuit Sides With Musk In Case About California Social Media Content Moderation Law

Photo: Getty Images