Facebook held a conference call Tuesday to discuss which posts they most often remove and why, a call inconveniently timed because copies of the weekend’s Buffalo mass shooting video were still on the platform.

One of the many depressing aspects of Saturday’s racist mass shooting in Buffalo was how the grisly video proliferated on social networks. According to CNN, the shooter livestreamed it on Twitch, and to that streaming platform’s great credit, the stream was cut off within two minutes. The Washington Post reports that only 22 people saw it.

But eventually Facebook enters the picture. Clearly some (if not all) of those 22 viewers were horrible white supremacist trolls, because according to the New York Times, the video “was posted on a site called Streamable and viewed more than three million times before it was removed. And a link to that video was shared hundreds of times across Facebook and Twitter hours after the shooting.”

As of Tuesday, a few copies of the video were still floating around on Facebook, according to that Washington Post report. And this is the unfortunate backdrop against which Facebook released a quarterly report on which posts they remove and why, as The Verge explains.

The report was accompanied by a conference call; Facebook’s parent company Meta now holds these calls and publishes these reports quarterly, not long after the company’s earnings calls. The call was scheduled well before the shooting took place, but obviously, Meta had some explaining to do.

“People create new versions and new external links to try to evade our policies,” vice president of integrity Guy Rosen said, according to AdWeek. “We will continue to learn, refine our processes and refine our systems to ensure that we can take down these links more quickly in the future. It’s only a couple of days after the incident, so we don’t have any more to share at this point.”

Meta also released the Facebook quarterly community standards enforcement report, which The Verge describes as “a document that has a boring name, but is full of delight for those of us who are nosy and enjoy reading about the failures of artificial-intelligence systems.”

And yes, human moderators are much better at recognizing genuinely problematic posts than the bots. Facebook counts up the posts they admit were “wrongfully removed,” and the automated systems make those wrongful removals more frequently than human moderators do. No surprise there.

What is a surprise, at least in the context of the current Big Tech censorship discourse, is that very little political speech is removed. The Verge sifted through the removed-post numbers and concluded “Very little of it is ‘political,’ at least in the sense of commentary about current events. Instead, it’s posts related to drugs, guns, self-harm, sex and nudity, spam and fake accounts, and bullying and harassment.”

These are Facebook’s own numbers, not independently verified, so take that into account. But a few figures stand out: Facebook removed 1.6 billion fake accounts, and 2.5 million posts under the “Terrorism and Organized Hate” category.

The current conservative horseshit grievances about Facebook censorship try to frame this as an attack on free speech, carried out by a company where Left Coast Liberals are supposedly in charge. This is a huge part of Elon Musk’s Twitter takeover discourse (to whatever degree said takeover is actually happening). And while I hate to give Facebook the benefit of the doubt, it’s pretty clear the censorship claims are driven by bad-faith attempts to blur the line between political speech and actual violence. But since those bad-faith efforts have proven an excellent political talking point, no amount of transparency from Facebook is likely to change this.

Related: Facebook Relaxes (and Then Reverses) Its Rules Over Calling for Leaders to Be Killed, Because of Putin [SFist]

Image: Solen Feyissa via Unsplash