On Tuesday, Facebook announced that it's changing its content moderation policy around "white nationalism" and "white separatism," acknowledging that these are no different from white supremacy.
Weirdly, until now, the platform had been protecting the free-speech rights of white nationalists at the expense of its own community standards against hate speech, drawing a murky distinction between speech by those who identify as "white separatists" or "white nationalists" and speech by those who openly incite hate and identify as white supremacists.
Now in a statement titled "Standing Against Hate," the company says it is enacting "a ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram," effective next week.
The decision appears to be tied to the mass shooting that left 50 Muslims dead in New Zealand, and the statement says, "over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups."
But why did this take so long? As Mark Pitcavage of the Anti-Defamation League tells the New York Times, white supremacist and hate groups have been using the term "white nationalist" since as far back as the 1960s because they knew the word "supremacy" was no longer socially acceptable.
As Motherboard writes, the move only further "highlights the malleable nature of Facebook’s policies," and it comes several months after a Motherboard investigation showed the rampant proliferation of neo-Nazi content on the platform, which persisted in the wake of the 2017 events in Charlottesville partly because Facebook's own policies allowed it.
Internal moderator training documents obtained by Motherboard showed that Facebook was trying to spell out distinctions between white supremacist content and content that had the political tinge of "nationalism" or "separatism." In training slides, Facebook said that "white nationalism" was "an extreme right movement and ideology, but it doesn't seem to be always associated with racism (at least not explicitly)." Facebook added, "In fact, some white nationalists carefully avoid the term supremacy because it has negative connotations."
How Facebook plans to enforce the policy remains an open question. It has said that it uses a "content matching" algorithm to identify memes and other material being used to spread pro-ISIS ideology and other banned content. However, the company remains the final arbiter of some extremely dicey and complex questions about what does and doesn't constitute hate or censor-worthy speech.
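Facebook hasn't published the mechanics of that content matching system, but the general industry approach is to fingerprint files that moderators have already banned and compare new uploads against that database. Below is a minimal, hypothetical sketch of the idea in Python; the function names and the banned-hash set are invented for illustration, and production systems rely on perceptual hashes that survive re-encoding and cropping rather than the exact cryptographic hash used here.

```python
import hashlib

# Hypothetical blocklist: fingerprints of files that human moderators have
# already reviewed and banned. In a real system this would be a large,
# continuously updated database, not an in-memory set.
BANNED_CONTENT_HASHES = {
    # "e3b0c44298fc1c14...",  # placeholder entries
}

def fingerprint(content: bytes) -> str:
    """Return a stable fingerprint for an uploaded file.

    SHA-256 only matches byte-identical copies; real matching systems use
    perceptual hashes so that resized or re-encoded copies still match.
    """
    return hashlib.sha256(content).hexdigest()

def matches_banned_content(content: bytes) -> bool:
    """True if this upload is an exact copy of known banned content."""
    return fingerprint(content) in BANNED_CONTENT_HASHES

if __name__ == "__main__":
    upload = b"...image or video bytes..."
    if matches_banned_content(upload):
        print("Flag for removal / route to human review")
    else:
        print("No match against known banned content")
```

The trade-off is that this kind of matching only catches re-uploads of content someone has already flagged; novel posts still land in front of human moderators, which is where the harder judgment calls begin.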
As Madihha Ahussain, a lawyer for the civil rights group Muslim Advocates, tells the New York Times, “We need to know how Facebook will define white nationalist and white separatist content. For example, will it include expressions of anti-Muslim, anti-Black, anti-Jewish, anti-immigrant and anti-LGBTQ sentiment — all underlying foundations of white nationalism?"
In January, Facebook announced that it was creating an "oversight board" that would act as a kind of independent supreme court in arbitrating matters of content moderation. The move came after years of criticism and public dustups like the one in 2016, when Facebook flagged and removed a famous photo from the Vietnam War depicting a naked young girl burned with napalm, on the grounds that the photo could be seen as child pornography. A newspaper in Norway called the company out, and Facebook backed down, saying its moderator had made a judgment error. (Initially the company issued a statement saying, "While we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others.")
The board, composed of 40 experts from around the globe to be selected after a series of workshops and input from think tanks and researchers, will ultimately convene to take on the cases that Facebook's own moderators have found the most difficult to arbitrate. As New York Magazine put it, "It will obviously be useful to have a transparent and accountable process by which high-profile cases can be adjudicated. But the millions of smaller instances of content moderation — whether or not to remove a photo or a status update — which can have meaningful effects on individual users’ lives will remain as opaque and unanswerable as they’ve always been."
Right, so, when it comes to deciding whether your Aunt Susan's post about her favorite white nationalist speaker coming to town constitutes a violation, Facebook's supreme court won't be bothering with that kind of small potatoes. It will still be up to the moderator drones around the globe who are increasingly discussing their PTSD with the media because that job is THE WORST.
Previously: NPR Takes Dive Into Facebook's Content Management Process, Says It's 'Set Up To Fail'