Late Friday, Mark Zuckerberg sought to quell the angst about Facebook losing the election for Hillary Clinton by posting a pledge to try harder, as a company, to combat the proliferation of fake news by various means. Meanwhile, NPR was working on a story about how Facebook's system for censoring hate speech in general, and for arbitrating between users who flag posts they find offensive and users who believe they have a right to post them, is fairly flawed. And indeed, after stress-testing the system and interviewing multiple current and former employees, NPR found that the system relies on snap judgments, sometimes made by people in data centers in Poland or the Philippines, and that mistakes and reversals happen all the time.
How, then, when the company can barely keep up with the volume of flags it receives on user-created posts, photos, and comments alone, will it suddenly become an arbiter of facts in an increasingly lie-filled online media landscape?
Facebook is not as pure a free-speech playground as Twitter has been, and users are expected to abide by a set of Community Standards, thereby creating the need for a complex enforcement machine that can't really be automated.
NPR learns details, through a couple of anonymous employees, about what's called the "Community Operations team," the backbone of which is several thousand subcontractors in offices around the globe who are tasked with reviewing every post that gets flagged. According to insiders, the workers are judged on speed and expected to make decisions about whether to remove posts within about 10 seconds. Clearly this leaves lots of room for error and little time to judge context, despite the fact that Facebook's head of policy Monika Bickert tells NPR in a phone interview, "Context is so important. It's critical when we are looking to determine whether or not something is hate speech, or a credible threat of violence."
Context was important in the since-reversed censoring of the famed Napalm Girl photo from the Vietnam War, which had been flagged and removed in September because of the girl's nudity. This caused an outcry, emanating in particular from the Norwegian newspaper that had its post of the photo removed, and resulted in COO Sheryl Sandberg apologizing a few days later, saying, "These are difficult decisions and we don't always get it right... we intend to do better."
Zuckerberg once again talked about doing better in his Friday post, perhaps by making it easier for users to flag questionably factual content, or by displaying visible warnings next to posts whose content is debatable. But where does the line get drawn between creating a safe "platform" for people to share things and running a media company whose users spend a majority of their time posting links to content they did not write? And just as the truthfulness or bias of a news item can be debated, so can something as simple as a comment on a photo.
Per the report:
The problem is, simple and complex items all go into the same big pile. So, the source says, "you go on autopilot" and don't realize when "you have to use judgment, in a system that doesn't give you the time to make a real judgment."
A classic case is something like "Becky looks pregnant." It could be cyberbullying or a compliment. The subcontractor "pretty much tosses a coin," the source says.
Community Operations is headquartered in Facebook's Dublin office, as described in the cheery video below, though we don't get to see the army of content reviewers who are allegedly removing or signing off on bits of content every 10 seconds.
The UK's Independent was the first publication allowed into this realm, in 2015; the team there reviews issues raised by users across Europe, the Middle East, Africa, and Latin America, and thus needs to be extremely multicultural in its understanding of those regions' languages and cultural nuances.
But when the site's issues become about truth itself and the ethics surrounding journalistic questions, not just about whether users are writing hateful things or bullying one another, it sounds like an entirely different sort of team needs to step in: a newsroom.
Zuckerberg was careful to say on Friday, "We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties." But that's essentially impossible.
As one source tells NPR, "Facebook could afford to make content management regional: have decisions come from the same country in which a post occurs," and that alone would likely improve the process.
So, once again we have a call for Zuck to realize he's running a media company now, and rather than use "an enforcement mechanism that is set up to fail," as NPR says, Facebook has to seriously reimagine how it manages and censors content, not just how it keeps its 1.8 billion users from being mean to each other.
Previously: Zuckerberg Walks Back His Dismissal Of Fake News Problem, Says Facebook Will Take Steps To Combat Bulls**t