The horrifying 17-minute Facebook Live video made by the accused New Zealand shooter proliferated on social media in the first 24 hours, and Facebook says it removed 1.5 million copies in that first day alone — 1.2 million of which were caught on upload.
The company tweeted on Saturday about its apparent success in combating the spread of the video made by Brenton Harrison Tarrant, noting that it also had removed versions of the video in which the most graphic violence had been edited out.
But Facebook continued to face criticism from lawmakers and Wall Street analysts on Monday, as Tarrant's 17-minute live stream is just the latest high-profile example of a problem Facebook might have prevented but didn't.
As the New York Times' Kevin Roose wrote on Friday, this was the world's first "internet-native mass shooting... produced entirely within the irony-soaked discourse of modern extremism." The attack seems to have been conceived and designed with maximum virality in mind, down to Tarrant's bizarrely flippant reference to YouTube star PewDiePie just minutes before the murders began.
Speaking of YouTube, the company's chief product officer, Neal Mohan, gave an interview to the Washington Post over the weekend discussing his "war room" and the team's efforts early Friday to stop the rapid spread of Tarrant's video.
Per the WaPo:
The team worked through the night, trying to identify and remove tens of thousands of videos — many repackaged or recut versions of the original footage that showed the horrific murders. As soon as the group took down one, another would appear, as quickly as one per second in the hours after the shooting...
As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company’s detection systems.
Experts suggest that the AI capabilities touted by these companies are actually not as powerful or effective as they'd like us to believe, and the New Zealand incident is just the latest proof of that.
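Those "detection systems" generally work by matching uploads against fingerprints of known footage, and the failure mode is easy to illustrate. Below is a minimal, hypothetical Python sketch, not Facebook's or YouTube's actual pipeline: it contrasts an exact cryptographic hash, which stops matching as soon as a clip is re-encoded, with a toy perceptual hash that still matches a lightly altered copy. The frame data and the tiny aHash-style function are invented purely for illustration.

```python
import hashlib

def exact_hash(frame_bytes: bytes) -> str:
    # Cryptographic hash: changing a single byte yields a completely different digest.
    return hashlib.sha256(frame_bytes).hexdigest()

def average_hash(pixels):
    # Toy "aHash"-style perceptual hash: each pixel maps to 1 if it is brighter
    # than the frame's mean brightness, else 0.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    # Number of differing bits between two perceptual hashes; small means "looks the same".
    return bin(a ^ b).count("1")

# An 8x8 grayscale "frame" and a lightly re-encoded copy (a few pixel values nudged,
# as compression or a recut would do). Both are invented example data.
original = [[(x * 8 + y * 3) % 256 for x in range(8)] for y in range(8)]
reencoded = [row[:] for row in original]
reencoded[0][0] += 2
reencoded[4][5] -= 1

orig_bytes = bytes(p for row in original for p in row)
reenc_bytes = bytes(p for row in reencoded for p in row)

print(exact_hash(orig_bytes) == exact_hash(reenc_bytes))          # False: exact match fails
print(hamming(average_hash(original), average_hash(reencoded)))   # small: perceptual match holds
```

Even perceptual matching can be defeated by heavier edits such as cropping, mirroring, overlaying text, or re-filming a screen, which helps explain why the recut and repackaged versions described above kept slipping through.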
As Recode wrote on Friday, and as others have repeated since, the very nature of platforms like Facebook, Twitter, and YouTube, which encourage users to publish content without prior review, means that there will always be impossible-to-stop viral incidents like this one.
"Even if the companies hired tens of thousands more moderators, the decisions these humans make are prone to subjectivity error," writes the Washington Post, "and AI will never be able to make the subtle judgment calls needed in many cases."
"Unless users stop using YouTube, they have no real incentive to make big changes," says former YouTube engineer turned watchdog group founder Guillaume Chaslot, speaking to the Post. "It’s still whack-a-mole fixes, and the problems come back every time."
Related: Facebook, YouTube, and Twitter Scramble to Remove New Zealand Terrorist Video [SFist]