An outside organization has analyzed reams of Facebook data and found that ‘false’ or ‘disputed’ tags were shown to some users but not others, possibly based on geography, and were almost never slapped on Trump posts.
The post-election recriminations continue at Facebook, particularly as their independent Oversight Board continues to mull permanently banning Donald Trump. (And hey, it only took Facebook 69 days after the election and one insurrection at the U.S. Capitol to remove “Stop the Steal” posts from their platform.)
But other independent bodies are also performing their own autopsies of Facebook’s content decisions around the 2020 election. The Verge summarizes the findings of The Markup’s “Facebook Inspector,” a tool that analyzed thousands of different people’s Facebook feeds, and The Markup concluded that “Trump’s False Posts Were Treated with Kid Gloves by Facebook.”
Among their more suspicious findings is that Facebook shows different users different flags on the same false post. Regarding a satirical post claiming the Christmas Day Nashville bombing suspect had COVID-19, The Markup found that for one Facebook user in New York, “Facebook appended a note over the top: The information was false.” But for another user in Texas, the exact same “post only had a ‘related articles’ note appended to the bottom that directed users to a fact-check article, making it much less obvious that the post was untrue.”
During the wild two-month period after Trump lost the election but was still claiming that he’d won, he posted on Facebook that it was “statistically impossible” for Biden to have won the election. The Markup reviewed timelines and found that Facebook tagged that post with the incredibly weak statement that the United States “has laws, procedures and established institutions to ensure the integrity of our elections.”
Ethan Porter, an expert on Facebook disinformation at George Washington University, told The Markup, “My guess would be that Facebook doesn’t fact-check Donald Trump not because of a concern for free speech or democracy, but because of a concern for their bottom line.”
This is a fundamental problem for all social media, but for Facebook in particular, because misinformation and hate speech are so profitable for the platform. It’s not just that Mark Zuckerberg gets played like a fiddle by Trump and right-wing media figures. There’s also a factor unique to Facebook: because outrage drives engagement, Zuckerberg would disappoint investors and shareholders by limiting misinformation.
Cornell University economist Robert Frank called this dynamic out last week in a very technical but excellent op-ed stemwinder in the New York Times. “The algorithms that choose individual-specific content are crafted to maximize the time people spend on a platform,” Frank writes. “As the developers concede, Facebook’s algorithms are addictive by design and exploit negative emotional triggers. Platform addiction drives earnings, and hate speech, lies and conspiracy theories reliably boost addiction.”
Frank argues for greater regulation of Facebook and other platforms, because the way their business models function now is fundamentally bad for both democracy and the economy.
Frank caps his piece by saying, “If the conscious intent were to undermine social and political stability, this business model could hardly be a more effective weapon.”
Related: Biden Press Secretary Lambasts Facebook Over Post-Election Misinformation [SFist]
Image: @solenfeyissa via Unsplash