Engineers at Facebook reportedly labeled certain types of posts internally as “bad for the world,” and when limiting them led to less engagement on the platform, the baddies got boosted back.
The New York Times dropped two excellent bombs on Facebook today, both very long reads full of scoops on the internal workings of the company. But the sub-headline of its article on Facebook’s post-election reckoning with false and vitriolic posts crystallizes both stories quite beautifully: “Employees and executives are battling over how to reduce misinformation and hate speech without hurting the company’s bottom line.”
Facebook execs seem to think they handled the post-election period pretty well, despite the significant spread of misinformation, an obvious double standard in how they treat Biden and Trump, and Steve Bannon’s threats to behead people. But the Times report digs deep into Facebook’s algorithm tinkering with newsfeed displays of political shitposts, and some of these decisions will likely sound highly unethical to anyone who does not hold vested Facebook stock.
One incident stands out, wherein Facebook engineers actually labeled posts as either “good for the world” or “bad for the world,” and assessed whether to limit any of them. (Most of us who do not work in tech would just say “Limit the ones that are ‘bad for the world’” and be done with it, but this is Facebook.) Employees were apparently concerned that ‘bad for the world’ posts were performing vastly better, and ran experiments limiting them. But squelching those ‘bad for the world’ posts apparently led to users checking Facebook less frequently, an internal metric the company calls “sessions.”
Of this experiment, an internal Facebook memo said, “The results were good except that it led to a decrease in sessions, which motivated us to try a different approach.”
Another experiment, called “correct the record,” would show users a fact-check link when they shared blatantly false news (the company reportedly still does this on some COVID-19 posts). But according to the Times, that exercise “was vetoed by policy executives who feared it would disproportionately show notifications to people who shared false news from right-wing websites.”
A departed Instagram engineer (Facebook owns Instagram) offered an explanation for such profiles in cowardice.
“Facebook salaries are among the highest in tech right now, and when you’re walking home with a giant paycheck every two weeks, you have to tell yourself that it’s for a good cause,” the engineer Gregor Hochmuth told the Times. “Otherwise, your job is truly no different from other industries that wreck the planet and pay their employees exorbitantly to help them forget.”
The other Times article today is less about Menlo Park palace intrigue and more about how Facebook radicalizes older people. A journalist recently received the login credentials of two very soft-spoken and apolitical Baby Boomers, and monitored their newsfeeds as those users would see them. He describes the feeds as “a dizzying mix of mundane middle-class American life and high-octane propaganda,” and “hyperpartisan fearmongering and conspiratorial misinformation.”
Baby Boomers in particular have gotten a bad rap for being suckers for Facebook misinformation. But maybe they’re not suckers; maybe they’re merely shown whackadoodle posts at a higher rate. The alarming thing about all of this is that Facebook knows which posts are ‘bad for the world,’ and consciously chooses an algorithm that allows them to flourish — because doing the right thing would be ‘bad for shareholders.’
Image: @eddybllrd via Unsplash