Did Facebook and so-called "fake news" give us President-elect Donald Trump? Facebook CEO Mark Zuckerberg doesn't think so, and called any suggestion to the contrary "a pretty crazy idea." However, facing a chorus of criticism following revelations that false pro-Trump stories masquerading as news were shared millions of times across his social network, the tech titan has slowly begun to admit that the widespread dissemination of bullshit may not be good for anything other than his company's bottom line. As such, Facebook today announced a series of technical changes in the hope of keeping the site's content mostly based in reality. Chief among them is the ability for users to mark posts as "a fake news story."

"We’re testing several ways to make it easier to report a hoax if you see one on Facebook, which you can do by clicking the upper right hand corner of a post," Adam Mosseri, News Feed VP, explains on the Facebook blog. "We’ve relied heavily on our community for help on this issue, and this can help us detect more fake news."

Basically, Facebookers will be able to flag links posted by their friends and family as fitting into one of four categories: "annoying or not interesting," something that "shouldn't be on Facebook," "spam," or "a fake news story." Three of those four categories are undefined or subjective, which means things on the social network are about to get a whole lot more complicated.
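For the technically minded, those reporting options amount to a small, fixed taxonomy. Here's a minimal sketch in Python of what it might look like; the `ReportReason` name is purely illustrative, since Facebook hasn't published an API for this feature.

```python
from enum import Enum

# Hypothetical enum -- Facebook hasn't published an API for this feature.
# The four values are the categories described in the announcement.
class ReportReason(Enum):
    ANNOYING_OR_NOT_INTERESTING = "annoying or not interesting"
    SHOULD_NOT_BE_ON_FACEBOOK = "shouldn't be on Facebook"
    SPAM = "spam"
    FAKE_NEWS = "a fake news story"
```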

Take, for example, a widely shared Washington Post story reporting on, you guessed it, fake news. That story itself was criticized by The Intercept as "rife with obviously reckless and unproven allegations, and fundamentally shaped by shoddy, slothful journalistic tactics," and the Post was forced to append a lengthy editor's note to the report. Now, is the original Post story "fake news," or just bad reporting? And, perhaps more to the point, if the editors at the Washington Post have a hard time seeing through bullshit, then how can Facebook expect Uncle Joe and Grandma Betty to be any better?

Here's how the proposed system will work. Let's say a post is repeatedly flagged as "fake news." If enough users do so, and if the post matches other Facebook-defined criteria, the company will hand it off to a third-party fact-checker to determine the veracity of the report. If the fact-checker identifies intentionally misleading information, the post could end up marked as "disputed." The post would still show up on Facebook, but with a disclaimer attached.
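As a rough illustration of that workflow, here's a minimal Python sketch. Every name in it (the flag threshold, the criteria check, the fact-checker call) is a stand-in; Facebook hasn't disclosed how many flags it takes to trigger a review or what the other criteria are.

```python
from collections import Counter

FLAG_THRESHOLD = 100           # illustrative; the real threshold is undisclosed

fake_news_flags = Counter()    # post_id -> number of "fake news" flags
disputed_posts = set()         # posts a fact-checker has marked as misleading

def flag_as_fake_news(post_id: str) -> None:
    """Record one user's flag; escalate once enough flags accumulate."""
    fake_news_flags[post_id] += 1
    if fake_news_flags[post_id] >= FLAG_THRESHOLD and meets_other_criteria(post_id):
        review_with_fact_checker(post_id)

def meets_other_criteria(post_id: str) -> bool:
    # Stand-in for Facebook's undisclosed internal signals.
    return True

def review_with_fact_checker(post_id: str) -> None:
    # Third-party review: a "disputed" post stays visible on Facebook,
    # but carries a disclaimer.
    if fact_checker_finds_misleading(post_id):
        disputed_posts.add(post_id)

def fact_checker_finds_misleading(post_id: str) -> bool:
    return True  # placeholder for the human review step

# Example: 100 users flag the same post, which escalates it to review.
for _ in range(100):
    flag_as_fake_news("post-123")
```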

Facebook is also working to make it harder for propagators of misleading content to profit from their work. "Spammers make money by masquerading as well-known news organizations, and posting hoaxes that get people to visit to their sites, which are often mostly ads," the company explains. "On the buying side we’ve eliminated the ability to spoof domains, which will reduce the prevalence of sites that pretend to be real publications. On the publisher side, we are analyzing publisher sites to detect where policy enforcement actions might be necessary."
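On the spoofed-domains point, the underlying check is conceptually simple: does the domain an ad buyer displays match where the link actually lands? Here's a toy heuristic in Python; it's a sketch of the general technique, not Facebook's actual enforcement logic.

```python
from urllib.parse import urlparse

def looks_spoofed(claimed_domain: str, landing_url: str) -> bool:
    """Toy check: does the displayed domain match the real landing hostname?"""
    host = (urlparse(landing_url).hostname or "").lower()
    claimed = claimed_domain.lower()
    return not (host == claimed or host.endswith("." + claimed))

# "washingtonpost.com.co" was a real-world lookalike of washingtonpost.com:
print(looks_spoofed("washingtonpost.com", "http://washingtonpost.com.co/news"))   # True
print(looks_spoofed("washingtonpost.com", "https://www.washingtonpost.com/news")) # False
```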

Facebook will slowly roll these features out to a limited subset of users and, if the company deems them a success, expand them on a much larger scale. Which is all well and good, and definitely overdue, but whether the new features will be enough to curb the fake-news scourge remains an open question.

Either way, there is one outcome we can all see coming: Partisans marking everything they disagree with, factual or not, as "fake news." 2017 is going to be a fun year.
