In a seemingly cynical and tactical move ahead of another congressional hearing, Facebook announced today that it is officially banning intentionally manipulated video aimed at misinforming the public, a.k.a. deepfakes.

"While these videos are still rare on the Internet, they present a significant challenge for our industry and society as their use increases," writes the company's  vice president for global policy management, Monika Bickert, in a blog post. And she goes on to explain how the company will and will not police altered video, leaving open plenty of arguments and exceptions that make the policy likely moot in all but the most egregious cases.

The company says it will ban videos in which AI has been used to stitch together speech and create entirely fictional footage of public figures, and it will ban video that "has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say."

As the Washington Post notes, a recently circulated altered video of House Speaker Nancy Pelosi, slowed down to make it seem as though she were drunk, likely wouldn't fall under either of these categories and would therefore be left up. The company does say such videos will still be eligible for third-party fact-checking, and if deemed false or misleading they could be downranked by the site's algorithm.

Under Facebook's former policy, the Pelosi video was flagged as "false" but was not taken down. At the time, the company said "we don’t have a policy that stipulates that the information you post on Facebook must be true."

Now the company is at least acknowledging the growing danger of deepfakes in misinformation campaigns, but according to former Vice President Joe Biden's presidential campaign, the policy doesn't go far enough. An out-of-context snippet of video of Biden recently made the rounds on Twitter: a few seconds taken from a 13-minute monologue Biden gave in New Hampshire about English common law, cut to suggest that he was espousing white nationalist views. (He had been talking about the need to change a culture that, beginning in the 1300s, allowed men to beat their wives, and he said, "Folks, this is about changing the culture, our culture, our culture. It’s not imported from some African nation or some Asian nation. It’s our English jurisprudential culture, our European culture, that says it’s all right." The clip cut out the part about "changing the culture.") Such a clip is referred to as a "cheapfake," because all it involves is a simple edit that strips a statement of its context.

Biden's campaign issued a statement Tuesday saying Facebook's new deepfake ban is "an illusion of progress," adding, "Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created."

The announcement by Facebook comes just a day ahead of a hearing before the House Committee on Energy and Commerce's Subcommittee on Consumer Protection and Commerce, titled "Americans at Risk: Manipulation and Deception in the Digital Age." Bickert is scheduled to testify at that hearing along with Tristan Harris, the former Google employee and co-founder of the Center for Humane Technology, and Joan Donovan, research director of the Technology and Social Change Project at the Harvard Kennedy School.

As Amy Zegart, co-director of the Center for International Security and Cooperation at Stanford University, told The Hill last summer, social media platforms may not even be the biggest problem when it comes to amplifying the effect of deepfakes. The traditional media has gotten so competitive with scoops gleaned from the internet, she says, that "The press is going to have to resist the urge to get the scoop by talking about something that may not be true before they can validate it." And she adds, "That's going to require some technical skills and it's going to require some patience." (The Biden clip, for instance, got retweeted by several prominent journalists before they understood the full context.)

To that end, Facebook has launched a "challenge" for developers to create new deepfake detection tools, and it partnered with Reuters last fall to sponsor a course for journalists on recognizing deepfakes.

Previously: Doctored Videos Making Pelosi Look Drunk Go Viral