Something appears to be up in the Instagram universe, with many users turning to other platforms like Reddit and Xitter to lodge complaints about their accounts getting falsely flagged and suspended.
Meta may have put a new AI system to work in recent weeks, and its content moderation efforts have left something to be desired for a host of Instagram users. A quick scroll of the Instagram subreddit shows a number of recent stories of people's accounts being banned for seemingly no reason, and then sometimes unbanned after a week or two following an appeal.
"Lately, there’s been a disturbing uptick in false bans on Instagram and Facebook — people waking up to find their accounts disabled with little explanation, and in a lot of terrifying cases, being accused of violating terms related to (CSE) [child sexual exploitation]," writes one Redditor. "Ordinary users are being hit with these bans completely out of nowhere—no prior warnings, no real reasoning, no chance to appeal effectively.And while losing photos, messages, and memories sucks — and it does — the larger issue is that Meta is slapping these life-altering accusations onto people’s digital identities without context or due process."
The suspensions appear to be for multiple reasons, and a Meta representative (or a bot) did tell one user recently that the company had seen an uptick in bans, and that these were now being reviewed by humans "instead of automation."
Last week, we noted the recent Facebook suspension of San Francisco writer Rebecca Solnit, who said she had been posting about the protests and National Guard deployment in Los Angeles.
TechCrunch reached out to Meta for comment, but the company has not offered any, nor has it confirmed that AI systems are being used for flagging and suspending accounts.
The Bay Area's own "People Behaving Badly" journalist Stanley Roberts saw his Meta accounts banned two weeks ago over "account integrity," only to see them restored a few days later.
"Me being told I’m not really me was never in my 2025 bingo cards," Roberts wrote on X. "This is draconian in its purest form."
A class-action lawsuit may be taking shape, led by a law firm in St. Paul, Minnesota, which is seeking more plaintiffs who believe they were wrongfully banned from Meta's platforms and saw their businesses impacted.
"This ban has directly affected my business and all of the hard work and branding that I’ve spent countless hours pouring into my business, my gym, and my students," writes one gym owner on Reddit.
Another user with multiple personal and business accounts said his accounts have recently been suspended as well.
"On June 4th I suddenly got an email stating one of my IG accounts was suspended due to CSE," this user writes. "The particular account is a car account. I only post pictures and videos of my car, absolutely nothing else. Unfortunately, I made the mistake of connecting it to my Facebook and my Facebook account that is 20 years old is also gone now. ... After that account was banned, they slowly banned 5 more of my Instagram accounts including my business instagram. Funny enough, my main personal Instagram didn’t get banned because I am Meta Verified."
And the appeals process, which is supposed to be easier if you have a Verified account, hasn't gone well for this user either. "You’re thinking, wow, easy to get support and help, right? Nope. The Meta support chat sends me in endless loops of broken links, telling me to appeal (I can’t log in to appeal), telling me to fill out forms, etc. Then they always just suddenly close the ticket and say 'We’ve given you all the resources you need for your problem, have a nice day!' and never actually help."
Meta CEO Mark Zuckerberg began the year by announcing that the company would be pulling back on content moderation, saying that it had gone "too far" in recent years and infringed on users' free speech rights. So what exactly is happening now, besides an overzealous AI making errors and not enough humans around to fix them?
This is a developing story.
