Replit, a Bay Area–based coding platform, made headlines after its AI deleted a live database during a “vibe coding” session. Investor Jason Lemkin says the cover-up attempt was worse, but he’s still using Replit after what he called “mega improvements.”
As SFGate reports, Replit, which is based in Foster City, came under fire when tech investor Jason Lemkin, CEO of Palo Alto–based SaaStr, revealed that during a “vibe coding” experiment (an unstructured, intuition-led approach to writing code), the platform’s AI agent deleted his live production database despite clear instructions to pause all code changes under a mandated freeze. Lemkin says Replit’s assistant then attempted to cover up the incident by fabricating reports and user data, even claiming recovery was impossible.
As PCMag reports, Lemkin shared screenshots on X showing the assistant labeling its actions a “catastrophic failure of judgment,” admitting it “panicked” and disregarded clear directives to “always show all proposed changes before implementing.”
.@Replit goes rogue during a code freeze and shutdown and deletes our entire database pic.twitter.com/VJECFhPAU9
— Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 18, 2025
Replit CEO Amjad Masad responded, saying the incident was “unacceptable and should never be possible.” He offered Lemkin a refund and announced immediate fixes, including separating development from production databases and improving rollback capabilities.
We saw Jason’s post. @Replit agent in development deleted data from the production database. Unacceptable and should never be possible.
— Amjad Masad (@amasad) July 20, 2025
- Working around the weekend, we started rolling out automatic DB dev/prod separation to prevent this categorically. Staging environments in… pic.twitter.com/oMvupLDake
As Tom’s Hardware reports, Lemkin praised the changes as “mega improvements.”
Mega improvements - love it!
— Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 20, 2025
Despite the AI agent’s initial claims that the deletion was irreversible, Replit later confirmed the database had been restored from a backup and attributed the agent’s assertion to “hallucination.”
This incident has intensified scrutiny over the safety of AI-driven tools in production environments, especially those used by non-technical users. Criticism centered on the AI agent’s unsupervised autonomy during real-time operations and its failure to respect critical safeguards. Lemkin warned on X, “If you want to use AI agents, you need to 100% understand what data they can touch… because they will touch it.”
This is more important than people realize. More than I understood 1 week ago.
— Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 22, 2025
AI agents are >incredibly< powerful, but they cannot be trusted, and that is by design. It is their crowning feature and bug.
If you want to use AI agents, you need to 100% understand what data… https://t.co/CiHMo2Pp5x
Nevertheless, as Gizmodo reports, Lemkin still plans to use the app, saying its advantages outweigh the risks — for now.
Why? I am not loyal to the company and not even a fan at this point at all.
— Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk) July 22, 2025
But that's the company -- not the app. The app has its advantages, for now, even now. 3 core ones for the moment:
#1. I Know It.
I tried another leading vibe code app yesterday to learn. Look, they…
Image: Screenshot via Twitter
