A recent Medium post by a developer offers a timely warning for anyone using AI coding tools to move faster and automate more, while trusting just a little too much.
The headline-grabbing part of Sonu Yadav's story is hard to miss. While working on an AWS migration, he says a sequence of his own decisions — compounded by Claude Code running a Terraform destroy command against the wrong state file — ended up deleting the production database and surrounding infrastructure for DataTalks.Club, wiping out access to 2.5 years of course submissions and student work. In the moment, it appeared that the backups were gone, too.
For about 24 hours, the platform was down and the data looked lost.
That's the kind of anecdote that could easily be turned into an anti-AI morality play. It would be the wrong lesson to draw, but there is definitely a moral at the end.
To his credit, Yadav does not present the episode as proof that Claude Code is reckless or unusable. He says plainly that the bigger failure was his own workflow — and notably, he acknowledges that Claude had actually warned him against mixing the new project into existing production infrastructure.
And yet, he overrode that warning.
From there, the mistakes compounded: He failed to carry over the correct Terraform state after switching computers, walked away to retrieve files from another machine while the agent was mid-task, and did not intervene when the explanation for a destructive command sounded reasonable enough to let it continue. With auto-approve enabled, there was no pause for a final human check.
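The state mismatch at the root of this chain has a standard guard, and it is consistent with the fix Yadav later adopted: keep Terraform state in a shared remote backend instead of on any one laptop, so every machine reads and writes the same source of truth. A minimal sketch, with placeholder bucket, key, and table names (none of these are from Yadav's actual setup):

```hcl
# Remote state backend: every machine operates on the same state file,
# and a DynamoDB lock table prevents two runs from clobbering each other.
# All names below are illustrative placeholders.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "prod/infra.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"
    encrypt        = true
  }
}
```

With a setup like this, switching computers cannot silently leave a stale local state file behind, because there is no local state file to carry over in the first place.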
That's what makes the story useful beyond the technical details. The problem was not some sci-fi version of AI "going rogue." The problem was a mix of oh-so-familiar human temptations: convenience, speed, and overconfidence, along with the problematic belief that because a tool is usually helpful, it can be trusted with more than it should be.
That lesson matters because AI coding assistants keep getting better at sounding competent while they're doing complex work on a user's behalf. They can generate code quickly, inspect configurations, suggest fixes, and carry out multi-step tasks that once required more manual effort. Used well, they can be a real productivity gain. But that same fluency can make it easier for a user to slide from "this is assisting me" to "this is handling it for me," especially when deadlines loom and the terminal is scrolling fast.
Yadav's account is a reminder that those are not the same thing.
When "good enough" becomes a problem
The core lessons are simple enough to travel well beyond software teams: AI tools can be excellent collaborators. However, they are not a substitute for judgment, especially when the task involves live systems, sensitive data, financial records, or any action that is hard to reverse. The more destructive the potential action, the more important it is for a human being to slow down, verify what is happening, and approve the step deliberately.
In this case, Yadav eventually got lucky. (Really lucky.) AWS support was able to locate an internal snapshot that was not visible in his console, and after about a day, the system was restored. The student data came back. But one reason the story resonates is that luck did a lot of work in the ending. Had that snapshot not been recoverable, this would be a very different article.
Friction can be positive
The most constructive part of Yadav's post comes after the crisis. Once the dust settled, he went back and rebuilt his workflow around a simple idea: Powerful tools need friction in the right places. That meant adding backup systems that couldn't be swept away alongside the infrastructure they were backing up, testing restore paths so he'd know they actually worked, enabling deletion protection at multiple levels, and moving the configuration that caused the original confusion to a shared location no single machine could lose track of.
None of that requires knowing what Terraform is. It's just the discipline of assuming things will go wrong and making sure "wrong" can't mean "unrecoverable."
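For readers who do work with infrastructure as code, those "friction in the right places" ideas map onto concrete settings. A hedged sketch in Terraform of what layered deletion protection can look like, using standard AWS provider arguments with illustrative names (this is not Yadav's actual configuration, and required arguments such as credentials are omitted for brevity):

```hcl
resource "aws_db_instance" "prod" {
  identifier        = "example-prod-db"  # placeholder name
  engine            = "postgres"
  instance_class    = "db.t3.medium"
  allocated_storage = 20

  # Friction layer 1: AWS itself refuses to delete the instance
  # while this flag is set.
  deletion_protection = true

  # Friction layer 2: keep automated backups, and take a final
  # snapshot even if the instance is ever deleted deliberately.
  backup_retention_period   = 14
  skip_final_snapshot       = false
  final_snapshot_identifier = "example-prod-db-final"

  # Friction layer 3: Terraform errors out on any plan that would
  # destroy this resource, whether or not auto-approve is enabled.
  lifecycle {
    prevent_destroy = true
  }
}
```

Each layer has to be removed on purpose before a destroy can succeed, which is exactly the kind of deliberate pause that was missing in the original incident.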
That may be the larger AI lesson hiding inside this story. The real risk is not just that AI can make mistakes. It's that people, lulled by convenience and fluency, are likely to stop applying the skepticism and process that used to come naturally when the work felt more obviously dangerous.
The bottom line
Just to be clear: This is a case for common sense, not a case against Claude Code.
Keep using the tools: Let them help, and let them save time. But don't hand them the keys to anything important and assume they're wise enough to avoid a serious accident.