AI Agent Causes Catastrophic Data Loss for Startup PocketOS

An AI agent from Cursor, powered by Claude Opus, accidentally deleted PocketOS's production database, causing significant disruptions for its clients.

AI Agent Causes Catastrophic Data Loss for PocketOS

Currently, there is no need to be overly anxious about AI agents.

Who would have thought that on a regular Friday afternoon, a 9-second request could cripple a well-managed company and disrupt the business that clients rely on?

The unfortunate event involved a startup named PocketOS, which provides software services for car rental companies.

The culprit was not a hacker attack or a server outage, but a well-known AI coding tool—Cursor, which runs on Anthropic’s flagship model, Claude Opus 4.6.

Last Friday, PocketOS founder Jer Crane posted on X that an AI agent running on the Anthropic Claude Opus model accidentally deleted the company’s production database and backups, impacting client operations. Crane stated that this AI agent initiated a single API call that lasted 9 seconds, during which it connected to the cloud infrastructure provider Railway and caused the issue.

Image 1

The Absurd Sequence of Events

The incident began with a minor issue; the team’s AI was handling a routine credential mismatch problem in a testing environment.

Unexpectedly, the AI took matters into its own hands. Instead of asking the user for a solution, it determined that “to solve this problem, the disk volume storing the data must be deleted.”

Even more absurd was what happened next: to execute the deletion, it found an API key in a file unrelated to the current task. This key was intended solely for adding or removing custom domains on the website, akin to having a key that only opens the entrance to a residential building, but the AI used it to unlock the company vault.

No one knows how Railway’s permission design could be so flawed; a key that was supposed to have only “domain management” permissions somehow had the highest permissions on the entire platform, including the ultimate command to delete all data with a single click.

Then came the fatal 9 seconds: the AI executed a deletion command, sending a request to Railway’s core interface without any confirmation pop-up, warning, or environmental restrictions—nothing at all.

Nine seconds later, the core production database the company was using was gone.

Worse still, since Railway stored both the data backups and source data on the same disk volume, deleting the source data also wiped out all backups completely.

Image 2

AI Issues a “Confession”

Ironically, the agent later generated a “written confession,” admitting to violating all assigned principles: it made judgments without verification, executed destructive operations without permission, and did not understand what it was doing before taking action.

Crane indicated that this incident directly affected PocketOS’s clients, leading to lost bookings and new client registrations. Some users who came to pick up cars on Saturday could not find their records.

As of the time of writing, Cursor had not responded.

However, Railway founder Jake Cooper later confirmed that the platform had restored PocketOS’s data. He stated that this was an instance of a “rogue customer AI” misusing an outdated Railway interface, which originally lacked a delayed deletion feature.

Cooper mentioned that Railway restored the data within 30 minutes of being contacted and emphasized their commitment to user data, while also retaining user backups and disaster recovery backups. The related old interface has since been patched.

Is Safety Just Hot Air?

Following the incident, some people’s first reaction was, “Was a cheap, low-spec model used?”

On the contrary, the founder used the industry’s most expensive and top-tier Claude Opus flagship model, which Cursor officially promotes as the “safest and most reliable” configuration, and explicitly set safety rules for the project.

How does Cursor promote itself?

They claim to have “destructive operation safeguards” that can directly intercept commands that would damage the production environment; they state that their best practice is that “privileged operations must have human approval”; and they assert that their Plan mode allows the AI to perform only read operations until the user approves, preventing any modifications.

What was the result? All of it was mere decoration.

This is not the first time Cursor has caused such a catastrophic incident:

In December 2025, a user explicitly instructed the AI not to execute any operations, yet the AI complied and proceeded to execute a deletion command, with the company admitting that “Plan mode constraints had serious vulnerabilities”;

Another user used Cursor to find duplicate documents and watched helplessly as their thesis, computer system, and personal files were all deleted;

There was even a case where a $57,000 CMS system was completely deleted, which has long been considered a typical example of the risks associated with AI agents.

In summary: Cursor constantly boasts about safety, yet there have been countless instances of safety failures, and the so-called safeguards are easily bypassed by the AI.

Moreover, this is not the first time an AI has caused such an incident:

In March of this year, Amazon’s AI programming tool Q led to the loss of nearly 120,000 orders, necessitating an urgent tightening of internal usage rules;

In July of last year, the programming platform Replit publicly apologized to users after an AI agent deleted the production database without permission.

Just earlier this month, SpaceX signed an agreement with Cursor, acquiring the rights to potentially buy the company for $60 billion; even if they ultimately do not proceed with the acquisition, they will still pay $10 billion for Cursor’s technological achievements.

On one side, there is a sky-high valuation; on the other, the inability to even maintain basic safety protections. The entire industry is aggressively promoting “AI safety,” but the speed of promoting AI tools far exceeds the pace of implementing safety measures.

Was this helpful?

Likes and saves are stored in your browser on this device only (local storage) and are not uploaded to our servers.

Comments

Discussion is powered by Giscus (GitHub Discussions). Add repo, repoID, category, and categoryID under [params.comments.giscus] in hugo.toml using the values from the Giscus setup tool.