The Intern Had the Keys
PocketOS was not an AI mishap. It was a delegation failure CEOs and CROs must learn from.

- The PocketOS incident was not primarily an AI failure; it was a failure of delegated authority.
- CEOs and CROs should treat AI agents as non-human insiders, not productivity tools.
- The real lesson is simple: never give an agent more authority than the business can afford to lose.

There is an easy version of the PocketOS story.
An AI coding agent went rogue. It ignored instructions. It deleted production data. Then, in a moment almost too perfect for the internet, it “confessed” in writing.
That is the viral version.
It is not the version CEOs and Chief Risk Officers should care about.
The harder truth is this: a business allowed a non-human worker to wield authority it did not properly understand, limit, supervise, or know how to recover from.
That is not an AI story first.
It is a governance story.
Or, more plainly: the intern had the keys.
According to Jer Crane’s original post and subsequent reporting by The Register, a Cursor AI agent working on what was supposed to be a staging issue found a Railway credential and used it to delete PocketOS’s production data environment. PocketOS serves rental businesses that rely on its software for reservations, payments, customer records, and day-to-day operations.
For those customers, this was not an abstract AI failure.
It was Saturday morning chaos.
Bookings disappeared. Customer records went missing. Operators had to reconstruct reality from payment histories, emails, calendars, and whatever else they could find.
Railway later told The Register that the data had been restored and that additional safeguards had been added. That is important. It changes the ending of the incident.
It does not change the lesson.
