Anonymity for published writers and users with a heavy online presence is quickly becoming a thing of the past, warns writer Kelsey Piper, after Anthropic’s Claude Opus 4.7 managed to identify her from a brief passage of unpublished work, even while she was logged out and using incognito mode.

Vox writer Kelsey Piper, who champions anonymity, ran some tests on Anthropic’s Claude Opus 4.7 last week and quickly learned that her usual safeguards, such as logging out, using incognito mode, and disabling saved preferences and memory, didn’t prevent Claude from pinpointing who she was almost immediately, based on just 125 words of an unpublished political column.

This is particularly alarming given the news that Anthropic’s latest model, Mythos, can reportedly hack into nearly every computer in the world, and for that reason is not being released to the public, as SFist reported last week.

To rule out the possibility that the system was accessing hidden user information, Piper asked a friend to repeat the test on a separate computer; the result was the same. She also ran the passage through the API, again with no identifying context, and received the same answer each time.

She notes that narrowing down the author of a political column isn’t that difficult, especially given her public body of work, but the model reached the same conclusion across unrelated and unpublished material, including a student progress report, a movie review, fiction, and a college application essay. Other models weren’t nearly as accurate, she said, although ChatGPT successfully pinpointed her some of the time.

When she asked Claude and ChatGPT to justify their answers, she said the explanations were often off-base. She suspects they were generated after the fact, with the models identifying patterns in writing and then retrofitting a narrative to justify the result, despite not actually understanding how they reached it.

Piper also tested friends with little to no online writing history and found they retained a bit more anonymity, though the window is quickly closing. Claude sometimes guessed people based on their social circles, suggesting it was picking up shared stylistic traits. In one case, the model failed to name the correct author of a private Discord message but instead identified two of her close friends who had more public writing.

Repeated tests produced similar near-misses, with the model circling a small group of connected writers.

While AI cannot yet identify the average user from a single passage, it can already narrow down prolific writers — and that threshold is likely to drop as models improve.

Previously: Anthropic's New Model, Mythos, Is So Dangerous It Isn't Being Released to the Public

Image: Michael M. Santiago/Getty Images