A data leak at artificial intelligence (AI) company Anthropic has exposed the existence of a powerful new model with advanced cybersecurity attack capabilities, prompting warnings from security experts that AI is crossing a critical threshold in offensive hacking ability.
The leak, which first came to light on March 26, 2026, exposed draft blog posts and nearly 3,000 unpublished assets from Anthropic’s content management system after a configuration error left them publicly accessible. The drafts revealed a new model referred to variously as Claude Mythos and Claude Capybara, which Anthropic described internally as “larger and more intelligent than our Opus models, which were, until now, our most powerful,” achieving “dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity.”
Anthropic confirmed the model’s existence after being contacted by Fortune, saying it was developing “a general purpose model with meaningful advances in reasoning, coding, and cybersecurity,” and describing it as a step change in capability. Citing the model’s cost and capabilities, the company said it was taking a cautious rollout approach, beginning with a small group of early-access users.
The disclosure has alarmed cybersecurity professionals. Jonathan Zanger, Chief Technology Officer of Check Point Software Technologies, said the development signals two major structural shifts in the global threat landscape: the democratisation of advanced attack capabilities and the industrialisation of cyber attacks.
“Capabilities that once required elite threat actors or well-funded nation-state teams will be accessible to low-skill actors leveraging AI assistance,” Zanger warned. He noted that the convergence of these forces would compress the time between vulnerability discovery and exploitation to near zero, creating what he described as an era of “AI attack factories.”
Security observers also pointed to the irony of the disclosure, with one publication observing that “the model that promises unprecedented cybersecurity capabilities was exposed by an elementary security flaw.”
Zanger urged organisations to reassess their security posture immediately, particularly around zero-day protection, patching cycles, network segmentation and legacy system exposure. He cautioned that whether an organisation has adopted AI is irrelevant: threat actors already have, and will continue to push the technology’s capabilities further.
Anthropic has not confirmed a public release date for the model, and the company removed public access to the data store after the leak was reported.
