In the security sector, there is a moment that people have been secretly fearing for years: not a single attack or breach, but the introduction of a tool so powerful that it radically alters who can be a hacker and what they can do. The release of Claude Mythos on April 7, 2026, an AI model that Anthropic itself deemed too risky to make public, may have marked that turning point. Within days, the two biggest cryptocurrency exchanges in the world began pressing for access to it. That in and of itself should give you some idea of how nervous the industry is at the moment.
Mythos is what's referred to as an agentic AI, which means it does more than just produce text or respond to inquiries. It takes action. It is capable of simultaneously browsing the internet, interacting with software environments, and scanning intricate codebases without human guidance. During testing it found a 16-year-old flaw in FFmpeg and a 27-year-old vulnerability in OpenBSD. Not over days, but within a few seconds.
It is difficult to overestimate the security implications of that capability when applied to the vast, imperfectly audited smart contract code that forms the foundation of most decentralized finance. Conventional smart contract audits involve teams of experts, take weeks, and still overlook important details. Theoretically, Mythos could run through the same codebase before a security company has scheduled its kickoff call.
| Category | Details |
|---|---|
| The Threat Landscape | |
| Total Crypto Losses (12 months) | Over $1.4 billion stolen from crypto platforms in the most recent twelve-month tracking period, per DefiLlama data |
| Core Shift | AI has collapsed the barrier to entry for sophisticated cyberattacks — tasks that once required weeks of expert work can now be executed in seconds |
| Vibe Hacking | Emerging practice where attackers use AI to generate malicious code with little or no prior technical knowledge — identified as a major near-term threat by security researchers |
| Key Warning Source | Charles Guillemet, CTO at Ledger, confirmed AI is fundamentally reshaping the attack environment for crypto wallets and platforms |
| Claude Mythos — The Model Causing Alarm | |
| Released | April 7, 2026 by Anthropic — described as possessing “hacker-level” reasoning and automated vulnerability discovery capabilities |
| Demonstrated Capability | Identified a 27-year-old OpenBSD vulnerability and a 16-year-old FFmpeg vulnerability during testing — within seconds of scanning codebases |
| Why It’s Restricted | Anthropic deemed Mythos “too dangerous” for public release; access granted only through “Project Glasswing” to select partners |
| Project Glasswing Partners | Amazon, Google, JPMorgan and a small number of government organizations — major crypto exchanges were notably excluded from early access |
| Exchange Responses | |
| Coinbase | CSO Philip Martin confirmed the exchange is in “close communication” with Anthropic about Mythos access — seeking to build an “AI immune system” for defensive use |
| Binance | Also actively negotiating with Anthropic; engaged alongside Fireblocks in assessing how Mythos could reshape both offensive and defensive cyber tools |
| Recent Major Breaches | Drift Protocol (Solana): $285 million drained; Resolv yield platform: $25 million stolen — both occurring as AI attack capabilities accelerated |
| Broader AI Security Context | |
| XBOW AI System | An autonomous AI penetration testing tool currently ranked at the top of multiple HackerOne leaderboards; exploits vulnerabilities in 75% of web benchmarks without human direction |
| Malicious AI Tools | WormGPT and FraudGPT — purpose-built LLMs for generating attack code — have circulated on darknet forums since 2023, with new variants continuing to emerge |
| Guardian Assessment | Anthropic’s new AI capabilities described as “Y2K-level alarming” for critical software infrastructure by security commentators |
The Information reports that Anthropic is currently negotiating with Coinbase and Binance for access to the model. Philip Martin, Coinbase’s chief security officer, confirmed the company is in “close communication” with Anthropic, framing the goal as building what he called an “AI immune system” — using Mythos defensively to scan their own systems before someone else uses a comparable tool to exploit them. It’s a sensible tactic. It’s also an admission that the old playbook isn’t going to be sufficient.
The fact that neither exchange was included in Anthropic’s initial Project Glasswing rollout — a restricted access program that did include Amazon, Google, and JPMorgan — has added a layer of institutional anxiety to the situation. The firms entrusted with billions in customer assets found themselves on the outside of the most consequential AI security tool in recent memory while Wall Street banks got early seats at the table.
The losses have been real enough already, before Mythos has even fully entered the picture. DefiLlama’s tracking data shows that crypto theft exceeded $1.4 billion over the past twelve months, accelerating through the year as AI-assisted attack methods grew more accessible. The Drift Protocol on Solana was drained of $285 million recently. Yield platform Resolv suffered a $25 million loss. These were not simple operations.
Charles Guillemet, the CTO at Ledger, put it plainly in a conversation with CoinDesk: the barrier to entry for sophisticated cyberattacks has collapsed. What previously required extensive expertise and months of preparation can now be executed with AI assistance in a fraction of the time by someone with considerably less knowledge. That shift — from attacks being scarce because they’re hard, to attacks being abundant because they’re easy — is the thing keeping security teams awake at the moment.
There's a concept circulating among cybersecurity researchers called vibe hacking, and it's worth understanding. It's essentially the attack-side version of vibe coding, where someone uses an AI to write functional code without really knowing what they're doing. Applied maliciously, the same logic generates functional exploit code on demand. Katie Moussouris, the CEO and founder of Luta Security, put it bluntly to Wired: individuals with no prior experience will be able to articulate their goals and produce a functional outcome. The necessary infrastructure for this already exists. In 2023, a malicious LLM known as WormGPT began to circulate on Telegram servers and darknet forums. FraudGPT and other variations took its place after it faded. Popular models like Claude, ChatGPT, and Gemini are frequently jailbroken by communities whose only goal is to get around the barriers. The availability of skilled actors is not a worry for the future. It is a present one.
The position of the cryptocurrency industry is especially uncomfortable due to structural factors. Because DeFi protocols and cryptocurrency exchanges run on open-source codebases, anyone can read and examine them, including anyone using an AI that can search millions of lines of code for hidden vulnerabilities. Hayden Smith, cofounder of the security company Hunted Labs, compared the current situation to an emergency landing in an airplane: the call of "brace, brace, brace" has gone out, but the plane is still waiting to touch down. When you are in charge of customer funds denominated in an asset class without deposit insurance and no central authority to turn to in the event of a problem, that comparison takes a different turn.
The issue of security stratification might end up being this moment's enduring legacy. Major cloud providers, conventional banks, and governmental organizations are on Project Glasswing's guest list, which illustrates a hierarchy of trust that leaves DeFi protocols and smaller cryptocurrency exchanges exposed. If the most potent defensive AI tools are retained by wealthy legacy institutions while the larger cryptocurrency ecosystem functions without them, there won't be an arms race. The contest will be overwhelmingly one-sided. Publicly, the industry is not in a panic. But it is, with notable urgency, knocking on Anthropic's door.
