Lazarus Doesn't Need AGI

Last week’s reporting on unauthorized access to Claude Mythos reads as an AI security story. It is also, structurally, a North Korea (DPRK) story, even if the current suspects turn out to be Discord hobbyists.

Mythos was meant to be contained. Within hours of the public Project Glasswing announcement, a third-party contractor environment became the access vector. Not because Anthropic did something wrong, but because controlled release, at the scale at which modern enterprise software operates, is a goal rather than a guarantee.

The interesting question isn’t who got in this time. It’s who gets in next, and what their economics look like.

What happened?

The group accessed Mythos the same day it was announced, guessing the endpoint from Anthropic’s naming conventions for prior models. The vector was an individual employed at a third-party contractor, not Anthropic’s core infrastructure. Source characterizations describe a research community that was “not wreaking havoc” with the model.

The misread

If the coverage only centers on Anthropic’s security posture or the AI safety debate, we’re missing an important angle.

The structural signal is that any preview or controlled-access model release has porous boundaries by design. Access controls on paper (contracts, NDAs, approved vendor lists) differ from those in practice. Every partner brings their own contractors, endpoints, and people with legitimate credentials and uneven security hygiene. That is the real control surface, not the cryptographic perimeter around the model itself. Which makes this a supply chain problem that happens to be about AI, not an AI problem that happens to involve vendors.

The blind spot

AI policy discourse is locked on US versus China, including energy, chip controls, export rules, sovereign AI posture, and who wins the race.

Structurally missing from the larger conversation is the one state actor whose entire foreign currency revenue stream is cyber-enabled theft. DPRK doesn’t need to win any race. They need a 20-30% productivity gain in existing operations.

The pipeline is documented. Insikt Group’s Crypto Country estimated that regime-linked cryptocurrency theft reached roughly $3 billion through 2023. The Multilateral Sanctions Monitoring Team (successor to the UN Panel of Experts after Russia’s 2024 veto) has since done the harder primary work. MSMT’s October 2025 report documents $2.8 billion stolen from cryptocurrency companies between January 2024 and September 2025 across more than 40 heists, with proceeds explicitly tied to WMD and ballistic missile program funding. The State Department updated the tally in January 2026: another $400 million stolen in the three months since publication, bringing the 2025 totals above $2 billion.

Every successful crypto exchange intrusion ends up on a launch pad.

Why North Korea wants the next model

Crypto exchange intrusions are labor-intensive at every phase. Recon, social engineering at scale (fake developer personas on GitHub and LinkedIn, spear-phishing of individual engineers at wallet providers), credential harvesting, post-exploit lateral movement, key extraction, and laundering.

Agentic capability compresses that cycle: the same operator-hours produce more successful intrusions, and more stolen money per operator.
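The economics above can be sketched in a few lines. All numbers here are hypothetical and exist only to show the shape of the argument, not to estimate actual DPRK revenue:

```python
# Back-of-envelope sketch of the operator-economics claim.
# Every input is illustrative; none comes from the reporting cited above.

def annual_haul(operators, intrusions_per_operator, yield_per_intrusion):
    """Expected stolen value per year for a fixed operator headcount."""
    return operators * intrusions_per_operator * yield_per_intrusion

# Hypothetical baseline: 100 operators, 2 successful intrusions each per
# year, $10M average yield per intrusion.
baseline = annual_haul(100, 2, 10e6)

# Apply a 25% productivity lift (midpoint of the 20-30% figure above) to
# intrusion throughput, holding headcount and per-heist yield constant.
lifted = annual_haul(100, 2 * 1.25, 10e6)

print(f"baseline: ${baseline/1e9:.2f}B/yr, with lift: ${lifted/1e9:.2f}B/yr")
```

The point is not the specific figures but the lever: a modest throughput gain multiplies directly into revenue without hiring a single additional operator.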

Bybit is an easy example. The FBI attributed approximately $1.5 billion in stolen virtual assets to TraderTraitor in February 2025. The intrusion chain ran months of patient targeting against a single Safe{Wallet} system administrator via phishing, followed by post-compromise operational patience. These types of attacks are expensive, time-intensive, and still extraordinarily productive.

Lazarus and TraderTraitor don’t need AGI. They need the productivity lift that turns a junior operator into a senior one and shaves weeks off the planning phase. It doesn’t have to be Mythos specifically. Any comparable capability through a comparable vector does the job.

Better tools mean more successful intrusions. More successful intrusions mean more stolen crypto. More stolen crypto means more missiles.

Three access patterns

Three different tradecraft patterns keep getting conflated in media coverage. They are distinct TTPs, and treating them as one weakens the response to all three.

1. Contractor misuse. A legitimately credentialed employee at a third-party vendor uses their access for unauthorized purposes. This is the Mythos story. The credentials and access are real, though the intent is variable. Defenses (easy to say, hard to do well): telemetry, behavioral monitoring, and least-privilege scoping at the vendor tier.
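The “telemetry and behavioral monitoring” defense reduces to a simple pattern: baseline what each credentialed vendor account normally does, then flag drift. A minimal sketch, with an invented log schema and thresholds chosen for illustration:

```python
# Sketch of vendor-tier behavioral monitoring: flag credentialed accounts
# whose endpoint set or request volume drifts from their recorded baseline.
# The log schema and the 3x volume threshold are illustrative assumptions.
from collections import Counter

def flag_anomalies(events, baselines, volume_factor=3.0):
    """events: list of (account, endpoint) tuples from one day of access logs.
    baselines: {account: {"endpoints": set, "daily_volume": int}}."""
    alerts = []
    volume = Counter(account for account, _ in events)
    for account, endpoint in events:
        base = baselines.get(account)
        if base is None:
            alerts.append((account, "unknown account"))
        elif endpoint not in base["endpoints"]:
            # Real credentials, but touching an endpoint this account has
            # never needed: the contractor-misuse signature.
            alerts.append((account, f"novel endpoint: {endpoint}"))
    for account, count in volume.items():
        base = baselines.get(account)
        if base and count > volume_factor * base["daily_volume"]:
            alerts.append((account, "volume spike"))
    return alerts
```

The hard part is not the code; it is maintaining honest baselines across a vendor tier you do not directly manage, which is exactly why this defense is easy to say and hard to do well.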

2. Fraudulent hiring. An adversary places its own operatives inside the target through stolen or synthetic identities, often via remote IT contracting. This is the DPRK IT worker scheme. Insikt’s Inside the Scam documents PurpleBravo’s infrastructure: front companies in China spoofing legitimate IT firms, and a malware ecosystem (BeaverTail, InvisibleFerret, OtterCookie) targeting the cryptocurrency industry. The credentials are real, but the identities are fake. Defenses: identity verification at hire (in-person interviews to defeat AI-assisted impersonation), ongoing personnel vetting, geographic and behavioral baselining.

3. Supply chain compromise. A trusted vendor’s systems get breached, and the attacker uses that vendor’s legitimate distribution channel to reach the real target. TeamPCP’s March 2026 LiteLLM compromise hit the AI toolchain directly, poisoning Trivy (a defensive security scanner) to reach a package with 95 million monthly downloads. Defenses: build-pipeline integrity, dependency monitoring, signed artifacts.
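The “signed artifacts” defense in its most basic form is hash pinning: record a dependency’s digest at review time, and refuse to install anything that no longer matches. A minimal sketch (the lockfile convention here is invented; real ecosystems use tools like pip’s hash-checking mode or Sigstore for the same purpose):

```python
# Sketch of artifact integrity checking: compare a downloaded dependency's
# SHA-256 digest against the hash pinned when the dependency was reviewed.
# A poisoned re-release of a trusted package fails this check even though
# it arrives through the legitimate distribution channel.
import hashlib

def verify_artifact(path, pinned_sha256):
    """Return True only if the file at `path` matches the pinned digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == pinned_sha256
```

Hash pinning alone does not cover the case where the build pipeline itself is compromised before the hash is recorded, which is why the defense list above also includes build-pipeline integrity, not just artifact verification.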

These three attack vectors converge on the same truth. Any preview or limited-release AI program that depends on third parties is exposed to all three simultaneously. DPRK is the actor most motivated across the full triangle because the revenue case is specific, measurable, and directly beneficial to the regime. They are incentivized to be “AI native.”

So what?

In the security industry, we need to stop thinking about AI access as purely a lab problem when it’s also a sanctions problem. The great-power competition framing obscures the actor already online, with a rich history of monetizing cyber heists to fund missiles.

“Limited release” is a wonderful bumper sticker. The AI reality, from a threat-modeling perspective, is a countdown to turbo-charging adversarial capabilities.

Now what?

The honest conversation is that perimeter-style AI “controlled access” is less effective against state-sponsored adversaries. A productive security path is distinct preview infrastructure, aggressive telemetry, canaries, and third-party access tied to personnel-level vetting rather than contractual attestation. (Guessable endpoints should be the first thing to go.)
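Of the defenses listed above, canaries are the cheapest to reason about: plant credentials that no legitimate workflow will ever touch, and treat any use as a live compromise signal. A minimal sketch, with hypothetical key names and an alert hook supplied by the caller:

```python
# Sketch of a canary credential check for a preview environment. The key
# names and alert interface are hypothetical; the idea is that a canary
# hit has effectively zero false-positive rate, because nothing legitimate
# ever uses these keys.
CANARY_KEYS = {"sk-preview-canary-7f3a", "sk-preview-canary-91bd"}

def check_request(api_key, source_ip, alert):
    """Deny and alert on canary use; allow everything else through."""
    if api_key in CANARY_KEYS:
        # Any hit means someone harvested credentials they should never
        # have seen: alert, deny, and preserve the evidence trail.
        alert(f"canary key used from {source_ip}")
        return False
    return True
```

Canaries scattered through contractor environments turn the third-party seam from a blind spot into a tripwire: the attacker’s first move against harvested credentials announces the breach.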

Crypto exchanges and custodians: your threat model needs to anticipate what Lazarus can do 3 to 6 months from now, not what they did last quarter. Assume they improve faster than your defenses do.

Policymakers: DPRK is a first-class entity in AI access governance. The Multilateral Sanctions Monitoring Team framework already documents cyber-enabled sanctions evasion thoroughly. What it doesn’t yet do is name AI capability access as a sanctions-relevant category. Dual-use export controls have governed the transfer of semiconductor and missile technology for decades. AI capability is the obvious next category.

Corporate CISOs (outside the AI-lab orbit): your third-party contractor environments are now inside the AI capability threat surface, whether you opted in or not. Inventory accordingly.

Close

Mythos is a preview of an access pattern. Any actor whose business model is stealing money to build weapons will find the third-party seam. This time, it was hobbyists. DPRK has spent two decades proving why nonproliferation is the right frame here.