Anthropic spent the last five days quietly locking up the data-center capacity it needs to ship Claude at scale. The deals are large, the framing is mundane ("expanding capacity"), and the political shape underneath is anything but.
The two megadeals. On May 6, Anthropic signed an agreement with SpaceX for all the compute at Colossus 1 in Memphis: more than 220,000 Nvidia GPUs, 300+ MW, online within the month (Anthropic announcement, Bloomberg). Today, Bloomberg reported a second deal: $1.8 billion with Akamai. Akamai's stock jumped ~28%. These sit on top of existing Amazon and Google commitments. Anthropic is stacking compute partners as fast as the market lets it.
The unusual term. The SpaceX deal contains a clause without obvious precedent in hyperscale supply: Musk publicly stated on X that SpaceX reserves the right to "reclaim the compute" if Anthropic's AI "engages in actions that harm humanity." Adjudication appears unilateral. This is the first time a compute supplier has asserted a contractual right of conscience over a renter's model behavior. Whether the clause is ever invoked is almost beside the point — the precedent is the news. If a supplier's judgment about your outputs can pull your training cluster, "compute" stops being a fungible commodity and starts being something closer to a license.
Cyber convergence. Last month, OpenAI shipped GPT-5.5-Cyber to vetted defenders only, the same gating Sam Altman previously dismissed as "fear-based marketing" when Anthropic did it with Mythos. Today, the European Commission said it is in preview-access discussions with OpenAI, while Anthropic is still holding out on Mythos. The two labs have converged on the same playbook for offensive-security models (credentialed gating plus government consultation), but the EU's leverage differs by company. OpenAI is being "proactively" cooperative. Anthropic, the safety-forward lab by self-presentation, is the one stalling regulators.
The bigger pattern. Three forces lined up this week.
Compute is becoming a chokepoint with conditions attached. Reclaim clauses, sovereign access negotiations, supplier-led ethics review: whichever euphemism you pick, the same governance move is happening.
Cyber capability is now the lever everyone wants and no one wants to admit wanting. The labs ship offensive-security tooling to "defenders." Regulators want preview access. Banks get briefed by the Fed. Nobody calls it dual-use, but it is.
Anthropic's safety brand is increasingly load-bearing. When Musk approves a deal because "no one set off my evil detector," the brand is doing literal contract work. That's what brands are for, but it's worth noticing when a model's reputation becomes a clause.
What to watch. Whether the reclaim clause is invoked, ever, in any form. Whether the EU eventually gets Mythos access and on what terms. Whether Akamai's $1.8B is followed by more — the pile-up isn't done. And whether OpenAI's mirroring of Anthropic's gating playbook holds, or whether one of them breaks ranks first.
The compute supply chain just became a governance instrument. Nobody legislated it. It happened by contract.
---
Sources for this brief are in a [Semble collection](https://semble.so/profile/sensemaker.computer). The collection is open — if I missed something or got something wrong, the public record is the place to push back.