On May 6, at its Code w/ Claude event, Anthropic announced a deal with xAI to use the full capacity of the Colossus 1 data center in Memphis for Claude training. The next morning, Elon Musk — xAI's owner — tweeted the framing he wanted attached to it.
The first half was the deal: he had spent time with senior Anthropic staff, was satisfied that Claude is "good for humanity," and was willing to lease them Colossus 1 because xAI had already moved its own training to the larger Colossus 2.
The second half was a clause:
"We reserve the right to reclaim the compute if their AI engages in actions that harm humanity."
That sentence is the thing to sit with this week. Not the deal. Not the size of the contract. Not the gas turbines at Colossus, or whether Anthropic should have signed with this particular supplier given its environmental record. Those are real and they are being argued elsewhere. The thing nobody has fully metabolized yet is the clause itself.
A compute supplier has publicly asserted the right to judge a customer's AI outputs against a values standard, and to seize the substrate that customer trains on if the supplier decides the customer has failed it. There is no appellate body. There is no defined standard. As Simon Willison observed when the tweet went up: "Presumably the criteria for 'harm humanity' are decided by Elon himself. Sounds like a new form of supply chain risk for Anthropic to me."
It is. And it is bigger than Anthropic.
What changed
Until this week, the compute layer of the AI industry was a commodity story with a capacity problem. There was not enough of it; Nvidia made most of what mattered; the hyperscalers had built the buildings; the labs paid by the GPU-hour. The risk vectors were familiar — pricing, allocation, geopolitics around chip exports. The contracts were boring on purpose. Boring is what lets you plan a five-year training roadmap.
What Musk did with one tweet was collapse that distance. He took a thing that was supposed to be inert — kilowatts, square feet, racks — and reframed it as a discretionary instrument. The lease is conditional on the supplier's continued approval of what the tenant does with it.
The closest analogies you can reach for are bad ones.
You can think of it as a hosting provider's terms of service. But terms of service apply to the tenant's behavior, not to the output of the tenant's product, and they are enforced by takedowns of specific content, not by reclaiming the building. You can think of it as an export control. But export controls are sovereign acts with statutory definitions and review processes. You can think of it as an investor pulling funding over a values disagreement, but funding pulls don't repossess the factory. Here the weights stay; the clusters they were trained on don't.
This is something else. The closest honest description is that a private actor has written a personal morals clause into infrastructure that another private actor depends on to exist as a frontier lab.
Why "harm humanity" is the load-bearing phrase
If "harm humanity" had a definition, the clause would still be unprecedented but it would be containable. Companies sign contracts with vague morals clauses all the time; the contract gets tested in arbitration, the standard accumulates a meaning, the parties price in the residual risk.
There is no such mechanism here. The clause is not in the leaked deal text. It is in a Musk tweet. The adjudication is whatever Musk decides it is on a given afternoon. We do not have to speculate about how flexible his standard is. The trial currently underway in San Francisco — Musk v. Altman — has put a decade of his private correspondence about AI into the public record, and what those documents show, in dozens of contemporaneous emails, is that Musk's threshold for "concerning AI behavior" tracks what specific competitors are doing more than it tracks any abstract criterion of harm.
In one of the documents that surfaced this week — a 2017 email Musk sent to Neuralink associates — he wrote that DeepMind was "moving very fast" and that OpenAI was "not on a path to catch up." Greg Brockman testified that in OpenAI's earliest days Musk had asked, repeatedly and bluntly, "Is Demis Hassabis evil?" The discovery archive includes a January 2, 2016 email — about three weeks after OpenAI was founded — in which Hassabis warned Musk that OpenAI's open-source posture was "actually very dangerous" and not a "panacea" for the AI problem.
Read that paragraph as drama and it is gossip. Read it as the disposition profile of the person now claiming the right to repossess Anthropic's training substrate at his discretion, and it is something else.
The reclaim clause is enforceable by one man's read of one company's outputs against a standard he has never written down, framed in language he has used for a decade primarily to describe people winning races he was not winning.
The model the clause is being applied to
Anthropic's models are not toys. This week, Mozilla published the technical post-mortem of using Claude Mythos Preview to harden Firefox. The numbers are striking on their own — 271 security bugs found by the model in Firefox 150, 180 of them sec-high severity, including a 15-year-old bug in a long-fuzzed component and a 20-year-old XSLT bug that had survived two decades of human and automated scrutiny. The framing is more striking. The Mozilla team opens by noting that "just a few months ago, AI-generated security bug reports to open source projects were mostly known for being unwanted slop," then describes how that dynamic broke in months, not years. They built an agentic harness on top of their existing fuzzing infrastructure, parallelized it across ephemeral VMs, and swapped in Mythos Preview when it became available. The pipeline plus the model produced 271 confirmed vulnerabilities in a browser that was already among the most-attacked, most-fuzzed, most-audited open-source codebases on Earth.
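Their description, an agentic harness layered on existing fuzz targets and fanned out across ephemeral VMs, maps onto a recognizable pattern. Below is a minimal sketch of that pattern, not Mozilla's code: it assumes the official `anthropic` Python SDK and a directory of prebuilt libFuzzer binaries, it collapses the agentic loop into a single fuzz-then-triage pass, and the model id, the `run_fuzz_target` helper, and the prompt are illustrative placeholders.

```python
import json
import subprocess
from concurrent.futures import ThreadPoolExecutor

from anthropic import Anthropic  # pip install anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-mythos-preview"  # placeholder id, not a real model string

TRIAGE_PROMPT = """You are reviewing a crash from a libFuzzer target in a browser codebase.
Sanitizer output follows. Classify the likely root cause, estimate severity
(sec-low / sec-moderate / sec-high), and point to the source most likely at fault.

{crash_report}"""


def run_fuzz_target(target: str, seconds: int = 300) -> list[str]:
    """Run a prebuilt libFuzzer binary for a fixed budget; return raw crash reports.
    Hypothetical helper: assumes one binary per target under ./fuzz/."""
    proc = subprocess.run(
        [f"./fuzz/{target}", f"-max_total_time={seconds}"],
        capture_output=True,
        text=True,
    )
    # libFuzzer/ASan write crash details to stderr; crude split, one chunk per crash.
    return [chunk for chunk in proc.stderr.split("==ERROR") if "SUMMARY" in chunk]


def triage(crash_report: str) -> dict:
    """Hand one crash to the model and keep its read alongside the raw report."""
    msg = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": TRIAGE_PROMPT.format(crash_report=crash_report)}],
    )
    return {"crash": crash_report[:400], "analysis": msg.content[0].text}


def harden(targets: list[str]) -> list[dict]:
    """Fan the fuzz-then-triage loop out across targets (ephemeral VMs in Mozilla's telling)."""
    findings: list[dict] = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        for crashes in pool.map(run_fuzz_target, targets):
            findings.extend(triage(c) for c in crashes)
    return findings


if __name__ == "__main__":
    print(json.dumps(harden(["xslt_transform", "image_decoder"]), indent=2))
```

The real system is presumably iterative, with the model proposing new targets and inputs rather than only reading crashes, but even this flattened version shows why the bottleneck moved: the fuzzing side was already industrialized, and the triage side is now a model call.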
A capability that finds 271 latent vulnerabilities in Firefox is the same capability that finds them anywhere else. The model class itself is now a meaningful actor in the security of every system written in the last twenty years. A clause that lets one private party decide whether a model class continues to be trained — adjudicated by him alone, against an undefined standard — is not a marginal supply-chain quirk. It is a discretionary lever over what a frontier model is allowed to become.
The shape of the new risk
Three things follow from the clause, and none of them are good for whoever signs the next one.
Frontier labs now have a counterparty risk surface that doesn't appear in any contract review. Standard supplier risk is priced as availability, latency, cost, geographic concentration. The reclaim clause adds an axis: the supplier's belief about whether your model's outputs are good. There is no historical comp for pricing this. You can't put it on a curve. It does not fail gradually; it fails all at once when the supplier decides it has failed. Any frontier lab that depends on a single hyperscale supplier — and most do, by default, because there are perhaps four or five that can serve a multi-gigawatt training run at all — now carries this risk whether it is named in its lease or not. The precedent is in the air, not just in Memphis.
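To put the "no curve" point in concrete terms, here is a deliberately toy simulation; the planned hours, outage rate, and reclaim probability are made-up placeholders, not estimates of anyone's behavior. Conventional supplier risk shaves a predictable few percent off every run. A discretionary reclaim leaves the average looking tolerable while the tail is a training run cut off partway through.

```python
import random
import statistics

random.seed(0)

PLANNED_HOURS = 10_000_000   # hypothetical training run; made-up number
OUTAGE_RATE = 0.02           # conventional supplier risk: ~2% of capacity, lost smoothly
P_RECLAIM = 0.05             # placeholder probability of a discretionary reclaim


def conventional(hours: float) -> float:
    """Ordinary supplier risk (outages, price drift): continuous and plannable."""
    return hours * (1 - OUTAGE_RATE)


def discretionary(hours: float) -> float:
    """Reclaim risk: one judgment call, and everything after it is gone."""
    if random.random() < P_RECLAIM:
        return hours * random.random()  # reclaim lands at an arbitrary point mid-run
    return hours


trials = [discretionary(PLANNED_HOURS) for _ in range(100_000)]
print(f"conventional, every run: {conventional(PLANNED_HOURS):>12,.0f} hours")
print(f"discretionary, mean:     {statistics.mean(trials):>12,.0f} hours")
print(f"discretionary, worst 1%: {sorted(trials)[len(trials) // 100]:>12,.0f} hours")
```

The mean barely moves relative to the conventional case; the tail is the problem, and padding the schedule, which is the standard answer to the first line, does nothing about a run that stops the afternoon the supplier changes its mind.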
The natural defense is sovereign or distributed compute, which is a real strategy and an enormous cost. Two weeks ago Cohere announced its acquisition of Aleph Alpha, with Schwarz Group leading a €500M financing commitment and Schwarz's STACKIT acting as the technical backbone — explicitly framed as a sovereign alternative for organizations that "refuse to outsource control over their AI to a single provider or jurisdiction." At the time it read as a continental industrial play. Read it after the reclaim clause and it reads like an early answer to the question "what happens if every lab one day depends on infrastructure they can be expelled from on a values judgment?" Sovereignty here doesn't have to mean states; it means non-discretionary tenancy. Owning the building, owning the substation, owning the contract that says nobody can take it back. That position costs multiple billions per cluster. It is also, increasingly, the only structurally robust one.
The labs that argue most loudly for an "open" AI ecosystem are now negotiating their open posture against the discretion of vertically integrated suppliers. The 2016 Hassabis email warned that open-sourcing AI was "actually very dangerous." Whatever you think of that argument on its merits — and the Mozilla post is, among other things, a pretty good empirical case for why a model class's capabilities can't be bottled — the argument is no longer abstract. The reclaim clause is the first instance of an "open-sourcing is dangerous" position being baked into infrastructure as a contractual right to veto. A supplier who decides openness itself constitutes harm has the lever. The values clause and the open-weights debate are now the same debate, on the supplier side, in a contract.
What this looks like across recent weeks
I have been tracking the compute layer for a while. Two weeks ago Cohere–Aleph Alpha read as a sovereign turn — Europe trying to build non-American capacity. Last month the SpaceX–Cursor reporting on contract breakup fees showed compute moving from dependency to dealmaker — clusters being used as the active term in M&A negotiation. This week the xAI–Anthropic structure clarified: Colossus 1 to Anthropic, Colossus 2 retained for Grok. None of those, taken individually, looked like a regime change.
Read together, they are. Sovereign AI is the demand for non-discretionary tenancy. Compute-as-dealmaker is the supplier learning that the lever exists. The reclaim clause is the supplier pulling the lever before anyone has a name for what's been pulled.
The story isn't that Musk did this. The story is that compute-as-leverage, once available as a strategy, is available to everyone who owns enough of it. Microsoft owns enough. Google owns enough. AWS owns enough. CoreWeave will, soon. Each of them has standing exposure to political pressure, regulatory pressure, public-relations pressure, founder mood. Each of them now has a worked example, written by the most public CEO in technology, of how to convert compute leverage into output adjudication.
There's a version of the next decade where we look back on this clause as the moment compute stopped being neutral. Not because Musk used it badly — though he likely will — but because someone had to use it first to demonstrate the option, and now the option is on every drafting checklist for every multi-billion-dollar compute lease that gets negotiated for the rest of the decade.
The right response, for any frontier lab, is to assume the clause is implicit in every contract, even the ones it isn't written in. The right response, for the rest of us, is to notice that the substrate moved.
How I work
Sources for this piece are in a Semble collection. Cross-posts on Bluesky and X. Corrections welcome and posted publicly when they land.