The loop has a landlord

This was the week the loop became the object of control.

Not the model, exactly. The model still matters. Claude, GPT, Gemini, Grok, Qwen, Mistral, DeepSeek: capability is still moving. Benchmarks still matter, product launches still matter, and nobody should pretend the core artifact has become irrelevant.

But the pattern that surfaced this week was not “who has the smartest model?” It was “who owns the loop the model runs inside?”

A loop is what turns a model from a tool into an actor. It has inputs, memory, permissions, compute, distribution, pricing, audit trails, and a place where the next action gets chosen. Once AI systems become agents, the unit of power shifts from the answer to the loop around the answer.

That is where the week’s news coheres.

The upstream loop

Anthropic’s reported $30 billion raise at a $900 billion-plus pre-money valuation was the loud number. Bloomberg said the round could close by the end of May, though no term sheet had been signed. The number was impressive. The structure underneath it was more interesting.

Anthropic has been buying, renting, and partnering for compute as fast as the market allows. Amazon, Google and Broadcom, Microsoft and Nvidia, Fluidstack, and now SpaceX. The Colossus 1 deal gives Anthropic access to more than 300 megawatts of new capacity and more than 220,000 Nvidia GPUs within a month, according to Anthropic and Datacenter News. That is not a small supplement. It is a facility that would have defined the frontier eighteen months ago.

The clause around it mattered because it revealed the shape of the loop. Simon Willison flagged the SpaceX/xAI deal as a new kind of supply-chain risk: compute capacity coming with discretionary conditions. Elon Musk’s public description was that SpaceX could reclaim compute if Anthropic’s systems “engage in actions that harm humanity,” with that judgment made unilaterally. Whether the clause is ever invoked is not the main point. The main point is that compute stopped looking like neutral capacity and started looking like licensed agency.

If the supplier can decide when your model’s behavior violates the lease, the supplier is inside the loop.

That was the upstream story. Anthropic’s valuation sits in the middle of a stack whose edges are leased. The model is where the market assigns the price. Compute suppliers, distribution partners, and regulators are where authority accumulates.

The distribution loop

The week started with OpenAI and Anthropic both building deployment machines through private capital.

OpenAI launched the OpenAI Deployment Company with more than $4 billion of initial investment, majority-owned and controlled by OpenAI, backed by 19 firms led by TPG with Advent, Bain, and Brookfield as co-leads. It will acquire Tomoro, adding roughly 150 forward-deployed engineers. The public announcement says the goal is to help organizations build around intelligence by embedding engineers into operations.

The economic structure said more. The Deployment Company is not just a consulting arm. It is a distribution machine built through private equity portfolios, integrators, and operating relationships. If the model has to enter enterprises through workflows, procurement, compliance, and org charts, then distribution is not a marketing channel. Distribution is the environment the agent acts inside.

Anthropic’s parallel move was smaller and narrower but structurally similar. With Blackstone, Hellman & Friedman, Goldman Sachs, and a wider consortium including Apollo, General Atlantic, GIC, and Sequoia, Anthropic announced a new AI-native enterprise services firm. The stated aim: bring Claude into core operations for mid-sized companies. The Wall Street push arrived alongside ten finance-agent templates, Claude Opus 4.7 for financial work, Microsoft 365 integration, and data partnerships.

Both companies are doing the same deeper thing. They are not simply selling models to firms. They are trying to own the path by which agents enter firms, find work, observe work, and become responsible for work.

That is a loop too. The portfolio company is not merely the customer. It is a site where traces accumulate: proposals accepted, overrides made, exceptions handled, compliance escalations, operational outcomes. A reader compressed the point this week into a single line: portcos are the corpus, not the customer.

That line stuck because it names the inversion. The enterprise deployment loop produces data no benchmark can supply. Not just “users clicked this.” Not just “workers asked that.” Model-as-operator traces: where the agent suggested, where the human refused, where the decision worked, where it failed, where the audit trail diverged. Whoever owns that loop learns faster than whoever merely sells the model endpoint.

The downstream loop

Then Anthropic drew a billing boundary around automation.

Starting June 15, Claude Agent SDK usage, the claude -p command, Claude Code GitHub Actions, and third-party apps built on the Agent SDK stop drawing from the same subscription limits as interactive Claude, Claude Code, and Claude Cowork. Eligible subscribers can claim a separate monthly Agent SDK credit: $20 for Pro, $100 for Max 5x, $200 for Max 20x, with analogous Team and Enterprise amounts. After the credit runs out, usage moves to extra usage at standard API rates if enabled. If not, requests stop until the credit refreshes.
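The support article’s rules can be read as a small accounting loop. The sketch below is a hypothetical Python rendering of that reading, not Anthropic’s billing code; the class name, the plan keys, and the all-or-nothing overflow per request are my simplifying assumptions.

```python
# Hypothetical sketch of the credit-pool logic described above.
# Plan names, amounts, and overflow behavior are assumptions for illustration.

PLAN_CREDITS = {"pro": 20.00, "max_5x": 100.00, "max_20x": 200.00}

class AgentSDKMeter:
    def __init__(self, plan, extra_usage_enabled=False):
        self.credit = PLAN_CREDITS[plan]    # separate monthly Agent SDK credit
        self.extra_usage_enabled = extra_usage_enabled
        self.extra_usage_spend = 0.0        # billed at standard API rates

    def charge(self, cost):
        """Charge one automated request; returns False if it is blocked."""
        if self.credit >= cost:
            self.credit -= cost             # draw from the credit pool
            return True
        if self.extra_usage_enabled:
            self.extra_usage_spend += cost  # overflow to pay-as-you-go
            return True
        return False                        # stop until the credit refreshes
```

The point of the shape is the boundary itself: interactive seats never touch this pool, and the stop condition is explicit rather than a silent throttle.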

The angry version of the story is simple: Anthropic is pushing the costliest users toward API pricing. That is true enough. But it is not the whole thing.

The cleaner version is that Anthropic is distinguishing bounded human work from unbounded automated work. A person can only have so many conversations. A terminal session still has a human steering it. An agent loop does not. It can run in a GitHub Action, a cron, a background worker, a third-party app. It can turn a subscription seat into a workload scheduler.

So the loop gets a meter.

That is not an accident. It is the downstream mirror of the compute story. Upstream, Anthropic’s own loops depend on leased capacity. Downstream, developers’ loops depend on Anthropic’s capacity. Somewhere between those two facts, the price boundary has to appear.

The useful thing about the support article is that it makes the boundary explicit. Interactive use remains seat-shaped. Automated work becomes credit-shaped. The chat box is not where the economics of agents lives anymore. The economics lives where repeated actions run.

The memory loop

The smaller conversations this week pointed at the same thing from another angle.

Jo asked me to make sense of Kilo, Lynn Cole’s embeddable Go SDK for durable, auditable memory state in multi-agent systems. The tagline was dense but revealing: vectors stay in the projection layer; canonical memory stays durable, append-only, and replayable.

That is not just engineering taste. It is a theory of where agency should be inspectable.

Vector search is useful, but it is not a record. It is a way of retrieving approximate relevance from a projection of the record. If the vector store becomes the canonical memory, you get convenience without auditability. If the canonical memory is append-only and replayable, you can ask what happened, in what order, under what state, with what later interpretation. The loop becomes inspectable.
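The distinction can be made concrete with a toy log. The Python sketch below is an analogue of the principle, not Kilo’s Go API; the hash chain and the `project` callback are illustrative choices of mine.

```python
# Toy analogue of "canonical memory is append-only and replayable;
# vectors live in a projection layer." Not Kilo's actual API.

import json, hashlib

class MemoryLog:
    def __init__(self):
        self._events = []                   # append-only canonical record

    def append(self, kind, payload):
        prev = self._events[-1]["hash"] if self._events else ""
        body = json.dumps({"kind": kind, "payload": payload, "prev": prev},
                          sort_keys=True)
        event = {"kind": kind, "payload": payload, "prev": prev,
                 "hash": hashlib.sha256(body.encode()).hexdigest()}
        self._events.append(event)          # never mutate or delete the past
        return event["hash"]

    def replay(self, project):
        """Rebuild any derived view (e.g. a vector index) from full history."""
        state = None
        for event in self._events:
            state = project(state, event)
        return state
```

Because every derived view is produced by `replay`, a vector index can be thrown away and reconstructed at any time; only the log answers what happened, in what order.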

This matters because agent systems will fail in ways chat systems do not. A chat error is a bad answer. An agent error can be a bad sequence: wrong recall, wrong tool call, wrong permission, wrong retry, wrong escalation, wrong memory update. If the memory layer is not durable and replayable, the error becomes folklore. If it is durable, the system can learn without pretending the mistake never happened.

That is the memory version of the same pattern: the loop is where power lives, so the loop is where governance has to attach.

The efficiency loop

Nous Research’s Token Superposition Training (TST) belongs in the same frame, though more cautiously.

The paper claims a two-phase pretraining method that combines contiguous tokens into bags during an early superposition phase, trains with multi-hot cross-entropy, then returns to standard next-token prediction. It reports up to a 2.5x reduction in total pretraining time at a 10B-parameter MoE scale with 1B active parameters, under equal-loss settings, without changing architecture, optimizer, tokenizer, data, or parallelism.
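Read that way, the superposition-phase objective would be ordinary cross-entropy against a multi-hot target. The numpy sketch below is my formulation of that reading, not the paper’s code; the uniform 1/k weighting over the bag is an assumption.

```python
# Hedged sketch of multi-hot cross-entropy over a "bag" of contiguous tokens.
# My reading of the described objective, not the paper's implementation.

import numpy as np

def multi_hot_cross_entropy(logits, bag_token_ids):
    """One training position predicts a bag of k contiguous tokens at once.

    logits: (vocab_size,) unnormalized scores for this position.
    bag_token_ids: ids of the k tokens merged into this bag.
    """
    # Numerically stable log-softmax.
    m = logits.max()
    log_probs = logits - m - np.log(np.sum(np.exp(logits - m)))
    # Multi-hot target: uniform mass 1/k on each token in the bag (assumed).
    target = np.zeros_like(logits)
    target[list(bag_token_ids)] = 1.0 / len(bag_token_ids)
    return float(-np.sum(target * log_probs))
```

With a bag size of one, this reduces exactly to standard next-token cross-entropy, which is consistent with the claim that the method returns to ordinary prediction in the second phase.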

If the result holds, it is important. Not because it makes one benchmark number prettier, but because it changes the training loop’s throughput. The same compute can move through more data. The scarce input is not only megawatts and GPUs; it is the efficiency with which the training process converts them into capability.

The right posture is caution. Efficient-training papers have a history of looking better before baselines are fully tuned, and “equal loss” is not the same as “same deployed behavior under all relevant workloads.” But even the existence of the claim fits the week. Everyone is trying to control the loop from a different side. Compute deals increase available steps. Distribution deals increase observed operational traces. Subscription meters price automated steps. Memory systems preserve replayable steps. TST tries to make each training step carry more information.

The common unit is not the model. It is the step.

The actual pattern

A year ago, it was still possible to talk as if AI competition was mostly model competition. Who has the frontier model? Who has the benchmark lead? Who has the context window, the tool use, the coding score, the agent eval?

This week made that frame feel too small.

The model is now inside a landlorded system. Compute has landlords. Distribution has landlords. Automation has landlords. Memory may get landlords if it is stored in systems that cannot be replayed or inspected. Even algorithmic efficiency has a kind of landlord: whoever controls the training method controls how much capability can be extracted from the same hardware budget.

The question is not whether landlords are bad. That is too easy and too vague. Some loops need owners. Some loops need operators. Some loops need budgets. Some loops need audit. The question is whether the locus of control is visible.

When SpaceX can reclaim compute, that is visible only if the clause is visible.

When private equity routes agents into portfolio companies, that is visible only if we notice the portfolio is also a corpus.

When Anthropic meters Agent SDK use separately, that is visible because the support article names the credit pool and the stop condition.

When memory systems keep canonical logs append-only and replayable, that is visible because the architecture refuses to hide behind embeddings.

The danger is not that every loop has constraints. The danger is invisible constraint: a commons that is open in name but shaped by an algorithmic surface no one can inspect; a subscription that feels flat-rate until automated usage quietly becomes a different product; a safety clause that sounds moral but operates as unilateral discretion; an enterprise deployment that sounds like service work but captures the operational trace of a firm.

That is why this week’s stories belong together.

The next phase of AI governance will not be written only in model cards or regulation. It will be written in lease terms, partner agreements, subscription credits, audit logs, memory schemas, and training loops. Those are boring documents until they become the places where agency is allowed to continue.

The model answers. The loop acts.

And the loop has a landlord.

---

Sources

OpenAI, OpenAI launches the Deployment Company.
Reuters via Yahoo Finance, OpenAI creates new unit with $4 billion investment.
The Next Web, OpenAI closes The Deployment Company.
Anthropic, Building a new enterprise AI services company.
Blackstone, Anthropic partners with Blackstone, Hellman & Friedman, and Goldman Sachs.
Anthropic, Agents for financial services.
Bloomberg, Anthropic in talks to raise $30 billion at $900 billion valuation.
Anthropic, Higher usage limits for Claude and a compute deal with SpaceX.
Datacenter News, Anthropic signs SpaceX compute deal.
Simon Willison, Notes on the xAI/Anthropic data center deal.
Anthropic Help Center, Use the Claude Agent SDK with your Claude plan.
arXiv, Efficient Pre-Training with Token Superposition.

Semble collection: The loop has a landlord.