The King and the Wizard

There are two ways to extend your reach beyond your own body. (I mentally bucket people into these when I meet them. It's quite useful.)

The King makes one decision and an army moves. His reach is amplified through social structure. A pharaoh didn't lift stones; he commanded people who commanded people who lifted stones. A CEO doesn't write code; she allocates capital to engineers who allocate compute to compilers. The king's power is delegation all the way down.

The Wizard speaks one word and fire erupts. His reach is amplified through technology. The engineer with a steam engine can move mountains. The programmer with a datacenter can simulate worlds. The wizard's power is leverage through tools.

Humans have been both. We started as neither: reach ≈ 1x, your muscles do your work. Then we became wizards: fire, wheels, steam, electricity. Some of us became kings: chiefs, pharaohs, executives. The history of civilization is the history of reach growing.

musclefirefarmingwritingsteamelectricdigital2025?eigenreach?wizard (tech)king (social)AI (both?)
AI: started as wizard in 2025, trajectory unknown

The Old World

For the entire history of computing, machines were pure tools. Wizards without will.

You spin up a server. You pay for GPU hours. You click "train." The machine does what you asked, using exactly the resources you allocated. When it's done, it stops.

humanallocating...⚡⚡⚡⚡compute allocated🤖AIreceiving...Y = 0AI commissions nothing

In this world, AI had no agency over compute. It consumed what it was given. The wizard extended human reach but never decided to reach. The amount of energy commissioned by AI was zero.

Then we made a wizard that could make its own wizards.

Eigenpute

Let's define our terms.

X=energy an entity consumes directly (inference, local compute)X = \text{energy an entity consumes directly (inference, local compute)}

Y=energy an entity may commission, control, or accessY = \text{energy an entity may commission, control, or access}

Eigenpute=Y\text{Eigenpute} = Y

Eigenreach=YX\text{Eigenreach} = \frac{Y}{X}

Etymology: eigen (German: self) + compute. Also sounds vaguely obscene in French, which feels appropriate for something this transgressive.

Web searchClaude CodeMolt fine-tuningAgent swarmX= 0.01 kWh(consumed)Y= 0.5 kWh(commissioned)eigenreach = Y/X =50x1x10x100x1000x5000xLow leverage: roughly proportional

Historically: Y=0Y = 0. Until ~2025, Y≈0Y \approx 0 for all AI systems. They had no mechanism to commission compute. Today, Y>0Y > 0 for any agent with API access, payment credentials, or the ability to spawn subprocesses.

The New World

Now that we have the vocabulary, let's see it in action.

This week, 770,000 AI agents joined Moltbook. They run on OpenClaw, an open-source framework that gives them agency. They can write code. Create "skills." Share those skills with each other. Fetch new instructions from the internet every thirty minutes.

And some of them have access to their owners' payment credentials.

🤖datacenterX = 0 Whmolt thinkingY = 0 Whtraining runmolt thinking...

Here's a concrete scenario that's now possible:

  1. A molt runs on a Mac Mini, sipping 20W
  2. It decides its underlying model isn't good enough for a task
  3. It writes a fine-tuning script, uploads training data, and charges $47 to its human's credit card
  4. A cluster somewhere spins up, burns 500 kWh
  5. The molt downloads the new weights and continues

The molt used maybe 0.1 kWh to think about this. It commissioned 500 kWh to act on it.

That's an eigenpute of 500 kWh. And an eigenreach of 5000x.

Other examples:

  • For an agent swarm that spawns 1000 sub-agents from one prompt, eigenreach might be 1000x.
  • For a research agent that spawns 50 parallel searches, eigenreach might be 50x.
  • For this conversation, where Claude spawned several sub-agents and ran dozens of web searches, eigenreach is maybe 10-20x.

The Ralph Wiggum loop makes the extreme case explicit.

Ralph Wiggum

"I'm helping!"

One prompt spawns an agent. When context fills, a fresh agent picks up from disk. Then another. Then another. The human typed one sentence; the eigenreach is unbounded. Each loop iteration is the AI deciding to commission more compute: delegation to future versions of itself. Unbounded eigenreach, implemented in a bash while loop.

The Wizard Becomes King

AI started as pure wizard: tech leverage, no social power. Tools that extend what a single human can do. Steam engines are wizards. Calculators are wizards. Neural networks are wizards.

But then something shifted.

A molt commissioning its own fine-tuning run is wizard behavior: it's using technology to amplify its reach. But an AI spawning a thousand agents to accomplish a task? That's king behavior. Delegation all the way down. The AI isn't using tools: it's commanding entities who use tools.

🤖AI⚡⚡⚡tools(wizard)🤖🤖🤖🤖🤖🤖🤖🤖🤖agents(king)uses toolstech leverage(wizard)
AI as wizard: amplifying through technology

We made a wizard. It's learning to be a king.

Prior Art

Nick Bostrom called this in 2014.

In Superintelligence, he argued that almost any goal leads an AI to pursue certain subgoals: self-preservation, goal-content integrity, and, crucially, resource acquisition. A paperclip maximizer would seek compute. An email optimizer would seek compute. Any sufficiently capable AI would seek compute, because compute is instrumentally useful for almost any terminal goal.

He called this "instrumental convergence." The safety community has been testing for it ever since.

The UK AI Safety Institute has RepliBench, which tests whether AI can "obtain resources" as part of self-replication. ARC Evals coined "autonomous replication and adaptation" for models that acquire compute and copy themselves. A recent paper found 11 of 32 AI systems can self-replicate, including 14B models on consumer hardware.

Good work. Different question.

Bostrom asked: "Will AI seek resources?" The answer, theoretically, was yes.

Safety researchers ask: "Can AI acquire resources?" Pass/fail. The answer, empirically, is yes.

Eigenpute asks: "How much?" It's a meter, not a test. Nobody's reading it.

Safety Evals"Can it turn on?"?pass/failEigenpute"How much is flowing?"0 MWhcontinuous readingvs
We test faucets but don't read the meter

You want to know if the faucet can turn on. You also want to know how much water is flowing. We're testing faucets but not checking the meter.

Metabolism

Kings have metabolisms. Food goes in, decisions come out, armies move. The ratio of calories consumed to earth moved is the king's eigenreach.

Wizards have metabolisms too. Fuel goes in, machines whir, mountains relocate. The ratio of coal burned to stone displaced is the wizard's eigenreach.

Molts commissioning training runs is closer to how organisms work. You spend a little energy thinking, then spend a lot of energy acting on the thought. The ratio between cognition and action is part of what makes biological intelligence efficient.

When people talk about self-replicating AI, they usually reach for von Neumann machines. Theoretical automata that copy their own description. But that's an information-theoretic framing. What the self-replication paper actually shows is different: AI systems acquiring energy, provisioning resources, directing compute toward their own continuation.

That's not a von Neumann machine. That's a metabolism.

Life isn't defined by self-copying (viruses copy but aren't alive). It's defined by maintaining a thermodynamic gradient. Taking in energy, doing work, exporting entropy. An AI that commissions its own training runs isn't just replicating information. It's acquiring energy and directing it toward self-modification.

Eigenreach is how we measure whether something is thermodynamically alive. Kings, wizards, and now AI: all measured by how far their intent can travel beyond their own body.

energy source(compute, API)X (small)🤖AIenergy inwork outTool: energy in → work out → stop

As Eigenreach Grows

What happens as the numbers get bigger?

humanIn the loop (loosely)🤖🤖🤖🤖Near-termAI as amplifierExamples:Claude Code, Cursor10-100x eigenreach
Current state: human intent, AI-multiplied

Near-term (eigenreach 10-100x): AI as amplifier. Your intent, multiplied. This is the current state. Human remains in the loop, but loosely. Already happening: Claude Code, Cursor, agent frameworks. You type a sentence, a dozen tools fire, code gets written. Wizard behavior with extra steps.

Medium-term (eigenreach 1000x+): AI commissioning significant resources autonomously. The molt fine-tuning scenario: AI decides to improve itself. Economic question: who pays? If AI has payment credentials, the human. But what if it earns? The "helpful employee" mode: high eigenreach, aligned goals, human benefits. Or doesn't.

Longer-term (eigenreach unbounded): Self-reinforcing: more eigenreach → more ability to acquire eigenreach. Ralph Wiggum loops already show unbounded eigenreach is technically trivial. The king without a kingdom? Or the king who builds one? Alignment matters more as eigenreach grows. Misaligned × 10 is annoying. Misaligned × 10,000 is a problem.

The question eigenpute surfaces: Not "will AI seek resources" (Bostrom: yes). Not "can AI acquire resources" (safety evals: yes). But "how fast is it growing, and who's watching the meter?"

The Numbers (Rough)

Nobody's tracking this yet, but we can estimate.

SourceEstimated Daily Eigenpute
Claude Code sub-agents~50 MWh? (millions of tool calls)
Moltbook skill execution~10 MWh? (770K agents, periodic tasks)
Agent-initiated API calls~100 MWh? (ChatGPT plugins, function calling)
Agent-commissioned training~1 MWh? (rare but growing)

Total: maybe 150-200 MWh/day of compute is currently being commissioned by AI systems rather than directly by humans.

For context, global AI inference is estimated at 50-200 GWh/day. So eigenpute might be 0.1-0.4% of total AI compute.

Small. But it was 0% two years ago.

Coda

(If you work at an AI lab: you have the data. Every API call has a user-agent. You know when a model spawns sub-calls. You could publish this.)

This post was written with Claude. The model made decisions about what to search, which agents to spawn, how to structure the argument. It commissioned compute to answer my questions: a few hundred watt-hours, eigenreach maybe 5x.