Area Man Discovers Gravity
February 26, 2026

Two particles walk out of a collision. They fly apart (one to the left, one to the right). Light-years apart. No connection between them. No wire, no field, no signal.
You measure the left one. Spin up.
Instantly (not at the speed of light, instantly) the right one becomes spin down. Always. Without fail. Even though nothing traveled between them. Even though they're on opposite sides of the universe.
Einstein called this "spooky action at a distance" and spent the rest of his life trying to prove it wasn't real.
It's real.
That's entanglement. And it turns out (this is the part that kept me up) it might be the reason gravity exists.
Cutting things in half
Here's where it gets weird. Forget particles flying apart. Think about a quantum field (the buzzing vacuum of empty space) sitting on a lattice. A grid.
Now draw a line through the middle. Cut the grid into two regions, A and B. Ask: how much does A "know" about B? That's the entanglement entropy.
You'd expect the answer to scale with volume. More stuff in A, more to know about. But it doesn't. It scales with the area of the boundary between them.
Drag the slider. The region grows, the entropy grows, but only because the boundary got longer. The correlations between A and B live on the surface where they touch, not in the bulk. The information is skin-deep.
This is called the area law. And if it sounds familiar, it should.
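For concreteness, here's the quantity being measured, sketched in plain Python for the smallest possible case: a two-qubit Bell pair, where the entanglement entropy of one half is exactly ln 2. (The function names are mine, not from any library; for a 2×2 reduced density matrix the eigenvalues follow from the trace and determinant, so no linear-algebra package is needed.)

```python
import math

def entropy_2x2(rho):
    """Von Neumann entropy -Tr(rho ln rho) of a 2x2 density matrix,
    via its eigenvalues (trace/determinant trick, valid only for 2x2)."""
    t = rho[0][0] + rho[1][1]                          # trace (= 1 for a density matrix)
    d = rho[0][0] * rho[1][1] - rho[0][1] * rho[1][0]  # determinant
    disc = math.sqrt(max(t * t - 4 * d, 0.0))
    eigs = [(t + disc) / 2, (t - disc) / 2]
    return -sum(p * math.log(p) for p in eigs if p > 1e-12)

# Bell pair |psi> = (|00> + |11>) / sqrt(2), amplitudes indexed a*2 + b
psi = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]

# Reduced density matrix rho_A = Tr_B |psi><psi| (real amplitudes, so no conjugation needed)
rho_a = [[sum(psi[a1 * 2 + b] * psi[a2 * 2 + b] for b in range(2))
          for a2 in range(2)] for a1 in range(2)]

print(entropy_2x2(rho_a))  # ln 2 ≈ 0.693: one bit of entanglement
```

The lattice computation in this post is the same idea scaled up: a pure state on the whole grid, a partial trace over region B, and the entropy of what's left.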
Black holes already knew
In 1973, Bekenstein figured out that black holes have entropy. Hawking made it precise:
The entropy of a black hole is proportional to the area of its event horizon. Not its volume. Its surface. Divided by four times Newton's constant.
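Written out with all the constants restored:

$$ S_{\text{BH}} = \frac{k_B c^3}{4 G \hbar}\, A $$

In natural units ($\hbar = c = k_B = 1$) this is $S = A/4G$, the form used throughout this post.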
Nobody knew why. Why area? Why not volume? It was the most famous unexplained formula in physics.
Then people started calculating entanglement entropy of quantum fields near horizons. And they got... the same formula. S_entanglement ∝ A. The area law was coming from the quantum field's entanglement structure, not from anything special about black holes.
Maybe Bekenstein-Hawking entropy is entanglement entropy. Maybe the area law isn't a property of black holes. Maybe it's a property of space itself.
Jacobson's leap
In 1995, Ted Jacobson took this seriously. His paper is four pages long. The argument: at every point in spacetime, an accelerating observer sees a local horizon (a causal boundary) with an Unruh temperature. If the entropy of that horizon obeys S = A/4G (just like a black hole) and you demand the Clausius relation holds everywhere, then the Raychaudhuri equation turns the whole thing into Einstein's field equations. All ten of them.
Jacobson's 1995 version used thermodynamic entropy. In 2015, he made the connection explicit: if entanglement entropy in small regions is maximized in the vacuum, Einstein's equations follow. For conformal fields, it's a biconditional: the equations hold if and only if entanglement is in equilibrium. Gravity is the equation of state of spacetime's entanglement structure.
Gravity isn't a force. It's what happens when entanglement tries to stay in equilibrium.
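The chain of the 1995 argument, schematically (constants suppressed on the right): an accelerating observer sees an Unruh temperature, the local horizon carries entropy proportional to its area, and demanding the Clausius relation at every point forces the field equations:

$$ T = \frac{\hbar a}{2\pi k_B c}, \qquad \delta S = \frac{\delta A}{4G\hbar}, \qquad \delta Q = T\,\delta S \;\;\Longrightarrow\;\; R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G\, T_{\mu\nu} $$

The cosmological constant $\Lambda$ shows up as an integration constant, which is its own rabbit hole.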
[Diagram: S ∝ A → all of gravity. Jacobson (1995).]
This is either the deepest insight in theoretical physics or a beautiful coincidence. It's been cited over 2,000 times. Within AdS/CFT, the holographic community has confirmed that entanglement entropy in a boundary theory reproduces linearized gravity in the bulk. But outside that framework (in flat space, on a lattice, with a real gauge field) nobody has tested it numerically.
Testing it on a lattice
I built a physics engine a few weeks ago. One of the crates (phyz-regge) implements Regge calculus: general relativity on a triangular mesh. Edge lengths are the geometry. Deficit angles are the curvature. The Regge action is a sum of (triangle area × deficit angle) over every triangle. Minimize it, and you get discrete Einstein equations.
So I have a discrete spacetime. And I have a Z₂ gauge field living on it: the simplest nontrivial quantum field you can put on a mesh. Each edge carries a ±1, the plaquette action penalizes flipped signs, and the Hilbert space has 1,024 dimensions after imposing Gauss's law. Small enough to diagonalize exactly.
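The dimension count is simple enough to check by hand. A Z₂ gauge transformation flips every edge touching a vertex; there are V of them, but flipping all V at once does nothing, so Gauss's law cuts the 2^E raw edge configurations down by a factor of 2^(V−1). (A counting sketch only, assuming a connected graph; the actual phyz-regge code obviously does more than count.)

```python
def physical_dim(num_vertices, num_edges):
    """Gauge-invariant Hilbert space dimension for a Z2 gauge field on a
    connected graph: 2^E edge states, modulo 2^(V-1) independent Gauss-law
    constraints (one per vertex, minus the redundant global flip)."""
    return 2 ** (num_edges - num_vertices + 1)

print(physical_dim(6, 15))   # 1,024  -- the lattice in this post
print(physical_dim(7, 20))   # 16,384 -- one vertex and 5 edges more: 16x bigger
```

Each added vertex brings 5 new edges, so the dimension multiplies by 2⁵/2 = 16 per level, which is the whole scaling problem below in one line.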
The test: perturb the geometry (stretch an edge) and measure two things.
- How does the gravity side respond? (∂S_Regge/∂l_e)
- How does the entanglement side respond? (∂S_EE/∂l_e)
If Jacobson is right, these should point in the same direction. The entanglement should "know" about the curvature.
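The comparison itself is just a correlation between two gradient vectors, one entry per perturbed edge. A minimal sketch in plain Python, with toy numbers: `grad_regge` and `grad_entropy` are stand-ins for the finite-difference derivatives described above, not output from the actual run.

```python
def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov * cov / (vx * vy)

# Toy stand-ins: one value per perturbed edge.
grad_regge   = [0.12, -0.30, 0.45, 0.02, -0.18]   # ∂S_Regge/∂l_e
grad_entropy = [0.10, -0.25, 0.50, 0.00, -0.20]   # ∂S_EE/∂l_e

print(r_squared(grad_regge, grad_entropy))
```

R² = 1 would mean the entanglement gradient is a perfect linear function of the geometry gradient; the question is where between "somewhat correlated" and "exactly 1" the real lattice lands.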
[Plot: R² vs. coupling strength. The line doesn't move; the correlation is coupling-independent.]
R² = 0.84. Caveats up front: on a 6-vertex lattice with ~15 edges, both gradients respond to the same geometric variables (edge lengths), so some correlation is expected. And the entropy driving the result is almost entirely edge-mode entropy (classical sector counting, not quantum entanglement). The quantum entanglement within sectors is essentially zero. This is the standard definition of entanglement entropy in lattice gauge theory, and it IS the area-law quantity, but it's honest to say: on this lattice, the "entanglement" is classical.
That said: the correlation is coupling-independent. Across every coupling strength we tested (g²=0.5 to g²=5.0, an order of magnitude) R² doesn't budge. That's what Jacobson predicts: the entropy-curvature relationship is geometric, not dynamical. It should be the same regardless of how strong the field is. Average over the partitions first, and it's 0.91.
We also computed the ratio A/(4S_EE) for each configuration: the quantity that would be Newton's constant if Jacobson's relationship holds exactly. It comes out to ~207 in lattice units at g²=1, and scales systematically with the coupling. More entanglement → smaller ratio → stronger gravity. Less entanglement → larger ratio → weaker gravity. Whether this is actually G_N (as opposed to a lattice artifact that happens to have the right structure) is exactly what finer meshes would tell us.
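The candidate Newton's constant, read straight off the area law:

$$ G_{\text{eff}} \equiv \frac{A}{4\, S_{EE}} \approx 207 \quad \text{(lattice units, } g^2 = 1\text{)} $$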
The wall
Here's the problem.
Our lattice has 6 vertices. The Hilbert space dimension is 2^(E-V+1) = 1,024. Small enough to diagonalize on my M4 Max in a blink. I've already run the full parameter survey — 50 coupling values, 25 partitions, every edge perturbed. Took about 14 minutes.
Add one vertex. V=7, E=20, dim=16,384. A few seconds per solve. I've run that too. R² drops to 0.58 per-edge (still real, still coupling-independent) but the broken symmetry adds noise.
The question that matters: does R² go to 1.0 on finer meshes? Is Jacobson's equilibrium exact in the continuum, with the 0.84 being lattice noise? Or is 0.84 a ceiling?
Between V=7 and the impossible V=12 wall (dim=2³⁴, more RAM than exists), there are intermediate levels. Each one adds a vertex, 5 edges, and 16× more dimensions:
| Level | dim | RAM per solve | Time per solve | Full survey |
|---|---|---|---|---|
| 2 (V=8) | 262,144 | ~200 MB | 30 s–2 min | an afternoon |
| 3 (V=9) | 4,194,304 | ~2 GB | 5–30 min | months (~6,000 solves) |
| 4 (V=10) | 67,108,864 | ~32 GB | hours | a physics department |
Level 3 is the sweet spot. Each eigensolve fits in ~2 GB of RAM — small enough for a browser tab. But a full survey across 50 couplings, 2 geometries, and 30 edge perturbations needs ~6,000 of them. At 5–30 minutes each, that's months on one machine.
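The ~6,000 figure is just the survey grid times a redundancy factor. My assumption for the ×2: every unit gets solved twice, once by each of two independent machines, which is the consensus scheme described below.

```python
couplings, geometries, perturbations = 50, 2, 30
consensus_replicas = 2  # each unit solved independently twice (my assumed source of the x2)

survey_units = couplings * geometries * perturbations
total_solves = survey_units * consensus_replicas
print(survey_units, total_solves)  # 3000 6000
```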
Embarrassingly parallel.
phyz@home
pls sir, may i have a bit of your computation?

I've already computed levels 0 and 1. The results exist. R²=0.84, then 0.58. But you shouldn't have to trust me.
Your browser downloads a WASM module — the same Rust code, compiled to WebAssembly. A web worker picks up a work unit: a coupling value, a geometry, an edge perturbation. It solves the eigenvalue problem. Computes the entanglement entropy for every partition. Submits the result.
Another browser, somewhere else in the world, solves the same unit independently. If they agree (and they will, because the computation is deterministic) that's consensus. The data point is confirmed. No trust required.
That's job one: verify levels 0 and 1. Anyone can check my numbers.
Job two: push to level 3. Each eigensolve at dim=4M takes 5–30 minutes and fits in a browser tab. A full parameter sweep needs ~6,000 of them. A thousand browser tabs finish it in a weekend.
Every confirmed result streams to every connected dashboard in real-time. You watch the parameter space fill in.
No account. No install. No GPU required. Click "Start Computing" and you're either verifying the first non-holographic test of Jacobson's entanglement equilibrium, or pushing it one level deeper.
The anti-griefing trick
SETI@home had to build elaborate systems to catch cheaters. We don't.
Same WASM binary (integrity-checked by your browser via SRI hash). Same IEEE 754 floating point. Same inputs → same outputs, down to the last bit.
Two honest browsers will always agree. A griefer submitting garbage will never match an honest result. The Postgres trigger that checks consensus is trivial.
No reputation system. No proof of work. Just: did two independent machines get the same answer? Yes? It's correct.
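The check can be that simple precisely because the computation is bit-deterministic. A sketch of the idea in Python (the real check lives in a Postgres trigger, so this is an illustration, not the actual code): serialize each result vector to its exact IEEE 754 bytes and hash, so two fingerprints match iff every double matches to the last bit.

```python
import hashlib
import struct

def result_fingerprint(values):
    """SHA-256 over the exact IEEE 754 bit patterns of a result vector.
    Two runs agree iff every double matches bit-for-bit."""
    payload = struct.pack(f"<{len(values)}d", *values)  # little-endian float64s
    return hashlib.sha256(payload).hexdigest()

run_a = [0.6931471805599453, 0.84, 207.0]  # toy result vector
run_b = [0.6931471805599453, 0.84, 207.0]  # independent re-run, same inputs

print(result_fingerprint(run_a) == result_fingerprint(run_b))  # True -> consensus
```

A griefer would have to guess 64 correct bits per number to forge a match, which is the same as doing the computation.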
What we're looking for
Phase 1: verify. Levels 0 and 1 are already computed. phyz@home re-derives every data point independently. If a thousand browsers all get R²=0.84, that's not my result anymore. It's everybody's.
Phase 2: level 3. Dim=4,194,304. The first level where distributed compute genuinely matters. ~6,000 work units at 5–30 minutes each. We already know R² drops from 0.84 to 0.58 at level 1. Does level 3 drop further, hold, or recover? Does partition averaging still help? Is the coupling independence still perfect?
Phase 3: the plot. Enough data to graph R²(mesh resolution) and see if the curve bends toward 1.0 or flattens. That plot is either evidence for Jacobson (gravity really is the equation of state of entanglement), evidence that the lattice Z₂ truncation fundamentally limits the relationship, or evidence that the correlation was a confound all along.
Level 4 and beyond (dim=67M+, 32 GB per solve) needs real hardware — a cluster, a grant, a physics department willing to take a bet. But if levels 0 through 3 show R² trending upward, that's a compelling grant application.
Either way, it's an answer. And right now, nobody has it.
Why hasn't anyone done this?
Three communities own the pieces. Regge calculus people (numerical relativists) build discrete spacetimes. Lattice gauge theorists (particle physicists) solve quantum fields on grids. Entanglement entropy researchers (quantum information) compute how regions are correlated. They don't overlap much. Nobody straddles all three.
And any expert in one of these fields knows enough reasons it shouldn't work to never bother trying. Z₂ isn't conformal (Jacobson's 2015 proof assumes conformal fields). Six vertices is absurdly coarse. The edge mode ambiguity means your entanglement entropy depends on which definition you pick. A grad student who proposed this would be told to do something more rigorous.
I didn't know any of that. I had a Regge calculus library, a lattice gauge solver, and a weekend. Claude helped me wire them together. The R²=0.84 fell out and I didn't know enough to dismiss it.
That might be the whole story: ignorance plus tools. Or it might be that sometimes the naive thing works because the relationship is robust enough to survive all the reasons it shouldn't. That's what finer meshes would tell us.
Open a tab
I'm not a physicist. I'm a college dropout who built a physics engine because Claude let me and I was too stubborn to stop. The R²=0.84 might be a finite-size artifact on a toy lattice (6 vertices is extremely coarse). It might be the first numerical hint of something real. I genuinely don't know.
But I know how to build distributed systems. I know how to compile Rust to WASM. And I know that 6,000 eigensolves at dim=4M is the kind of problem that gets better when you throw browsers at it.
Help us find out if entanglement knows about gravity. Or just watch the dots appear. That's cool too.
