OneMinusDiv Is All You Need
April 13, 2026

That's a half-adder built from five NAND gates: two inputs, a sum and a carry bit. It's the thing that lets a computer add 1 + 1. Chain a row of them together (well, full adders, which are two half-adders plus an OR) and you can add any two N-bit numbers, which is approximately what your CPU spends its entire life doing.
Real chips don't actually build everything out of NAND (standard-cell libraries ship NOR, AOI, MUXes, flip-flops, and a few hundred other cells), but they could. That's the trick. You don't need AND and OR and NOT, you only need NAND, and from NAND you can build every boolean function that exists. That's a Sheffer stroke, and the fact that such a thing exists is why boolean logic feels tidy in a way that most of math doesn't.
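The universality claim is easy to check in code. Here's a minimal sketch (plain Rust, nothing from this post's codebase) that derives NOT, AND, and OR from NAND alone and verifies all four input combinations:

```rust
// NAND is the only primitive; everything else is derived from it.
fn nand(a: bool, b: bool) -> bool {
    !(a && b)
}

// NOT(a) = NAND(a, a)
fn not(a: bool) -> bool {
    nand(a, a)
}

// AND(a, b) = NOT(NAND(a, b))
fn and(a: bool, b: bool) -> bool {
    nand(nand(a, b), nand(a, b))
}

// OR(a, b) = NAND(NOT(a), NOT(b)), by De Morgan
fn or(a: bool, b: bool) -> bool {
    nand(nand(a, a), nand(b, b))
}

fn main() {
    // Exhaustively check all four boolean inputs: this IS the proof.
    for a in [false, true] {
        for b in [false, true] {
            assert_eq!(not(a), !a);
            assert_eq!(and(a, b), a && b);
            assert_eq!(or(a, b), a || b);
        }
    }
    println!("NAND derives NOT, AND, OR: all truth tables match");
}
```

Once you have NOT, AND, and OR, every boolean function follows, which is exactly the finiteness the continuous case lacks.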
Now ask the same question, but for math!
Is there a single binary operator f(x, y) from which every elementary function (every polynomial, every exponential, every logarithm, every sine and cosine) can be built by composition?
For a long time the answer was "probably not, that would be too tidy." Then in March, Odrzywołek published a proof that exp(x) − ln(y) is exactly such an operator. He named it EML. With nothing but the constant 1, the variable x, and repeated applications of EML, you can reach e, π, i, sin, cos, sqrt, the whole zoo. One operator. All of elementary math. Wild!
I wanted to play with it, and it turns out, I have a whole math library sitting right here! So I started looking for better operators than EML. Smaller ones, cleaner ones, ones that don't need a distinguished 1 leaf to get going. The eventual winner surprised me: a five-node rational map called OneMinusDiv, which is just 1 − x/y. It has no business being as powerful as it is.
Try one
Here's EML running in your browser. The green dot is f(x,y). Drag x or y around the complex plane and watch it move, then click through the other operators across the top to see how they transform the same two inputs in wildly different ways:
That's the entire tang-sheffer evaluator compiled to WebAssembly. 60 KB of wasm, the same Rust I run on my desktop, talking to a React <svg>. No servers, no API calls, you're running the Rust search engine locally right now!
I'll explain what those other operators are in a minute, but first the question that makes any of this interesting.
Why one operator is hard
Universality for boolean logic is easy because there are only four boolean inputs and two outputs. You can prove NAND is universal with a truth table. The continuous case is much weirder. The space of "functions of two complex numbers" is uncountably enormous, and the space of "elementary functions" is a fussy, branch-cut-riddled subset that nobody has a clean axiomatic grip on.
Odrzywołek's proof goes through a construction, not a counting argument. He shows by hand how to compose EML with itself, with 1, and with x, to reach each of the standard targets in turn:
| # | Expression | Evaluates | Result |
|---|---|---|---|
| 1 | EML(x, 1) | e^x − ln 1 | e^x |
| 2 | EML(1, 1) | e − ln 1 | e |
| 3 | EML(1, x) | e − ln x | e − ln x |
| 4 | EML(1, ②) | e − ln e | e − 1 |
| 5 | EML(②, 1) | e^e − ln 1 | e^e |
| 6 | EML(1, ⑤) | e − ln(e^e) | 0 |
| 7 | EML(⑥, 1) | e^0 − ln 1 | 1 |
| 8 | EML(1, ①) | e − ln(e^x) | e − x |
| 9 | EML(⑥, ①) | e^0 − ln(e^x) | 1 − x |
Each row is one application of EML, with leaves drawn from the starting pool {1, x} or from any earlier row's result (the circled numbers). That's the bootstrap: a single operator, a tiny seed pool, and an enumeration loop. Run it long enough and the reachable value set grows — row 6 is the first place 0 shows up, row 7 derives the constant 1 as an output rather than a seed, and the real transcendentals come in from there.
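The table is easy to replay in plain `f64`; every row here stays real, so no complex arithmetic is needed. A minimal sketch (just the definition of EML, not the tang-sheffer evaluator) checking most of the rows:

```rust
// EML(x, y) = exp(x) - ln(y), evaluated in plain f64.
fn eml(x: f64, y: f64) -> f64 {
    x.exp() - y.ln()
}

fn main() {
    let x = 0.577_f64; // arbitrary seed; these rows are identities in x
    let e = std::f64::consts::E;

    let r1 = eml(x, 1.0);   // row 1: e^x
    let r2 = eml(1.0, 1.0); // row 2: e
    let r4 = eml(1.0, r2);  // row 4: e - ln e = e - 1
    let r5 = eml(r2, 1.0);  // row 5: e^e
    let r6 = eml(1.0, r5);  // row 6: e - ln(e^e) = 0
    let r7 = eml(r6, 1.0);  // row 7: e^0 - ln 1 = 1
    let r9 = eml(r6, r1);   // row 9: e^0 - ln(e^x) = 1 - x

    assert!((r1 - x.exp()).abs() < 1e-12);
    assert!((r2 - e).abs() < 1e-12);
    assert!((r4 - (e - 1.0)).abs() < 1e-12);
    assert!((r5 - e.powf(e)).abs() < 1e-12);
    assert!(r6.abs() < 1e-12);
    assert!((r7 - 1.0).abs() < 1e-12);
    assert!((r9 - (1.0 - x)).abs() < 1e-12);
}
```

Each `let` is one application of the operator with leaves drawn from {1, x} or earlier bindings, exactly mirroring the circled-number references in the table.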
So I built that loop, and you can play with it.
Watch the bootstrap
This is the bootstrap, live. Pick an operator, pick a seed, hit play. Iter 0 is your single starting x (a transcendental seed, by default). Iter 1 is everything you can reach by applying f to (x, x). Iter 2 reuses everything from iter 0 and 1, and so on.
Coloured dots are values discovered at that iteration. The labelled ones are values that landed exactly on a famous constant like 0, 1, −1, i, e, π, etc:
A few things to notice as you click through the operators:
- EML, the original, is busy. Lots of values, lots of overflow, double-exponential growth, and by iter 3 most of the candidates are off-screen while the `^` chains explode into the millions. That's the price of having `exp` in your operator: every iteration applies it again.
- PowSkew = `x^y − y^x`, in contrast, lands directly on 0 from any seed (because `f(x, x) = x^x − x^x = 0`)! Then `f(x, 0) = x^0 − 0^x = 1`. Two applications, no special leaves, and you have `{0, 1}` for free. It's the cleanest constant generator in the zoo.
- OneMinusDiv = `1 − x/y` is the surprise. A boring rational map, no transcendentals at all, polynomial growth, and it competes with PowSkew on coverage. I did not expect this when I started!
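PowSkew's two-step constant generation is worth seeing in actual arithmetic. A quick sketch restricted to positive real seeds (the playground works over ℂ, but the real case already shows the cancellation):

```rust
// PowSkew(x, y) = x^y - y^x, here over positive reals in f64.
fn pow_skew(x: f64, y: f64) -> f64 {
    x.powf(y) - y.powf(x)
}

fn main() {
    let x = 0.577_f64; // any positive transcendental seed

    // Step 1: the diagonal cancels exactly, no special leaves needed.
    let zero = pow_skew(x, x); // x^x - x^x = 0
    // Step 2: x^0 = 1 and 0^x = 0 are both exact in f64 for x > 0.
    let one = pow_skew(x, zero); // x^0 - 0^x = 1

    // Both results are bit-exact, not just approximate.
    assert_eq!(zero, 0.0);
    assert_eq!(one, 1.0);
}
```

Note the exact equality asserts: both cancellations are identities in floating point, not limits, which is why PowSkew's `{0, 1}` survives the artifact detector later in the post.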
The constant-free open problem
Here's the question Odrzywołek tags as open in his Table 2:
Does there exist a binary operator that needs no distinguished constant, one that generates `{0, 1, −1, e, π, i, …}` starting only from a single variable `x`?
EML doesn't qualify, because EML's bootstrap from {x} alone wanders forever without hitting any of the canonical constants; it needs the 1 leaf to anchor anything. The question is whether there's a smarter operator that doesn't.
Try setting the bootstrap above to PowSkew with seed x = 0.577 and watch iters 1, 2, 3 produce 0, 1, −1, 2, 1/2, and eventually i. That cascade is a large part of the answer: PowSkew is constant-free. So is OneMinusDiv. So is SubPow.
And they were all discovered by brute-force enumeration, not by being clever!
How you find new operators
You write down an alphabet:
atoms: x y 0 1 −1 e
unary: neg inv sqr sqrt exp ln sin cos sinh tanh
binary: + − × ÷ ^
You enumerate every binary expression tree of size ≤ N over that alphabet. At size 5, that's a few thousand trees after deduplication. At size 7, about a million. Each tree is a candidate f(x, y). You wrap it as an Operator, run the bootstrap from {x} only, count how many of the 31 standard targets it reaches, and rank.
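The enumeration loop itself is nothing exotic. Here's a toy version, binary ops only, no unary layer, no value-based dedup, so its counts are raw tree counts over a smaller alphabet, not the deduplicated figures above:

```rust
// A minimal enumerator for binary expression trees over a tiny alphabet.
// The real search adds unary ops and dedups trees by evaluated value.
#[derive(Clone)]
enum Expr {
    Atom(&'static str),
    Bin(&'static str, Box<Expr>, Box<Expr>),
}

// All trees with exactly `size` nodes: an atom is 1 node, a binary op
// is 1 + size(left) + size(right).
fn enumerate(size: usize, atoms: &[&'static str], ops: &[&'static str]) -> Vec<Expr> {
    if size == 1 {
        return atoms.iter().map(|a| Expr::Atom(a)).collect();
    }
    let mut out = Vec::new();
    for left_size in 1..size - 1 {
        let right_size = size - 1 - left_size;
        for l in enumerate(left_size, atoms, ops) {
            for r in enumerate(right_size, atoms, ops) {
                for &op in ops {
                    out.push(Expr::Bin(op, Box::new(l.clone()), Box::new(r.clone())));
                }
            }
        }
    }
    out
}

fn main() {
    let atoms = ["x", "y", "1"];
    let ops = ["+", "-", "*", "/", "^"];
    // size 3 = one op over two atoms: 5 ops x 3 atoms x 3 atoms = 45
    assert_eq!(enumerate(3, &atoms, &ops).len(), 45);
    // size 5 = op(atom, size-3 tree) or op(size-3 tree, atom):
    // 2 x 5 x 3 x 45 = 1350 raw trees before any dedup
    assert_eq!(enumerate(5, &atoms, &ops).len(), 1350);
}
```

Every tree this emits becomes a candidate f(x, y); the expensive part is not generating them but bootstrapping and scoring each one.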
Here are the four that survived, the same operators the playground above shows, but now drawn as trees, so you can see how small they actually are:
Two things jump out:
- They're tiny! Five nodes. EML is the same size, PowSkew is seven. The space of "small operators with non-trivial reach" is much richer than I expected.
- They split into two families. PowSkew and SubPow use `^`, while OneMinusDiv uses `/` and `−`. I'll dig into that split on the scoreboard in a minute, because it's the whole point of the search.
That last sentence is where the story almost goes off the rails.
Scaling, and why the GPU was a mistake
The size-5 sweep takes seconds. The size-6 sweep takes a minute. The size-7 sweep takes 74 seconds and uses 12 GB of RAM. The size-8 sweep (925 million candidate trees, 17 million unique) needs 26 GB and four minutes.
I wanted size 9. Size 9 with Arc<OpExpr> would need ~200 GB of RAM, so I rewrote the storage as a flat node arena (16 bytes per tree node instead of 240), and the size-9 run finished in 40 minutes at 32 GB peak. Much better!
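The arena idea, sketched. This is a hypothetical layout, the real tang-sheffer encoding differs in detail, but it shows where the 16 bytes come from: children are `u32` indices into one flat `Vec` instead of reference-counted pointers, so there's no per-node allocation and no pointer overhead.

```rust
// A flat node arena: each tree node is 16 bytes, children are u32
// indices into the same Vec. (Hypothetical layout for illustration.)
#[derive(Clone, Copy)]
struct Node {
    op: u32,    // opcode: 0 = atom x, 1 = atom y, 3 = div, ...
    left: u32,  // index of left child in the arena (unused for atoms)
    right: u32, // index of right child
    _pad: u32,  // padding to a round 16 bytes
}

struct Arena {
    nodes: Vec<Node>,
}

impl Arena {
    fn push(&mut self, op: u32, left: u32, right: u32) -> u32 {
        self.nodes.push(Node { op, left, right, _pad: 0 });
        (self.nodes.len() - 1) as u32
    }
}

fn main() {
    // The whole point: 16 bytes per node, not ~240 for an Arc'd enum.
    assert_eq!(std::mem::size_of::<Node>(), 16);

    // Build x/y as three arena nodes: x, y, then div(x, y).
    let mut a = Arena { nodes: Vec::new() };
    let x = a.push(0, 0, 0);
    let y = a.push(1, 0, 0);
    let div = a.push(3, x, y);
    assert_eq!(div, 2);
    assert_eq!(a.nodes.len(), 3);
}
```

Shared subtrees are free too: two parents can point at the same index, which is what makes deduplication cheap at size 9.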
Then I tried to push it further with the GPU. I wrote a WGSL compute kernel that packs each shape as postfix bytecode, dispatches one thread per (shape, assignment) pair, evaluates with complex arithmetic built from pairs of f32, and runs a depth-1 bootstrap on-device. RTX 3090, ~35 TFLOPS. Surely this was the path to size 10 and beyond!
Reader: the GPU was a red herring.
The kernel itself was fine. It ran fast, hits came back quickly, and the whole pipeline worked on the first try. The problem wasn't speed or correctness, it was the content of the hits.
The (1+ε)^(1/ε) artifact
The size-9 run produced a top-ranked candidate I'd never seen, tanh(exp(e))^x − y/x, claiming 19/19 target hits, including e, −e, 1/e, e², exp(x), and exp(exp(x)). Every transcendental constant the search had been failing to find. I was thrilled for about ninety seconds!
Then I traced through the cascade by hand. The base of one of the powers was tanh(exp(e)), which in f64 evaluates to 1.0 − 1.37 × 10⁻¹³. Not exactly 1, but extremely close. The bootstrap took that base, raised it to a large exponent, and eventually triggered this:
Jacob Bernoulli, 1687. Looking suspicious.

(1 + ε)^(1/ε) → e is one of the oldest limits in calculus. It was discovered by Jacob Bernoulli in 1683, not while thinking about analysis or number theory, but while thinking about compound interest. He asked: if a bank gives you 100% annual interest compounded twice a year, you get (1 + 1/2)² = 2.25. Compounded four times a year, (1 + 1/4)⁴ ≈ 2.44. Compounded a thousand times, (1 + 1/1000)¹⁰⁰⁰ ≈ 2.717. What happens at the limit, when the compounding is continuous? Bernoulli showed the answer was bounded and lived somewhere between 2 and 3, but he couldn't compute it exactly. Fifty years later Euler named the thing e and proved it was irrational. More than a century after that it was also proven transcendental. The limit (1 + ε)^(1/ε) as ε → 0 is literally the original definition of e, predating by decades the letter that names it.

Which means what my bootstrap was doing, out there in the branch-cut jungle of num_complex::powc, was rediscovering Bernoulli's 1683 limit from pure rounding noise. The roundoff ε ≈ 10⁻¹³ in tanh(exp(e)) isn't a bug, it's a number, and feeding it into a pow with a correspondingly large exponent is structurally the same calculation Bernoulli did with fractions. The operator doesn't reach e because it's clever; it reaches e because f64 is a finite representation that happens to embed the definition of e as a floating-point identity.
Drag the slider and watch it converge. At ε ≈ 10⁻⁸ you're already within 8 decimals of e, and at ε ≈ 10⁻¹⁵ you're at the limit of f64 precision, so the answer is, for the machine, exactly e.
The cruel part: higher precision doesn't save you. At f128, ε is around 10⁻³², but the limit is the same. You'd just hit e with even tighter agreement. The artifact is scale-invariant!
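You can reproduce the whole artifact in a few lines of f64, no complex arithmetic and no search required. The sketch below first shows the limit converging for progressively smaller ε, then manufactures the exact roundoff from the article, tanh(exp(e)), and watches it produce a convincing fake e:

```rust
fn main() {
    let e = std::f64::consts::E;

    // The limit in miniature: (1 + eps)^(1/eps) -> e, with error ~ e*eps/2.
    for k in [4, 8, 12] {
        let base = 1.0 + 10f64.powi(-k);
        let eps = base - 1.0; // the epsilon the machine actually stored
        let val = base.powf(1.0 / eps);
        assert!((val - e).abs() < 5.0 * eps);
    }

    // The actual roundoff from the size-9 candidate: tanh(exp(e)) in f64
    // lands a hair below 1 (about 1.37e-13 below it).
    let base = e.exp().tanh();
    assert!(base < 1.0 && 1.0 - base < 1e-12);

    // Feed that noise into Bernoulli's limit: a fake e falls out.
    let fake_e = base.powf(1.0 / (base - 1.0));
    assert!((fake_e - e).abs() < 1e-9);
}
```

The bootstrap never wrote this code on purpose; it just composed pows until one of them happened to pair a near-1 base with a reciprocal-sized exponent.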
Multi-point cross-verification doesn't catch it either, because the roundoff that produces the artifact is a function of the operator's internal arithmetic, not the input. Evaluate it at three different transcendental seeds and you'll get three convincing matches to e. That's why I thought I'd found something.
The fix is a structural detector: walk the expression tree, watch every pow(base, exponent) call, and flag any case where |base − 1| < δ and |exponent| > 1/δ. If a discovery's chain hits that pattern at any node, it's an artifact, not an identity. I wired it into the verifier and re-ran every claim from sizes 5 through 9.
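A simplified version of that detector, over a toy expression type rather than the real Operator tree, looks like this. The pattern test is the one from the text: |base − 1| < δ together with |exponent| > 1/δ at any `pow` node:

```rust
// Toy expression tree with just enough variants to demonstrate the walk.
enum Expr {
    Leaf(f64),
    Pow(Box<Expr>, Box<Expr>),
    Sub(Box<Expr>, Box<Expr>),
}

fn eval(e: &Expr) -> f64 {
    match e {
        Expr::Leaf(v) => *v,
        Expr::Pow(b, x) => eval(b).powf(eval(x)),
        Expr::Sub(a, b) => eval(a) - eval(b),
    }
}

// True if any pow node matches the Bernoulli pattern:
// |base - 1| < delta and |exponent| > 1/delta.
fn is_artifact(e: &Expr, delta: f64) -> bool {
    match e {
        Expr::Leaf(_) => false,
        Expr::Sub(a, b) => is_artifact(a, delta) || is_artifact(b, delta),
        Expr::Pow(b, x) => {
            let (base, exp) = (eval(b), eval(x));
            ((base - 1.0).abs() < delta && exp.abs() > 1.0 / delta)
                || is_artifact(b, delta)
                || is_artifact(x, delta)
        }
    }
}

fn main() {
    let delta = 1e-6;

    // (1 + 1e-13)^(1e13): numerically close to e, structurally an artifact.
    let fake = Expr::Pow(
        Box::new(Expr::Leaf(1.0 + 1e-13)),
        Box::new(Expr::Leaf(1e13)),
    );
    assert!(is_artifact(&fake, delta));
    assert!((eval(&fake) - std::f64::consts::E).abs() < 1e-2);

    // 2^10 is a legitimate pow and passes.
    let ok = Expr::Pow(Box::new(Expr::Leaf(2.0)), Box::new(Expr::Leaf(10.0)));
    assert!(!is_artifact(&ok, delta));
}
```

The key design choice is that the detector is structural, not numeric: it rejects the *computation path*, so multi-point agreement can't rescue a flagged candidate.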
The 19 transcendental hits collapsed to 12 algebraic ones. Every "novel transcendental constant-free operator" disappeared, and the size-5 PowSkew/SubPow/OneMinusDiv results stayed exactly where they were.
Wait, is OneMinusDiv really all you need?
Blog post titles are allowed to lie a little, but we should at least check how big the lie is.
Start with {x}, the seed variable and nothing else. No 1, no 0, no π. Now run the OneMinusDiv cascade:
f(x, x) = 1 − x/x = 1 − 1 = 0 → pool gains 0
f(0, x) = 1 − 0/x = 1 → pool gains 1
f(x, 1) = 1 − x/1 = 1 − x → pool gains 1 − x
f(1, x) = 1 − 1/x → pool gains 1 − 1/x
Four applications, four new values. The bootstrap then promotes each of those to a leaf and runs again, and again, and again, and after four full rounds OneMinusDiv has reached
{ 0, 1, −1, 2, −2, 1/2, 1/x, x, −x, 2x, x+1, x−1, x² }
Thirteen of the thirty-one standard targets! Every algebraic constant up to ±2, the two reciprocals 1/2 and 1/x, the square x², and every linear shift you could want. All from one letter of input, with no distinguished constant, via a five-node operator with no transcendentals in its body. That's wild!
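Here's the cascade hand-traced in f64, in the same style as the EML table: each line is one application of the operator, with leaves drawn from the seed {x} or earlier results. This sketch stops at {0, 1, −1, 2, 1/2}; reaching −2 and x² takes a few more applications along the same lines.

```rust
// OneMinusDiv(x, y) = 1 - x/y.
fn f(x: f64, y: f64) -> f64 {
    1.0 - x / y
}

fn main() {
    let x = 0.5772156649_f64; // any seed except 0 and 1 works

    let zero = f(x, x);          // x/x = 1, so 1 - 1 = 0 (bit-exact)
    let one = f(zero, x);        // 1 - 0/x = 1 (bit-exact)
    let one_minus_x = f(x, one); // 1 - x
    let a = f(x, one_minus_x);   // 1 - x/(1-x) = (1-2x)/(1-x)
    let b = f(a, one);           // 1 - a = x/(1-x)
    let c = f(one, one_minus_x); // 1 - 1/(1-x) = -x/(1-x)
    let two = f(b, c);           // b/c = -1, so 1 - (-1) = 2
    let minus_one = f(two, one); // 1 - 2 = -1
    let half = f(one, two);      // 1 - 1/2 = 1/2

    assert_eq!(zero, 0.0);
    assert_eq!(one, 1.0);
    assert!((two - 2.0).abs() < 1e-12);
    assert!((minus_one + 1.0).abs() < 1e-12);
    assert!((half - 0.5).abs() < 1e-12);
}
```

The `two` step is the clever one: `b` and `c` are exact negatives of each other for any seed, so their ratio is −1 regardless of x. That's the rational-cancellation trick the pow family can't do.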
Also: no, literally 1 − x/y is not all you need, because it's a rational function. Rational functions compose to rational functions. No finite tower of 1 − x/y calls will ever produce e, π, exp(x), ln(x), sin(x), or any other transcendental. The reachable set lives inside ℚ(x), the field of rational functions in x with rational coefficients, and OneMinusDiv saturates almost all of what a five-node bootstrap can construct there.
So the honest version of the slogan is:
OneMinusDiv is all you need, for the algebraic half of elementary math.
The transcendental half is still EML's home turf. And the open question Odrzywołek asks at the end of his paper (is there a constant-free operator that reaches the transcendentals too?) remains open, because every candidate I found that claimed to reach e was lying about it. More on that in a minute.
The scoreboard
This is the main result. After all of it (the CPU enumeration up to size 9, the GPU sweep, the artifact detector, the two restarts) here are the operators that survived: eight brand-new constant-free Sheffer candidates, plus EML as a baseline.
Click any row to see which of the 31 standard targets it reaches!
A few things worth staring at:
- OneMinusDiv is the headline. Five nodes, no transcendentals in the body, polynomial growth. It reaches 13 of 31 targets from a single variable with no distinguished constant. A five-node rational map has no business being this powerful.
- The pow family reaches ±i via `(−1)^(1/2)` on the principal branch, but it can't hit linear multiples like `−x` or `2x` because there's no rational cancellation path.
- The rational family is the opposite: it reaches `−x`, `2x`, `1/x`, `x²` by chaining divisions, but can't hit `±i` because there's no square root!
- The two families are disjoint in exactly the way you'd guess if you squinted at the operator shapes. You can prove this by hand after the fact. I didn't see it coming before the search found it.
- NegNestedPow (size 8) is the first operator to straddle both: it reaches `{±i}` AND `{−x}`. It doesn't add new reach (12 total, same ceiling), but it demonstrates the split isn't fundamental, just small-tree.
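The `(−1)^(1/2) = i` step is worth spelling out, because it depends entirely on the branch choice. A small sketch of the principal-branch power for a real base, done by hand in f64 pairs (this is the convention `num_complex::powc` follows; the helper here is mine, for illustration):

```rust
// Principal-branch z^w for real z and real w, as a (re, im) pair:
// z^w = exp(w * Log z), where Log z = ln|z| + i*Arg(z) and Arg(-1) = pi.
fn real_base_powc(z: f64, w: f64) -> (f64, f64) {
    let ln_r = z.abs().ln();
    let arg = if z < 0.0 { std::f64::consts::PI } else { 0.0 };
    let (re, im) = (w * ln_r, w * arg);
    // exp of a complex number (re + i*im):
    (re.exp() * im.cos(), re.exp() * im.sin())
}

fn main() {
    // (-1)^(1/2) = exp((1/2) * i*pi) = cos(pi/2) + i*sin(pi/2) = i
    let (re, im) = real_base_powc(-1.0, 0.5);
    assert!(re.abs() < 1e-15);
    assert!((im - 1.0).abs() < 1e-12);
}
```

Pick a different branch of the logarithm and the same expression lands on −i instead, which is why the scoreboard pins everything to the principal branch.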
The ceiling across every operator I checked is 13 targets constant-free. Nothing reaches a real transcendental constant (e, π, ln(2), sin(1)) from a single variable, so that part of Odrzywołek's Table 2 is still open.
Union across the whole zoo: 16 of 31 genuine targets reachable constant-free. That's 13 more than the paper found, and every single one verified multi-point + artifact-detector clean.
PowSkew is novel. The rational family is novel. And the 13-target ceiling at size ≤ 9 looks like a real wall, not a budget limit, because bigger trees produce more artifacts, not more identities.
Show your work
OK but the scoreboard is just a count. If the post's whole premise is "here's how OneMinusDiv builds elementary math," you should be able to ask it to build any specific target and watch the tree come out.
So that's what this does. Pick a target. The wasm runs the bootstrap locally, tracks the provenance of every discovered value, then walks the provenance chain back for your chosen target and prints the actual OneMinusDiv expression that produced it. Earlier targets get named as shortcut leaves (so x+1 shows up built from 0, 1, x, etc. rather than expanding all the way down to f(x, x) every time).
Try 0, then 1, then −x, then x+1. Each one is the actual tree the bootstrap found, live, from the seed x = γ:
Then switch to EML or PowSkew and see the same targets reached through completely different trees. EML in particular is striking: because it needs a 1 leaf to bootstrap, from {x} alone it barely moves. OneMinusDiv doesn't have that problem.
This is the whole article in one widget. A five-node rational map, a seed, and an enumeration loop. That's everything.
What this whole thing is for
I'm not a number theorist! I started this because I'd been writing a Rust math library called tang and I wanted to put its expression-DAG and GPU compilation primitives through a real workout. A search over operator trees is a perfect benchmark: it stresses enumeration, value dedup, multi-point evaluation, and (when you care) GPU compute kernels. That part went great. The Phase 6 results ended up being a side effect of the benchmark working too well.
The real lesson is the artifact one. If you're searching a function space with floating point arithmetic, your search will eventually find the floating point arithmetic itself. It'll hand you back identities that aren't identities, with multi-point verification, with reproducibility, with everything that ought to convince you they're real. The fix isn't more bits, it's a detector that knows what fake identities look like.
That, and: simple operators are surprisingly powerful. 1 − x/y is five nodes of grade-school algebra, and it generates 13 of the 31 elementary targets from a single variable. That shouldn't be allowed!
```shell
git clone https://github.com/ecto/tang
cargo run --release --example operator_search -p tang-sheffer
```

The whole crate is at crates/tang-sheffer, the GPU kernel and the artifact detector are the interesting bits, and the NOTES.md has the full 31-target verification table. The MAX_SIZE constant at the top of the example is the only knob you need to turn to reproduce any of the sweeps above.
If you find a constant-free operator that reaches a real transcendental (and survives the detector), please tell me.
