Binius, a Highly Efficient Proof System

Advanced · 5/16/2024, 8:52:36 AM
Vitalik Buterin provides a detailed introduction to Binius, a highly efficient proof system based on binary fields. The article first reviews the concepts of finite fields and arithmetization, explaining how SNARK and STARK proof systems work by converting program statements into polynomial equations. Vitalik points out that although Plonky2 has proven that using smaller 64-bit and 31-bit fields can significantly improve the efficiency of proof generation, Binius further enhances efficiency by operating directly on zeros and ones, taking advantage of the features of binary fields. Binius uses multivariate polynomials to represent computational traces and employs a series of mathematical tricks, including the concept of hypercubes and Reed-Solomon encoding, to construct proofs. Vitalik believes that the direct computational ability of binary fields and operations on bits are key to Binius's efficiency.

Forwarded original title ‘Vitalik explains Binius in detail: an efficient proof system based on binary fields’

This post is primarily intended for readers roughly familiar with 2019-era cryptography, especially SNARKs and STARKs. If you are not, I recommend reading those articles first. Special thanks to Justin Drake, Jim Posen, Benjamin Diamond and Radi Cojbasic for feedback and review.

Over the past two years, STARKs have become a crucial and irreplaceable technology for efficiently making easy-to-verify cryptographic proofs of very complicated statements (eg. proving that an Ethereum block is valid). A key reason why is small field sizes: whereas elliptic curve-based SNARKs require you to work over 256-bit integers in order to be secure enough, STARKs let you use much smaller field sizes, which are more efficient: first the Goldilocks field (64-bit integers), and then Mersenne31 and BabyBear (both 31-bit). Thanks to these efficiency gains, Plonky2, which uses Goldilocks, is hundreds of times faster at proving many kinds of computation than its predecessors.

A natural question to ask is: can we take this trend to its logical conclusion, building proof systems that run even faster by operating directly over zeroes and ones? This is exactly what Binius is trying to do, using a number of mathematical tricks that make it very different from the SNARKs and STARKs of three years ago. This post goes through the reasons why small fields make proof generation more efficient, why binary fields are uniquely powerful, and the tricks that Binius uses to make proofs over binary fields work so effectively.

Binius. By the end of this post, you should be able to understand every part of this diagram.

Recap: finite fields

One of the key tasks of a cryptographic proving system is to operate over huge amounts of data, while keeping the numbers small. If you can compress a statement about a large program into a mathematical equation involving a few numbers, but those numbers are as big as the original program, you have not gained anything.

To do complicated arithmetic while keeping numbers small, cryptographers generally use modular arithmetic. We pick some prime “modulus” p. The % operator means “take the remainder of”: 15 % 7=1, 53 % 10=3, etc (note that the answer is always non-negative, so for example −1 % 10=9).

You’ve probably already seen modular arithmetic, in the context of adding and subtracting time (eg. what time is four hours after 9:00?). But here, we don’t just add and subtract modulo some number, we also multiply, divide and take exponents.

We redefine the basic operations of arithmetic to all be done mod p: addition, subtraction, multiplication, exponentiation, and division (where dividing by y means multiplying by the modular inverse of y).

The above rules are all self-consistent. For example, if p=7, then:

5+3=1 (because 8%7=1)

1-3=5 (because -2%7=5)

2*5=3

3/5=2
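
As a quick illustration (added here, not from the original post), a minimal Python sketch of these redefined operations for p = 7, with division implemented as multiplication by the modular inverse via Fermat's little theorem:

```python
# Modular arithmetic with p = 7: a minimal sketch of the redefined operations.
p = 7

def add(x, y): return (x + y) % p
def sub(x, y): return (x - y) % p
def mul(x, y): return (x * y) % p
def div(x, y): return (x * pow(y, p - 2, p)) % p  # divide = multiply by the inverse of y

assert add(5, 3) == 1   # 8 % 7 = 1
assert sub(1, 3) == 5   # -2 % 7 = 5
assert mul(2, 5) == 3   # 10 % 7 = 3
assert div(3, 5) == 2   # because 2 * 5 = 3 (mod 7)
```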

A more general term for this kind of structure is a finite field. A finite field is a mathematical structure that obeys the usual laws of arithmetic, but where there’s a limited number of possible values, and so each value can be represented in a fixed size.

Modular arithmetic (or prime fields) is the most common type of finite field, but there is also another type: extension fields. You’ve probably already seen an extension field before: the complex numbers. We “imagine” a new element, which we label 𝑖, and declare that it satisfies 𝑖^2 = −1. You can then take any combination of regular numbers and 𝑖, and do math with it: (3𝑖+2)∗(2𝑖+4) = 6𝑖^2 + 12𝑖 + 4𝑖 + 8 = 16𝑖 + 2. We can similarly take extensions of prime fields. As we start working over fields that are smaller, extensions of prime fields become increasingly important for preserving security, and binary fields (which Binius uses) depend on extensions entirely to have practical utility.

Recap: arithmetization

The way that SNARKs and STARKs prove things about computer programs is through arithmetization: you convert a statement about a program that you want to prove, into a mathematical equation involving polynomials. A valid solution to the equation corresponds to a valid execution of the program.

To give a simple example, suppose that I computed the 100th Fibonacci number, and I want to prove to you what it is. I create a polynomial 𝐹 that encodes Fibonacci numbers: so 𝐹(0)=𝐹(1)=1, 𝐹(2)=2, 𝐹(3)=3, 𝐹(4)=5, and so on for 100 steps. The condition that I need to prove is that 𝐹(𝑥+2)=𝐹(𝑥)+𝐹(𝑥+1) across the range 𝑥={0,1…98}. I can convince you of this by giving you the quotient:

H(x) = (F(x+2) - F(x+1) - F(x)) / Z(x), where Z(x) = (x-0) * (x-1) * … * (x-98). If I can show that there exist F and H satisfying this equation, then F(x+2) - F(x+1) - F(x) must be zero throughout that range. If I additionally verify that F(0) = F(1) = 1, then F(100) must actually be the 100th Fibonacci number.
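
As a rough sketch (an illustration added here, not part of the protocol), this is what that statement amounts to in code: build the evaluations of F over some prime field, and check that the numerator F(x+2) − F(x+1) − F(x) vanishes across the whole range, along with the boundary conditions. The prime 2^31 − 1 below is an arbitrary choice for the illustration.

```python
# Sketch: the statement that the quotient argument encodes, checked directly.
p = 2**31 - 1                      # an arbitrary prime, large enough for this illustration
F = [1, 1]
for i in range(2, 101):
    F.append((F[i - 1] + F[i - 2]) % p)

# H being a polynomial is equivalent to the numerator vanishing on {0 ... 98}:
assert all((F[x + 2] - F[x + 1] - F[x]) % p == 0 for x in range(99))
assert F[0] == F[1] == 1           # boundary conditions
print("F(100) =", F[100])          # the claimed 100th Fibonacci number, reduced mod p
```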

If you want to prove something more complicated, then you replace the “simple” relation 𝐹(𝑥+2)=𝐹(𝑥)+𝐹(𝑥+1) with a more complicated equation, which basically says “𝐹(𝑥+1) is the output of initializing a virtual machine with the state 𝐹(𝑥), and running one computational step”. You can also replace the number 100 with a bigger number, eg. 100000000, to accommodate more steps.

All SNARKs and STARKs are based on this idea of using a simple equation over polynomials (or sometimes vectors and matrices) to represent a large number of relationships between individual values. Not all involve checking equivalence between adjacent computational steps in the same way as above: PLONK does not, for example, and neither does R1CS. But many of the most efficient ones do, because enforcing the same check (or the same few checks) many times makes it easier to minimize overhead.

Plonky2: from 256-bit SNARKs and STARKs to 64-bit… only STARKs

Five years ago, a reasonable summary of the different types of zero knowledge proof was as follows. There are two types of proofs: (elliptic-curve-based) SNARKs and (hash-based) STARKs. Technically, STARKs are a type of SNARK, but in practice it’s common to use “SNARK” to refer to only the elliptic-curve-based variety, and “STARK” to refer to hash-based constructions. SNARKs are small, and so you can verify them very quickly and fit them onchain easily. STARKs are big, but they don’t require trusted setups, and they are quantum-resistant.

STARKs work by treating the data as a polynomial, computing evaluations of that polynomial across a large number of points, and using the Merkle root of that extended data as the “polynomial commitment”

A key bit of history here is that elliptic curve-based SNARKs came into widespread use first: it took until roughly 2018 for STARKs to become efficient enough to use, thanks to FRI, and by then Zcash had already been running for over a year. Elliptic curve-based SNARKs have a key limitation: if you want to use elliptic curve-based SNARKs, then the arithmetic in these equations must be done with integers modulo the number of points on the elliptic curve. This is a big number, usually near 2^256: for example, for the bn128 curve, it’s 21888242871839275222246405745257275088548364400416034343698204186575808495617. But the actual computation is using small numbers: if you think about a “real” program in your favorite language, most of the stuff it’s working with is counters, indices in for loops, positions in the program, individual bits representing True or False, and other things that will almost always be only a few digits long.

Even if your “original” data is made up of “small” numbers, the proving process requires computing quotients, extensions, random linear combinations, and other transformations of the data, which lead to an equal or larger number of objects that are, on average, as large as the full size of your field. This creates a key inefficiency: to prove a computation over n small values, you have to do even more computation over n much bigger values. At first, STARKs inherited the habit of using 256-bit fields from SNARKs, and so suffered the same inefficiency.

A Reed-Solomon extension of some polynomial evaluations. Even though the original values are small, the extra values all blow up to the full size of the field (in this case 2^31 − 1)

In 2022, Plonky2 was released. Plonky2’s main innovation was doing arithmetic modulo a smaller prime: 2^64 − 2^32 + 1 = 18446744069414584321. Now, each addition or multiplication can always be done in just a few instructions on a CPU, and hashing all of the data together is 4x faster than before. But this comes with a catch: this approach is STARK-only. If you try to use a SNARK, with an elliptic curve of such a small size, the elliptic curve becomes insecure.

To continue to be safe, Plonky2 also needed to introduce extension fields. A key technique in checking arithmetic equations is “sampling at a random point”: if you want to check if 𝐻(𝑥)∗𝑍(𝑥) actually equals 𝐹(𝑥+2)−𝐹(𝑥+1)−𝐹(𝑥), you can pick some random coordinate 𝑟, provide polynomial commitment opening proofs proving 𝐻(𝑟), 𝑍(𝑟), 𝐹(𝑟), 𝐹(𝑟+1) and 𝐹(𝑟+2), and then actually check if 𝐻(𝑟)∗𝑍(𝑟) equals 𝐹(𝑟+2)−𝐹(𝑟+1)−𝐹(𝑟). If the attacker can guess the coordinate ahead of time, the attacker can trick the proof system - hence why it must be random. But this also means that the coordinate must be sampled from a set large enough that the attacker cannot guess it by random chance. If the modulus is near 2^256, this is clearly the case. But with a modulus of 2^64 − 2^32 + 1, we’re not quite there, and if we drop to 2^31 − 1, it’s definitely not the case. Trying to fake a proof two billion times until one gets lucky is absolutely within the range of an attacker’s capabilities.

To stop this, we sample 𝑟 from an extension field. For example, you can define 𝑦 where 𝑦^3 = 5, and take combinations of 1, 𝑦 and 𝑦^2. This increases the total number of coordinates back up to roughly 2^93. The bulk of the polynomials computed by the prover don’t go into this extension field; they just use integers modulo 2^31 − 1, and so you still get all the efficiencies from using the small field. But the random point check, and the FRI computation, does dive into this larger field, in order to get the needed security.

From small primes to binary

Computers do arithmetic by representing larger numbers as sequences of zeroes and ones, and building “circuits” on top of those bits to compute things like addition and multiplication. Computers are particularly optimized for doing computation with 16-bit, 32-bit and 64-bit integers. Moduli like 2^64 − 2^32 + 1 and 2^31 − 1 are chosen not just because they fit within those bounds, but also because they align well with those bounds: you can do multiplication modulo 2^64 − 2^32 + 1 by doing regular 32-bit multiplication, and then shifting and copying the outputs bitwise in a few places; this article explains some of the tricks well.

What would be even better, however, is doing computation in binary directly. What if addition could be “just” XOR, with no need to worry about “carrying” the overflow from adding 1 + 1 in one bit position to the next bit position? What if multiplication could be more parallelizable in the same way? And these advantages would all come on top of being able to represent True/False values with just one bit.

Capturing these advantages of doing binary computation directly is exactly what Binius is trying to do. A table from the Binius team’s zkSummit presentation shows the efficiency gains:

Despite being roughly the same “size”, a 32-bit binary field operation takes 5x less computational resources than an operation over the 31-bit Mersenne field.

From univariate polynomials to hypercubes

Suppose that we are convinced by this reasoning, and want to do everything over bits (zeroes and ones). How do we actually commit to a polynomial representing a billion bits?

Here, we face two practical problems:

  1. For a polynomial to represent a lot of values, those values need to be accessible at evaluations of the polynomial: in our Fibonacci example above, 𝐹(0), 𝐹(1) … 𝐹(100), and in a bigger computation, the indices would go into the millions. And the field that we use needs to contain numbers going up to that size.

  2. Proving anything about a value that we’re committing to in a Merkle tree (as all STARKs do) requires Reed-Solomon encoding it: extending 𝑛 values into eg. 8𝑛 values, using the redundancy to prevent a malicious prover from cheating by faking one value in the middle of the computation. This also requires having a large enough field: to extend a million values to 8 million, you need 8 million different points at which to evaluate the polynomial.

A key idea in Binius is solving these two problems separately, and doing so by representing the same data in two different ways. First, the polynomial itself. Elliptic curve-based SNARKs, 2019-era STARKs, Plonky2 and other systems generally deal with polynomials over one variable: 𝐹(𝑥). Binius, on the other hand, takes inspiration from the Spartan protocol, and works with multivariate polynomials: 𝐹(𝑥1,𝑥2…𝑥𝑘). In fact, we represent the entire computational trace on the “hypercube” of evaluations where each 𝑥𝑖 is either 0 or 1. For example, if we wanted to represent a sequence of Fibonacci numbers, and we were still using a field large enough to represent them, we might visualize the first sixteen of them as being something like this:

That is, 𝐹(0,0,0,0) would be 1, 𝐹(1,0,0,0) would also be 1, 𝐹(0,1,0,0) would be 2, and so forth, up until we get to 𝐹(1,1,1,1)=987. Given such a hypercube of evaluations, there is exactly one multilinear (degree-1 in each variable) polynomial that produces those evaluations. So we can think of that set of evaluations as representing the polynomial; we never actually need to bother computing the coefficients.
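
As a sketch (added for illustration, using plain integers rather than field elements), evaluating that unique multilinear polynomial at any point can be done by repeatedly “folding” the hypercube along one coordinate at a time; the ordering below assumes x0 is the first (least-significant) coordinate, matching the F(0,0,0,0), F(1,0,0,0), … listing above:

```python
# Evaluate the multilinear polynomial defined by a hypercube of evaluations.
# evals[i] is the value at the corner whose bits are the binary digits of i (x0 = lowest bit).
def multilinear_eval(evals, r):
    for r_i in r:                          # fold one coordinate at a time
        evals = [(1 - r_i) * evals[j] + r_i * evals[j + 1]
                 for j in range(0, len(evals), 2)]
    return evals[0]

fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
assert multilinear_eval(fib, [0, 0, 0, 0]) == 1     # F(0,0,0,0)
assert multilinear_eval(fib, [1, 1, 1, 1]) == 987   # F(1,1,1,1)
```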

This example is of course just for illustration: in practice, the whole point of going to a hypercube is to let us work with individual bits. The “Binius-native” way to count Fibonacci numbers would be to use a higher-dimensional cube, using each set of eg. 16 bits to store a number. This requires some cleverness to implement integer addition on top of the bits, but with Binius it’s not too difficult.

Now, we get to the erasure coding. The way STARKs work is: you take 𝑛 values, Reed-Solomon extend them to a larger number of values (often 8𝑛, usually between 2𝑛 and 32𝑛), and then randomly select some Merkle branches from the extension and perform some kind of check on them. A hypercube has length 2 in each dimension. Hence, it’s not practical to extend it directly: there’s not enough “space” to sample Merkle branches from 16 values. So what do we do instead? We pretend the hypercube is a square!

Simple Binius - an example

See here for a python implementation of this protocol.

Let’s go through an example, using regular integers as our field for convenience (in a real implementation this will be binary field elements). First, we take the hypercube we want to commit to, and encode it as a square:

Now, we Reed-Solomon extend the square. That is, we treat each row as being a degree-3 polynomial evaluated at x = {0, 1, 2, 3}, and evaluate the same polynomial at x = {4, 5, 6, 7}:

Notice that the numbers blow up quickly! This is why in a real implementation, we always use a finite field for this, instead of regular integers: if we used integers modulo 11, for example, the extension of the first row would just be [3, 10, 0, 6].
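
The square’s values themselves only appear in the figure; taking the first sixteen digits of pi as the hypercube (so the first row is [3, 1, 4, 1]) reproduces every number quoted in this section, including the “[3, 10, 0, 6]” above, so the sketches below assume those values. The row extension is plain Lagrange interpolation:

```python
# Reed-Solomon extend a row: evaluate the degree-(len(row)-1) polynomial through
# (0, row[0]), (1, row[1]), ... at each new x, via Lagrange interpolation.
from fractions import Fraction

def rs_extend(row, new_xs):
    out = []
    for x in new_xs:
        total = Fraction(0)
        for i in range(len(row)):
            term = Fraction(row[i])
            for j in range(len(row)):
                if j != i:
                    term *= Fraction(x - j, i - j)
            total += term
        out.append(int(total))             # values at integer points come out as integers
    return out

first_row = [3, 1, 4, 1]                   # assumed from the figure (digits of pi)
ext = rs_extend(first_row, [4, 5, 6, 7])
print(ext)                                 # [-19, -67, -154, -291]: the numbers blow up
print([v % 11 for v in ext])               # [3, 10, 0, 6]: bounded if we work mod 11
```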

If you want to play around with extending and verify the numbers here for yourself, you can use my simple Reed-Solomon extension code here.

Next, we treat this extension as columns, and make a Merkle tree of the columns. The root of the Merkle tree is our commitment.

Now, let’s suppose that the prover wants to prove an evaluation of this polynomial at some point 𝑟={𝑟0,𝑟1,𝑟2,𝑟3}. There is one nuance in Binius that makes it somewhat weaker than other polynomial commitment schemes: the prover should not know, or be able to guess, 𝑟, until after they have committed to the Merkle root (in other words, 𝑟 should be a pseudo-random value that depends on the Merkle root). This makes the scheme useless for “database lookup” (eg. “ok you gave me the Merkle root, now prove to me 𝑃(0,0,1,0)!”). But the actual zero-knowledge proof protocols that we use generally don’t need “database lookup”; they simply need to check the polynomial at a random evaluation point. Hence, this restriction is okay for our purposes.

Suppose we pick 𝑟={1,2,3,4} (the polynomial, at this point, evaluates to −137; you can confirm it with this code). Now, we get into the process of actually making the proof. We split up 𝑟 into two parts: the first part {1,2} representing a linear combination of columns within a row, and the second part {3,4} representing a linear combination of rows. We compute a “tensor product”, both for the column part:

And for the row part:

What this means is: a list of all possible products of one value from each set. In the row case, we get:

[(1-r2)(1-r3), r2(1-r3), (1-r2)r3, r2*r3]

Using r={1,2,3,4} (so r2=3 and r3=4):

[(1-3)(1-4), 3(1-4), (1-3)*4, 3*4] = [6, -9, -8, 12]
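
For concreteness (still using plain integers), here is a small sketch of the tensor-product computation; tensor([3, 4]) reproduces the row coefficients [6, −9, −8, 12] above, and tensor([1, 2]) gives the column coefficients that appear later in the check:

```python
# Tensor product of (1 - r_i, r_i) pairs: all products of one value from each pair.
def tensor(coords):
    out = [1]
    for r in coords:
        out = [v * (1 - r) for v in out] + [v * r for v in out]
    return out

print(tensor([3, 4]))   # row part (r2, r3):    [6, -9, -8, 12]
print(tensor([1, 2]))   # column part (r0, r1): [0, -1, 0, 2]
```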

Now, we compute a new “row” 𝑡′, by taking this linear combination of the existing rows. That is, we take:

You can view what’s going on here as a partial evaluation. If we were to multiply the full tensor product ⨂_{i=0}^{3} (1−r_i, r_i) by the full vector of all values, you would get the evaluation 𝑃(1,2,3,4)=−137. Here we’re multiplying by a partial tensor product that only uses half of the evaluation coordinates, and we’re reducing a grid of 𝑁 values to a row of √𝑁 values. If you give this row to someone else, they can use the tensor product of the other half of the evaluation coordinates to complete the rest of the computation.
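
Continuing the assumed pi-digit example (and reusing the tensor helper from the sketch above), the row combination works out as follows; the resulting values are not quoted in the text, but they are consistent with the −10746 and −137 checks below:

```python
# Row combination: t' = sum of (row coefficient) * (row), column by column.
rows = [[3, 1, 4, 1], [5, 9, 2, 6], [5, 3, 5, 8], [9, 7, 9, 3]]  # assumed square from the figure
row_coeffs = tensor([3, 4])                                       # [6, -9, -8, 12]

t_prime = [sum(c * row[j] for c, row in zip(row_coeffs, rows)) for j in range(4)]
print(t_prime)   # [41, -15, 74, -76]
```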

The prover provides the verifier with this new row, 𝑡′, as well as the Merkle proofs of some randomly sampled columns. This is 𝑂(√𝑁) data. In our illustrative example, we’ll have the prover provide just the last column; in real life, the prover would need to provide a few dozen columns to achieve adequate security.

Now, we take advantage of the linearity of Reed-Solomon codes. The key property that we use is: taking a linear combination of a Reed-Solomon extension gives the same result as a Reed-Solomon extension of a linear combination. This kind of “order independence” often happens when you have two operations that are both linear.

The verifier does exactly this. They compute the extension of 𝑡′, and they compute the same linear combination of columns that the prover computed before (but only to the columns provided by the prover), and verify that these two procedures give the same answer.

In this case, extending 𝑡′, and computing the same linear combination ([6,−9,−8,12]) of the column, both give the same answer: −10746. This proves that the Merkle root was constructed “in good faith” (or at least is “close enough”), and that it matches 𝑡′: at least the great majority of the columns are compatible with each other and with 𝑡′.
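
Continuing the same sketch (reusing rs_extend, rows, row_coeffs and t_prime from the snippets above), the verifier’s consistency check on the one opened column looks like this:

```python
# "Extension of the row combination" must equal "row combination of the extension".
last_column = [rs_extend(row, [7])[0] for row in rows]        # what the opened Merkle column holds
lhs = rs_extend(t_prime, [7])[0]                              # extend t', look at position 7
rhs = sum(c * v for c, v in zip(row_coeffs, last_column))     # combine the opened column
assert lhs == rhs == -10746
```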

But the verifier still needs to check one more thing: actually check the evaluation of the polynomial at {𝑟0..𝑟3}. So far, none of the verifier’s steps actually depended on the value that the prover claimed. So here is how we do that check. We take the tensor product of what we labeled as the “column part” of the evaluation point:

In our example, where r={1,2,3,4} (so the column part is {1,2}), this equals [(1−1)(1−2), 1·(1−2), (1−1)·2, 1·2] = [0, −1, 0, 2].

So now we take this linear combination of 𝑡′:

The result is −137, which exactly equals the answer you get if you evaluate the polynomial directly.
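
Completing the sketch: applying the column-part tensor product to 𝑡′ recovers the claimed evaluation, and (using the multilinear_eval helper from the earlier sketch) it matches evaluating the hypercube directly:

```python
col_coeffs = tensor([1, 2])                                   # [0, -1, 0, 2]
claimed = sum(c * v for c, v in zip(col_coeffs, t_prime))
assert claimed == -137
assert multilinear_eval([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3], [1, 2, 3, 4]) == -137
```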

The above is pretty close to a complete description of the “simple” Binius protocol. This already has some interesting advantages: for example, because the data is split into rows and columns, you only need a field half the size. But this doesn’t come close to realizing the full benefits of doing computation in binary. For this, we will need the full Binius protocol. But first, let’s get a deeper understanding of binary fields.

Binary field

The smallest possible field is arithmetic modulo 2, which is so small that we can write out its addition and multiplication tables:

We can make larger binary fields by taking extensions: if we start with 𝐹2 (integers modulo 2) and then define 𝑥 where 𝑥2=𝑥+1, we get the following addition and multiplication table:

It turns out that we can expand the binary field to arbitrarily large sizes by repeating this construction. Unlike with complex numbers over reals, where you can add one new element 𝑖, but you can’t add any more (quaternions do exist, but they’re mathematically weird, eg. 𝑎𝑏≠𝑏𝑎), with finite fields you can keep adding new extensions forever. Specifically, we define elements as follows:

And so on. This is often called the tower construction, because of how each successive extension can be viewed as adding a new layer to a tower. This is not the only way to construct binary fields of arbitrary size, but it has some unique advantages that Binius takes advantage of.

We can represent these numbers as a list of bits, eg. 1100101010001111. The first bit represents multiples of 1, the second bit represents multiples of 𝑥0, then subsequent bits represent multiples of: 𝑥1, 𝑥1∗𝑥0, 𝑥2, 𝑥2∗𝑥0, and so forth. This encoding is nice because you can decompose it:

This is a relatively uncommon notation, but I like representing binary field elements as integers, taking the bit representation where more-significant bits are to the right. That is, 1=1, 𝑥0=01=2, 1+𝑥0=11=3, 1+𝑥0+𝑥2=11001000=19, and so forth. 1100101010001111 is, in this representation, 61779.

Addition in a binary field is just XOR (as is subtraction, by the way); note that this means x+x=0 for any x. To multiply two elements x*y, there is a very simple recursive algorithm: split each number in half, writing x = L_x + R_x * x_k and y = L_y + R_y * x_k (where x_k is the highest-level generator involved).

Then, split up the multiplication: x * y = L_x * L_y + (L_x * R_y + R_x * L_y) * x_k + R_x * R_y * x_k^2.

The last piece is the only slightly tricky one, because you have to apply the reduction rule, and replace R_x * R_y * x_k^2 with R_x * R_y * (x_{k-1} * x_k + 1). There are more efficient ways to do multiplication, analogues of the Karatsuba algorithm and fast Fourier transforms, but I will leave it as an exercise to the interested reader to figure those out.
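
As an illustration, here is a naive, unoptimized Python sketch of this recursive algorithm (added here; it is not the linked implementation), using the integer encoding described above where 1 = 1, x0 = 2, x1 = 4, and so on:

```python
# Multiply two binary tower field elements, given as integers in the encoding above.
def bf_mul(a, b, length=None):
    if length is None:                          # smallest power-of-two bit width that fits both
        length = 1
        while (1 << length) <= max(a, b, 1):
            length *= 2
    if length == 1:
        return a & b                            # base case: F2, multiplication of single bits
    half = length // 2
    La, Ra = a & ((1 << half) - 1), a >> half   # a = La + Ra * x_k
    Lb, Rb = b & ((1 << half) - 1), b >> half   # b = Lb + Rb * x_k
    LL = bf_mul(La, Lb, half)
    RR = bf_mul(Ra, Rb, half)
    cross = bf_mul(La, Rb, half) ^ bf_mul(Ra, Lb, half)
    m = (1 << (half // 2)) if half > 1 else 1   # x_{k-1}; reduction rule x_k^2 = x_{k-1}*x_k + 1
    low = LL ^ RR                               # the "+1" part of RR * x_k^2 lands in the low half
    high = cross ^ bf_mul(RR, m, half)          # the "x_{k-1} * x_k" part lands in the high half
    return low | (high << half)

assert bf_mul(2, 2) == 3    # x0 * x0 = x0 + 1
assert bf_mul(3, 3) == 2    # (1 + x0)^2 = x0
assert bf_mul(4, 4) == 9    # x1 * x1 = x1*x0 + 1
```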

Division in binary fields is done by combining multiplication and inversion. The “simple but slow” way to do inversion is an application of generalized Fermat’s little theorem. There is also a more complicated but more efficient inversion algorithm, which you can find here. You can use the code here to play around with binary field addition, multiplication and division yourself.

Left: addition table for four-bit binary field elements (ie. elements made up only of combinations of 1, 𝑥0,𝑥1 and 𝑥0𝑥1).

Right: multiplication table for four-bit binary field elements.

The beautiful thing about this type of binary field is that it combines some of the best parts of “regular” integers and modular arithmetic. Like regular integers, binary field elements are unbounded: you can keep extending as far as you want. But like modular arithmetic, if you do operations over values within a certain size limit, all of your answers also stay within the same bound. For example, if you take successive powers of 42, you get:

After 255 steps, you’re back to 42^255 = 1. Just as with positive integers and modular arithmetic, these operations follow the usual mathematical laws: a*b = b*a, a*(b+c) = a*b + a*c, and there are even some weird new laws (for example, (a+b)^2 = a^2 + b^2, because the cross terms cancel in a field of characteristic 2).
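
Continuing with the bf_mul sketch from above, the cycle is easy to check: the nonzero elements of the 8-bit field form a multiplicative group of order 255, so successive powers of 42 never leave 8 bits and return to 1 after 255 steps.

```python
v = 1
for _ in range(255):
    v = bf_mul(v, 42)
assert v == 1   # 42^255 = 1 in the 8-bit binary tower field
```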

Finally, binary fields make it convenient to handle bits: if you do math with numbers that fit into 2^k bits, then all of your output will also fit into 2^k bits. This avoids awkwardness. In Ethereum’s EIP-4844, each “element” of a blob has to be a number modulo 52435875175126190479447740508185965837690552500527637822603658699938581184513, so encoding binary data requires throwing away some space and doing an extra check to ensure that each element stores a value less than 2^248. Binary fields have no such issue. This also means that binary field operations are super fast on computers - both on CPUs, and on theoretically optimal FPGA and ASIC designs.

What this all means is that we can do something like the Reed-Solomon encoding we did above, in a way that completely avoids the “blow-up” of integers that we saw in our example, and in a very “native” way - the kind of computation that computers are good at. The “splitting” property of binary fields - the fact that we can write 1100101010001111 = 11001010 + 10001111*x3, and then split further as much as we need - is also crucial for enabling a lot of flexibility.

Full Binius

See here for a python implementation of this protocol.

Now, we can get to “full Binius”, which adjusts “simple Binius” to (i) work over binary fields, and (ii) let us commit to individual bits. This protocol is tricky to understand, because it keeps going back and forth between different ways of looking at a matrix of bits; it certainly took me longer to understand than it usually takes me to understand a cryptographic protocol. But once you understand binary fields, the good news is that there isn’t any “harder math” that Binius depends on. This is not elliptic curve pairings, where there are deeper and deeper rabbit holes of algebraic geometry to go down; here, binary fields are all you need.

Let’s look again at the full diagram:

By now, you should be familiar with most of the components. The idea of “flattening” a hypercube into a grid, the idea of computing a row combination and a column combination as tensor products of the evaluation point, and the idea of checking equivalence between “Reed-Solomon extending then computing the row combination”, and “computing the row combination then Reed-Solomon extending”, were all in simple Binius.

What’s new in “full Binius”? Basically three things:

  • The individual values in the hypercube, and in the square, have to be bits (0 or 1)
  • The extension process extends bits into more bits, by grouping bits into columns and temporarily pretending that they are larger field elements
  • After the row combination step, there’s an element-wise “decompose into bits” step, which converts the extension back into bits

We will go through these in turn. First, the new extension procedure. A Reed-Solomon code has the fundamental limitation that if you are extending 𝑛 values to 𝑘∗𝑛 values, you need to be working in a field that has 𝑘∗𝑛 different values that you can use as coordinates. With 𝐹2 (aka, bits), you cannot do that. And so what we do is, we “pack” adjacent 𝐹2 elements together into larger values. In the example here, we’re packing two bits at a time into elements in {0,1,2,3}, because our extension only has four evaluation points and so that’s enough for us. In a “real” proof, we would probably pack 16 bits at a time together. We then do the Reed-Solomon code over these packed values, and unpack them again into bits.
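
As a small sketch of the packing step (added for illustration; the helper names are hypothetical), grouping bits and treating each group as an element of a larger binary field might look like this; the packed row would then be Reed-Solomon extended using binary-field arithmetic (e.g. bf_mul above) and unpacked back into bits:

```python
# Pack groups of bits into small binary field elements, and unpack them again.
def pack_bits(bits, group_size):
    return [sum(bit << i for i, bit in enumerate(bits[j:j + group_size]))
            for j in range(0, len(bits), group_size)]

def unpack_bits(elements, group_size):
    return [(e >> i) & 1 for e in elements for i in range(group_size)]

row_bits = [1, 0, 1, 1, 0, 0, 1, 0]
packed = pack_bits(row_bits, 2)            # [1, 3, 0, 1]: elements of the 4-element field {0,1,2,3}
assert unpack_bits(packed, 2) == row_bits
```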

Now, the row combination. To make “evaluate at a random point” checks cryptographically secure, we need that point to be sampled from a pretty large space, much larger than the hypercube itself. Hence, while the points within the hypercube are bits, evaluations outside the hypercube will be much larger. In our example above, the “row combination” ends up being [11,4,6,1].

This presents a problem: we know how to combine pairs of bits into a larger value, and then do a Reed-Solomon extension on that, but how do you do the same to pairs of much larger values?

The trick in Binius is to do it bitwise: we look at the individual bits of each value (eg. for what we labeled as “11”, that’s [1,1,0,1]), and then we extend row-wise. That is, we perform the extension procedure on the 1 row of each element, then on the 𝑥0 row, then on the “𝑥1“ row, then on the 𝑥0∗𝑥1 row, and so forth (well, in our toy example we stop there, but in a real implementation we would go up to 128 rows (the last one being 𝑥6∗ …∗ 𝑥0)).

Recapping:

  • We take the bits in the hypercube, and convert them into a grid
  • Then, we treat adjacent groups of bits on each row as larger field elements, and do arithmetic on them to Reed-Solomon extend the rows
  • Then, we take a row combination of each column of bits, and get a (for squares larger than 4x4, much smaller) column of bits for each row as the output
  • Then, we look at the output as a matrix, and treat the bits of that as rows again

Why does this work? In “normal” math, the ability to (often) do linear operations in either order and get the same result stops working if you start slicing a number up by digits. For example, if I start with the number 345, and I multiply it by 8 and then by 3, I get 8280, and if I do those two operations in reverse, I also get 8280. But if I insert a “split by digit” operation in between the two steps, it breaks down: if you do 8x then 3x, you get 345 → 2760 → [2, 7, 6, 0] → [6, 21, 18, 0], but if you do 3x and then 8x, you get 345 → 1035 → [1, 0, 3, 5] → [8, 0, 24, 40].

But in a binary field built with a tower structure, this approach does work. The reason lies in its separability: if you multiply a large value by a small value, what happens on each segment stays on each segment. If we multiply 1100101010001111 by 11, this is the same as taking the first decomposition of 1100101010001111, which is 11001010 + 10001111*x3, and then multiplying each component by 11 separately.

Putting it all together

Generally, zero knowledge proof systems work by making statements about polynomials that simultaneously represent statements about the underlying evaluations: just like we saw in the Fibonacci example, 𝐹(𝑋+2)−𝐹(𝑋+1)−𝐹(𝑋)=𝑍(𝑋)∗𝐻(𝑋) simultaneously checks all steps of the Fibonacci computation. We check statements about polynomials by proving evaluations at a random point. This check at a random point stands in for checking the whole polynomial: if the polynomial equation doesn’t match, the chance that it matches at a specific random coordinate is tiny.

In practice, a major source of inefficiency comes from the fact that in real programs, most of the numbers we are working with are tiny: indices in for loops, True/False values, counters, and similar things. But when we “extend” the data using Reed-Solomon encoding to give it the redundancy needed to make Merkle proof-based checks safe, most of the “extra” values end up taking up the full size of a field, even if the original values are small.

To get around this, we want to make the field as small as possible. Plonky2 brought us down from 256-bit numbers to 64-bit numbers, and then Plonky3 went further to 31 bits. But even this is sub-optimal. With binary fields, we can work over individual bits. This makes the encoding “dense”: if your actual underlying data has n bits, then your encoding will have n bits, and the extension will have 8 * n bits, with no extra overhead.

Now, let’s look at the diagram a third time:

In Binius, we are committing to a multilinear polynomial: a hypercube 𝑃(x0,x1,…xk), where the individual evaluations 𝑃(0,0…0), 𝑃(0,0…1) up to 𝑃(1,1,…1) are holding the data that we care about. To prove an evaluation at a point, we “re-interpret” the same data as a square. We then extend each row, using Reed-Solomon encoding over groups of bits, to give the data the redundancy needed for random Merkle branch queries to be secure. We then compute a random linear combination of rows, with coefficients designed so that the new combined row actually holds the evaluation that we care about. Both this newly-created row (which gets re-interpreted as 128 rows of bits), and a few randomly-selected columns with Merkle branches, get passed to the verifier.

The verifier then does a “row combination of the extension” (or rather, a few columns of the extension), and an “extension of the row combination”, and verifies that the two match. They then compute a column combination, and check that it returns the value that the prover is claiming. And there’s our proof system (or rather, the polynomial commitment scheme, which is the key building block of a proof system).

What did we not cover?

  • Efficient algorithms to extend the rows, which are needed to actually make the verifier computationally efficient. We use fast Fourier transforms over binary fields, described here (though the exact implementation will be different, because this post uses a less efficient construction not based on recursive extension).
  • Arithmetization. Univariate polynomials are convenient because you can do things like F(X+2)-F(X+1)-F(X) = Z(X)*H(X) to relate adjacent steps in the computation. In a hypercube, “the next step” is far less cleanly interpretable than “X+1”. You could do X+k, or jump around by powers of k, but this jumping behavior would sacrifice many of Binius’s key advantages. The Binius paper presents a solution (see section 4.3), but this is a deep rabbit hole in itself.
  • How to actually safely do specific-value checks. The Fibonacci example requires checking the key boundary conditions: F(0)=F(1)=1 and the value of F(100). But with “raw” Binius, it is insecure to check at pre-known evaluation points. There are fairly simple ways to convert a known-evaluation check into an unknown-evaluation check, using what are called sum-check protocols; but we did not get into those here.
  • Lookup protocols, another technology which has been recently gaining usage as a way to make ultra-efficient proving systems. Binius can be combined with lookup protocols for many applications.
  • Going beyond square-root verification time. Square-root verification is expensive: a Binius proof is about 11 MB long. You can remedy this by using some other proof system to make a “proof of a Binius proof”, thus gaining both Binius’s efficiency in proving the main statement and a small proof size. Another option is the much more complicated FRI-Binius protocol, which creates a poly-logarithmic-sized proof (like regular FRI).
  • How Binius affects what counts as “SNARK-friendly”. The basic summary is that, if you use Binius, you no longer need to care much about making computation “arithmetic-friendly”: “regular” hashes are no longer less efficient than special arithmetic-friendly hashes, multiplication modulo a power of two is no longer a big headache compared to multiplication modulo a prime, and so forth. But this is a complicated topic; lots of things change when everything is done in binary.

I expect many more improvements in binary-field-based proving techniques in the months ahead.

Disclaimer:

  1. This article is reprinted from [Panews], under the original title ‘Vitalik explains Binius in detail: an efficient proof system based on binary fields’. All copyrights belong to the original author [Vitalik Buterin]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

Recap: finite fields

Recap: arithmetization

Plonky2: from 256-bit SNARKs and STARKs to 64-bit… only STARKs

From small primes to binary

From univariate polynomials to hypercubes

Simple Binius - an example

Binary field

Full Binius

Putting it all together

Binius, a Highly Efficient Proof System

Advanced5/16/2024, 8:52:36 AM
Vitalik Buterin provides a detailed introduction to Binius, a highly efficient proof system based on binary fields. The article first reviews the concepts of finite fields and arithmetization, explaining how SNARK and STARK proof systems work by converting program statements into polynomial equations. Vitalik points out that although Plonky2 has proven that using smaller 64-bit and 31-bit fields can significantly improve the efficiency of proof generation, Binius further enhances efficiency by operating directly on zeros and ones, taking advantage of the features of binary fields. Binius uses multivariate polynomials to represent computational traces and employs a series of mathematical tricks, including the concept of hypercubes and Reed-Solomon encoding, to construct proofs. Vitalik believes that the direct computational ability of binary fields and operations on bits are key to Binius's efficiency.

Recap: finite fields

Recap: arithmetization

Plonky2: from 256-bit SNARKs and STARKs to 64-bit… only STARKs

From small primes to binary

From univariate polynomials to hypercubes

Simple Binius - an example

Binary field

Full Binius

Putting it all together

Forwarded original title ‘Vitalik explains Binius in detail: an efficient proof system based on binary fields’

This post is primarily intended for readers roughly familiar with 2019-era cryptography, especially SNARKs and STARKs. If you are not, I recommend reading those articles first. Special thanks to Justin Drake, Jim Posen, Benjamin Diamond and Radi Cojbasic for feedback and review.

Over the past two years, STARKs have become a crucial and irreplaceable technology for efficiently making easy-to-verify cryptographic proofs of very complicated statements (eg. proving that an Ethereum block is valid). A key reason why is small field sizes: whereas elliptic curve-based SNARKs require you to work over 256-bit integers in order to be secure enough, STARKs let you use much smaller field sizes, which are more efficient: first the Goldilocks field (64-bit integers), and then Mersenne31 and BabyBear (both 31-bit). Thanks to these efficiency gains, Plonky2, which uses Goldilocks, is hundreds of times faster at proving many kinds of computation than its predecessors.

A natural question to ask is: can we take this trend to its logical conclusion, building proof systems that run even faster by operating directly over zeroes and ones? This is exactly what Binius is trying to do, using a number of mathematical tricks that make it very different from the SNARKs and STARKs of three years ago. This post goes through the reasons why small fields make proof generation more efficient, why binary fields are uniquely powerful, and the tricks that Binius uses to make proofs over binary fields work so effectively.

Binius. By the end of this post, you should be able to understand every part of this diagram.

Recap: finite fields

One of the key tasks of a cryptographic proving system is to operate over huge amounts of data, while keeping the numbers small. If you can compress a statement about a large program into a mathematical equation involving a few numbers, but those numbers are as big as the original program, you have not gained anything.

To do complicated arithmetic while keeping numbers small, cryptographers generally use modular arithmetic. We pick some prime “modulus” p. The % operator means “take the remainder of”: 15 % 7=1, 53 % 10=3, etc (note that the answer is always non-negative, so for example −1 % 10=9).

You’ve probably already seen modular arithmetic, in the context of adding and subtracting time (eg. what time is four hours after 9:00?). But here, we don’t just add and subtract modulo some number, we also multiply, divide and take exponents.

We redefine:

The above rules are all self-consistent. For example, if p=7, then:

5+3=1 (because 8%7=1)

1-3=5 (because -2%7=5)

2*5=3

3/5=2

A more general term for this kind of structure is a finite field. A finite field is a mathematical structure that obeys the usual laws of arithmetic, but where there’s a limited number of possible values, and so each value can be represented in a fixed size.

Modular arithmetic (or prime fields) is the most common type of finite field, but there is also another type: extension fields. You’ve probably already seen an extension field before: the complex numbers. We “imagine” a new element, which we label 𝑖, and declare that it satisfies 𝑖2=−1. You can then take any combination of regular numbers and 𝑖, and do math with it: (3𝑖+2)∗(2𝑖+4)= 6𝑖2+12𝑖+4𝑖+8=16𝑖+2. We can similarly take extensions of prime fields. As we start working over fields that are smaller, extensions of prime fields become increasingly important for preserving security, and binary fields (which Binius uses) depend on extensions entirely to have practical utility.

Recap: arithmetization

The way that SNARKs and STARKs prove things about computer programs is through arithmetization: you convert a statement about a program that you want to prove, into a mathematical equation involving polynomials. A valid solution to the equation corresponds to a valid execution of the program.

To give a simple example, suppose that I computed the 100’th Fibonacci number, and I want to prove to you what it is. I create a polynomial 𝐹 that encodes Fibonacci numbers: so 𝐹(0)=𝐹(1)=1, 𝐹(2)=2, 𝐹(3)=3, 𝐹(4)=5, and so on for 100 steps. The condition that I need to prove is that 𝐹(𝑥+2)=𝐹(𝑥)+𝐹(𝑥+1) across the range 𝑥={0,1…98}. I can convince you of this by giving you the quotient:

where Z(x) = (x-0) (x-1) …(x-98). . If I can provide that there is F and H satisfies this equation, then F must satisfy F(x+2)-F(x+1)-F(x) in the range. If I additionally verify that F is satisfied, F(0)=F(1)=1, then F(100) must actually be the 100th Fibonacci number.

If you want to prove something more complicated, then you replace the “simple” relation 𝐹(𝑥+2)=𝐹(𝑥)+𝐹(𝑥+1) with a more complicated equation, which basically says “𝐹(𝑥+1) is the output of initializing a virtual machine with the state 𝐹(𝑥), and running one computational step”. You can also replace the number 100 with a bigger number, eg. 100000000, to accommodate more steps.

All SNARKs and STARKs are based on this idea of using a simple equation over polynomials (or sometimes vectors and matrices) to represent a large number of relationships between individual values. Not all involve checking equivalence between adjacent computational steps in the same way as above: PLONK does not, for example, and neither does R1CS. But many of the most efficient ones do, because enforcing the same check (or the same few checks) many times makes it easier to minimize overhead.

Plonky2: from 256-bit SNARKs and STARKs to 64-bit… only STARKs

Five years ago, a reasonable summary of the different types of zero knowledge proof was as follows. There are two types of proofs: (elliptic-curve-based) SNARKs and (hash-based) STARKs. Technically, STARKs are a type of SNARK, but in practice it’s common to use “SNARK” to refer to only the elliptic-curve-based variety, and “STARK” to refer to hash-based constructions. SNARKs are small, and so you can verify them very quickly and fit them onchain easily. STARKs are big, but they don’t require trusted setups, and they are quantum-resistant.

STARKs work by treating the data as a polynomial, computing evaluations of that polynomial across a large number of points, and using the Merkle root of that extended data as the “polynomial commitment”

A key bit of history here is that elliptic curve-based SNARKs came into widespread use first: it took until roughly 2018 for STARKs to become efficient enough to use, thanks to FRI, and by then Zcash had already been running for over a year. Elliptic curve-based SNARKs have a key limitation: if you want to use elliptic curve-based SNARKs, then the arithmetic in these equations must be done with integers modulo the number of points on the elliptic curve. This is a big number, usually near 2256: for example, for the bn128 curve, it’s 21888242871839275222246405745257275088548364400416034343698204186575808495617. But the actual computation is using small numbers: if you think about a “real” program in your favorite language, most of the stuff it’s working with is counters, indices in for loops, positions in the program, individual bits representing True or False, and other things that will almost always be only a few digits long.

Even if your “original” data is made up of “small” numbers, the proving process requires computing quotients, extensions, random linear combinations, and other transformations of the data, which lead to an equal or larger number of objects that are, on average, as large as the full size of your field. This creates a key inefficiency: to prove a computation over n small values, you have to do even more computation over n much bigger values. At first, STARKs inherited the habit of using 256-bit fields from SNARKs, and so suffered the same inefficiency.

A Reed-Solomon extension of some polynomial evaluations. Even though the original values are small, the extra values all blow up to the full size of the field (in this case 2 to the power 31 -1)

In 2022, Plonky2 was released. Plonky2’s main innovation was doing arithmetic modulo a smaller prime: 264−232+1=18446744069414584321. Now, each addition or multiplication can always be done in just a few instructions on a CPU, and hashing all of the data together is 4x faster than before. But this comes with a catch: this approach is STARK-only. If you try to use a SNARK, with an elliptic curve of such a small size, the elliptic curve becomes insecure.

To continue to be safe, Plonky2 also needed to introduce extension fields. A key technique in checking arithmetic equations is “sampling at a random point”: if you want to check if 𝐻(𝑥)∗𝑍(𝑥) actually equals 𝐹(𝑥+2)−𝐹(𝑥+1)−𝐹(𝑥), you can pick some random coordinate 𝑟, provide polynomial commitment opening proofs proving 𝐻(𝑟), 𝑍(𝑟), 𝐹(𝑟), 𝐹(𝑟+1) and 𝐹(𝑟+2), and then actually check if 𝐻(𝑟)∗𝑍(𝑟) equals 𝐹(𝑟+2)−𝐹(𝑟+1)−𝐹(𝑟). If the attacker can guess the coordinate ahead of time, the attacker can trick the proof system - hence why it must be random. But this also means that the coordinate must be sampled from a set large enough that the attacker cannot guess it by random chance. If the modulus is near 2256, this is clearly the case. But with a modulus of 264−232+1, we’re not quite there, and if we drop to 231−1, it’s definitely not the case. Trying to fake a proof two billion times until one gets lucky is absolutely within the range of an attacker’s capabilities.

To stop this, we sample 𝑟 from an extension field. For example, you can define 𝑦 where 𝑦3=5, and take combinations of 1, 𝑦 and 𝑦2. This increases the total number of coordinates back up to roughly 293. The bulk of the polynomials computed by the prover don’t go into this extension field; they just use integers modulo 231−1, and so you still get all the efficiencies from using the small field. But the random point check, and the FRI computation, does dive into this larger field, in order to get the needed security.

From small primes to binary

Computers do arithmetic by representing larger numbers as sequences of zeroes and ones, and building “circuits” on top of those bits to compute things like addition and multiplication. Computers are particularly optimized for doing computation with 16-bit, 32-bit and 64-bit integers. Moduluses like 264−232+1 and 231−1 are chosen not just because they fit within those bounds, but also because they align well with those bounds: you can do multiplication modulo 264−232+1 by doing regular 32-bit multiplication, and shift and copy the outputs bitwise in a few places; this article explains some of the tricks well.

What would be even better, however, is doing computation in binary directly. What if addition could be “just” XOR, with no need to worry about “carrying” the overflow from adding 1 + 1 in one bit position to the next bit position? What if multiplication could be more parallelizable in the same way? And these advantages would all come on top of being able to represent True/False values with just one bit.

Capturing these advantages of doing binary computation directly is exactly what Binius is trying to do. A table from the Binius team’s zkSummit presentation shows the efficiency gains:

Despite being roughly the same “size”, a 32-bit binary field operation takes 5x less computational resources than an operation over the 31-bit Mersenne field.

From univariate polynomials to hypercubes

Suppose that we are convinced by this reasoning, and want to do everything over bits (zeroes and ones). How do we actually commit to a polynomial representing a billion bits?

Here, we face two practical problems:

  1. For a polynomial to represent a lot of values, those values need to be accessible at evaluations of the polynomial: in our Fibonacci example above, 𝐹(0), 𝐹(1) … 𝐹(100), and in a bigger computation, the indices would go into the millions. And the field that we use needs to contain numbers going up to that size.

  2. Proving anything about a value that we’re committing to in a Merkle tree (as all STARKs do) requires Reed-Solomon encoding it: extending 𝑛 values into eg. 8𝑛 values, using the redundancy to prevent a malicious prover from cheating by faking one value in the middle of the computation. This also requires having a large enough field: to extend a million values to 8 million, you need 8 million different points at which to evaluate the polynomial.

A key idea in Binius is solving these two problems separately, and doing so by representing the same data in two different ways. First, the polynomial itself. Elliptic curve-based SNARKs, 2019-era STARKs, Plonky2 and other systems generally deal with polynomials over one variable: 𝐹(𝑥). Binius, on the other hand, takes inspiration from the Spartan protocol, and works with multivariate polynomials: 𝐹(𝑥1,𝑥2…𝑥𝑘). In fact, we represent the entire computational trace on the “hypercube” of evaluations where each 𝑥𝑖 is either 0 or 1. For example, if we wanted to represent a sequence of Fibonacci numbers, and we were still using a field large enough to represent them, we might visualize the first sixteen of them as being something like this:

That is, 𝐹(0,0,0,0) would be 1, 𝐹(1,0,0,0) would also be 1, 𝐹(0,1,0,0) would be 2, and so forth, up until we get to 𝐹(1,1,1,1)=987. Given such a hypercube of evaluations, there is exactly one multilinear (degree-1 in each variable) polynomial that produces those evaluations. So we can think of that set of evaluations as representing the polynomial; we never actually need to bother computing the coefficients.

This example is of course just for illustration: in practice, the whole point of going to a hypercube is to let us work with individual bits. The “Binius-native” way to count Fibonacci numbers would be to use a higher-dimensional cube, using each set of eg. 16 bits to store a number. This requires some cleverness to implement integer addition on top of the bits, but with Binius it’s not too difficult.

Now, we get to the erasure coding. The way STARKs work is: you take 𝑛 values, Reed-Solomon extend them to a larger number of values (often 8𝑛, usually between 2𝑛 and 32𝑛), and then randomly select some Merkle branches from the extension and perform some kind of check on them. A hypercube has length 2 in each dimension. Hence, it’s not practical to extend it directly: there’s not enough “space” to sample Merkle branches from 16 values. So what do we do instead? We pretend the hypercube is a square!

Simple Binius - an example

See here for a python implementation of this protocol.

Let’s go through an example, using regular integers as our field for convenience (in a real implementation this will be binary field elements). First, we take the hypercube we want to commit to, and encode it as a square:

Now, we Reed-Solomon extend the square. That is, we treat each row as being a degree-3 polynomial evaluated at x = {0, 1, 2, 3}, and evaluate the same polynomial at x = {4, 5, 6, 7}:

Notice that the numbers blow up quickly! This is why in a real implementation, we always use a finite field for this, instead of regular integers: if we used integers modulo 11, for example, the extension of the first row would just be [3, 10, 0, 6].

If you want to play around with extending and verify the numbers here for yourself, you can use my simple Reed-Solomon extension code here.

Next, we treat this extension as columns, and make a Merkle tree of the columns. The root of the Merkle tree is our commitment.

Now, let’s suppose that the prover wants to prove an evaluation of this polynomial at some point 𝑟={𝑟0,𝑟1,𝑟2,𝑟3}. There is one nuance in Binius that makes it somewhat weaker than other polynomial commitment schemes: the prover should not know, or be able to guess, 𝑠, until after they committed to the Merkle root (in other words, 𝑟 should be a pseudo-random value that depends on the Merkle root). This makes the scheme useless for “database lookup” (eg. “ok you gave me the Merkle root, now prove to me 𝑃(0,0,1,0)!”). But the actual zero-knowledge proof protocols that we use generally don’t need “database lookup”; they simply need to check the polynomial at a random evaluation point. Hence, this restriction is okay for our purposes.

Suppose we pick 𝑟={1,2,3,4} (the polynomial, at this point, evaluates to −137; you can confirm it with this code). Now, we get into the process of actually making the proof. We split up 𝑟 into two parts: the first part {1,2} representing a linear combination of columns within a row, and the second part {3,4} representing a linear combination of rows. We compute a “tensor product”, both for the column part:

And for the row part:

What this means is: a list of all possible products of one value from each set. In the row case, we get:

[(1-r2)(1-r3), (1-r3), (1-r2)r3, r2*r3]

Use r={1,2,3,4} (so r2=3 and r3=4):

[(1-3)(1-4), 3(1-4),(1-3)4,34] = [6, -9 -8 -12]

Now, we compute a new “row” 𝑡′, by taking this linear combination of the existing rows. That is, we take:

You can view what’s going on here as a partial evaluation. If we were to multiply the full tensor product ⨂𝑖=03(1−𝑟𝑖,𝑟𝑖) by the full vector of all values, you would get the evaluation 𝑃(1,2,3,4)=−137. Here we’re multiplying a partial tensor product that only uses half the evaluation coordinates, and we’re reducing a grid of 𝑁 values to a row of 𝑁 values. If you give this row to someone else, they can use the tensor product of the other half of the evaluation coordinates to complete the rest of the computation.

The prover provides the verifier with this new row, 𝑡′, as well as the Merkle proofs of some randomly sampled columns. This is 𝑂(𝑁) data. In our illustrative example, we’ll have the prover provide just the last column; in real life, the prover would need to provide a few dozen columns to achieve adequate security.

Now, we take advantage of the linearity of Reed-Solomon codes. The key property that we use is: taking a linear combination of a Reed-Solomon extension gives the same result as a Reed-Solomon extension of a linear combination. This kind of “order independence” often happens when you have two operations that are both linear.

The verifier does exactly this. They compute the extension of 𝑡′, and they compute the same linear combination of columns that the prover computed before (but only to the columns provided by the prover), and verify that these two procedures give the same answer.

In this case, extending 𝑡′, and computing the same linear combination ([6,−9,−8,12]) of the column, both give the same answer: −10746. This proves that the Merkle root was constructed “in good faith” (or it at least “close enough”), and it matches 𝑡′: at least the great majority of the columns are compatible with each other and with 𝑡′.

But the verifier still needs to check one more thing: actually check the evaluation of the polynomial at {𝑟0..𝑟3}. So far, none of the verifier’s steps actually depended on the value that the prover claimed. So here is how we do that check. We take the tensor product of what we labeled as the “column part” of the evaluation point:

In our example, where r={1,2,3,4} so the half of the selected column is {1,2}), this equals:

So now we take this linear combination of 𝑡′:

Which exactly equals the answer you get if you evaluate the polynomial directly.

The above is pretty close to a complete description of the “simple” Binius protocol. This already has some interesting advantages: for example, because the data is split into rows and columns, you only need a field half the size. But this doesn’t come close to realizing the full benefits of doing computation in binary. For this, we will need the full Binius protocol. But first, let’s get a deeper understanding of binary fields.

Binary field

The smallest possible field is arithmetic modulo 2, which is so small that we can write out its addition and multiplication tables (addition is XOR, multiplication is AND):

    +  0  1        *  0  1
    0  0  1        0  0  0
    1  1  0        1  0  1

We can make larger binary fields by taking extensions: if we start with F2 (integers modulo 2) and then define x0 where x0^2 = x0 + 1, we get the following addition and multiplication tables:

It turns out that we can expand the binary field to arbitrarily large sizes by repeating this construction. Unlike with complex numbers over the reals, where you can add one new element i but you can’t add any more (quaternions do exist, but they’re mathematically weird, eg. ab ≠ ba), with finite fields you can keep adding new extensions forever. Specifically, we define elements as follows:

    x0^2 = x0 + 1
    x1^2 = x1 * x0 + 1
    x2^2 = x2 * x1 + 1
    x3^2 = x3 * x2 + 1

And so on. This is often called the tower construction, because of how each successive extension can be viewed as adding a new layer to a tower. This is not the only way to construct binary fields of arbitrary size, but it has some unique advantages that Binius takes advantage of.

We can represent these numbers as a list of bits, eg. 1100101010001111. The first bit represents multiples of 1, the second bit represents multiples of x0, then subsequent bits represent multiples of x1, x1*x0, x2, x2*x0, and so forth. This encoding is nice because you can decompose it: for example,

    1100101010001111 = 11001010 + 10001111 * x3

This is a relatively uncommon notation, but I like representing binary field elements as integers, taking the bit representation where more-significant bits are to the right. That is, 1 = 1, x0 = 01 = 2, 1 + x0 = 11 = 3, 1 + x0 + x2 = 11001 = 19, and so forth. 1100101010001111 is, in this representation, 61779.
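A tiny helper (the name is mine) to go from this written bit order to the integer representation, reproducing the values above:

    def bits_to_int(bitstring):
        # The first character is the least-significant bit
        # ("more-significant bits are to the right").
        return sum(int(b) << i for i, b in enumerate(bitstring))

    assert bits_to_int("11001") == 19
    assert bits_to_int("1100101010001111") == 61779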

Addition in a binary field is just XOR (and so is subtraction, by the way); note that this means x + x = 0 for any x. To multiply two elements x*y, there is a very simple recursive algorithm: split each number in half:

    x = Lx + Rx * x_k
    y = Ly + Ry * x_k

Then, split up the multiplication:

    x * y = Lx * Ly + (Lx * Ry + Rx * Ly) * x_k + Rx * Ry * x_k^2

The last piece is the only slightly tricky one, because you have to apply the reduction rule, and replace Rx * Ry * x_k^2 with Rx * Ry * (x_{k-1} * x_k + 1). There are more efficient ways to do multiplication, analogues of the Karatsuba algorithm and fast Fourier transforms, but I will leave it as an exercise to the interested reader to figure those out.
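Here is a minimal Python sketch of this recursive multiplication, using the integer encoding described above (a simplified illustration, not the optimized code from the Binius implementation):

    def binmul(v1, v2, length=None):
        # Multiply two binary tower field elements, encoded as integers where
        # bit i is the coefficient of the i-th basis product of x0, x1, ...
        if v1 < 2 or v2 < 2:
            return v1 * v2
        if length is None:
            # Smallest power-of-two bit width that fits both inputs
            length = 1 << (max(v1, v2).bit_length() - 1).bit_length()
        halflen, quarterlen = length // 2, length // 4
        halfmask = (1 << halflen) - 1
        L1, R1 = v1 & halfmask, v1 >> halflen   # v1 = L1 + R1 * x_k
        L2, R2 = v2 & halfmask, v2 >> halflen   # v2 = L2 + R2 * x_k
        L1L2 = binmul(L1, L2, halflen)
        R1R2 = binmul(R1, R2, halflen)
        # Reduction rule: x_k^2 = x_{k-1} * x_k + 1, where x_{k-1} is the
        # element 1 << quarterlen of the half-sized field
        R1R2_shifted = binmul(R1R2, 1 << quarterlen, halflen)
        middle = binmul(L1, R2, halflen) ^ binmul(R1, L2, halflen)
        return (L1L2 ^ R1R2) ^ ((middle ^ R1R2_shifted) << halflen)

    assert binmul(3, 3) == 2   # (1 + x0)^2 = 1 + x0^2 = 1 + (x0 + 1) = x0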

Division in binary fields is done by combining multiplication and inversion. The “simple but slow” way to do inversion is an application of generalized Fermat’s little theorem. There is also a more complicated but more efficient inversion algorithm, which you can find here. You can use the code here to play around with binary field addition, multiplication and division yourself.

Left: addition table for four-bit binary field elements (ie. elements made up only of combinations of 1, 𝑥0,𝑥1 and 𝑥0𝑥1).

Right: multiplication table for four-bit binary field elements.

The beautiful thing about this type of binary field is that it combines some of the best parts of “regular” integers and modular arithmetic. Like regular integers, binary field elements are unbounded: you can keep extending as far as you want. But like modular arithmetic, if you do operations over values within a certain size limit, all of your answers also stay within the same bound. For example, if you take successive powers of 42, you get:

After 255 steps, you’re back to 42^255 = 1. Just like with positive integers and modular arithmetic, these operations follow the usual mathematical laws: a*b = b*a, a*(b+c) = a*b + a*c, and so on; there are even some weird new identities, for example (a+b)^2 = a^2 + b^2, because x + x = 0 makes the cross terms cancel.
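With the binmul sketch above, you can check this behavior directly:

    # Powers of 42 stay within the 8-bit field, and (since the nonzero elements
    # form a multiplicative group of order 255) return to 1 after 255 steps.
    p = 1
    for _ in range(255):
        p = binmul(p, 42)
        assert p < 256
    assert p == 1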

Finally, binary fields make it convenient to handle bits: if you do math with numbers that fit into 2^k bits, then all of your output will also fit into 2^k bits. This avoids awkwardness. In Ethereum’s EIP-4844, each element of a blob has to be a number modulo 52435875175126190479447740508185965837690552500527637822603658699938581184513, so encoding binary data requires throwing away some space and doing an extra check to make sure that each element stores a value less than 2^248. This also means that binary field operations are super fast on computers - both on CPUs, and in theoretically optimal FPGA and ASIC designs.

What this all means is that we can do something like the Reed-Solomon encoding we did above, in a way that completely avoids integers “blowing up” (as we saw happening in our example), and in a very “native” way, the kind of computation that computers are good at. The “splitting” property of binary fields - how we were able to write 1100101010001111 = 11001010 + 10001111 * x3, and then split it up further as needed - is also crucial for allowing a lot of flexibility.

Full Binius

See here for a python implementation of this protocol.

Now, we can get to “full Binius”, which adjusts “simple Binius” to (i) work over binary fields, and (ii) let us commit to individual bits. This protocol is tricky to understand, because it keeps going back and forth between different ways of looking at a matrix of bits; it certainly took me longer to understand than it usually takes me to understand a cryptographic protocol. But once you understand binary fields, the good news is that there isn’t any “harder math” that Binius depends on. This is not elliptic curve pairings, where there are deeper and deeper rabbit holes of algebraic geometry to go down; here, binary fields are all you need.

Let’s look again at the full diagram:

By now, you should be familiar with most of the components. The idea of “flattening” a hypercube into a grid, the idea of computing a row combination and a column combination as tensor products of the evaluation point, and the idea of checking equivalence between “Reed-Solomon extending then computing the row combination”, and “computing the row combination then Reed-Solomon extending”, were all in simple Binius.

What’s new in “full Binius”? Basically three things:

  • The individual values in the hypercube, and in the square, have to be bits (0 or 1)
  • The extension process extends bits into more bits, by grouping bits into columns and temporarily pretending that they are larger field elements
  • After the row combination step, there’s an element-wise “decompose into bits” step, which converts the extension back into bits

We will go through these in turn. First, the new extension procedure. A Reed-Solomon code has the fundamental limitation that if you are extending n values to k*n values, you need to be working in a field that has k*n different values that you can use as coordinates. With F2 (aka, bits), you cannot do that. And so what we do is, we “pack” adjacent F2 elements together into larger values. In the example here, we’re packing two bits at a time into elements in {0, 1, 2, 3}, because our extension only has four evaluation points and so that’s enough for us. In a “real” proof, we would probably pack 16 bits at a time together. We then do the Reed-Solomon code over these packed values, and unpack them again into bits.
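A sketch of the packing and unpacking step (helper names are mine): adjacent bits are grouped into one small field element each before extension, and split back into bits afterwards:

    def pack_bits(bits, group_size=2):
        # Group adjacent bits into one field element each, low bits first,
        # matching the integer encoding used throughout this post.
        return [sum(b << i for i, b in enumerate(bits[j:j + group_size]))
                for j in range(0, len(bits), group_size)]

    def unpack_bits(elements, group_size=2):
        return [(e >> i) & 1 for e in elements for i in range(group_size)]

    assert pack_bits([1, 0, 1, 1]) == [1, 3]   # pairs of bits become elements of {0,1,2,3}
    assert unpack_bits([1, 3]) == [1, 0, 1, 1]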

Now, the row combination. To make “evaluate at a random point” checks cryptographically secure, we need that point to be sampled from a pretty large space, much larger than the hypercube itself. Hence, while the points within the hypercube are bits, evaluations outside the hypercube will be much larger. In our example above, the “row combination” ends up being [11,4,6,1].

This presents a problem: we know how to combine pairs of bits into a larger value, and then do a Reed-Solomon extension on that, but how do you do the same to pairs of much larger values?

The trick in Binius is to do it bitwise: we look at the individual bits of each value (eg. for what we labeled as “11”, that’s [1,1,0,1]), and then we extend row-wise. That is, we perform the extension procedure on the “1” row of each element, then on the “x0” row, then on the “x1” row, then on the “x0*x1” row, and so forth (in our toy example we stop there, but in a real implementation we would go up to 128 rows, the last one being x6 * … * x0).
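A sketch of this bitwise view (function names are mine): a row of field elements is re-read as one row of bits per basis component, and each of those bit-rows can then be extended with the procedure for bits:

    def to_bit_rows(elements, num_bits=4):
        # Row i holds bit i of every element: the "1" row, the "x0" row,
        # the "x1" row, the "x0*x1" row, and so on.
        return [[(e >> i) & 1 for e in elements] for i in range(num_bits)]

    def from_bit_rows(bit_rows):
        return [sum(bit_rows[i][j] << i for i in range(len(bit_rows)))
                for j in range(len(bit_rows[0]))]

    row = [11, 4, 6, 1]  # the row combination from the example above
    assert to_bit_rows(row) == [[1, 0, 0, 1], [1, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 0]]
    assert from_bit_rows(to_bit_rows(row)) == row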

Recapping:

  • We take the bits in the hypercube, and convert them into a grid
  • Then, we treat adjacent groups of bits on each row as larger field elements, and do arithmetic on them to Reed-Solomon extend the rows
  • Then, we take a row combination of each column of bits, and get a (for squares larger than 4x4, much smaller) column of bits for each row as the output
  • Then, we look at the output as a matrix, and treat the bits of that as rows again

Why does this work? In “normal” math, the ability to (often) do linear operations in either order and get the same result stops working if you start slicing a number up by digits. For example, if I start with the number 345, and I multiply it by 8 and then by 3, I get 8280, and if I do those two operations in the reverse order, I also get 8280. But if I insert a “split by digit” operation in between the two steps, it breaks down. If you do 8x, then split by digit, then 3x, you get:

    345 → 2760 → [2, 7, 6, 0] → [6, 21, 18, 0]

But if you do 3x, then split by digit, then 8x, you get:

    345 → 1035 → [1, 0, 3, 5] → [8, 0, 24, 40]
But in a binary field built with the tower structure, this approach does work. The reason lies in its separability: if you multiply a large value by a small value, what happens on each segment stays on that segment. If we multiply 1100101010001111 by 11, this is the same as taking the first decomposition of 1100101010001111, which is

    1100101010001111 = 11001010 + 10001111 * x3

and then multiplying each component by 11 separately.
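As a quick sanity check of this claim (using the binmul sketch from earlier; 61779, 83, 241 and 3 are the integer encodings of 1100101010001111, 11001010, 10001111 and 11 respectively):

    v = 61779                        # 1100101010001111
    low, high = v & 0xFF, v >> 8     # 11001010 -> 83, 10001111 -> 241
    small = 3                        # the element "11", ie. 1 + x0
    assert binmul(v, small) == binmul(low, small) ^ (binmul(high, small) << 8)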

Putting it all together

Generally, zero knowledge proof systems work by making statements about polynomials that simultaneously represent statements about the underlying evaluations: just like we saw in the Fibonacci example, 𝐹(𝑋+2)−𝐹(𝑋+1)−𝐹(𝑋)=𝑍(𝑋)∗𝐻(𝑋) simultaneously checks all steps of the Fibonacci computation. We check statements about polynomials by proving evaluations at a random point. This check at a random point stands in for checking the whole polynomial: if the polynomial equation doesn’t match, the chance that it matches at a specific random coordinate is tiny.

In practice, a major source of inefficiency comes from the fact that in real programs, most of the numbers we are working with are tiny: indices in for loops, True/False values, counters, and similar things. But when we “extend” the data using Reed-Solomon encoding to give it the redundancy needed to make Merkle proof-based checks safe, most of the “extra” values end up taking up the full size of a field, even if the original values are small.

To get around this, we want to make the field as small as possible. Plonky2 brought us down from 256-bit numbers to 64-bit numbers, and then Plonky3 went further to 31 bits. But even this is sub-optimal. With binary fields, we can work over individual bits. This makes the encoding “dense”: if your actual underlying data has n bits, then your encoding will have n bits, and the extension will have 8 * n bits, with no extra overhead.

Now, let’s look at the diagram a third time:

In Binius, we are committing to a multilinear polynomial: a hypercube P(x0, x1, … xk), where the individual evaluations P(0,0…0), P(0,0…1) up to P(1,1,…1) are holding the data that we care about. To prove an evaluation at a point, we “re-interpret” the same data as a square. We then extend each row, using Reed-Solomon encoding over groups of bits, to give the data the redundancy needed for random Merkle branch queries to be secure. We then compute a random linear combination of rows, with coefficients designed so that the new combined row actually holds the evaluation that we care about. Both this newly-created row (which gets re-interpreted as 128 rows of bits), and a few randomly-selected columns with Merkle branches, get passed to the verifier.

The verifier then does a “row combination of the extension” (or rather, a few columns of the extension), and an “extension of the row combination”, and verifies that the two match. They then compute a column combination, and check that it returns the value that the prover is claiming. And there’s our proof system (or rather, the polynomial commitment scheme, which is the key building block of a proof system).

What did we not cover?

  • Efficient algorithms to extend the rows, which are needed to actually make the verifier computationally efficient. We use fast Fourier transforms over binary fields, described here (though the exact implementation will be different, because this post uses a less efficient construction not based on recursive extension).
  • Arithmetization. Univariate polynomials are convenient because you can do things like F(X+2) - F(X+1) - F(X) = Z(X) * H(X) to relate adjacent steps in the computation. In a hypercube, “the next step” is far less cleanly interpretable than “X+1”. You could do X+k and jump by powers of k, but this jumping behavior would sacrifice many of Binius’s key advantages. The solution is presented in the Binius paper (see Section 4.3), but this is a deep rabbit hole in itself.
  • How to actually safely do specific-value checks. The Fibonacci example requires checking the key boundary conditions: F(0)=F(1)=1 and the value of F(100). But with “raw” Binius, it is insecure to check at pre-known evaluation points. There are fairly simple ways to convert a known-evaluation check into an unknown-evaluation check, using what are called sum-check protocols; but we did not get into those here.
  • Lookup protocols, another technology which has been recently gaining usage as a way to make ultra-efficient proving systems. Binius can be combined with lookup protocols for many applications.
  • Going beyond square-root verification time. Square-root sizes are expensive: a Binius proof of 2^32 bits is about 11 MB long. You can remedy this by using some other proof system to make a “proof of a Binius proof”, thus gaining both Binius’s efficiency in proving the main statement and a small proof size. Another option is the much more complicated FRI-Binius protocol, which creates a poly-logarithmic-sized proof (like regular FRI).
  • How Binius affects what counts as “SNARK-friendly”. The basic summary is that, if you use Binius, you no longer need to care much about making computation “arithmetic-friendly”: “regular” hashes are no longer less efficient than specially designed arithmetic-friendly hashes, multiplication modulo a power of two is no longer a big headache compared to multiplication modulo a prime, and so forth. But this is a complicated topic; lots of things change when everything is done in binary.

I expect many more improvements in binary-field-based proving techniques in the months ahead.

Disclaimer:

  1. This article is reprinted from [Panews]. Forwarded original title: ‘Vitalik explains Binius in detail: an efficient proof system based on binary fields’. All copyrights belong to the original author [Vitalik Buterin]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.