Infrastructure

Rebuild ZK infrastructure for large-scale use cases

Key benefits

01

Groundbreaking performance

The new Brakedown encoding in Plonky3 can encode witness data at a speed of approximately 1.2 GB/s, providing a significant increase in proving speeds.

Initial estimates suggest a proving rate of around 4 MHz (four million VM cycles proven per second) on a single CPU-only machine, making the solution about three orders of magnitude faster than existing alternatives.

02

Support for conventional languages

Our zkVM's compatibility with LLVM opens the door to a wide array of conventional programming languages, streamlining the transition for developers into the world of decentralized systems. Additionally, EVM compatibility ensures a smooth experience for those already in the Ethereum development space, allowing for a seamless blend of familiarity and innovation.

03

Lower cost of proving

By redefining the proving process, our zkVM minimizes computational resources, significantly reducing the cost of generating zero-knowledge proofs.

This efficiency not only makes our platform more accessible but also translates into tangible savings in both time and operational costs, enabling a leaner, more cost-effective approach to secure computation.

04

Empower novel solutions

Our platform is engineered to foster innovation, giving developers the tools to create novel solutions in the decentralized space. Whether it's in finance, gaming, or any other sector seeking to leverage the power of zero-knowledge proofs, our zkVM provides the performance and versatility necessary to transform visionary ideas into deployed realities.

Frequently Asked Questions

1.

Could you provide specific performance figures?

The new Brakedown encoding in Plonky3 has dramatically increased proving speeds, enabling witness data to be encoded at approximately 1.2 GB/s. Initial estimates suggest a proving rate of around 4 MHz on a single CPU-only machine, roughly three orders of magnitude faster than current alternatives.

2.

Is there potential for further performance enhancement in the future?

While we have made substantial progress, our team continues to refine and optimize our implementations. In particular, we see significant potential for speeding up Brakedown: because it is memory-bound, we anticipate that block matrix multiplication could yield substantial further gains.
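To illustrate the general idea behind block (tiled) matrix multiplication for a memory-bound workload, here is a minimal sketch; this is illustrative only, not Plonky3's actual Brakedown implementation, and the tile size, modulus, and function name are assumptions for the example. Working on one tile at a time keeps the operands cache-resident, which is where the speedup for memory-bound code comes from.

```rust
// Illustrative cache-blocked matrix multiplication over a toy prime field.
// Matrices are stored row-major as flat slices of length n * n.
// NOTE: the tile size, modulus, and API here are hypothetical.

const P: u64 = (1 << 31) - 1; // illustrative modulus (Mersenne prime 2^31 - 1)
const BLOCK: usize = 64; // tile size chosen so tiles fit in fast cache

fn matmul_blocked(a: &[u64], b: &[u64], n: usize) -> Vec<u64> {
    let mut c = vec![0u64; n * n];
    // Iterate over BLOCK x BLOCK tiles so each sub-matrix stays cache-resident
    // while it is reused across the inner loops.
    for ii in (0..n).step_by(BLOCK) {
        for kk in (0..n).step_by(BLOCK) {
            for jj in (0..n).step_by(BLOCK) {
                for i in ii..(ii + BLOCK).min(n) {
                    for k in kk..(kk + BLOCK).min(n) {
                        let aik = a[i * n + k];
                        for j in jj..(jj + BLOCK).min(n) {
                            // aik and b[...] are < 2^31, so the product fits in u64
                            c[i * n + j] = (c[i * n + j] + aik * b[k * n + j]) % P;
                        }
                    }
                }
            }
        }
    }
    c
}
```

The loop nest computes the same result as a naive triple loop; only the traversal order changes, trading no extra arithmetic for far better cache locality.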

3.

During which phase of the prover do you anticipate the Mersenne prime to offer superior performance?

The Mersenne prime offers notable performance advantages in arithmetization-friendly hashes such as Rescue, thanks to its remarkably efficient field arithmetic; for instance, an M1 chip can execute more than three field multiplications per cycle.
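A minimal sketch of why arithmetic modulo the Mersenne prime p = 2^31 - 1 is so cheap: because 2^31 ≡ 1 (mod p), reducing a product needs only a shift, a mask, and an add, with no division. This is illustrative only; the field types in a real prover are far more heavily optimized.

```rust
// Toy multiplication in the Mersenne-31 field, p = 2^31 - 1.
// Inputs are assumed to already be reduced (i.e., less than p).

const P: u32 = (1 << 31) - 1; // Mersenne prime 2^31 - 1

fn m31_mul(a: u32, b: u32) -> u32 {
    let prod = (a as u64) * (b as u64); // at most 62 bits
    // Since 2^31 ≡ 1 (mod p), fold the high 31 bits onto the low 31 bits.
    let lo = (prod & (P as u64)) as u32; // low 31 bits
    let hi = (prod >> 31) as u32;        // high 31 bits
    let mut r = lo + hi;                 // sum is < 2^32, so no overflow
    while r >= P {
        r -= P;
    }
    r
}
```

Because the reduction is just shifts and adds, it vectorizes well, which is what makes multiple field multiplications per cycle plausible on wide SIMD units.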

4.

Is the Tip5 hash function integral to recursion?

The Tip5 hash function is not necessarily crucial to our recursion strategy; it was primarily used as an illustrative example. We are considering Poseidon2, among other recent schemes, as potential alternatives.

5.

You mentioned the use of only degree 2 constraints in the study club. Does this involve decomposing higher-degree equations into smaller degree 2 checks at the expense of increased columns? What were the performance considerations that influenced this decision?

We typically utilize degree 3 constraints in most Algebraic Intermediate Representations (AIRs). Many logical constructs naturally map to degree 3 constraints, although occasionally we add intermediate columns manually to reduce the degree of specific relations.
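As a rough illustration of trading columns for degree (the names, modulus, and constraint are hypothetical, not taken from our AIRs): a degree-4 relation out = a·b·c·d can be split by committing an intermediate column t = a·b, leaving two constraints of degree at most 3.

```rust
// Toy example of lowering constraint degree with an intermediate column.
// All values live in an illustrative prime field; zero means "satisfied".

const P: u64 = (1 << 31) - 1; // illustrative prime modulus

/// Original degree-4 constraint: out - a*b*c*d.
fn degree4_constraint(a: u64, b: u64, c: u64, d: u64, out: u64) -> u64 {
    let abcd = a * b % P * c % P * d % P;
    (out + P - abcd) % P
}

/// Same relation after adding an intermediate column t, split into
/// two constraints of degree at most 3.
fn reduced_constraints(a: u64, b: u64, c: u64, d: u64, t: u64, out: u64) -> (u64, u64) {
    let c1 = (t + P - a * b % P) % P;           // degree 2: t - a*b
    let c2 = (out + P - t * c % P * d % P) % P; // degree 3: out - t*c*d
    (c1, c2)
}
```

The cost of the lower degree is one extra committed column (t); whether that trade pays off depends on the relative cost of columns versus constraint degree in the prover.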

6.

Can you identify areas or opcodes that could benefit from higher degree constraints?

Our multiset equality arguments could substantially benefit from higher-degree constraints: raising the degree would allow us to batch more terms into a single cumulative sum.
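For readers unfamiliar with multiset equality arguments, here is a minimal sketch of the underlying idea, shown in its grand-product form rather than the cumulative-sum form mentioned above; the field, challenge, and function name are illustrative assumptions. Two multisets are equal exactly when the polynomials ∏(x − aᵢ) and ∏(x − bᵢ) coincide, which is tested at a random challenge point.

```rust
// Toy multiset-equality check by random fingerprinting.
// Two multisets {a_i} and {b_i} are equal iff, with overwhelming
// probability over a random challenge x, prod(x - a_i) == prod(x - b_i).
// In a real prover the running product lives in a trace column.

const P: u64 = (1 << 31) - 1; // illustrative prime field

fn fingerprint(values: &[u64], challenge: u64) -> u64 {
    values
        .iter()
        .fold(1u64, |acc, &v| acc * ((challenge + P - v % P) % P) % P)
}
```

Because the check reduces to maintaining one running product (or, in log-derivative variants, one cumulative sum), higher-degree constraints let more factors be folded into each row of that column at once.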

7.

Would a 64-bit ISA be advantageous?

While a 64-bit ISA could be beneficial for applications performing large-number calculations with u64 values, we initially chose to concentrate on the 32-bit variant for simplicity's sake.

8.

Would it be challenging to allow program decoding in the style of Miden? (Currently, it appears that programs are compiled into a substantial preprocessed/trusted table)

We refrained from employing Miden's Merkle Abstract Syntax Tree (MAST) approach because of the complexities it introduces in supporting LLVM with arbitrary jumps. Treating the program as trusted has proven beneficial, eliminating the need to check for decoding faults.

9.

To what extent is it accurate to say that coprocessors are essentially FRI-based preprocessed SNARKs?

One can conceptualize coprocessors as separate STARKs, which employ either FRI or another polynomial commitment scheme (PCS) and are interconnected using multiset equality arguments.