Zero Knowledge
Oct 2, 2025

The zero-knowledge proof landscape has exploded with innovation, but beneath the cryptographic complexity lies a fundamental challenge: how do developers express computational logic in ways that zero-knowledge systems can verify? The answer lies in constraint languages—domain-specific languages (DSLs) that bridge the gap between familiar programming concepts and the mathematical foundations of zero-knowledge proofs.
This post compares three distinct approaches to constraint authoring: Halo2's PLONKish circuits used in the DarkFi zkVM, Zirgen's STARK-based virtual machine compiler by RISC Zero, and Plonky3's direct AIR constraint authoring used in Ziren and SP1. Each represents a different philosophy for how developers should interact with zero-knowledge systems, from high-level abstractions to low-level mathematical primitives.
Understanding these differences matters beyond academic interest. The constraint language you choose shapes not just how you write proofs, but how you think about computation itself. Some prioritize safety through type systems, others optimize for performance through direct control, and still others emphasize the natural expression of algorithms. By examining their architectures, data models, and practical implementations through a common Fibonacci example, we can understand when each approach shines and what tradeoffs developers must navigate.
Whether you're building privacy-preserving applications, verifiable computation systems, or exploring zero-knowledge proofs for the first time, this comparison will help you understand the landscape and choose the right tool for your specific needs.
Halo2 builds on PLONKish arithmetization, which represents computations as a rectangular matrix of values where each cell contains field elements. Think of it as a spreadsheet where each row represents a computation step and columns hold different types of data. PLONKish circuits are defined in terms of a rectangular matrix of values, with constraints (called gates) that enforce relationships between these values and selectors that can turn specific constraints on or off for different rows.
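To make the matrix picture concrete, here is a minimal toy sketch in plain Rust (not the Halo2 API): a hypothetical trace with three value columns and a selector column over a small illustrative prime, and a check that the gate s * (a + b - c) vanishes on every row. All names here (`Row`, `gate_holds`, `P`) are invented for this example.

```rust
// A toy PLONKish trace: a rectangular matrix over a small prime field,
// with a selector column that switches the "add" gate on or off per row.
const P: u64 = 101; // toy prime modulus, for illustration only

#[derive(Clone, Copy)]
struct Row {
    a: u64, // three value columns...
    b: u64,
    c: u64,
    s: u64, // ...plus a selector column
}

/// The "add" gate: on every row, s * (a + b - c) must vanish mod P.
fn gate_holds(rows: &[Row]) -> bool {
    rows.iter()
        .all(|r| (r.s * ((r.a + r.b + P - r.c) % P)) % P == 0)
}

fn main() {
    let rows = vec![
        Row { a: 1, b: 2, c: 3, s: 1 }, // constrained row: 1 + 2 = 3
        Row { a: 7, b: 7, c: 0, s: 0 }, // selector off: c is unconstrained here
    ];
    assert!(gate_holds(&rows));
}
```

The second row shows the selector's role: with s = 0 the gate imposes nothing, so that row's cells are free witness data.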
The Halo2 API elegantly abstracts this low-level matrix representation through a layered architecture. At the core, there are chips that encapsulate specific computational logic. Using their API, developers define chips that "know" how to use particular sets of custom gates for efficient computation. For example, one might have chips that implement particular cryptographic algorithms like scalar multiplication or pairings.
Above chips, developers typically interact through gadgets, which provide a more stable and convenient API. While chips implement specific algorithms such as multiplication or pairings, gadgets are more general abstractions — for instance, the built-in ECC gadget in Halo2 encapsulates all elliptic-curve operations, internally composing the multiplication and pairing chips. This indirection is useful because, for reasons of efficiency and limitations imposed by PLONKish circuits, the chip interfaces often depend on low-level implementation details, whereas the gadget interface offers a more convenient and stable API that hides those details.
The layouter orchestrates everything, managing how different chips and regions are arranged within the circuit matrix. It handles the spatial organization of gates and ensures efficient use of the available circuit area.
A key design principle of Halo2's API is the clear separation between circuit configuration and witness assignment. The configure phase defines the circuit's structure—columns, gates, and constraints—while the synthesize phase populates the circuit with actual witness values. This separation allows the same circuit definition to be used for different proof instances, with only the witness data changing between runs.
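The separation can be illustrated with a deliberately tiny sketch (invented names, not Halo2 types): the structure is fixed once, while witness values vary per proof instance.

```rust
// Toy model of the configure/synthesize split: `Config` plays the role of
// the circuit structure (defined once), while `synthesize` supplies a fresh
// witness for each proof instance.
struct Config; // the circuit structure, fixed at configure time

impl Config {
    /// The single gate this toy circuit enforces: a * b == c.
    fn gate(&self, a: u64, b: u64, c: u64) -> bool {
        a * b == c
    }
}

/// "Synthesize" a proof instance: same structure, per-instance witness.
fn synthesize(config: &Config, witness: (u64, u64, u64)) -> bool {
    let (a, b, c) = witness;
    config.gate(a, b, c)
}

fn main() {
    let config = Config;                       // one structure...
    assert!(synthesize(&config, (3, 4, 12)));  // ...many witness assignments
    assert!(synthesize(&config, (5, 6, 30)));
    assert!(!synthesize(&config, (5, 6, 31))); // invalid witness rejected
}
```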
Zirgen builds on STARK proof systems, which represent computations fundamentally differently from PLONKish circuits. Arithmetizing a virtual machine means expressing the state transition for the execution of each instruction as polynomial constraints. Instead of a rectangular matrix with per-row selectors, STARKs work with execution traces—sequences of machine states over time where each row represents a computation step and columns represent register values or memory locations.
The DSL abstracts the complexity of manually writing AIR constraints by providing higher-level constructs that compile down to the polynomial constraints required by the STARK prover.
The architecture revolves around components that encapsulate logical units of computation, similar to how chips work in Halo2 but tailored for the sequential nature of STARK traces. Each component defines registers (state variables) and the transition polynomials that govern how those registers evolve from one step to the next. To obtain the transition polynomials, the prover evaluates the transition constraints symbolically over the trace polynomials.
Unlike Halo2's spatial layouter system, Zirgen organizes computation sequentially through cycles that define the progression of machine state. The DSL provides control flow constructs and conditional logic that compile into the appropriate polynomial constraints, allowing developers to express algorithmic logic in a more natural imperative style while maintaining the mathematical rigor required for zero-knowledge proofs.
The constraint system builds on STARK's Algebraic Intermediate Representation (AIR) with several key patterns:
Inside the Air::eval(...) method, developers write polynomial identities over the trace using an AirBuilder. The builder provides access to the current row (local) and next row (one-step rotation) to express state transitions, mirroring the sequential nature of program execution.
Indicator and selector columns encode instruction dispatch, functioning similarly to selectors in Halo2 but with VM-specific semantics. The system enforces that exactly one instruction type is active per execution cycle, and each instruction's semantic constraints are guarded with when_...() functions.
Boundary constraints establish the initial and final states of the virtual machine through first-row and last-row guards.
The modular architecture allows different aspects of the virtual machine to be implemented as separate AIR modules—CPU logic, ALU operations, memory management, and I/O handling can each have their own constraint implementations. A higher-level aggregator manages cross-table constraints, creating a clean separation of concerns.
The result is a system that allows developers to think in familiar imperative, step-by-step terms while working directly with the polynomial constraint primitives that STARK systems require. Unlike higher-level abstractions, this approach provides fine-grained control over constraint generation while maintaining the conceptual clarity of proving program execution on a virtual machine.
Halo2 circuits use three fundamental column types, each serving distinct purposes:
Advice columns hold the private witness data—the secret inputs and intermediate computation results that only the prover knows. These values change with each proof generation and contain the actual data being computed over.
Instance columns store public inputs and outputs that both the prover and verifier know. These are the values that the circuit commits to publicly, such as the input to a hash function or the final result of a computation.
Fixed columns contain circuit constants and configuration data that remain the same across all proof instances. The number of columns in the matrix, together with a specification of each column as fixed, advice, or instance, defines the circuit structure.
Selectors are special fixed columns that enable or disable specific constraints on particular rows. They act like switches, allowing the same gate to be active only where needed, making circuits more flexible and efficient.
struct FibonacciConfig {
    advice: Column<Advice>,
    selector: Selector,
    instance: Column<Instance>,
}
The above code groups the specific columns that our circuit will use into a configuration struct.
Zirgen circuits are built around two fundamental register components that handle different aspects of state management in the execution trace:
At the foundation lies the NondetReg component, which creates a single column in the trace and allows the prover to write any value without imposed constraints. The term "nondeterministic" indicates that these registers are essentially free variables that the prover can set arbitrarily, making them the primary mechanism for introducing witness data into the circuit.
Building upon this foundation, the Reg component provides constrained register behavior. It is also built into the language and is defined as a composition that wraps a NondetReg with an additional equality constraint:
component Reg(v: Val) {
  reg := NondetReg(v);
  v = reg;
  reg
}
This implementation demonstrates Zirgen's compositional approach—Reg takes a NondetReg and adds a constraint ensuring the register's value equals the input parameter. This transforms the unconstrained witness allocation into a deterministic assignment where the register's value is fully specified by the circuit logic.
This two-tier design exemplifies Zirgen's architectural principle of building sophisticated constraint systems from elementary, composable components. The separation between raw witness allocation and constrained assignment provides developers with both the flexibility to introduce arbitrary witness data and the tools to enforce deterministic computational relationships.
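As a rough model of the two tiers—in plain Rust rather than Zirgen—one can think of NondetReg as allocating a free cell the prover fills arbitrarily, and Reg as the same allocation plus a recorded equality constraint. The `Circuit` struct and method names below are hypothetical, invented for this sketch.

```rust
// Toy model of Zirgen's two register tiers: free witness allocation
// (nondet_reg) versus constrained assignment (reg).
#[derive(Default)]
struct Circuit {
    cells: Vec<u64>,                // one trace column
    constraints: Vec<(usize, u64)>, // cell at index must equal this value
}

impl Circuit {
    /// NondetReg: allocate a cell the prover fills freely (no constraint).
    fn nondet_reg(&mut self, prover_value: u64) -> usize {
        self.cells.push(prover_value);
        self.cells.len() - 1
    }

    /// Reg: a NondetReg plus the equality constraint cell == v.
    fn reg(&mut self, v: u64) -> usize {
        let idx = self.nondet_reg(v);
        self.constraints.push((idx, v));
        idx
    }

    /// Check every recorded equality constraint against the trace.
    fn verify(&self) -> bool {
        self.constraints.iter().all(|&(i, v)| self.cells[i] == v)
    }
}

fn main() {
    let mut c = Circuit::default();
    c.nondet_reg(42);      // free witness: any value verifies
    let pinned = c.reg(7); // constrained: must equal 7
    assert!(c.verify());
    c.cells[pinned] = 8;   // tamper with the constrained cell
    assert!(!c.verify());
}
```

Tampering with the unconstrained cell would still verify; only the `reg` cell is pinned, which is exactly the distinction the two-tier design encodes.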
Unlike Halo2's typed column system (Advice/Instance/Fixed) or Zirgen's declarative register keywords (Reg/NondetReg), Plonky3's approach treats all trace elements as generic columns without inherent type distinctions. The semantic roles—whether data is public or private, constrained or unconstrained—emerge entirely from how you wire constraints and boundary conditions, not from compile-time type annotations.
This design philosophy reflects Plonky3's lower-level approach to constraint authoring. Rather than providing high-level abstractions that encode common patterns, Plonky3 gives developers direct access to the underlying mathematical primitives, requiring explicit constraint construction for behaviors that other systems provide automatically.
Constrained versus unconstrained columns are determined purely by constraint presence. A column becomes constrained only where you assert equations in Air::eval(...). If no constraint relates a column to other values—no equality, range check, or transition relation touches it—the column functions as free witness data that the prover can set arbitrarily. There's no special NondetReg type; unconstrained behavior emerges from the absence of constraints.
Public versus private data represents a protocol boundary rather than a column classification. Public values flow through the proof's public_values vector that both prover and verifier know. Your AIR must explicitly bind specific trace cells to those public slots via boundary constraints, such as "first row equals public input" or "last row equals public output." Everything not explicitly bound to public slots remains private witness data.
Selector and indicator columns handle instruction dispatch and control flow through ordinary columns that you manually constrain to be boolean. The system provides no special selector type—you must explicitly assert s(1-s) = 0 for each boolean flag and ensure mutually exclusive opcodes satisfy ∑ᵢ sᵢ = 1. You then gate opcode-specific identities using these indicators, creating the conditional logic necessary for instruction set semantics.
This approach creates both power and responsibility. The lack of built-in type safety means you can easily create unconstrained columns by accident—forgetting to cover an instruction case or omitting a boundary constraint leaves degrees of freedom in the witness. However, this flexibility allows for precise control over constraint generation and can lead to more efficient circuits when used carefully.
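The two indicator checks described above can be made concrete. This toy function (an invented name, with arithmetic over a BabyBear-sized prime chosen only for illustration) accepts a row of selector flags exactly when every flag satisfies s(1-s) = 0 and the flags sum to one:

```rust
// Toy check of the hand-written indicator constraints: booleanity
// s * (1 - s) = 0 per flag, and one-hot dispatch sum(s_i) = 1.
const P: u64 = 2013265921; // BabyBear-sized prime, for illustration only

fn is_valid_dispatch(selectors: &[u64]) -> bool {
    // Each flag must be boolean: s * (1 - s) vanishes mod P.
    let boolean = selectors
        .iter()
        .all(|&s| (s * ((1 + P - s) % P)) % P == 0);
    // Exactly one instruction active: the flags sum to one.
    let one_hot = selectors.iter().sum::<u64>() % P == 1;
    boolean && one_hot
}

fn main() {
    assert!(is_valid_dispatch(&[0, 1, 0]));  // exactly one opcode active
    assert!(!is_valid_dispatch(&[1, 1, 0])); // two active: sum != 1
    assert!(!is_valid_dispatch(&[0, 2, 0])); // not boolean
}
```

Forget either check and the prover gains freedom: without booleanity a single flag of 2 and another of P-1 could still "sum to one" while activating nonsense combinations of gated identities.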
Public inputs and outputs flow through instance columns, creating the bridge between the private circuit computation and the public interface. The most common pattern involves using assign_advice_from_instance() to copy public values from instance columns into advice columns where they can be used in private computations.
For example, in the Fibonacci circuit, the first two Fibonacci numbers and the final output are public—they're loaded into instance columns and then copied into the advice column where the actual computation happens. The constrain_instance function pins a given circuit cell to equal the value at an absolute row of an instance column.
The expose_public() pattern is equally important, allowing private computation results to be made public by constraining them to specific instance column positions. This creates a clean separation between the private computational logic and the public interface, while maintaining the cryptographic guarantees.
let mut a_cell = region.assign_advice_from_instance(
    || "1",
    self.config.instance,
    0,
    self.config.advice,
    0,
)?;

pub fn expose_public(
    &self,
    mut layouter: impl Layouter<F>,
    cell: AssignedCell<F, F>,
    row: usize,
) -> Result<(), Error> {
    layouter.constrain_instance(cell.cell(), self.config.instance, row)
}
Zirgen manages the boundary between private computation and public verification through the public keyword, which marks specific registers as externally visible. Public registers serve as the interface between the circuit's internal state and the external verifier, similar to how Halo2's instance columns bridge private and public data.
When a register is marked as public, its values at specific points in the execution trace become part of the public input/output interface. This allows the circuit to commit to certain computational results or accept external inputs while keeping the intermediate computation steps private. The prover must demonstrate that the public values are consistent with the circuit's execution without revealing the private witness data.
component CycleCounter() {
  cycle := NondetReg(GetCycle());
  public is_first_cycle := IsZero(cycle);
  ...
}
Plonky3's approach to public data differs fundamentally from both Halo2's instance columns and Zirgen's public keyword. In the Plonky3 framework, proofs carry a public_values vector of field elements that both prover and verifier know. Crucially, no column is inherently "public" by type—you must explicitly bind specific trace cells to designated slots in this vector using boundary or guarded constraints within Air::eval().
The public interface contract requires careful coordination between constraint author and proof consumer. The order of elements in public_values must exactly match the constraint bindings you write, creating a precise but potentially fragile interface specification.
let pis = builder.public_values();
let a = pis[0];
let b = pis[1];
let x = pis[2];
...
when_first_row.assert_eq(local.left.clone(), a);
when_first_row.assert_eq(local.right.clone(), b);
builder.when_last_row().assert_eq(local.right.clone(), x);
First and last row binding represents the most common pattern for input/output handling. This approach injects inputs at the initial execution step and exposes outputs at the final step, mimicking traditional program interfaces:
This pattern works well when the computation has clear initialization and finalization phases, but requires that all public inputs are available at program start and all outputs are ready at program completion.
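A plain-Rust sketch (invented names, trace modeled as rows of two columns) of what the first/last-row binding actually checks:

```rust
// Toy check of boundary binding: the first row's columns must equal the
// public inputs (a, b), and the last row's second column must equal the
// claimed public output x — mirroring public_values = [a, b, x].
fn boundary_holds(trace: &[(u64, u64)], public_values: &[u64; 3]) -> bool {
    let first = trace[0];
    let last = trace[trace.len() - 1];
    first.0 == public_values[0]   // first row, left column  == a
        && first.1 == public_values[1] // first row, right column == b
        && last.1 == public_values[2]  // last row, right column  == x
}

fn main() {
    let trace = [(1, 1), (1, 2), (2, 3), (3, 5)];
    assert!(boundary_holds(&trace, &[1, 1, 5]));
    assert!(!boundary_holds(&trace, &[1, 1, 8])); // wrong claimed output
}
```

Note the fragility the text warns about: swap the order of entries in the public vector and a correct trace fails, because the binding is purely positional.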
The witness data in advice columns drives the actual computation, but constraints ensure correctness. Halo2 provides several powerful constraint patterns that go beyond simple same-row relationships.
Same-row constraints are the most straightforward, relating values held in different columns of a single row, all queried at Rotation::cur(). Because the Fibonacci circuit stores its whole sequence in a single advice column, its gate s * (a + b - c) is not actually a same-row constraint—when the selector s is active, it reaches into neighboring rows of that column, as the next paragraph describes.
Rotation constraints allow gates to span multiple rows, creating relationships between values in different rows of the same column. The Rotation::next() and Rotation(2) in the Fibonacci gate show this in action—we're constraining that the current value plus the next value equals the value two rows down. This is particularly powerful for algorithms that need to access previous computation states.
Copy constraints enforce that cells in different locations contain the same value, implemented through Halo2's permutation argument. These constraints are established using AssignedCell::copy_advice calls, but require permutation checks to be enabled for the relevant columns via ConstraintSystem::enable_equality. They're essential for connecting different parts of the circuit and ensuring data consistency across regions.
Lookup arguments provide a different constraint mechanism entirely, allowing circuits to prove that witness values exist in predefined lookup tables without revealing which specific entries were accessed. This is particularly useful for operations like range checks or implementing complex functions through table lookups.
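A real lookup argument is a permutation- or log-derivative-style protocol; the sketch below (invented names) captures only the *relation* it proves—that every witness value is a member of a fixed table, here an 8-bit range check:

```rust
// Toy model of the relation a lookup argument proves: every witness value
// appears in a predefined table, without revealing which entry matched.
use std::collections::HashSet;

fn lookup_holds(witness: &[u64], table: &HashSet<u64>) -> bool {
    witness.iter().all(|w| table.contains(w))
}

fn main() {
    // A byte-range table: proves each witness value fits in 8 bits.
    let byte_table: HashSet<u64> = (0..256).collect();
    assert!(lookup_holds(&[0, 17, 255], &byte_table));
    assert!(!lookup_holds(&[256], &byte_table)); // out of range
}
```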
The assignment process populates these witness values through the layouter, typically within assign_region() calls that group related constraints together. Each region represents a logical unit of computation with its own local coordinate system, making circuit development more modular and manageable.
meta.create_gate("add", |meta| {
    let s = meta.query_selector(selector);
    let a = meta.query_advice(advice, Rotation::cur());
    let b = meta.query_advice(advice, Rotation::next());
    let c = meta.query_advice(advice, Rotation(2));
    vec![s * (a + b - c)]
});
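As a sanity check of what this rotation gate enforces, here is a plain-Rust sketch (invented names, toy prime) that evaluates the identity over a one-column trace—for each row with the selector on, trace[i] + trace[i+1] must equal trace[i+2]:

```rust
// Toy evaluation of the rotated "add" gate over a single advice column:
// s[i] * (col[i] + col[i+1] - col[i+2]) must vanish mod P for every row.
const P: u64 = 101; // toy prime, for illustration only

fn rotation_gate_holds(col: &[u64], selector: &[u64]) -> bool {
    (0..col.len().saturating_sub(2)).all(|i| {
        (selector[i] * ((col[i] + col[i + 1] + P - col[i + 2]) % P)) % P == 0
    })
}

fn main() {
    let fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
    let sel = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]; // off on the last two rows
    assert!(rotation_gate_holds(&fib, &sel));
}
```

Note how the selector mirrors the circuit's assignment logic: it must be switched off near the bottom of the column, where `Rotation::next()` and `Rotation(2)` would otherwise reach past the meaningful rows.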
In Zirgen's STARK-based model, the computational heart revolves around state transitions and the constraints that govern how registers evolve over the execution trace. The witness data consists of the complete execution trace—the sequence of register states that demonstrate a valid computation.
Same-step constraints operate within a single execution step, relating register values that must satisfy certain relationships at the same point in time. These are analogous to Halo2's same-row constraints. For example, a constraint might ensure that two registers always sum to a third register within the same computation step.
Offset constraints use the @ syntax to create relationships between register values at different points in the execution trace. The @ operator allows referencing register values from previous or future steps. For instance, reg@1 refers to the register's value in the previous execution step. This sequential referencing is fundamental to expressing state machine transitions and algorithmic dependencies that span multiple computation cycles.
Conditional logic through the mux (multiplexer) type provides branching capabilities within the circuit. The mux allows the circuit to select between different computational paths based on register values, enabling complex control flow while maintaining the polynomial constraints required by the STARK system. This branching mechanism compiles down to polynomial constraints that ensure only one branch is active at any given time.
component CycleCounter() {
  global total_cycles := NondetReg(6);
  cycle := NondetReg(GetCycle());
  public is_first_cycle := IsZero(cycle);
  [is_first_cycle, 1-is_first_cycle] -> ({
    // First cycle; previous cycle should be the last cycle.
    cycle@1 = total_cycles - 1;
  }, {
    // Not first cycle; cycle number should advance by one for every row.
    cycle = cycle@1 + 1;
  });
  cycle
}

// Copy previous two terms forward
d2 : Reg;
d3 : Reg;
d1 := Reg([first, 1-first] -> (f0, d2@1));
d2 := Reg([first, 1-first] -> (f1, d3@1));
// Compute the next Fibonacci term
d3 := Reg(d1 + d2);
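The mux constructs in these snippets compile down to ordinary field arithmetic. A rough sketch (in Rust, not Zirgen; names and the toy prime are invented): with a one-hot selector pair [s, 1-s], the selected value is s·x + (1-s)·y, and each branch's constraints are multiplied by its selector so that only the active branch is enforced.

```rust
// Toy model of a two-way Zirgen mux compiled to arithmetic: with the
// selector pair [s, 1-s], the result is s*x + (1-s)*y mod P. When s = 1
// the y-branch contributes nothing (its constraints are multiplied by 0),
// and vice versa.
const P: u64 = 101; // toy prime, for illustration only

fn mux(s: u64, x: u64, y: u64) -> u64 {
    (s * x + ((1 + P - s) % P) * y) % P
}

fn main() {
    assert_eq!(mux(1, 10, 20), 10); // first branch selected
    assert_eq!(mux(0, 10, 20), 20); // second branch selected
}
```

This is why the selector must be proven boolean (as IsZero's `isZero * (1 - isZero) = 0` does): with s outside {0, 1}, the "selected" value would be a meaningless blend of both branches.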
Constraints are the polynomial identities you write in Air::eval(...) that must evaluate to zero when the real witness values are substituted.
Constraint authoring happens through the builder API's row access and assertion methods. You work with the current and next rows of the main trace (conceptually "local" and "next" rotations), using builder methods like assert_eq and assert_zero to emit polynomial equalities. This direct access eliminates abstraction layers between your constraint intentions and the underlying mathematical enforcement.
Guard mechanisms provide conditional constraint application through row selectors. Each guard wraps assertions to ensure they apply only on intended rows:
Each guard multiplies your identity by a selector polynomial that equals 1 on intended rows and 0 elsewhere, creating precise mathematical control over where constraints apply.
Boundary constraints handle input/output and initialization by pinning specific cells on first or last rows and binding designated cells to public vector entries like we saw in the previous section.
Transition constraints relate consecutive execution steps using local/next access to enforce state evolution. The general form s_trans · F(local, next) = 0 captures how machine state must progress from one cycle to the next.
Row-local gated semantics implement operation-specific logic under opcode or branch indicators. When selector s_k is active, the constraint s_k · G_k(local) = 0 enforces the appropriate computational semantics.
Range and byte constraints use lookup arguments to prove membership in global tables rather than ad-hoc inequalities, providing efficient range checking for structured values.
Control and responsibility in Plonky3's approach means you directly control where identities apply through guards and what they enforce through assertions. There are no separate "gate objects"—the polynomial identity you write is the gate. This power comes with corresponding responsibility: values become public only when boundary constraints explicitly tie cells to positions in the public vector, not through column type declarations.
Common pitfalls stem from incomplete constraint coverage. Leaving columns or execution branches unconstrained through missing guards or boundary ties creates degrees of freedom that can compromise proof soundness. Forgetting boolean or sum-to-one constraints on indicators, or omitting range checks for structured values like bytes or flags, can allow invalid witness assignments to satisfy the constraint system.
let mut when_transition = builder.when_transition();
// a' <- b
when_transition.assert_eq(local.right.clone(), next.left.clone());
// b' <- a + b
when_transition.assert_eq(local.left.clone() + local.right.clone(), next.right.clone());
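As a sanity check of the two transition identities above, a plain-Rust sketch (invented names) over a concrete two-column trace, where each row holds the pair (left, right):

```rust
// Toy evaluation of the Fibonacci transition: for every consecutive pair
// of rows, next.left must equal local.right (a' <- b) and next.right must
// equal local.left + local.right (b' <- a + b).
fn transition_holds(trace: &[(u64, u64)]) -> bool {
    trace.windows(2).all(|w| {
        let (local, next) = (w[0], w[1]);
        next.0 == local.1 && next.1 == local.0 + local.1
    })
}

fn main() {
    // (left, right) pairs tracing 1, 1, 2, 3, 5, 8...
    let trace = [(1, 1), (1, 2), (2, 3), (3, 5), (5, 8)];
    assert!(transition_holds(&trace));
    assert!(!transition_holds(&[(1, 1), (2, 2)])); // breaks a' <- b
}
```

Combined with the first/last-row bindings shown earlier, this is the whole AIR: boundary constraints pin the ends of the trace to public values, and the transition identity forces every interior step to be a valid Fibonacci update.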
Let's trace through the provided Fibonacci implementation to see these concepts in practice. The circuit computes the 9th Fibonacci number (55) starting from the initial values 1, 1.
use halo2_proofs::{arithmetic::FieldExt, circuit::*, plonk::*, poly::Rotation};
use std::marker::PhantomData;

#[derive(Debug, Clone)]
struct ACell<F: FieldExt>(AssignedCell<F, F>);

#[derive(Debug, Clone)]
struct FibonacciConfig {
    advice: Column<Advice>,
    selector: Selector,
    instance: Column<Instance>,
}

#[derive(Debug, Clone)]
struct FibonacciChip<F: FieldExt> {
    config: FibonacciConfig,
    _marker: PhantomData<F>,
}

impl<F: FieldExt> FibonacciChip<F> {
    pub fn construct(config: FibonacciConfig) -> Self {
        Self {
            config,
            _marker: PhantomData,
        }
    }

    pub fn configure(
        meta: &mut ConstraintSystem<F>,
        advice: Column<Advice>,
        instance: Column<Instance>,
    ) -> FibonacciConfig {
        let selector = meta.selector();
        meta.enable_equality(advice);
        meta.enable_equality(instance);
        meta.create_gate("add", |meta| {
            //
            // advice | selector
            //   a    |    s
            //   b    |
            //   c    |
            //
            let s = meta.query_selector(selector);
            let a = meta.query_advice(advice, Rotation::cur());
            let b = meta.query_advice(advice, Rotation::next());
            let c = meta.query_advice(advice, Rotation(2));
            vec![s * (a + b - c)]
        });
        FibonacciConfig {
            advice,
            selector,
            instance,
        }
    }

    pub fn assign(
        &self,
        mut layouter: impl Layouter<F>,
        nrows: usize,
    ) -> Result<AssignedCell<F, F>, Error> {
        layouter.assign_region(
            || "entire fibonacci table",
            |mut region| {
                self.config.selector.enable(&mut region, 0)?;
                self.config.selector.enable(&mut region, 1)?;
                let mut a_cell = region.assign_advice_from_instance(
                    || "1",
                    self.config.instance,
                    0,
                    self.config.advice,
                    0,
                )?;
                let mut b_cell = region.assign_advice_from_instance(
                    || "1",
                    self.config.instance,
                    1,
                    self.config.advice,
                    1,
                )?;
                for row in 2..nrows {
                    if row < nrows - 2 {
                        self.config.selector.enable(&mut region, row)?;
                    }
                    let c_cell = region.assign_advice(
                        || "advice",
                        self.config.advice,
                        row,
                        || a_cell.value().copied() + b_cell.value(),
                    )?;
                    a_cell = b_cell;
                    b_cell = c_cell;
                }
                Ok(b_cell)
            },
        )
    }

    pub fn expose_public(
        &self,
        mut layouter: impl Layouter<F>,
        cell: AssignedCell<F, F>,
        row: usize,
    ) -> Result<(), Error> {
        layouter.constrain_instance(cell.cell(), self.config.instance, row)
    }
}

#[derive(Default)]
struct MyCircuit<F>(PhantomData<F>);

impl<F: FieldExt> Circuit<F> for MyCircuit<F> {
    type Config = FibonacciConfig;
    type FloorPlanner = SimpleFloorPlanner;

    fn without_witnesses(&self) -> Self {
        Self::default()
    }

    fn configure(meta: &mut ConstraintSystem<F>) -> Self::Config {
        let advice = meta.advice_column();
        let instance = meta.instance_column();
        FibonacciChip::configure(meta, advice, instance)
    }

    fn synthesize(
        &self,
        config: Self::Config,
        mut layouter: impl Layouter<F>,
    ) -> Result<(), Error> {
        let chip = FibonacciChip::construct(config);
        let out_cell = chip.assign(layouter.namespace(|| "entire table"), 10)?;
        chip.expose_public(layouter.namespace(|| "out"), out_cell, 2)?;
        Ok(())
    }
}

#[cfg(test)]
mod tests {
    use super::MyCircuit;
    use std::marker::PhantomData;
    use halo2_proofs::{dev::MockProver, pasta::Fp};

    #[test]
    fn fibonacci_example2() {
        let k = 4;
        let a = Fp::from(1); // F[0]
        let b = Fp::from(1); // F[1]
        let out = Fp::from(55); // F[9]
        let circuit = MyCircuit(PhantomData);
        let mut public_input = vec![a, b, out];
        let prover = MockProver::run(k, &circuit, vec![public_input.clone()]).unwrap();
        prover.assert_satisfied();
        public_input[2] += Fp::one();
        let _prover = MockProver::run(k, &circuit, vec![public_input]).unwrap();
        // uncomment the following line and the assert will fail
        // _prover.assert_satisfied();
    }
}
The FibonacciConfig defines the column structure: one advice column for the computation sequence, one instance column for public values (the initial values and final result), and a selector to control when the addition gate is active.
The custom gate s * (a + b - c) enforces the Fibonacci recurrence relation. When the selector is enabled, it constrains that the current value plus the next value equals the value two rows down. The rotation system allows expressing this sequential relationship elegantly within PLONKish constraints.
During assignment, the first two advice cells are initialized from public instance values using assign_advice_from_instance(). Then the code iterates through the remaining positions, computing each new Fibonacci number as the sum of the previous two and assigning it to the advice column.
The selector management is crucial—we enable it for the first 8 rows (indices 0-7) but disable it for the last two rows to avoid constraining non-existent values. This demonstrates the fine-grained control selectors provide over constraint activation.
Finally, expose_public() constrains the final computed value to match the expected public output in the instance column, completing the public-private interface.
The test demonstrates both positive and negative cases: the correct computation with F[9] = 55 satisfies all constraints, while an incorrect expected output (56) would cause the proof to fail, showing how the cryptographic guarantees prevent cheating.
This example showcases the elegance of Halo2's API: complex cryptographic concepts like polynomial commitments and constraint systems are abstracted into intuitive operations like assigning values, enabling selectors, and defining gates. The result is a system that's both powerful enough for sophisticated cryptographic applications and accessible enough for practical development.
Let's examine the provided Fibonacci implementation in Zirgen to see how these concepts work in practice. This circuit computes Fibonacci numbers over multiple execution cycles, demonstrating Zirgen's approach to iterative computation.
extern GetCycle() : Val;

component IsZero(val: Val) {
  // Nondeterministically 'guess' the result
  isZero := NondetReg(Isz(val));
  // Compute the inverse (for non-zero values), for zero values, Inv returns 0
  inv := NondetReg(Inv(val));
  // isZero should be either 0 or 1
  isZero * (1 - isZero) = 0;
  // If isZero is 0 (i.e. nonzero) then val must have an inverse
  val * inv = 1 - isZero;
  // If isZero is 1, then val must be zero
  isZero * val = 0;
  // If isZero is 1, then inv must be zero
  isZero * inv = 0;
  isZero
}

test IsZeroTest {
  IsZero(0) = 1;
  IsZero(1) = 0;
  IsZero(2) = 0;
}

component CycleCounter() {
  global total_cycles := NondetReg(6);
  cycle := NondetReg(GetCycle());
  public is_first_cycle := IsZero(cycle);
  [is_first_cycle, 1-is_first_cycle] -> ({
    // First cycle; previous cycle should be the last cycle.
    cycle@1 = total_cycles - 1;
  }, {
    // Not first cycle; cycle number should advance by one for every row.
    cycle = cycle@1 + 1;
  });
  cycle
}

component Top() {
  global f0: Reg;
  global f1: Reg;
  global steps: Reg;
  cycle := CycleCounter();
  first := cycle.is_first_cycle;
  // Copy previous two terms forward
  d2 : Reg;
  d3 : Reg;
  d1 := Reg([first, 1-first] -> (f0, d2@1));
  d2 := Reg([first, 1-first] -> (f1, d3@1));
  // Compute the next Fibonacci term
  d3 := Reg(d1 + d2);
  // If cycle = steps, write the next term to the output
  public terminate := IsZero(cycle - steps + 1);
  [terminate, 1 - terminate] -> ({
    global f_last := Reg(d3);
    Log("f_last = %u", f_last);
  }, {});
}

component FibTest(f0: Val, f1: Val, steps: Val, out: Val) {
  // Supply inputs
  global f0 := Reg(f0);
  global f1 := Reg(f1);
  global steps := Reg(steps);
  top := Top();
  // Check the output
  [top.terminate, 1 - top.terminate] -> ({
    global f_last : Reg;
    f_last = out;
  }, {});
}

test FirstCycle {
  FibTest(1, 2, 1, 3);
}

test SecondCycle {
  FibTest(1, 2, 2, 5);
}

test SixthCycle {
  FibTest(1, 2, 6, 34);
}
test FirstCycle {
FibTest(1, 2, 1, 3);
}
test SecondCycle {
FibTest(1, 2, 2, 5);
}
test SixthCycle {
FibTest(1, 2, 6, 34);
}
The implementation begins with the IsZero component, which demonstrates sophisticated constraint composition. It uses two NondetReg components—one to guess whether the input is zero (isZero) and another to compute the multiplicative inverse (inv). The constraints ensure mathematical consistency: isZero * (1 - isZero) = 0 forces the result to be binary, while val * inv = 1 - isZero ensures that non-zero values have proper inverses. This component showcases how Zirgen handles conditional logic through mathematical constraints rather than explicit branching.
The CycleCounter component manages the execution flow across multiple steps. It uses the external GetCycle() function to track the current execution step and employs the IsZero component to detect the first cycle. The mux syntax [is_first_cycle, 1-is_first_cycle] -> ({...}, {...}) demonstrates conditional execution—different constraint blocks activate based on whether it's the first cycle or not. The offset constraint cycle = cycle@1 + 1 shows how register values evolve across execution steps, ensuring each cycle increments properly.
In the Top component, the actual Fibonacci computation unfolds through register evolution. The global registers f0, f1, and steps store the initial conditions and iteration count. The core computation uses three registers (d1, d2, d3) that represent consecutive Fibonacci terms, with values flowing forward through offset constraints like d2@1 referencing the previous cycle's value.
The mux construct [first, 1-first] -> (f0, d2@1) elegantly handles initialization versus continuation logic—on the first cycle, d1 takes the initial value f0, but on subsequent cycles it takes the previous cycle's d2 value. This pattern repeats for d2, creating the characteristic Fibonacci sliding window.
Termination detection uses IsZero(cycle - steps + 1) to identify when the computation should complete, triggering the final output assignment through another mux construct. The terminate flag controls whether the final result is written to the global f_last register.
The FibTest component demonstrates the testing framework, setting up initial conditions and verifying outputs. The tests show the circuit computing F₃=3, F₄=5, and F₈=34, validating correctness across different iteration counts.
This example illustrates several key Zirgen principles: components compose naturally to build complex functionality, constraint-based conditional logic replaces explicit control flow, offset syntax elegantly expresses cross-cycle dependencies, and the execution trace captures the complete computational evolution. Unlike Halo2's spatial constraint satisfaction, Zirgen's approach naturally expresses iterative algorithms through state machine evolution over execution cycles.
Let's trace through the Fibonacci implementation below to see Plonky3's direct constraint authoring in practice. The circuit is written straight against Plonky3's AIR framework, with no intermediate DSL between the developer and the polynomial identities.
use core::borrow::Borrow;

use p3_air::{Air, AirBuilder, AirBuilderWithPublicValues, BaseAir};
use p3_baby_bear::{BabyBear, Poseidon2BabyBear};
use p3_challenger::{DuplexChallenger, HashChallenger, SerializingChallenger32};
use p3_commit::ExtensionMmcs;
use p3_dft::Radix2DitParallel;
use p3_field::extension::BinomialExtensionField;
use p3_field::{Field, PrimeCharacteristicRing, PrimeField64};
use p3_fri::{HidingFriPcs, TwoAdicFriPcs, create_test_fri_params};
use p3_keccak::{Keccak256Hash, KeccakF};
use p3_matrix::Matrix;
use p3_matrix::dense::RowMajorMatrix;
use p3_merkle_tree::{MerkleTreeHidingMmcs, MerkleTreeMmcs};
use p3_symmetric::{
    CompressionFunctionFromHasher, PaddingFreeSponge, SerializingHasher, TruncatedPermutation,
};
use p3_uni_stark::{StarkConfig, prove, verify};
use rand::SeedableRng;
use rand::rngs::SmallRng;

/// For testing the public values feature
pub struct FibonacciAir {}

impl<F> BaseAir<F> for FibonacciAir {
    fn width(&self) -> usize {
        NUM_FIBONACCI_COLS
    }
}

impl<AB: AirBuilderWithPublicValues> Air<AB> for FibonacciAir {
    fn eval(&self, builder: &mut AB) {
        let main = builder.main();
        let pis = builder.public_values();

        let a = pis[0];
        let b = pis[1];
        let x = pis[2];

        let (local, next) = (
            main.row_slice(0).expect("Matrix is empty?"),
            main.row_slice(1).expect("Matrix only has 1 row?"),
        );
        let local: &FibonacciRow<AB::Var> = (*local).borrow();
        let next: &FibonacciRow<AB::Var> = (*next).borrow();

        let mut when_first_row = builder.when_first_row();
        when_first_row.assert_eq(local.left.clone(), a);
        when_first_row.assert_eq(local.right.clone(), b);

        let mut when_transition = builder.when_transition();
        // a' <- b
        when_transition.assert_eq(local.right.clone(), next.left.clone());
        // b' <- a + b
        when_transition.assert_eq(local.left.clone() + local.right.clone(), next.right.clone());

        builder.when_last_row().assert_eq(local.right.clone(), x);
    }
}

pub fn generate_trace_rows<F: PrimeField64>(a: u64, b: u64, n: usize) -> RowMajorMatrix<F> {
    assert!(n.is_power_of_two());

    let mut trace = RowMajorMatrix::new(F::zero_vec(n * NUM_FIBONACCI_COLS), NUM_FIBONACCI_COLS);

    let (prefix, rows, suffix) = unsafe { trace.values.align_to_mut::<FibonacciRow<F>>() };
    assert!(prefix.is_empty(), "Alignment should match");
    assert!(suffix.is_empty(), "Alignment should match");
    assert_eq!(rows.len(), n);

    rows[0] = FibonacciRow::new(F::from_u64(a), F::from_u64(b));

    for i in 1..n {
        rows[i].left = rows[i - 1].right;
        rows[i].right = rows[i - 1].left + rows[i - 1].right;
    }

    trace
}

const NUM_FIBONACCI_COLS: usize = 2;

pub struct FibonacciRow<F> {
    pub left: F,
    pub right: F,
}

impl<F> FibonacciRow<F> {
    const fn new(left: F, right: F) -> Self {
        Self { left, right }
    }
}

impl<F> Borrow<FibonacciRow<F>> for [F] {
    fn borrow(&self) -> &FibonacciRow<F> {
        debug_assert_eq!(self.len(), NUM_FIBONACCI_COLS);
        let (prefix, shorts, suffix) = unsafe { self.align_to::<FibonacciRow<F>>() };
        debug_assert!(prefix.is_empty(), "Alignment should match");
        debug_assert!(suffix.is_empty(), "Alignment should match");
        debug_assert_eq!(shorts.len(), 1);
        &shorts[0]
    }
}
The implementation begins with the trace structure definition through FibonacciRow<F>, which encapsulates the two values needed for Fibonacci computation: left and right. This simple structure maps directly to the trace columns, with NUM_FIBONACCI_COLS = 2 defining the trace width. The Borrow implementation provides safe type conversion between raw field element slices and the structured row representation.
The FibonacciAir struct implements the core AIR traits that define constraint structure and evaluation. The BaseAir implementation simply declares the trace width, while the Air<AB> implementation contains the constraint logic.
The constraint evaluation demonstrates Plonky3's direct approach to polynomial identity authoring. Access to trace rows happens through explicit slicing and type conversion, giving you direct control over how witness data is interpreted.
Boundary constraints for initialization bind the first row to public inputs. The when_first_row() guard ensures these constraints apply only at execution start.
Transition constraints for recurrence implement the Fibonacci relationship across all interior rows. The when_transition() guard applies these constraints to every step except the first and last.
Boundary constraint for output binds the final result to the expected public value.
Witness generation happens separately through generate_trace_rows(), which simulates the Fibonacci computation to produce concrete field element assignments. This trace generation creates the witness data that will be committed in the proof, with each row containing the Fibonacci pair for that execution step.
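The separation between witness generation and constraint checking can be sketched in plain Rust over u64 values, leaving out Plonky3's field types and commitment machinery. The helper names here (`generate_trace`, `constraints_hold`) are ours; the logic mirrors `generate_trace_rows` and the three constraint families from `eval`.

```rust
/// Simulate the computation to fill the trace, exactly as generate_trace_rows
/// does (minus the field arithmetic and unsafe alignment tricks).
fn generate_trace(a: u64, b: u64, n: usize) -> Vec<[u64; 2]> {
    assert!(n.is_power_of_two());
    let mut rows = vec![[0u64; 2]; n];
    rows[0] = [a, b];
    for i in 1..n {
        rows[i][0] = rows[i - 1][1];                  // left'  <- right
        rows[i][1] = rows[i - 1][0] + rows[i - 1][1]; // right' <- left + right
    }
    rows
}

/// Check the three constraint families against a trace and public values.
fn constraints_hold(rows: &[[u64; 2]], a: u64, b: u64, x: u64) -> bool {
    let first = rows[0] == [a, b];                           // when_first_row
    let transitions = rows.windows(2).all(|w| {
        w[1][0] == w[0][1] && w[1][1] == w[0][0] + w[0][1]   // when_transition
    });
    let last = rows[rows.len() - 1][1] == x;                 // when_last_row
    first && transitions && last
}

fn main() {
    let trace = generate_trace(0, 1, 8);
    // rows: (0,1) (1,1) (1,2) (2,3) (3,5) (5,8) (8,13) (13,21)
    assert!(constraints_hold(&trace, 0, 1, 21));

    // Tamper with one cell and the transition constraints catch it.
    let mut bad = trace.clone();
    bad[4][0] = 99;
    assert!(!constraints_hold(&bad, 0, 1, 21));
    println!("constraints accept the honest trace and reject the tampered one");
}
```

In the real system, of course, the verifier never sees the trace directly; it checks the same identities against polynomial commitments. But the division of labor is the same: the trace generator may compute values however it likes, and soundness rests entirely on the constraints.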
This example showcases Plonky3's philosophical approach: constraints are exactly the polynomial identities you write, with no hidden gate abstractions. The three constraint families (initialization boundary, transition, output boundary) directly implement the mathematical relationships needed to prove Fibonacci computation correctness. The clear separation between witness generation and constraint specification allows the same AIR definition to work with different computation parameters while maintaining cryptographic soundness.