Vision
22 May

Decentralization is not a meme: Part 1

What do we mean by “decentralization”? Why does Aztec take decentralization seriously? In this post, we explore Aztec’s efforts around protocol decentralization: the sequencer, the prover, and the upgrade mechanism.

Written by
Lisa A.

Many thanks to Cooper, Prasad, Rafal, Mahima, and Elias for the review.

Contents

  • What do we mean by “decentralization”?
  • Why Aztec takes decentralization seriously
  • Aztec’s efforts around protocol decentralization: sequencer, prover, and upgrade mechanism
  • Request for proposal (RFP)
  • Sequencer
  • Prover
  • Conclusion

1) What do we mean by “decentralization”?

Decentralization is one of the most hotly debated topics in blockchain, often dismissed as a meme.


The questions around decentralization are:

  • Which components should be decentralized (both at the blockchain and application layers)?
  • To what extent?
  • Is decentralization of a specific part of the network a must-have or just a nice-to-have?
  • Will we die if a specific part of the network is not decentralized?
  • Is it enough to have a decentralization roadmap or should we decentralize for real?

The goal of this article is to shed some light on what we mean by decentralization, why it matters, and how decentralization is defined and provided in the context of the Aztec network.

Before we start figuring out the true meaning of decentralization, we should note that decentralization is not a final goal in itself. Instead, it is a way to provide rollups with a number of desired properties, including:

  • Permissionlessness (censorship resistance as a consequence) – anyone can submit a transaction and the rollup will process it (i.e. the rollup doesn’t have an opinion on which transactions are good and which are bad and can’t censor them).
  • Liveness – the chain operates (processes transactions) nonstop, irrespective of what is happening.
  • Security – the correctness of state transition (i.e. transactions' correct execution) is guaranteed by something robust and reliable (e.g. zero-knowledge proofs).

In the case of zk-rollups, these properties are tightly connected with “entities” that operate the rollup, including:

  • Sequencer – orders and executes transactions –> impacts permissionlessness and liveness.
  • Prover – generates proof that the transactions were executed correctly –> impacts security and liveness.
  • Governance mechanism (upgrade mechanism) – manages and implements protocol upgrades –> impacts security, liveness, and permissionlessness.

Even though we said at the beginning of the article that decentralization is a speculative topic, it is not mere speculation. Decentralization is required for permissionlessness, censorship resistance, and liveness, which in turn are required to reach the system’s end goal: credible neutrality (at least to some extent). “Credible neutrality” means the protocol is not “designed to favor specific people or outcomes over others” and is “able to convince a large and diverse group of people that the mechanism at least makes that basic effort to be fair”.

Credible neutrality is a crucial element for rollups as well, which is why we're prioritizing decentralization at Aztec, among other things. Progressive decentralization is not an option; decentralization from the start is a must-have, as the regulatory, political, and legal landscapes are constantly changing.

In the next section, we will dive into the specifics of Aztec’s case, looking at its components and their levels of decentralization.

2) Why Aztec takes decentralization seriously

The Aztec network is a privacy-first L2 on Ethereum. Its goal is to let developers build dapps that are privacy-preserving and compliant (in any desired jurisdiction!) at the same time.

For Aztec, there are two levels of decentralization: protocol and organizational.

At the protocol level, the Aztec network consists of a number of components, such as the P2P transaction pool (i.e. mempool), sequencer, prover, upgrade mechanism, economics, Noir (a domain-specific language), execution layer, cryptography, web SDK, rollup contract, and more.

For each of these, decentralization might have a slightly different meaning. But the root reason we care about it is to provide safety for the ecosystem of developers and users.

  • For the rollup contract and upgrade mechanism, the question is who controls the upgrades and how we can diversify this process in terms of quantity and geography.
    A good mechanism should defend the protocol from forced upgrades (e.g. by the court). It should also mitigate the sanctions risk, isolating this risk at the application level, not the rollup level.
  • For the sequencer and prover, we also need quantity and geographical decentralization, as well as a multi-client approach where users can choose a vendor from a distributed set with various priorities.
  • For economic decentralization, we need to ensure that “the ongoing balancing of incentives among the stakeholders — developers, contributors, and consumers — will drive further contributions of value to the overall system”. It covers the vesting of power, control, and ownership with system stakeholders in such a way that the value of the ecosystem as a whole accrues to a broader array of participants.
  • For all software components, such as client software, we need to ensure that copies of the software are distributed widely enough within the community to ensure access, even if the original maintainers choose to abandon the projects.

When it comes to long-term economic decentralization, the desired outcome is power decentralization, which in turn can be achieved through geographical decentralization.

In the context of geographical decentralization, we particularly care that:

  • diversification among different jurisdictions mitigates the risk of local regulatory regimes attempting to impose their will.
  • when reasoning about extremes and black swan events, having a global system is attractive from the point of view of safety and availability.
  • intuitively, a system that privileges certain geographies cannot be considered neutral and fair.

For more thoughts on geographical decentralization, check out the article “Decentralized crypto needs you: to be a geographical decentralization maxi” by Phil Daian.

3) Aztec’s efforts around protocol decentralization: sequencer, prover, and upgrade mechanism

The decentralization to-do list is pretty huge. Decentralization mechanism design is a complex process that takes time, which is why Aztec started working on it far in advance and called on the most brilliant minds to collaborate, cooperate, design, and produce the necessary mechanisms that will allow the Aztec network to be credibly neutral from day one.

Request for proposal (RFP)

Since last summer, we’ve announced a number of requests for proposal (RFPs) to invite the power of the community and the greatest minds in the industry to find a range of solutions for the Aztec network protocol design.

Everyone was welcome to craft a proposal and post it on the forum. For each of the RFPs, we outlined a number of protocol requirements that will decentralize and diversify each part of Aztec, making it robust and credibly neutral.

For each RFP, we got a number of proposals (all of them are attached in the RFPs’ comments). Proposals were discussed on the forum by the community and analyzed in detail by partners (e.g. Block Science) and the Aztec Labs team.

In this section, we will describe and briefly discuss the chosen proposals.

Sequencer Selection

Some of the desired properties

There are a number of desired properties assigned to the sequencer. These include:

  • Permissionless sequencer role – any actor who adheres to the protocol can fill the role of sequencer.
  • Elegant reorg recovery – the protocol has affordances for recovering its state after an Ethereum reorg.
  • Denial-of-service resistance – an actor cannot prevent other actors from using the system.
  • L2 chain halt recovery – there is a healing mechanism in case the block proposal process fails.
  • Censorship resistance – it’s infeasibly expensive to maintain sufficient control of the sequencer role to discriminate against transactions.

Other factors to consider include:

  • How the protocol handles MEV
  • How costly it is to form a cartel
  • Protocol complexity
  • Coordination overhead – how costly it is to coordinate a new round of sequencers

Sequencer mechanism

The chosen sequencer design is called “Fernet” and was suggested by Santiago Palladino (“Palla”), one of the talented engineers at Aztec Labs. Its core component is randomized sequencer selection. To be eligible for selection, sequencers need to stake assets on L1. After staking, a sequencer needs to wait for an activation period of a number of L1 blocks before they can start proposing new blocks. The waiting period guards the protocol against malicious governance attacks.

Block proposal mechanism

Stage 0: Score calculation

  • In each round (currently expected to be ~12–36 seconds), staked sequencers calculate their round-based score, derived from a hash over RANDAO and their public key.
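As a minimal sketch of the idea (the hash function, preimage layout, and byte encodings here are illustrative assumptions, not Fernet’s actual specification):

```python
import hashlib

def sequencer_score(randao: bytes, pubkey: bytes, round_num: int) -> int:
    """Illustrative round score: a hash over the RANDAO value and the
    sequencer's public key. A higher score ranks higher."""
    preimage = randao + pubkey + round_num.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(preimage).digest(), "big")

# Each staked sequencer computes its score locally and only commits to a
# proposal if it believes the score is likely to win the round.
randao = bytes.fromhex("aa" * 32)
scores = {pk: sequencer_score(randao, pk, round_num=1)
          for pk in (b"seq-A", b"seq-B", b"seq-C")}
leader = max(scores, key=scores.get)
```

Because the score is derived from on-chain randomness and the sequencer’s own key, every participant can compute and verify everyone else’s ranking without coordination.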

Stage 1: Proposal

  • Based on the calculated scores, if a sequencer determines that its score for a given round is likely to win, it commits to a block proposal.
  • During the proposal stage, the highest ranking proposers (i.e. sequencers) submit L1 transactions, including a commitment to the Layer-2 transaction ordering in the proposed block, the previous block being built upon, and any additional metadata required by the protocol.

Stage 2: Prover commitment – estimated ~3-5 Ethereum blocks

  • The highest ranking proposers (i.e. sequencers) make an off-chain deal with provers. This might be vertical integration (i.e. the sequencer runs its own prover), a business deal with a specific third-party prover, or a prover-boost auction across third-party proving marketplaces.
    On the sequencers' side, this approach lets them have proofs generated according to their needs. On the network side, it benefits from modularity, enjoying innovations across proving systems.
  • Provers build proofs for blocks with the highest scores.
  • This stage will be explicitly defined in the next section dedicated to the proving mechanism.

Stage 3: Reveal

  • At the end of the proposal phase, the sequencer with the highest ranking block proposal on L1 becomes the leader for this cycle, and reveals the block content, i.e. uploads the block contents to either L1 or a verifiable DA layer.
  • As stages 0 and 1 are effectively multi-leader protocols, there is a very high probability that someone will submit a proposal (though it might not be among the leaders according to the score).
    In the event that no one submits a valid block proposal, we introduce a “backup” mode, which enables a first-come, first-served race to submit the first proof to the L1 smart contracts. There is also a similar backup mode in the event that there is a valid proposal, but no valid prover commitment (deposit) by the end of the prover commitment phase or should the block not get finalized.
  • If the leading sequencer posts invalid data during the reveal phase, the sequencer for the next block will build from the previous one.

Stage 4: Proving – estimated ~40 Ethereum blocks

  • Before the end of this phase, the block proof is expected to be published to L1 for verification.
  • Once the proof for the highest ranking block is submitted to L1 and verified, the block becomes final, assuming its parent block in the chain is also final.
  • This triggers the minting of new tokens and payouts to the sequencer, the prover commitment address, and the address that submitted the proofs.
  • If block N is committed to but doesn't get proven, its prover deposit is slashed.

The cycle for block N+1 can start at the end of the block N reveal phase.
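The finality rule from stage 4 — a block is final only once its proof is verified on L1 and its parent is also final — can be sketched as follows (the data layout and function name are assumptions for illustration):

```python
def finalized_prefix(chain: list[dict]) -> list[str]:
    """Walk the chain from genesis: a block is final once its proof has
    been verified on L1 *and* its parent is final. The first unproven
    block breaks finality for everything after it."""
    final_ids = []
    parent_final = True  # the genesis parent is final by definition
    for block in chain:
        if parent_final and block["proof_verified"]:
            final_ids.append(block["id"])
        else:
            parent_final = False
    return final_ids

chain = [
    {"id": "N",   "proof_verified": True},
    {"id": "N+1", "proof_verified": False},  # committed but never proven
    {"id": "N+2", "proof_verified": True},   # proven, but its parent is not final
]
```

Under this rule, only block N in the example above is final; N+1’s missing proof also blocks finality for N+2.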

How Fernet meets required properties


  • Permissionless sequencer role – anyone who has locked funds on L1 can propose a block after a waiting period.
  • Elegant reorg recovery – on L2, a reorg is not possible, as a new block can be proposed only after the previous block is finalized. However, an L1 reorg might impact L2.
  • L2 chain halt – the protocol relies on the Ethereum copy of the Aztec network state between the last finalized epoch and the current safe block.
  • Censorship resistance – to censor a transaction, an entity would have to “guarantee” that it is repeatedly selected as sequencer while the censored transaction awaits processing. The VRF selection process prevents such a guarantee.
  • MEV – in the public domain, MEV is extracted by the sequencer responsible for the current slot. In the private domain, there is no direct MEV extraction, though some MEV might be extracted probabilistically; its feasibility will depend on the dapp landscape deployed on the chain.
  • Protocol complexity – the Aztec protocol design assumes modularity, allowing it to choose any prover and DA mechanisms and adjust them over time if needed.
  • Cost for private and public function calls – for public functions, call costs depend directly on the specific executed opcodes (as for any other rollup). For private function calls, there is a fixed cost for every state update and proof verification.

For a detailed analysis of the protocol's ability to satisfy the design requirements, check this report crafted by an independent third party, Block Science.

Prover

Context

In the previous section, we mentioned that proofs are supplied for proposed blocks. However, we didn’t explicitly define the specific prover mechanism.

To design a prover mechanism, Aztec also issued an RFP after Fernet was chosen as the sequencer mechanism (as described in the previous section).

Without going into too much detail, one should note that the Aztec network has two types of proofs: client-side proofs and rollup-side proofs. Client-side proofs are generated for each private function and submitted to the Aztec network by the user. The client-side proving mechanism doesn’t have any decentralization requirements, as all the private data is processed solely on the user’s device, meaning it’s inherently decentralized. Covering client-side proof generation is outside the scope of this piece, but check out one of our previous articles to learn more about it.

The Aztec RFP “Decentralized Prover Coordination” asked for a rollup-side prover mechanism, the goal of which is to generate proofs for blocks.

In particular, it means the sequencer executes every public function and the prover creates a proof of each function’s correct execution. That proof is aggregated into a kernel proof. Each kernel proof is aggregated into another kernel proof and so on (i.e. as a chain of kernel proofs). The final kernel proof is aggregated into a base rollup proof. The base rollup proofs are aggregated into pairs in a tree-like structure. The root of this tree is the final block proof.
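The aggregation shape described above can be sketched as a pairwise fold (the `Proof` type, labels, and padding rule for odd layers are illustrative assumptions, not the actual recursion scheme):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proof:
    label: str

def aggregate(left: Proof, right: Proof) -> Proof:
    # Stand-in for recursive proof aggregation.
    return Proof(f"agg({left.label},{right.label})")

def block_proof(base_rollup_proofs: list[Proof]) -> Proof:
    """Fold base rollup proofs pairwise into a binary tree; the root of
    the tree is the final block proof."""
    layer = list(base_rollup_proofs)
    while len(layer) > 1:
        if len(layer) % 2 == 1:
            layer.append(layer[-1])  # pad odd layers (an assumption)
        layer = [aggregate(layer[i], layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]
```

For four base rollup proofs this produces a two-level tree whose root aggregates everything below it; each layer can be proven in parallel, which is what makes the tree shape attractive.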

Desired properties
There are a number of desired properties assigned to the prover mechanism. Among them:

  • Permissionlessness – anyone can run an Aztec prover.
  • Recognizability – the prover of each block can be identified by the protocol for rewards or slashing.
  • Recovery mechanism – the protocol can recover in case provers stop supplying proofs.
  • Flexibility – the design leaves room for future cryptography improvements.


Prover mechanism

The chosen prover mechanism is called “Sidecar” and was suggested by Cooper Kunz.

  • It is a minimally enshrined commitment and slashing scheme that facilitates the sequencer outsourcing proving rights to anyone, given an out-of-protocol prover marketplace. This allows sequencers to leverage reputation or other off-protocol information to make their choice.
  • In particular, this means anyone can take the prover role: a specialized proving marketplace, a vertically integrated sequencer’s prover, or an individual independent prover.
  • After the sequencer chooses its prover, there is a Prover Commitment Phase, by the end of which any sequencer who believes it has a chance to win block proposal rights must signal, via an L1 transaction, the prover’s Ethereum address and the deposit the prover is putting up.
  • After the prover commits, the sequencer reveals the block content. This specific order (prover commitment first, block content second) mitigates potential MEV-stealing (if sequencers had to publish all data to a DA layer before the commitment) and proof-withholding attacks (i.e. putting up a block proposal that seems valid but never revealing the underlying data required to verify it).
  • The prover operates outside of the Aztec protocol and the Aztec network. Hence, after the prover commitment stage, the protocol simply waits a predetermined amount of time for the proof submission phase to begin.
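The commit-then-reveal ordering above can be sketched as a small state machine (class and method names are hypothetical, for illustration only):

```python
class SidecarSlot:
    """Toy model of one block slot: the prover must commit (address and
    deposit on L1) before the sequencer reveals the block contents."""

    def __init__(self):
        self.prover_commitment = None
        self.contents = None

    def commit_prover(self, prover_address: str, deposit_wei: int):
        # Committing before the data is revealed mitigates MEV-stealing
        # and proof-withholding attacks.
        self.prover_commitment = (prover_address, deposit_wei)

    def reveal(self, contents: bytes):
        if self.prover_commitment is None:
            raise RuntimeError("cannot reveal before a prover commitment")
        self.contents = contents

slot = SidecarSlot()
slot.commit_prover("0xProver", deposit_wei=10**18)
slot.reveal(b"block contents")
```

The point of the ordering is that the prover’s deposit is at stake before anyone sees the block data, so a sequencer cannot shop a revealed block around, and a prover cannot back out after seeing it.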

How Sidecar meets required properties


  • Permissionlessness – anyone can run a prover.
  • Prover recognizability – the prover posts its commitment to L1.
  • Recovery mechanism – reorgs are not possible within the current design.
  • Flexibility – there is no hard commitment to one specific proving system.

Conclusion

Decentralization is neither a sentiment nor a meme. It’s one of the core milestones on the way to credible neutrality. And credible neutrality is one of the core milestones on the way to a long-lasting, secure, and robust Ethereum ecosystem.

If the network is not credibly neutral, the safety of users’ funds cannot be guaranteed in the long term. Furthermore, if the network is not credibly neutral, the developers building on top of it can’t be sure that the network will be there for them tomorrow, the day after tomorrow, in a year, or in ten years. They have to trust that the network team are good, reliable people who will continue maintaining the network and fulfill all their promises. But what if that is not the case? The good intentions of a small number of people are not enough to secure hundreds of dapps, the thousands of developers building them, and the millions of users relying on them. The network should be designed in a credibly neutral way from the first to the last bit. Without compromises, without speculation, without promises.

That is what we are working on at Aztec Labs: systematically decentralizing all of the network’s components (e.g. sequencer and prover) with the help and support of a wide community (e.g. through RFP and RFC mechanisms) and top-notch partners (e.g. Block Science).

That is why, especially in the early days, Aztec prioritizes safety over other properties (e.g. reorg attacks are impossible by design, and the upgrade mechanism is rolled out gradually so sequencers have enough time to battle-test it before any assets come to the network).

Besides technical and economic decentralization, Aztec also considers the legal aspect, which comes in the form of a foundation, a suitable vehicle to promote decentralization.

If you want to contribute to Aztec’s decentralization – fill in the form.

This was the first part of the piece on Aztec’s decentralization. In the second part (coming soon), we will cover the upgrade mechanism.


Aztec Network
24 Sep

Testnet Retro - 2.0.3 Network Upgrade

Special thanks to Santiago Palladino, Phil Windle, Alex Gherghisan, and Mitch Tracy for technical updates and review.

On September 17th, 2025, a new network upgrade was deployed, making Aztec more secure and flexible for home stakers. This upgrade, shipped with all the features needed for a fully decentralized network launch, includes a completely redesigned slashing system that allows inactive or malicious operators to be removed, and does not penalize home stakers for short outages. 

With over 23,000 operators running validators across 6 continents (in a variety of conditions), it is critical not to penalize nodes that temporarily drop due to internet connectivity issues. This is because users of the network are also found across the globe, some of whom might have older phones. A significant effort was put into shipping a low-memory proving mode that allows older mobile devices to send transactions and use privacy-preserving apps. 

The network was successfully deployed, and all active validators on the old testnet were added to the queue of the new testnet. This manual migration was only necessary because major upgrades to the governance contracts had gone in since the last testnet was deployed. The new testnet started producing blocks after the queue started to be “flushed,” moving validators into the rollup. Because the network is fully decentralized, the initial flush could have been called by anyone. The network produced ~2k blocks before an invalid block made it to the chain and temporarily stalled block production. Block production is now restored and the network is healthy. This post explains what caused the issue and provides an update on the current status of the network. 

Note: if you are a network operator, you must upgrade to version 2.0.3 and restart your node to participate in the latest testnet. If you want to run a node, it’s easy to get started.

What’s included in the upgrade? 

This upgrade was a team-wide effort that optimized performance and implemented all the mechanisms needed to launch Aztec as a fully decentralized network from day 1. 

Feature highlights include: 

  • Improved node stability: The Aztec node software is now far more stable. Users will see far fewer crashes and increased performance in terms of attestations and blocks produced. This translates into a far better experience using testnet, as transactions get included much faster.
  • Boneh–Lynn–Shacham (BLS) keys: When a validator registers on the rollup, they also provide keys that allow BLS signature aggregation. This unlocks future optimizations where signatures can be combined via p2p communication, then verified on Ethereum, while proving that the signatures come from block proposers.
  • Low-memory proving mode: The client-side proving requirements have dropped dramatically from 3.7GB to 1.3GB through a new low-memory proving mode, enabling older mobile devices to send Aztec transactions and use apps like zkPassport. 
  • AVM performance: The Aztec Virtual Machine (AVM) performance has seen major improvements with constraint coverage jumping from 0% to approximately 90-95%, providing far more secure AVM proving and more realistic proving performance numbers from provers. 
  • Flexible key management: The system now supports flexible key management through keystores, multi-EOA support, and remote signers, eliminating the need to pass private keys through environment variables and representing a significant step toward institutional readiness. 
  • Redesigned slashing: Slashing has been redesigned to provide much better consensus guarantees. Further, the new configuration allows nodes not to penalize home stakers for short outages, such as 20-minute interruptions. 
  • Slashing Vetoer: The Slasher contract now has an explicit vetoer: an address that can prevent slashing. At Mainnet, the initial vetoer will be operated by an independent group of security researchers who will also provide security assessments on upgrades. This acts as a failsafe in the event that nodes are erroneously trying to slash other nodes due to a bug.
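The outage-tolerance idea in the redesigned slashing can be illustrated with a toy predicate (the function name and grace threshold are assumptions for illustration, not the actual slashing configuration):

```python
def is_slashable(offline_minutes: float, grace_minutes: float = 20.0) -> bool:
    """Toy slashing rule: short outages (e.g. a 20-minute interruption
    from a home staker's flaky connection) are tolerated; only sustained
    inactivity beyond the grace period becomes slashable."""
    return offline_minutes > grace_minutes
```

A rule shaped like this is what lets a network with 23,000+ operators across six continents avoid punishing honest nodes for ordinary connectivity hiccups while still removing persistently inactive ones.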

With these updates in place, we’re ready to test a feature-complete network. 

What happened after deployment? 

As mentioned above, block production started when someone called the flush function and a minimum number of operators from the queue were let into the validator set. 

Shortly thereafter, while testing the network, a member of the Aztec Labs team spun up a “bad” sequencer that produced an invalid block proposal. Specifically, one of the state trees in the proposal was tampered with. 

Initial block production 

The expectation was that this would be detected immediately and the block rejected. Instead, a bug was discovered in the validator code where the invalid block proposal wasn't checked thoroughly enough. In effect, the proposal got enough attestations, so it was posted to the rollup. Due to extra checks in the nodes, when the nodes pulled the invalid block from Ethereum, they detected the tampered tree and refused to sync it. This is a good outcome as it prevented the attack. Additionally, prover nodes refused to prove the epoch containing the invalid block. This allowed the rollup to prune the entire bad epoch away. After the prune, the invalid state was reset to the last known good block.

Block production stalled

The prune revealed another, smaller bug, where, after a failed block sync, a prune does not get processed correctly, requiring a node restart to clear up. This led to a 90-minute outage from the moment the block proposal was posted until the testnet recovered. The time was equally split between waiting for pruning to happen and for the nodes to restart in order to process the prune.

The Fix

Validators were correctly re-executing all transactions in the block proposals and verifying that the world state root matched the one in the block proposal, but they failed to check that intermediate tree roots, which are included in the proposal and posted to the rollup contract on L1, were also correct. The attack tweaked one of these intermediate roots while proposing a correct world state root, so it went unnoticed by the attestors. 

As mentioned above, even though the block made it through the initial attestation and was posted to L1, the invalid block was caught by the validators, and the entire epoch was never proven as provers refused to generate a proof for the inconsistent state. 

A fix was pushed that resolved this issue and ensured that invalid block proposals would be caught and rejected. A second fix was pushed that ensures inconsistent state is removed from the uncommitted cache of the world state.

Block production restored

What’s Next

Block production is currently running smoothly, and the network health has been restored. 

Operators who had previously upgraded to version 2.0.3 will need to restart their nodes. Any operator who has not upgraded to 2.0.3 should do so immediately. 

Attestation and Block Production rate on the new rollup

Slashing has also been functioning as expected. Below you can see the slashing signals for each round. A single signal can contain votes for multiple validators, but a validator's attester needs to receive 65 votes to be slashed.

Votes on slashing signals

Join us this Thursday, September 25, 2025, at 4 PM CET on the Discord Town Hall to hear more about the 2.0.3 upgrade. To stay up to date with the latest updates for network operators, join the Aztec Discord and follow Aztec on X.

Noir
18 Sep

Just write “if”: Why Payy left Halo2 for Noir

The TL;DR:

Payy, a privacy-focused payment network, just rewrote its entire ZK architecture from Halo2 to Noir while keeping its network live, funds safe, and users happy. 

Code that took months to write now takes weeks (with MVPs built in as little as 30 minutes). Payy’s codebase shrank from thousands of lines to 250, and now their entire engineering team can actually work on its privacy infra. 

This is the story of how they transformed their ZK ecosystem from one bottlenecked by a single developer to a system their entire team can modify and maintain.

Starting with Halo2

Eighteen months ago, Payy faced a deceptively simple requirement: build a privacy-preserving payment network that actually works on phones. That requires client-side proving.

"Anyone who tells you they can give you privacy without the proof being on the phone is lying to you," Calum Moore, Payy's Technical Lead, states bluntly.

To make a private, mobile network work, they needed:

  • Mobile proof generation with sub-second performance
  • Minimal proof sizes for transmission over weak mobile signals
  • Low memory footprint for on-device proving
  • Ethereum verifier for on-chain settlement

To start, the team evaluated available ZK stacks through their zkbench framework:

STARKs (e.g., RISC Zero): Memory requirements made them a non-starter on mobile, and their large proof sizes are unsuitable for transmission over mobile data.

Circom with Groth16: Required trusted setup ceremonies for each circuit update. It had “abstracted a bit too early” and, as a result, was not high-level enough to develop in comfortably, yet not low-level enough for fine-grained control and optimization, said Calum.

Halo2: Selected based on existing production deployments (ZCash, Scroll), small proof sizes, and an existing Ethereum verifier. As Calum admitted with the wisdom of hindsight: “Back a year and a half ago, there weren’t any other real options.”

Bus factor = 1 😳

Halo2 delivered on its promises: Payy successfully launched its network. But cracks started showing almost immediately.

First, they had to write their own chips from scratch. Then came the real fun: if statements.

"With Halo2, I'm building a chip, I'm passing this chip in... It's basically a container chip, so you'd set the value to zero or one depending on which way you want it to go. And you'd zero out the previous value if you didn't want it to make a difference to the calculation," Calum explained. "When I'm writing in Noir, I just write 'if'."

With Halo2, writing an if statement (programming 101) required building custom chip infra. 

Binary decomposition, another fundamental operation for rollups, meant more custom chips. The Halo2 implementation quickly grew to thousands of lines of incomprehensible code.
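For contrast, here is the relation a binary-decomposition chip has to enforce, written as plain logic (a Python sketch for illustration, not Payy's circuit code):

```python
def binary_decompose(value: int, num_bits: int) -> list[int]:
    """Split a value into little-endian bits: the operation that
    required a custom chip in Halo2."""
    bits = [(value >> i) & 1 for i in range(num_bits)]
    # The two constraints a circuit version must enforce:
    assert all(b * (b - 1) == 0 for b in bits)               # each bit is 0 or 1
    assert sum(b << i for i, b in enumerate(bits)) == value  # bits recompose the value
    return bits
```

Three lines of logic and two constraints; in a circuit DSL without native control flow, each constraint becomes hand-placed cells and gates, which is how the implementation balloons to hundreds of lines.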

And only Calum could touch any of it.

The Bottleneck

"It became this black box that no one could touch, no one could reason about, no one could verify," he recalls. "Obviously, we had it audited, and we were confident in that. But any changes could only be done by me, could only be verified by me or an auditor."

In engineering terms, this is called a bus factor of one: if Calum got hit by a bus (or took a vacation to Argentina), Payy's entire proving system would be frozen. "Those circuits are open source," Calum notes wryly, "but who's gonna be able to read the Halo2 circuits? Nobody."

Evaluating Noir: One day, in Argentina…

During a launch event in Argentina, "I was like, oh, I'll check out Noir again. See how it's going," Calum remembers. He'd been tracking Noir's progress for months, occasionally testing it out, waiting for it to be reliable.

"I wrote basically our entire client-side proof in about half an hour in Noir. And it probably took me - I don't know, three weeks to write that proof originally in Halo2."

Calum recreated Payy's client-side proof in Noir in 30 minutes. And when he tested the proving speed, without any optimization, they were seeing 2x speed improvements.

"I kind of internally… didn't want to tell my cofounder Sid that I'd already made my decision to move to Noir," Calum admits. "I hadn't broken it to him yet because it's hard to justify rewriting your proof system when you have a deployed network with a bunch of money already on the network and a bunch of users."

Rebuilding (Ship of Theseus-ing) Payy

Convincing a team to rewrite the core of a live financial network takes some evidence. The technical evaluation of Noir revealed improvements across every metric:

Proof Generation Time: Sub-0.5 second proof generation on iPhones. "We're obsessive about performance," Calum notes (they’re confident they can push it even further).

Code Complexity: Their entire ZK implementation compressed from thousands of lines of Halo2 to just 250 lines of Noir code. "With rollups, the logic isn't complex—it's more about the preciseness of the logic," Calum explains.

Composability: In Halo2, proof aggregation required hardwiring specific verifiers for each proof type. Noir offers a general-purpose verifier that accepts any proof of consistent size.

"We can have 100 different proving systems, which are hyper-efficient for the kind of application that we're doing," Calum explains. "Have them all aggregated by the same aggregation proof, and reason about whatever needs to be."
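The architectural difference can be sketched in plain Python. This is a toy model of the API shape only, not any cryptography: the `Proof` class, circuit names, and size value are all hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    circuit_id: str  # which application circuit produced this proof
    size: int        # proofs must share a consistent size to be aggregated
    valid: bool      # stand-in for the result of real cryptographic verification

# Halo2-style: one hardwired verifier per proof type. Adding a new proof
# type means writing and wiring in a new verifier.
HARDWIRED = {
    "client": lambda p: p.circuit_id == "client" and p.valid,
    "aggregator": lambda p: p.circuit_id == "aggregator" and p.valid,
}

def verify_hardwired(proof: Proof) -> bool:
    return HARDWIRED[proof.circuit_id](proof)

# Noir-style: one general-purpose verifier that accepts any proof of a
# consistent size, regardless of which circuit produced it.
def verify_general(proof: Proof, expected_size: int) -> bool:
    return proof.size == expected_size and proof.valid

def aggregate(proofs: list[Proof], expected_size: int) -> bool:
    # The same aggregation step works for proofs from any number of circuits.
    return all(verify_general(p, expected_size) for p in proofs)
```

With the general-purpose shape, a brand-new proof type participates in aggregation without touching the verifier at all, which is what makes the "100 different proving systems" approach tractable.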

Migration Time

Initially, the goal was to "completely mirror our Halo2 proofs": no new features. This conservative approach meant they could verify correctness while maintaining a live network.

The migration preserved Payy's production architecture:

  • Rust core (According to Calum, "Writing a financial application in JavaScript is borderline irresponsible")
  • Three-proof system: client-side proof plus two aggregators  
  • Sparse Merkle tree with Poseidon hashing for state management
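The last bullet is the heart of the state management. A minimal sketch of a sparse Merkle tree, assuming SHA-256 as a dependency-free stand-in for Poseidon (Payy's real tree uses the SNARK-friendly Poseidon hash and a much greater depth):

```python
import hashlib

DEPTH = 8  # illustrative depth; production trees are much deeper

def h(left: bytes, right: bytes) -> bytes:
    # Stand-in hash: SHA-256 here instead of Poseidon, to keep the sketch
    # dependency-free.
    return hashlib.sha256(left + right).digest()

# Precompute the hash of an empty subtree at each level, so untouched
# branches never need to be materialized -- that is what makes the tree sparse.
EMPTY = [b"\x00" * 32]
for _ in range(DEPTH):
    EMPTY.append(h(EMPTY[-1], EMPTY[-1]))

class SparseMerkleTree:
    def __init__(self):
        self.leaves: dict[int, bytes] = {}  # only non-empty leaves are stored

    def update(self, index: int, leaf: bytes) -> None:
        self.leaves[index] = leaf

    def node(self, level: int, index: int) -> bytes:
        # Recompute a node from its children; empty leaves fall back to EMPTY[0].
        if level == 0:
            return self.leaves.get(index, EMPTY[0])
        return h(self.node(level - 1, 2 * index),
                 self.node(level - 1, 2 * index + 1))

    def root(self) -> bytes:
        return self.node(DEPTH, 0)
```

An empty tree's root equals the precomputed empty hash at the top level, and updating any single leaf changes the root, which is the commitment property the rollup's state proofs rely on.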

When things are transparent, they’re secure

"If you have your proofs in Noir, any person who understands even a little bit about logic or computers can go in and say, 'okay, I can kinda see what's happening here'," Calum notes.

The audit process was completely transformed. With Halo2: "The auditors that are available to audit Halo2 are few and far between."

With Noir: "You could have an auditor that had no Noir experience do at least a 95% job."

Why? Most audit issues are logic errors, not ZK-specific bugs. When auditors can read your code, they find real problems instead of getting lost in implementation details.

Code Comparison

Halo2: Binary decomposition

  • Write a custom chip for binary decomposition
  • Implement constraint system manually
  • Handle grid placement and cell references
  • Manage witness generation separately
  • Debug at the circuit level when something goes wrong

Payy’s previous 383 line implementation of binary decomposition can be viewed here (pkg/zk-circuits/src/chips/binary_decomposition.rs).


Meanwhile, binary decomposition is handled in Noir with the following single line.

pub fn to_le_bits<let N: u32>(self: Self) -> [u1; N]

(Source)
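What that one line computes is just the little-endian bit decomposition of a field element. Here is a plain-Python model of the same function (illustrative only; in the actual circuit, each bit is additionally constrained to be 0 or 1 and the weighted sum is constrained to equal the original value, which is the part the 383 lines of Halo2 spelled out by hand):

```python
def to_le_bits(value: int, n: int) -> list[int]:
    # Little-endian bit decomposition: bits[0] is the least significant bit.
    assert 0 <= value < 2 ** n, "value must fit in n bits"
    return [(value >> i) & 1 for i in range(n)]

def from_le_bits(bits: list[int]) -> int:
    # The recomposition constraint a circuit enforces: sum(bits[i] * 2^i) == value.
    return sum(b << i for i, b in enumerate(bits))
```

For example, `to_le_bits(6, 4)` returns `[0, 1, 1, 0]`, and `from_le_bits` maps it back to 6.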

What's Next

With Noir's composable proof system, Payy can now build specialized provers for different operations, each optimized for its specific task.

"If statements are horrendous in SNARKs because you pay the cost of the if statement regardless of its run," Calum explains. But with Noir's approach, "you can split your application logic into separate proofs, and run whichever proof is for the specific application you're looking for."

Instead of one monolithic proof trying to handle every case, you can have specialized proofs, each perfect for its purpose.

The Bottom Line

"I fell a little bit in love with Halo2," Calum admits, "maybe it's Stockholm syndrome where you're like, you know, it's a love-hate relationship, and it's really hard. But at the same time, when you get a breakthrough with it, you're like, yes, I feel really good because I'm basically writing assembly-level ZK proofs."

“But now? I just write ‘if’.”

Technical Note: While "migrating from Halo2 to Noir" is shorthand that works for this article, technically Halo2 is an integrated proving system where circuits must be written directly in Rust using its constraint APIs, while Noir is a high-level language that compiles to an intermediate representation and can use various proving backends. Payy specifically moved from writing circuits in Halo2's low-level constraint system to writing them in Noir's high-level language, with Barretenberg (UltraHonk) as their proving backend.

Both tools ultimately enable developers to write circuits and generate proofs, but Noir's modular architecture separates circuit logic from the proving system - which is what made Payy's circuits so much more accessible to their entire team, and now allows them to swap out their proving system with minimal effort as proving systems improve.

Payy's code is open source and available for developers looking to learn from their implementation.

Aztec Network
4 Sep

A New Brand for a New Era of Aztec

After eight years of solving impossible problems, the next renaissance is here. 

We’re at a major inflection point, with both our tech and our builder community going through growth spurts. The purpose of this rebrand is simple: to draw attention to our full-stack privacy-native network and to elevate the rich community of builders who are creating a thriving ecosystem around it. 

For eight years, we’ve been obsessed with solving impossible challenges. We invented new cryptography (Plonk), created an intuitive programming language (Noir), and built the first decentralized network on Ethereum where privacy is native rather than an afterthought. 

It wasn't easy. But now, we're finally bringing that powerful network to life. Testnet is live with thousands of active users and projects that were technically impossible before Aztec.

Our community evolution mirrors our technical progress. What started as an intentionally small, highly engaged group of cracked developers is now welcoming waves of developers eager to build applications that mainstream users actually want and need.

Behind the Brand: A New Mental Model

A brand is more than aesthetics—it's a mental model that makes Aztec's spirit tangible. 

Our Mission: Start a Renaissance

Renaissance means "rebirth"—and that's exactly what happens when developers gain access to privacy-first infrastructure. We're witnessing the emergence of entirely new application categories, business models, and user experiences.

The faces of this renaissance are the builders we serve: the entrepreneurs building privacy-preserving DeFi, the activists building identity systems that protect user privacy, the enterprise architects tokenizing real-world assets, and the game developers creating experiences with hidden information.

Values Driving the Network

This next renaissance isn't just about technology—it's about the ethos behind the build. These aren't just our values. They're the shared DNA of every builder pushing the boundaries of what's possible on Aztec.

Agency: It’s what everyone deserves, and very few truly have: the ability to choose and take action for ourselves. On the Aztec Network, agency is native.

Genius: That rare cocktail of existential thirst, extraordinary brilliance, and mind-bending creation. It’s the fire that fuels our great leaps forward.

Integrity: It’s the respect and compassion we show each other. Our commitment to attacking the hardest problems first, and the excellence we demand of any solution. 

Obsession: That highly concentrated insanity, extreme doggedness, and insatiable devotion that makes us tick. We believe in a different future—and we can make it happen, together. 

Visualizing the Next Renaissance

Just as our technology bridges different eras of cryptographic innovation, our new visual identity draws from multiple periods of human creativity and technological advancement. 

The Wordmark: Permissionless Party 

Our new wordmark embodies the diversity of our community and the permissionless nature of our network. Each letter was custom-drawn to reflect different pivotal moments in human communication and technological progress.

  • The A channels the bold architecture of Renaissance calligraphy—when new printing technologies democratized knowledge. 
  • The Z strides confidently into the digital age with clean, screen-optimized serifs. 
  • The T reaches back to antiquity, imagined as carved stone that bridges ancient and modern. 
  • The E embraces the dot-matrix aesthetic of early computing—when machines first began talking to each other. 
  • And the C fuses Renaissance geometric principles with contemporary precision.

Together, these letters tell the story of human innovation: each era building on the last, each breakthrough enabling the next renaissance. And now, we're building the infrastructure for the one that's coming.

The Icon: Layers of the Next Renaissance

We evolved our original icon to reflect this new chapter while honoring our foundation. The layered diamond structure tells the story:

  • Innermost layer: Sensitive data at the core
  • Black privacy layer: The network's native protection
  • Open third layer: Our permissionless builder community
  • Outermost layer: Mainstream adoption and real-world transformation

The architecture echoes a central plaza—the Roman forum, the Greek agora, the English commons, the American town square—places where people gather, exchange ideas, build relationships, and shape culture. It's a fitting symbol for the infrastructure enabling the next leap in human coordination and creativity.

Imagery: Global Genius 

From the Mughal and Edo periods to the Flemish and Italian Renaissance, our brand imagery draws from different cultures and eras of extraordinary human flourishing—periods when science, commerce, culture and technology converged to create unprecedented leaps forward. These visuals reflect both the universal nature of the Renaissance and the global reach of our network. 

But we're not just celebrating the past—we're creating the future: the infrastructure for humanity's next great creative and technological awakening, powered by privacy-native blockchain technology.

You’re Invited 

Join us to ask questions, learn more and dive into the lore.

Join Our Discord Town Hall. September 4th at 8 AM PT, then every Thursday at 7 AM PT. Come hear directly from our team, ask questions, and connect with other builders who are shaping the future of privacy-first applications.

Take your stance on privacy. Visit the privacy glyph generator to create your custom profile pic and build this new world with us.

Stay Connected. Visit the new website, and to stay up-to-date on all things Noir and Aztec, make sure you’re following along on X.

The next renaissance is what you build on Aztec—and we can't wait to see what you'll create.

Aztec Network
22 Jul

Introducing the Adversarial Testnet

Aztec’s Public Testnet launched in May 2025.

Since then, we’ve been obsessively working toward our ultimate goal: launching the first fully decentralized privacy-preserving layer-2 (L2) network on Ethereum. This effort has involved a team of over 70 people, including world-renowned cryptographers and builders, with extensive collaboration from the Aztec community.

To make something private is one thing, but to also make it decentralized is another. Privacy is only half of the story. Every component of the Aztec Network will be decentralized from day one because decentralization is the foundation that allows privacy to be enforced by code, not by trust. This includes sequencers, which order and validate transactions, provers, which create privacy-preserving cryptographic proofs, and settlement on Ethereum, which finalizes transactions on the secure Ethereum mainnet to ensure trust and immutability.

The community is making strong progress toward full decentralization. The Aztec Network now includes nearly 1,000 sequencers in its validator set, with 15,000 nodes spread across more than 50 countries on six continents. With this globally distributed network in place, the Aztec Network is ready for users to stress test and challenge its resilience.

Introducing the Adversarial Testnet

We're now entering a new phase: the Adversarial Testnet. This stage will test the resilience of the Aztec Testnet and its decentralization mechanisms.

The Adversarial Testnet introduces two key features: slashing, which penalizes validators for malicious or negligent behavior in Proof-of-Stake (PoS) networks, and a fully decentralized governance mechanism for protocol upgrades.

This phase will also simulate network attacks to test the network’s ability to recover independently, ensuring it could continue to operate even if the core team and servers disappeared (see more on Vitalik’s “walkaway test” here). It also opens the validator set to more participants, who can verify their identity online with ZKPassport, a private identity verification app.

Slashing on the Aztec Network

The Aztec Network testnet is decentralized, run by a permissionless network of sequencers.

The slashing upgrade tests one of the network’s most fundamental mechanisms: removing inactive or malicious sequencers from the validator set, an essential step toward strengthening decentralization.

Similar to Ethereum, on the Aztec Network, any inactive or malicious sequencers will be slashed and removed from the validator set. Sequencers will be able to slash any validator that makes no attestations for an entire epoch or proposes an invalid block.

Three slashes result in removal from the validator set. Sequencers may rejoin at any time after being slashed; they just need to rejoin the queue.
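The three-strikes rule above can be sketched as a toy state machine. This is a hedged illustration of the rule as described, not Aztec's actual implementation; the class and method names are hypothetical.

```python
class ValidatorSet:
    MAX_SLASHES = 3  # three slashes => removed from the validator set

    def __init__(self):
        self.slashes: dict[str, int] = {}
        self.active: set[str] = set()
        self.queue: list[str] = []

    def join(self, sequencer: str) -> None:
        # New (or previously slashed) sequencers enter via the queue.
        self.queue.append(sequencer)

    def activate_next(self) -> None:
        if self.queue:
            self.active.add(self.queue.pop(0))

    def slash(self, sequencer: str) -> None:
        # Called when a sequencer makes no attestations for an entire epoch
        # or proposes an invalid block.
        self.slashes[sequencer] = self.slashes.get(sequencer, 0) + 1
        if self.slashes[sequencer] >= self.MAX_SLASHES:
            self.active.discard(sequencer)  # removed from the set...
            self.slashes[sequencer] = 0     # ...but free to rejoin the queue
```

A sequencer slashed three times drops out of the active set, yet nothing prevents it from calling `join` again and working its way back in through the queue.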

Decentralized Governance

In addition to testing network resilience when validators go offline and evaluating the slashing mechanisms, the Adversarial Testnet will also assess the robustness of the network’s decentralized governance during protocol upgrades.

Adversarial Testnet introduces changes to Aztec Network’s governance system.

Sequencers now have an even more central role, as they are the sole actors permitted to deposit assets into the Governance contract.

After the upgrade is defined and the proposed contracts are deployed, sequencers will vote on and implement the upgrade independently, without any involvement from Aztec Labs and/or the Aztec Foundation.

Start Your Plan of Attack  

Starting today, you can join the Adversarial Testnet to help battle-test Aztec’s decentralization and security. Anyone can compete in six categories for a chance to win exclusive Aztec swag, be featured on the Aztec X account, and earn a DappNode. The six challenge categories include:

  • Homestaker Sentinel: Earn 1 Aztec Dappnode by maximizing attestation and proposal success rates and volumes, and actively participating in governance.
  • The Slash Priest: Awarded to the participant who most effectively detects and penalizes misbehaving validators or nodes, helping to maintain network security by identifying and “slashing” bad actors.
  • High Attester: Recognizes the participant with the highest accuracy and volume of valid attestations, ensuring reliable and secure consensus during the adversarial testnet.
  • Proposer Commander: Awarded to the participant who consistently creates the most successful and timely proposals, driving efficient consensus.
  • Meme Lord: Celebrates the creator of the most creative and viral meme that captures the spirit of the adversarial testnet.
  • Content Chronicler: Honors the participant who produces the most engaging and insightful content documenting the adversarial testnet experience.

Performance will be tracked using Dashtec, a community-built dashboard that pulls data from publicly available sources. Dashtec displays a weighted score of your validator performance, which may be used to evaluate challenges and award prizes.

The dashboard offers detailed insights into sequencer performance through a stunning UI, allowing users to see exactly who is in the current validator set and providing a block-by-block view of every action taken by sequencers.

To join the validator set and start tracking your performance, click here. Join us on Thursday, July 31, 2025, at 4 pm CET on Discord for a Town Hall to hear more about the challenges and prizes. Who knows, we might even drop some alpha.

To stay up-to-date on all things Noir and Aztec, make sure you’re following along on X.