
Decentralization is not a meme: Part 1

What do we mean by “decentralization”? Why does Aztec take decentralization seriously? In this post, we explore Aztec’s efforts around protocol decentralization: the sequencer, the prover, and the upgrade mechanism.

Written by Lisa A.
Many thanks to Cooper, Prasad, Rafal, Mahima, and Elias for the review.

Contents

  • What do we mean by “decentralization”?
  • Why Aztec takes decentralization seriously
  • Aztec’s efforts around protocol decentralization: sequencer, prover, and upgrade mechanism
  • Request for proposal (RFP)
  • Sequencer
  • Prover
  • Conclusion

1) What do we mean by “decentralization”?

Decentralization is one of the most speculative topics in blockchain, often treated as little more than a meme.


The questions around decentralization are:

  • Which components should be decentralized (both at the blockchain and application layers)?
  • To what extent?
  • Is decentralization of a specific part of the network a must-have or just a nice-to-have?
  • Will we die if a specific part of the network is not decentralized?
  • Is it enough to have a decentralization roadmap or should we decentralize for real?

The goal of this article is to shed some light on what we mean by decentralization, why it matters, and how decentralization is defined and provided in the context of the Aztec network.

Before we start figuring out the true meaning of decentralization, we should note that decentralization is not a final goal in itself. Instead, it is a way to provide rollups with a number of desired properties, including:

  • Permissionlessness (with censorship resistance as a consequence) – anyone can submit a transaction and the rollup will process it (i.e. the rollup has no opinion on which transactions are good or bad and cannot censor them).
  • Liveness – the chain keeps operating (processing transactions) nonstop, irrespective of what is happening around it.
  • Security – the correctness of state transition (i.e. transactions' correct execution) is guaranteed by something robust and reliable (e.g. zero-knowledge proofs).

In the case of zk-rollups, these properties are tightly connected with “entities” that operate the rollup, including:

  • Sequencer – orders and executes transactions –> impacts permissionlessness and liveness.
  • Prover – generates proof that the transactions were executed correctly –> impacts security and liveness.
  • Governance mechanism (upgrade mechanism) – manages and implements protocol upgrades –> impacts security, liveness, and permissionlessness.

Even though we said at the beginning of the article that decentralization is a speculative topic, it is not purely speculation. Decentralization is required for permissionlessness, censorship resistance, and liveness, which in turn are required to reach the system’s end goal: credible neutrality (at least to some extent). “Credible neutrality” means the protocol is not “designed to favor specific people or outcomes over others” and is “able to convince a large and diverse group of people that the mechanism at least makes that basic effort to be fair”.

Credible neutrality is crucial for rollups as well, which is why decentralization is one of our priorities at Aztec. Progressive decentralization is not an option; decentralization from the start is a must-have, as the regulatory, political, and legal landscapes are constantly changing.

In the next section, we will dive into the specifics of Aztec’s case, looking at its components and their levels of decentralization.

2) Why Aztec takes decentralization seriously

The Aztec network is a privacy-first L2 on Ethereum. Its goal is to let developers build dapps that are both privacy-preserving and compliant (in any desired jurisdiction!).

For Aztec, there are two levels of decentralization: protocol and organizational.

At the protocol level, the Aztec network consists of a number of components, such as the P2P transaction pool (i.e. the mempool), the sequencer, the prover, the upgrade mechanism, economics, Noir (its domain-specific language), the execution layer, cryptography, the web SDK, the rollup contract, and more.

For each of these, decentralization might have a slightly different meaning. But the root reason we care about it is to provide safety for the ecosystem of developers and users.

  • For the rollup contract and upgrade mechanism, the question is who controls the upgrades and how we can diversify this process in terms of quantity and geography.
    A good mechanism should defend the protocol from forced upgrades (e.g. by a court order). It should also mitigate sanctions risk by isolating it at the application level rather than the rollup level.
  • For the sequencer and prover, we also need quantity and geographical decentralization, as well as a multi-client approach where users can choose a vendor from a distributed set with differing priorities.
  • For economic decentralization, we need to ensure that “the ongoing balancing of incentives among the stakeholders — developers, contributors, and consumers — will drive further contributions of value to the overall system”. It covers the vesting of power, control, and ownership with system stakeholders in such a way that the value of the ecosystem as a whole accrues to a broader array of participants.
  • For all software components, such as client software, we need to ensure that copies of the software are distributed widely enough within the community to ensure access, even if the original maintainers choose to abandon the projects.

When it comes to long-term economic decentralization, the desired outcome is power decentralization, which in turn can be achieved through geographical decentralization.

In the context of geographical decentralization, we particularly care that:

  • Diversification among different jurisdictions mitigates the risk of local regulatory regimes attempting to impose their will.
  • When reasoning about extremes and black swan events, having a global system is attractive from the point of view of safety and availability.
  • Intuitively, a system that privileges certain geographies cannot be considered neutral and fair.

For more thoughts on geographical decentralization, check out the article “Decentralized crypto needs you: to be a geographical decentralization maxi” by Phil Daian.

3) Aztec’s efforts around protocol decentralization: sequencer, prover, and upgrade mechanism

The decentralization to-do list is pretty huge. Decentralization mechanism design is a complex process that takes time, which is why Aztec started working on it far in advance and called on the most brilliant minds to collaborate, cooperate, design, and produce the necessary mechanisms that will allow the Aztec network to be credibly neutral from day one.

Request for proposal (RFP)

Since last summer, we’ve announced a number of requests for proposals (RFPs) to invite the community and the greatest minds in the industry to find a range of solutions for the Aztec network’s protocol design.

Everyone was welcome to craft a proposal and post it on the forum. For each of the RFPs, we outlined a number of protocol requirements that would decentralize and diversify the relevant part of Aztec, making it robust and credibly neutral.

For each RFP, we got a number of proposals (all of them are attached in the RFPs’ comments). Proposals were discussed on the forum by the community and analyzed in detail by partners (e.g. Block Science) and the Aztec Labs team.

In this section, we will describe and briefly discuss the chosen proposals.

Sequencer Selection

Some of the desired properties

There are a number of desired properties assigned to the sequencer. These include:

  • Permissionless sequencer role – any actor who adheres to the protocol can fill the role of sequencer.
  • Elegant reorg recovery – the protocol has affordance for recovering its state after an Ethereum reorg.
  • Denial of service – an actor cannot prevent other actors from using the system.
  • L2 chain halt – there is a healing mechanism in case the block proposal process fails.
  • Censorship resistance – it’s infeasibly expensive to maintain sufficient control of the sequencer role to discriminate on transactions.

Other factors to be considered are:

  • How the protocol handles MEV
  • How costly it is to form a cartel
  • Protocol complexity
  • Coordination overhead – how costly it is to coordinate a new round of sequencers

Sequencer mechanism

The chosen sequencer design is called “Fernet” and was suggested by Santiago Palladino (“Palla”), one of the talented engineers at Aztec Labs. Its core component is randomized sequencer selection. To be eligible for selection, sequencers need to stake assets on L1. After staking, a sequencer needs to wait for an activation period of a number of L1 blocks before they can start proposing new blocks. The waiting period guards the protocol against malicious governance attacks.
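
To make the staking and activation rules concrete, here is a minimal TypeScript sketch; the field names, the staking threshold, and the length of the activation period are illustrative placeholders, not actual protocol parameters.

```typescript
// Illustrative sketch only: the stake threshold and activation delay below are
// placeholders, not Aztec protocol constants.
interface SequencerRegistration {
  stakedAmount: bigint;      // assets locked on L1
  activationL1Block: number; // L1 block at which the stake was registered
}

const MIN_STAKE = 10_000n;           // hypothetical minimum stake
const ACTIVATION_DELAY_BLOCKS = 100; // hypothetical waiting period in L1 blocks

function isEligibleToPropose(reg: SequencerRegistration, currentL1Block: number): boolean {
  const hasStake = reg.stakedAmount >= MIN_STAKE;
  // The waiting period guards against malicious governance attacks:
  // freshly staked sequencers cannot start proposing immediately.
  const activated = currentL1Block >= reg.activationL1Block + ACTIVATION_DELAY_BLOCKS;
  return hasStake && activated;
}
```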

Block proposal mechanism

Stage 0: Score calculation

  • In each round (currently expected to last ~12–36 seconds), staked sequencers calculate their round-based score, derived from a hash over RANDAO and their public key.
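
As a rough illustration, the per-round score could be computed along these lines. Treating the score as a keccak256 hash over the RANDAO value, the sequencer’s public key, and the round number is an assumption for illustration only; the protocol defines the exact inputs, hash, and encoding.

```typescript
import { keccak256, concat, toBeHex } from "ethers";

// Illustrative only: the real protocol defines the exact preimage and hash.
// Inputs are 0x-prefixed hex strings; a higher score means a better chance
// of winning block proposal rights for the round.
function sequencerScore(randao: string, sequencerPubKey: string, round: number): bigint {
  const preimage = concat([randao, sequencerPubKey, toBeHex(round, 32)]);
  return BigInt(keccak256(preimage));
}
```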

Stage 1: Proposal

  • Based on the calculated scores, if a sequencer determines that its score for a given round is likely to win, it commits to a block proposal.
  • During the proposal stage, the highest ranking proposers (i.e. sequencers) submit L1 transactions, including a commitment to the Layer-2 transaction ordering in the proposed block, the previous block being built upon, and any additional metadata required by the protocol.
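
A block proposal submitted to L1 at this stage carries roughly the fields listed above. The shape below is a hypothetical illustration, not the actual contract interface, and all field names are assumptions.

```typescript
// Hypothetical shape of a Stage 1 proposal; field names are illustrative.
interface BlockProposal {
  txOrderingCommitment: string;     // commitment to the ordered list of L2 transactions
  parentBlockHash: string;          // the previous block this proposal builds upon
  proposerAddress: string;          // the sequencer submitting the L1 transaction
  metadata: Record<string, string>; // any additional metadata required by the protocol
}
```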

Stage 2: Prover commitment – estimated ~3-5 Ethereum blocks

  • The highest ranking proposers (i.e. sequencers) make an off-chain deal with provers. This might be vertical integration (i.e. a sequencer runs its own prover), a business deal with a specific third-party prover, or a prover-boost auction among third-party proving marketplaces.
    On the sequencers' side, this approach allows them to generate proofs according to their needs. On the network side, it benefits from modularity, enjoying innovations across proving systems.
  • Provers build proofs for blocks with the highest scores.
  • This stage is described in more detail in the next section, dedicated to the proving mechanism.

Stage 3: Reveal

  • At the end of the proposal phase, the sequencer with the highest ranking block proposal on L1 becomes the leader for this cycle, and reveals the block content, i.e. uploads the block contents to either L1 or a verifiable DA layer.
  • As stages 0 and 1 are effectively multi-leader protocols, there is a very high probability that someone will submit a proposal (though it might not be among the leaders according to the score).
    In the event that no one submits a valid block proposal, we introduce a “backup” mode, which enables a first-come, first-served race to submit the first proof to the L1 smart contracts. There is also a similar backup mode in the event that there is a valid proposal, but no valid prover commitment (deposit) by the end of the prover commitment phase or should the block not get finalized.
  • If the leading sequencer posts invalid data during the reveal phase, the sequencer for the next block will build from the previous one.
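
The branching between the normal path and the backup modes described above can be summarized roughly as follows. This is an illustrative sketch; the outcome names and the function are hypothetical, not protocol code.

```typescript
// Illustrative sketch of the fallback logic described above; not protocol code.
type RoundOutcome =
  | "NORMAL"            // a valid proposal with a valid prover commitment was revealed
  | "BACKUP_PROOF_RACE" // first-come, first-served race to submit the first proof to L1
  | "BUILD_ON_PREVIOUS"; // leader posted invalid data, so the next block builds on the previous one

function resolveRound(opts: {
  hasValidProposal: boolean;
  hasValidProverCommitment: boolean;
  leaderRevealedValidData: boolean;
}): RoundOutcome {
  if (!opts.hasValidProposal || !opts.hasValidProverCommitment) {
    // No valid proposal, or a proposal without a valid prover deposit: open the backup race.
    return "BACKUP_PROOF_RACE";
  }
  if (!opts.leaderRevealedValidData) {
    // Invalid reveal: the sequencer for the next block builds from the previous one.
    return "BUILD_ON_PREVIOUS";
  }
  return "NORMAL";
}
```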

Stage 4: Proving – estimated ~40 Ethereum blocks

  • Before the end of this phase, the block proof is expected to be published to L1 for verification.
  • Once the proof for the highest ranking block is submitted to L1 and verified, the block becomes final, assuming its parent block in the chain is also final.
  • This would trigger new tokens to be minted, and payouts to the sequencer, prover commitment address, and the address that submitted the proofs.
  • If block N is committed to but doesn't get proven, its prover deposit is slashed.

The cycle for block N+1 can start at the end of the block N reveal phase.
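
Putting stages 3 and 4 together, the end of a block’s lifecycle can be summarized with the sketch below. It is simplified TypeScript with illustrative field names, not the actual L1 contract logic.

```typescript
// Simplified sketch of how a committed block either finalizes or triggers slashing.
interface PendingBlock {
  parentFinalized: boolean; // is the parent block already final?
  proofSubmitted: boolean;  // was a valid proof verified on L1 before the deadline?
  proverDeposit: bigint;    // deposit posted during the prover commitment phase
}

// Returns whether the block finalizes and how much of the deposit is slashed.
function finalizeOrSlash(block: PendingBlock): { final: boolean; slashed: bigint } {
  if (block.proofSubmitted && block.parentFinalized) {
    // Proof verified and parent final: the block becomes final, triggering token
    // minting and payouts to the sequencer, prover commitment address, and proof submitter.
    return { final: true, slashed: 0n };
  }
  if (!block.proofSubmitted) {
    // Committed to but never proven: the prover's deposit is slashed.
    return { final: false, slashed: block.proverDeposit };
  }
  // Proof exists but the parent is not yet final: keep waiting.
  return { final: false, slashed: 0n };
}
```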

How Fernet meets required properties

  • Permissionless sequencer role – anyone who has locked funds on L1 can propose a block after the waiting period.
  • Elegant reorg recovery – on L2, a reorg is not possible, since a new block can be proposed only after the previous block has been finalized; an L1 reorg, however, might still impact L2.
  • L2 chain halt – recovery relies on the Ethereum copy of the Aztec network state between the last finalized epoch and the current safe block.
  • Censorship resistance – to censor a transaction, an entity would have to “guarantee” that it is repeatedly selected as sequencer while the targeted transaction is awaiting processing; the VRF selection process prevents such a guarantee.
  • MEV – in the public domain, MEV is extracted by the sequencer responsible for the current slot. In the private domain, there is no direct MEV extraction, although some MEV might be extracted probabilistically, depending on the dapp landscape deployed on the chain.
  • Protocol complexity – the protocol design assumes modularity, so the engaged mechanisms (e.g. prover and DA) can be chosen freely and adjusted over time if needed.
  • Cost for private and public function calls – for public functions, call costs depend directly on the executed opcodes (as on any other rollup); for private function calls, there is a fixed cost for every state update and proof verification.

For a detailed analysis of the protocol's ability to satisfy the design requirements, check this report crafted by an independent third party, Block Science.

Prover

Context

In the previous section, we mentioned that proofs are committed to and supplied for proposed blocks (the prover commitment and proving stages). However, we didn’t explicitly define the prover mechanism itself.

To design a prover mechanism, Aztec also issued an RFP after the sequencer mechanism had been chosen to be Fernet (as described in the previous section).

Without going into too much detail, one should note that the Aztec network has two types of proofs: client-side proofs and rollup-side proofs. Client-side proofs are generated for each private function and submitted to the Aztec network by the user. The client-side proving mechanism doesn’t have any decentralization requirements, as all the private data is processed solely on the user’s device, meaning it’s inherently decentralized. Covering client-side proof generation is outside the scope of this piece, but check out one of our previous articles to learn more about it.

The Aztec RFP “Decentralized Prover Coordination” asked for a rollup-side prover mechanism, the goal of which is to generate proofs for blocks.

In particular, it means the sequencer executes every public function and the prover creates a proof of each function’s correct execution. That proof is aggregated into a kernel proof. Each kernel proof is aggregated into another kernel proof and so on (i.e. as a chain of kernel proofs). The final kernel proof is aggregated into a base rollup proof. The base rollup proofs are aggregated into pairs in a tree-like structure. The root of this tree is the final block proof.
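
The aggregation structure described above can be pictured as a fold of kernel proofs followed by a binary tree of rollup proofs. The sketch below is purely schematic: the aggregation functions stand in for real recursive-proof circuits and are assumptions for illustration.

```typescript
// Schematic sketch of the proof aggregation described above; the aggregate*
// functions are stand-ins for real recursive verification circuits.
type Proof = { label: string };

const aggregateKernel = (prev: Proof, next: Proof): Proof =>
  ({ label: `kernel(${prev.label},${next.label})` });
const aggregateRollup = (left: Proof, right: Proof): Proof =>
  ({ label: `rollup(${left.label},${right.label})` });

// Chain of kernel proofs: each function proof is folded into the running kernel proof.
// Assumes at least one function proof per transaction.
function foldKernels(functionProofs: Proof[]): Proof {
  return functionProofs.reduce((acc, p) => aggregateKernel(acc, p));
}

// Base rollup proofs are merged pairwise into a tree; the root is the block proof.
function buildRollupTree(baseProofs: Proof[]): Proof {
  let layer = baseProofs;
  while (layer.length > 1) {
    const next: Proof[] = [];
    for (let i = 0; i < layer.length; i += 2) {
      next.push(i + 1 < layer.length ? aggregateRollup(layer[i], layer[i + 1]) : layer[i]);
    }
    layer = next;
  }
  return layer[0];
}
```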

Desired properties

There are a number of desired properties assigned to the prover mechanism. Among them:

  • Permissionlessness – anyone can run an Aztec prover.
  • The prover of each block can be identified by the protocol, so that it can be rewarded or slashed.
  • There is a recovery mechanism in case provers stop supplying proofs.
  • There is flexibility for future cryptography improvements.


Prover mechanism

The chosen prover mechanism is called “Sidecar” and was suggested by Cooper Kunz.

  • It is a minimally enshrined commitment and slashing scheme that facilitates the sequencer outsourcing proving rights to anyone, given an out-of-protocol prover marketplace. This allows sequencers to leverage reputation or other off-protocol information to make their choice.
  • In particular, it means anyone can take on the prover role: for example, a specialized proving marketplace, a vertically integrated sequencer-run prover, or an individual independent prover.
  • After the sequencer chooses its prover, there is a prover commitment phase. By the end of it, any sequencer who believes they have a chance to win block proposal rights must signal the chosen prover’s Ethereum address via an L1 transaction, and the prover specifies its deposit (see the sketch after this list).
  • After the prover commits, the block content is revealed by the sequencer. Going with this specific order (i.e. first the prover commitment, then the block content reveal) mitigates potential MEV stealing (which could occur if sequencers had to publish all data to a DA layer before the commitment) and proof-withholding attacks (i.e. putting up a block proposal that seems valid but never revealing the underlying data required to verify it).
  • The prover operates outside of the Aztec protocol and the Aztec network. Hence, after the prover commitment stage, the protocol simply waits a predetermined amount of time for the proof submission phase to begin.
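
As a rough sketch of the commitment step, the record a sequencer signals on L1 might look like the following. Contract shape, field names, and the proving-window helper are hypothetical, for illustration only.

```typescript
// Hypothetical illustration of the prover-commitment record a sequencer signals on L1.
interface ProverCommitment {
  blockProposalId: string;   // which block proposal this commitment backs
  proverAddress: string;     // Ethereum address of the chosen prover
  deposit: bigint;           // deposit put up by the prover, slashable if no proof arrives
  commitmentL1Block: number; // L1 block in which the commitment transaction landed
}

// After the commitment, the protocol simply waits a predetermined window for the proof.
function proofDeadline(commit: ProverCommitment, provingWindowBlocks: number): number {
  // e.g. a window on the order of ~40 Ethereum blocks, per the estimate above
  return commit.commitmentL1Block + provingWindowBlocks;
}
```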

How Sidecar meets required properties

  • Permissionlessness – anyone can run a prover.
  • Prover recognizability – the prover posts its commitment to L1.
  • Recovery mechanism – reorgs are not possible within the current design.
  • Flexibility – there is no hard commitment to one specific proving system.

Conclusion

Decentralization is neither a sentiment nor a meme. It’s one of the core milestones on the way to credible neutrality. And credible neutrality is one of the core milestones on the way to a long-lasting, secure, and robust Ethereum ecosystem.

If the network is not credibly neutral, the safety of users’ funds cannot be guaranteed in the long term. Furthermore, if the network is not credibly neutral, the developers building on top of it can’t be sure that the network will be there for them tomorrow, the day after tomorrow, in a year, in ten years, and so on. They have to trust that the network team are good, reliable people who will keep maintaining the network and fulfill all their promises. But what if that is not the case? The good intentions of a small number of people are not enough to secure hundreds of dapps, the thousands of developers building them, and the millions of users relying on them. The network should be designed in a credibly neutral way from the first bit to the last. Without compromises, without speculation, without promises.

That is what we are working on at Aztec Labs: systematically decentralizing all of the network’s components (e.g. sequencer and prover) with the help and support of a wide community (e.g. through RFP and RFC mechanisms) and top-notch partners (e.g. Block Science).

That is why, especially in the early days, Aztec prioritizes safety over other properties (e.g. making reorg attacks impossible by design, and rolling out the upgrade mechanism gradually so that sequencers have enough time to battle-test it before any assets come to the network).

Besides technical and economic decentralization, Aztec also considers the legal aspect, which comes in the form of a foundation – a suitable vehicle for promoting decentralization.

If you want to contribute to Aztec’s decentralization – fill in the form.

This was the first part of the piece on Aztec’s decentralization. In the second part (coming soon), we will cover the upgrade mechanism.
