Aztec Network
7 Feb

From zero to nowhere: smart contract programming in Huff (1/4)

In this series, learn smart contract programming in Huff directly from Zac.

Written by
Zac Williamson

Hello there!

I want to write about a piece of runoff that has oozed out of the primordial slop on the AZTEC factory floor: …Huff.

Huff is an Ethereum smart contract programming ‘language’ that was developed while writing weierstrudel, an elliptic curve arithmetic library for validating zero-knowledge proofs.

Elliptic curve arithmetic is computationally expensive, so developing an efficient implementation was paramount, and not something that could be done in native Solidity.

It wasn’t even something that could be done in Solidity inline assembly, so we made Huff.

To call Huff a language is being generous — it’s about as close as one can get to EVM assembly code, with a few bits of syntactic sugar bolted on.

Huff programs are composed of macros, where each macro in turn is composed of a combination of more macros and EVM assembly opcodes. When a macro is invoked, template parameters can be supplied to the macro, which themselves are macros.
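Concretely, macro composition looks something like this (a minimal sketch in the same syntax we'll use throughout this series, with made-up macro names):

#define macro PUSH_PAIR = takes(0) returns(2) {
   0x01 0x02       // push two constants; stack (top first): 0x02 0x01
}

#define macro SUM_PAIR = takes(0) returns(1) {
   PUSH_PAIR() add // PUSH_PAIR expands in place, then add eats both stack items
}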

Unlike a LISP-like language or something with sensible semantics, Huff doesn’t really have expressions either. That would require things like knowing how many variables a Huff macro adds to the stack at compile time, or expecting a Huff macro to not occasionally jump into the middle of another macro. Or assuming a Huff macro won’t completely mangle the program counter by mapping variables to jump destinations in a lookup table. You know, completely unreasonable expectations.

Huff doesn’t have functions. Huff doesn’t even have variables, only macros.

Huff is good for one thing, though, which is writing extremely gas-optimised code. The kind of code where the overhead of the jump instruction in a function call is too expensive.

The kind of code where an extra swap instruction for a variable assignment is an outrageous luxury.

At the very least, it does this quite well. The weierstrudel library performs elliptic curve multiple-scalar multiplication for less gas than Ethereum’s “precompile” smart contract. An analogous Solidity smart contract is ~30–100 times more expensive.

It also enables complicated algorithms to be broken down into constituent macros that can be rigorously tested, which is useful.

Huff is also a game, played on a chess-board. One player has chess pieces, the other draughts pieces. The rules don’t make any sense, the game is deliberately confusing and it is an almost mathematical certainty that the draughts player will lose. You won’t find references to this game online because it was “invented” in a pub by some colleagues of mine in a past career and promptly forgotten about for being a terrible game.

I found that writing Huff macros invoked similar emotions to playing Huff, hence the name.

{{blog_divider}}

Programming in Huff

Given the absence of any documentation, I figured it might be illuminating to write a short series on how to write a smart contract in Huff. You know, if you’re looking for time to kill and you’ve run out of more interesting things to do, like watching paint dry or rubbing salt in your eyes.

If you want to investigate further, you’ll find Huff on GitHub. For some demonstration Huff code, the weierstrudel smart contract is written entirely in Huff.

{{blog_divider}}

“Hello World” — an ERC20 implementation in Huff

Picture the scene — the year is 2020 and the world is reeling from a new global financial crisis. With the collapse of the monetary base, capital flees to the only store of stable value that can be found — first-generation Crypto-Kitties. Amidst this global carnage, Ethereum has failed to achieve its scaling milestones and soaring gas fees cripple the network. It is a world on the brink, where one single edict is etched into the minds of citizens from San Francisco to Shanghai — the tokens must flow…or else.

This is truly the darkest timeline, and in the darkest timeline, we code in Huff.

{{blog_divider}}

Finding our feet

We’re going to write an ERC20 token contract. But not just any ERC20 contract — we’re going to write an ERC20 contract where every opcode must justify its place, or be scourged from existence…

Let’s start by looking at the Solidity interface for a ‘mintable’ token — there’s not much point in an ERC20 contract if it doesn’t have any tokens, after all.

function totalSupply() public view returns (uint);
function balanceOf(address tokenOwner) public view returns (uint);
function allowance(address tokenOwner, address spender) public view returns (uint);
function transfer(address to, uint tokens) public returns (bool);
function approve(address spender, uint tokens) public returns (bool);
function transferFrom(address from, address to, uint tokens) public returns (bool);
function mint(address to, uint tokens) public returns (bool);

event Transfer(address indexed from, address indexed to, uint tokens);
event Approval(address indexed tokenOwner, address indexed spender, uint tokens);

That doesn’t look so bad, how hard can this be?

{{blog_divider}}

Bootstrapping

Before we start writing the main subroutines, remember that Huff doesn’t do variables. But there’s a macro for that! Specifically, we need to be able to identify storage locations with something that resembles a variable.

Let’s create some macros that refer to storage locations that we’re going to be storing the smart contract’s state in. For Solidity smart contracts, the compiler will (under the hood) assign every storage variable to a storage pointer and we’re doing the same here.
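For reference, the slice of Solidity state we’re about to emulate might look like the following sketch (the contract name is just for illustration; the compiler assigns slots in declaration order):

contract ERC20State {
   mapping(address => uint) internal balances; // assigned storage slot 0
   address internal owner;                     // assigned storage slot 1
}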

First up, the storage pointer that maps to token balances:

#define macro BALANCE_LOCATION = takes(0) returns(1) {
   0x00
}

The takes field refers to how many EVM stack items this macro consumes. returns refers to how many EVM stack items this macro will add onto the stack.

Finally, the macro code is just 0x00. This will push 0 onto the EVM stack; we’re associating balances with the first storage ‘slot’ in our smart contract.

We also need a storage location for the contract’s owner:

#define macro OWNER_LOCATION = takes(0) returns(1) {
   0x01
}
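These macros expand in place wherever they’re invoked, so BALANCE_LOCATION() compiles to nothing more than a push of 0x00. Reading the owner out of storage later on is then simply:

OWNER_LOCATION() sload // stack state: owner

which is exactly the pattern we’ll use in the utility macros below.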

{{blog_divider}}

Implementing SafeMath in Huff

SafeMath is a Solidity library that performs arithmetic operations whilst guarding against integer overflow and underflow.

We need the same functionality in Huff. After all, we wouldn’t want to write unsafe Huff code. That would be irrational. ERC20 is a simple contract, so we will only need addition and subtraction capabilities.

Let’s consider our first macro, MATH__ADD. Normally, this would be a function with two variables as input arguments. But Huff doesn’t have functions.

Huff doesn’t have variables either.

...

Let’s take a step back then. What would this function look like if we were to rip out Solidity’s syntactic sugar? This is the function interface:

function add(uint256 a, uint256 b) internal view returns (uint256 c);
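A typical SafeMath-style body for this interface looks something like the following sketch (not the exact library code, but the behaviour we need to reproduce):

function add(uint256 a, uint256 b) internal view returns (uint256 c) {
   c = a + b;
   require(c >= a, "integer overflow");
}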

Under the hood, when the add function is called, variables a and b will be pushed to the front of the EVM’s stack.

Behind them on the stack will be a jump label that corresponds to the return destination of this function. But we’re going to ignore that — it’s cheaper to directly graft the function bytecode in-line where it’s needed, instead of spending gas by messing around with jumps.

So for our first macro, MATH__ADD, we expect the first two items on the EVM stack to be the variables that we want to add. This macro will consume these two variables, and return the result on the stack. If an integer overflow is triggered, the macro will throw an error.

Starting with the basics, if our stack state is a, b, we need a+b. Once we have a+b, we need to compare it with either a or b. If either is greater than a+b, we have an integer overflow.

So step 1: clone b, creating stack state: b, a, b. We do this with the dup2 opcode. We then call add, which eats the first two stack variables and spits out a+b, leaving us with (a+b), b on the stack.

Next up, we need to validate that a+b >= b. One slight problem here — the Ethereum Virtual Machine doesn’t have an opcode that maps to the >= operator! We only have gt and lt opcodes to work with.

We also have the eq opcode, so we could check whether a+b > b and perform a logical OR operation with a+b = b. i.e.:

// stack state: (a+b) b
dup2 dup2 gt // stack state: ((a+b) > b) (a+b) b
dup3 dup3 eq // stack state: ((a+b) = b) ((a+b) > b) (a+b) b
or           // stack state: ((a+b) >= b) (a+b) b

But that’s expensive, we’ve more than doubled the work we’re doing! Each opcode in the above section is 3 gas so we’re chewing through 21 gas to compare two variables. This isn’t Solidity — this is Huff, and it’s time to haggle.

A cheaper alternative is to instead validate that (b > (a+b)) == 0, i.e.:

// stack state: (a+b) b
dup1 dup3 gt // stack state: (b > (a+b)) (a+b) b
iszero       // stack state: ((b > (a+b)) == 0) (a+b) b

Much better, only 12 gas. We can almost live with that, but we’re not done bargaining.

We can optimize this further, because once we’ve performed this step, we don’t need b on the stack anymore — we can consume it. We still need (a+b) on the stack however, so we need a swap opcode to get b in front of (a+b) on the program stack. This won’t save us any gas up-front, but we’ll save ourselves an opcode later on in this macro.

dup1 swap2 gt // stack state: (b > (a+b)) (a+b)
iszero        // stack state: ((a+b) >= b) (a+b)

Finally, if b > (a+b) we need to throw an error. When implementing “if <x> throw an error”, we have two options to take, because of how the jumpi instruction works.

jumpi is how the EVM performs conditional branching. jumpi will consume the top two variables on the stack. It treats the first variable as a position in the program code, and will jump to it only if the second variable is not zero.

When throwing errors, we can test for the error condition, and if true jump to a point in the program that will throw an error.

OR we can test for the opposite of the error condition, and if true, jump to a point in the program that skips over some code that throws an error.

For example, this is how we would program option 2 for our safe add macro:

// stack state: ((a+b) >= b) (a+b)
no_overflow jumpi
   0x00 0x00 revert // throw an error
no_overflow:
// continue with algorithm

Option one, on the other hand, looks like this:

// stack state: ((a+b) >= b) (a+b)
iszero // stack state: (b > (a+b)) (a+b)
throw_error jumpi
// continue with algorithm

For our use case, option 1 is more efficient, because if we chain option 1 with our condition test, we end up with:

dup2 add dup1 swap2 gt
iszero
iszero
throw_error jumpi

We can remove the two iszero opcodes because they cancel each other out! Leaving us with the following macro:

#define macro MATH__ADD = takes(2) returns(1) {
   // stack state: a b
   dup2 add
   // stack state: (a+b) b
   dup1 swap2 gt
   // stack state: (b > (a+b)) (a+b)
   throw_error jumpi
}

However, we have a problem! We haven’t defined our jump label throw_error , or what happens when we hit it. We can’t add it to the end of macro MATH__ADD , because then we would have to jump over the error-throwing code if the error condition was not met.

We would prefer not to have macros that use jump labels that are not declared inside the macro itself. We can solve this by passing the jump label throw_error as a template parameter. It is then the responsibility of the macro that invokes MATH__ADD to supply the correct jump label — which ideally should be a local jump label and not a global one.

Our final macro looks like this:

template <throw_error_jump_label>
#define macro MATH__ADD = takes(2) returns(1) {
   // stack state: a b
   dup2 add
   // stack state: (a+b) b
   dup1 swap2 gt
   // stack state: (b > (a+b)) (a+b)
   <throw_error_jump_label> jumpi
}
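To make the template mechanism concrete, a calling macro might supply the label like this (a hypothetical sketch; the real call sites appear when we implement the ERC20 methods in part 2):

#define macro ADD_EXAMPLE = takes(2) returns(1) {
   // stack state: a b
   MATH__ADD<add_error>()
   // stack state: (a+b)
   no_error jump
   add_error:
       0x00 0x00 revert
   no_error:
}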

The jumpi opcode is 10 gas, and the others cost 3 gas each (assuming <throw_error_jump_label> eventually maps to a PUSH opcode) — 28 gas in total.

As an aside, let’s consider the overhead created by Solidity when calling SafeMath.add(a, b). First, values a and b are duplicated on the stack; functions don’t consume existing stack variables. Next, the return destination, that must be jumped to when the function finishes, is pushed onto the stack. Finally, the jump destination of SafeMath.add is pushed onto the stack and the jump instruction is called.

Once the function has finished its work, the jump instruction is called to jump back to the return destination. The values a, b are then assigned to local variables by identifying the location on the stack that these variables occupy, calling a swap opcode to manoeuvre the return value into the allocated stack location, followed by a pop opcode to remove the old value. This is performed twice for each variable.

In total that’s…

  • 4 dup opcodes (3 gas each)
  • 2 jump opcodes (8 gas each)
  • 2 swap opcodes (3 gas each)
  • 2 pop opcodes (2 gas each)
  • 2 jumpdest opcodes (1 gas each)
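Adding that up: (4 × 3) + (2 × 8) + (2 × 3) + (2 × 2) + (2 × 1) = 12 + 16 + 6 + 4 + 2 = 40 gas.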

To summarise, the act of calling SafeMath.add as a Solidity function would cost 40 gas before the algorithm actually does any work.

To summarise the summary, our MATH__ADD macro does its entire job in less gas than the overhead of merely calling a Solidity function.

To summarise the summary of the summary, this is acceptable Huff code.

{{blog_divider}}

Subtraction

Finally, we need an equivalent macro for subtraction:

template <throw_error_jump_label>
#define macro MATH__SUB = takes(2) returns(1) {
   // stack state: a b
   // calling sub will create (a-b)
   // if (b>a) we have integer underflow - throw an error
   dup1 dup3 gt
   // stack state: (b>a) a b
   <throw_error_jump_label> jumpi
   // stack state: a b
   sub
   // stack state: (a-b)
}

{{blog_divider}}

Utility macros

Next up, we need to define some utility macros we’ll be using. We need a macro that validates that the transaction sender has not sent any ether to the smart contract, UTILS__NOT_PAYABLE. For our mint method, we’ll also need a macro that validates that the message sender is the contract’s owner, UTILS__ONLY_OWNER:

template<error_location>
#define macro UTILS__NOT_PAYABLE = takes(0) returns(0) {
   callvalue <error_location> jumpi
}

#define macro UTILS__ONLY_OWNER = takes(0) returns(0) {
   OWNER_LOCATION() sload caller eq is_owner jumpi
       0x00 0x00 revert
   is_owner:
}

N.B. revert consumes two stack items. p x revert will take memory starting at x, and return the next p bytes as an error code. We’re not going to worry about error codes here, just throwing an error is good enough.

{{blog_divider}}

Creating the constructor

Now that we’ve set up our helper macros, we’re close to actually being able to write our smart contract methods. Congratulations on nearly reaching step 1!

To start with, we need a constructor. This is just another macro in Huff. Our constructor is very simple — we just need to record who the owner of the contract is. In Solidity it looks like this:

constructor() public {
   owner = msg.sender;
}

And in Huff it looks like this:

#define macro ERC20 = takes(0) returns(0) {
   caller OWNER_LOCATION() sstore
}

The EVM opcode caller will push the message sender’s address onto the stack.

We then push the storage slot we’ve reserved for the owner onto the stack.

Finally we call sstore, which will consume the first two stack items and store the 2nd stack item, using the value of the 1st stack item as the storage pointer.
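Expanded out, with the stack shown top-first, the whole constructor is just:

caller           // stack state: msg.sender
OWNER_LOCATION() // stack state: 0x01 msg.sender
sstore           // storage slot 0x01 now holds msg.sender; stack is empty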

For more information about storage pointers and how smart contracts manage state — Andreas Olofsson’s Solidity workshop on storage is a great read.

{{blog_divider}}

Parsing the function signature

Are we ready to start writing our smart contract methods yet? Of course not, this is Huff. Huff is efficient, but slow.

I like to think of Huff as a trusty tortoise, if the tortoise is actually a hundred rats stitched into a tortoise suit, and each rat is a hundred maggots stitched into a rat suit.

…anyhow, we still need our function selector. But Huff doesn’t do functions; we’re going to have to create them from more basic building blocks.

One of the first pieces of code generated by the Solidity compiler is code to unpick the function signature. A function signature is a unique marker that maps to a function name.

For example, consider the Solidity function balanceOf(address tokenOwner) public view returns (uint balance). The function signature will take the core identifying information of the function:

  • the function name
  • the input argument types

This is represented as a string, i.e. "balanceOf(address)". A keccak256 hash of this string is taken, and the most significant 4 bytes of the hash are then used as the function signature.
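You can reproduce this in Solidity as a quick sanity check (the function name here is just for illustration); the constant it returns is the familiar balanceOf selector that will reappear in our jump table below:

function balanceOfSelector() internal pure returns (bytes4) {
   return bytes4(keccak256("balanceOf(address)")); // 0x70a08231
}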

This online tool makes it easier to find the signature of a function.

It’s a bit of a mouthful, but it creates a (mostly) unique identifier for any given function — this allows contracts to conform to a defined interface that other smart contracts can call.

For example, if the function signature for a given function varied from contract to contract, it would be impossible to have an ‘ERC20’ token, because other smart contracts wouldn’t know how to construct a given contract’s function signature.

With that out of the way, we will find the function signature in the first 4 bytes of calldata. We need to extract this signature and then figure out what to do with it.

Solidity will create function signature hashes under the hood so you don’t have to, but Huff is a bit too primitive for that. We have to supply them directly. We can identify the ERC20 function signatures by pulling them out of Remix.

We can parse a function signature by extracting the first 4 bytes of calldata and then performing a series of if-else comparisons over every function hash.

We can use the bit-shift instructions introduced in Constantinople to save a bit of gas here. 0x00 calldataload will extract the first 32 bytes of calldata and push them onto the stack as a single EVM word — i.e. the 4 bytes we want end up in the most significant byte positions, but we need them in the least significant positions.

We can do this with 0x00 calldataload 224 shr
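Traced through with balanceOf as an example, assuming the calldata starts with the selector 0x70a08231 followed by its 32-byte argument:

0x00 calldataload // stack state: 0x70a08231????????... (selector in the most significant 4 bytes)
224 shr           // stack state: 0x00...0070a08231    (selector shifted into the least significant 4 bytes)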

We can execute ‘functions’ by comparing the calldata with a function signature, and jumping to the relevant macro if there is a match. i.e:

0x00 calldataload 224 shr // function signature
dup1 0xa9059cbb eq transfer jumpi
dup1 0x23b872dd eq transfer_from jumpi
dup1 0x70a08231 eq balance_of jumpi
dup1 0xdd62ed3e eq allowance jumpi
dup1 0x095ea7b3 eq approve jumpi
dup1 0x18160ddd eq total_supply jumpi
dup1 0x40c10f19 eq mint jumpi
// If we reach this point, we've reached the fallback function.
// However we don't have anything inside our fallback function!
// We can just exit instead, after checking that callvalue is zero:
UTILS__NOT_PAYABLE<error_location>()
0x00 0x00 return

We want the scope of this macro to be constrained to identifying where to jump — the location of these jump labels is elsewhere in the code. Again, we use template parameters to ensure that jump labels are only explicitly called inside the macros that they are defined in.

Our final macro looks like this:

template <transfer, transfer_from, balance_of, allowance, approve, total_supply, mint, error_location>
#define macro ERC20__FUNCTION_SIGNATURE = takes(0) returns(0) {
   0x00 calldataload 224 shr // function signature
   dup1 0xa9059cbb eq <transfer> jumpi
   dup1 0x23b872dd eq <transfer_from> jumpi
   dup1 0x70a08231 eq <balance_of> jumpi
   dup1 0xdd62ed3e eq <allowance> jumpi
   dup1 0x095ea7b3 eq <approve> jumpi
   dup1 0x18160ddd eq <total_supply> jumpi
   dup1 0x40c10f19 eq <mint> jumpi
   UTILS__NOT_PAYABLE<error_location>()
   0x00 0x00 return
}

{{blog_divider}}

Setting up boilerplate contract code

Finally, we have enough to write the skeletal structure of our main function — the entry-point when our smart contract is called. We represent each method with a macro, which we will need to implement.

#define macro ERC20__MAIN = takes(0) returns(0) {


   ERC20__FUNCTION_SIGNATURE<
       transfer,
       transfer_from,
       balance_of,
       allowance,
       approve,
       total_supply,
       mint,
       throw_error
>()

   transfer:
       ERC20__TRANSFER<throw_error>()
   transfer_from:
       ERC20__TRANSFER_FROM<throw_error>()
   balance_of:
       ERC20__BALANCE_OF<throw_error>()
   allowance:
       ERC20__ALLOWANCE<throw_error>()
   approve:
       ERC20__APPROVE<throw_error>()
   total_supply:
       ERC20__TOTAL_SUPPLY<throw_error>()
   mint:
       ERC20__MINT<throw_error>()
   throw_error:
       0x00 0x00 revert
}

…Tadaa.

Finally we’ve set up our pre-flight macros and boilerplate code and we’re ready to start implementing methods! But that’s enough for today.

In part 2 we’ll implement the ERC20 methods as glistening Huff macros, run some benchmarks against a Solidity implementation and question whether any of this was worth the effort.

Cheers,

Zac.

Click here for part 2
