From zero to nowhere: smart contract programming in Huff (2/4)

Follow the journey of smart contract programming in Huff, offering unique insights into this coding language.

Written by
Zac Williamson
Hello there!

This is the second article in a series on how to write smart contracts in Huff.

Huff is a recent creation that has rudely inserted itself into the panoply of Ethereum smart contract-programming languages. Barely a language, Huff is more like an assembler that can manage a few guttural yelps that could charitably be interpreted as some kind of syntax.

I created Huff so that I could write an efficient elliptic curve arithmetic contract called Weierstrudel, and reduce the costs of zero-knowledge cryptography on Ethereum. So, you know, a general purpose language that will take the world by storm at any moment…

But hey, I guess it can be used to write an absurdly over-optimised ERC20 contract as well.

So let’s dive back into where we left off.

Prerequisites

  • Part 1 of this series
  • Knowledge of Solidity inline assembly
  • A large glass of red wine. Huff’s Ballmer peak is extremely high, so a full-bodied wine is ideal. A Malbec or a Merlot will also balance out Huff’s bitter aftertaste, but vintage is more important than sweetness here.

Difficulty rating: (medium Huff)

Getting back into the swing of things: Transfer

We left off in the previous article with the skeletal structure of our contract, which is great! We can jump right into the fun stuff and start programming some logic.

We need to implement function transfer(address to, uint256 value) public returns (bool). Here’s a boilerplate Solidity implementation of transfer:

function transfer(address to, uint256 value) public returns (bool) {
  balances[msg.sender] = balances[msg.sender].sub(value);
  balances[to] = balances[to].add(value);
  emit Transfer(msg.sender, to, value);
  return true;
}

Hmm. Well, this is awkward. We need to access some storage mappings and emit an event.

Huff doesn’t have mappings or events.

Okay, let’s take a step back. An ERC20 token represents its balances by mapping addresses to integers: mapping(address => uint) balances. But…Huff does not have types. (Of course Huff doesn’t have types; type checking is expensive! Well, it’s not always free, so it had to go.)

In order to emulate this, we need to dig under the hood and figure out how Solidity represents mappings, with hashes.

Breaking down a Solidity mapping

Smart contracts store data via the sstore opcode, which takes a pointer to a storage location and the 32-byte value to store there. Each storage location can contain 32 bytes of data, but these locations don’t have to be linear (unlike memory locations, which are linear).

Mappings work by combining the mapping key with the storage slot of the mapping, and hashing the result. The result is a 32-byte storage pointer that is unique both to the key being used and to the mapping variable in question.
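To make this concrete, here is a minimal Solidity sketch of that derivation, assuming (as in this series) that balances occupies storage slot 0x00. The helper name is hypothetical, purely for illustration:

// Hypothetical helper: derive the storage slot of balances[to],
// assuming the balances mapping itself sits at slot 0x00.
function balanceSlot(address to) internal pure returns (bytes32) {
    // The key, left-padded to 32 bytes, followed by the mapping's slot,
    // hashed as one contiguous 64-byte buffer.
    return keccak256(abi.encode(to, uint256(0)));
}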

So, we can solve this problem by implementing mappings from scratch! The Transfer event requires address from, address to, uint256 value. We’ll deal with the event at the end of this macro, but given that we will be re-using the from, to and value variables a lot, we might as well throw them on the stack. What could go wrong?

Initialising the stack

Before we start cutting some code, let’s map out the steps our method has to perform:

  1. Increase balance[to] by value
  2. Decrease balance[msg.sender] by value
  3. Error checking
  4. Emit the Transfer(from, to, value) event
  5. Return true

When writing an optimised Huff macro, we need to think strategically about how to perform the above in order to minimise the number of swap opcodes that are required.

Specifically, the variables to and value are located in calldata, the data structure that stores the input data sent to the smart contract. We can load a word of calldata via calldataload. The calldataload opcode takes one input, the offset in calldata to load from, so loading a word of calldata costs 6 gas in total (3 to push the offset, 3 for calldataload itself).

It only costs 3 gas to duplicate a variable on the stack, so we want to load each word from calldata only once and re-use it with dup opcodes.
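As a quick sanity check on those numbers, a sketch (push, calldataload and dup all sit in the EVM’s 3-gas tier):

0x04 calldataload // push offset (3 gas) + calldataload (3 gas) = 6 gas per load
dup1              // duplicating the word already on the stack: 3 gas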

Because the stack is a last-in-first-out structure, the first variables we load onto the stack will be consumed by the last bit of logic in our method.

The last ‘bit of logic’ we need is the event that we will be emitting, so let’s deal with how that will work.

Huff and events

Imagine we’re almost done implementing transfer(address to, uint256 value) public returns (bool); the only minor issue is that we need to emit an event, Transfer(address indexed from, address indexed to, uint256 value).

…I have a confession to make. I’ve never written a Huff contract that emits events. Still, there’s a first time for everything, yes?

Events have two types of data associated with them, topics and data.

Topics are what get created when an event parameter has the indexed prefix. Instead of being stored in the event log’s data field, the parameter address indexed from becomes a ‘topic’: a database lookup index. (For a value type like an address, the topic is the value itself; for dynamic types such as strings, the keccak256 hash of the value is used.)

i.e. When searching the event logs, you can pull out all Transfer events that contained a given address in the from or to field. But indexed parameters live only in the log’s topics; if you look at the data field of a bunch of Transfer logs, you won’t find the from or to addresses there.

Our Transfer event has three topics, despite only having two indexed parameters. That’s because the event signature is also a topic: a keccak256 hash of the canonical event string, much like a function signature.

Digging around in Remix, this is the event signature:

0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF. Just rolls off the tongue doesn’t it?
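If you don’t trust Remix, you can recompute it yourself; a quick Solidity sketch:

// topic0 is the keccak256 hash of the canonical event string
bytes32 constant TRANSFER_TOPIC = keccak256("Transfer(address,address,uint256)");
// == 0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef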

So we know what we need to do with indexed data: just supply the ‘topics’ on the stack. Next, how does an event log its non-indexed data?

Taking a step back, there are five log opcodes: log0, log1, log2, log3, log4. The number describes how many topics each log carries.

We want log3. The interface for the log3 opcode is log3(p1, p2, a, b, c). Memory p1 to p1+p2 contains the log data, and a, b, c represent the log topics.

i.e. we want log3(p1, p2, event_signature, from, to). The values from, to, event_signature are going to be the last variables consumed on our stack, so they must be the first variables we add to it at the start of our program.

Finally, we’re in a position to write some Huff code, oh joy. Here it is:

0x04 calldataload
caller
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF

Next up, we need to define the memory that will contain value. That’s simple enough: we’ll be storing value at memory position 0x00, so p1 = 0x00 and p2 = 0x20. Giving us:

0x04 calldataload
caller
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
0x20
0x00

Finally, we need value, so that we can store it in memory for log3. We will execute the mstore opcode at the end of our method, so that we can access value from the stack for the rest of our method. For now, we just load it onto the stack:

0x04 calldataload
caller
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
0x20
0x00
0x24 calldataload

One final thing: we have assumed that 0x04 calldataload will map to address to. But 0x04 calldataload loads a 32-byte word onto the stack, and addresses are only 20 bytes! We need to mask the 12 most-significant bytes of 0x04 calldataload, in case the transaction sender has added non-zero junk into those upper 12 bytes of calldata.

We can fix this by calling 0x04 calldataload 0x000000000000000000000000ffffffffffffffffffffffffffffffffffffffff and.

Actually, let’s make a macro for that, and one for our event signature to keep it out of the way:

#define macro ADDRESS_MASK = takes(1) returns(1) {
0x000000000000000000000000ffffffffffffffffffffffffffffffffffffffff
and
}

#define macro TRANSFER_EVENT_SIGNATURE = takes(0) returns(1) {
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
}

The final state of our ‘initialisation’ macro is this:

#define macro ERC20__TRANSFER_INIT = takes(0) returns(6) {
0x04 calldataload ADDRESS_MASK()
caller
TRANSFER_EVENT_SIGNATURE()
0x20
0x00
0x24 calldataload
}

Updating balances[to]

Now that we’ve set up our stack, we can proceed iteratively through our method’s steps (you know, like a normal program…).

To increase balances[to], we need to handle mappings. To start, let’s get our mapping key for balances[to]. We need to place to and the storage slot unique to balances linearly in 64 bytes of memory, so we can hash it:

// stack state:
// value 0x00 0x20 signature from to
dup6 0x00 mstore
BALANCE_LOCATION() 0x20 mstore
0x40 0x00 sha3

In the functional style, sha3(a, b) will create a keccak256 hash of the data in memory, starting at memory index a and ending at index a+b.

Notice how our ‘mapping’ uses 0x40 bytes of memory? That’s why the ‘free memory pointer’ in a Solidity contract is stored at memory index 0x40 — the first 2 words of memory are used for hashing to compute mapping keys.
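You can see the same convention from the Solidity side. A minimal inline-assembly sketch (the function name is ours, for illustration):

function freeMemoryPointer() internal pure returns (uint256 ptr) {
    assembly {
        // Solidity keeps the free memory pointer at 0x40; the two words
        // below it (0x00-0x3f) are scratch space used for hashing.
        ptr := mload(0x40)
    }
}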

Optimizing storage pointer construction

I don’t know about you, but I’m not happy with this macro. It smells…inefficient. In part 1, when we set BALANCE_LOCATION() to storage slot 0x00, we did that on purpose! balances is the most commonly used mapping in an ERC20 contract, and there’s no point storing 0x00 in memory. Smart contract memory isn’t like normal memory: nothing is ever uninitialised, because all memory starts off initialised to 0x00. We can scrap that stuff, leaving:

// stack state:
// value 0x00 0x20 signature from to
dup6 0x00 mstore
0x40 0x00 sha3

Note that if our program has previously stored something at index 0x20, we will compute the wrong mapping key.

But this is Huff; clearly the solution is to just never use more than 32 bytes of memory for our entire program. What are we, some kind of RAM hog?

Setting storage variables

Next, we’re going to update balances[to]. To start, we need to load up our balance and duplicate value, in preparation to add it to balances[to].

// stack state:
// key(balances[to]) value 0x00 0x20 signature from to
dup1 sload
dup3

Now, remember that MATH__ADD macro we made in part 1? We can use it here! See, Huff makes programming easy with plug-and-play macros! What a joy.

// stack state:
// key(balances[to]) value 0x00 0x20 signature from to
dup1 sload
dup3
MATH__ADD()

…wait. I said we could use MATH__ADD, not that we were going to. I don’t like how expensive this code is looking and I think we can haggle.

Specifically, let’s look at that MATH__ADD macro:

template <throw_error>
#define macro MATH__ADD = takes(2) returns(1) {
// stack state: a b
dup2 add
// stack state: (a+b) a
dup1 swap2 gt
// stack state: (a > (a+b)) (a+b)
<throw_error> jumpi
}

Remember how, well, optimised, our macro seemed in part 1? All I see now is a bloated gorgon feasting on wasted gas with opcodes dribbling down its chin. Disgusting!

First off, we don’t need that swap2 opcode. Our original macro consumed a from the stack because that variable wasn’t needed anymore. But… a is uint256 value, and we do need it for later — we can replace dup1 swap2 with a simple dup opcode.

But there’s a larger culprit here that needs to go: that jumpi opcode.

Combining error codes

The transfer function has three error tests it must perform: the two safemath checks when updating balances[from] and balances[to], and validating that callvalue is 0.

If we were to implement this naively, we would end up with three conditional jump instructions and associated opcodes to set up jump labels. That’s 48 gas. Quite frankly, I’d rather put that gas budget towards another Malbec, so let’s start optimising.

Instead of directly invoking a jumpi instruction against our error test, let’s store it for later. We can combine all of our error tests into a single test and only one jumpi instruction. Putting this into action, let’s compute the error condition, but leave it on the stack:

// stack state:
// key(balances[to]) value 0x00 0x20 signature from to
dup1 sload // balances[to]
dup3       // value balances[to]
add        // value+balances[to]
dup1       // value+balances[to] value+balances[to]
dup4       // value v+b v+b
gt         // error_code value+balances[to]

Finally, we need to store value+balances[to] at the mapping key we computed previously. We need key(balances[to]) to be in front of value+balances[to], which we can perform with a simple swap opcode:

#define macro ERC20__TRANSFER_GIVE_TO = takes(6) returns(7) {
// stack state:
// value 0x00 0x20 signature from to
dup6 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[to] key(balances[to]) ...
dup3       // value balances[to] ...
add        // value+balances[to] ...
dup1       // value+balances[to] value+balances[to] ...
dup4       // value value+balances[to] value+balances[to] ...
gt         // error_code value+balances[to] key(balances[to])
swap2      // key(balances[to]) value+balances[to] error_code
sstore     // error_code ...
}

And that’s it! We’re done with updating balances[to]. What a breeze.

Updating balances[from]

Next up, we need to repeat that process for balances[from], pillaging the parts of MATH__SUB that are useful.

#define macro ERC20__TRANSFER_TAKE_FROM = takes(7) returns(8) {
// stack state:
// error_code, value, 0x00, 0x20, signature, from, to
caller 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[from], key(balances[from]), error_code,...
dup4 dup2 sub // balances[from]-value, balances[from], key, e,...
dup5 swap3 // key, balances[from]-value, balances[from], value
sstore     // balances[from], value, error_code, value, ...
lt         // error_code_2, error_code, value, ...
}

Now that we have both of our error variables on the stack, we can perform our error test! In addition to the two error codes, we want to test whether callvalue > 0. We don’t need gt(callvalue, 0) here; any non-zero value of callvalue will trigger our jumpi instruction to jump.

callvalue or or <throw_error> jumpi

What an adorable little line of Huff.
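Tracing it opcode by opcode, as a sketch (<throw_error> is the jump-label template parameter the enclosing macro receives):

// stack: error_code_2 error_code value 0x00 0x20 signature from to
callvalue           // callvalue error_code_2 error_code value ...
or                  // (callvalue|error_code_2) error_code value ...
or                  // (callvalue|error_code_2|error_code) value ...
<throw_error> jumpi // jump to the error label if any check failed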

Emitting Transfer(from, to, value)

The penultimate step in our method is to emit that event. Our stack state at this point is where we left it after ERC20__TRANSFER_INIT, so all we need to do is store value at memory index 0x00 and call the log3 opcode.

// stack state:
// value, 0x00, 0x20, event_signature, from, to
0x00 mstore log3

Finally, we need to return true (i.e. 1). We can do that by storing 0x01 at memory position 0x00 and calling return(0x00, 0x20).
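In Huff, that tail is just two more lines (they reappear at the end of the full macro below):

0x01 0x00 mstore // memory[0x00:0x20] = 1
0x20 0x00 return // return(0x00, 0x20): 32 bytes encoding true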

Putting it all together…

At long last, we have our transfer method! It’s this…thing

#define macro OWNER_LOCATION = takes(0) returns(1) {
0x01
}

#define macro ADDRESS_MASK = takes(1) returns(1) {
0x000000000000000000000000ffffffffffffffffffffffffffffffffffffffff
and
}

#define macro TRANSFER_EVENT_SIGNATURE = takes(0) returns(1) {
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
}

#define macro ERC20 = takes(0) returns(0) {
caller OWNER_LOCATION() sstore
}

#define macro ERC20__TRANSFER_INIT = takes(0) returns(6) {
0x04 calldataload ADDRESS_MASK()
caller
TRANSFER_EVENT_SIGNATURE()
0x20
0x00
0x24 calldataload
}

#define macro ERC20__TRANSFER_GIVE_TO = takes(6) returns(7) {
// stack state:
// value, 0x00, 0x20, signature, from, to
dup6 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[to], key(balances[to]), ...
dup3       // value, balances[to], ...
add        // value+balances[to], ...
dup1       // value+balances[to], value+balances[to], ...
dup4       // value value+balances[to] ...
gt         // error_code, value+balances[to], key(balances[to])
swap2      // key(balances[to]), value+balances[to], error_code
sstore     // error_code, ...
}

#define macro ERC20__TRANSFER_TAKE_FROM = takes(7) returns(8) {
// stack state:
// error_code, value, 0x00, 0x20, signature, from, to
caller 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[from], key(balances[from]), error_code,...
dup4 dup2 sub // balances[from]-value, balances[from], key, e,...
dup5 swap3 // key, balances[from]-value, balances[from], value
sstore     // balances[from], value, error_code, value, ...
lt         // error_code_2, error_code, value, ...
}

template <throw_error>
#define macro ERC20__TRANSFER = takes(0) returns(0) {
ERC20__TRANSFER_INIT()
ERC20__TRANSFER_GIVE_TO()
ERC20__TRANSFER_TAKE_FROM()
callvalue or or <throw_error> jumpi
0x00 mstore log3
0x01 0x00 mstore
0x20 0x00 return
}

Wasn’t that fun?

I think that’s a reasonable place to end this article. In the next chapter, we’ll deal with how to handle ERC20 allowances in a Huff contract.

Cheers,

Zac.

Click here for part 3
