Aztec Network
7 Feb

From zero to nowhere: smart contract programming in Huff (1/4)

In this series, learn smart contract programming in Huff directly from Zac.

Written by
Zac Williamson
Hello there!

I want to write about a piece of runoff that has oozed out of the primordial slop on the AZTEC factory floor: …Huff.

Huff is an Ethereum smart contract programming ‘language’ that was developed while writing weierstrudel, an elliptic curve arithmetic library for validating zero-knowledge proofs.

Elliptic curve arithmetic is computationally expensive, so developing an efficient implementation was paramount, and not something that could be done in native Solidity.

It wasn’t even something that could be done in Solidity inline assembly, so we made Huff.

To call Huff a language is being generous — it’s about as close as one can get to EVM assembly code, with a few bits of syntactic sugar bolted on.

Huff programs are composed of macros, where each macro in turn is composed of a combination of more macros and EVM assembly opcodes. When a macro is invoked, template parameters can be supplied to the macro, which themselves are macros.

Unlike a LISP-like language or something with sensible semantics, Huff doesn’t really have expressions either. That would require things like knowing how many variables a Huff macro adds to the stack at compile time, or expecting a Huff macro to not occasionally jump into the middle of another macro. Or assuming a Huff macro won’t completely mangle the program counter by mapping variables to jump destinations in a lookup table. You know, completely unreasonable expectations.

Huff doesn’t have functions. Huff doesn’t even have variables, only macros.

Huff is good for one thing, though, which is writing extremely gas-optimised code. The kind of code where the overhead of the jump instruction in a function call is too expensive.

The kind of code where an extra swap instruction for a variable assignment is an outrageous luxury.

At the very least, it does this quite well. The weierstrudel library performs elliptic curve multiple-scalar multiplication for less gas than Ethereum’s “precompile” smart contract. An analogous Solidity smart contract is ~30–100 times more expensive.

It also enables complicated algorithms to be broken down into constituent macros that can be rigorously tested, which is useful.

Huff is also a game, played on a chess-board. One player has chess pieces, the other draughts pieces. The rules don’t make any sense, the game is deliberately confusing and it is an almost mathematical certainty that the draughts player will lose. You won’t find references to this game online because it was “invented” in a pub by some colleagues of mine in a past career and promptly forgotten about for being a terrible game.

I found that writing Huff macros invoked similar emotions to playing Huff, hence the name.

{{blog_divider}}

Programming in Huff

Given the absence of any documentation, I figured it might be illuminating to write a short series on how to write a smart contract in Huff. You know, if you’re looking for time to kill and you’ve run out of more interesting things to do like watch paint dry or rub salt in your eyes.

If you want to investigate further, you’ll find Huff on GitHub. For some demonstration Huff code, the weierstrudel smart contract is written entirely in Huff.

{{blog_divider}}

“Hello World” — an ERC20 implementation in Huff

Picture the scene — the year is 2020 and the world is reeling from a new global financial crisis. With the collapse of the monetary base, capital flees to the only store of stable value that can be found — first-generation Crypto-Kitties. Amidst this global carnage, Ethereum has failed to achieve its scaling milestones and soaring gas fees cripple the network.

It is a world on the brink, where one single edict is etched into the minds of citizens from San Francisco to Shanghai — The tokens must flow… or else.

This is truly the darkest timeline, and in the darkest timeline, we code in Huff.

{{blog_divider}}

Finding our feet

We’re going to write an ERC20 token contract. But not just any ERC20 contract — we’re going to write an ERC20 contract where every opcode must justify its place, or be scourged from existence…

Let’s start by looking at the Solidity interface for a ‘mintable’ token — there’s not much point in an ERC20 contract if it doesn’t have any tokens, after all.

function totalSupply() public view returns (uint);

function balanceOf(address tokenOwner) public view returns (uint);

function allowance(address tokenOwner, address spender) public view returns (uint);

function transfer(address to, uint tokens) public returns (bool);

function approve(address spender, uint tokens) public returns (bool);

function transferFrom(address from, address to, uint tokens) public returns (bool);

function mint(address to, uint tokens) public returns (bool);

event Transfer(address indexed from, address indexed to, uint tokens);

event Approval(address indexed tokenOwner, address indexed spender, uint tokens);

That doesn’t look so bad, how hard can this be?

{{blog_divider}}

Bootstrapping

Before we start writing the main subroutines, remember that Huff doesn’t do variables. But there’s a macro for that! Specifically, we need to be able to identify storage locations with something that resembles a variable.

Let’s create some macros that refer to storage locations that we’re going to be storing the smart contract’s state in. For Solidity smart contracts, the compiler will (under the hood) assign every storage variable to a storage pointer and we’re doing the same here.

First up, the storage pointer that maps to token balances:

#define macro BALANCE_LOCATION = takes(0) returns(1) {
   0x00
}

The takes field refers to how many EVM stack items this macro consumes. returns refers to how many EVM stack items this macro will add onto the stack.

Finally, the macro code is just 0x00. This will push 0 onto the EVM stack; we’re associating balances with the first storage ‘slot’ in our smart contract.

We also need a storage location for the contract’s owner:

#define macro OWNER_LOCATION = takes(0) returns(1) {
   0x01
}

{{blog_divider}}

Implementing SafeMath in Huff

SafeMath is a Solidity library that performs arithmetic operations whilst guarding against integer overflow and underflow.

We need the same functionality in Huff. After all, we wouldn’t want to write unsafe Huff code. That would be irrational.

ERC20 is a simple contract, so we will only need addition and subtraction capabilities.

Let’s consider our first macro, MATH__ADD. Normally, this would be a function with two variables as input arguments. But Huff doesn’t have functions.

Huff doesn’t have variables either.

...

Let’s take a step back then. What would this function look like if we were to rip out Solidity’s syntactic sugar? This is the function interface:

function add(uint256 a, uint256 b) internal view returns (uint256 c);

Under the hood, when the add function is called, variables a and b will be pushed to the front of the EVM’s stack.

Behind them on the stack will be a jump label that corresponds to the return destination of this function. But we’re going to ignore that — it’s cheaper to directly graft the function bytecode in-line when it’s needed, instead of spending gas by messing around with jumps.

So for our first macro, MATH__ADD, we expect the first two items on the EVM stack to be the two variables that we want to add. This macro will consume these two variables, and return the result on the stack. If an integer overflow is triggered, the macro will throw an error.

Starting with the basics, if our stack state is: a, b, we need a+b. Once we have a+b, we need to compare it with either a or b. If either is greater than a+b, we have an integer overflow.

So step 1: clone b, creating stack state: b, a, b. We do this with the dup2 opcode. We then call add, which eats the first two stack variables and spits out a+b, leaving us with (a+b), b on the stack.

Next up, we need to validate that a+b >= b. One slight problem here — the Ethereum Virtual Machine doesn’t have an opcode that maps to the >= operator! We only have gt and lt opcodes to work with.

We also have the eq opcode, so we could check whether a+b > b and perform a logical OR operation with a+b = b. i.e.:

// stack state: (a+b) b
dup2 dup2 gt // stack state: ((a+b) > b) (a+b) b
dup3 dup3 eq // stack state: ((a+b) = b) ((a+b) > b) (a+b) b
or           // stack state: ((a+b) >= b) (a+b) b

But that’s expensive, we’ve more than doubled the work we’re doing! Each opcode in the above section is 3 gas so we’re chewing through 21 gas to compare two variables. This isn’t Solidity — this is Huff, and it’s time to haggle.

A cheaper alternative is to, instead, validate that (b > (a+b)) == 0 . i.e:

// stack state: (a+b) b
dup1 dup3 gt // stack state: (b > (a+b)) (a+b) b
iszero       // stack state: ((b > (a+b)) == 0) (a+b) b

Much better, only 12 gas. We can almost live with that, but we’re not done bargaining.

We can optimize this further, because once we’ve performed this step, we don’t need b on the stack anymore — we can consume it. We still need (a+b) on the stack, however, so we need a swap opcode to get b in front of (a+b) on the program stack. This won’t save us any gas up-front, but we’ll save ourselves an opcode later on in this macro.

dup1 swap2 gt // stack state: (b > (a+b)) (a+b)
iszero        // stack state: ((a+b) >= b) (a+b)
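These stack comments are easy to mis-track, so here is a small Python simulation of the same opcode sequence (a sketch, not real Huff tooling — index 0 of the list plays the role of the top of the EVM stack, and all arithmetic is mod 2**256):

```python
# Mimics `dup2 add` followed by `dup1 swap2 gt iszero`, checking that
# ((a+b) >= b) is left on top of the stack, above (a+b).

MASK = 2**256 - 1  # EVM words are 256 bits wide

def math_add_check(a, b):
    stack = [a, b]                                 # stack state: a b
    stack.insert(0, stack[1])                      # dup2   -> b a b
    stack[:2] = [(stack[0] + stack[1]) & MASK]     # add    -> (a+b) b
    stack.insert(0, stack[0])                      # dup1   -> (a+b) (a+b) b
    stack[0], stack[2] = stack[2], stack[0]        # swap2  -> b (a+b) (a+b)
    stack[:2] = [1 if stack[0] > stack[1] else 0]  # gt     -> (b > (a+b)) (a+b)
    stack[0] = 0 if stack[0] else 1                # iszero -> ((a+b) >= b) (a+b)
    return stack

assert math_add_check(2, 3) == [1, 5]        # no overflow: flag is 1
assert math_add_check(MASK, 1) == [0, 0]     # sum wraps to 0: flag is 0
```

Running the two cases above confirms the comments: a non-overflowing add leaves a 1 flag above the sum, while a wrapped add leaves a 0.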

Finally, if b > (a+b) we need to throw an error. When implementing “if <x> throw an error”, we have two options to take, because of how the jumpi instruction works.

jumpi is how the EVM performs conditional branching. jumpi will consume the top two variables on the stack. It will treat the first variable as a position in the program code, and will jump to it only if the second variable is not zero.

When throwing errors, we can test for the error condition, and if true jump to a point in the program that will throw an error.

OR we can test for the opposite of the error condition, and if true, jump to a point in the program that skips over some code that throws an error.

For example, this is how we would program option 2 for our safe add macro:

// stack state: ((a+b) >= b) (a+b)
no_overflow jumpi
   0x00 0x00 revert // throw an error
no_overflow:
// continue with algorithm

Option one, on the other hand, looks like this:

// stack state: ((a+b) >= b) (a+b)
iszero // stack state: (b > (a+b)) (a+b)
throw_error jumpi
// continue with algorithm

For our use case, option 1 is more efficient, because if we chain it with our condition test, we end up with:

dup2 add dup1 swap2 gt
iszero
iszero
throw_error jumpi

We can remove the two iszero opcodes because they cancel each other out! Leaving us with the following macro:

#define macro MATH__ADD = takes(2) returns(1) {
   // stack state: a b
   dup2 add
   // stack state: (a+b) b
   dup1 swap2 gt
   // stack state: (b > (a+b)) (a+b)
   throw_error jumpi
}

However, we have a problem! We haven’t defined our jump label throw_error , or what happens when we hit it. We can’t add it to the end of macro MATH__ADD , because then we would have to jump over the error-throwing code if the error condition was not met.

We would prefer not to have macros that use jump labels that are not declared inside the macro itself. We can solve this by passing the jump label throw_error as a template parameter. It is then the responsibility of the macro that invokes MATH__ADD to supply the correct jump label — which ideally should be a local jump label and not a global one.

Our final macro looks like this:

template <throw_error_jump_label>
#define macro MATH__ADD = takes(2) returns(1) {
   // stack state: a b
   dup2 add
   // stack state: (a+b) b
   dup1 swap2 gt
   // stack state: (b > (a+b)) (a+b)
   <throw_error_jump_label> jumpi
}

The jumpi opcode is 10 gas, and the others cost 3 gas (assuming <throw_error_jump_label> eventually will map to a PUSH opcode) — 28 gas in total.

As an aside — let’s consider the overhead created by Solidity when calling SafeMath.add(a, b).

First, values a and b are duplicated on the stack; functions don’t consume existing stack variables. Next, the return destination, which must be jumped to when the function finishes, is pushed onto the stack. Finally, the jump destination of SafeMath.add is pushed onto the stack and the jump instruction is called.

Once the function has finished its work, the jump instruction is called to jump back to the return destination. The values a, b are then assigned to local variables by identifying the location on the stack that these variables occupy, calling a swap opcode to manoeuvre the return value into the allocated stack location, followed by a pop opcode to remove the old value. This is performed twice for each variable.

In total that’s…

  • 4 dup opcodes (3 gas each)
  • 2 jump opcodes (8 gas each)
  • 2 swap opcodes (3 gas each)
  • 2 pop opcodes (2 gas each)
  • 2 jumpdest opcodes (1 gas each)

To summarise, the act of calling SafeMath.add as a Solidity function would cost 40 gas before the algorithm actually does any work.

To summarise the summary, our MATH__ADD macro does its job in almost half the gas it would cost to process a Solidity function overhead.

To summarise the summary of the summary, this is acceptable Huff code.
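The two tallies can be sanity-checked with some plain arithmetic (nothing Huff-specific here):

```python
# Solidity call overhead, per the list above:
# 4 dups, 2 jumps, 2 swaps, 2 pops, 2 jumpdests
solidity_overhead = 4 * 3 + 2 * 8 + 2 * 3 + 2 * 2 + 2 * 1
assert solidity_overhead == 40

# MATH__ADD: dup2 add dup1 swap2 gt (3 gas each), the PUSH for the
# jump label (3 gas), and jumpi (10 gas)
math_add_cost = 5 * 3 + 3 + 10
assert math_add_cost == 28
```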

{{blog_divider}}

Subtraction

Finally, we need an equivalent macro for subtraction:

template <throw_error_jump_label>
#define macro MATH__SUB = takes(2) returns(1) {
   // stack state: a b
   // calling sub will create (a-b)
   // if (b>a) we have integer underflow - throw an error
   dup1 dup3 gt
   // stack state: (b>a) a b
   <throw_error_jump_label> jumpi
   // stack state: a b
   sub
   // stack state: (a-b)
}
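As a cross-check on the stack comments, here is the same trace as a Python sketch (the helper is made up for illustration; raising an exception stands in for jumping to the error label):

```python
# Traces MATH__SUB's stack, with index 0 as the top. dup1 dup3 gt
# computes (b > a) without consuming a or b; jumpi then consumes the
# flag, and sub finally consumes a and b, mod 2**256.

MASK = 2**256 - 1  # EVM words are 256 bits wide

def math_sub(a, b):
    stack = [a, b]                                 # stack state: a b
    stack.insert(0, stack[0])                      # dup1 -> a a b
    stack.insert(0, stack[2])                      # dup3 -> b a a b
    stack[:2] = [1 if stack[0] > stack[1] else 0]  # gt   -> (b>a) a b
    if stack.pop(0):                               # <throw_error_jump_label> jumpi
        raise ValueError("integer underflow")
    stack[:2] = [(stack[0] - stack[1]) & MASK]     # sub  -> (a-b)
    return stack[0]

assert math_sub(5, 3) == 2
```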

{{blog_divider}}

Utility macros

Next up, we need to define some utility macros we’ll be using. We need a macro that validates that the transaction sender has not sent any ether to the smart contract, UTILS__NOT_PAYABLE. For our mint method, we’ll need a macro that validates that the message sender is the contract’s owner, UTILS__ONLY_OWNER:

template<error_location>
#define macro UTILS__NOT_PAYABLE = takes(0) returns(0) {
   callvalue <error_location> jumpi
}

#define macro UTILS__ONLY_OWNER = takes(0) returns(0) {
   OWNER_LOCATION() sload caller eq is_owner jumpi
       0x00 0x00 revert
   is_owner:
}

N.B. revert consumes two stack items. p x revert will take memory starting at x, and return the next p bytes as an error code. We’re not going to worry about error codes here, just throwing an error is good enough.

{{blog_divider}}

Creating the constructor

Now that we’ve set up our helper macros, we’re close to actually being able to write our smart contract methods. Congratulations on nearly reaching step 1!

To start with, we need a constructor. This is just another macro in Huff. Our constructor is very simple — we just need to record who the owner of the contract is. In Solidity it looks like this:

constructor() public {
   owner = msg.sender;
}

And in Huff it looks like this:

#define macro ERC20 = takes(0) returns(0) {
   caller OWNER_LOCATION() sstore
}

The EVM opcode caller will push the message sender’s address onto the stack.

We then push the storage slot we’ve reserved for the owner onto the stack.

Finally we call sstore, which will consume the first two stack items and store the 2nd stack item, using the value of the 1st stack item as the storage pointer.
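To make the sstore semantics concrete, here is a toy Python model of the constructor (storage modelled as a dict from slot to word; the caller address is a hypothetical stand-in for the real msg.sender):

```python
# Toy model of `caller OWNER_LOCATION() sstore`, with index 0 of the
# list as the top of the stack.

OWNER_LOCATION = 0x01
caller = 0xDEADBEEF               # hypothetical msg.sender

storage = {}
stack = []
stack.insert(0, caller)           # caller           -> msg.sender on the stack
stack.insert(0, OWNER_LOCATION)   # OWNER_LOCATION() -> pushes 0x01 on top
slot = stack.pop(0)               # sstore consumes the storage pointer first...
value = stack.pop(0)              # ...then the value to store
storage[slot] = value

assert storage == {0x01: 0xDEADBEEF}
```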

For more information about storage pointers and how smart contracts manage state — Andreas Olofsson’s Solidity workshop on storage is a great read.

{{blog_divider}}

Parsing the function signature

Are we ready to start writing our smart contract methods yet? Of course not, this is Huff. Huff is efficient, but slow.

I like to think of Huff like a trusty tortoise, if the tortoise is actually a hundred rats stitched into a tortoise suit, and each rat is a hundred maggots stitched into a rat suit.

…anyhow, we still need our function selector. But Huff doesn’t do functions; we’re going to have to create them from more basic building blocks.

One of the first pieces of code generated by the Solidity compiler is code to unpick the function signature. A function signature is a unique marker that maps to a function name.

For example, consider the Solidity function balanceOf(address tokenOwner) public view returns (uint balance); — the function signature will take the core identifying information of the function:

  • the function name
  • the input argument types

This is represented as a string, i.e. "balanceOf(address)". A keccak256 hash of this string is taken, and the most significant 4 bytes of the hash are then used as the function signature.

This online tool makes it easier to find the signature of a function.

It’s a bit of a mouthful, but it creates a (mostly) unique identifier for any given function — this allows contracts to conform to a defined interface that other smart contracts can call.

For example, if the function signature for a given function varied from contract to contract, it would be impossible to have an ‘ERC20’ token, because other smart contracts wouldn’t know how to construct a given contract’s function signature.

With that out of the way, we will find the function signature in the first 4 bytes of calldata. We need to extract this signature and then figure out what to do with it.

Solidity will create function signature hashes under the hood so you don’t have to, but Huff is a bit too primitive for that. We have to supply them directly. We can identify the ERC20 function signatures by pulling them out of Remix.

We can parse a function signature by extracting the first 4 bytes of calldata and then perform a series of if-else statements over every function hash.

We can use the bit-shift instructions introduced in Constantinople to save a bit of gas here. 0x00 calldataload will extract the first 32 bytes of calldata and push them onto the stack as a single EVM word — i.e. the 4 bytes we want are in the most significant byte positions, and we need them in the least significant positions.

We can do this with 0x00 calldataload 224 shr
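In Python terms (a sketch, not Huff), the shift looks like this — the selector value is the transfer(address,uint256) signature used in the dispatch code:

```python
# Mimics `0x00 calldataload 224 shr`: load the first 32 bytes of
# calldata as a big-endian 256-bit word, then shift right by 224 bits
# (28 bytes), leaving only the 4-byte function selector.

def selector_from_calldata(calldata: bytes) -> int:
    # calldataload zero-pads reads past the end of calldata;
    # ljust mimics that behaviour
    word = int.from_bytes(calldata[:32].ljust(32, b"\x00"), "big")
    return word >> 224

# transfer(address,uint256) calldata starts with selector 0xa9059cbb,
# followed by the two ABI-encoded arguments
calldata = bytes.fromhex("a9059cbb") + b"\x00" * 64
assert selector_from_calldata(calldata) == 0xa9059cbb
```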

We can execute ‘functions’ by comparing the calldata with a function signature, and jumping to the relevant macro if there is a match. i.e:

0x00 calldataload 224 shr // function signature
dup1 0xa9059cbb eq transfer jumpi
dup1 0x23b872dd eq transfer_from jumpi
dup1 0x70a08231 eq balance_of jumpi
dup1 0xdd62ed3e eq allowance jumpi
dup1 0x095ea7b3 eq approve jumpi
dup1 0x18160ddd eq total_supply jumpi
dup1 0x40c10f19 eq mint jumpi
// If we reach this point, we've reached the fallback function.
// However we don't have anything inside our fallback function!
// We can just exit instead, after checking that callvalue is zero:
UTILS__NOT_PAYABLE<error_location>()
0x00 0x00 return

We want the scope of this macro to be constrained to identifying where to jump — the location of these jump labels is elsewhere in the code. Again, we use template parameters to ensure that jump labels are only explicitly called inside the macros that they are defined in.

Our final macro looks like this:

template <transfer, transfer_from, balance_of, allowance, approve, total_supply, mint, error_location>
#define macro ERC20__FUNCTION_SIGNATURE = takes(0) returns(0) {
   0x00 calldataload 224 shr // function signature
   dup1 0xa9059cbb eq <transfer> jumpi
   dup1 0x23b872dd eq <transfer_from> jumpi
   dup1 0x70a08231 eq <balance_of> jumpi
   dup1 0xdd62ed3e eq <allowance> jumpi
   dup1 0x095ea7b3 eq <approve> jumpi
   dup1 0x18160ddd eq <total_supply> jumpi
   dup1 0x40c10f19 eq <mint> jumpi
   UTILS__NOT_PAYABLE<error_location>()
   0x00 0x00 return
}

{{blog_divider}}

Setting up boilerplate contract code

Finally, we have enough to write the skeletal structure of our main function — the entry-point when our smart contract is called. We represent each method with a macro, which we will need to implement.

#define macro ERC20__MAIN = takes(0) returns(0) {
   ERC20__FUNCTION_SIGNATURE<
       transfer,
       transfer_from,
       balance_of,
       allowance,
       approve,
       total_supply,
       mint,
       throw_error
>()

   transfer:
       ERC20__TRANSFER<throw_error>()
   transfer_from:
       ERC20__TRANSFER_FROM<throw_error>()
   balance_of:
       ERC20__BALANCE_OF<throw_error>()
   allowance:
       ERC20__ALLOWANCE<throw_error>()
   approve:
       ERC20__APPROVE<throw_error>()
   total_supply:
       ERC20__TOTAL_SUPPLY<throw_error>()
   mint:
       ERC20__MINT<throw_error>()
   throw_error:
       0x00 0x00 revert
}

…Tadaa.

Finally we’ve set up our pre-flight macros and boilerplate code and we’re ready to start implementing methods! But that’s enough for today.

In part 2 we’ll implement the ERC20 methods as glistening Huff macros, run some benchmarks against a Solidity implementation and question whether any of this was worth the effort.

Cheers,

Zac.

Click here for part 2

Read more
Aztec Network
Aztec Network
10 Mar
xx min read

Alpha Network Security: What to Expect

Aztec’s Approach to Security

Aztec is novel code — the bleeding edge of cryptography and blockchain technology. As the first decentralized L2 on Ethereum, Aztec is powered by a global network of sequencers and provers. Decentralization introduces some novel challenges in how security is addressed; there is no centralized sequencer to pause or a centralized entity who has power over the network. The rollout of the network reflects this, with distinct goals at each phase.

Ignition

Validate governance and decentralized block building work as intended on Ethereum Mainnet. 

Alpha

Enable transactions at 1TPS, ~6s block times and improve the security of the network via continual ongoing audits and bug bounty. New releases of the alpha network are expected regularly to address any security vulnerabilities. Please note, every alpha deployment is distinct and state is not migrated between Alpha releases. 

Beta

We will transition to Beta once the network scales to >10 TPS, with reduced block times while ensuring 99.9% uptime. Additionally, the transition requires no critical bugs disclosed via bug bounty in 3 months. State migrations across network releases can be considered.

TL;DR: The roadmap from Ignition to Alpha to Beta is designed to reflect the core team's growing confidence in the network's security.

This phased approach lets us balance ecosystem growth while building security confidence and steadily expanding the community of researchers and tools working to validate the network’s security, soundness and correctness.

Ultimately, time in production without an exploit is the most reliable indicator of how secure a codebase is.

At the start of Alpha, that confidence is still developing. The core team believes the network is secure enough to support early ecosystem use cases and handle small amounts of value. However this is experimental alpha software and users should not deposit more value than they are willing to lose. Apps may choose to limit deposit amounts to mitigate risk for users.

Audits are ongoing throughout Alpha, with the goal to achieve dual external audits across the entire codebase.

The table below shows current security and audit coverage at the time of writing.

The main bug bounty for the network is not yet live, other than for the non-cryptographic L1 smart contracts as audits are ongoing. We encourage security researchers to responsibly disclose findings in line with our security policy .

As the audits are still ongoing, we expect to discover vulnerabilities in various components. The fixes will be packaged and distributed with the “v5” release.

If we discover a Critical vulnerability in “v4” in accordance with the following severity matrix, which would require the change of verification keys to fix, we will first alert the portal operators to pause deposits and then post a message on the forum, stating that the rollup has a vulnerability.

Security of the Aztec Virtual Machine (AVM)

Aztec uses a hybrid execution model, handling private and public execution separately — and the security considerations differ between them.

As per the audit table above, it is clear that the Aztec Virtual Machine (AVM) has not yet completed its internal and external audits. This is intentional as all AVM execution is public, which allows it to benefit from a “Training Wheel” — the validator re-execution committee.

Every 72 seconds, a collection of newly proposed Aztec blocks are bundled into a "checkpoint" and submitted to L1. With each proposed checkpoint, a committee of 48 staking validators randomly selected from the entire set of validators (presently 3,959) re-execute all txs of all blocks in the checkpoint, and attest to the resulting state roots. 33 out of 48 attestations are required for the checkpoint proposal to be considered valid. The committee and the eventual zk proof must agree on the resultant state root for a checkpoint to be added to the proven chain. As a result, an attacker must control 33/48 of any given committee to exploit any bug in the AVM.

The only time the re-execution committee is not active is during the escape hatch, where the cost to propose a block is set at a level which attempts to quantify the security of the execution training wheel. For this version of the alpha network, this is set a 332M AZTEC, a figure intended to approximate the economic protection the committee normally provides, equivalent to roughly 19% of the un-staked circulating supply at the time of writing. Since the Aztec Foundation holds a significant portion of that supply, the effective threshold is considerably higher in practice.

Quantifying the cost of committee takeover attacks

A key design assumption is that just-in-time bribery of the sequencer committee is impractical and the only ****realistic attack vector is stake acquisition, not bribery.

Assuming a sequencer set size of 4,000 and a committee that rotates each epoch (~38.4mins) from the full sequencer set using a Fisher-Yates shuffle seeded by L1 RANDAO we can see the probability and amount of stake required in the table below.

To achieve a 99% probability of controlling at least one supermajority within 3 days, an attacker would need to control approximately 55.4% of the validator set - roughly 2,215 sequencers representing 443M AZTEC in stake. Assuming an exploit is successful their stake would likely de-value by 70-80%, resulting in an expected economic loss of approximately 332M AZTEC.

To achieve only a 0.5% probability of controlling at least one supermajority within 6 months, an attacker would need to control approximately 33.88% of the validator set.

What does this means for builders?

The practical effect of this training wheel is that the network can exist while there are known security issues with the AVM, as long as the value an attacker would gain from any potential exploit is less than the cost of acquiring 332M AZTEC.

The training wheel allows security researchers to spend more time on the private execution paths that don’t benefit from the training wheel and for the network to be deployed in an alpha version where security researchers can attempt to find additional AVM exploits.

In concrete terms, the training wheel means the Alpha network can reasonably secure value up to around 332M AZTEC (~$6.5M at the time of writing).

Ecosystem builders should keep the above limits in mind, particularly when designing portal contracts that bridge funds into the network.

Portals are the main way value will be bridged into the alpha network, and as a result are also the main target for any exploits. The design of portals can allow the network to secure far higher value. If a portal secures > 332M AZTEC and allows all of its funds to be taken in one withdrawal without any rate limits, delays or pause functionality then it is a target for an AVM exploit attack.

If a portal implements a maximum withdrawal per user, pause functionality or delays for larger withdrawals it becomes harder for an attacker to steal a large quantum of funds in one go.

Conclusion

The Aztec Alpha code is ready to go. The next step is for someone in the community to submit a governance proposal and for the network to vote on enabling transactions. This is decentralization working as intended.

Once live, Alpha will run at 1 TPS with roughly 6 second block times. Audits are still ongoing across several components, so keep deposits small and only put in what you're comfortable losing.

On the security side, a 48-validator re-execution committee provides the main protection during Alpha, requiring 33/48 consensus on every 72-second checkpoint. Successfully attacking the AVM would require controlling roughly 55% of the validator set at a cost of around 332M AZTEC, putting the practical security ceiling at approximately $6.5M.

Alpha is about growing the ecosystem, expanding the security of the network, and accumulating the one thing no audit can shortcut: time in production. This is the network maturing in exactly the way it was designed to as it progresses toward Beta.

Aztec Network
Aztec Network
4 Mar
xx min read

Aztec Network: Roadmap Update

The Ignition Chain launched late last year, as the first fully decentralized L2 on Ethereum– a huge milestone for decentralized networks. The team has reinvented what true programmable privacy means, building the execution model from the ground up— combining the programmability of Ethereum with the privacy of Zcash in a single execution environment.

Since then, the network has been running with zero downtime with 3,500+ sequencers and 50+ provers across five continents. With the infrastructure now in place, the network is fully in the hands of the community, and the culmination of the past 8 years of work is now converging. 

Major upgrades have landed across four tracks: the execution layer, the proving system, the programming language, Noir, and the decentralization stack. Together, these milestones deliver on Aztec’s original promise, a system where developers can write fully programmable smart contracts with customizable privacy.

The infrastructure is in place. The code is ready. And we’re ready to ship. 

What’s New on the Roadmap?

The Execution Layer

The execution layer delivers on Aztec's core promise: fully programmable, privacy-preserving smart contracts on Ethereum. 

A complete dual state model is now in place–with both private and public state. Private functions execute client-side in the Private Execution Environment (PXE), running directly in the user's browser and generating zero-knowledge proofs locally, so that private data never leaves the original device. Public functions execute on the Aztec Virtual Machine (AVM) on the network side. 

Aztec.js is now live, giving developers a full SDK for managing accounts and interacting with contracts. Native account abstraction has been implemented, meaning every account is a smart contract with customizable authentication rules. Note discovery has been solved through a tagging mechanism, allowing recipients to efficiently query for relevant notes without downloading and decrypting everything on the network.

Contract standards are underway, with the Wonderland team delivering AIP-20 for tokens and AIP-721 for NFTs, along with escrow contracts and logic libraries, providing the production-ready building blocks for the Alpha Network. 

The Proving System

The proving system is what makes Aztec's privacy guarantees real, and it has deep roots.

In 2019, Aztec's cofounder Zac Williamson and Chief Scientist Ariel Gabizon introduced PLONK, which became one of the most widely used proving systems in zero-knowledge cryptography. Since then, Aztec's cryptographic backend, Barretenberg, has evolved through multiple generations, each facilitating faster, lighter, and more efficient proving than the last. The latest innovation, CHONK (Client-side Highly Optimized ploNK), is purpose-built for proving on phones and browsers and is what powers proof generation for the Alpha Network.

CHONK is a major leap forward for the user experience, dramatically reducing the memory and time required to generate proofs on consumer devices. It leverages best-in-class circuit primitives, a HyperNova-style folding scheme for efficiently processing chains of private function calls, and Goblin, a hyper-efficient purpose-built recursion acceleration scheme. The result is that private transactions can be proven on the devices people actually use, not just powerful servers.

This matters because privacy on Aztec means proofs are generated on the user's own device, keeping private data private. If proving is too slow or too resource-intensive, privacy becomes impractical. CHONK makes it practical.

Decentralization

Decentralization is what makes Aztec's privacy guarantees credible. Without it, a central operator could censor transactions, introduce backdoors, or compromise user privacy at will. 

Aztec addressed this by hardcoding decentralized sequencing, proving, and governance directly into the base protocol. The Ignition Chain has proven the stability of this consensus layer, maintaining zero downtime with over 3,500 sequencers and 50+ provers running across five continents. Aztec Labs and the Aztec Foundation run no sequencers and do not participate in governance.

Noir

Noir 1.0 is nearing completion, bringing a stable, production-grade language within reach. Aztec's own protocol circuits have been entirely rewritten in Noir, meaning the language is already battle-tested at the deepest layer of the stack. 

Internal and external audits of the compiler and toolchain are progressing in parallel, and security tooling, including fuzzers and bytecode parsers, is nearly finished. A stable, audited language means application teams can build on Alpha with confidence that the foundation beneath them won't shift.

What Comes Next

The code for Alpha Network, a functionally complete and raw version of the network, is ready.

The Alpha Network brings fully programmable, privacy-preserving smart contracts to Ethereum for the first time. It's the culmination of years of parallel work across the four tracks in the Aztec Roadmap. Together, they enable efficient client-side proofs that power customizable smart contracts, letting users choose exactly what stays private and what goes public. 

No other project in the space is close to shipping this. 

The code is written. The network is running. All the pieces are in place. The governance proposal is now live on the forum and open for discussion. Read through it, ask questions, poke holes, and help shape the path forward. 

Once the community is aligned, the proposal moves to a vote. This is how a decentralized network upgrades. Not by a team pushing a button, but by the people running it.

Programmable privacy will unlock a renaissance in onchain adoption. Real-world applications are coming and institutions are paying attention. Alpha represents the culmination of eight years of intense work to deliver privacy on Ethereum. 

Now it needs to be battle-tested in the wild. 

View the updated product roadmap here and join us on Thursday, March 5th, at 3 pm UTC on X to hear more about the most recent updates to our product roadmap.

Aztec Network
30 Jan

Aztec Ignition Chain Update

In November 2025, the Aztec Ignition Chain went live as the first decentralized L2 on Ethereum. Since launch, more than 185 operators across 5 continents have joined the network, with 3,400+ sequencers now running. The Ignition Chain is the backbone of the Aztec Network; true end-to-end programmable privacy is only possible when the underlying network is decentralized and permissionless. 

Until now, only participants from the $AZTEC token sale have been able to stake and earn block rewards ahead of Aztec's upcoming Token Generation Event (TGE), but that's about to change. Keep reading for an update on the state of the network and learn how you can spin up your own sequencer or start delegating your tokens to stake once TGE goes live.

Block Production 

The Ignition Chain launched to prove the stability of the consensus layer ahead of the execution environment, which will enable privacy-preserving smart contracts. The network has remained healthy, crossing a block height of 75k blocks with zero downtime. That includes navigating Ethereum's major Fusaka upgrade in December 2025 and a governance upgrade to increase the queue speed for joining the sequencer set.

Source: AztecBlocks

Block Rewards

Over 30M $AZTEC tokens have been distributed to sequencers and provers to date. Block rewards go out every epoch (every 32 blocks), with 70% going to sequencers and 30% going to provers for generating block proofs.

If you don't want to run your own node, you can delegate your stake and share in block rewards through the staking dashboard. Note that fractional staking is not currently supported, so you'll need 200k $AZTEC tokens to stake.
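As a quick sanity check of the arithmetic above, the 70/30 split per 32-block epoch can be sketched as follows. The epoch pool figure in the example is hypothetical; actual emissions are determined by the protocol, and amounts would be in the token's smallest unit.

```python
# Sketch of the per-epoch reward split described above.
# The emission figure used in the example is made up for illustration.
EPOCH_BLOCKS = 32       # rewards are distributed every 32 blocks
SEQUENCER_BPS = 7_000   # 70%, expressed in basis points
PROVER_BPS = 3_000      # 30%, expressed in basis points

def split_epoch_reward(epoch_reward: int) -> tuple[int, int]:
    """Split one epoch's reward pool between sequencers and provers.

    Integer (floor) division avoids floating-point rounding; any dust
    from rounding simply stays unallocated in this sketch.
    """
    to_sequencers = epoch_reward * SEQUENCER_BPS // 10_000
    to_provers = epoch_reward * PROVER_BPS // 10_000
    return to_sequencers, to_provers

# A hypothetical 1,000-unit epoch pool splits 700 / 300:
print(split_epoch_reward(1_000))  # → (700, 300)
```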

Global Participation  

The Ignition Chain launched as a decentralized network from day one. The Aztec Labs and Aztec Foundation teams are not running any sequencers on the network or participating in governance. This is your network.

Anyone who purchased 200k+ tokens in the token sale can stake or delegate their tokens on the staking dashboard. Over 180 operators are now running sequencers, with more joining daily as they enter the sequencer set from the queue. And it's not just sequencers: 50+ provers have joined the permissionless, decentralized prover network to generate block proofs.

These operators span the globe, from solo stakers to data centers, from Australia to Portugal.

Source: Nethermind 

Node Performance

Participating sequencers have maintained a 99%+ attestation rate since network launch, demonstrating strong commitment and network health. Top performers include P2P.org, Nethermind, and ZKV. You can see all block activity and staker performance on the Dashtec dashboard. 

How to Join the Network 

On January 26th, 2026, the community passed a governance proposal for TGE. This makes tokens tradable and unlocks the AZTEC/ETH Uniswap pool as early as February 11, 2026. Once that happens, anyone with 200k $AZTEC tokens can run a sequencer or delegate their stake to participate in block rewards.

Here's what you need to run a validator node:

  • CPU: 8 cores
  • RAM: 16 GB
  • Storage: 1 TB NVMe SSD
  • Bandwidth: 25 Mbps

These are accessible specs for most solo stakers. If you've run an Ethereum validator before, you're already well-equipped.

To get started, head to the Aztec docs for step-by-step instructions on setting up your node. You can also join the Discord to connect with other operators, ask questions, and get support from the community. Whether you run your own hardware or delegate to an experienced operator, you're helping build the infrastructure for a privacy-preserving future.

Solo stakers are the beating heart of the Aztec Network. Welcome aboard.

Aztec Network
22 Jan

The $AZTEC TGE Vote: What You Need to Know

The TL;DR:

  • The $AZTEC token sale, conducted entirely onchain, concluded on December 6, 2025, with ~50% of the capital committed coming from the community.
  • Immediately following the sale, tokens could be withdrawn from the sale website into personal Token Vault smart contracts on the Ethereum mainnet.
  • The proposal for TGE (Token Generation Event) is now live, and sequencers can start signaling to bring the proposal to a vote to unlock these tokens and make them tradable.
  • Anyone who participated in the token sale can participate in the TGE vote. 

The $AZTEC token sale was the first of its kind, conducted entirely onchain, with ~50% of the capital committed coming from the community. Running the sale fully onchain ensures that you have control over your tokens from day one. As we approach the TGE vote, all token sale participants will be able to vote to unlock their tokens and make them tradable.

What Is This Vote About?

Immediately following the $AZTEC token sale, tokens could be withdrawn from the sale website into your personal Token Vault smart contract on Ethereum mainnet. Right now, token holders are not able to transfer or trade these tokens.

The TGE is a governance vote that decides when to unlock these tokens. If the vote passes, three things happen:

  1. Tokens purchased in the token sale become fully transferable 
  2. Trading goes live for the Uniswap v4 pool
  3. Block rewards become transferable for sequencers

This decision is entirely in the hands of $AZTEC token holders. The Aztec Labs and Aztec Foundation teams and investors cannot participate in staking or governance for 12 months, which includes the TGE governance proposal. Team and investor tokens will also remain locked for 1 year and then slowly unlock over the following 2 years.

The proposal for TGE is now live, and sequencers are already signaling to bring the proposal to a vote. Once enough sequencers have signaled, anyone who participated in the token sale will be able to connect their Token Vault contract to the governance dashboard to vote. Note that voting requires you to stake your tokens and then follow the regular 15-day process to withdraw them.

If the vote passes, TGE can go live as early as February 12, 2026, at 7am UTC. TGE can be triggered by the first person to call the execute function on the proposal after that time.

How Do I Participate?

If you participated in the token sale, you don't have to do anything if you prefer not to vote. If the vote passes, your tokens will become available to trade at TGE. If you want to vote, the process happens in two phases:

Phase 1: Sequencer Signaling

Sequencers kick things off by signaling their support. Once 600 out of 1,000 sequencers signal, the proposal moves to a community vote.

Phase 2: Community Voting

After sequencers create the proposal, all Token Vault holders can vote using the governance dashboard. Please note that anyone who wants to vote must stake their tokens, locking them for at least 15 days to ensure the proposal can be executed before the voter exits. Once signaling is complete, the timeline is as follows:

  • Days 1–3: Waiting period 
  • Days 4–10: Voting period (7 days to cast your vote)
  • Days 11–17: Execution delay
  • Days 18–24: Grace period to execute the proposal
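The timeline above can be sketched as a small date calculation, where day 1 is the first day after signaling completes. The signaling completion date used in the example is hypothetical:

```python
from datetime import date, timedelta

def tge_timeline(signaling_complete: date) -> dict[str, tuple[date, date]]:
    """Map each post-signaling phase to its (start, end) date window.

    Day 1 is the first day after `signaling_complete`, matching the
    day ranges listed in the proposal timeline above.
    """
    phases = {
        "waiting": (1, 3),            # days 1–3
        "voting": (4, 10),            # days 4–10 (7 days to cast a vote)
        "execution_delay": (11, 17),  # days 11–17
        "grace": (18, 24),            # days 18–24
    }
    return {
        name: (signaling_complete + timedelta(days=start),
               signaling_complete + timedelta(days=end))
        for name, (start, end) in phases.items()
    }

# Hypothetical example: signaling completes on 2026-01-26.
for name, (start, end) in tge_timeline(date(2026, 1, 26)).items():
    print(f"{name}: {start} to {end}")
```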

Vote Requirements:

  • At least 100M tokens must participate in the vote. This is less than 10% of the tokens sold in the token sale.  
  • 66% of votes must be in favor for the vote to pass.
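The two requirements above reduce to a simple check: quorum first, then the approval threshold. The vote counts in the example are made-up illustrations, not real data:

```python
def vote_passes(votes_for: int, votes_against: int,
                quorum: int = 100_000_000,
                threshold_pct: float = 66.0) -> bool:
    """Apply the two stated requirements: at least 100M tokens must
    participate, and at least 66% of votes cast must be in favor."""
    total = votes_for + votes_against
    if total < quorum:
        return False  # quorum not met
    # Compare votes_for / total against the threshold without dividing.
    return votes_for * 100 >= total * threshold_pct

# Hypothetical counts: 105M tokens participate, ~66.7% in favor.
print(vote_passes(70_000_000, 35_000_000))  # → True
```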

Frequently Asked Questions

Do I need to participate in the vote? No. If you don't vote, your tokens will become available for trading when TGE goes live. 

Can I vote if I have less than 200,000 tokens? Yes! Anyone who participated in the token sale can participate in the TGE vote. You'll need to connect your wallet to the governance dashboard to vote. 

Is there a withdrawal period for my tokens after I vote? Yes. Voters can initiate a withdrawal of their tokens immediately after voting, but a standard 15-day withdrawal period applies to ensure the vote is executed before voters can exit.

If I have over 200,000 tokens is additional action required to make my tokens tradable after TGE? Yes. If you purchased over 200,000 $AZTEC tokens, you will need to stake your tokens before they become tradable. 

What if the vote fails? A new proposal can be submitted. Your tokens remain locked until a successful vote is completed, or the fallback date of November 13, 2026, whichever happens first.

I'm a Genesis sequencer. Does this apply to me? Genesis sequencer tokens cannot be unlocked early. You must wait until November 13, 2026, to withdraw. However, you can still influence the vote by signaling, earn block rewards, and benefit from trading being enabled.

Where to Learn More

This overview covers the essentials, but the full technical proposal includes contract addresses, code details, and step-by-step instructions for sequencers and advanced users. 

Read the complete proposal on the Aztec Forum and join us for the Privacy Rabbit Hole on Discord happening this Thursday, January 22, 2026, at 15:00 UTC. 

Follow Aztec on X to stay up to date on the latest developments.