Aztec Network
7 Feb

From zero to nowhere: smart contract programming in Huff (1/4)

In this series, learn smart contract programming in Huff directly from Zac.

Written by
Zac Williamson

Hello there!

I want to write about a piece of runoff that has oozed out of the primordial slop on the AZTEC factory floor: …Huff.

Huff is an Ethereum smart contract programming ‘language’ that was developed while writing weierstrudel, an elliptic curve arithmetic library for validating zero-knowledge proofs.

Elliptic curve arithmetic is computationally expensive, so developing an efficient implementation was paramount, and not something that could be done in native Solidity.

It wasn’t even something that could be done in Solidity inline assembly, so we made Huff.

To call Huff a language is being generous — it’s about as close as one can get to EVM assembly code, with a few bits of syntactic sugar bolted on.

Huff programs are composed of macros, where each macro in turn is composed of a combination of more macros and EVM assembly opcodes. When a macro is invoked, template parameters can be supplied to the macro, which themselves are macros.
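For example, here is a minimal, hypothetical sketch of what that composition looks like. SQUARE and SQUARE_PLUS_ONE are not part of the contract we’ll build; they just illustrate macros invoking macros alongside raw opcodes:

#define macro SQUARE = takes(1) returns(1) {
   dup1 mul   // stack state: (x*x)
}

#define macro SQUARE_PLUS_ONE = takes(1) returns(1) {
   SQUARE()   // macros can invoke other macros...
   0x01 add   // ...mixed freely with raw EVM opcodes
}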

Unlike a LISP-like language or something with sensible semantics, Huff doesn’t really have expressions either. That would require things like knowing how many variables a Huff macro adds to the stack at compile time, or expecting a Huff macro to not occasionally jump into the middle of another macro. Or assuming a Huff macro won’t completely mangle the program counter by mapping variables to jump destinations in a lookup table. You know, completely unreasonable expectations.

Huff doesn’t have functions. Huff doesn’t even have variables, only macros.

Huff is good for one thing, though, which is writing extremely gas-optimised code. The kind of code where the overhead of the jump instruction in a function call is too expensive.

The kind of code where an extra swap instruction for a variable assignment is an outrageous luxury. At the very least, it does this quite well. The weierstrudel library performs elliptic curve multiple-scalar multiplication for less gas than Ethereum’s “precompile” smart contract. An analogous Solidity smart contract is ~30–100 times more expensive.

It also enables complicated algorithms to be broken down into constituent macros that can be rigorously tested, which is useful.

Huff is also a game, played on a chess-board. One player has chess pieces, the other draughts pieces. The rules don’t make any sense, the game is deliberately confusing and it is an almost mathematical certainty that the draughts player will lose. You won’t find references to this game online because it was “invented” in a pub by some colleagues of mine in a past career and promptly forgotten about for being a terrible game.

I found that writing Huff macros invoked similar emotions to playing Huff, hence the name.

{{blog_divider}}

Programming in Huff

Given the absence of any documentation, I figured it might be illuminating to write a short series on how to write a smart contract in Huff. You know, if you’re looking for time to kill and you’ve run out of more interesting things to do, like watching paint dry or rubbing salt in your eyes.

If you want to investigate further, you’ll find Huff on GitHub. For some demonstration Huff code, the weierstrudel smart contract is written entirely in Huff.

{{blog_divider}}

“Hello World” — an ERC20 implementation in Huff

Picture the scene — the year is 2020 and the world is reeling from a new global financial crisis. With the collapse of the monetary base, capital flees to the only store of stable value that can be found — first-generation Crypto-Kitties. Amidst this global carnage, Ethereum has failed to achieve its scaling milestones and soaring gas fees cripple the network. It is a world on the brink, where one single edict is etched into the minds of citizens from San Francisco to Shanghai: the tokens must flow…or else.

This is truly the darkest timeline, and in the darkest timeline, we code in Huff.

{{blog_divider}}

Finding our feet

We’re going to write an ERC20 token contract. But not just any ERC20 contract— we’re going to write an ERC20 contract where every opcode must justify its place, or be scourged from existence…

Let’s start by looking at the Solidity interface for a ‘mintable’ token — there’s not much point in an ERC20 contract if it doesn’t have any tokens, after all.

function totalSupply() public view returns (uint);
function balanceOf(address tokenOwner) public view returns (uint);
function allowance(address tokenOwner, address spender) public view returns (uint);
function transfer(address to, uint tokens) public returns (bool);
function approve(address spender, uint tokens) public returns (bool);
function transferFrom(address from, address to, uint tokens) public returns (bool);
function mint(address to, uint tokens) public returns (bool);

event Transfer(address indexed from, address indexed to, uint tokens);
event Approval(address indexed tokenOwner, address indexed spender, uint tokens);

That doesn’t look so bad, how hard can this be?

{{blog_divider}}

Bootstrapping

Before we start writing the main subroutines, remember that Huff doesn’t do variables. But there’s a macro for that! Specifically, we need to be able to identify storage locations with something that resembles a variable.

Let’s create some macros that refer to the storage locations where we’re going to store the smart contract’s state. For Solidity smart contracts, the compiler will (under the hood) assign every storage variable to a storage pointer, and we’re doing the same here.

First up, the storage pointer that maps to token balances:

#define macro BALANCE_LOCATION = takes(0) returns(1) {
   0x00
}

The takes field refers to how many EVM stack items this macro consumes. returns refers to how many EVM stack items this macro will add onto the stack.

Finally, the macro code is just 0x00. This will push 0 onto the EVM stack; we’re associating balances with the first storage ‘slot’ in our smart contract.

We also need a storage location for the contract’s owner:

#define macro OWNER_LOCATION = takes(0) returns(1) {
   0x01
}
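As a quick illustration (not part of the contract yet), reading the owner out of storage combines a location macro with the sload opcode, which consumes a storage key and pushes the value held at that slot:

OWNER_LOCATION() sload // stack state: owner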

{{blog_divider}}

Implementing SafeMath in Huff

SafeMath is a Solidity library that performs arithmetic operations whilst guarding against integer overflow and underflow.

We need the same functionality in Huff. After all, we wouldn’t want to write unsafe Huff code. That would be irrational. ERC20 is a simple contract, so we will only need addition and subtraction capabilities.

Let’s consider our first macro, MATH__ADD. Normally, this would be a function with two variables as input arguments. But Huff doesn’t have functions.

Huff doesn’t have variables either.

...

Let’s take a step back then. What would this function look like if we were to rip out Solidity’s syntactic sugar? This is the function interface:

function add(uint256 a, uint256 b) internal pure returns (uint256 c);

Under the hood, when the add function is called, variables a and b will be pushed to the front of the EVM’s stack.

Behind them on the stack will be a jump label that corresponds to the return destination of this function. But we’re going to ignore that — it’s cheaper to directly graft the function bytecode in-line where it’s needed, instead of spending gas by messing around with jumps.

So for our first macro, MATH__ADD, we expect the two variables that we want to add to be at the front of the EVM stack. This macro will consume these two variables and return the result on the stack. If an integer overflow is triggered, the macro will throw an error.

Starting with the basics, if our stack state is: a, b, we need a+b. Once we have a+b, we need to compare it with either a or b. If either is greater than a+b, we have an integer overflow.

So step 1: clone b, creating stack state: b, a, b. We do this with the dup2 opcode. We then call add, which eats the first two stack variables and spits out a+b, leaving us with (a+b), b on the stack.
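In Huff, that first step is simply:

dup2 add // stack state: (a+b) b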

Next up, we need to validate that a+b >= b. One slight problem here — the Ethereum Virtual Machine doesn’t have an opcode that maps to the >= operator! We only have gt and lt opcodes to work with.

We also have the eq opcode, so we could check whether a+b > b and perform a logical OR operation with a+b = b. i.e.:

// stack state: (a+b) b
dup2 dup2 gt // stack state: ((a+b) > b) (a+b) b
dup3 dup3 eq // stack state: ((a+b) = b) ((a+b) > b) (a+b) b
or           // stack state: ((a+b) >= b) (a+b) b

But that’s expensive, we’ve more than doubled the work we’re doing! Each opcode in the above section is 3 gas so we’re chewing through 21 gas to compare two variables. This isn’t Solidity — this is Huff, and it’s time to haggle.

A cheaper alternative is to, instead, validate that (b > (a+b)) == 0. i.e.:

// stack state: (a+b) b
dup1 dup3 gt // stack state: (b > (a+b)) (a+b) b
iszero       // stack state: ((b > (a+b)) == 0) (a+b) b

Much better, only 12 gas. We can almost live with that, but we’re not done bargaining.

We can optimize this further, because once we’ve performed this step, we don’t need b on the stack anymore — we can consume it. We still need (a+b) on the stack however, so we need a swap opcode to get b in front of (a+b) on the program stack. This won’t save us any gas up-front, but we’ll save ourselves an opcode later on in this macro.

dup1 swap2 gt // stack state: (b > (a+b)) (a+b)
iszero        // stack state: ((a+b) >= b) (a+b)

Finally, if b > (a+b) we need to throw an error. When implementing “if <x> throw an error”, we have two options to take, because of how the jumpi instruction works.

jumpi is how the EVM performs conditional branching. jumpi will consume the top two variables on the stack. It treats the first variable as a jump destination (a position in the program’s code), and will jump to it only if the second variable is not zero.

When throwing errors, we can test for the error condition, and if true jump to a point in the program that will throw an error.

OR we can test for the opposite of the error condition, and if true, jump to a point in the program that skips over some code that throws an error.

For example, this is how we would program option 2 for our safe add macro:

// stack state: ((a+b) >= b) (a+b)
no_overflow jumpi
   0x00 0x00 revert // throw an error
no_overflow:
// continue with algorithm

Option one, on the other hand, looks like this:

// stack state: ((a+b) >= b) (a+b)
iszero // stack state: (b > (a+b)) (a+b)
throw_error jumpi
// continue with algorithm

For our use case, option one is more efficient, because if we chain option one with our condition test, we end up with:

dup2 add dup1 swap2 gt
iszero
iszero
throw_error jumpi

We can remove the two iszero opcodes because they cancel each other out! Leaving us with the following macro:

#define macro MATH__ADD = takes(2) returns(1) {
   // stack state: a b
   dup2 add
   // stack state: (a+b) b
   dup1 swap2 gt
   // stack state: (b > (a+b)) (a+b)
   throw_error jumpi
}

However, we have a problem! We haven’t defined our jump label throw_error, or what happens when we hit it. We can’t add it to the end of macro MATH__ADD, because then we would have to jump over the error-throwing code if the error condition was not met.

We would prefer not to have macros that use jump labels that are not declared inside the macro itself. We can solve this by passing the jump label throw_error as a template parameter. It is then the responsibility of the macro that invokes MATH__ADD to supply the correct jump label — which ideally should be a local jump label and not a global one.

Our final macro looks like this:

template <throw_error_jump_label>
#define macro MATH__ADD = takes(2) returns(1) {
   // stack state: a b
   dup2 add
   // stack state: (a+b) b
   dup1 swap2 gt
   // stack state: (b > (a+b)) (a+b)
   <throw_error_jump_label> jumpi
}

The jumpi opcode is 10 gas, and the others cost 3 gas (assuming <throw_error_jump_label> eventually maps to a push opcode) — in total 28 gas.

As an aside, let’s consider the overhead created by Solidity when calling SafeMath.add(a, b). First, values a and b are duplicated on the stack; functions don’t consume existing stack variables. Next, the return destination, which must be jumped to when the function finishes, is pushed onto the stack. Finally, the jump destination of SafeMath.add is pushed onto the stack and the jump instruction is called.

Once the function has finished its work, the jump instruction is called to jump back to the return destination. The values a, b are then assigned to local variables by identifying the location on the stack that these variables occupy, calling a swap opcode to manoeuvre the return value into the allocated stack location, followed by a pop opcode to remove the old value. This is performed once for each variable.

In total that’s…

  • 4 dup opcodes (3 gas each)
  • 2 jump opcodes (8 gas each)
  • 2 swap opcodes (3 gas each)
  • 2 pop opcodes (2 gas each)
  • 2 jumpdest opcodes (1 gas each)

To summarise, the act of calling SafeMath.add as a Solidity function would cost 40 gas before the algorithm actually does any work.

To summarise the summary, our MATH__ADD macro does its entire job for less gas than the mere overhead of calling an equivalent Solidity function.

To summarise the summary of the summary, this is acceptable Huff code.

{{blog_divider}}

Subtraction

Finally, we need an equivalent macro for subtraction:

template <throw_error_jump_label>
#define macro MATH__SUB = takes(2) returns(1) {
   // stack state: a b
   // calling sub will create (a-b)
   // if (b>a) we have integer underflow - throw an error
   dup1 dup3 gt
   // stack state: (b>a) a b
   <throw_error_jump_label> jumpi
   // stack state: a b
   sub
   // stack state: (a-b)
}
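To see how these macros are meant to be invoked, here is a small hypothetical sketch (ADD_EXAMPLE is not part of our ERC20 contract); the invoking macro supplies a local jump label as the template parameter:

#define macro ADD_EXAMPLE = takes(2) returns(1) {
   MATH__ADD<add_error>()
   // stack state: (a+b)
   no_error jump
   add_error:
       0x00 0x00 revert
   no_error:
}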

{{blog_divider}}

Utility macros

Next up, we need to define some utility macros we’ll be using. We need a macro that validates that the transaction sender has not sent any ether to the smart contract, UTILS__NOT_PAYABLE. For our mint method, we’ll need a macro that validates that the message sender is the contract’s owner, UTILS__ONLY_OWNER:

template<error_location>
#define macro UTILS__NOT_PAYABLE = takes(0) returns(0) {
   callvalue <error_location> jumpi
}

#define macro UTILS__ONLY_OWNER = takes(0) returns(0) {
   // load the owner, compare with the caller, and skip the revert on a match
   OWNER_LOCATION() sload caller eq is_owner jumpi
       0x00 0x00 revert
   is_owner:
}

N.B. revert consumes two stack items. p x revert will take memory starting at x, and return the next p bytes as an error code. We’re not going to worry about error codes here; just throwing an error is good enough.
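For example, a hypothetical revert that returns a 32-byte error code held at memory position 0x00 would be:

0x20 0x00 revert // return the 32 bytes starting at memory position 0x00 as the error code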

{{blog_divider}}

Creating the constructor

Now that we’ve set up our helper macros, we’re close to actually being able to write our smart contract methods. Congratulations on nearly reaching step 1!

To start with, we need a constructor. This is just another macro in Huff. Our constructor is very simple — we just need to record who the owner of the contract is. In Solidity it looks like this:

constructor() public {
   owner = msg.sender;
}

And in Huff it looks like this:

#define macro ERC20 = takes(0) returns(0) {
   caller OWNER_LOCATION() sstore
}

The EVM opcode caller will push the message sender’s address onto the stack.

We then push the storage slot we’ve reserved for the owner onto the stack.

Finally we call sstore, which will consume the first two stack items and store the 2nd stack item, using the value of the 1st stack item as the storage pointer.
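Putting that together, with comments against each opcode:

caller           // stack state: msg.sender
OWNER_LOCATION() // stack state: 0x01 msg.sender
sstore           // store msg.sender at storage slot 0x01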

For more information about storage pointers and how smart contracts manage state, Andreas Olofsson’s Solidity workshop on storage is a great read.

{{blog_divider}}

Parsing the function signature

Are we ready to start writing our smart contract methods yet? Of course not, this is Huff. Huff is efficient, but slow.

I like think of Huff like a trusty tortoise, if the tortoise is actually a hundred rats stitched into a tortoise suit, and each rat is a hundred maggots stitched into a rat suit.

…anyhow, we still need our function selector. But Huff doesn’t do functions; we’re going to have to create them from more basic building blocks.

One of the first pieces of code generated by the Solidity compiler is code to unpick the function signature. A function signature is a unique marker that maps to a function name.

For example, consider the Solidity function:

function balanceOf(address tokenOwner) public view returns (uint balance);

The function signature will take the core identifying information of the function:

  • the function name
  • the input argument types

This is represented as a string, i.e. "balanceOf(address)". A keccak256 hash of this string is taken, and the most significant 4 bytes of the hash are then used as the function signature.
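For balanceOf, the derivation looks like this (hash truncated):

// keccak256("balanceOf(address)") = 0x70a08231…
// most significant 4 bytes: 0x70a08231 = the function signature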

This online tool makes it easier to find the signature of a function.

It’s a bit of a mouthful, but it creates a (mostly) unique identifier for any given function — this allows contracts to conform to a defined interface that other smart contracts can call.

For example, if the function signature for a given function varied from contract to contract, it would be impossible to have an ‘ERC20’ token, because other smart contracts wouldn’t know how to construct a given contract’s function signature.

With that out of the way, we will find the function signature in the first 4 bytes of calldata. We need to extract this signature and then figure out what to do with it.

Solidity will create function signature hashes under the hood so you don’t have to, but Huff is a bit too primitive for that. We have to supply them directly. We can identify the ERC20 function signatures by pulling them out of Remix.
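For reference, here are the signatures we’ll need, each derived as described above:

totalSupply()                         0x18160ddd
balanceOf(address)                    0x70a08231
allowance(address,address)            0xdd62ed3e
transfer(address,uint256)             0xa9059cbb
approve(address,uint256)              0x095ea7b3
transferFrom(address,address,uint256) 0x23b872dd
mint(address,uint256)                 0x40c10f19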

We can parse a function signature by extracting the first 4 bytes of calldata and then performing a series of if-else statements over every function hash.

We can use the bit-shift instructions introduced in Constantinople to save a bit of gas here. 0x00 calldataload will extract the first 32 bytes of calldata and push them onto the stack as a single EVM word. i.e. the 4 bytes we want are in the most significant byte positions and we need them in the least significant positions.

We can do this with 0x00 calldataload 224 shr — shifting right by 224 bits (28 bytes) leaves only the 4-byte signature.

We can execute ‘functions’ by comparing the calldata with a function signature, and jumping to the relevant macro if there is a match. i.e:

0x00 calldataload 224 shr // function signature
dup1 0xa9059cbb eq transfer jumpi
dup1 0x23b872dd eq transfer_from jumpi
dup1 0x70a08231 eq balance_of jumpi
dup1 0xdd62ed3e eq allowance jumpi
dup1 0x095ea7b3 eq approve jumpi
dup1 0x18160ddd eq total_supply jumpi
dup1 0x40c10f19 eq mint jumpi
// If we reach this point, we've reached the fallback function.
// However we don't have anything inside our fallback function!
// We can just exit instead, after checking that callvalue is zero:
UTILS__NOT_PAYABLE<error_location>()
0x00 0x00 return

We want the scope of this macro to be constrained to identifying where to jump — the location of these jump labels is elsewhere in the code. Again, we use template parameters to ensure that jump labels are only explicitly referenced inside the macros that define them.

Our final macro looks like this:

template <transfer, transfer_from, balance_of, allowance, approve, total_supply, mint, error_location>
#define macro ERC20__FUNCTION_SIGNATURE = takes(0) returns(0) {
   0x00 calldataload 224 shr // function signature
   dup1 0xa9059cbb eq <transfer> jumpi
   dup1 0x23b872dd eq <transfer_from> jumpi
   dup1 0x70a08231 eq <balance_of> jumpi
   dup1 0xdd62ed3e eq <allowance> jumpi
   dup1 0x095ea7b3 eq <approve> jumpi
   dup1 0x18160ddd eq <total_supply> jumpi
   dup1 0x40c10f19 eq <mint> jumpi
   UTILS__NOT_PAYABLE<error_location>()
   0x00 0x00 return
}

{{blog_divider}}

Setting up boilerplate contract code

Finally, we have enough to write the skeletal structure of our main function — the entry-point when our smart contract is called. We represent each method with a macro, which we will need to implement.

#define macro ERC20__MAIN = takes(0) returns(0) {
   ERC20__FUNCTION_SIGNATURE<
       transfer,
       transfer_from,
       balance_of,
       allowance,
       approve,
       total_supply,
       mint,
       throw_error
>()

   transfer:
       ERC20__TRANSFER<throw_error>()
   transfer_from:
       ERC20__TRANSFER_FROM<throw_error>()
   balance_of:
       ERC20__BALANCE_OF<throw_error>()
   allowance:
       ERC20__ALLOWANCE<throw_error>()
   approve:
       ERC20__APPROVE<throw_error>()
   total_supply:
       ERC20__TOTAL_SUPPLY<throw_error>()
   mint:
       ERC20__MINT<throw_error>()
   throw_error:
       0x00 0x00 revert
}

…Tadaa.

Finally we’ve set up our pre-flight macros and boilerplate code and we’re ready to start implementing methods! But that’s enough for today.

In part 2 we’ll implement the ERC20 methods as glistening Huff macros, run some benchmarks against a Solidity implementation and question whether any of this was worth the effort.

Cheers,

Zac.

Click here for part 2

Read more
Aztec Network
31 Mar

Announcing the Alpha Network

Alpha is live: a fully feature-complete, privacy-first network. The infrastructure is in place, privacy is native to the protocol, and developers can now build truly private applications. 

Nine years ago, we set out to redesign blockchain for privacy. The goal: create a system institutions can adopt while giving users true control of their digital lives. Privacy band-aids are coming to Ethereum (someday), but it’s clear we need privacy now, and there’s an arms race underway to build it. Privacy is complex; it’s not a feature you can bolt on as an afterthought. It demands a ground-up approach, deep tech stack integration, and complete decentralization.

In November 2025, the Aztec Ignition Chain went live as the first decentralized L2 on Ethereum; it’s the coordination layer that the execution layer sits on top of. The network is not operated by Aztec Labs or the Aztec Foundation; it’s run by the community, making it the true backbone of Aztec.

With the infrastructure in place and a unanimous community vote, the network enters Alpha. 

What is the Alpha Network?

Alpha is the first Layer 2 with a full execution environment for private smart contracts. All accounts, transactions, and the execution itself can be completely private. Developers can now choose what they want public and what they want to keep private while building with the three privacy pillars we have in place across data, identity, and compute.

These privacy pillars, which can be used individually or combined, break down into three core layers: 

  1. Data: The data you hold or send remains private, enabling use cases such as private transactions, RWAs, payments and stablecoins.
  2. Identity: Your identity remains private, enabling accounts that privately connect real world identities onchain, institutional compliance, or financial reporting where users selectively disclose information.
  3. Compute: The actions you take remain private, enabling applications in private finance, gaming, and beyond.

The Key Components  

Alpha is feature-complete, meaning this is the only full-stack solution for adding privacy to your business or application. You build, and Aztec handles the cryptography under the hood.

It’s Composable. Privacy-preserving contracts are not isolated; they can talk to each other and seamlessly blend both private and public state across contracts. Privacy can be preserved across contract calls for full callstack privacy.

No backdoor access. Aztec is the only decentralized L2, and is launching as a fully decentralized rollup with a Layer 1 escape hatch.

It’s Compliant. Companies have been missing out on the benefits of blockchains because transparent chains expose user data. Aztec protects that data while still offering fully customizable controls, so companies can build compliant apps that move value around the world instantly.

How Apps Work on Alpha 

  1. Write in Noir, a Rust-like programming language for writing smart contracts. Build contracts with Aztec.nr and mark functions private or public.
  2. Prove on a device. Users execute private logic locally and a ZK proof is generated.
  3. Submit to Aztec. The proof goes to sequencers who validate without seeing the data. Any public aspects are then executed.
  4. Settle on Ethereum. Checkpoints batch proofs to L1 every ~12s. Ethereum verifies everything.

Developers can explore our privacy primitives across data, identity, and compute and start building with them using the documentation here. Note that this is an early version of the network with known vulnerabilities; see this post for details. While this is the first iteration of the network, there will be several upgrades that secure and harden the network on our path to Beta. If you’d like to learn more about how you can integrate privacy into your project, reach out here.

To hear directly from our cofounders, join our live Q&A from Cannes on Tuesday, March 31st at 9:30 am ET. Follow us on X to get the latest updates from the Aztec Network.

Aztec Network
27 Mar

Critical Vulnerability in Alpha v4

On Wednesday 17 March 2026, our team discovered a new vulnerability in the Aztec Network. Following analysis, the vulnerability has been confirmed as critical in accordance with our vulnerability matrix.

The vulnerability affects the proving system as a whole, and is not mitigated via public re-execution by the committee of validators. Exploitation can lead to severe disruption of the protocol and theft of user funds.

In accordance with our policy, fixes for the network will be packaged and distributed with the “v5” release of the network, currently planned for July 2026.

The actual bug and corresponding patch will not be publicly disclosed until “v5.”

Aztec applications and portals bridging assets from Layer 1s should warn users about the security guarantees of Alpha, in particular, reminding users not to put in funds they are not willing to lose. Portals or applications may add additional security measures or training wheels specific to their application or use case.

State of Alpha security

We will shortly establish a bug tracker to show the number and severity of bugs known to us in v4. The tracker will be updated as audits and security researchers discover issues. Each new alpha release will get its own tracker. This will allow developers and users to judge for themselves how they are willing to use the network, and we will use the tracker as a primary determinant for whether the network is ready for a "Beta" label.

Additional bug disclosure

We have identified a vulnerability in barretenberg allowing inclusion of incorrect proofs in the Aztec Network mempool, and ask all nodes to upgrade to version v4.1.2 or later.

We’d like to thank Consensys Diligence & TU Vienna for the recent discovery of a separate vulnerability in barretenberg, categorized as medium for the network and critical for Noir.

We have published a fixed version of barretenberg.

We’d also like to thank Plainshift AI for discovery, reproduction, and reporting of one more vulnerability in the Aztec Network and their ongoing work to help secure the network.

Aztec Network
18 Mar

How Aztec Governance Works

Decentralization is not just a technical property of the Aztec Network; it is the governing principle.

No single team, company, or individual controls how the network evolves. Upgrades are proposed in public, debated in the open, and approved by the people running the network. Decentralized sequencing, proving, and governance are hard-coded into the base protocol so that no central actor can unilaterally change the rules, censor transactions, or appropriate user value.

The governance framework that makes this possible has three moving parts: the Aztec Improvement Proposal (AZIP), the Aztec Upgrade Proposal (AZUP), and the onchain vote. Together, they form a pipeline that takes an idea to a live protocol change, with multiple independent checkpoints along the way.

The Virtual Town Square

Every upgrade starts with an AZIP. AZIPs are version-controlled design documents, publicly maintained on GitHub, modeled on the same EIP process that has governed Ethereum since its earliest days. Anyone is encouraged to suggest improvements to the Aztec Network protocol spec.

Before a formal proposal is opened, ideas live in GitHub Discussions, an open forum where the community can weigh in, challenge assumptions, and shape the direction of a proposal before it hardens into a spec. This is the virtual town square: the place where the network's future gets debated in public, not decided behind closed doors.

The AZIP framework is what decentralization looks like in practice. Multiple ideas can surface simultaneously, get stress-tested by the community, and the strongest ones naturally rise. Good arguments win, not titles or seniority. The process selects for quality discussion precisely because anyone can participate and everything is visible.

Once an AZIP is formalized as a pull request, it enters a structured lifecycle: Draft, Ready for Discussion, then Accepted or Rejected. Rejected AZIPs are not deleted — they remain permanently in the repository as a record of what was tried and why it was rejected. Nothing gets quietly buried.

Security Considerations are mandatory for all Core, Standard, and Economics AZIPs. Proposals without them cannot pass the Draft stage. Security is structural, not an afterthought.

From Proposal to Upgrade

Once Core Contributors, a merit-based and informal group of active protocol contributors, have reviewed an AZIP and approved it for inclusion, it gets bundled into an AZUP.

An AZUP takes everything an AZIP described and deploys it — a real smart contract, real onchain actions. Each AZUP includes a payload that encodes the exact onchain changes that will occur if the upgrade is approved. Anyone can inspect the payload on a block explorer and see precisely what will change before voting begins.

The payload then goes to sequencers for signaling. Sequencers are the backbone of the network. They propose blocks, attest to state, and serve as the first governance gate for any upgrade. A payload must accumulate enough signals from sequencers within a fixed round to advance. The people actually running the network have to express coordinated support before any change reaches a broader vote.

Once sequencers signal quorum, the proposal moves to tokenholders. Sequencers' staked voting power defaults to "yea" on proposals that came through the signaling path, meaning opposition must be active, not passive. Any sequencer or tokenholder who wants to vote against a proposal must explicitly re-delegate their stake before the voting snapshot is taken. The system rewards genuine engagement from all sides.

For a proposal to pass, it must meet quorum, a supermajority margin, and a minimum participation threshold, all three. If any condition is unmet, the proposal fails.

Built-In Delays, Built-In Safety

Even after a proposal passes, it does not execute immediately. A mandatory delay gives node operators time to deploy updated software, allows the community to perform final checks, and reduces the risk of sudden uncoordinated changes hitting the network. If the proposal is not executed within its grace period, it expires.

Failed AZUPs cannot be resubmitted. A new proposal must be created that directly addresses the feedback received. There is no way to simply retry and hope for a different result.

No Single Point of Control

The teams building the network have no special governance power. Sequencers, tokenholders, and Core Contributors are the governing actors, each playing a distinct and non-redundant role.

No single party can force or block an upgrade. Sequencers can withhold signals. Tokenholders can vote nay. Proposals not executed within the grace period expire on their own.

This is decentralization working as intended. The network upgrades not because a team decides it should, but because the people running it agree that it should.

If you want to help shape what Aztec becomes, the forum is open. The proposals are public. The town square is yours. 

Follow Aztec on X to stay up to date on the latest developments.

Aztec Network
10 Mar

Alpha Network Security: What to Expect

Aztec’s Approach to Security

Aztec is novel code — the bleeding edge of cryptography and blockchain technology. As the first decentralized L2 on Ethereum, Aztec is powered by a global network of sequencers and provers. Decentralization introduces some novel challenges in how security is addressed; there is no centralized sequencer to pause, nor a centralized entity with power over the network. The rollout of the network reflects this, with distinct goals at each phase.

Ignition

Validate that governance and decentralized block building work as intended on Ethereum Mainnet.

Alpha

Enable transactions at 1 TPS with ~6s block times, and improve the security of the network via continual audits and a bug bounty. New releases of the alpha network are expected regularly to address any security vulnerabilities. Please note: every alpha deployment is distinct, and state is not migrated between Alpha releases.

Beta

We will transition to Beta once the network scales to >10 TPS, with reduced block times while ensuring 99.9% uptime. Additionally, the transition requires no critical bugs disclosed via bug bounty in 3 months. State migrations across network releases can be considered.

TL;DR: The roadmap from Ignition to Alpha to Beta is designed to reflect the core team's growing confidence in the network's security.

This phased approach lets us balance ecosystem growth while building security confidence and steadily expanding the community of researchers and tools working to validate the network’s security, soundness and correctness.

Ultimately, time in production without an exploit is the most reliable indicator of how secure a codebase is.

At the start of Alpha, that confidence is still developing. The core team believes the network is secure enough to support early ecosystem use cases and handle small amounts of value. However, this is experimental alpha software, and users should not deposit more value than they are willing to lose. Apps may choose to limit deposit amounts to mitigate risk for users.

Audits are ongoing throughout Alpha, with the goal to achieve dual external audits across the entire codebase.

The table below shows current security and audit coverage at the time of writing.

The main bug bounty for the network is not yet live, other than for the non-cryptographic L1 smart contracts, as audits are ongoing. We encourage security researchers to responsibly disclose findings in line with our security policy.

As the audits are still ongoing, we expect to discover vulnerabilities in various components. The fixes will be packaged and distributed with the “v5” release.

If we discover a Critical vulnerability in “v4” (per our severity matrix) that would require a change of verification keys to fix, we will first alert the portal operators to pause deposits and then post a message on the forum stating that the rollup has a vulnerability.

Security of the Aztec Virtual Machine (AVM)

Aztec uses a hybrid execution model, handling private and public execution separately — and the security considerations differ between them.

As per the audit table above, it is clear that the Aztec Virtual Machine (AVM) has not yet completed its internal and external audits. This is intentional as all AVM execution is public, which allows it to benefit from a “Training Wheel” — the validator re-execution committee.

Every 72 seconds, a collection of newly proposed Aztec blocks are bundled into a "checkpoint" and submitted to L1. With each proposed checkpoint, a committee of 48 staking validators randomly selected from the entire set of validators (presently 3,959) re-execute all txs of all blocks in the checkpoint, and attest to the resulting state roots. 33 out of 48 attestations are required for the checkpoint proposal to be considered valid. The committee and the eventual zk proof must agree on the resultant state root for a checkpoint to be added to the proven chain. As a result, an attacker must control 33/48 of any given committee to exploit any bug in the AVM.

The only time the re-execution committee is not active is during the escape hatch, where the cost to propose a block is set at a level which attempts to quantify the security of the execution training wheel. For this version of the alpha network, this is set at 332M AZTEC, a figure intended to approximate the economic protection the committee normally provides, equivalent to roughly 19% of the un-staked circulating supply at the time of writing. Since the Aztec Foundation holds a significant portion of that supply, the effective threshold is considerably higher in practice.

Quantifying the cost of committee takeover attacks

A key design assumption is that just-in-time bribery of the sequencer committee is impractical, and that the only realistic attack vector is stake acquisition, not bribery.

Assuming a sequencer set size of 4,000 and a committee that rotates each epoch (~38.4 mins), drawn from the full sequencer set using a Fisher-Yates shuffle seeded by L1 RANDAO, we can see the probability and amount of stake required in the table below.

To achieve a 99% probability of controlling at least one supermajority within 3 days, an attacker would need to control approximately 55.4% of the validator set: roughly 2,215 sequencers representing 443M AZTEC in stake. Assuming an exploit is successful, their stake would likely de-value by 70-80%, resulting in an expected economic loss of approximately 332M AZTEC.

To achieve only a 0.5% probability of controlling at least one supermajority within 6 months, an attacker would need to control approximately 33.88% of the validator set.
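As a sketch of where such figures come from (assuming each committee is a uniform random draw, without replacement, from $N$ sequencers of which an attacker controls $k$), the chance of capturing a 33-of-48 supermajority in a single committee follows the hypergeometric distribution, compounded over $R$ rotations:

$$p = \sum_{i=33}^{48} \frac{\binom{k}{i}\binom{N-k}{48-i}}{\binom{N}{48}}, \qquad P(\text{at least one compromised committee}) = 1 - (1 - p)^R$$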

What does this mean for builders?

The practical effect of this training wheel is that the network can exist while there are known security issues with the AVM, as long as the value an attacker would gain from any potential exploit is less than the cost of acquiring 332M AZTEC.

The training wheel allows security researchers to spend more time on the private execution paths that don’t benefit from the training wheel and for the network to be deployed in an alpha version where security researchers can attempt to find additional AVM exploits.

In concrete terms, the training wheel means the Alpha network can reasonably secure value up to around 332M AZTEC (~$6.5M at the time of writing).

Ecosystem builders should keep the above limits in mind, particularly when designing portal contracts that bridge funds into the network.

Portals are the main way value will be bridged into the alpha network, and as a result are also the main target for any exploits. The design of portals can allow the network to secure far higher value. If a portal secures more than 332M AZTEC and allows all of its funds to be taken in one withdrawal, without any rate limits, delays or pause functionality, then it is a target for an AVM exploit attack.

If a portal implements a maximum withdrawal per user, pause functionality or delays for larger withdrawals it becomes harder for an attacker to steal a large quantum of funds in one go.

Conclusion

The Aztec Alpha code is ready to go. The next step is for someone in the community to submit a governance proposal and for the network to vote on enabling transactions. This is decentralization working as intended.

Once live, Alpha will run at 1 TPS with roughly 6 second block times. Audits are still ongoing across several components, so keep deposits small and only put in what you're comfortable losing.

On the security side, a 48-validator re-execution committee provides the main protection during Alpha, requiring 33/48 consensus on every 72-second checkpoint. Successfully attacking the AVM would require controlling roughly 55% of the validator set at a cost of around 332M AZTEC, putting the practical security ceiling at approximately $6.5M.

Alpha is about growing the ecosystem, expanding the security of the network, and accumulating the one thing no audit can shortcut: time in production. This is the network maturing in exactly the way it was designed to as it progresses toward Beta.