Aztec Network
29 Aug

From zero to nowhere: smart contract programming in Huff (2/4)

Follow the journey of smart contract programming in Huff, offering unique insights into this coding language.

Written by Zac Williamson

Hello there!

This is the second article in a series on how to write smart contracts in Huff.

Huff is a recent creation that has rudely inserted itself into the panoply of Ethereum smart contract-programming languages. Barely a language, Huff is more like an assembler that can manage a few guttural yelps, which could charitably be interpreted as some kind of syntax.

I created Huff so that I could write an efficient elliptic curve arithmetic contract called Weierstrudel, and reduce the costs of zero-knowledge cryptography on Ethereum. So, you know, a general purpose language that will take the world by storm at any moment…

But hey, I guess it can be used to write an absurdly over-optimised ERC20 contract as well.

So let’s dive back into where we left off.

Prerequisites

  • Part 1 of this series
  • Knowledge of Solidity inline assembly
  • A large glass of red wine. Huff’s Ballmer peak is extremely high, so a full-bodied wine is ideal. A Malbec or a Merlot will also balance out Huff’s bitter aftertaste, but vintage is more important than sweetness here.

Difficulty rating: (medium Huff)

Getting back into the swing of things: Transfer

We left off in the previous article with the skeletal structure of our contract, which is great! We can jump right into the fun stuff and start programming some logic.

We need to implement function transfer(address to, uint256 value) public returns (bool). Here's a boilerplate Solidity implementation of transfer:

function transfer(address to, uint256 value) public returns (bool) {
  balances[msg.sender] = balances[msg.sender].sub(value);
  balances[to] = balances[to].add(value);
  emit Transfer(msg.sender, to, value);
  return true;
}

Hmm. Well, this is awkward. We need to access some storage mappings and emit an event.

Huff doesn’t have mappings or events.

Okay, let's take a step back. An ERC20 token represents its balances by mapping addresses to integers: mapping(address => uint) balances. But… Huff does not have types. (Of course Huff doesn't have types; type checking is expensive! Well, it's not free, so it had to go.)

In order to emulate this, we need to dig under the hood and figure out how Solidity represents mappings, with hashes.

Breaking down a Solidity mapping

Smart contracts store data via the sstore opcode — which takes a pointer to a storage location and a 32-byte word to store there. Each storage location can contain 32 bytes of data, but these locations don't have to be used linearly (unlike memory locations, which are linear).

Mappings work by combining the mapping key with the storage slot of the mapping, and hashing the result. The output is a 32-byte storage pointer that is unique both to the key being used and to the mapping variable in question.
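To make this concrete, here is an illustrative Python sketch of the slot derivation. One big caveat: Python's standard library has no Keccak-256 (hashlib.sha3_256 is the NIST SHA-3 variant, which pads differently), so sha3_256 is used purely as a stand-in to show the layout; the real EVM pointer values will differ.

```python
import hashlib

# Sketch: Solidity derives a mapping entry's storage slot as
#   slot = keccak256(pad32(key) ++ pad32(base_slot))
# hashlib.sha3_256 is a STAND-IN here; the EVM uses Keccak-256, which
# produces different digests, so only the structure is illustrative.
def mapping_slot(key: int, base_slot: int) -> bytes:
    data = key.to_bytes(32, "big") + base_slot.to_bytes(32, "big")
    return hashlib.sha3_256(data).digest()

alice, bob = 0xA11CE, 0xB0B
assert len(mapping_slot(alice, 0)) == 32                 # a full 32-byte pointer
assert mapping_slot(alice, 0) != mapping_slot(bob, 0)    # unique per key
assert mapping_slot(alice, 0) != mapping_slot(alice, 1)  # unique per mapping
```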

So, we can solve this problem by implementing mappings from scratch! The Transfer event requires address from, address to, uint256 value. We'll deal with the event at the end of this macro, but given that we will be re-using the from, to, value variables a lot, we might as well throw them on the stack. What could go wrong?

Initialising the stack

Before we start cutting some code, let’s map out the steps our method has to perform:

  1. Increase balance[to] by value
  2. Decrease balance[msg.sender] by value
  3. Error checking
  4. Emit the Transfer(from, to, value) event
  5. Return true

When writing an optimised Huff macro, we need to think strategically about how to perform the above in order to minimise the number of swap opcodes that are required.

Specifically, the variables to and value are located in calldata, the data structure that stores input data sent to the smart contract. We can load a word of calldata via calldataload. The calldataload opcode takes one input — the offset in calldata that we're loading from — so loading a word of calldata costs 6 gas: 3 for the push that supplies the offset and 3 for calldataload itself.

It only costs 3 gas to duplicate a variable on the stack, so we only want to load from calldata once and re-use variables with the dup opcode.
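As an aside on where those calldata offsets come from: ABI-encoded calldata for transfer(address,uint256) is a 4-byte function selector followed by two 32-byte-aligned arguments. A small illustrative Python model (the calldataload helper here is an invention mimicking the opcode, not a real API):

```python
# Illustrative model of transfer(address,uint256) calldata layout:
# 4-byte selector, then each argument left-padded to 32 bytes.
selector = bytes.fromhex("a9059cbb")        # keccak("transfer(address,uint256)")[:4]
to = (0xB0B).to_bytes(32, "big")            # bytes 0x04..0x24
value = (1000).to_bytes(32, "big")          # bytes 0x24..0x44
calldata = selector + to + value

def calldataload(offset: int) -> int:
    """Mimics the opcode: read a 32-byte word from calldata at `offset`."""
    return int.from_bytes(calldata[offset:offset + 32], "big")

assert calldataload(0x04) == 0xB0B          # `to` lives at offset 0x04
assert calldataload(0x24) == 1000           # `value` lives at offset 0x24
```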

Because the stack is a last-in-first-out structure, the first variables we load onto the stack will be consumed by the last bit of logic in our method.

The last ‘bit of logic’ we need is the event that we will be emitting, so let’s deal with how that will work.

Huff and events

Imagine we're almost done implementing transfer(address to, uint256 value) public returns (bool); the only minor issue is that we need to emit an event, Transfer(address indexed from, address indexed to, uint256 value).

…I have a confession to make. I’ve never written a Huff contract that emits events. Still, there’s a first time for everything yes?

Events have two types of data associated with them, topics and data.

Topics are what get created when an event parameter has the indexed prefix. Instead of being stored in the log's data section, an indexed parameter such as address indexed from is stored as a 'topic': a 32-byte value that nodes use as a database lookup index (value types like addresses become the topic directly; dynamic types like strings are stored as their keccak256 hash).

i.e. when searching the event logs, you can efficiently pull out all Transfer events that contain a given address in the from or to field, because topics are indexed. Non-indexed parameters live in the log's data section, which can only be searched by scanning through every log.

Our Transfer event has three topics, despite only having two indexed parameters. The event signature is also a topic: a keccak256 hash of the event declaration string, much like a function signature.

Digging around in Remix, this is the event signature:

0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF. Just rolls off the tongue doesn’t it?

So we know what we need to do with indexed data: just supply the 'topics' on the stack. Next, how does an event log its non-indexed data?

Taking a step back, there are five log opcodes: log0, log1, log2, log3, log4. The suffix describes the number of topics attached to each log.

We want log3. The interface for the log3 opcode is log3(p1, p2, a, b, c). Memory p1 to p1+p2 contains the log data, and a, b, c represent the log topics.

i.e. we want log3(p1, p2, event_signature, from, to). The values from, to, event_signature are going to be the last variables consumed on our stack, so they must be the first variables we add to it at the start of our program.
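Because the stack is last-in-first-out, the arguments log3 will consume last must be pushed first. A tiny illustrative Python sketch of that ordering:

```python
# Illustrative: the EVM stack is LIFO, so values a final opcode will pop
# must be pushed in reverse order of consumption.
stack = []
for item in ["to", "from", "signature"]:   # push order in our Huff code
    stack.append(item)

# log3 pops its topics from the top of the stack:
popped = [stack.pop() for _ in range(3)]
assert popped == ["signature", "from", "to"]   # consumption order is reversed
```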

Finally, we’re in a position to write some Huff code, oh joy. Here it is:

0x04 calldataload
caller
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF

Next up, we need to define the memory that will contain value. That's simple enough — we'll be storing value at memory position 0x00, so p1=0x00 and p2=0x20. Note that we push 0x20 before 0x00, because p1 is consumed first and therefore must end up on top of the stack. Giving us:

0x04 calldataload
caller
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
0x20
0x00

Finally, we need value, so that we can store it in memory for log3. We will execute the mstore opcode at the end of our method, so that we can access value from the stack for the rest of our method. For now, we just load it onto the stack:

0x04 calldataload
caller
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
0x20
0x00
0x24 calldataload

One final thing: we have assumed that 0x04 calldataload will map to address to. But 0x04 calldataload loads a 32-byte word onto the stack, and addresses are only 20 bytes! We need to mask the 12 most-significant bytes of 0x04 calldataload, in case the transaction sender has added non-zero junk into those upper 12 bytes of calldata.

We can fix this by calling 0x04 calldataload 0x000000000000000000000000ffffffffffffffffffffffffffffffffffffffff and.
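The arithmetic of that mask, sketched in Python (illustrative only):

```python
# Illustrative: AND-masking a 32-byte word down to its low 20 bytes,
# exactly what the 0x00...00ff...ff constant above does.
ADDRESS_MASK = (1 << 160) - 1   # 20 bytes of 0xff, 12 bytes of 0x00 above

# A calldata word with junk in its upper 12 bytes and an address below:
word = int.from_bytes(bytes.fromhex("deadbeef" * 3 + "aa" * 20), "big")

to = word & ADDRESS_MASK        # junk stripped, address preserved
assert to == int.from_bytes(b"\xaa" * 20, "big")
```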

Actually, let’s make a macro for that, and one for our event signature to keep it out of the way:

#define macro ADDRESS_MASK = takes(1) returns(1) {
0x000000000000000000000000ffffffffffffffffffffffffffffffffffffffff
and
}

#define macro TRANSFER_EVENT_SIGNATURE = takes(0) returns(1) {
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
}

The final state of our 'initialisation' macro is this:

#define macro ERC20__TRANSFER_INIT = takes(0) returns(6) {
0x04 calldataload ADDRESS_MASK()
caller
TRANSFER_EVENT_SIGNATURE()
0x20
0x00
0x24 calldataload
}

Updating balances[to]

Now that we’ve set up our stack, we can proceed iteratively through our method’s steps (you know, like a normal program…).

To increase balances[to], we need to handle mappings. To start, let’s get our mapping key for balances[to]. We need to place to and the storage slot unique to balances linearly in 64 bytes of memory, so we can hash it:

// stack state:
// value 0x00 0x20 signature from to
dup6 0x00 mstore
BALANCE_LOCATION() 0x20 mstore
0x40 0x00 sha3

In the functional style, sha3(a, b) will create a keccak256 hash of the data in memory, starting at memory index a and ending at index a+b.

Notice how our ‘mapping’ uses 0x40 bytes of memory? That’s why the ‘free memory pointer’ in a Solidity contract is stored at memory index 0x40 — the first 2 words of memory are used for hashing to compute mapping keys.

Optimizing storage pointer construction

I don’t know about you, but I’m not happy with this macro. It smells… inefficient. In part 1, when we set BALANCE_LOCATION() to storage slot 0x00, we did that on purpose! balances is the most commonly used mapping in an ERC20 contract, and there’s no point storing 0x00 in memory. Smart contract memory isn’t like normal memory; it is never uninitialised, because all memory starts off initialised to 0x00. We can scrap that stuff, leaving:

// stack state:
// value 0x00 0x20 signature from to
dup6 0x00 mstore
0x40 0x00 sha3

Note that if our program has previously stored something at index 0x20, we will compute the wrong mapping key.

But this is Huff; clearly the solution is to just never use more than 32 bytes of memory for our entire program. What are we, some kind of RAM hog?

Setting storage variables

Next, we’re going to update balances[to]. To start, we need to load up our balance and duplicate value, in preparation to add it to balances[to].

// stack state:
// key(balances[to]) value 0x00 0x20 signature from to
dup1 sload
dup3

Now, remember that MATH__ADD macro we made in part 1? We can use it here! See, Huff makes programming easy with plug-and-play macros! What a joy.

// stack state:
// key(balances[to]) value 0x00 0x20 signature from to
dup1 sload
dup3
MATH__ADD()

…wait. I said we could use MATH__ADD, not that we were going to. I don’t like how expensive this code is looking and I think we can haggle.

Specifically, let’s look at that MATH__ADD macro:

#define macro MATH__ADD = takes(2) returns(1) {
// stack state: a b
dup2 add
// stack state: (a+b) a
dup1 swap2 gt
// stack state: (a > (a+b)) (a+b)
jumpi
}

Remember how, well, optimised, our macro seemed in part 1? All I see now is a bloated gorgon feasting on wasted gas with opcodes dribbling down its chin. Disgusting!

First off, we don’t need that swap2 opcode. Our original macro consumed a from the stack because that variable wasn’t needed any more. But… a is uint256 value, and we do need it later, so instead of swapping it into position and consuming it, we can fetch it with a simple dup opcode.
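The overflow test itself is worth spelling out. EVM addition wraps modulo 2^256, so checking a > (a+b) after the add detects overflow. An illustrative Python sketch:

```python
# Illustrative: EVM `add` wraps mod 2**256; the macro's dup/gt sequence
# computes a > (a + b), which is non-zero exactly when the add overflowed.
MOD = 2 ** 256

def checked_add(a: int, b: int):
    total = (a + b) % MOD       # what `add` leaves on the stack
    error = int(a > total)      # what the dup'd `gt` computes
    return total, error

assert checked_add(5, 10) == (15, 0)        # no overflow, error flag clear
assert checked_add(MOD - 1, 5) == (4, 1)    # wrapped around, error flag set
```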

But there’s a larger culprit here that needs to go, that jumpi opcode.

Combining error codes

The transfer function has three error tests it must perform: the two safemath checks when updating balances[from] and balances[to], and validating that callvalue is 0.

If we were to implement this naively, we would end up with three conditional jump instructions and associated opcodes to set up jump labels. That’s 48 gas. Quite frankly, I’d rather put that gas budget towards another Malbec, so let’s start optimising.

Instead of directly invoking a jumpi instruction against our error test, let’s store it for later. We can combine all of our error tests into a single test and only one jumpi instruction. Putting this into action, let’s compute the error condition, but leave it on the stack:

// stack state:
// key(balances[to]) value 0x00 0x20 signature from to
dup1 sload // balances[to]
dup3       // value balances[to]
add        // value+balances[to]
dup1       // value+balances[to] value+balances[to]
dup4       // value v+b v+b
gt         // error_code value+balances[to]

Finally, we need to store value+balances[to] at the mapping key we computed previously. We need key(balances[to]) to be in front of value+balances[to], which we can perform with a simple swap opcode:

#define macro ERC20__TRANSFER_GIVE_TO = takes(6) returns(7) {
// stack state:
// value 0x00 0x20 signature from to
dup6 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[to] key(balances[to]) ...
dup3       // value balances[to] ...
add        // value+balances[to] ...
dup1       // value+balances[to] value+balances[to] ...
dup4       // value value+balances[to] ...
gt         // error_code value+balances[to] key(balances[to])
swap2      // key(balances[to]) value+balances[to] error_code
sstore     // error_code ...
}

And that’s it! We’re done with updating balances[to]. What a breeze.
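As a sanity check, here is an illustrative Python model of the macro's effect on storage (the function name and the dict-as-storage are inventions for this sketch, not anything in the Huff):

```python
# Illustrative model of ERC20__TRANSFER_GIVE_TO's effect: read balances[to],
# add with 256-bit wraparound, flag overflow, write the result back.
MOD = 2 ** 256

def transfer_give_to(storage: dict, to: int, value: int) -> int:
    """Returns the error flag the macro leaves on the stack."""
    new_balance = (storage.get(to, 0) + value) % MOD   # sload + add
    error = int(value > new_balance)                   # the dup/gt overflow test
    storage[to] = new_balance                          # the sstore
    return error

storage = {}
assert transfer_give_to(storage, 0xB0B, 100) == 0
assert storage[0xB0B] == 100
assert transfer_give_to(storage, 0xB0B, MOD - 1) == 1  # overflow flagged
```

Note that the wrapped balance is still written even when the error flag is set; that's fine, because the combined error check later jumps out of the happy path, and the resulting failure reverts the whole transaction, discarding the bad write.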

Updating balances[from]

Next up, we need to repeat that process for balances[from], pillaging the parts of MATH__SUB that are useful.

#define macro ERC20__TRANSFER_TAKE_FROM = takes(7) returns(8) {
// stack state:
// error_code, value, 0x00, 0x20, signature, from, to
caller 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[from], key(balances[from]), error_code,...
dup4 dup2 sub // balances[from]-value, balances[from], key, e,...
dup5 swap3 // key, balances[from]-value, balances[from], value
sstore     // balances[from], value, error_code, value, ...
lt         // error_code_2, error_code, value, ...
}

Now that we have both of our error variables on the stack, we can perform our error test! In addition to the two error codes, we want to test whether callvalue > 0. We don’t need an explicit gt(callvalue, 0) here; any non-zero value of callvalue will trigger our jumpi instruction to jump.

callvalue or or throw_error jumpi

What an adorable little line of Huff.
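The trick is just bitwise OR: fold every failure flag into one word, then branch once. Illustrative Python:

```python
# Illustrative: combine all failure flags with OR so one branch covers
# every error case (overflow, underflow, non-zero callvalue).
def should_revert(overflow: int, underflow: int, callvalue: int) -> bool:
    return (overflow | underflow | callvalue) != 0

assert not should_revert(0, 0, 0)      # happy path
assert should_revert(1, 0, 0)          # overflow in balances[to]
assert should_revert(0, 1, 0)          # underflow in balances[from]
assert should_revert(0, 0, 42)         # any non-zero callvalue reverts
```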

Emitting Transfer(from, to, value)

The penultimate step in our method is to emit that event. Our stack state at this point is where we left it after ERC20__TRANSFER_INIT, so all we need to do is store value at memory index 0x00 and call the log3 opcode.

// stack state:
// value, 0x00, 0x20, event_signature, from, to
0x00 mstore log3

Finally, we need to return true (i.e. 1). We can do that by storing 0x01 at memory position 0x00 and calling return(0x00, 0x20).

Putting it all together…

At long last, we have our transfer method! It’s this…thing

#define macro OWNER_LOCATION = takes(0) returns(1) {
0x01
}

#define macro ADDRESS_MASK = takes(1) returns(1) {
0x000000000000000000000000ffffffffffffffffffffffffffffffffffffffff
and
}

#define macro TRANSFER_EVENT_SIGNATURE = takes(0) returns(1) {
0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF
}

#define macro ERC20 = takes(0) returns(0) {
caller OWNER_LOCATION() sstore
}

#define macro ERC20__TRANSFER_INIT = takes(0) returns(6) {
0x04 calldataload ADDRESS_MASK()
caller
TRANSFER_EVENT_SIGNATURE()
0x20
0x00
0x24 calldataload
}

#define macro ERC20__TRANSFER_GIVE_TO = takes(6) returns(7) {
// stack state:
// value, 0x00, 0x20, signature, from, to
dup6 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[to], key(balances[to]), ...
dup3       // value, balances[to], ...
add        // value+balances[to], ...
dup1       // value+balances[to], value+balances[to], ...
dup4       // value value+balances[to] ...
gt         // error_code, value+balances[to], key(balances[to])
swap2      // key(balances[to]), value+balances[to], error_code
sstore     // error_code, ...
}

#define macro ERC20__TRANSFER_TAKE_FROM = takes(7) returns(8) {
// stack state:
// error_code, value, 0x00, 0x20, signature, from, to
caller 0x00 mstore
0x40 0x00 sha3
dup1 sload // balances[from], key(balances[from]), error_code,...
dup4 dup2 sub // balances[from]-value, balances[from], key, e,...
dup5 swap3 // key, balances[from]-value, balances[from], value
sstore     // balances[from], value, error_code, value, ...
lt         // error_code_2, error_code, value, ...
}

#define macro ERC20__TRANSFER = takes(0) returns(0) {
ERC20__TRANSFER_INIT()
ERC20__TRANSFER_GIVE_TO()
ERC20__TRANSFER_TAKE_FROM()
callvalue or or throw_error jumpi
0x00 mstore log3
0x01 0x00 mstore
0x20 0x00 return
throw_error:
0x00 0x00 revert
}
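To convince ourselves the pieces compose, here is an illustrative end-to-end Python model of the method's semantics (the names and dict-as-storage are invented for this sketch; it mirrors the logic, not the opcodes):

```python
# Illustrative end-to-end model of ERC20__TRANSFER's semantics:
# update both balances, OR the error flags together, emit the Transfer log.
MOD = 2 ** 256
TRANSFER_SIG = 0xDDF252AD1BE2C89B69C2B068FC378DAA952BA7F163C4A11628F55A4DF523B3EF

def transfer(balances: dict, logs: list, sender: int, to: int,
             value: int, callvalue: int) -> bool:
    new_to = (balances.get(to, 0) + value) % MOD
    overflow = int(value > new_to)                    # GIVE_TO's gt check
    underflow = int(balances.get(sender, 0) < value)  # TAKE_FROM's lt check
    if overflow | underflow | callvalue:              # callvalue or or ... jumpi
        raise RuntimeError("revert")
    balances[to] = new_to
    balances[sender] = (balances.get(sender, 0) - value) % MOD
    logs.append((TRANSFER_SIG, sender, to, value))    # the log3
    return True                                       # 0x01 0x00 mstore ... return

balances, logs = {0xA11CE: 500}, []
assert transfer(balances, logs, 0xA11CE, 0xB0B, 200, 0)
assert balances == {0xA11CE: 300, 0xB0B: 200}
assert logs == [(TRANSFER_SIG, 0xA11CE, 0xB0B, 200)]
```

(One deliberate simplification: a self-transfer, where sender == to, is not modelled faithfully here; the Huff executes its two storage updates sequentially.)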

Wasn’t that fun?

I think that’s a reasonable place to end this article. In the next chapter, we’ll deal with how to handle ERC20 allowances in a Huff contract.

Cheers,

Zac.

Click here for part 3

Read more
Aztec Network
Aztec Network
31 Mar
xx min read

Announcing the Alpha Network

Alpha is live: a fully feature-complete, privacy-first network. The infrastructure is in place, privacy is native to the protocol, and developers can now build truly private applications. 

Nine years ago, we set out to redesign blockchain for privacy. The goal: create a system institutions can adopt while giving users true control of their digital lives. Privacy band-aids are coming to Ethereum (someday), but it’s clear we need privacy now, and there’s an arms race underway to build it. Privacy is complex, it’s not a feature you can bolt-on as an afterthought. It demands a ground-up approach, deep tech stack integration, and complete decentralization.

In November 2025, the Aztec Ignition Chain went live as the first decentralized L2 on Ethereum, it’s the coordination layer that the execution layer sits on top of. The network is not operated by the Aztec Labs or the Aztec Foundation, it’s run by the community, making it the true backbone of Aztec. 

With the infrastructure in place and a unanimous community vote, the network enters Alpha. 

What is the Alpha Network?

Alpha is the first Layer 2 with a full execution environment for private smart contracts. All accounts, transactions, and the execution itself can be completely private. Developers can now choose what they want public and what they want to keep private while building with the three privacy pillars we have in place across data, identity, and compute.

These privacy pillars, which can be used individually or combined, break down into three core layers: 

  1. Data: The data you hold or send remains private, enabling use cases such as private transactions, RWAs, payments and stablecoins.
  2. Identity: Your identity remains private, enabling accounts that privately connect real world identities onchain, institutional compliance, or financial reporting where users selectively disclose information.
  3. Compute: The actions you take remain private, enabling applications in private finance, gaming, and beyond.

The Key Components  

Alpha is feature complete–meaning this is the only full-stack solution for adding privacy to your business or application. You build, and Aztec handles the cryptography under the hood. 

It’s Composable. Private-preserving contracts are not isolated; they can talk to each other and seamlessly blend both private and public state across contracts. Privacy can be preserved across contract calls for full callstack privacy. 

No backdoor access. Aztec is the only decentralized L2, and is launching as a fully decentralized rollup with a Layer 1 escape hatch.

It’s Compliant. Companies are missing out on the benefits of blockchains because transparent chains expose user data, while private networks protect it, but still offer fully customizable controls. Now they can build compliant apps that move value around the world instantly.

How Apps Work on Alpha 

  1. Write in Noir, a proprietary rust-like programming language for writing smart contracts. Build contracts with Aztec.nr and mark functions private or public.
  1. Prove on a device. Users execute private logic locally and a ZK proof is generated.
  1. Submit to Aztec. The proof goes to sequencers who validate without seeing the data. Any public aspects are then executed.
  1. Settle on Ethereum. Checkpoints batch proofs to L1 every ~12s. Ethereum verifies everything. 

Developers can explore our privacy primitives across data, identity, and compute and start building with them using the documentation here. Note that this is an early version of the network with known vulnerabilities, see this post for details. While this is the first iteration of the network, there will be several upgrades that secure and harden the network on our path to Beta. If you’d like to learn more about how you can integrate privacy into your project, reach out here

To hear directly from our Cofounders, join our live from Cannes Q&A on Tuesday, March 31st at 9:30 am ET. Follow us on X to get the latest updates from the Aztec Network.

Aztec Network
Aztec Network
27 Mar
xx min read

Critical Vulnerability in Alpha v4

On Wednesday 17 March 2026 our team discovered a new vulnerability in the Aztec Network. Following the analysis, the vulnerability has been confirmed as a critical vulnerability in accordance with our vulnerability matrix.

The vulnerability affects the proving system as a whole, and is not mitigated via public re-execution by the committee of validators. Exploitation can lead to severe disruption of the protocol and theft of user funds.

In accordance with our policy, fixes for the network will be packaged and distributed with the “v5” release of the network, currently planned for July 2026.

The actual bug and corresponding patch will not be publicly disclosed until “v5.”

Aztec applications and portals bridging assets from Layer 1s should warn users about the security guarantees of Alpha, in particular, reminding users not to put in funds they are not willing to lose. Portals or applications may add additional security measures or training wheels specific to their application or use case.

State of Alpha security

We will shortly establish a bug tracker to show the number and severity of bugs known to us in v4. The tracker will be updated as audits and security researchers discover issues. Each new alpha release will get its own tracker. This will allow developers and users to judge for themselves how they are willing to use the network, and we will use the tracker as a primary determinant for whether the network is ready for a "Beta" label.

Additional bug disclosure

We have identified a vulnerability in barretenberg allowing inclusion of incorrect proofs in the Aztec Network mempool, and ask all nodes to upgrade to versions v.4.1.2 or later.

We’d like to thank Consensys Diligence & TU Vienna for a recent discovery of a separate vulnerability in barretenberg categorized as medium for the network and critical for Noir:

We have published a fixed version of barretenberg.

We’d also like to thank Plainshift AI for discovery, reproduction, and reporting of one more vulnerability in the Aztec Network and their ongoing work to help secure the network.

Aztec Network
Aztec Network
18 Mar
xx min read

How Aztec Governance Works

Decentralization is not just a technical property of the Aztec Network, it is the governing principle. 

No single team, company, or individual controls how the network evolves. Upgrades are proposed in public, debated in the open, and approved by the people running the network. Decentralized sequencing, proving, and governance are hard-coded into the base protocol so that no central actor can unilaterally change the rules, censor transactions, or appropriate user value.

The governance framework that makes this possible has three moving parts: Aztec Improvement Proposal (AZIP), Aztec Upgrade Proposal (AZUP), and the onchain vote. Together, they form a pipeline that takes an idea to a live protocol change, with multiple independent checkpoints along the way.

The Virtual Town Square

Every upgrade starts with an AZIP. AZIPs are version-controlled design documents, publicly maintained on GitHub, modeled on the same EIP process that has governed Ethereum since its earliest days. Anyone is encouraged to suggest improvements to the Aztec Network protocol spec.

Before a formal proposal is opened, ideas live in GitHub Discussions, an open forum where the community can weigh in, challenge assumptions, and shape the direction of a proposal before it hardens into a spec. This is the virtual town square: the place where the network's future gets debated in public, not decided behind closed doors.

The AZIP framework is what decentralization looks like in practice. Multiple ideas can surface simultaneously, get stress-tested by the community, and the strongest ones naturally rise. Good arguments win, not titles or seniority. The process selects for quality discussion precisely because anyone can participate and everything is visible.

Once an AZIP is formalized as a pull request, it enters a structured lifecycle: Draft, Ready for Discussion, then Accepted or Rejected. Rejected AZIPs are not deleted — they remain permanently in the repository as a record of what was tried and why it was rejected. Nothing gets quietly buried.

Security Considerations are mandatory for all Core, Standard, and Economics AZIPs. Proposals without them cannot pass the Draft stage. Security is structural, not an afterthought.

From Proposal to Upgrade

Once Core Contributors, a merit-based and informal group of active protocol contributors, have reviewed an AZIP and approved it for inclusion, it gets bundled into an AZUP.

An AZUP takes everything an AZIP described and deploys it — a real smart contract, real onchain actions. Each AZUP includes a payload that encodes the exact onchain changes that will occur if the upgrade is approved. Anyone can inspect the payload on a block explorer and see precisely what will change before voting begins.

The payload then goes to sequencers for signaling. Sequencers are the backbone of the network. They propose blocks, attest to state, and serve as the first governance gate for any upgrade. A payload must accumulate enough signals from sequencers within a fixed round to advance. The people actually running the network have to express coordinated support before any change reaches a broader vote.

Once sequencers signal quorum, the proposal moves to tokenholders. Sequencers' staked voting power defaults to "yea" on proposals that came through the signaling path, meaning opposition must be active, not passive. Any sequencer or tokenholder who wants to vote against a proposal must explicitly re-delegate their stake before the voting snapshot is taken. The system rewards genuine engagement from all sides.

For a proposal to pass, it must meet quorum, a supermajority margin, and a minimum participation threshold, all three. If any condition is unmet, the proposal fails.

Built-In Delays, Built-In Safety

Even after a proposal passes, it does not execute immediately. A mandatory delay gives node operators time to deploy updated software, allows the community to perform final checks, and reduces the risk of sudden uncoordinated changes hitting the network. If the proposal is not executed within its grace period, it expires.

Failed AZUPs cannot be resubmitted. A new proposal must be created that directly addresses the feedback received. There is no way to simply retry and hope for a different result.

No Single Point of Control

The teams building the network have no special governance power. Sequencers, tokenholders, and Core Contributors are the governing actors, each playing a distinct and non-redundant role.

No single party can force or block an upgrade. Sequencers can withhold signals. Tokenholders can vote nay. Proposals not executed within the grace period expire on their own.

This is decentralization working as intended. The network upgrades not because a team decides it should, but because the people running it agree that it should.

If you want to help shape what Aztec becomes, the forum is open. The proposals are public. The town square is yours. 

Follow Aztec on X to stay up to date on the latest developments.

Aztec Network
Aztec Network
10 Mar
xx min read

Alpha Network Security: What to Expect

Aztec’s Approach to Security

Aztec is novel code — the bleeding edge of cryptography and blockchain technology. As the first decentralized L2 on Ethereum, Aztec is powered by a global network of sequencers and provers. Decentralization introduces some novel challenges in how security is addressed; there is no centralized sequencer to pause or a centralized entity who has power over the network. The rollout of the network reflects this, with distinct goals at each phase.

Ignition

Validate governance and decentralized block building work as intended on Ethereum Mainnet. 

Alpha

Enable transactions at 1TPS, ~6s block times and improve the security of the network via continual ongoing audits and bug bounty. New releases of the alpha network are expected regularly to address any security vulnerabilities. Please note, every alpha deployment is distinct and state is not migrated between Alpha releases. 

Beta

We will transition to Beta once the network scales to >10 TPS, with reduced block times while ensuring 99.9% uptime. Additionally, the transition requires no critical bugs disclosed via bug bounty in 3 months. State migrations across network releases can be considered.

TL;DR: The roadmap from Ignition to Alpha to Beta is designed to reflect the core team's growing confidence in the network's security.

This phased approach lets us balance ecosystem growth while building security confidence and steadily expanding the community of researchers and tools working to validate the network’s security, soundness and correctness.

Ultimately, time in production without an exploit is the most reliable indicator of how secure a codebase is.

At the start of Alpha, that confidence is still developing. The core team believes the network is secure enough to support early ecosystem use cases and handle small amounts of value. However this is experimental alpha software and users should not deposit more value than they are willing to lose. Apps may choose to limit deposit amounts to mitigate risk for users.

Audits are ongoing throughout Alpha, with the goal to achieve dual external audits across the entire codebase.

The table below shows current security and audit coverage at the time of writing.

The main bug bounty for the network is not yet live, other than for the non-cryptographic L1 smart contracts as audits are ongoing. We encourage security researchers to responsibly disclose findings in line with our security policy .

As the audits are still ongoing, we expect to discover vulnerabilities in various components. The fixes will be packaged and distributed with the “v5” release.

If we discover a Critical vulnerability in “v4” in accordance with the following severity matrix, which would require the change of verification keys to fix, we will first alert the portal operators to pause deposits and then post a message on the forum, stating that the rollup has a vulnerability.

Security of the Aztec Virtual Machine (AVM)

Aztec uses a hybrid execution model, handling private and public execution separately — and the security considerations differ between them.

As per the audit table above, it is clear that the Aztec Virtual Machine (AVM) has not yet completed its internal and external audits. This is intentional as all AVM execution is public, which allows it to benefit from a “Training Wheel” — the validator re-execution committee.

Every 72 seconds, a collection of newly proposed Aztec blocks are bundled into a "checkpoint" and submitted to L1. With each proposed checkpoint, a committee of 48 staking validators randomly selected from the entire set of validators (presently 3,959) re-execute all txs of all blocks in the checkpoint, and attest to the resulting state roots. 33 out of 48 attestations are required for the checkpoint proposal to be considered valid. The committee and the eventual zk proof must agree on the resultant state root for a checkpoint to be added to the proven chain. As a result, an attacker must control 33/48 of any given committee to exploit any bug in the AVM.

The only time the re-execution committee is not active is during the escape hatch, where the cost to propose a block is set at a level intended to match the security the execution training wheel normally provides. For this version of the alpha network, that cost is set at 332M AZTEC, equivalent to roughly 19% of the un-staked circulating supply at the time of writing. Since the Aztec Foundation holds a significant portion of that supply, the effective threshold is considerably higher in practice.

Quantifying the cost of committee takeover attacks

A key design assumption is that just-in-time bribery of the sequencer committee is impractical, and that the only realistic attack vector is stake acquisition, not bribery.

Assuming a sequencer set size of 4,000 and a committee that rotates each epoch (~38.4 minutes), drawn from the full sequencer set via a Fisher-Yates shuffle seeded by the L1 RANDAO, the table below shows the probability of takeover and the amount of stake required.
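As a rough illustration of the selection mechanism (not the protocol's actual implementation, whose seed derivation and encoding will differ), drawing a committee from a RANDAO-style seed can be sketched as a deterministically seeded partial Fisher-Yates shuffle:

```python
import hashlib
import random

def select_committee(validators, seed: bytes, committee_size: int = 48):
    """Illustrative only: choose a committee by a partial Fisher-Yates
    shuffle, deterministically seeded from a RANDAO-style beacon value."""
    # Derive a deterministic RNG from the seed. The real protocol's
    # derivation differs; this only shows the shape of the idea.
    rng = random.Random(int.from_bytes(hashlib.sha256(seed).digest(), "big"))
    pool = list(validators)
    # Partial Fisher-Yates: only the first `committee_size` swaps are needed
    # to fix the committee, so we can stop early.
    for i in range(committee_size):
        j = rng.randrange(i, len(pool))
        pool[i], pool[j] = pool[j], pool[i]
    return pool[:committee_size]

# Every node derives the same committee from the same on-chain seed.
committee = select_committee(range(4000), seed=b"randao-from-l1")
```

Because the seed comes from L1, every honest node computes the same committee independently, and an attacker cannot choose which 48 validators are drawn.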

To achieve a 99% probability of controlling at least one supermajority within 3 days, an attacker would need to control approximately 55.4% of the validator set: roughly 2,215 sequencers, representing 443M AZTEC in stake. Assuming the exploit succeeded, their stake would likely devalue by 70-80%, resulting in an expected economic loss of approximately 332M AZTEC.

To achieve only a 0.5% probability of controlling at least one supermajority within 6 months, an attacker would need to control approximately 33.88% of the validator set.
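These probabilities follow from the hypergeometric distribution: each committee is a draw of 48 validators without replacement from the full set, and an attack succeeds if at least 33 of them are attacker-controlled. A back-of-the-envelope check, assuming the set size, committee size, and ~38.4-minute rotation quoted above, looks like this:

```python
from math import comb

def p_supermajority(total: int, attacker: int,
                    committee: int = 48, threshold: int = 33) -> float:
    """Probability that a single randomly drawn committee contains at
    least `threshold` attacker-controlled validators (hypergeometric tail)."""
    denom = comb(total, committee)
    return sum(
        comb(attacker, k) * comb(total - attacker, committee - k)
        for k in range(threshold, committee + 1)
    ) / denom

total = 4000
attacker = 2215                     # ~55.4% of the validator set
p_single = p_supermajority(total, attacker)

# One committee rotation per ~38.4-minute epoch.
epochs_per_day = 24 * 60 / 38.4
n_epochs = int(3 * epochs_per_day)  # ~112 independent draws in 3 days
p_window = 1 - (1 - p_single) ** n_epochs
```

Each individual committee draw only gives the attacker a few percent chance of a supermajority, but compounding that over roughly 112 rotations in 3 days pushes the cumulative probability toward the ~99% figure cited above.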

What does this mean for builders?

The practical effect of this training wheel is that the network can exist while there are known security issues with the AVM, as long as the value an attacker would gain from any potential exploit is less than the cost of acquiring 332M AZTEC.

The training wheel lets security researchers spend more time on the private execution paths, which don’t benefit from the training wheel, and lets the network launch as an alpha where researchers can continue to hunt for additional AVM exploits.

In concrete terms, the training wheel means the Alpha network can reasonably secure value up to around 332M AZTEC (~$6.5M at the time of writing).

Ecosystem builders should keep the above limits in mind, particularly when designing portal contracts that bridge funds into the network.

Portals are the main way value will be bridged into the alpha network and, as a result, are also the main target for exploits. Careful portal design can allow the network to secure far higher value. If a portal secures more than 332M AZTEC and allows all of its funds to be taken in a single withdrawal, with no rate limits, delays, or pause functionality, then it is a target for an AVM exploit attack.

If a portal implements a per-user maximum withdrawal, pause functionality, or delays for larger withdrawals, it becomes much harder for an attacker to steal a large quantity of funds in one go.
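The mitigations above can be combined into one withdrawal path. The sketch below (illustrative only, not Aztec's portal code; all names and parameters are hypothetical) shows the pattern: a rolling-window cap, a release delay for large withdrawals, and an operator pause switch:

```python
import time

class PortalRateLimiter:
    """Illustrative sketch of portal withdrawal protections: cap the value
    that can leave per rolling window, queue oversized withdrawals behind
    a delay, and let operators pause everything during an incident."""

    def __init__(self, window_cap: int, window_secs: int,
                 delay_secs: int, large_threshold: int):
        self.window_cap = window_cap          # max value out per window
        self.window_secs = window_secs        # rolling window length
        self.delay_secs = delay_secs          # delay for large withdrawals
        self.large_threshold = large_threshold
        self.spent = 0
        self.window_start = time.time()
        self.paused = False
        self.queued = []                      # (release_time, amount) pairs

    def request_withdrawal(self, amount: int, now: float = None) -> str:
        now = time.time() if now is None else now
        if self.paused:
            return "rejected: paused"
        # Reset the accounting when the window elapses.
        if now - self.window_start >= self.window_secs:
            self.window_start, self.spent = now, 0
        if self.spent + amount > self.window_cap:
            return "rejected: window cap exceeded"
        self.spent += amount
        if amount >= self.large_threshold:
            # Large withdrawals wait out a delay, giving operators
            # time to pause if the request looks like an exploit.
            self.queued.append((now + self.delay_secs, amount))
            return "queued: delayed release"
        return "released"
```

Under this pattern an AVM exploit can still forge a withdrawal message, but it can only drain up to the window cap immediately; anything larger sits in the delay queue where a pause stops it.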

Conclusion

The Aztec Alpha code is ready to go. The next step is for someone in the community to submit a governance proposal and for the network to vote on enabling transactions. This is decentralization working as intended.

Once live, Alpha will run at 1 TPS with roughly 6 second block times. Audits are still ongoing across several components, so keep deposits small and only put in what you're comfortable losing.

On the security side, a 48-validator re-execution committee provides the main protection during Alpha, requiring 33/48 consensus on every 72-second checkpoint. Successfully attacking the AVM would require controlling roughly 55% of the validator set at a cost of around 332M AZTEC, putting the practical security ceiling at approximately $6.5M.

Alpha is about growing the ecosystem, expanding the security of the network, and accumulating the one thing no audit can shortcut: time in production. This is the network maturing in exactly the way it was designed to as it progresses toward Beta.