With over 10,000 new nodes joining in just the past week, the Aztec testnet has surpassed 15,000 connected nodes. These nodes have processed more than 160,000 transactions in the past month, and more than 10 projects are now live on testnet.
Node operators are joining from 50+ countries across 6 continents — from hobbyists running on home laptops to professionals operating nodes in data centers or through cloud providers.

On May 29th, 2025, the Aztec Labs team observed a sharp slowdown in block production starting around 1:30 PM UTC. An emergency update was shipped and adopted by node operators, which resolved the issue and restored the network to full capacity.
Here’s a breakdown of what went wrong — and how we fixed it.

What caused the slowdown in block production?
At the time of the slowdown, over 19,000 nodes, including approximately 750 sequencer nodes, were online. Most of these were running with default configurations, including a 100MB mempool. A smaller number of nodes, including several operated by Aztec Labs, used larger mempool configurations.
When transactions are broadcast, nodes validate and store them in their local mempools. However, if the mempool is full, existing transactions are evicted to make room for new ones. Since all Aztec testnet transactions have the same priority fee, there is no incentive mechanism to rank them, so eviction becomes effectively random.
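To make the failure mode concrete, here is a minimal TypeScript sketch of fee-blind eviction. The `Tx` and `Mempool` shapes are illustrative assumptions, not the Aztec node's actual implementation; the point is that with identical priority fees there is nothing to sort by, so the evicted transaction is effectively arbitrary.

```typescript
// Illustrative sketch only; not the Aztec node's real mempool.
interface Tx {
  hash: string;
  sizeInBytes: number;
  priorityFee: bigint; // identical for every testnet tx, so useless for ranking
}

class Mempool {
  private txs = new Map<string, Tx>();
  private currentSize = 0;

  constructor(private readonly maxSizeInBytes: number) {}

  add(tx: Tx): void {
    // Make room if needed. With equal priority fees there is no meaningful
    // ordering, so the victim is picked at random.
    while (this.currentSize + tx.sizeInBytes > this.maxSizeInBytes && this.txs.size > 0) {
      const keys = [...this.txs.keys()];
      const victimHash = keys[Math.floor(Math.random() * keys.length)];
      this.currentSize -= this.txs.get(victimHash)!.sizeInBytes;
      this.txs.delete(victimHash);
    }
    this.txs.set(tx.hash, tx);
    this.currentSize += tx.sizeInBytes;
  }

  has(hash: string): boolean {
    return this.txs.has(hash);
  }

  get(hash: string): Tx | undefined {
    return this.txs.get(hash);
  }
}
```

Two nodes running this logic with different `maxSizeInBytes` values, or that simply saw transactions arrive in a different order, end up holding different subsets of the same transaction stream.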
This resulted in:
- Smaller nodes evicting transactions earlier than larger nodes
- Mempools across the network diverging significantly
- Nodes missing transactions required to validate incoming block proposals
Below, we will walk through the details of how and why this occurred, and the fix that was pushed to resolve this issue.
- Nodes receive and store transactions: Each node accepts incoming transactions and validates them before adding them to its mempool and gossiping them further throughout the network.

- Transaction eviction kicks in when the mempool is full: Without priority fees to guide eviction and replacement, transactions are evicted randomly once the mempool fills.

- Block proposals only include hashes: When a node receives a block proposal, it contains transaction hashes and a proposed state root, not the full transactions. The node checks its mempool and requests any missing transactions from its peers. If the node is able to retrieve all transactions, it executes them and signs the block proposal if the resulting state root matches the proposed one (see the sketch after this list).

- But most peers don’t have the transactions either: Because small nodes tended to be connected to other small nodes, and eviction was random, their peers were frequently missing the very same transactions.

- Widespread divergence: This led to significant differences in transaction availability across nodes. Validator nodes frequently could not find the transactions needed to validate and attest to proposals.

- Block production stalls: Without the required quorum of attestations, block proposals failed and block production stalled.
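Putting the pieces together, below is a hedged TypeScript sketch of the validation path just described, reusing the `Tx` and `Mempool` shapes from the earlier sketch. The helpers `requestFromPeers` and `executeTxs` are hypothetical stand-ins for the node's networking and execution layers, not the actual Aztec API.

```typescript
// Illustrative only; the helper signatures are assumptions, not Aztec's API.
interface BlockProposal {
  txHashes: string[];
  proposedStateRoot: string;
}

async function validateProposal(
  proposal: BlockProposal,
  mempool: Mempool,
  requestFromPeers: (hashes: string[]) => Promise<Map<string, Tx>>,
  executeTxs: (txs: Tx[]) => Promise<string>, // returns the resulting state root
): Promise<boolean> {
  // 1. Resolve each hash against the local mempool.
  const missing = proposal.txHashes.filter((h) => !mempool.has(h));

  // 2. Ask peers for anything we do not have. During the incident this step
  //    usually failed, because the queried peers had evicted the same txs.
  const fetched = await requestFromPeers(missing);
  if (fetched.size < missing.length) {
    return false; // cannot reconstruct the block, so cannot attest
  }

  // 3. Execute the full transaction list and compare state roots.
  const txs = proposal.txHashes.map((h) => mempool.get(h) ?? fetched.get(h)!);
  return (await executeTxs(txs)) === proposal.proposedStateRoot;
}
```

During the incident, most validators returned early at step 2: mempools had diverged, the random subset of queried peers lacked the same transactions, and the attestation quorum was never reached.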

How the issue was resolved
To resolve this, we made two critical changes:
Block Proposals Now Include Full Transactions
Previously, proposals only included transaction hashes. Now, each proposal includes the full transaction data alongside the hashes and the proposed state root. This ensures that even nodes with empty or divergent mempools can still validate the block — they receive all required transactions directly from the sequencer.
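As a rough illustration (the field names below are ours, not the actual wire format), the change amounts to widening the proposal payload:

```typescript
// Illustrative shapes only; not the actual Aztec wire format.
// Before: validators had to resolve every hash from their own mempool or peers.
interface HashOnlyProposal {
  txHashes: string[];
  proposedStateRoot: string;
}

// After: full transaction bodies travel with the proposal, so validation
// no longer depends on the state of any node's mempool.
interface FullProposal {
  txHashes: string[];
  txs: Tx[]; // `Tx` as in the earlier mempool sketch
  proposedStateRoot: string;
}
```

The trade-off, discussed below, is bandwidth: the proposal now carries the full transaction data to every peer it reaches.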

Nodes Request Missing Transactions from the Sending Peer
While sequencers use floodPublish to send proposals to all of their peers, a peer that receives a proposal does not re-send it to all of its own peers. Instead, it uses gossipsub to forward the proposal to a subset of its peers (the default is 8).
A receiving peer forwards only the block proposal, not the transactions. Peers two or more hops away from the sequencer therefore still lack the transactions and must ask a random subset of their peers for them. With roughly 19,000 nodes on the network, it is highly likely that the peers in that random subset are missing the transactions as well.
The other change that was needed, then, was for nodes that receive a block proposal and are missing transactions to always include the peer that sent them the proposal in the subset of peers they ask for those transactions.
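In code terms the change is small. The sketch below is an assumption about the shape of the fix, not the actual implementation: instead of sampling peers uniformly at random, the node seeds the request set with the peer that delivered the proposal.

```typescript
type PeerId = string;

// In-place Fisher-Yates shuffle.
function shuffle<T>(arr: T[]): void {
  for (let i = arr.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
}

// Illustrative sketch of the fix: the sender of the proposal is always in
// the subset we query, since it demonstrably has the proposal's transactions,
// while a purely random subset among ~19k nodes likely does not.
function selectPeersToAsk(allPeers: PeerId[], sender: PeerId, subsetSize: number): PeerId[] {
  const others = allPeers.filter((p) => p !== sender);
  shuffle(others);
  return [sender, ...others.slice(0, Math.max(0, subsetSize - 1))];
}
```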

So the team merged this PR, which instructs nodes to always request transactions from the peer that sent them the proposal. Other changes include reducing maxTxPerBlock from 20 back down to 8 and disabling floodPublish (otherwise, the sequencer is in for a rough time handling all the incoming requests).
However, while the above fix restored block production to normal, it is ultimately incomplete. Here is why there is still work to be done on a longer-term solution.
Sending transactions along with the proposal is bandwidth-heavy

- With maxTxPerBlock=20, each transaction is roughly 80kB due to the size of the clientIVC proofs.
- Proposals are sent via floodPublish, which pushes the payload to every peer; the current default is maxPeerLimit=100.
- The sequencer must therefore send a 20 tx * 80kB = 1.6MB payload to each of 100 peers, for a total of 160MB of data transfer per proposal.
- It must do this in under 10 seconds to leave time for block inclusion on L1.
- That works out to roughly 128Mbps of upload bandwidth while network throughput is below 0.6 TPS, which runs counter to Aztec's design philosophy of prioritizing decentralization.
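For concreteness, here is the arithmetic behind those numbers as a small TypeScript calculation:

```typescript
// Back-of-the-envelope bandwidth math using the figures above.
const maxTxPerBlock = 20;
const txSizeBytes = 80_000;  // ~80kB per tx, dominated by the clientIVC proof
const peerLimit = 100;       // default maxPeerLimit for floodPublish
const slotSeconds = 10;      // budget before L1 inclusion work begins

const payloadBytes = maxTxPerBlock * txSizeBytes;        // 1.6 MB per peer
const totalBytes = payloadBytes * peerLimit;             // 160 MB per proposal
const uploadMbps = (totalBytes * 8) / slotSeconds / 1e6; // 128 Mbps sustained

console.log({ payloadMB: payloadBytes / 1e6, totalMB: totalBytes / 1e6, uploadMbps });
```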
What’s next?
This incident highlights the complexity of running a truly decentralized L2 network, especially in a testnet setting where incentives (like priority fees) don’t yet exist.
Even under those constraints, the Aztec testnet continued to function, and a fix was deployed within days — thanks to the rapid response from our community of node operators and contributors. Our team will continue to monitor network performance as we prepare for a fully decentralized network launch.
For more information on this upgrade, join our Discord Town Hall on Wednesday, June 11th at 2 pm UTC and stay up-to-date on Noir and Aztec by following Noir and Aztec on X.