Aztec Network
30 Jan

Aztec Ignition Chain Update

The first decentralized L2 on Ethereum reaches 75k block height with 30M $AZTEC distributed through block rewards.

In November 2025, the Aztec Ignition Chain went live as the first decentralized L2 on Ethereum. Since launch, more than 185 operators across 5 continents have joined the network, with 3,400+ sequencers now running. The Ignition Chain is the backbone of the Aztec Network; true end-to-end programmable privacy is only possible when the underlying network is decentralized and permissionless. 

Until now, only participants from the $AZTEC token sale have been able to stake and earn block rewards ahead of Aztec's upcoming Token Generation Event (TGE), but that's about to change. Keep reading for an update on the state of the network and to learn how you can spin up your own sequencer or delegate your stake once TGE goes live.

Block Production 

The Ignition Chain launched to prove the stability of the consensus layer ahead of the execution environment that will enable privacy-preserving smart contracts. The network has remained healthy, crossing a block height of 75k blocks with zero downtime. That includes navigating Ethereum's major Fusaka upgrade in December 2025 and a governance upgrade to increase the queue speed for joining the sequencer set.

Source: AztecBlocks

Block Rewards

Over 30M $AZTEC tokens have been distributed to sequencers and provers to date. Block rewards go out every epoch (every 32 blocks), with 70% going to sequencers and 30% going to provers for generating block proofs.
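As a rough illustration of that split, here is a minimal sketch with a made-up per-epoch reward budget; it only mirrors the 70/30 ratio and 32-block epoch described above and is not the network's actual reward accounting.

```typescript
// Illustrative only: splits a hypothetical per-epoch reward budget
// using the 70/30 sequencer/prover ratio described above.
const BLOCKS_PER_EPOCH = 32;
const SEQUENCER_SHARE = 0.7;
const PROVER_SHARE = 0.3;

function splitEpochRewards(epochReward: number) {
  return {
    sequencers: epochReward * SEQUENCER_SHARE,
    provers: epochReward * PROVER_SHARE,
    perBlock: epochReward / BLOCKS_PER_EPOCH,
  };
}

// e.g. a hypothetical 10,000 $AZTEC epoch budget:
console.log(splitEpochRewards(10_000));
// { sequencers: 7000, provers: 3000, perBlock: 312.5 }
```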

If you don't want to run your own node, you can delegate your stake and share in block rewards through the staking dashboard. Note that fractional staking is not currently supported, so you'll need 200k $AZTEC tokens to stake.

Global Participation  

The Ignition Chain launched as a decentralized network from day one. The Aztec Labs and Aztec Foundation teams are not running any sequencers on the network or participating in governance. This is your network.

Anyone who purchased 200k+ tokens in the token sale can stake or delegate their tokens on the staking dashboard. Over 180 operators are now running sequencers, with more joining daily as they enter the sequencer set from the queue. And it's not just sequencers: 50+ provers have joined the permissionless, decentralized prover network to generate block proofs.

These operators span the globe, from solo stakers to data centers, from Australia to Portugal.

Source: Nethermind 

Node Performance

Participating sequencers have maintained a 99%+ attestation rate since network launch, demonstrating strong commitment and network health. Top performers include P2P.org, Nethermind, and ZKV. You can see all block activity and staker performance on the Dashtec dashboard. 

How to Join the Network 

On January 26th, 2026, the community passed a governance proposal for TGE. This makes tokens tradable and unlocks the AZTEC/ETH Uniswap pool as early as February 11, 2026. Once that happens, anyone with 200k $AZTEC tokens can run a sequencer or delegate their stake to participate in block rewards.

Here's what you need to run a validator node:

  • CPU: 8 cores
  • RAM: 16 GB
  • Storage: 1 TB NVMe SSD
  • Bandwidth: 25 Mbps

These are accessible specs for most solo stakers. If you've run an Ethereum validator before, you're already well-equipped.
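If you want a quick sanity check before setting anything up, here is a small sketch (not an official tool) that compares a machine against the specs listed above; the example machine values are placeholders.

```typescript
// Sketch only: compares a machine's resources against the specs listed above.
// The thresholds mirror the list; the sample machine is a placeholder you'd
// replace with your own hardware details.
interface NodeSpecs {
  cpuCores: number;
  ramGb: number;
  nvmeStorageTb: number;
  bandwidthMbps: number;
}

const REQUIRED: NodeSpecs = { cpuCores: 8, ramGb: 16, nvmeStorageTb: 1, bandwidthMbps: 25 };

function meetsRequirements(machine: NodeSpecs): boolean {
  return (
    machine.cpuCores >= REQUIRED.cpuCores &&
    machine.ramGb >= REQUIRED.ramGb &&
    machine.nvmeStorageTb >= REQUIRED.nvmeStorageTb &&
    machine.bandwidthMbps >= REQUIRED.bandwidthMbps
  );
}

console.log(meetsRequirements({ cpuCores: 16, ramGb: 32, nvmeStorageTb: 2, bandwidthMbps: 100 })); // true
```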

To get started, head to the Aztec docs for step-by-step instructions on setting up your node. You can also join the Discord to connect with other operators, ask questions, and get support from the community. Whether you run your own hardware or delegate to an experienced operator, you're helping build the infrastructure for a privacy-preserving future.

Solo stakers are the beating heart of the Aztec Network. Welcome aboard.

Aztec Network
22 Jan

The $AZTEC TGE Vote: What You Need to Know

The TL;DR:

  • The $AZTEC token sale, conducted entirely onchain, concluded on December 6, 2025, with ~50% of the capital committed coming from the community.
  • Immediately following the sale, tokens could be withdrawn from the sale website into personal Token Vault smart contracts on the Ethereum mainnet.
  • The proposal for TGE (Token Generation Event) is now live, and sequencers can start signaling to bring the proposal to a vote to unlock these tokens and make them tradeable. 
  • Anyone who participated in the token sale can participate in the TGE vote. 

The $AZTEC token sale was the first of its kind: conducted entirely onchain, with ~50% of the capital committed coming from the community. Running the sale fully onchain ensures that you have control over your tokens from day one. As we approach the TGE vote, all token sale participants will be able to vote to unlock their tokens and make them tradable.

What Is This Vote About?

Immediately following the $AZTEC token sale, tokens could be withdrawn from the sale website into your personal Token Vault smart contracts on the Ethereum mainnet. Right now, token holders are not able to transfer or trade these tokens. 

The TGE is a governance vote that decides when to unlock these tokens. If the vote passes, three things happen:

  1. Tokens purchased in the token sale become fully transferable 
  2. Trading goes live for the Uniswap v4 pool
  3. Block rewards become transferable for sequencers

This decision is entirely in the hands of $AZTEC token holders. The Aztec Labs and Aztec Foundation teams and investors cannot participate in staking or governance for 12 months, which includes the TGE governance proposal. Team and investor tokens will also remain locked for one year and then unlock gradually over the following two years.

The proposal for TGE is now live, and sequencers are already signaling to bring the proposal to a vote. Once enough sequencers have signaled, anyone who participated in the token sale will be able to connect their Token Vault contract to the governance dashboard to vote. Note that this will require you to stake and then unstake, following the regular 15-day process to withdraw your tokens.

If the vote passes, TGE can go live as early as February 12, 2026, at 7am UTC. TGE can be triggered by the first person to call the execute function on the proposal after that time.

How Do I Participate?

If you participated in the token sale, you don't have to do anything if you prefer not to vote. If the vote passes, your tokens will become available to trade at TGE. If you want to vote, the process happens in two phases:

Phase 1: Sequencer Signaling

Sequencers kick things off by signaling their support. Once 600 out of 1,000 sequencers signal, the proposal moves to a community vote.

Phase 2: Community Voting

After sequencers create the proposal, all Token Vault holders can vote using the governance dashboard. Please note that anyone who wants to vote must stake their tokens, locking them for at least 15 days to ensure the proposal can be executed before the voter exits. Once signaling is complete, the timeline is as follows:

  • Days 1–3: Waiting period 
  • Days 4–10: Voting period (7 days to cast your vote)
  • Days 11–17: Execution delay
  • Days 18–24: Grace period to execute the proposal

Vote Requirements:

  • At least 100M tokens must participate in the vote. This is less than 10% of the tokens sold in the token sale.  
  • 66% of votes must be in favor for the vote to pass.
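As a quick illustration of these two requirements, here is a sketch of the pass/fail check as described above; it is not the governance contract's actual code.

```typescript
// Sketch of the two thresholds above (quorum and supermajority).
// This only illustrates the rules as described, not the onchain logic.
const QUORUM = 100_000_000;        // at least 100M tokens must vote
const APPROVAL_THRESHOLD = 0.66;   // 66% of votes must be in favor

function votePasses(tokensFor: number, tokensAgainst: number): boolean {
  const totalVoted = tokensFor + tokensAgainst;
  const quorumMet = totalVoted >= QUORUM;
  const approvalMet = totalVoted > 0 && tokensFor / totalVoted >= APPROVAL_THRESHOLD;
  return quorumMet && approvalMet;
}

console.log(votePasses(90_000_000, 20_000_000)); // true: 110M voted, ~82% in favor
console.log(votePasses(70_000_000, 20_000_000)); // false: only 90M voted, quorum missed
```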

Frequently Asked Questions

Do I need to participate in the vote? No. If you don't vote, your tokens will become available for trading when TGE goes live. 

Can I vote if I have less than 200,000 tokens? Yes! Anyone who participated in the token sale can participate in the TGE vote. You'll need to connect your wallet to the governance dashboard to vote. 

Is there a withdrawal period for my tokens after I vote? Yes. If you participate in the vote, you will need to withdraw your tokens afterwards. Voters can initiate a withdrawal immediately after voting, but must wait through the standard 15-day withdrawal period, which ensures the vote is executed before voters can exit.

If I have over 200,000 tokens is additional action required to make my tokens tradable after TGE? Yes. If you purchased over 200,000 $AZTEC tokens, you will need to stake your tokens before they become tradable. 

What if the vote fails? A new proposal can be submitted. Your tokens remain locked until a successful vote is completed, or the fallback date of November 13, 2026, whichever happens first.

I'm a Genesis sequencer. Does this apply to me? Genesis sequencer tokens cannot be unlocked early. You must wait until November 13, 2026, to withdraw. However, you can still influence the vote by signaling, earn block rewards, and benefit from trading being enabled.

Where to Learn More

This overview covers the essentials, but the full technical proposal includes contract addresses, code details, and step-by-step instructions for sequencers and advanced users. 

Read the complete proposal on the Aztec Forum and join us for the Privacy Rabbit Hole on Discord happening this Thursday, January 22, 2026, at 15:00 UTC. 

Follow Aztec on X to stay up to date on the latest developments.

Aztec Network
6 Dec

$AZTEC TGE: Next Steps For Holders

The TL;DR: 

The $AZTEC token sale was conducted entirely onchain to maximize transparency and fair distribution. Next steps for holders are as follows:

  1. Step 1: Create your Token Vault on the sale website. Your Token Vault will keep your tokens secure on Ethereum, keep them non-transferable until TGE, allow you to stake, delegate, and participate in governance, and let you withdraw them to your wallet after TGE.
  2. Step 2: Staking and Earning Block Rewards. If you have more than 200,000 tokens, you can start staking today on the staking dashboard.
  3. Step 3: Token sale participants can vote for TGE as early as February 11th, 2026, at which point 100% of tokens from the sale become transferable and a Uniswap V4 pool goes live.

The $AZTEC token sale has come to a close. The sale was conducted entirely onchain, and the power is now in your hands. Over 16.7k people participated, with 19,476 ETH raised. A huge thank you to our community and everyone who participated: you all really showed up for privacy. 50% of the capital committed came from the community of users, testnet operators, and creators!

Now that you have your tokens, what’s next? This guide walks you through the next steps leading up to TGE, showing you how to withdraw, stake, and vote with your tokens.

Step 1: Creating a Token Vault 

The $AZTEC sale was conducted onchain to ensure that you have control over your own tokens from day 1 (even before tokens become transferable at TGE). 

The team has no control over your tokens. You will be self-custodying them in a smart contract known as the Token Vault on the Ethereum mainnet ahead of TGE. 

Your Token Vault contract will: 

  • Keep your tokens secure on the Ethereum mainnet.
  • Ensure tokens remain non-transferable until TGE.
  • Allow you to stake, delegate, and take part in governance.
  • Let you withdraw your tokens to your wallet after TGE.

To create and withdraw your tokens to your Token Vault, simply go to the sale website and click on ‘Create Token Vault.’ Any unused ETH from your bids will be returned to your wallet in the process of creating your Token Vault. 

Step 2: Staking and Earning Block Rewards 

If you have 200,000+ tokens, you are eligible to start staking and earning block rewards today. 

You can stake by connecting your Token Vault to the staking dashboard and selecting a provider to delegate your stake to. Alternatively, you can run your own sequencer node.

If your Token Vault holds 200,000+ tokens, you must stake in order to withdraw your tokens after TGE. If your Token Vault holds less than 200,000 tokens, you can withdraw at TGE without any additional steps.

Fractional staking for anyone with less than 200,000 tokens is not currently supported, but multiple external projects are already working to offer this in the future. 

Step 3: TGE 

TGE is triggered by an onchain governance vote, which can happen as early as February 11th, 2026. 

At TGE, 100% of tokens from the token sale will be transferable. Only token sale participants and genesis sequencers can participate in the TGE vote, and only tokens purchased in the sale will become transferable. 

How does the voting process work? 

Community members discuss potential votes on the governance forum. If the community agrees, sequencers signal to start a vote with their block proposals. Once enough sequencers agree, the vote goes onchain for eligible token holders. 

Voting lasts 7 days, requires participation of at least 100,000,000 $AZTEC tokens, and passes if 2/3 vote yes.

What happens when the vote passes? 

Following a successful yes vote, anyone can execute the proposal after a 7-day execution delay, triggering TGE. 

At TGE, the following tokens will be 100% unlocked and available for trading: 

  • All tokens in Token Vaults that belong to token sale participants.
  • Accumulated block rewards for anyone staking.
  • The Uniswap V4 pool goes live, holding 273,000,000 $AZTEC tokens and a matching ETH amount at the final clearing price.

Join us Thursday, December 11th at 3 pm UTC for the next Discord Town Hall, an AMA-style session on next steps for token holders. Follow Aztec on X to stay up to date on the latest developments.

Aztec Network
13 Nov

The ticker is $AZTEC

We invented the math. We wrote the language. We proved the concept. And now we’re opening registration and bidding for the $AZTEC token today, starting at 3 pm CET.

The community-first distribution offers a starting floor price based on a $350 million fully diluted valuation (FDV), representing an approximate 75% discount to the implied network valuation (based on the latest valuation from Aztec Labs’ equity financings). The auction also features per-user participation caps to give community members genuine, bid-clearing opportunities to participate daily through the entirety of the auction. 

How to Check Eligibility and Submit Your Bid 

The token auction portal is live at: sale.aztec.network

  • This is the only valid link to the $AZTEC token auction site. Be cautious of phishing scams. No one from the Aztec team will ever contact you directly for your seed phrase or private keys.
  • Visit the site to verify your eligibility and mint a soul-bound NFT that confirms your participation rights. 
  • We have incorporated zero-knowledge proofs into the sale smart contracts by using ZKPassport's Noir circuits to ensure compliant sanctions checks without risking the privacy of our users. 
  • Registration and bidding for early contributors start today, November 13th, at 3 PM CET, with early contributors receiving one day of exclusive access before bidding opens to the general public.
  • The public auction will run from December 2nd, 2025, to December 6th, 2025, at which point tokens can be withdrawn and staked.

Why Are We Doing This? 

We’ve taken the community access that made the 2017 ICO era great and made it even better. 

For the past several months, we've worked closely with Uniswap Labs as core contributors on the CCA protocol, a set of smart contracts that challenge traditional token distribution mechanisms by prioritizing fair, permissionless, on-chain access for community members and the general public pre-launch. This means that on day 1 of the unlock, 100% of the community's $AZTEC tokens will be unlocked.

This model is values-aligned with our Core team and addresses the current challenges in token distribution, where retail participants often face unfair disadvantages against whales and institutions that hold large amounts of money. 

Early contributors and long-standing community members, including genesis sequencers, OG Aztec Connect users, and network operators, can start bidding today, ahead of the public auction, giving those who are whitelisted a head start and an early advantage on competitive pricing. Community members can participate by visiting the token sale site to verify eligibility and mint a soul-bound NFT that confirms participation rights.

To read more about Aztec’s fair-access token sale, visit the economic and technical whitepapers and the token regulatory report.

Discount Price Disclaimer: Any reference to a prior valuation or percentage discount is provided solely to inform potential purchasers of how the initial floor price for the token sale was calculated. Equity financing valuations were determined under specific circumstances that are not comparable to this offering. They do not represent, and should not be relied upon as, the current or future market value of the tokens, nor as an indication of potential returns. The price of tokens may fluctuate substantially, the token may lose its value in part or in full, and purchasers should make independent assessments without reliance on past valuations. No representation or warranty is made that any purchaser will achieve profits or recover the purchase price.

Information for Persons in the UK: This communication is directed only at persons outside the UK. Persons in the UK are not permitted to participate in the token sale and must not act upon this communication.

MiCA Disclaimer: Any crypto-asset marketing communications made from this account have not been reviewed or approved by any competent authority in any Member State of the European Union. Aztec Foundation as the offeror of the crypto-asset is solely responsible for the content of such crypto-asset marketing communications. The Aztec MiCA white paper has been published and is available here. The Aztec Foundation can be contacted at hello@aztec.foundation or +41 41 710 16 70. For more information about the Aztec Foundation, visit https://aztec.foundation.

Aztec Network
28 Oct

Your Favorite DeFi Apps, Now With Privacy

Every time you swap tokens on Uniswap, deposit into a yield vault, or vote in a DAO, you're broadcasting your moves to the world. Anyone can see what you own, where you trade, how much you invest, and when you move your money.

Tracking and analysis tools like Chainalysis and TRM are already extremely advanced, and will only grow stronger with advances in AI in the coming years. The implication is that ‘pseudonymous’ wallets on Ethereum are quickly being linked to real-world identities. This is concerning for personal privacy, and it’s also a major blocker to bringing institutions on-chain with full compliance for their users.

Until now, your only option was to abandon your favorite apps and move to specialized privacy-focused apps or chains with varying degrees of privacy. You'd lose access to the DeFi ecosystem as you know it now, the liquidity you depend on, and the community you're part of. 

What if you could keep using Uniswap, Aave, Yearn, and every other app you love, but with your identity staying private? No switching chains. Just an incognito mode for your existing on-chain life? 

If you’ve been following Aztec for a while, you would be right to think about Aztec Connect here, which was hugely popular with $17M TVL and over 100,000 active wallets, but was sunset in 2024 to focus on bringing a general-purpose privacy network to life. 

Read on to learn how you’ll be able to import privacy to any L2, using one of the many privacy-focused bridges that are already built. 

The Aztec Network  

Aztec is a fully decentralized, privacy-preserving L2 on Ethereum. You can think of Aztec as a private world computer with full end-to-end programmable privacy. A private world computer extends Ethereum to add optional privacy at every level, from identity and transactions to the smart contracts themselves. 

On Aztec, every wallet is a smart contract that gives users complete control over which aspects they want to make public or keep private. 

Aztec is currently in Testnet, but will have multiple privacy-preserving bridges live for its mainnet launch, unlocking a myriad of privacy-preserving features.

Bringing Privacy to You

Now, several bridges, including Wormhole, TRAIN, and Substance, are connecting Aztec to other chains, adding a privacy layer to the L2s you already use. Think of it as a secure tunnel between you and any DeFi app on Ethereum, Arbitrum, Base, Optimism, or other major chains.

Here's what changes: You can now use any DeFi protocol without revealing your identity. Furthermore, you can also unlock brand new features that take advantage of Aztec’s private smart contracts, like private DAO voting or private compliance checks. 

Here's what you can do:

  • Use DeFi without revealing your portfolio: trade on Uniswap or deposit into Yearn without broadcasting your strategy to the world
  • Donate to causes without being tracked: support projects on Base without linking donations to your identity
  • Vote in DAOs without others seeing your choices: participate in governance on Arbitrum while keeping your votes private
  • Prove you're legitimate without doxxing yourself: pass compliance checks or prove asset ownership without revealing which specific assets you hold
  • Access exclusive perks without revealing which NFTs you own: unlock token-gated content on Optimism without showing your entire collection

The apps stay where they are. Your liquidity stays where it is. Your community stays where it is. You just get a privacy upgrade.

How It Actually Works 

Let's follow Alice through a real example.

Alice wants to invest $1,000 USDC into a yield vault on Arbitrum without revealing her identity. 

Step 1: Alice Sends Funds Through Aztec

Alice moves her funds into Aztec's privacy layer. This could be done in one click directly in the app that she’s already using if the app has integrated one of the bridges. Think of this like dropping a sealed envelope into a secure mailbox. The funds enter a private space where transactions can't be tracked back to her wallet.

Step 2: The Funds Arrive at the DeFi Vault

Aztec routes Alice's funds to the Yearn vault on Arbitrum. The vault sees a deposit and issues yield-earning tokens. But there's no way to trace those tokens back to Alice's original wallet. Others can see someone made a deposit, but they have no idea who.

Step 3: Alice Gets Her Tokens Back Privately

The yield tokens arrive in Alice's private Aztec wallet. She can hold them, trade them privately, or eventually withdraw them, without anyone connecting the dots.

Step 4: Alice Earns Yield With Complete Privacy

Alice is earning yield on Arbitrum using the exact same vault as everyone else. But while other users broadcast their entire investment strategy, Alice's moves remain private. 

The difference looks like this:

Without privacy: "Wallet 0x742d...89ab deposited $5,000 into Yearn vault at 2:47 PM"

With Aztec privacy: "Someone deposited funds into Yearn vault" (but who? from where? how much? unknowable).
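For developers, the flow above could look roughly like the sketch below. The `PrivacyBridge` interface, its method names, and the `yearn-usdc-vault` identifier are hypothetical placeholders that mirror the four steps; they are not a real SDK.

```typescript
// Hypothetical interfaces only: these names do not correspond to a real SDK.
// They just mirror the four steps in Alice's walkthrough above.
interface PrivateNote {
  token: string;
  amount: bigint;
}

interface PrivacyBridge {
  // Step 1: move funds into Aztec's privacy layer.
  depositPrivately(token: string, amount: bigint): Promise<PrivateNote>;
  // Steps 2–3: route funds to a target app on another chain and
  // receive the resulting tokens back into the private balance.
  routeToApp(note: PrivateNote, target: { chain: string; app: string }): Promise<PrivateNote>;
  // Later: withdraw back out of the privacy layer if desired.
  withdraw(note: PrivateNote, recipient: string): Promise<void>;
}

async function investPrivately(bridge: PrivacyBridge) {
  // Step 1: Alice deposits 1,000 USDC into the privacy layer.
  const note = await bridge.depositPrivately("USDC", 1_000_000_000n); // 1,000 USDC (6 decimals)
  // Steps 2–3: funds are routed to the Yearn vault on Arbitrum; the
  // yield-bearing tokens come back as a private note.
  const vaultShares = await bridge.routeToApp(note, { chain: "arbitrum", app: "yearn-usdc-vault" });
  // Step 4: Alice now holds the vault shares privately and earns yield.
  return vaultShares;
}
```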

In the future, we expect apps to directly integrate Aztec, making this experience seamless for you as a user. 

The Developers Behind the Bridges 

While Aztec is still in Testnet, multiple teams are already building bridges right now in preparation for the mainnet launch.

Projects like Substance Labs, Train, and Wormhole are creating connections between Aztec and major chains like Optimism, Unichain, Solana, and Aptos. This means you'll soon have private access to DeFi across nearly every major ecosystem.

Aztec has also launched a dedicated cross-chain catalyst program to support developers with grants to build additional bridges and apps. 

Unifying Liquidity Across Ethereum L2s

L2s have sometimes received criticism for fragmenting liquidity across chains. Aztec is taking a different approach: it brings privacy to the liquidity that already exists. Your funds stay on Arbitrum, Optimism, Base, wherever the deepest pools and best apps already live. Aztec doesn't compete for liquidity; it adds privacy to existing liquidity.

You can access Uniswap's billions in trading volume. You can tap into Aave's massive lending pools. You can deposit into Yearn's established vaults, all without moving liquidity away from where it's most useful.

The Future of Private DeFi

We’re rolling out a new approach to how we think about L2s on Ethereum. Rather than forcing users to choose between privacy and access to the best DeFi applications, we’re making privacy a feature you can add to any protocol you're already using. As more bridges go live and applications integrate Aztec directly, using DeFi privately will become as simple as clicking a button—no technical knowledge required, no compromise on the apps and liquidity you depend on.

While Aztec is currently in testnet, the infrastructure is rapidly taking shape. With multiple bridge providers building connections to major chains and a dedicated catalyst program supporting developers, the path to mainnet is clear. Soon, you'll be able to protect your privacy while still participating fully in the Ethereum ecosystem. 

If you’re a developer and want a full technical breakdown, check out this post. To stay up to date with the latest updates for network operators, join the Aztec Discord and follow Aztec on X.

Aztec Network
1 May

A New Era for Web3: Introducing Aztec Public Testnet

When Aztec first got started, the world of zero-knowledge proving systems and applications was in its infancy. There was no PLONK, no Noir, no programmable privacy, and it wasn’t clear that demand for onchain privacy was even strong enough to necessitate a new blockchain network.

After a decade of building, revolutionary breakthroughs in privacy technology have paved the way to, and now set the stage for, mainnet. These include: PLONK, a novel proving system for user-level privacy and programmability that yielded zk.money and Aztec Connect, a pivotal moment for privacy and encryption solutions; Noir, an intuitive zero-knowledge, Rust-like programming language; and a client-side library for a private execution environment (PXE). These tools allow developers to explore privacy-preserving applications across any use case where protecting sensitive data is a critical function.

In 2023 and 2024 Aztec was named by Electric Capital as one of the fastest-growing developer ecosystems. The next generation of applications on Ethereum are already being built using parts of the Aztec stack, like Noir. Projects such as zkPassport and zkEmail are unlocking key identity use cases, while other applications like Anoncast (built in one weekend) have caught the attention of heavyweights like Vitalik Buterin and Laura Shin.

Earlier this month, we announced the successful testing of the first decentralized upgrade process for an L2, with over 100 sequencers participating. Now, with the mission to bring programmable privacy to the masses, the Aztec Public Testnet is here and, for the first time ever, open to developers to build fully private applications on Ethereum. 

Click here to see the full product roadmap.

True privacy means full decentralization 

The Aztec Network will launch fully decentralized from day one. 

Not because it’s a flex, but because true privacy can only be achieved when there is no central entity that has potential backdoor access.

Imagine logging into your hot wallet using web2 auth with Google or iCloud, or proving you’re a U.S. citizen onchain without revealing your passport information. For this, you need onchain privacy, and true privacy needs full decentralization so the user can maintain control over their data. 

This is the vision for the Aztec Network. 

Like Zac, our CEO and Co-founder, said in his talk Privacy: The Missing Link, “there are three fundamental attributes required to bridge the gap and bring the world onchain: interfacing with web2 systems, linking accounts to identities, and establishing digital sovereignty.”

Launching a decentralized network is a complex task filled with lots of intricacies and nuances to navigate. The Aztec Public Testnet plays a crucial role in stress-testing the network, identifying early issues, and ensuring its participants work as intended – ultimately leading to a more robust mainnet. 

How do I participate in the Network?

There are two ways you can participate in the network: as a developer who wants to build and deploy applications (with end-to-end privacy) or as a node operator powering the network.

Developers 

Aztec enables developers to build with both private and public state. 

Smart contracts on Aztec blend private functions that execute on the client side with public functions that are executed by sequencers on the Aztec Network. This allows you to customize your contract with both public and private components while deploying them to a fully decentralized network. 

The fastest way to get started with the Aztec Public Testnet is to deploy a smart contract using the Playground. If you’re a developer, visit our dev landing page to connect to Testnet and deploy on the Aztec Network.

Node operators  

The Aztec Network is run by a decentralized sequencer and prover network. 

Sequencers propose and produce blocks using consumer hardware and are responsible for proposing and voting on network upgrades. Provers participate in a decentralized prover network and are selected to prove the rollup integrity.

No airdrops. No marketing gimmicks. We just want to create a community of highly skilled operators who share the vision of a fully decentralized privacy-preserving network. Anyone can boot up a sequencer node and access the testnet faucet. See the sequencer quickstart to get started. Apply to get a special Discord role and peer support from experienced node operators leading the Aztec Network. 

Start building   

To see existing applications and get inspo for what you want to build on the Aztec Public Testnet, check out our Ecosystem page. If you’ve already built an app and would like to be featured, submit your app here.

Next, head to the Playground to try out the Aztec Public Testnet, where you can deploy and interact with privacy-preserving smart contracts. Tools and infrastructure to start building wallets, bridges, and explorers are already available.

If you’re a developer, click ➡️ here to get started and deploy your smart contract in literal minutes. 

If you’re a node operator, click ➡️ here to set up and run a node. 

Stay up-to-date on Noir and Aztec by following Noir and Aztec on X.

Aztec Network
1 May

What is the Aztec Public Testnet?

Aztec will be a fully decentralized, permissionless and privacy-preserving L2 on Ethereum. The purpose of Aztec’s Public Testnet is to test all the decentralization mechanisms needed to launch a strong and decentralized mainnet. In this post, we’ll explore what full decentralization means, how the Aztec Foundation is testing each aspect in the Public Testnet, and the challenges and limitations of testing a decentralized network in a testnet environment.

The three aspects of decentralization

Three requirements must be met to achieve decentralization for any zero-knowledge L2 network: 

  1. Decentralized sequencing: the process of using a network of nodes to sequence transactions, rather than relying on a centralized authority;
  2. Decentralized proving: generating zero-knowledge proofs (ZKPs) across a distributed network of computers; and
  3. Decentralized governance: a system where decision-making authority is distributed across a network of participants.

Decentralization across sequencing, proving, and governance is essential to ensure that no single party can control or censor the network. Decentralized sequencing guarantees open participation in block production, while decentralized proving ensures that block validation remains trustless and resilient, and finally, decentralized governance empowers the community to guide network evolution without centralized control. 

Together, these pillars secure the rollup’s autonomy and long-term trustworthiness. Let’s explore how Aztec’s Public Testnet is testing the implementation of each of these aspects. 

Decentralized sequencing

Aztec will launch with a fully decentralized sequencer network. 

This means that anyone can run a sequencer node and start sequencing transactions, proposing blocks to L1 and validating blocks built by other sequencers. The sequencer network is a proof-of-stake (PoS) network like Ethereum, but differs in an important way. Rather than broadcasting blocks to every sequencer, Aztec blocks are validated by a randomly chosen set of 48 sequencers. In order for a block to be added to the L2 chain, two-thirds of the sequencers need to verify the block. This offers users fast preconfirmations, meaning the Aztec Network can sequence transactions faster while utilizing Ethereum for final settlement security. 
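As a minimal sketch of the committee rule just described (an illustration of the two-thirds threshold only, not the consensus client's actual logic):

```typescript
// Sketch of the committee rule described above: a block needs attestations
// from at least two-thirds of the 48-member committee. Illustration only.
const COMMITTEE_SIZE = 48;

function hasEnoughAttestations(attestations: number): boolean {
  const required = Math.ceil((2 / 3) * COMMITTEE_SIZE); // 32 of 48
  return attestations >= required;
}

console.log(hasEnoughAttestations(32)); // true
console.log(hasEnoughAttestations(31)); // false
```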

PoS is fundamentally an anti-sybil mechanism—it works by giving economic weight to participation and slashing malicious actors. At the time of Aztec’s mainnet, this will allow sequencers to vote out bad actors and burn their staked assets. On the Public Testnet, where there are no real economic incentives, PoS doesn't function properly. To address this, we introduced a queue system that limits how quickly new sequencers can join, helping to maintain network health and giving the network time to react to potential malicious behavior.

Behind the scenes, a contract handles sequencer onboarding—it mints staking assets, adds sequencers to the set, and can remove them if necessary. This contract is just for Public Testnet and will be removed on Mainnet, allowing us to simulate and test the decentralized sequencing mechanisms safely.

Decentralized proving 

Aztec will also launch with a fully decentralized prover network. 

Provers generate cryptographic proofs that verify the correctness of public transactions, culminating in a single rollup proof submitted to Ethereum. Decentralized proving reduces centralization risk and liveness failures, but also opens up a marketplace to incentivize fast and efficient proof generation. The proving client developed by Aztec Labs involves three components: 

  1. Prover nodes identify unproven epochs (set of 32 blocks) and create individual proving jobs;

  2. Proving brokers add these proving job requests to a queue and allocate them to idle proving agents; and

  3. Proving agents compute the actual proofs. 

Once the final proof has been computed, the proving node sends the proof to L1 for verification. The Aztec Network splits proving rewards amongst everyone who submits a proof on time, reducing centralization risk where one entity with large compute dominates the network. 
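A minimal sketch of that job flow, using hypothetical types rather than the actual proving client, might look like this:

```typescript
// Hypothetical types sketching the three roles above; this is not the
// actual proving client, just an illustration of the job flow.
interface ProvingJob {
  epoch: number;                 // epochs are sets of 32 blocks
  blockRange: [number, number];
}

class ProvingBroker {
  private queue: ProvingJob[] = [];

  // Prover nodes enqueue jobs for unproven epochs.
  submit(job: ProvingJob): void {
    this.queue.push(job);
  }

  // Idle proving agents pull the next job from the queue.
  nextJob(): ProvingJob | undefined {
    return this.queue.shift();
  }
}

const broker = new ProvingBroker();
broker.submit({ epoch: 42, blockRange: [1344, 1375] }); // a prover node found an unproven epoch
const job = broker.nextJob();                            // an idle proving agent picks it up
```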

For Aztec’s Public Testnet, anyone can spin up a prover node and start generating proofs. Running a prover node is more hardware-intensive than running a sequencer node, requiring ~40 machines with an estimated 16 cores and 128GB RAM each. Because running provers can be cost-intensive and incurs the same costs on a testnet as it will on mainnet, Aztec’s Public Testnet will throttle throughput to 0.2 transactions per second (TPS).

Keeping transaction volumes low allows us to test a fully decentralized prover network without overwhelming participating provers with high costs before real network incentives are in place. 

Decentralized governance

Finally, Aztec will launch with fully decentralized governance. 

In order for network upgrades to occur, anyone can put forward a proposal for sequencers to consider. If a majority of sequencers signal their support, the proposal gets sent to a vote. Once it passes the vote, anyone can execute the script that will implement the upgrade. Note: For this testnet, the second phase of voting will be skipped. 

Decentralized governance is an important step in enabling anyone to participate in shaping the future of the network. The goal of the public testnet is to ensure the mechanisms are functioning properly for sequencers to permissionlessly join and control the Aztec Network from day 1. 

Client-side proofs 

One additional aspect to consider with regard to full decentralization is the role of network users in decentralizing the compute load of the network.

Aztec Labs has developed groundbreaking technology to make end-to-end programmable privacy possible, first with the release of PLONK and later with refinements like MegaHonk, which make it feasible to generate client-side ZKPs. Client-side proofs keep sensitive data on the user’s device while still enabling users to interact with and store this information privately onchain. They also help to scale throughput by pushing execution to users. This decentralizes the compute requirements and means users can execute arbitrary logic in their private functions.

Sequencers and provers on the Aztec Network are never able to see any information that users or applications want to keep private, including accounts, activity, balances, function execution, or other data of any kind. 

Aztec’s Public Testnet is shipping with a full execution environment, including the ability to create client-side proofs in the browser. Here are some time estimations to expect for generating private, client-side proofs: 

  • Client-side proofs natively on laptop: ~2.5 seconds for a basic function call (i.e., transfers);
  • Client-side proofs in browser: ~25 seconds fixed cost for a basic function call, with incremental calls adding a few seconds; and
  • Client-side proofs natively on mobile: ~5 seconds.

Conclusion

Aztec’s Public Testnet is designed to rigorously test decentralization across sequencing, proving, and governance ahead of our mainnet launch. The network design ensures no single entity can control or censor activity, empowering anyone to participate in sequencing transactions, generating proofs, and proposing governance changes. 

Visit the Aztec Testnet page to start building with programmable privacy and join our community on Discord.

Aztec Network
22 Apr

History of Aztec: Pioneering Privacy in Web3

The Early Days of Aztec (2017)

When Aztec mainnet launches, it will be the first fully private and decentralized L2 on Ethereum. Getting here was a long road: when Aztec started eight years ago, the initial plan was to build an onchain financial service called CreditMint for issuing corporate debt to mid-market enterprises – obviously a distant use case from how we understand Aztec today. When co-founders Zac Williamson, Joe Andrews, Tom Pocock, and Arnaud Schenk got started, the world of zero-knowledge proving systems and applications wasn’t even in its infancy: there was no PLONK, no Noir, no programmable privacy, and it wasn’t clear that demand for onchain privacy was even strong enough to necessitate a new blockchain network. The founders’ initial explorations through CreditMint led to what we know as Aztec today.

While putting corporate debt onchain might seem unglamorous (or just limited compared with how we now understand Aztec’s capabilities), it was useful, wildly popular, and necessary for the founding team to realize that no serious institution wanted to touch the blockchain without the same privacy assurances that they were accustomed to in the corporate world. Traditional finance is built around trusted intermediaries and middlemen, which of course introduces friction and bottlenecks progress – but it offers more privacy assurances than what you see on public blockchains like Ethereum.

This takeaway led to a bigger understanding: the number of people (not just the number of institutions) who wanted to use the blockchain was limited by a lack of programmable privacy. Aztec was born out of the recognition that everyone – not only corporations – could use permissionless, onchain systems for private transactions, and this could become the default for all online payments. In the words of the CEO, Zac Williamson:

“If you had programmable digital money that had privacy guarantees around it, you could use that to create extremely fast permissionless payment channels for payments on the internet.” 

Equipped with this understanding, Zac and Joe began to specialize. Zac, whose background is in particle physics, went deep on cryptography research and began exploring protocols that could be used to enable onchain privacy. Meanwhile, Joe worked on how to get user adoption for privacy tech, while Arnaud focused on getting the initial CreditMint platform live and recruiting early members of the team. In 2018, Aztec published a proof-of-concept transaction demonstrating the creation and transfer of private assets on Ethereum  – using an early cryptographic protocol that predated modern proving schemes like PLONK. It was a limited example, with just DAI as the test-case (and it could only facilitate private assets, not private identities), but it garnered a lot of early interest from members of the Ethereum community. 

“The Product Needs Drive the Proving Scheme” (2018-2020)

The 2018 version of the Aztec Protocol had three key limitations: it wasn’t programmable, it only supported private data (rather than private data and user-level privacy), and it was expensive, from both a computation and gas perspective. The underlying proving scheme was, in the words of Zac, a “Frankenstein cryptography protocol using older primitives than zk-SNARKs.” These limitations motivated the development of PLONK in 2019, a SNARK-based proving system that is computationally inexpensive, and only requires one universal trusted setup. 

A single universal trusted setup is desirable because it allows developers to utilize a common reference string for all of the programs they might want to instantiate in a circuit; the alternative is a much more cumbersome process of conducting a trusted setup ceremony for each cryptographic circuit. In other words, PLONK enabled programmable privacy for future versions of Aztec. 

PLONK was a big breakthrough, not just for Aztec, but for the wider blockchain community. Today, PLONK has been implemented and extended by teams like zkSync, Polygon, Mina, and more. There is even an entire category of proving systems called PLONKish that all derive from the original 2019 paper. For Aztec specifically, PLONK was also instrumental in paving the way for zk.money and Aztec Connect, a private payment network and private DeFi rollup, which launched in 2021 and 2022 respectively.  

The product needs of Aztec motivated the development of a modern-day proving system. PLONK proofs are computationally cheap to generate, leading not only to lower transaction costs and programmability for developers, but big steps forward for privacy and decentralization. PLONK made it simpler to generate client-side proofs on inexpensive hardware. In the words of Joe, “PLONK [was] developed to keep the middleman away.”  

Making the Blockchain Real (2021-2023)

Between 2021 and 2023, the Aztec team operated zk.money and Aztec Connect. The products were not only vital in illustrating that there was a demand for onchain privacy solutions, but in demonstrating that it was possible to build performant and private networks leveraging PLONK. Joe remarked that they “wanted to test that we could build a viable payments network, where the user experience was on par with a public transaction. Privacy needed to be in the background.” 

Aztec’s early products indicated that there was significant demand for private onchain payments and DeFi – at peak, the rollups had over $20 million in TVL. Both products fit into the vision Zac had to “make the blockchain real.” In his team’s eyes, blockchains are held back from mainstream adoption because you can’t bring consequential, real-world assets onchain without privacy. 

Despite the demand for these networks, the team made the decision to sunset both zk.money and Aztec Connect after recognizing that they could not fully decentralize the networks without massive architectural changes. Zac and Joe don’t believe in “Progressive Decentralization” – the network needs to have no centralized operators from day one. And it wasn’t just the sequencer of these early Aztec products that was centralized – the team also recognized that it would have been impossible for other developers to write programs on Aztec that could compose with each other, because all programs operated on shared state. In 2023, zk.money and Aztec Connect were officially shut down.

In tandem, the team also began developing Noir (an original brainchild of Kevaundray Wedderbaum). Noir is a Rust-like programming language for writing zero-knowledge circuits that makes privacy technology accessible to mainstream developers. While Noir began as a way to make it easier for developers to write private programs without needing to know cryptography, the team soon realized that the demand for privacy didn’t just apply to applications on the Aztec stack, and that Noir could be a general-purpose DSL for any kind of application that needs to leverage privacy. In the same way that bringing consequential assets and activity onchain “makes the blockchain real,” bringing zero-knowledge technology to any application – onchain or offchain – makes privacy real. The team continued working on Noir, and it has developed into its own product stack today. 

Aztec Today 

Aztec from 2017 to 2024 can be seen as a methodical journey toward building a fully private, programmable, and decentralized blockchain network. The earliest attempt at Aztec as a protocol introduced asset-level privacy, without addressing user-level privacy, or significant programmability. PLONK paved the way for user-level privacy and programmability, which yielded zk.money and Aztec Connect. Noir extended programmability even further, making it easy for developers to build applications in zero-knowledge. But zk.money and Aztec Connect were incomplete without a viable path to decentralization. So, the team decided to build a new network from scratch. Extending on their learnings from past networks, the foundations and findings from continuous R&D efforts of PLONK, and the growing developer community around Noir, they set the stage for Aztec mainnet. 

The fact of the matter is that creating a network that is fully private and decentralized is hard. To have privacy, all data must be shielded cheaply inside of a SNARK. If you want to really embrace the idea of “making the blockchain real” then you should also be able to leverage outside authentication and identity solutions, like Apple ID – and you need to be able to put those technologies inside of a SNARK as well. The number of statements that need to be represented as provable circuits is massive. Then, all of these capabilities need to run inside of a network that is decentralized. The combination of mathematical, technological, and networking problems makes this very difficult to achieve.

The technical architecture of Aztec reflects the learnings of the Aztec team. Zac describes Aztec mainnet as a “Russian nesting doll” of products that all add up to a private and decentralized network. Aztec today consists of:

  1. A decentralized Prover and Sequencer network that eliminates central points of control
  2. The Privacy Execution Environment (PXE) that enables client-side proving
  3. Significant innovations in proving systems, including faster, low-memory proving systems optimized for browser performance

At the network level, there will be many participants in the decentralization efforts of Aztec: provers, sequencers, and node operators. Joe views the infrastructure-level decentralization as a crucial first stage of Aztec’s mainnet launch.

As Aztec goes live, the vision extends beyond private transactions to enabling entirely new categories of applications. The team envisions use cases ranging from consumer lending based on private credit scores to games leveraging information asymmetry, to social applications that preserve user privacy. The next phase will focus on building a robust ecosystem of developers and the next generation of applications on Ethereum using  Noir, the universal language of privacy. 

Aztec mainnet marks the emergence of applications that weren't possible before – applications that combine the transparency and programmability of blockchain with the privacy necessary for real-world adoption. 

Community
25 Mar

Is ZK-MPC-FHE-TEE a real creature?

Many thanks to Remi Gai, Hannes Huitula, Giacomo Corrias, Avishay Yanai, Santiago Palladino, ais, ji xueqian, Brecht Devos, Maciej Kalka, Chris Bender, Alex, Lukas Helminger, Dominik Schmid, 0xCrayon, Zac Williamson for inputs, discussions, and reviews.

Contents

  1. Introduction: why we are here and why this article should exist
  2. Quick overview of each technology
    1. Client-side proving
    2. FHE
    3. MPC
    4. TEE
  3. Does it make sense to combine any of them and is it feasible?
    1. ZK-MPC
    2. MPC-FHE
    3. ZK-FHE
    4. ZK-MPC-FHE
    5. TEE-{everything}
  4. Conclusions: what to use and under what circumstances
    1. Comparison table
    2. What are the most reasonable approaches for on-chain privacy?

Prerequisites:

Introduction

Buzzwords are dangerous. They amuse and fascinate as cutting-edge, innovative, mesmerizing markers of new ideas and emerging mindsets. Even better if they are abbreviations: insider shorthand we can use to make ourselves look smarter and more progressive.

Using buzzwords can obfuscate the real scope and technical possibilities of a technology. Furthermore, buzzwords might act as gatekeepers, making simple things look complex or, on the contrary, making complex things look simple (according to the Dunning-Kruger effect).

In this article, we will briefly review several suggested privacy-related abbreviations, their strong points, and their constraints. And after that, we’ll think about whether someone will benefit from combining them or not. We’ll look at different configurations and combinations.

Disclaimer: It’s not fair to compare the technologies we’re discussing since it won’t be an apples-to-apples comparison. The goal is to briefly describe each of them, highlighting their strong and weak points. Understanding this, we will be able to make some suggestions about combining these technologies in a meaningful way. 

POV: a new dev enters the space.

Quick overview of each technology

Client-side ZKPs

Client-side ZKPs are a specific category of zero-knowledge proofs (started in 1989). Exploring general ZKPs in great depth is out of scope for this piece; if you're curious to learn about them, check this article.

Essentially, a zero-knowledge protocol allows one party (the prover) to prove to another party (the verifier) that some given statement is true, while avoiding conveying any information beyond the mere fact of that statement's truth.

Client-side ZKPs enable generating the proof on a user's device for the sake of privacy. A user performs some arbitrary computations and generates a proof that whatever they computed was computed correctly. Then, this proof can be verified and utilized by external parties.

One of the most widely known use cases of client-side ZKPs is a privacy-preserving L2 on Ethereum where, thanks to client-side data processing, some functions and values in a smart contract can be executed privately, while the rest are executed publicly. In this case, the client-side ZKP is generated by the user executing the transaction, then verified by the network sequencer.

However, client-side proof generation is not limited to Ethereum L2s, nor to blockchain at all. Whenever there are two or more parties who want to compute something privately and then verify each other’s computation and utilize their results for some public protocols, client-side ZKPs will be a good fit.
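As a conceptual sketch of that prove-locally, verify-elsewhere pattern (the interfaces below are hypothetical placeholders, not a specific library's API):

```typescript
// Conceptual sketch of the prove-locally / verify-elsewhere pattern described
// above. The Prover and Verifier interfaces are hypothetical placeholders.
interface Proof {
  bytes: Uint8Array;
  publicInputs: string[];
}

interface Prover {
  // Runs on the user's device: private inputs never leave it.
  prove(privateInputs: Record<string, bigint>, publicInputs: string[]): Promise<Proof>;
}

interface Verifier {
  // Runs anywhere else (a sequencer, a server, a contract): it sees only the
  // proof and the public inputs, never the private inputs.
  verify(proof: Proof): Promise<boolean>;
}

async function privateInteraction(prover: Prover, verifier: Verifier) {
  const proof = await prover.prove({ secretBalance: 1_000n }, ["commitmentHash"]);
  return verifier.verify(proof); // true if the client-side computation was correct
}
```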

Check this article for more details on how client-side ZKPs work.

The main concern today about on-chain privacy by means of client-side proof generation is the lack of a private shared state. Potentially, it can be mitigated with an MPC committee (which we will cover in later sections). 

Speaking of limitations of client-side proving, one should consider: 

  • The memory constraint: inherited from the WASM memory cap (4 GB); in the case of mobile proving, each device has its own memory cap as well.
  • The maximum circuit size (derived from WASM memory cap): currently 2^20 for Aztec’s client-side proof generation (i.e. to prove any Noir program with Barretenberg in WASM).

What can we do with client-side ZKPs today: 

  • According to HashCloak benchmarking, a client-side ZKP of an RSA signature in Noir is generated in 0.2s (using UltraHonk and a laptop with Intel(R) Core(TM) i7-13700H CPU and 32 GB of RAM).
  • According to Polygon Miden, a STARK ZKP for the Fibonacci calculator program for 2^20 cycles at 96-bit security level can be generated in 7 sec using Apple M1 Pro (16 threads). 
  • According to ZKPrize winners’ benchmarks, it takes 10 minutes to prove the target of 50 signatures over 100B to 1kB messages on a consumer device (Macbook pro with 32GB of memory).

Whom to follow for client-side ZKPs updates: Aztec Labs, Miden, Aleo

MPC (Multiparty computation) 

Disclaimer: in this section, we discuss general-purpose MPC (i.e. allowing computations on arbitrary functions). There are also a bunch of specialized MPC protocols optimized for various use cases (i.e. designing customized functions) but those are out-of-scope for this article.

MPC enables a set of parties to interact and compute a joint function of their private inputs while revealing nothing but the output: f(input_1, input_2, …, input_n) → output.

For example, parties can be servers that hold a distributed database system and the function can be the database update. Or parties can be several people jointly managing a private key from an Ethereum account and the function can be a transaction signing mechanism. 
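As a toy illustration of the f(input_1, input_2, …, input_n) → output idea, here is additive secret sharing for a joint sum. This is a sketch for intuition only; real MPC protocols use much larger fields and add protections against the malicious behaviors discussed below.

```typescript
// Toy additive secret sharing over a small prime field: three parties learn
// the sum of their inputs without any single party seeing another's input.
// Illustration only; real MPC adds authentication and malicious-security checks.
const P = 2_147_483_647n; // a prime modulus (2^31 - 1)

const mod = (x: bigint) => ((x % P) + P) % P;

// Split a secret into n random shares that sum to the secret mod P.
function share(secret: bigint, n: number): bigint[] {
  const shares: bigint[] = [];
  let acc = 0n;
  for (let i = 0; i < n - 1; i++) {
    const r = BigInt(Math.floor(Math.random() * 1_000_000));
    shares.push(r);
    acc = mod(acc + r);
  }
  shares.push(mod(secret - acc));
  return shares;
}

// Each party sums the shares it received; combining those partial sums
// reconstructs f(x1, x2, x3) = x1 + x2 + x3 and nothing else.
const inputs = [40_000n, 55_000n, 62_000n];        // e.g. three private salaries
const allShares = inputs.map((x) => share(x, 3));  // party i sends its j-th share to party j
const partialSums = [0, 1, 2].map((j) =>
  mod(allShares.reduce((acc, s) => mod(acc + s[j]), 0n)),
);
const total = mod(partialSums.reduce((a, b) => mod(a + b), 0n));
console.log(total); // 157000n
```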

One issue of concern with MPCs is that one or more parties participating in the protocol can be malicious. They can try to:

  • Learn private inputs of other parties;
  • Cause the result of computations to be incorrect.

Hence in the context of MPC security, one wants to ensure that:

  • All private inputs stay private (i.e. each party knows its input and nothing else);
  • The output was computed correctly and each party received its correct output.

To think about MPC security in an exhaustive way, we should consider three perspectives:

  1. How many parties are assumed to be honest?
  2. The specific methods of corrupting parties.
  3. What can corrupted parties do?

How many parties are assumed to be honest?

Rather than requiring all parties in the computation to remain honest, MPC tolerates different levels of corruption depending on the underlying assumptions. Some models remain secure if less than 1/3 of parties are corrupt, some if less than 1/2 are corrupt, and some have security guarantees even in the case that more than half of the parties are corrupt. For details, formal definition, and proof of MPC protocol security, check this paper.

The specific methods of corrupting parties

There are three main corruption strategies:

  1. Static – parties are corrupted before the protocol starts and remain corrupted to the end. 
  2. Adaptive – parties can be corrupted at different stages of protocol execution and after execution remain corrupted to the end. 
  3. Proactive – parties can switch between malicious and honest behavior during the protocol execution an arbitrary number of times, etc. 

Each of these corruption strategies implies a different security model.

What can corrupted parties do?

Two definitions of malicious behavior are: 

  1. Semi-honest (also referred to as honest but curious, or passive adversary) – following the protocol as prescribed but trying to extract some additional information.
  2. Malicious – deviating from the protocol.

When it comes to the definition of privacy, MPC guarantees that the computation process itself doesn’t reveal any information. However, it doesn’t guarantee that the output won’t reveal any information. For an extreme example, consider two people computing the average of their salaries. While it’s true that nothing but the average will be output, when each participant knows their own salary amount and the average of both salaries, they can derive the exact salary of the other person.

That is to say, while the core “value proposition” of MPC seems very attractive for a wide range of real-world use cases, a whole bunch of nuances must be taken into account before it can actually provide a high enough security level. (It's important to clarify the problem statement and decide whether MPC is the right tool for the particular task.)

What can be done with MPC protocols today:

When we think about MPC performance, we should consider the following parameters: the number of participating parties, the witness size of each party, and the complexity of the function being computed. 

  • According to the “Efficient Arithmetic in Garbled Circuits” paper, for general-purpose MPC the communication cost is at most O(n · ℓ · λ) bits for a circuit of n gates, with each multiplication gate using O(ℓ · λ) bits, where ℓ is the bit length of values and λ is a computational security parameter. A value can be converted between arithmetic and Boolean representations at a cost of O(ℓ · λ) bits (e.g. to perform a comparison operation). A rough back-of-the-envelope estimate of what these asymptotics mean in absolute terms follows this list.

Source

  • As an illustration, here is an example of a specialized MPC protocol:
    According to dWallet Labs, their implementation of the 2PC-MPC protocol (a 2-party ECDSA protocol) completes the signing phase in 1.23 and 12.703 seconds for 256 and 1024 parties (emulating the second party in the 2PC), respectively, with the team claiming the number of parties can be scaled further.
  • Worldcoin, jointly with TACEO, made a number of optimizations to an existing secure multi-party computation (SMPC) protocol, which enabled them to apply SMPC to the problem of iris code uniqueness. Early benchmarks show around 10 iris uniqueness checks per second against a database of ~6M iris codes.
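To get a feel for what the O(ℓ · λ) bits-per-multiplication figure from the first bullet means in absolute terms, here is a rough back-of-the-envelope estimate. It deliberately ignores the hidden constants in the asymptotics, and all concrete numbers (value width, security parameter, gate count) are our own illustrative choices, not figures from the paper.

```python
# Rough order-of-magnitude estimate of garbled-circuit communication,
# ignoring the hidden constants in the O(ell * lam) bound.
ell = 64                    # bit length of values (illustrative)
lam = 128                   # computational security parameter (illustrative)
n_mult_gates = 1_000_000    # multiplication gates in the circuit (illustrative)

bits_per_gate = ell * lam   # ~ O(ell * lam) bits per multiplication gate
total_gb = n_mult_gates * bits_per_gate / 8 / 1e9
print(f"~{bits_per_gate} bits per gate, ~{total_gb:.1f} GB for the whole circuit")
# ~8192 bits per gate, ~1.0 GB for the whole circuit
```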

When it comes to using MPC in a blockchain context, it’s important to consider message complexity, computational complexity, and properties such as public verifiability and abort identifiability (i.e. if a malicious party causes the protocol to halt prematurely, they can be detected). For message distribution, the protocol relies either on P2P channels between each pair of parties (which requires large bandwidth) or on broadcasting. Another concern arises around the permissionless nature of blockchains, since MPC protocols often operate over permissioned sets of nodes.

Taking all of that into account, it’s clear that MPC is a very nuanced technology on its own. And it becomes even more nuanced when combined with other technologies. Adding MPC to a specific blockchain protocol often requires designing a custom MPC protocol that fits it. And that design process often requires a room full of MPC PhDs who can not only design the protocol but also prove its security.

Whom to follow for MPC updates: dWallet Labs, TACEO, Fireblocks, Cursive, PSE, Fairblock, Soda Labs, Silence Laboratories, Nillion

TEE

TEE stands for Trusted Execution Environment. A TEE is an area on the main processor of a device that is separated from the system's main operating system (OS). It ensures data is stored, processed, and protected in an isolated environment. One of the most widely known TEE implementations (and the one most often mentioned in blockchain discussions) is Intel’s Software Guard Extensions (SGX). 

SGX can be considered a type of private execution. For example, if a smart contract is run inside SGX, it’s executed privately. 

SGX creates a non-addressable memory region of code and data (separated from RAM), and encrypts both at a hardware level. 

How SGX works:

  • There are two areas in the hardware, trusted and untrusted. 
  • The application creates an enclave in the trusted area and makes a call to the trusted function. (The function is a piece of code developed for working inside the enclave.) Only trusted functions are allowed to run in the enclave. All other attempts to access the enclave memory from outside the enclave are denied by the processor.
  • While the function runs, the application executes in the trusted space and sees the enclave code and data as clear text.
  • When the trusted function returns, the enclave data remains in the trusted memory area.

It’s worth noting that there is a key pair: a secret key and a public key. The secret key is generated inside of the enclave and never leaves it. The public key is available to anyone: Users can encrypt a message using a public key so only the enclave can decrypt it.

An SGX feature often utilized in the blockchain context is attestations. Attestation is the process of demonstrating that a software executable has been properly instantiated on a platform. Remote Attestation allows a remote party to be confident that the intended software is securely running within an enclave on a fully patched, Intel SGX-enabled platform.
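The overall shape of a remote-attestation check can be sketched in a few lines. The sketch below is a deliberately simplified stand-in: the “measurement” is just a hash of the enclave code, and the attestation signature is produced with an ordinary Ed25519 key instead of Intel’s attestation infrastructure and certificate chain, so every name and step here is illustrative rather than the real SGX flow.

```python
from hashlib import sha256
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- enclave / attestation side (heavily simplified) ---
enclave_code = b"...the exact binary the remote party expects to be running..."
measurement = sha256(enclave_code).digest()     # MRENCLAVE-style hash of the code

attestation_key = Ed25519PrivateKey.generate()  # stand-in for the real attestation key
quote = measurement                             # a real quote carries much more data
quote_signature = attestation_key.sign(quote)

# --- remote verifier ---
# In reality the verifier obtains the attestation public key and certificates
# via Intel's attestation service; here we reuse the key object for brevity.
expected_measurement = sha256(enclave_code).digest()
try:
    attestation_key.public_key().verify(quote_signature, quote)
    assert quote == expected_measurement
    print("attestation OK: the expected code appears to be running in an enclave")
except InvalidSignature:
    print("attestation failed: quote was not signed by the attestation key")
```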

Core SGX concerns:

  • SGX is subject to side-channel attacks. Observing a program’s indirect effects on the system during execution might leak information if a program’s runtime behavior is correlated with the secret input content that it operates on. Different attack vectors include page access patterns, timing behavior, power usage, etc.
  • Using SGX requires trusting Intel. Users must assume that everything is fine since the hardware is delivered with the private key already inside the trusted enclave. 
  • As a large enterprise, Intel is pretty slow in terms of patching new attacks. Check sgx.fail to find a list of publicly known SGX attacks that are yet to be fixed by Intel.
  • Application developers who use SGX are dependent on specific hardware produced by Intel. The company might eventually decide to deprecate or significantly change all or specific versions in ways that might make some or all applications incompatible. Or even break them. For example in 2021, SGX was deprecated on consumer CPUs. 
  • It might be hard to detect cheating fast enough if it takes place in a private domain (like with SGX). 
  • In the case of a network relying purely on TEE for privacy (i.e. a number of nodes run inside TEE and each node has complete information), exploiting one node in the network is enough to exploit the whole network (i.e. leak secrets).

Speaking of SGX costs, proof generation can be considered effectively free. However, if one wants to use remote attestations, there is an initial one-time cost (once per SGX prover) on the order of 1M gas (to verify that the code in SGX is running in the expected way).

Onchain verification costs the same as verifying an ECDSA signature (~5k gas, whereas verifying a ZK proof costs ~300k gas). 

When it comes to execution time, there is effectively no overhead; proving a zk-rollup block inside SGX takes around 100 ms, for example.

Where SGX is utilized in blockchain today:

  • Taiko is running an execution client inside SGX (utilizing TEE for integrity). 
  • Secret Network’s validators run their code inside a TEE (utilizing TEE for privacy).
  • Flashbots is running the SUAVE testnet on SGX.

Whom to follow for TEE updates: Secret Network, Flashbots, Andrew Miller, Oasis, Phala, Marlin, Automata, TEN.

FHE (Fully Homomorphic Encryption)

FHE enables encrypted data processing (i.e. computation on encrypted data). 

The idea of FHE was proposed in 1978 by Rivest, Adleman, and Dertouzos. “Fully” means that both addition and multiplication can be performed on encrypted data. Let m be some plain text and E(m) be an encrypted text (ciphertext). Then additive homomorphism is E(m_1 + m_2) = E(m_1) + E(m_2) and multiplicative homomorphism is E(m_1 * m_2) = E(m_1) * E(m_2). 

Additively homomorphic encryption had been in use for a while, but multiplicative homomorphism remained an open problem. In 2009, Craig Gentry came up with the idea of using ideal lattices to tackle this problem. That made it possible to do both addition and multiplication, although it also introduced the problem of growing noise. 
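For intuition about what “homomorphic” means, here is a toy demonstration of additive homomorphism using the Paillier cryptosystem with tiny, insecure parameters (our own example; this is the pre-2009, additive-only setting, not FHE, and the key sizes are chosen for readability, not security):

```python
import math, secrets

# Toy Paillier keypair (insecure: the primes are far too small)
p, q = 1789, 1867
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n - 1) + 1          # random r coprime with n
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42              # E(20) * E(22) decrypts to 20 + 22
```

Multiplying two Paillier ciphertexts adds the underlying plaintexts; what Gentry’s construction added was the ability to also multiply plaintexts under encryption, at the cost of the noise problem described next.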

How FHE works:

Plaintext is encoded into a ciphertext. The ciphertext consists of the encrypted data plus some noise. 

That means that when computations are done on the ciphertext, they are done not purely on the data but on the data together with the added noise. With each operation performed, the noise increases. After several operations, it starts overflowing into the bits of the actual data, which can lead to incorrect results.

A number of tricks were proposed later on to handle the noise and make FHE work more reliably. One of the most well-known is bootstrapping, a special operation that resets the noise to its nominal level. However, bootstrapping is slow and costly (both in terms of memory consumption and computational cost). 
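The noise problem can be simulated without any real cryptography. In the toy model below (ours, purely illustrative), a “ciphertext” is just the message shifted into the high bits plus a random noise term in the low bits; every homomorphic addition also adds the noise terms, and once the accumulated noise crosses the scaling factor it corrupts the decrypted result. Real schemes (BGV, BFV, CKKS, TFHE) are far more sophisticated, but the failure mode being managed is the same.

```python
import random

SCALE = 2**20                     # the message sits above this "noise budget"

def toy_encrypt(m: int) -> int:
    noise = random.randint(1, 2**10)
    return m * SCALE + noise      # high bits: message, low bits: noise

def toy_decrypt(c: int) -> int:
    return round(c / SCALE)       # correct only while accumulated noise < SCALE / 2

def homomorphic_add(c1: int, c2: int) -> int:
    return c1 + c2                # the messages add, but so do the noise terms

# A handful of additions decrypts correctly...
c = toy_encrypt(1)
for _ in range(9):
    c = homomorphic_add(c, toy_encrypt(1))
print(toy_decrypt(c))             # 10, still correct

# ...but after enough operations the noise overflows into the message bits.
c = toy_encrypt(0)
for _ in range(2**12):
    c = homomorphic_add(c, toy_encrypt(0))
print(toy_decrypt(c))             # no longer 0: the noise has swamped the message
```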

Researchers rolled out even more workarounds to make bootstrapping efficient and took FHE several more steps forward. Further details are out-of-scope for this article, but if you’re interested in FHE history, check out this talk by mathematician Zvika Brakerski. 

Core FHE concerns:

  • If the user (who encrypts the information) outsources computations to an external party, they have to trust that the computations were done correctly.
    To handle the trust issue, (i) ZK can theoretically be used (though it’s not practically feasible today), or (ii) economic consensus can be used. However, since FHE requires custom hardware (the computations are very heavy), the number of participants in an FHE consensus network will always be limited, which is a problem for security. 
  • In the case of an FHE blockchain, there is one key for the whole network. Who holds the decryption key? The same applies to dApps: for example, if an FHE computation modifies a liquidity pool’s total supply, that total supply must be decrypted at some point. But who possesses the key? (If you’re curious about FHE key attacks, check out this paper by Li and Micciancio.)
  • If an external party provides encrypted input, how can the party performing computations be sure that the external party knows the input and that the input was encrypted correctly? (This can be mitigated with zero-knowledge proof of knowledge, which will be discussed in the ZK-FHE section).
  • While using FHE, one should ensure that the decrypted output doesn’t contain any private information that should not be revealed. Otherwise, formally, it breaks privacy.
    One should note that there are two different types of decryption: (i) decryption that reveals a value to the entire network (e.g. revealing cards at the end of a game), and (ii) reencryption (i.e. decryption followed by encryption to the user’s key) as a view function (e.g. viewing your own cards). 
  • FHE is “heavy.” When considering FHE cost (both computation volume and memory required), the relevant parameters are (i) per-operation computation cost, (ii) communication cost, and (iii) evaluation key size (a separate public key used to control noise growth or ciphertext expansion during homomorphic evaluation).
    One can think of FHE hardware as analogous to Bitcoin mining hardware (highly performant ASICs).


Compared to computation on plaintext, the best per-operation overhead available today is polylogarithmic [GHS12b]: if n is the input size, the overhead is O(log^k(n)) for some constant k. Communication overhead is reasonable if a number of ciphertexts are batched and unbatched together, but not otherwise. 

Evaluation keys are huge (larger than the ciphertexts, which are already large themselves): around 160,000,000 bits, i.e. roughly 20 MB. Furthermore, one constantly needs to compute on these keys: whenever homomorphic evaluation is done, the evaluation key has to be fetched into the CPU (a regular data bus in a regular processor is unable to bring it) and computed on. 


If you want to do something beyond addition and multiplication—a branch operation, for example—you have to break down this operation into a sequence of additions and multiplications. That’s pretty expensive. Imagine you have an encrypted database and an encrypted data chunk, and you want to insert this chunk into a specific position in the database. If you’re representing this operation as a circuit, the circuit will be as large as the whole database.


In the future, FHE performance is expected to be optimized both on the FHE side (new tricks discovered) and on the hardware side (acceleration and ASIC design). This promises to allow for more complex smart contract logic as well as more computation-intensive use cases such as AI/ML. A number of companies are working on designing and building FHE-specific FPGAs (e.g. Belfort).

“Misuse of FHE can lead to security faults.”

Source

What can be done with FHE today: 

  • According to Ingonyama: with an LLM like GPT-2, the processing time for a single token is approximately 14.5 hours.
    A token is a unit of text; one English word ≈ 1.3 tokens, for example. Each text request to GPT-2 consists of a number of tokens, so the processing time of one token determines the processing time of the whole request.
    With parallel processing across 10,000 machines, the time drops to 5 seconds/token. With a custom ASIC, it could be decreased to 0.1 seconds/token, but this would require huge upfront investments in data centers and ASIC design.
  • According to Zvika Brakerski: When asked the question “Can we build production-level systems where FHE brings value?” he responds, “I don’t know the answer yet.”
  • According to Zama: A toy-implementation of Shazam (a music recognition app) with Zama FHE library takes 300 milliseconds to recognize a single song out of 1,000. But how will that change as the database grows? (The real Shazam library has 45M songs.)
  • According to Inco, FHE is usable today for simple blockchain use cases (i.e. smart contracts with simple logics). For example, in a confidential ERC-20 transfer that’s FHE-based, you are performing an FHE addition, subtraction, comparison, and conditional multiplexer (cmux/select) to update the balances of the sender and recipient. With CPU, Inco can do 10 TPS, and with GPU – 20-30 TPS. 

Note: In all of these examples, we are talking about plain FHE, without any MPC or ZK superstructures handling the core FHE issues.

Whom to follow for FHE updates: Zama, Sunscreen, Zvika Brakerski, Inco, FHE Onchain.

Does it make sense to combine any of these, and is doing so feasible?

As we can see from the technology overview, these technologies are not exactly interchangeable. That said, they can complement each other. Now let’s think. Which ones should be combined, and for what reason?

Disclaimer: Each of the technologies we are talking about is pretty complex on its own. The combinations of them we discuss below are, to a large extent, theoretical and hypothetical. However, there are a number of teams working on combining them at the time of writing (both research and implementation). 

ZK-MPC

In this section, we mostly describe two papers as examples and don’t claim to be exhaustive. 

One of the possible applications of ZK-MPC is a collaborative zk-snark. This would allow users to jointly generate a proof over the witnesses of multiple, mutually distrusting parties. The proof generation algorithm is run as an MPC among N provers where function f is the circuit representation of a zk-SNARK proof generator. 

Source

Collaborative zk-SNARKs also offer an efficient construction for a cryptographic primitive called a publicly auditable MPC (PA-MPC). This is an MPC that also produces a proof the public can use to verify that the computation was performed correctly with respect to commitments to the inputs.

ZK-MPC introduces the notion of MPC-friendly zk-SNARKs. That is to say, not just any MPC protocol or any zk-SNARK can feasibly be combined into ZK-MPC. This is because MPC protocols and zk-SNARK provers are each thousands of times slower than their underlying functionality, and their combination is likely to be millions of times slower.

For those familiar with elliptic curve cryptography, let’s think for a moment about why ZK-MPC is tricky:

Doing it naively, you could decompose an elliptic curve operation into operations over the curve’s base field; then there is an obvious way to perform them in an MPC. But curve additions require tens of field operations, and scalar products require thousands. 

The core tricks suggested for use include: 

  • MPC techniques applied directly to elliptic curves to make curve operations cheap.
  • The N shares are themselves elliptic curve points, and the secret is reconstructed by a weighted linear combination of a sufficient number of shares (a toy sketch of this reconstruction follows the list).
  • An optimized MPC protocol is utilized for computing sequences of partial products. 
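To make the “shares are themselves elliptic curve points” idea a bit more concrete, here is a toy sketch of Shamir reconstruction “in the exponent.” For readability it uses a small prime-order subgroup of Z_p^* instead of an elliptic curve; the reconstruction formula (a weighted combination of shares, with Lagrange coefficients as exponents) has the same shape in both settings. All parameters are illustrative and far too small to be secure.

```python
import secrets

# Toy prime-order group: the quadratic residues modulo a safe prime.
q = 1019                      # subgroup order (prime)
p = 2 * q + 1                 # 2039, also prime
g = pow(2, 2, p)              # generator of the order-q subgroup

def shamir_share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    coeffs = [secret] + [secrets.randbelow(q) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, q) for k, c in enumerate(coeffs)) % q
    return [(x, f(x)) for x in range(1, n + 1)]

def lagrange_at_zero(xs: list[int]) -> list[int]:
    """Lagrange coefficients mod q for interpolation at x = 0."""
    coefs = []
    for xi in xs:
        num, den = 1, 1
        for xj in xs:
            if xj != xi:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        coefs.append(num * pow(den, -1, q) % q)
    return coefs

s = secrets.randbelow(q)                                # the secret exponent
shares = shamir_share(s, t=3, n=5)
point_shares = [(x, pow(g, y, p)) for x, y in shares]   # shares as group elements

xs = [x for x, _ in point_shares[:3]]
lams = lagrange_at_zero(xs)
# Weighted "linear combination" in the exponent = product of powers of the shares.
reconstructed = 1
for (x, gy), lam in zip(point_shares[:3], lams):
    reconstructed = reconstructed * pow(gy, lam, p) % p

assert reconstructed == pow(g, s, p)                    # g^s recovered, s never revealed
```

On a real elliptic curve the same step is a multi-scalar multiplication of the share points by the Lagrange coefficients.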

Essentially, ZK-MPC in general and collaborative zk-SNARKs in particular are not just about combining ZK and MPC. Getting these two technologies to work in concert is complex and requires a huge chunk of research. 

According to one of the papers on this topic, for collaborative zk-SNARKs, over a 3Gb/s link, security against a malicious minority of provers can be achieved with approximately the same runtime as a single prover. Security against N−1 malicious provers requires only a 2x slowdown. Both TACEO and Renegade (launched mainnet on 04.09.24) teams are currently working on implementing this paper.

Another application of ZK-MPC is delegated zk-SNARKs. This enables a prover (called a delegator) to outsource proof generation to a set of workers, both for efficiency and to allow less powerful machines to participate. The guarantee is that as long as at least one worker does not collude with the others, no private information is revealed to any worker. 

This approach introduces a custom MPC protocol. The issues with using existing protocols are:

  • Existing state-of-the-art MPC protocols achieving malicious security against a dishonest majority of workers rely on relatively heavyweight public-key cryptography, which has a non-trivial computational overhead. 
  • These MPC protocols require expressing the computation as an arithmetic circuit, including complex operations such as elliptic curve multi-scalar multiplications and polynomial arithmetic, which is expensive.

One of the papers on this topic suggests using SPDZ as a starting point and modifying it. A naive approach would be to use the zk-SNARK to succinctly check that the MPC execution is correct by having the delegator verify the zk-SNARK produced by the workers. However, this wouldn’t be knowledge-sound, because the adversary can attempt to malleate its shares of the delegator’s valid witness (w) to produce a proof of a related statement, and even if the resulting proof is invalid, it can leak information about w. Instead, we can use the succinct verification properties of the underlying components of the zk-SNARK: the PIOP (Polynomial Interactive Oracle Proof) and the PC (Polynomial Commitment) scheme.

Other modifications are optimizations, such as reducing the number of multiplications in, and the multiplicative depth of, the circuits for these operations, and introducing a consistency checker for the PIOP so the delegator can efficiently check that the polynomials computed during the MPC execution are consistent with those an honest prover would have computed.

According to one of the papers on this topic, “... when compared to local proving, using our protocols to delegate proof generation from a recent smartphone (a) reduces end-to-end latency by up to 26x, (b) lowers the delegator’s active computation time by up to 1447x, and (c) enables proving up to 256x larger instances.”

For a privacy-preserving blockchain, ZK-MPC can be utilized for collaboratively proving the correctness of a state transition, where each party participating in proof generation holds only a part of the witness. Hence the proof can be generated while no single party knows what it is proving. For this purpose, there would be an on-chain committee that generates collaborative zk-SNARKs. It’s worth noting that even though we are using the term “committee,” this is still a purely cryptographic solution. 

Whom to follow for ZK-MPC updates: TACEO, Renegade.

MPC-FHE

There are a number of ways to combine FHE and MPC and each serves a different goal. For example, MPC-FHE can be employed to tackle the issue “Who holds the decryption key?” This is relevant for an FHE network or an FHE DEX. 

One approach is to have several parties jointly generate a global single FHE key. Another approach is multi-key FHE: the parties take their existing individual (multiple) FHE key pairs and combine them in order to perform an MPC-like computation. 

As a concrete example, for an FHE network, the state decryption key can be distributed to multiple parties, with each party receiving one piece. While decrypting the state, each party does a partial decryption. The partial decryptions are aggregated to yield the full decrypted value. The security of this approach holds under an assumption of 2/3 honest validators. 
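The “each party partially decrypts, and the partial decryptions are aggregated” pattern can be sketched with threshold ElGamal instead of FHE; the real construction is threshold decryption of FHE ciphertexts, which is considerably more involved, but the share-and-combine shape is the same. All parameters below are toy values of our own choosing.

```python
import secrets

# Toy prime-order group (quadratic residues modulo a safe prime).
q = 1019
p = 2 * q + 1
g = 4                                  # generator of the order-q subgroup

# The "network decryption key" x is never held by anyone in full:
# it exists only as the sum of the validators' additive shares.
x_shares = [secrets.randbelow(q) for _ in range(3)]
x = sum(x_shares) % q
y = pow(g, x, p)                       # the network's public encryption key

# Encrypt a (group-encoded) message under the network key.
m = pow(g, 123, p)                     # toy message, encoded as a group element
r = secrets.randbelow(q)
c1, c2 = pow(g, r, p), (m * pow(y, r, p)) % p

# Each validator computes a partial decryption from its own share only...
partials = [pow(c1, xi, p) for xi in x_shares]

# ...and the partials are aggregated to recover the plaintext.
combined = 1
for d in partials:
    combined = combined * d % p        # equals c1^x once all partials are combined
recovered = c2 * pow(combined, -1, p) % p
assert recovered == m
```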

The next question is, “How should other network participants (e.g. network nodes) access the decrypted data?” It can’t be done using a regular oracle (i.e. each node in the oracle consensus network must obtain the same result given the same input) since that would break privacy. 

One possible solution is a two-round consensus mechanism (though this relies on social consensus, not pure cryptography). The first round is consensus on what should be decrypted: the oracle waits until most validators send it the same decryption request. The second round is the decryption itself. Then the validators update the chain state and append the block to the blockchain. 

Whom to follow for MPC-FHE updates: Gauss Labs (utilized by Cursive team).

ZK-FHE

MPC-FHE has two issues that can potentially be mitigated with ZK:

  1. Were inputs encrypted correctly?
  2. Were the computations on encrypted data performed correctly?

Without introducing ZK, both issues listed above leave a fragment of the private computation unverifiable (which doesn’t work for most blockchain use cases). 

Where are we today with ZK-FHE?

According to Zama, a proof of one correct bootstrapping operation can be generated in 21 minutes on a large AWS machine (c6i.metal). And that’s pretty much the state of the art. Hopefully, in the coming years we will see more research on ZK-FHE.

Whom to follow for ZK-FHE updates: Zama, Pado Labs.

ZK-MPC-FHE (a sum of MPC-FHE and ZK-FHE)

One issue with MPC-FHE we haven’t mentioned so far has to do with knowing for sure that an encrypted piece of information supplied by a specific party was encrypted by that same party. What if party A took a piece of information encrypted by party B and supplied it as its own input? 

To handle this issue, each party can generate a ZKP showing that it knows the plaintext of the ciphertext it is sending. Adding this ZK tweak to the two ZK tweaks from the previous section (ZK-FHE), we get verifiable privacy with ZK-MPC-FHE.

Whom to follow for ZK-MPC-FHE updates: Pado Labs, Greco.

TEE-{everything}

TL;DR: In general, when adopting any new technology, it makes sense to run it inside a TEE, since the attack surface inside a TEE is orders of magnitude smaller than on a regular computer:

Source

Using a TEE as an execution environment (to construct ZK proofs and to participate in MPC and FHE protocols) improves security at almost zero cost. In this case, secrets stay in the TEE only for the duration of an active computation and are then discarded. Using a TEE for long-term secret storage, however, is a bad idea: trusting TEEs for a month is bad; trusting them for 30 seconds is probably fine. 

Another approach is to use a TEE as “training wheels,” for example in a multi-prover setup where computations are run both in a ZK circuit and in a TEE, and a result is considered valid only if the two agree. 

Whom to follow for TEE-{something} updates: Safeheron (TEE-MPC).

Conclusions: should we combine them all?

It might feel tempting to take all of the technologies we’ve mentioned and craft a zk-mpc-fhe-tee machine that will combine all their strengths:

However, the mere fact that we can combine technologies doesn’t mean we should combine them. We can combine ZK-MPC-FHE-TEE and then add quantum computers, restaking, and AI gummy bears on top. But for what reason? 

Source

Each of these technologies adds its own overhead to the underlying computation. Ten years ago, the blockchain, ZK, and FHE communities were mostly interested in proofs of concept. Today, when it comes to blockchain applications, we are mostly interested in performance. That is to say: if we combine a stack of fancy technologies, what product or application could we actually build on top of it?

Let’s structure everything we discussed in a table:

Hence, if we are thinking about a privacy stack expressive enough that developers can build any Web3 dApp they imagine, then from everything mentioned in this article we are left with either ZK-MPC (with MPC utilized for shared state) or ZK-MPC-FHE. As of today, client-side zero-knowledge proof generation is a proven concept that has reached the production stage. The same goes for ZK-MPC: a number of teams are working on its practical implementation. 

At the same time, ZK-MPC-FHE is still at the research and proof-of-concept stage, because when it comes to adding zero-knowledge, we know how to zk-prove a single bootstrapping operation but not arbitrary computations (i.e. circuits of arbitrary size). Without ZK, we lose the verifiability property necessary for blockchains.

Sources:

  • A paper, “Secure Multiparty Computation (MPC)” by Yehuda Lindell.
  • An article, “Introduction to FHE: What is FHE, how does FHE work, how is it connected to ZK and MPC, what are the FHE use cases in and outside of the blockchain, etc.”
  • A talk, “Trusted Execution Environments (TEEs) for Blockchain Applications” by Ari Juels.
  • An article, “Why multi-prover matters. SGX as a possible solution.” 
  • A paper, “Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets” by Alex Ozdemir and Dan Boneh.
  • A paper, “EOS: Efficient Private Delegation of zkSNARK Provers” by Alessandro Chiesa, Ryan Lehmkuhl, Pratyush Mishra, and Yinuo Zhang.
  • A paper, “Practical MPC+FHE with Applications in Secure Multi-Party Neural Network Evaluation” by Ruiyu Zhu, Changchang Ding, and Yan Huang.
  • An article, “Between a Rock and a Hard Place: Interpolating between MPC and FHE.”
  • A talk, “Building Verifiable FHE using ZK with Zama.”
  • An article, “Client-side Proof Generation.”
  • An article, “Does zero-knowledge provide privacy?”