
Inside Aztec

Aztec Network
31 Mar

Announcing the Alpha Network

The First Feature Complete Privacy Stack is Here

Alpha is live: a fully feature-complete, privacy-first network. The infrastructure is in place, privacy is native to the protocol, and developers can now build truly private applications. 

Nine years ago, we set out to redesign blockchain for privacy. The goal: create a system institutions can adopt while giving users true control of their digital lives. Privacy band-aids are coming to Ethereum (someday), but it’s clear we need privacy now, and there’s an arms race underway to build it. Privacy is complex: it’s not a feature you can bolt on as an afterthought. It demands a ground-up approach, deep integration across the tech stack, and complete decentralization.

In November 2025, the Aztec Ignition Chain went live as the first decentralized L2 on Ethereum; it’s the coordination layer that the execution layer sits on top of. The network is not operated by Aztec Labs or the Aztec Foundation; it’s run by the community, making it the true backbone of Aztec.

With the infrastructure in place and a unanimous community vote, the network enters Alpha. 

What is the Alpha Network?

Alpha is the first Layer 2 with a full execution environment for private smart contracts. All accounts, transactions, and the execution itself can be completely private. Developers can now choose what they want public and what they want to keep private while building with the three privacy pillars we have in place across data, identity, and compute.

These privacy pillars, which can be used individually or combined, break down into three core layers: 

  1. Data: The data you hold or send remains private, enabling use cases such as private transactions, RWAs, payments and stablecoins.
  2. Identity: Your identity remains private, enabling accounts that privately connect real world identities onchain, institutional compliance, or financial reporting where users selectively disclose information.
  3. Compute: The actions you take remain private, enabling applications in private finance, gaming, and beyond.

The Key Components  

Alpha is feature complete, meaning it is the only full-stack solution for adding privacy to your business or application. You build, and Aztec handles the cryptography under the hood.

It’s Composable. Privacy-preserving contracts are not isolated; they can talk to each other and seamlessly blend private and public state across contracts. Privacy can be preserved across contract calls for full call-stack privacy.

No backdoor access. Aztec is the only decentralized L2, launching as a fully decentralized rollup with a Layer 1 escape hatch.

It’s Compliant. Companies have been missing out on the benefits of blockchains because transparent chains expose user data. Aztec keeps that data private while still offering fully customizable compliance controls, so companies can build compliant apps that move value around the world instantly.

How Apps Work on Alpha 

  1. Write in Noir, an open-source Rust-like programming language for writing smart contracts. Build contracts with Aztec.nr and mark functions private or public.
  2. Prove on a device. Users execute private logic locally and a ZK proof is generated.
  3. Submit to Aztec. The proof goes to sequencers who validate without seeing the data. Any public aspects are then executed.
  4. Settle on Ethereum. Proofs of transactions on Aztec are settled to Ethereum L1.
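The four steps above can be sketched as a minimal pipeline. Everything below is a hypothetical illustration, not the real Aztec.js or Noir API; the point is the privacy boundary: private inputs are consumed during local proving and never reach the network.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    public_inputs: dict  # the only data the network ever sees
    blob: bytes          # opaque ZK proof attesting to the private execution

def prove_locally(private_inputs: dict, public_inputs: dict) -> Proof:
    # Stand-in for client-side proving in the PXE: private inputs are
    # consumed here and never serialized into the returned proof.
    return Proof(public_inputs=dict(public_inputs), blob=b"<zk-proof>")

def sequencer_accepts(proof: Proof) -> bool:
    # Sequencers verify the proof without seeing private data; any public
    # parts of the transaction would be executed after this point.
    return len(proof.blob) > 0  # placeholder for real proof verification

proof = prove_locally({"sender_balance": 100, "amount": 25},
                      {"nullifier": "0xabc"})
assert "sender_balance" not in proof.public_inputs  # private data stays local
assert sequencer_accepts(proof)
```

The design choice this models is that verification needs only the proof and public inputs, which is why sequencers can validate transactions they cannot read.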

Developers can explore our privacy primitives across data, identity, and compute and start building with them using the documentation here. Note that this is an early version of the network with known vulnerabilities; see this post for details. While this is the first iteration of the network, there will be several upgrades that secure and harden it on our path to Beta. If you’d like to learn more about how you can integrate privacy into your project, reach out here.

To hear directly from our cofounders, join our live Q&A from Cannes on Tuesday, March 31st at 9:30 am ET. Follow us on X to get the latest updates from the Aztec Network.

Aztec Network
27 Mar

Critical Vulnerability in Alpha v4

On 17 March 2026, our team discovered a new vulnerability in the Aztec Network. Following analysis, it has been confirmed as a critical vulnerability in accordance with our vulnerability matrix.

The vulnerability affects the proving system as a whole, and is not mitigated via public re-execution by the committee of validators. Exploitation can lead to severe disruption of the protocol and theft of user funds.

In accordance with our policy, fixes for the network will be packaged and distributed with the “v5” release of the network, currently planned for July 2026.

The actual bug and corresponding patch will not be publicly disclosed until “v5.”

Aztec applications and portals bridging assets from Layer 1s should warn users about the security guarantees of Alpha, in particular, reminding users not to put in funds they are not willing to lose. Portals or applications may add additional security measures or training wheels specific to their application or use case.

State of Alpha security

We will shortly establish a bug tracker to show the number and severity of bugs known to us in v4. The tracker will be updated as audits and security researchers discover issues. Each new alpha release will get its own tracker. This will allow developers and users to judge for themselves how willing they are to use the network, and we will use the tracker as a primary determinant of whether the network is ready for a "Beta" label.

Additional bug disclosure

We have identified a vulnerability in barretenberg allowing inclusion of incorrect proofs in the Aztec Network mempool, and ask all nodes to upgrade to version v4.1.2 or later.

We’d like to thank Consensys Diligence & TU Vienna for the recent discovery of a separate vulnerability in barretenberg, categorized as medium for the network and critical for Noir.

We have published a fixed version of barretenberg.

We’d also like to thank Plainshift AI for discovery, reproduction, and reporting of one more vulnerability in the Aztec Network and their ongoing work to help secure the network.

Aztec Network
18 Mar

How Aztec Governance Works

Decentralization is not just a technical property of the Aztec Network; it is the governing principle.

No single team, company, or individual controls how the network evolves. Upgrades are proposed in public, debated in the open, and approved by the people running the network. Decentralized sequencing, proving, and governance are hard-coded into the base protocol so that no central actor can unilaterally change the rules, censor transactions, or appropriate user value.

The governance framework that makes this possible has three moving parts: Aztec Improvement Proposal (AZIP), Aztec Upgrade Proposal (AZUP), and the onchain vote. Together, they form a pipeline that takes an idea to a live protocol change, with multiple independent checkpoints along the way.

The Virtual Town Square

Every upgrade starts with an AZIP. AZIPs are version-controlled design documents, publicly maintained on GitHub, modeled on the same EIP process that has governed Ethereum since its earliest days. Anyone is encouraged to suggest improvements to the Aztec Network protocol spec.

Before a formal proposal is opened, ideas live in GitHub Discussions, an open forum where the community can weigh in, challenge assumptions, and shape the direction of a proposal before it hardens into a spec. This is the virtual town square: the place where the network's future gets debated in public, not decided behind closed doors.

The AZIP framework is what decentralization looks like in practice. Multiple ideas can surface simultaneously, get stress-tested by the community, and the strongest ones naturally rise. Good arguments win, not titles or seniority. The process selects for quality discussion precisely because anyone can participate and everything is visible.

Once an AZIP is formalized as a pull request, it enters a structured lifecycle: Draft, Ready for Discussion, then Accepted or Rejected. Rejected AZIPs are not deleted — they remain permanently in the repository as a record of what was tried and why it was rejected. Nothing gets quietly buried.

Security Considerations are mandatory for all Core, Standard, and Economics AZIPs. Proposals without them cannot pass the Draft stage. Security is structural, not an afterthought.

From Proposal to Upgrade

Once Core Contributors, a merit-based and informal group of active protocol contributors, have reviewed an AZIP and approved it for inclusion, it gets bundled into an AZUP.

An AZUP takes everything an AZIP described and deploys it — a real smart contract, real onchain actions. Each AZUP includes a payload that encodes the exact onchain changes that will occur if the upgrade is approved. Anyone can inspect the payload on a block explorer and see precisely what will change before voting begins.

The payload then goes to sequencers for signaling. Sequencers are the backbone of the network. They propose blocks, attest to state, and serve as the first governance gate for any upgrade. A payload must accumulate enough signals from sequencers within a fixed round to advance. The people actually running the network have to express coordinated support before any change reaches a broader vote.

Once sequencers signal quorum, the proposal moves to tokenholders. Sequencers' staked voting power defaults to "yea" on proposals that came through the signaling path, meaning opposition must be active, not passive. Any sequencer or tokenholder who wants to vote against a proposal must explicitly re-delegate their stake before the voting snapshot is taken. The system rewards genuine engagement from all sides.

For a proposal to pass, it must meet all three conditions: quorum, a supermajority margin, and a minimum participation threshold. If any condition is unmet, the proposal fails.
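As a sketch, the three pass conditions can be checked independently. The thresholds below are illustrative placeholders, not the actual parameters, which live in the onchain governance contracts.

```python
def proposal_passes(yea: float, nay: float, total_power: float, *,
                    quorum: float, supermajority: float,
                    min_participation: float) -> bool:
    """A proposal passes only if all three conditions hold.
    (Illustrative thresholds; real values are set by the governance contracts.)"""
    cast = yea + nay
    meets_quorum = cast >= quorum
    meets_margin = cast > 0 and yea / cast >= supermajority
    meets_participation = total_power > 0 and cast / total_power >= min_participation
    return meets_quorum and meets_margin and meets_participation

# Passing case, then a failure of the supermajority margin alone:
assert proposal_passes(70, 30, 200, quorum=50, supermajority=0.6,
                       min_participation=0.4)
assert not proposal_passes(55, 45, 200, quorum=50, supermajority=0.6,
                           min_participation=0.4)
```

Because the conditions are combined with a logical AND, a proposal with overwhelming support still fails if too few tokenholders participate.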

Built-In Delays, Built-In Safety

Even after a proposal passes, it does not execute immediately. A mandatory delay gives node operators time to deploy updated software, allows the community to perform final checks, and reduces the risk of sudden uncoordinated changes hitting the network. If the proposal is not executed within its grace period, it expires.

Failed AZUPs cannot be resubmitted. A new proposal must be created that directly addresses the feedback received. There is no way to simply retry and hope for a different result.

No Single Point of Control

The teams building the network have no special governance power. Sequencers, tokenholders, and Core Contributors are the governing actors, each playing a distinct and non-redundant role.

No single party can force or block an upgrade. Sequencers can withhold signals. Tokenholders can vote nay. Proposals not executed within the grace period expire on their own.

This is decentralization working as intended. The network upgrades not because a team decides it should, but because the people running it agree that it should.

If you want to help shape what Aztec becomes, the forum is open. The proposals are public. The town square is yours. 

Follow Aztec on X to stay up to date on the latest developments.

Aztec Network
10 Mar

Alpha Network Security: What to Expect

Aztec’s Approach to Security

Aztec is novel code — the bleeding edge of cryptography and blockchain technology. As the first decentralized L2 on Ethereum, Aztec is powered by a global network of sequencers and provers. Decentralization introduces some novel challenges in how security is addressed: there is no centralized sequencer to pause and no central entity with power over the network. The rollout of the network reflects this, with distinct goals at each phase.

Ignition

Validate governance and decentralized block building work as intended on Ethereum Mainnet. 

Alpha

Enable transactions at 1 TPS with ~6s block times, and improve the security of the network via ongoing audits and a bug bounty. New releases of the alpha network are expected regularly to address any security vulnerabilities. Please note that every alpha deployment is distinct and state is not migrated between Alpha releases.

Beta

We will transition to Beta once the network scales to >10 TPS with reduced block times while ensuring 99.9% uptime. Additionally, the transition requires that no critical bugs be disclosed via the bug bounty for 3 months. State migrations across network releases can be considered.

TL;DR: The roadmap from Ignition to Alpha to Beta is designed to reflect the core team's growing confidence in the network's security.

This phased approach lets us balance ecosystem growth while building security confidence and steadily expanding the community of researchers and tools working to validate the network’s security, soundness and correctness.

Ultimately, time in production without an exploit is the most reliable indicator of how secure a codebase is.

At the start of Alpha, that confidence is still developing. The core team believes the network is secure enough to support early ecosystem use cases and handle small amounts of value. However, this is experimental alpha software and users should not deposit more value than they are willing to lose. Apps may choose to limit deposit amounts to mitigate risk for users.

Audits are ongoing throughout Alpha, with the goal to achieve dual external audits across the entire codebase.

The table below shows current security and audit coverage at the time of writing.

The main bug bounty for the network is not yet live while audits are ongoing, except for the non-cryptographic L1 smart contracts. We encourage security researchers to responsibly disclose findings in line with our security policy.

As the audits are still ongoing, we expect to discover vulnerabilities in various components. The fixes will be packaged and distributed with the “v5” release.

If we discover a Critical vulnerability in “v4” in accordance with the following severity matrix, which would require the change of verification keys to fix, we will first alert the portal operators to pause deposits and then post a message on the forum, stating that the rollup has a vulnerability.

Security of the Aztec Virtual Machine (AVM)

Aztec uses a hybrid execution model, handling private and public execution separately — and the security considerations differ between them.

As the audit table above shows, the Aztec Virtual Machine (AVM) has not yet completed its internal and external audits. This is intentional: all AVM execution is public, which allows it to benefit from a “Training Wheel” — the validator re-execution committee.

Every 72 seconds, a collection of newly proposed Aztec blocks is bundled into a "checkpoint" and submitted to L1. For each proposed checkpoint, a committee of 48 staking validators, randomly selected from the entire validator set (presently 3,959), re-executes all txs of all blocks in the checkpoint and attests to the resulting state roots. 33 out of 48 attestations are required for the checkpoint proposal to be considered valid. The committee and the eventual ZK proof must agree on the resulting state root for a checkpoint to be added to the proven chain. As a result, an attacker must control 33 of the 48 seats on a given committee to exploit any bug in the AVM.
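The committee math can be checked with a hypergeometric tail: drawing 48 seats without replacement from the full validator set, what is the chance that an attacker holding a given number of validators lands at least 33 seats? This is a sketch of the calculation, not protocol code.

```python
from math import comb

def p_committee_control(total: int, attacker: int,
                        committee: int = 48, threshold: int = 33) -> float:
    """Probability a randomly selected committee contains at least
    `threshold` attacker-controlled seats (hypergeometric tail)."""
    return sum(
        comb(attacker, k) * comb(total - attacker, committee - k)
        for k in range(threshold, committee + 1)
    ) / comb(total, committee)

# With 3,959 validators, a minority attacker is vanishingly unlikely to
# take 33 of 48 seats in any single committee:
assert p_committee_control(3959, 1320) < 1e-6   # ~33% of the set
assert p_committee_control(3959, 3959) == 1.0   # controlling everything
```

The tail grows steeply as the attacker's share of the set rises past the threshold fraction, which is why the takeover analysis below is expressed in terms of stake share.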

The only time the re-execution committee is not active is during the escape hatch, where the cost to propose a block is set at a level that attempts to quantify the security of the execution training wheel. For this version of the alpha network, this is set at 332M AZTEC, a figure intended to approximate the economic protection the committee normally provides, equivalent to roughly 19% of the un-staked circulating supply at the time of writing. Since the Aztec Foundation holds a significant portion of that supply, the effective threshold is considerably higher in practice.

Quantifying the cost of committee takeover attacks

A key design assumption is that just-in-time bribery of the sequencer committee is impractical, and that the only realistic attack vector is stake acquisition, not bribery.

Assuming a sequencer set size of 4,000 and a committee that rotates each epoch (~38.4 minutes) from the full sequencer set using a Fisher-Yates shuffle seeded by L1 RANDAO, we can see the probability and amount of stake required in the table below.

To achieve a 99% probability of controlling at least one supermajority within 3 days, an attacker would need to control approximately 55.4% of the validator set: roughly 2,215 sequencers representing 443M AZTEC in stake. Assuming an exploit is successful, their stake would likely de-value by 70-80%, resulting in an expected economic loss of approximately 332M AZTEC.

To achieve only a 0.5% probability of controlling at least one supermajority within 6 months, an attacker would need to control approximately 33.88% of the validator set.
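Because the committee re-rotates every ~38.4-minute epoch, per-epoch probabilities compound over an attack window. A rough sketch, assuming a per-epoch supermajority probability of about 4.3% (roughly what ~55.4% control of the set yields for a 33-of-48 threshold):

```python
def p_at_least_one_takeover(p_epoch: float, epochs: int) -> float:
    """Probability of controlling at least one supermajority committee
    across `epochs` independent ~38.4-minute committee rotations."""
    return 1.0 - (1.0 - p_epoch) ** epochs

EPOCH_MINUTES = 38.4
epochs_in_3_days = int(3 * 24 * 60 / EPOCH_MINUTES)  # ~112 epochs

# A ~4.3% chance per epoch compounds to roughly 99% over 3 days,
# matching the 55.4%-control figure above:
assert epochs_in_3_days == 112
assert p_at_least_one_takeover(0.043, epochs_in_3_days) > 0.98
```

The same compounding works in reverse for the 6-month figure: holding the total probability down to 0.5% over thousands of epochs requires the per-epoch probability to be tiny, which is why the required stake share drops only to ~33.88% rather than much further.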

What does this mean for builders?

The practical effect of this training wheel is that the network can exist while there are known security issues with the AVM, as long as the value an attacker would gain from any potential exploit is less than the cost of acquiring 332M AZTEC.

The training wheel lets security researchers spend more time on the private execution paths, which don’t benefit from it, while the network is deployed in an alpha version where researchers can attempt to find additional AVM exploits.

In concrete terms, the training wheel means the Alpha network can reasonably secure value up to around 332M AZTEC (~$6.5M at the time of writing).

Ecosystem builders should keep the above limits in mind, particularly when designing portal contracts that bridge funds into the network.

Portals are the main way value will be bridged into the alpha network, and as a result they are also the main target for any exploits. The design of portals can allow the network to secure far higher value. If a portal secures more than 332M AZTEC and allows all of its funds to be taken in one withdrawal, without any rate limits, delays, or pause functionality, then it is a target for an AVM exploit attack.

If a portal implements a maximum withdrawal per user, pause functionality, or delays for larger withdrawals, it becomes harder for an attacker to steal a large quantity of funds in one go.

Conclusion

The Aztec Alpha code is ready to go. The next step is for someone in the community to submit a governance proposal and for the network to vote on enabling transactions. This is decentralization working as intended.

Once live, Alpha will run at 1 TPS with roughly 6 second block times. Audits are still ongoing across several components, so keep deposits small and only put in what you're comfortable losing.

On the security side, a 48-validator re-execution committee provides the main protection during Alpha, requiring 33/48 consensus on every 72-second checkpoint. Successfully attacking the AVM would require controlling roughly 55% of the validator set at a cost of around 332M AZTEC, putting the practical security ceiling at approximately $6.5M.

Alpha is about growing the ecosystem, expanding the security of the network, and accumulating the one thing no audit can shortcut: time in production. This is the network maturing in exactly the way it was designed to as it progresses toward Beta.

Aztec Network
4 Mar

Aztec Network: Roadmap Update

The Ignition Chain launched late last year as the first fully decentralized L2 on Ethereum, a huge milestone for decentralized networks. The team has reinvented what true programmable privacy means, building the execution model from the ground up and combining the programmability of Ethereum with the privacy of Zcash in a single execution environment.

Since then, the network has been running with zero downtime, with 3,500+ sequencers and 50+ provers across five continents. With the infrastructure now in place, the network is fully in the hands of the community, and the past 8 years of work are now converging.

Major upgrades have landed across four tracks: the execution layer, the proving system, the programming language (Noir), and the decentralization stack. Together, these milestones deliver on Aztec’s original promise: a system where developers can write fully programmable smart contracts with customizable privacy.

The infrastructure is in place. The code is ready. And we’re ready to ship. 

What’s New on the Roadmap?

The Execution Layer

The execution layer delivers on Aztec's core promise: fully programmable, privacy-preserving smart contracts on Ethereum. 

A complete dual state model is now in place, with both private and public state. Private functions execute client-side in the Private Execution Environment (PXE), running directly in the user's browser and generating zero-knowledge proofs locally, so that private data never leaves the original device. Public functions execute on the Aztec Virtual Machine (AVM) on the network side.

Aztec.js is now live, giving developers a full SDK for managing accounts and interacting with contracts. Native account abstraction has been implemented, meaning every account is a smart contract with customizable authentication rules. Note discovery has been solved through a tagging mechanism, allowing recipients to efficiently query for relevant notes without downloading and decrypting everything on the network.

Contract standards are underway, with the Wonderland team delivering AIP-20 for tokens and AIP-721 for NFTs, along with escrow contracts and logic libraries, providing the production-ready building blocks for the Alpha Network. 

The Proving System

The proving system is what makes Aztec's privacy guarantees real, and it has deep roots.

In 2019, Aztec's cofounder Zac Williamson and Chief Scientist Ariel Gabizon introduced PLONK, which became one of the most widely used proving systems in zero-knowledge cryptography. Since then, Aztec's cryptographic backend, Barretenberg, has evolved through multiple generations, each facilitating faster, lighter, and more efficient proving than the last. The latest innovation, CHONK (Client-side Highly Optimized ploNK), is purpose-built for proving on phones and browsers and is what powers proof generation for the Alpha Network.

CHONK is a major leap forward for the user experience, dramatically reducing the memory and time required to generate proofs on consumer devices. It leverages best-in-class circuit primitives, a HyperNova-style folding scheme for efficiently processing chains of private function calls, and Goblin, a hyper-efficient purpose-built recursion acceleration scheme. The result is that private transactions can be proven on the devices people actually use, not just powerful servers.

This matters because privacy on Aztec means proofs are generated on the user's own device, keeping private data private. If proving is too slow or too resource-intensive, privacy becomes impractical. CHONK makes it practical.

Decentralization

Decentralization is what makes Aztec's privacy guarantees credible. Without it, a central operator could censor transactions, introduce backdoors, or compromise user privacy at will. 

Aztec addressed this by hardcoding decentralized sequencing, proving, and governance directly into the base protocol. The Ignition Chain has proven the stability of this consensus layer, maintaining zero downtime with over 3,500 sequencers and 50+ provers running across five continents. Aztec Labs and the Aztec Foundation run no sequencers and do not participate in governance.

Noir

Noir 1.0 is nearing completion, bringing a stable, production-grade language within reach. Aztec's own protocol circuits have been entirely rewritten in Noir, meaning the language is already battle-tested at the deepest layer of the stack. 

Internal and external audits of the compiler and toolchain are progressing in parallel, and security tooling including fuzzers and bytecode parsers is nearly finished. A stable, audited language means application teams can build on Alpha with confidence that the foundation beneath them won't shift.

What Comes Next

The code for Alpha Network, a functionally complete and raw version of the network, is ready.

The Alpha Network brings fully programmable, privacy-preserving smart contracts to Ethereum for the first time. It's the culmination of years of parallel work across the four tracks in the Aztec Roadmap. Together, they enable efficient client-side proofs that power customizable smart contracts, letting users choose exactly what stays private and what goes public. 

No other project in the space is close to shipping this. 

The code is written. The network is running. All the pieces are in place. The governance proposal is now live on the forum and open for discussion. Read through it, ask questions, poke holes, and help shape the path forward. 

Once the community is aligned, the proposal moves to a vote. This is how a decentralized network upgrades. Not by a team pushing a button, but by the people running it.

Programmable privacy will unlock a renaissance in onchain adoption. Real-world applications are coming and institutions are paying attention. Alpha represents the culmination of eight years of intense work to deliver privacy on Ethereum. 

Now it needs to be battle-tested in the wild. 

View the updated product roadmap here and join us on Thursday, March 5th, at 3 pm UTC on X to hear more about the most recent updates to our product roadmap.

Aztec Network
1 May

What is the Aztec Public Testnet?

Aztec will be a fully decentralized, permissionless and privacy-preserving L2 on Ethereum. The purpose of Aztec’s Public Testnet is to test all the decentralization mechanisms needed to launch a strong and decentralized mainnet. In this post, we’ll explore what full decentralization means, how the Aztec Foundation is testing each aspect in the Public Testnet, and the challenges and limitations of testing a decentralized network in a testnet environment.

The three aspects of decentralization

Three requirements must be met to achieve decentralization for any zero-knowledge L2 network: 

  1. Decentralized sequencing: the process of using a network of nodes to sequence transactions, rather than relying on centralized authority; 
  2. Decentralized proving: generating zero-knowledge proofs (ZKPs) across a distributed network of computers; and 
  3. Decentralized governance: a system where decision-making authority is distributed across a network of participants. 

Decentralization across sequencing, proving, and governance is essential to ensure that no single party can control or censor the network. Decentralized sequencing guarantees open participation in block production, while decentralized proving ensures that block validation remains trustless and resilient, and finally, decentralized governance empowers the community to guide network evolution without centralized control. 

Together, these pillars secure the rollup’s autonomy and long-term trustworthiness. Let’s explore how Aztec’s Public Testnet is testing the implementation of each of these aspects. 

Decentralized sequencing

Aztec will launch with a fully decentralized sequencer network. 

This means that anyone can run a sequencer node and start sequencing transactions, proposing blocks to L1 and validating blocks built by other sequencers. The sequencer network is a proof-of-stake (PoS) network like Ethereum, but differs in an important way. Rather than broadcasting blocks to every sequencer, Aztec blocks are validated by a randomly chosen set of 48 sequencers. In order for a block to be added to the L2 chain, two-thirds of the sequencers need to verify the block. This offers users fast preconfirmations, meaning the Aztec Network can sequence transactions faster while utilizing Ethereum for final settlement security. 

PoS is fundamentally an anti-sybil mechanism—it works by giving economic weight to participation and slashing malicious actors. At the time of Aztec’s mainnet, this will allow sequencers to vote out bad actors and burn their staked assets. On the Public Testnet, where there are no real economic incentives, PoS doesn't function properly. To address this, we introduced a queue system that limits how quickly new sequencers can join, helping to maintain network health and giving the network time to react to potential malicious behavior.

Behind the scenes, a contract handles sequencer onboarding—it mints staking assets, adds sequencers to the set, and can remove them if necessary. This contract is just for Public Testnet and will be removed on Mainnet, allowing us to simulate and test the decentralized sequencing mechanisms safely.

Decentralized proving 

Aztec will also launch with a fully decentralized prover network. 

Provers generate cryptographic proofs that verify the correctness of public transactions, culminating in a single rollup proof submitted to Ethereum. Decentralized proving reduces centralization risk and liveness failures, but also opens up a marketplace to incentivize fast and efficient proof generation. The proving client developed by Aztec Labs involves three components: 

  1. Prover nodes identify unproven epochs (set of 32 blocks) and create individual proving jobs;

  2. Proving brokers add these proving job requests to a queue and allocate them to idle proving agents; and

  3. Proving agents compute the actual proofs. 
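The three components above amount to a work queue. A minimal sketch, with hypothetical names rather than the actual client interfaces:

```python
from collections import deque

class ProvingBroker:
    """Queues proving jobs from prover nodes and hands them to idle agents."""
    def __init__(self) -> None:
        self.jobs = deque()

    def submit(self, job: str) -> None:
        # A prover node found an unproven epoch and split it into jobs.
        self.jobs.append(job)

    def claim(self):
        # An idle proving agent asks for work; None means the queue is empty.
        return self.jobs.popleft() if self.jobs else None

broker = ProvingBroker()
for block in range(32):               # an epoch is a set of 32 blocks
    broker.submit(f"prove-block-{block}")

proved = []
while (job := broker.claim()) is not None:
    proved.append(job)                # an agent computes the actual proof here

assert len(proved) == 32              # every block in the epoch gets proven
```

Separating the broker from the agents is what lets many independent machines pull jobs concurrently, which matches the decentralized, marketplace-style proving described above.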

Once the final proof has been computed, the proving node sends the proof to L1 for verification. The Aztec Network splits proving rewards amongst everyone who submits a proof on time, reducing centralization risk where one entity with large compute dominates the network. 

For Aztec’s Public Testnet, anyone can spin up a prover node and start generating proofs. Running a prover node is more hardware-intensive than running a sequencer node, requiring ~40 machines with an estimated 16 cores and 128GB RAM each. Because running provers can be cost-intensive and incurs the same costs on a testnet as it will on mainnet, Aztec’s Public Testnet will throttle transactions to 0.2 transactions per second (TPS).

Keeping transaction volumes low allows us to test a fully decentralized prover network without overwhelming participating provers with high costs before real network incentives are in place. 

Decentralized governance

Finally, Aztec will launch with fully decentralized governance. 

In order for network upgrades to occur, anyone can put forward a proposal for sequencers to consider. If a majority of sequencers signal their support, the proposal gets sent to a vote. Once it passes the vote, anyone can execute the script that will implement the upgrade. Note: For this testnet, the second phase of voting will be skipped. 

Decentralized governance is an important step in enabling anyone to participate in shaping the future of the network. The goal of the public testnet is to ensure the mechanisms are functioning properly for sequencers to permissionlessly join and control the Aztec Network from day 1. 

Client-side proofs 

One additional aspect to consider with regard to full decentralization is the role of network users in decentralizing the compute load of the network.

Aztec Labs has developed groundbreaking technology to make end-to-end programmable privacy possible, first with the release of PLONK and later with refinements like MegaHonk, which make it feasible to generate client-side ZKPs. Client-side proofs keep sensitive data on the user’s device while still enabling users to interact with and store this information privately onchain. They also help to scale throughput by pushing execution to users. This decentralizes the compute requirements and means users can execute arbitrary logic in their private functions.

Sequencers and provers on the Aztec Network are never able to see any information that users or applications want to keep private, including accounts, activity, balances, function execution, or other data of any kind. 

Aztec’s Public Testnet is shipping with a full execution environment, including the ability to create client-side proofs in the browser. Here are some time estimations to expect for generating private, client-side proofs: 

  • Client-side proofs natively on a laptop: ~2.5 seconds for a basic function call (e.g., transfers);

  • Client-side proofs in the browser: ~25 seconds fixed cost for a basic function call, with incremental calls adding a few seconds; and

  • Client-side proofs natively on mobile: ~5 seconds.

Conclusion

Aztec’s Public Testnet is designed to rigorously test decentralization across sequencing, proving, and governance ahead of our mainnet launch. The network design ensures no single entity can control or censor activity, empowering anyone to participate in sequencing transactions, generating proofs, and proposing governance changes. 

Visit the Aztec Testnet page to start building with programmable privacy and join our community on Discord.

Aztec Network
22 Apr

History of Aztec: Pioneering Privacy in Web3

The Early Days of Aztec (2017)

When Aztec mainnet launches, it will be the first fully private and decentralized L2 on Ethereum. Getting here was a long road: when Aztec started eight years ago, the initial plan was to build an onchain financial service called CreditMint for issuing corporate debt to mid-market enterprises – obviously a distant use case from how we understand Aztec today. When co-founders Zac Williamson, Joe Andrews, Tom Pocock, and Arnaud Schenk got started, zero-knowledge proving systems and applications weren’t even in their infancy: there was no PLONK, no Noir, no programmable privacy, and it wasn’t clear that demand for onchain privacy was even strong enough to necessitate a new blockchain network. The founders’ initial explorations through CreditMint led to what we know as Aztec today. 

While putting corporate debt onchain might seem unglamorous (or just limited compared with how we now understand Aztec’s capabilities), it was useful, wildly popular, and necessary for the founding team to realize that no serious institution wanted to touch the blockchain without the same privacy assurances they were accustomed to in the corporate world. Traditional finance is built around trusted intermediaries and middlemen, which of course introduces friction and bottlenecks progress – but it offers more privacy assurances than what you see on public blockchains like Ethereum. 

This takeaway led to a bigger understanding: the number of people (not just the number of institutions) who wanted to use the blockchain was limited by a lack of programmable privacy. Aztec was born out of the recognition that everyone – not only corporations – could use permissionless, onchain systems for private transactions, and this could become the default for all online payments. In the words of the CEO, Zac Williamson:

“If you had programmable digital money that had privacy guarantees around it, you could use that to create extremely fast permissionless payment channels for payments on the internet.” 

Equipped with this understanding, Zac and Joe began to specialize. Zac, whose background is in particle physics, went deep on cryptography research and began exploring protocols that could be used to enable onchain privacy. Meanwhile, Joe worked on how to get user adoption for privacy tech, while Arnaud focused on getting the initial CreditMint platform live and recruiting early members of the team. In 2018, Aztec published a proof-of-concept transaction demonstrating the creation and transfer of private assets on Ethereum – using an early cryptographic protocol that predated modern proving schemes like PLONK. It was a limited example, with just DAI as the test case (and it could only facilitate private assets, not private identities), but it garnered a lot of early interest from members of the Ethereum community. 

“The Product Needs Drive the Proving Scheme” (2018-2020)

The 2018 version of the Aztec Protocol had three key limitations: it wasn’t programmable, it only supported private data (rather than private data and user-level privacy), and it was expensive, from both a computation and gas perspective. The underlying proving scheme was, in the words of Zac, a “Frankenstein cryptography protocol using older primitives than zk-SNARKs.” These limitations motivated the development of PLONK in 2019, a SNARK-based proving system that is computationally inexpensive and requires only one universal trusted setup. 

A single universal trusted setup is desirable because it allows developers to utilize a common reference string for all of the programs they might want to instantiate in a circuit; the alternative is a much more cumbersome process of conducting a trusted setup ceremony for each cryptographic circuit. In other words, PLONK enabled programmable privacy for future versions of Aztec. 

PLONK was a big breakthrough, not just for Aztec, but for the wider blockchain community. Today, PLONK has been implemented and extended by teams like zkSync, Polygon, Mina, and more. There is even an entire category of proving systems called PLONKish that all derive from the original 2019 paper. For Aztec specifically, PLONK was also instrumental in paving the way for zk.money and Aztec Connect, a private payment network and private DeFi rollup, which launched in 2021 and 2022 respectively.  

The product needs of Aztec motivated the development of a modern-day proving system. PLONK proofs are computationally cheap to generate, leading not only to lower transaction costs and programmability for developers, but big steps forward for privacy and decentralization. PLONK made it simpler to generate client-side proofs on inexpensive hardware. In the words of Joe, “PLONK [was] developed to keep the middleman away.”  

Making the Blockchain Real (2021-2023)

Between 2021 and 2023, the Aztec team operated zk.money and Aztec Connect. The products were not only vital in illustrating that there was a demand for onchain privacy solutions, but in demonstrating that it was possible to build performant and private networks leveraging PLONK. Joe remarked that they “wanted to test that we could build a viable payments network, where the user experience was on par with a public transaction. Privacy needed to be in the background.” 

Aztec’s early products indicated that there was significant demand for private onchain payments and DeFi – at peak, the rollups had over $20 million in TVL. Both products fit into the vision Zac had to “make the blockchain real.” In his team’s eyes, blockchains are held back from mainstream adoption because you can’t bring consequential, real-world assets onchain without privacy. 

Despite the demand for these networks, the team made the decision to sunset both zk.money and Aztec Connect after recognizing that they could not fully decentralize the networks without massive architectural changes. Zac and Joe don’t believe in “Progressive Decentralization” – the network needs to have no centralized operators from day one. And it wasn’t just the sequencer of these early Aztec products that was centralized – the team also recognized that it would have been impossible for other developers to write programs on Aztec that could compose with each other, because all programs operated on shared state. In 2023, zk.money and Aztec Connect were officially shut down. 

In tandem, the team also began developing Noir (an original brainchild of Kevaundray Wedderburn). Noir is a Rust-like programming language for writing zero-knowledge circuits that makes privacy technology accessible to mainstream developers. While Noir began as a way to make it easier for developers to write private programs without needing to know cryptography, the team soon realized that the demand for privacy didn’t just apply to applications on the Aztec stack, and that Noir could be a general-purpose DSL for any kind of application that needs to leverage privacy. In the same way that bringing consequential assets and activity onchain “makes the blockchain real,” bringing zero-knowledge technology to any application – onchain or offchain – makes privacy real. The team continued working on Noir, and it has developed into its own product stack today. 

Aztec Today 

Aztec from 2017 to 2024 can be seen as a methodical journey toward building a fully private, programmable, and decentralized blockchain network. The earliest attempt at Aztec as a protocol introduced asset-level privacy, without addressing user-level privacy, or significant programmability. PLONK paved the way for user-level privacy and programmability, which yielded zk.money and Aztec Connect. Noir extended programmability even further, making it easy for developers to build applications in zero-knowledge. But zk.money and Aztec Connect were incomplete without a viable path to decentralization. So, the team decided to build a new network from scratch. Extending on their learnings from past networks, the foundations and findings from continuous R&D efforts of PLONK, and the growing developer community around Noir, they set the stage for Aztec mainnet. 

The fact of the matter is that creating a network that is fully private and decentralized is hard. To have privacy, all data must be shielded cheaply inside of a SNARK. If you want to really embrace the idea of “making the blockchain real,” then you should also be able to leverage outside authentication and identity solutions, like Apple ID – and you need to be able to put those technologies inside of a SNARK as well. The number of statements that need to be represented as provable circuits is massive. Then, all of these capabilities need to run inside of a network that is decentralized. The combination of mathematical, technological, and networking problems makes this very difficult to achieve.

The technical architecture of Aztec reflects the learnings of the Aztec team. Zac describes Aztec mainnet as a “Russian nesting doll” of products that all add up to a private and decentralized network. Aztec today consists of:

  1. A decentralized Prover and Sequencer network that eliminates central points of control
  2. The Privacy Execution Environment (PXE) that enables client-side proving
  3. Significant innovations in proving systems, including faster, low-memory proving systems optimized for browser performance

At the network level, there will be many participants in the decentralization efforts of Aztec: provers, sequencers, and node operators. Joe views the infrastructure-level decentralization as a crucial first stage of Aztec’s mainnet launch.

As Aztec goes live, the vision extends beyond private transactions to enabling entirely new categories of applications. The team envisions use cases ranging from consumer lending based on private credit scores to games leveraging information asymmetry, to social applications that preserve user privacy. The next phase will focus on building a robust ecosystem of developers and the next generation of applications on Ethereum using  Noir, the universal language of privacy. 

Aztec mainnet marks the emergence of applications that weren't possible before – applications that combine the transparency and programmability of blockchain with the privacy necessary for real-world adoption. 

Community
25 Mar

Is ZK-MPC-FHE-TEE a real creature?

Many thanks to Remi Gai, Hannes Huitula, Giacomo Corrias, Avishay Yanai, Santiago Palladino, ais, ji xueqian, Brecht Devos, Maciej Kalka, Chris Bender, Alex, Lukas Helminger, Dominik Schmid, 0xCrayon, and Zac Williamson for inputs, discussions, and reviews. 

Contents

  1. Introduction: why we are here and why this article should exist
  2. Quick overview of each technology
    1. Client-side proving
    2. FHE
    3. MPC
    4. TEE
  3. Does it make sense to combine any of them and is it feasible?
    1. ZK-MPC
    2. MPC-FHE
    3. ZK-FHE
    4. ZK-MPC-FHE
    5. TEE-{everything}
  4. Conclusions: what to use and under what circumstances
    1. Comparison table
    2. What are the most reasonable approaches for on-chain privacy?

Introduction

Buzzwords are dangerous. They amuse and fascinate as cutting-edge, innovative, mesmerizing markers of new ideas and emerging mindsets. Even better if they are abbreviations: insider shorthand we can use to make ourselves look smarter and more progressive.

Using buzzwords can obfuscate the real scope and technical possibilities of a technology. Furthermore, buzzwords can act as gatekeepers, making simple things look complex or, on the contrary, making complex things look simple (in line with the Dunning-Kruger effect).

In this article, we will briefly review several privacy-related abbreviations, their strong points, and their constraints. After that, we’ll consider whether anyone would benefit from combining them, looking at different configurations and combinations.

Disclaimer: It’s not fair to compare the technologies we’re discussing since it won’t be an apples-to-apples comparison. The goal is to briefly describe each of them, highlighting their strong and weak points. Understanding this, we will be able to make some suggestions about combining these technologies in a meaningful way. 

Quick overview of each technology

Client-side ZKPs

Client-side ZKPs are a specific category of zero-knowledge proofs (a field that dates back to 1989). Exploring general ZKPs in great depth is out of scope for this piece; if you’re curious to learn more, check this article.

Essentially, a zero-knowledge protocol allows one party (the prover) to prove to another party (the verifier) that some given statement is true, without conveying any information beyond the mere fact of that statement’s truth.

Client-side ZKPs enable generating the proof on a user’s device for the sake of privacy. A user makes some arbitrary computations and generates a proof that whatever they computed was computed correctly. Then, this proof can be verified and utilized by external parties.
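
To make the prove-locally, verify-externally pattern concrete, here is a toy sketch using a Schnorr sigma protocol made non-interactive with the Fiat-Shamir heuristic. This is an illustrative stand-in, not Aztec’s proving system, and the group parameters are deliberately tiny and insecure:

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of a discrete log, made non-interactive
# with the Fiat-Shamir heuristic. The group below is tiny and NOT secure;
# real systems use large elliptic-curve or SNARK-friendly groups.
P = 23   # small safe prime (P = 2*Q + 1)
Q = 11   # prime order of the subgroup
G = 4    # generator of the order-Q subgroup mod P

def prove(secret_x):
    """Client side: prove knowledge of x such that y = G^x mod P."""
    y = pow(G, secret_x, P)            # public statement
    r = secrets.randbelow(Q)           # random nonce, never leaves the device
    t = pow(G, r, P)                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big") % Q
    s = (r + c * secret_x) % Q         # response; reveals nothing about x on its own
    return y, (t, s)

def verify(y, proof):
    """External party: check the proof without ever learning x."""
    t, s = proof
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big") % Q
    # G^s == t * y^c mod P  iff  s was computed from the real secret
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, proof = prove(secret_x=7)
assert verify(y, proof)
```

The secret and the random nonce never leave the prover’s device; the verifier checks the proof using only public values.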

One of the most widely known use cases of client-side ZKPs is a privacy-preserving L2 on Ethereum where, thanks to client-side data processing, some functions and values in a smart contract can be executed privately, while the rest are executed publicly. In this case, the client-side ZKP is generated by the user executing the transaction, then verified by the network sequencer. 

However, client-side proof generation is not limited to Ethereum L2s, nor to blockchain at all. Whenever there are two or more parties who want to compute something privately and then verify each other’s computation and utilize their results for some public protocols, client-side ZKPs will be a good fit.

Check this article for more details on how client-side ZKPs work.

The main concern today about on-chain privacy by means of client-side proof generation is the lack of a private shared state. Potentially, it can be mitigated with an MPC committee (which we will cover in later sections). 

Speaking of limitations of client-side proving, one should consider: 

  • The memory constraint: inherited from the WASM memory cap of 4 GB; for mobile proving, each device has its own memory cap as well. 
  • The maximum circuit size (derived from the WASM memory cap): currently 2^20 for Aztec’s client-side proof generation (i.e. to prove any Noir program with Barretenberg in WASM).

What can we do with client-side ZKPs today: 

  • According to HashCloak benchmarking, a client-side ZKP of an RSA signature in Noir is generated in 0.2s (using UltraHonk and a laptop with Intel(R) Core(TM) i7-13700H CPU and 32 GB of RAM).
  • According to Polygon Miden, a STARK ZKP for the Fibonacci calculator program for 2^20 cycles at 96-bit security level can be generated in 7 sec using Apple M1 Pro (16 threads). 
  • According to ZKPrize winners’ benchmarks, it takes 10 minutes to prove the target of 50 signatures over 100B to 1kB messages on a consumer device (a MacBook Pro with 32 GB of memory).

Whom to follow for client-side ZKPs updates: Aztec Labs, Miden, Aleo

MPC (Multiparty computation) 

Disclaimer: in this section, we discuss general-purpose MPC (i.e. allowing computations on arbitrary functions). There are also a bunch of specialized MPC protocols optimized for various use cases (i.e. designing customized functions) but those are out-of-scope for this article.

MPC enables a set of parties to interact and compute a joint function of their private inputs while revealing nothing but the output: f(input_1, input_2, …, input_n) → output.

For example, parties can be servers that hold a distributed database system and the function can be the database update. Or parties can be several people jointly managing a private key from an Ethereum account and the function can be a transaction signing mechanism. 
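
As a toy illustration of the f(input_1, input_2, …, input_n) → output pattern, here is a minimal additive secret-sharing sketch in Python. It offers passive security only and is not a production MPC protocol; real protocols add authentication to resist malicious parties:

```python
import secrets

# Minimal additive secret-sharing sketch of MPC: three parties jointly
# compute the sum of their private inputs without revealing them.
# Passive ("honest but curious") security only; real protocols add
# MACs/authentication to resist malicious parties.
P = 2**61 - 1  # field modulus (a Mersenne prime, chosen for convenience)

def share(value, n):
    """Split a private input into n additive shares that sum to it mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

inputs = [42, 100, 7]                      # each party's private input
n = len(inputs)

# Party i sends the j-th share of its input to party j.
all_shares = [share(v, n) for v in inputs]

# Party j locally adds the shares it received, one from each party.
partial_sums = [sum(all_shares[i][j] for i in range(n)) % P for j in range(n)]

# Publishing only the partial sums reveals the total and nothing else.
assert sum(partial_sums) % P == sum(inputs) % P  # 42 + 100 + 7 = 149
```

Any single share (or any subset of fewer than n shares) is uniformly random, so no coalition short of all parties learns anything about an individual input.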

One concern with MPC is that one or more parties participating in the protocol can be malicious. They can try to:

  • Learn private inputs of other parties;
  • Cause the result of computations to be incorrect.

Hence in the context of MPC security, one wants to ensure that:

  • All private inputs stay private (i.e. each party knows its input and nothing else);
  • The output was computed correctly and each party received its correct output.

To think about MPC security in an exhaustive way, we should consider three perspectives:

  1. How many parties are assumed to be honest?
  2. The specific methods of corrupting parties.
  3. What can corrupted parties do?

How many parties are assumed to be honest?

Rather than requiring all parties in the computation to remain honest, MPC tolerates different levels of corruption depending on the underlying assumptions. Some models remain secure if less than 1/3 of parties are corrupt, some if less than 1/2 are corrupt, and some maintain security guarantees even when more than half of the parties are corrupt. For details, a formal definition, and proofs of MPC protocol security, check this paper.

The specific methods of corrupting parties

There are three main corruption strategies:

  1. Static – parties are corrupted before the protocol starts and remain corrupted to the end. 
  2. Adaptive – parties can be corrupted at different stages of protocol execution and after execution remain corrupted to the end. 
  3. Proactive – parties can switch between malicious and honest behavior an arbitrary number of times during the protocol execution. 

Each of these corruption strategies implies a different security model.

What can corrupted parties do?

Two definitions of adversarial behavior are: 

  1. Semi-honest (also referred to as honest but curious, or passive adversary) – following the protocol as prescribed but trying to extract some additional information.
  2. Malicious – deviating from the protocol.

When it comes to the definition of privacy, MPC guarantees that the computation process itself doesn’t reveal any information. However, it doesn’t guarantee that the output won’t reveal any information. For an extreme example, consider two people computing the average of their salaries. While it’s true that nothing but the average will be output, when each participant knows their own salary amount and the average of both salaries, they can derive the exact salary of the other person.
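
The salary example is just arithmetic, which makes the leak easy to see:

```python
# The output itself leaks here: each party learns only the average,
# yet can reconstruct the other's exact salary from it.
def other_salary(average, my_salary, n_parties=2):
    return average * n_parties - my_salary

average = (60_000 + 80_000) / 2   # the "private" computation's output
assert other_salary(average, 60_000) == 80_000
```

The computation process revealed nothing, yet the output combined with one party’s own input fully determines the other input.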

That is to say, while the core “value proposition” of MPC seems very attractive for a wide range of real-world use cases, a whole bunch of nuances must be taken into account before it can actually provide a high enough security level. (It’s important to clarify the problem statement and decide whether MPC is the right tool for the particular task.)

What can be done with MPC protocols today:

When we think about MPC performance, we should consider the following parameters: number of participating parties, witness size of each party, and function complexity. 

  • According to the “Efficient Arithmetic in Garbled Circuits” paper, for general-purpose MPC, the computation costs are the following: at most O(n · ℓ · λ) bits per gate, with each multiplication gate using O(ℓ · λ) bits, where ℓ is the bit length of values, λ is a computational security parameter, and n is the number of gates. A value can be translated from arithmetic to Boolean (and vice versa) at a cost of O(ℓ · λ) bits (e.g. to perform a comparison operation).

  • As a matter of illustration, we are also providing an example of a specialized MPC protocol:
    According to dWallet Labs, their implementation of 2PC-MPC protocol (2-party ECDSA protocol) completes the signing phase in 1.23 and 12.703 seconds, for 256 and 1024 parties (emulating the second party in 2PC), respectively (claiming the number of parties can be scaled further).
  • Worldcoin, jointly with TACEO, made a number of optimizations to an existing Secure Multi-Party Computation (SMPC) protocol that enabled them to apply SMPC to the problem of iris code uniqueness. Early benchmarks show that one can achieve 10 iris uniqueness checks per second against a database of ~6M entries.

When it comes to using MPC in blockchain context, it’s important to consider message complexity, computational complexity, and such properties as public verifiability and abort identifiability (i.e. if a malicious party causes the protocol to prematurely halt, then they can be detected). For message distribution, the protocol relies either on P2P channels between each two parties (requires a large bandwidth) or broadcasting. Another concern arises around the permissionless nature of blockchain since MPC protocols often operate over permissioned sets of nodes.

Taking all that into account, it’s clear that MPC is a very nuanced technology on its own. And it becomes even more nuanced when combined with other technologies. Adding MPC to a specific blockchain protocol often requires designing a custom MPC protocol that will fit. And that design process often requires a room full of MPC PhDs who can not only design the protocol but also prove its security.

Whom to follow for MPC updates: dWallet Labs, TACEO, Fireblocks, Cursive, PSE, Fairblock, Soda Labs, Silence Laboratories, Nillion

TEE

TEE stands for Trusted Execution Environment. A TEE is an area on the main processor of a device that is separated from the system’s main operating system (OS). It ensures data is stored, processed, and protected in a separate environment. One of the most widely known TEE implementations (and one we often mention when discussing blockchain) is Software Guard Extensions (SGX), made by Intel. 

SGX can be considered a type of private execution. For example, if a smart contract is run inside SGX, it’s executed privately. 

SGX creates a non-addressable memory region of code and data (separated from RAM), and encrypts both at a hardware level. 

How SGX works:

  • There are two areas in the hardware, trusted and untrusted. 
  • The application creates an enclave in the trusted area and makes a call to the trusted function. (The function is a piece of code developed for working inside the enclave.) Only trusted functions are allowed to run in the enclave. All other attempts to access the enclave memory from outside the enclave are denied by the processor.
  • Once the function is called, the application is running in the trusted space and sees the enclave code and data as clear text.
  • When the trusted function returns, the enclave data remains in the trusted memory area.

It’s worth noting that there is a key pair: a secret key and a public key. The secret key is generated inside of the enclave and never leaves it. The public key is available to anyone: Users can encrypt a message using a public key so only the enclave can decrypt it.

An SGX feature often utilized in the blockchain context is attestations. Attestation is the process of demonstrating that a software executable has been properly instantiated on a platform. Remote Attestation allows a remote party to be confident that the intended software is securely running within an enclave on a fully patched, Intel SGX-enabled platform.

Core SGX concerns:

  • SGX is subject to side-channel attacks. Observing a program’s indirect effects on the system during execution might leak information if a program’s runtime behavior is correlated with the secret input content that it operates on. Different attack vectors include page access patterns, timing behavior, power usage, etc.
  • Using SGX requires trusting Intel. Users must assume that everything is fine since the hardware is delivered with the private key already inside the trusted enclave. 
  • As a large enterprise, Intel is pretty slow in terms of patching new attacks. Check sgx.fail to find a list of publicly known SGX attacks that are yet to be fixed by Intel.
  • Application developers who use SGX are dependent on specific hardware produced by Intel. The company might eventually decide to deprecate or significantly change all or specific versions in ways that make some or all applications incompatible, or even break them. For example, in 2021 SGX was deprecated on consumer CPUs. 
  • It might be hard to detect cheating fast enough if it takes place in a private domain (like with SGX). 
  • In the case of a network relying purely on TEE for privacy (i.e. a number of nodes run inside TEE and each node has complete information), exploiting one node in the network is enough to exploit the whole network (i.e. leak secrets).

Speaking of SGX cost, proof generation can be considered free of charge. However, if one wants to use remote attestations, there is an initial one-time cost (once per SGX prover) on the order of 1M gas (to make sure the code in SGX is running in the expected way).

Onchain verification cost equals that of verifying an ECDSA signature (~5k gas, whereas ZK signature verification costs ~300k gas). 

When it comes to execution time, there is effectively no overhead. For example, proving a zk-rollup block takes around 100ms.

Where SGX is utilized in blockchain today:

  • Taiko is running an execution client inside the SGX (utilizing TEE for integrity). 
  • Secret Network’s validators run their code inside a TEE (utilizing TEE for privacy).
  • Flashbots are running SUAVE testnet on SGX.

Whom to follow for TEE updates: Secret Network, Flashbots, Andrew Miller, Oasis, Phala, Marlin, Automata, TEN.

FHE (Fully Homomorphic Encryption)

FHE enables encrypted data processing (i.e. computation on encrypted data). 

The idea of FHE was proposed in 1978 by Rivest, Adleman, and Dertouzos. “Fully” means that both addition and multiplication can be performed on encrypted data. Let m be some plain text and E(m) be an encrypted text (ciphertext). Then additive homomorphism is E(m_1 + m_2) = E(m_1) + E(m_2) and multiplicative homomorphism is E(m_1 * m_2) = E(m_1) * E(m_2). 
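
The additive case can be made concrete with a toy Paillier scheme, a classic additively homomorphic cryptosystem (note that in Paillier, the ciphertext-side operation that realizes E(m_1 + m_2) is multiplication of ciphertexts). The parameters below are deliberately tiny and insecure:

```python
import math
import secrets

# Toy Paillier encryption demonstrating additive homomorphism:
# multiplying ciphertexts adds the underlying plaintexts.
# Tiny primes, illustrative only; real keys are 2048+ bits.
p, q = 47, 59
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:       # blinding factor must be invertible
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(12), encrypt(30)
assert decrypt((c1 * c2) % n2) == 42  # E(12) * E(30) decrypts to 12 + 30
```

Paillier is only additively homomorphic; supporting both addition and multiplication on ciphertexts is exactly the “fully” part that Gentry’s 2009 construction solved.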

Additive Homomorphic Encryption was used for a while, but Multiplicative Homomorphic Encryption was still an issue. In 2009, Craig Gentry came up with the idea to use ideal lattices to tackle this problem. That made it possible to do both addition and multiplication, although it also made growing noise an issue. 

How FHE works:

Plain text is encoded into ciphertext. Ciphertext consists of encrypted data and some noise. 

That means when computations are done on ciphertext, they are done not purely on data but on data together with added noise. With each performed operation, the noise increases. After several operations, it starts overflowing on the bits of actual data, which might lead to incorrect results.

A number of tricks were proposed later on to handle the noise and make FHE work more reliably. One of the most well-known is bootstrapping, a special operation that resets the noise to its nominal level. However, bootstrapping is slow and costly (both in terms of memory consumption and computational cost). 

Researchers rolled out even more workarounds to make bootstrapping efficient and took FHE several more steps forward. Further details are out-of-scope for this article, but if you’re interested in FHE history, check out this talk by mathematician Zvika Brakerski. 

Core FHE concerns:

  • If the user (who encrypts information) outsources computations to an external party, they have to trust that the computations were done correctly.
    To handle the trust issue, (i) ZK can theoretically be used (though it is not practically feasible today), or (ii) economic consensus can be used. However, as FHE requires custom hardware (the computations are very heavy), the number of participants in an FHE consensus network will always be limited, which is a problem for security. 
  • In the case of an FHE blockchain, there is one key for the whole network. Who holds the decryption key? The same applies to dApps. For example, if an FHE computation modifies a liquidity pool’s total supply, that “total supply” must be decrypted at some point. But who possesses the key? (If you’re curious about FHE key attacks, check out this paper by Li and Micciancio).
  • If an external party provides encrypted input, how can the party performing computations be sure that the external party knows the input and that the input was encrypted correctly? (This can be mitigated with zero-knowledge proof of knowledge, which will be discussed in the ZK-FHE section).
  • While using FHE, one should ensure that the decrypted output doesn’t contain any private information that should not be revealed. Otherwise, it formally breaks privacy.
    Note that there are two different types of decryption: (i) decryption that reveals a value to the entire network (e.g. revealing cards at the end of a game), and (ii) reencryption (i.e. decryption and encryption) as a view function (e.g. viewing your own cards). 
  • FHE is “heavy.” When considering FHE computation cost (both in terms of computation volume and memory required), related considerations include (i) operations computation cost, (ii) communication cost, and (iii) evaluation keys size (a separate public key that is used to control the noise growth or the ciphertext expansion during homomorphic evaluation).
    One might think about FHE hardware as similar to Bitcoin hardware (highly performant ASICs).


Compared to computations on plain text, the best per-operation overhead available today is polylogarithmic [GHS12b]: if n is the input size, by polylogarithmic we mean O(log^k(n)) for some constant k. The communication overhead is reasonable if batching and unbatching a number of ciphertexts, but not reasonable otherwise. 

For evaluation keys, the key size is huge (larger than the ciphertexts, which are large as well): around 160,000,000 bits. Furthermore, one needs to constantly compute on these keys. Whenever a homomorphic evaluation is done, you need to access the evaluation key, bring it into the CPU (a regular data bus in a regular processor is unable to bring it in one go), and compute on it. 


If you want to do something beyond addition and multiplication—a branch operation, for example—you have to break down this operation into a sequence of additions and multiplications. That’s pretty expensive. Imagine you have an encrypted database and an encrypted data chunk, and you want to insert this chunk into a specific position in the database. If you’re representing this operation as a circuit, the circuit will be as large as the whole database.
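
As a small illustration of arithmetizing a branch, the standard trick is a conditional multiplexer: out = b·x + (1 − b)·y for a selector bit b. Shown here on plaintexts for clarity; under FHE the same formula is evaluated on ciphertexts:

```python
# Branching arithmetized as addition and multiplication, the way an FHE
# circuit must express it: a conditional multiplexer computes
# out = b*x + (1 - b)*y for a selector bit b. Shown on plaintexts for
# clarity; under FHE the same formula is evaluated on ciphertexts.
def cmux(b, x, y):
    assert b in (0, 1)
    return b * x + (1 - b) * y

assert cmux(1, 10, 20) == 10  # b = 1 selects x
assert cmux(0, 10, 20) == 20  # b = 0 selects y
```

This is the same cmux/select operation mentioned later for confidential ERC-20 transfers; note that every branch must be evaluated, which is part of why FHE circuits grow so large.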


In the future, FHE performance is expected to be optimized both on the FHE side (new tricks discovered) and hardware side (acceleration and ASIC design). This promises to allow for more complex smart contract logics as well as more computation-intensive use cases such as AI/ML. A number of companies are working on designing and building FHE-specific FPGAs (e.g. Belfort).

“Misuse of FHE can lead to security faults.”

Source

What can be done with FHE today: 

  • According to Ingonyama: with an LLM like GPT-2, processing time for a single token is approximately 14.5 hours.
    A token is a unit of text; one English word is roughly 1.3 tokens, for example. Each text request to GPT-2 consists of a number of tokens, so the processing time of a whole request follows from the per-token time.
    With parallel processing across 10,000 machines, the time drops to about 5 seconds/token. With a custom ASIC, it could be decreased to 0.1 seconds/token, but this would require huge initial investments in data centers and ASIC design.
  • According to Zvika Brakerski: When asked the question “Can we build production-level systems where FHE brings value?” he responds, “I don’t know the answer yet.”
  • According to Zama: a toy implementation of Shazam (a music recognition app) built with Zama’s FHE library takes 300 milliseconds to recognize a single song out of 1,000. But how will that change as the database grows? (The real Shazam library has 45M songs.)
  • According to Inco, FHE is usable today for simple blockchain use cases (i.e. smart contracts with simple logic). For example, in an FHE-based confidential ERC-20 transfer, you perform an FHE addition, subtraction, comparison, and conditional multiplexer (cmux/select) to update the balances of the sender and recipient. With a CPU, Inco can do 10 TPS, and with a GPU, 20-30 TPS.
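The Ingonyama arithmetic above can be sanity-checked directly (the figures themselves are quoted from the text, not measured here):

```python
# single-machine time per GPT-2 token, per the quote above
hours_per_token = 14.5
seconds_per_token = hours_per_token * 3600   # 52,200 seconds

# naive parallelism over 10,000 machines
machines = 10_000
parallel_seconds = seconds_per_token / machines
print(round(parallel_seconds, 2))  # 5.22, i.e. the quoted ~5 s/token
```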

Note: In all of these examples, we are talking about plain FHE, without any MPC or ZK superstructures handling the core FHE issues.
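Inco’s confidential ERC-20 example maps naturally onto those four gates. Here is a hedged plaintext sketch (ordinary integers stand in for ciphertexts; `transfer` is an illustrative name, not Inco’s API):

```python
def cmux(s: int, a: int, b: int) -> int:
    # conditional multiplexer: no branch, so nothing about s leaks
    return s * a + (1 - s) * b

def transfer(sender_bal: int, recv_bal: int, amount: int) -> tuple[int, int]:
    # comparison gate: in real FHE this yields an *encrypted* bit
    ok = 1 if sender_bal >= amount else 0
    # both balances are updated unconditionally via cmux; an invalid
    # transfer simply selects the old balances back
    return (cmux(ok, sender_bal - amount, sender_bal),
            cmux(ok, recv_bal + amount, recv_bal))

print(transfer(100, 50, 30))  # (70, 80)
print(transfer(10, 50, 30))   # (10, 50) -- insufficient funds, no change
```

The point of the unconditional update is that an observer of the (encrypted) execution cannot tell whether the transfer succeeded.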

Whom to follow for FHE updates: Zama, Sunscreen, Zvika Brakerski, Inco, FHE Onchain.

Does it make sense to combine any of these, and is doing so feasible?

As we can see from the technology overview, these technologies are not exactly interchangeable. That said, they can complement each other. So let’s think: which ones should be combined, and for what reason?

Disclaimer: Each of the technologies we are talking about is pretty complex on its own. The combinations of them we discuss below are, to a large extent, theoretical and hypothetical. However, there are a number of teams working on combining them at the time of writing (both research and implementation). 

ZK-MPC

In this section, we mostly describe two papers as examples and don’t claim to be exhaustive. 

One of the possible applications of ZK-MPC is a collaborative zk-SNARK. This allows users to jointly generate a proof over the witnesses of multiple, mutually distrusting parties. The proof generation algorithm is run as an MPC among N provers, where the function f being computed is the circuit representation of a zk-SNARK proof generator.

Source

Collaborative zk-SNARKs also offer an efficient construction for a cryptographic primitive called a publicly auditable MPC (PA-MPC). This is an MPC that also produces a proof the public can use to verify that the computation was performed correctly with respect to commitments to the inputs.

ZK-MPC introduces the notion of MPC-friendly zk-SNARKs. That is to say, not just any MPC protocol or any zk-SNARK can feasibly be combined into ZK-MPC. This is because MPC protocols and zk-SNARK provers are each thousands of times slower than their underlying functionality, and their combination is likely to be millions of times slower.
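The quoted slowdowns compose multiplicatively, which is why MPC-friendliness matters. A back-of-the-envelope check (the “thousands” figures are the article’s order-of-magnitude estimates, not benchmarks):

```python
# order-of-magnitude slowdowns relative to plain execution
mpc_slowdown = 1_000     # generic MPC protocol
snark_slowdown = 1_000   # zk-SNARK prover
# naively running the prover inside the MPC multiplies the overheads
print(mpc_slowdown * snark_slowdown)  # 1000000 -- "millions of times slower"
```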

For those familiar with elliptic curve cryptography, let’s think for a moment about why ZK-MPC is tricky:

Done naively, you could decompose an elliptic curve operation into operations over the curve’s base field; then there is an obvious way to perform them in an MPC. But curve additions require tens of field operations, and scalar products require thousands.

The core tricks suggested for use include: 

  • MPC techniques applied directly to elliptic curves to make curve operations cheap.
  • The N shares are themselves elliptic curve points, and the secret is reconstructed by a weighted linear combination of a sufficient number of shares.
  • An optimized MPC protocol is utilized for computing sequences of partial products. 
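The second trick can be illustrated with Shamir sharing: shares lie on a degree-(t−1) polynomial, and the secret is a fixed weighted linear (Lagrange) combination of any t shares. Because scalar multiplication is linear, the very same combination reconstructs a secret elliptic curve point from share points f(i)·G. A sketch over a prime field, with toy parameters and illustrative code only:

```python
import secrets

p = 2**61 - 1  # toy prime field

def share(s: int, n: int, t: int) -> list[tuple[int, int]]:
    # random degree-(t-1) polynomial with constant term s; shares are f(i)
    coeffs = [s] + [secrets.randbelow(p) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # weighted linear combination: secret = sum of lambda_i * f(x_i),
    # with the Lagrange coefficients evaluated at x = 0
    total = 0
    for xi, yi in shares:
        lam = 1
        for xj, _ in shares:
            if xj != xi:
                lam = lam * xj % p * pow(xj - xi, -1, p) % p
        total = (total + lam * yi) % p
    return total

secret = 123456789
shs = share(secret, n=5, t=3)
assert reconstruct(shs[:3]) == secret  # any 3 of the 5 shares suffice
```

In the elliptic curve version, `yi` becomes a point f(x_i)·G and the `lam * yi` step becomes a scalar multiplication of that point, with the sum taken in the curve group.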

Essentially, ZK-MPC in general and collaborative zk-SNARKs in particular are not just about combining ZK and MPC. Getting these two technologies to work in concert is complex and requires a huge chunk of research. 

According to one of the papers on this topic, for collaborative zk-SNARKs over a 3 Gb/s link, security against a malicious minority of provers can be achieved with approximately the same runtime as a single prover, while security against N−1 malicious provers requires only a 2x slowdown. Both the TACEO and Renegade (mainnet launched 04.09.24) teams are currently working on implementing this paper.

Another application of ZK-MPC is delegated zk-SNARKs. This enables a prover (called a delegator) to outsource proof generation to a set of workers, both for efficiency and to support less powerful machines. As long as at least one worker does not collude with the others, no private information is revealed to any worker.

This approach introduces a custom MPC protocol. The issues with using existing protocols are:

  • Existing state-of-the-art MPC protocols achieving malicious security against a dishonest majority of workers rely on relatively heavyweight public-key cryptography, which has a non-trivial computational overhead. 
  • These MPC protocols require expressing the computation as an arithmetic circuit, which is expensive for complex operations such as elliptic curve multi-scalar multiplications and polynomial arithmetic.

One of the papers on this topic suggests using SPDZ as a starting point and modifying it. A naive approach would be to use the zk-SNARK to succinctly check that the MPC execution is correct by having the delegator verify the zk-SNARK produced by the workers. However, this wouldn’t be knowledge-sound, because the adversary can attempt to maul its shares of the delegator’s valid witness (w) to produce a proof of a related statement. Even if the resulting proof is invalid, it can leak information about w. Instead, we can use the succinct verification properties of the underlying components of the zk-SNARK: the PIOP (Polynomial Interactive Oracle Proof) and the PC (Polynomial Commitment) scheme.

Other modifications correspond to optimizations, such as optimizing the number of multiplications in, and the multiplicative depth of circuits for these operations; and introducing a consistency checker for the PIOP to enable the delegator to efficiently check that the polynomials computed during the MPC execution are consistent with those that an honest prover would have computed.

According to one of the papers on this topic, “... when compared to local proving, using our protocols to delegate proof generation from a recent smartphone (a) reduces end-to-end latency by up to 26x, (b) lowers the delegator’s active computation time by up to 1447x, and (c) enables proving up to 256x larger instances.”

For a privacy-preserving blockchain, ZK-MPC can be utilized for collaboratively proving the correctness of a state transition, where each party participating in generating the proof holds only a part of the witness. Hence the proof can be generated while no single party is aware of what it is proving. For this purpose, there should be an on-chain committee that generates collaborative zk-SNARKs. It’s worth noting that even though we are using the term “committee,” this is still a purely cryptographic solution.

Whom to follow for ZK-MPC updates: TACEO, Renegade.

MPC-FHE

There are a number of ways to combine FHE and MPC, and each serves a different goal. For example, MPC-FHE can be employed to tackle the question “Who holds the decryption key?” This is relevant for an FHE network or an FHE DEX.

One approach is to have several parties jointly generate a global single FHE key. Another approach is multi-key FHE: the parties take their existing individual (multiple) FHE key pairs and combine them in order to perform an MPC-like computation. 

As a concrete example, for an FHE network, the state decryption key can be distributed to multiple parties, with each party receiving one piece. While decrypting the state, each party does a partial decryption. The partial decryptions are aggregated to yield the full decrypted value. The security of this approach holds under an assumption of 2/3 honest validators. 
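The flow above can be sketched with a deliberately insecure toy scheme: an additive “encryption” stands in for FHE, and real threshold FHE partial decryptions would also add smudging noise to hide the key shares.

```python
import secrets

q = 2**61 - 1
m = 42  # plaintext state value

# the decryption key s is split into additive shares, one per party
s = secrets.randbelow(q)
s1, s2 = secrets.randbelow(q), secrets.randbelow(q)
s3 = (s - s1 - s2) % q

# toy "ciphertext": in real FHE this would be an LWE-style ciphertext
c = (m + s) % q

# each party publishes only a partial decryption derived from its share
partials = [(-sh) % q for sh in (s1, s2, s3)]

# aggregating the partials recovers m; no single party ever held s
assert (c + sum(partials)) % q == m
```

The structural point survives the simplification: decryption is a sum of per-party contributions, so the full key never needs to exist in one place.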

The next question is, “How should other network participants (e.g. network nodes) access the decrypted data?” It can’t be done using a regular oracle (i.e. each node in the oracle consensus network must obtain the same result given the same input) since that would break privacy. 

One possible solution is a two-round consensus mechanism (though this relies on social consensus, not pure cryptography). The first round is consensus on what should be decrypted: the oracle waits until most validators send it the same decryption request. Next comes the round of decryption. Then the validators update the chain state and append the block to the blockchain.

Whom to follow for MPC-FHE updates: Gauss Labs (utilized by Cursive team).

ZK-FHE

MPC-FHE has two issues that can potentially be mitigated with ZK:

  1. Were inputs encrypted correctly?
  2. Were the computations on encrypted data performed correctly?

Without introducing ZK, both issues listed above make one fragment of the private computation unverifiable. (That doesn’t work for most blockchain use cases.)

Where are we today with ZK-FHE?

According to Zama, proof of one correct bootstrapping operation can be generated in 21 minutes on a huge AWS machine (c6i.metal). And that’s pretty much it. Hopefully, in the upcoming years we will see more research on ZK-FHE.

Whom to follow for ZK-FHE updates: Zama, Pado Labs.

ZK-MPC-FHE (a sum of MPC-FHE and ZK-FHE)

One issue with MPC-FHE we haven’t mentioned so far has to do with knowing for sure that an encrypted piece of information supplied by a specific party was encrypted by that same party. What if party A took a piece of information encrypted by party B and supplied it as its own input? 

To handle this issue, each party can generate a ZKP showing that it knows the plaintext it is sending in encrypted form. Combining this ZK tweak with the two ZK tweaks from the previous section (ZK-FHE), we get verifiable privacy: ZK-MPC-FHE.

Whom to follow for ZK-MPC-FHE updates: Pado Labs, Greco.

TEE-{everything}

TL;DR: In general, when adopting any new technology, it makes sense to run it inside a TEE, since the attack surface within a TEE is orders of magnitude smaller than on a regular computer:

Source

Using a TEE as an execution environment (to construct ZK proofs and to participate in MPC and FHE protocols) improves security at almost zero cost. In this case, secrets stay in the TEE only during active computation and are then discarded. However, using a TEE for storing secrets is a bad idea: trusting TEEs for a month is bad; trusting them for 30 seconds is probably fine.

Another approach is to use a TEE as “training wheels,” for example in a multi-prover setup, where computations are run both in a ZK circuit and in a TEE and are considered valid only if the two agree on the same result.

Whom to follow for TEE-{something} updates: Safeheron (TEE-MPC).

Conclusions: should we combine them all?

It might feel tempting to take all of the technologies we’ve mentioned and craft a zk-mpc-fhe-tee machine that will combine all their strengths:

However, the mere fact that we can combine technologies doesn’t mean we should combine them. We can combine ZK-MPC-FHE-TEE and then add quantum computers, restaking, and AI gummy bears on top. But for what reason? 

Source

Each of these technologies adds its own overhead to the initial computation. Ten years ago, the blockchain, ZK, and FHE communities were mostly interested in proofs of concept. But today, when it comes to blockchain applications, we are mostly interested in performance. That is to say: if we combine a row of fancy technologies, what product or application could we actually build on top?

Let’s structure everything we discussed in a table:

Hence, if we are thinking about a privacy stack expressive enough that developers can build any Web3 dApp they imagine, then from everything mentioned in this article we are left with either ZK-MPC (where MPC is utilized for shared state) or ZK-MPC-FHE. As of today, client-side zero-knowledge proof generation is a proven concept that has reached the production stage. The same goes for ZK-MPC: a number of teams are working on its practical implementation.

At the same time, ZK-MPC-FHE is still at the research and proof-of-concept stage: when it comes to imposing zero-knowledge, we know how to zk-prove one bootstrapping operation but not arbitrary computations (i.e. circuits of arbitrary size). Without ZK, we lose the verifiability property necessary for blockchain.

Sources:

  • A paper, “Secure Multiparty Computation (MPC)” by Yehuda Lindell.
  • An article, “Introduction to FHE: What is FHE, how does FHE work, how is it connected to ZK and MPC, what are the FHE use cases in and outside of the blockchain, etc.”
  • A talk, “Trusted Execution Environments (TEEs) for Blockchain Applications” by Ari Juels.
  • An article, “Why multi-prover matters. SGX as a possible solution.” 
  • A paper, “Experimenting with Collaborative zk-SNARKs: Zero-Knowledge Proofs for Distributed Secrets” by Alex Ozdemir and Dan Boneh.
  • A paper, “EOS: Efficient Private Delegation of zkSNARK Provers” by Alessandro Chiesa, Ryan Lehmkuhl, Pratyush Mishra, and Yinuo Zhang.
  • A paper, “Practical MPC+FHE with Applications in Secure Multi-Party Neural Network Evaluation” by Ruiyu Zhu,  Changchang Ding, and Yan Huang.
  • An article, “Between a Rock and a Hard Place: Interpolating between MPC and FHE”
  • A talk, “Building Verifiable FHE using ZK with Zama.”
  • An article, “Client-side Proof Generation.”
  • An article, “Does zero-knowledge provide privacy?”

Vision
13 Feb

Aztec Foundation Launches to Accelerate Vision of Programmable Privacy

Today we introduce the Aztec Foundation, a nonprofit organization to support the growth and development of open-source programmable privacy. The launch of the Aztec Foundation marks a significant milestone for the Aztec Network, bringing us closer to the launch of a fully decentralized, privacy-preserving network.

As steward of the Aztec Network, the Foundation will conduct fundamental research in freedom-enhancing cryptography. It will also support builders developing innovative applications that protect user privacy and enable compliance, and it will maintain Noir, the universal language for zero-knowledge proofs.

The Foundation will empower the open-source community to put programmable privacy technology into the hands of builders and deliver on the promise to solve one of the biggest barriers to mass blockchain adoption – privacy. 

The Foundation will contribute across protocol operations, technology, and commercial efforts, ensuring the community is involved in key decisions. Its goal is to help bootstrap a healthy and active ecosystem for emerging projects, helping them become self-sustaining while supporting public-good projects that drive adoption.

The Foundation will provide ancillary support to the protocol ecosystem through grants to teams and individuals building end-user-facing applications. The Foundation will also fund cryptography research and other special projects to support Aztec’s greater aim of empowering developers to build privacy-first applications.

The Aztec Foundation is co-founded by Zac Williamson, who will serve as President and Chairman of the Board, and Arnaud Schenk who will lead the Foundation as Executive Director and serve on the board. We’re honored to welcome Arnaud back to the ecosystem. Arnaud is an original co-founder of Aztec with deep expertise in early-stage startup ecosystems who will now help lead day-to-day operations and commercial efforts at the Foundation. Herbert Sterchi, an early board member of Ethereum Switzerland GmbH, will also join as a board member.

For more information about the Aztec Foundation, visit the website and follow @aztecFND on X. To continue following updates on Noir and Aztec, follow Noir and Aztec on X.