26 Jun

Can blockchains and zero-knowledge help humanity survive? 47 real-world use cases

Written by
Lisa A.

Are we becoming cyborgs in 2024?

Many thanks to Elias, Palla, Mahima, Hannes, and Rafal for review. Separate thanks to João Montenegro for an awesome discussion.

Worldwide, the average person’s daily screen time is 6 hours and 58 minutes. This is approximately 41% of waking time. Does it mean that a modern human is 41% digital?

Let’s think.

  • Personal relations are becoming digital (17% of marriages and 20% of relationships begin online), as are business and consumer relations (81% of consumers prefer communicating via messengers).
  • Society is becoming “cashless”: only 9% of Americans pay with cash, and in some places cash accounts for only 1% of payments (e.g. Norway, Hong Kong, and Sweden).
  • 71% of smartphone owners sleep with or next to their mobile phones.
  • Google’s search engine acts almost like humanity’s hive mind.
  • “Information and communications technologies are more widespread than electricity.” – Shoshana Zuboff.

One thing can be said for sure: our everyday life is a blend of the physical and digital worlds and this trend won’t reverse. Furthermore, the share of digital reality is going to grow.

Humanity has rich experience (approx. 300,000 years) living in physical reality.

Within this time we’ve learnt some useful skills. For example:

  • How to build reliable constructions such as houses, bridges, roller coasters, and military offices.
  • How to treat our bodies in such a way that they serve us as long as possible.
  • How to build meaningful relations that last for years and make us feel good, loved, and needed.

With digital reality, our experience is approximately 7,300x shorter (assuming 1983 as the internet’s year of birth). So, it seems we are still at the beginning of the learning curve of living in digital reality.

What part of ourselves is digital? Let’s think.

  • Most of our documents, files, and data (e.g. financial and health) are digital and stored on smartphones or laptops.
  • Most of our communications and social interactions are digital: through emails, social media, video calls, etc.
  • Most of our work is done digitally: collaborative work using Google services and GitHub, web services (e.g. Figma), storing and managing files in the cloud (e.g. Dropbox).
  • Most of our memorable moments and things that make us feel good are stored digitally: pictures from birthdays and weddings, screenshotted messages from those we love and hate, playlists we’ve been collecting for years, movie ratings, etc.
  • Authentications and passwords for all services and accounts we use.

What exactly does “digital” mean in all these cases? In most cases, it means stored in cloud services or on a company server. That is to say, 99.9% of our digital “us” is outside of our control.

Google won’t close tomorrow, will it?

Humans like to be optimistic. Google won’t close tomorrow, right? It just doesn’t make any sense. Humanity pays it so much money. Humanity gives it all its data.

Okay, thinking rationally, let’s assume that Google’s best option is to exist forever. But what about our Google accounts? What about the digital “us”? What about data authenticity? What about the part of our life that happens digitally every day? Will it exist tomorrow?

Let’s for a second move from that optimistic perspective to a more realistic one:

  • Whether we have access to our money or not depends on whether the bank allows us or not.
  • Whether we can access our iPhone or not depends on whether Apple allows us or not.
  • Whether we can access our data backup on iCloud depends on whether Apple allows us or not.
  • Whether we can leave a comment on a forum or social media depends on whether we meet the criteria of being a “good user” or not.

So, the first “layer” is: in order to have access to our digital life, we need to meet the criteria of being a “good customer” every day. Every single day. Doesn’t sound nice, but not a big deal, right?


At the end of the day, we are good humans, we have nothing to hide, we trust our governments and these pretty multibillion corporations. They are nice guys. Is there anything else to worry about?

Hmm, let us think what happens if…

  • Artificial general intelligence (AGI) kidnaps the internet and floods it with bots?
  • Company servers are destroyed because of war or natural disasters?
  • Governmental databases break down, destroying all information about citizens and pushing the world into anarchy?
  • Autopiloted transport and machinery get programmed for self-destruction on Wednesday at 9:00 Eastern Time?
  • Banking records get lost or corrupted, so we lose all our money because it is just a digital entry on a centralized server?

We still can stick to an optimistic mindset and say: okay, the probability of this happening is very low. Let’s wait until one such incident happens (e.g. nuclear war) and then we should start worrying. Today we still can access banking apps, scroll Instagram, and order a poke bowl. Why do we need to worry now?

There are several reasons to think about it today:

  1. We can’t think about saving the world after the world has been destroyed because the world won’t exist anymore.
  2. Saving the world might take time. A couple of seconds might not be enough. And if the world is going to be destroyed, no one will kindly tell us 20 years in advance.
  3. We still can eat poke while trying to save the world. There is no contradiction therein.

Now let’s be more precise. While appealing to “saving the world”, by “the world”, we mean “the digital world”, a part of our life and identity that exists purely online or in a hybrid state of offline and online (e.g. medical devices implanted into human bodies having software components). By “saving”, we mean making it robust. That is to say neither governments, corporations, AGI, nor bots can kidnap, destroy, control, or limit a digital part of us.

We must make the internet more robust

Comparing our physical reality to digital reality: in the former, we have developed robust materials such as carbon fiber, metal, glass, and ultra-high molecular weight polyethylene fiber; in the latter, we are still building twig huts.

The good news is that today we are already equipped well enough to start making the internet more robust. Three components that will allow us to resist factors such as AGI or dictatorships are blockchains, privacy incorporated into blockchains, and cryptography. Why these three?

To have control of our digital “us”, we need four components:

  • Permissionlessness
  • Immutability
  • Verifiability
  • Privacy

Let’s abstract from the wording “the internet”. Instead, let’s use the word “cyberspace”, meaning “everything digital” including the internet, computer networks, and whatnot.

By permissionlessness, we mean that cyberspace is equal for any human on Earth. Create an account, deploy code, own a digital object, talk to another human in cyberspace – whatever is possible there for one human is possible for everyone.

By immutability, we mean that humans have control over their life in cyberspace, and if they take an action in cyberspace (e.g. open an account, issue an ID, or deploy code), no one can undo it. One should note that immutability can be a double-edged sword: if something is immutable, it can never be forgotten. Thus, while building, we should be very conscious of what exactly we are making immutable and how it might manifest as a negative externality one day.

By verifiability, we mean something like an objective source of truth. If human 54835934 tells human 82849934 that “this statement” is true, human 82849934 has a tool to objectively verify its truthfulness.
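This kind of objective check can be sketched with a plain hash commitment, a far simpler primitive than a zero-knowledge proof, but it conveys the idea of verifying truthfulness without trusting the other party (all names and values below are hypothetical):

```python
import hashlib

def commit(statement: str, nonce: str) -> str:
    """Publish a binding commitment to a statement without revealing it."""
    return hashlib.sha256(f"{nonce}:{statement}".encode()).hexdigest()

def verify(commitment: str, statement: str, nonce: str) -> bool:
    """Anyone can later check that the revealed statement matches the commitment."""
    return commit(statement, nonce) == commitment

# Human A commits to a statement and shares only the commitment with human B.
c = commit("the reserve holds 1,000 units", nonce="s3cret-nonce")

# When A reveals the statement and nonce, B verifies it objectively,
# without relying on A's honesty.
assert verify(c, "the reserve holds 1,000 units", "s3cret-nonce")
assert not verify(c, "the reserve holds 2,000 units", "s3cret-nonce")
```

A zero-knowledge proof goes further: it lets A prove properties of the committed statement without ever revealing the statement itself.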

By privacy, we mean control over what is disclosed, when, and to whom. How sincere are we claiming “we have nothing to hide”? Even if we don’t mind “someone” having access to all our correspondence, medical data, pictures and videos (including those hot nudes), food delivery, e-commerce, and porn browser history… What if we take it one step further, right to brain-computer interfaces (BCI)? If someone (whether governments, corporations, AGI or another party with exceptionally good intentions) now has access to all our thoughts and body chemistry, do we still have nothing to hide?

These four properties allow us to ensure that our digital “us” will last at least as long as our physical “us” (or maybe even longer, but that is a topic for another article).

Blockchains and cryptography

No technology in isolation is a magic pill. However, combining the right technologies and applying them in the right cases allow us to come closer to inventing a magic pill.

Let’s first clarify what technologies we have in our “survival kit” and what properties of theirs we are interested in.

Blockchain provides permissionlessness and immutability

Blockchain is a shared, immutable ledger that facilitates the process of recording transactions and tracking assets in a network.

In this definition, the key property is “shared”. The question is how exactly it is shared or who are the participants of the network. To be shared “for real”, we want the blockchain to have thousands or even millions of nodes that are independent of each other. As of the 15th of April 2024, Ethereum has 5,912 nodes in the network where by nodes we mean different independent parties running infrastructure (one node might run many validators). And even though some of them are run by institutions, Ethereum can be fairly called a shared and immutable ledger. That is to say, Ethereum is good enough to be the base of the robust internet.
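The “immutable ledger” part can be sketched in a few lines: each block commits to the hash of the previous one, so rewriting history breaks every later link. This is a toy model; in a real network like Ethereum, what makes tampering practically impossible is that thousands of independent nodes each hold and check a copy:

```python
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain: list) -> bool:
    """Each block must reference the hash of the block before it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, ["alice -> bob: 5"])
append_block(chain, ["bob -> carol: 2"])
assert is_valid(chain)

# Tampering with history breaks every later link in the chain.
chain[0]["transactions"] = ["alice -> mallory: 500"]
assert not is_valid(chain)
```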

Cryptography provides verifiability

Cryptography’s history is almost as long as humanity’s history. But with the advent of the digital era, some specific kinds of cryptography that fit the needs of computers and the internet started evolving at an insane pace.

The history of zero-knowledge cryptography started in the mid-1980s, with Goldwasser, Micali, and Rackoff’s work on interactive proofs, although at the time it wasn’t obvious that it would become a perfect complement for blockchains.

A zero-knowledge protocol involves two parties, a Prover and a Verifier, where the Prover can convince the Verifier that a statement about an arbitrarily large computation is true while conveying only a tiny amount of information beyond the statement’s validity. The compactness of the proof is called “succinctness” and is a key component of cyberspace, where individuals need to coordinate with each other all the time and communication has to be as lightweight as possible.
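As a minimal sketch of the Prover/Verifier interaction, here is a toy Schnorr-style proof of knowledge of a discrete logarithm. It captures the spirit (a short proof that convinces without revealing the secret) but it is not a succinct SNARK, and the parameters are deliberately tiny and insecure:

```python
import hashlib, secrets

# Toy Schnorr-style proof: the Prover convinces the Verifier that they know
# x with y = g^x (mod p), without revealing x. Illustration only -- NOT secure.
p, g, q = 101, 2, 100   # g generates a group of order q modulo the prime p

x = 17                  # Prover's secret
y = pow(g, x, p)        # public statement: "I know the exponent behind y"

# Commit: the Prover picks a random nonce and sends t = g^r.
r = secrets.randbelow(q)
t = pow(g, r, p)

# Challenge: derived from the transcript (Fiat-Shamir heuristic).
c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q

# Response: a single number -- the whole proof stays tiny.
s = (r + c * x) % q

# Verify: g^s must equal t * y^c, which holds only if the Prover knew x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```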

Privacy-preserving blockchains provide privacy

Privacy-preserving blockchains allow individuals to choose what information to disclose, when, to whom, and in what form. This is another crucial element of cyberspace, as disclosing one piece of data while keeping other pieces private is a must-have mechanism for efficient coordination.

For example, in the case of elections, an individual wants to prove to the tally that (i) they are eligible to vote, (ii) they have voted, and (iii) they voted only once, without disclosing any details about their vote or their identity.
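A common building block here is a “nullifier”: a value derived from the voter’s secret that reveals nothing about their identity but repeats if they try to vote twice. The sketch below shows only the bookkeeping; in a real system, the eligibility check and the nullifier derivation would themselves be proven in zero knowledge rather than computed by the tally (all names and secrets here are hypothetical):

```python
import hashlib

def h(*parts: str) -> str:
    return hashlib.sha256(":".join(parts).encode()).hexdigest()

# Registered voters' credential commitments (the eligibility set).
eligible = {h("alice-secret"), h("bob-secret")}

seen_nullifiers = set()

def cast_vote(voter_secret: str, vote: str, election_id: str) -> str:
    """The tally learns: the voter is eligible, and hasn't voted before.
    It sees only hashes, never the voter's identity."""
    if h(voter_secret) not in eligible:
        raise ValueError("not eligible")
    nullifier = h(voter_secret, election_id)  # same voter => same nullifier
    if nullifier in seen_nullifiers:
        raise ValueError("already voted")
    seen_nullifiers.add(nullifier)
    return vote  # counted; a real system would keep the vote encrypted too

assert cast_vote("alice-secret", "yes", "election-42") == "yes"
```

Note that the nullifier is bound to one election, so the same credential produces unlinkable nullifiers across different elections.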

One should note that privacy on the application layer (e.g. private identity or transfers) can’t be added to a transparent blockchain purely by means of cryptography. Privacy should be incorporated into blockchain design at the level of smart contract anatomy and state management.

Now that we’ve looked at what components we need to build robust cyberspace, let’s explore specific cases we need to fix and decide if we really can fix them with these tools.

47 use cases and 75 examples for blockchains, privacy, and zero-knowledge

In the second part of this article, we will explore 47 use cases and 75 examples of how blockchains, privacy, and verifiability can disrupt, heal, expand, and modify the world around us, addressing its problems, weaknesses, points of failure, and fragilities.

But before we dive into the full list, let’s cover three quick examples as an introductory illustration:

  1. Programmable identity: Today, when someone needs to confirm their identity, age, citizenship, or residential address, they provide a passport, ID, or other formal documents. With zero-knowledge proofs, one can manage all their documents client-side and provide proofs of specific properties upon request, without handing over the documents themselves. One should note that this use case strictly requires a privacy-preserving composable blockchain (e.g. Aztec).
  2. Private machine learning: combining several datasets and a model without the parties disclosing the datasets or the model to each other.
  3. A smart home that processes all data client-side (i.e. inside the smart home) without sharing its inhabitants’ data with anyone (even Google!).
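The programmable identity case boils down to selective disclosure: commit to a whole document once, then reveal single attributes on demand. Below is a minimal hash-based sketch; a real system would use a Merkle tree plus a client-side zero-knowledge proof, so that even the other attributes’ hashes stay hidden (all attribute names and values are hypothetical):

```python
import hashlib

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

# A hypothetical identity document, committed attribute by attribute.
attributes = {"name": "A. Citizen", "age_over_18": "true", "citizenship": "PT"}
leaves = {k: h(f"{k}={v}") for k, v in attributes.items()}

# The holder publishes (e.g. registers on-chain) a single root commitment.
root = h("|".join(sorted(leaves.values())))

def disclose(key: str):
    """Reveal one attribute plus the other leaves' hashes -- not their values."""
    others = sorted(v for k, v in leaves.items() if k != key)
    return attributes[key], key, others

def check(root: str, value: str, key: str, other_leaves: list) -> bool:
    """Recompute the root from the revealed attribute and the other hashes."""
    return h("|".join(sorted(other_leaves + [h(f"{key}={value}")]))) == root

value, key, others = disclose("age_over_18")
assert check(root, value, key, others)  # verifier learns only this attribute
```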

Now we have some intuition for how zero-knowledge proofs and privacy-preserving blockchains can create an alternative to the current reality: several of the cases above fix data privacy and individual sovereignty issues, while others unlock new forms of collaboration and coordination. Finally, we are ready to dive into the longlist of blockchain, privacy, and verifiability use cases, where these three factors become real game changers for the world and humanity’s future.

Most of the ideas described below were borrowed from brilliant minds, either from their public presentations or personal talks. Credit to Barry, Steven, Zac, freeatnet.eth, Jessica, Henry, Tarun, and Joe for being brilliant minds.

We will also group the use cases into five categories:

  1. Efficient coordination – allows small groups of individuals, by coordinating efficiently, to compete with much larger agents.
  2. Verifiable computation – “truth objectivity” allows individuals to be sure that something is true without relying on the honesty of institutions or other individuals.
  3. Data immutability and robustness – makes the cyberspace environment more long-lasting, reliable, and independent of institutions and individual agents.
  4. Improved world economic efficiency – allows individuals and institutions to utilize the resources at their disposal in a more economically efficient manner.
  5. Privacy – just privacy.

In the table below, we categorize a number of solutions to specific problems we will probably meet within the next decades while the digital share of the world is expanding and rooting deeper and deeper into our reality. We classify them according to whether they require blockchain, privacy, verifiability (provided by zero-knowledge), or a specific combination of these three. In most cases, blockchain serves as a coordination layer, privacy is engaged in all cases where participants can’t go with 100% transparency of all data, and zero-knowledge adds verifiability.

Under blockchain, we assume a decentralized one where nodes are run by thousands of diverse, independent parties. This decentralization makes the network credibly neutral and provides strong security guarantees. Today, Ethereum fits these criteria more than anything else. Layer 2s on top of Ethereum inherit its security properties; however, their credible neutrality depends on design details such as the approach to block building and the upgrade mechanism.

Under privacy, we mean that whatever we want to stay private should be fully processed client-side on the user's device and should not be exposed to any other parties.

We also consider combining blockchain, privacy, and zero-knowledge with other existing technologies such as:

  • MPC and 2PC – enable multiple parties (or two parties, in the 2PC case) – each holding their own private data – to jointly compute the value of a public function over that data while keeping each party’s input secret.
  • TEE – a secure area of a main processor that prevents unauthorized entities outside the TEE from reading its data, and prevents the code inside the TEE from being replaced or modified by unauthorized entities.
  • Any existing cryptographic primitive, such as an ECDSA signature (an example of a public-key digital signature algorithm).
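The core idea behind MPC can be sketched with additive secret sharing: each party splits its private input into random-looking shares, and only the sum of everyone’s shares is ever reconstructed. This toy version assumes honest participants; real protocols add exactly the verifiability discussed next (the salary figures are hypothetical):

```python
import secrets

M = 2**61 - 1  # public modulus; all arithmetic is done mod M

def share(value: int, n: int) -> list:
    """Split a private value into n additive shares that look random alone."""
    parts = [secrets.randbelow(M) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % M)  # shares sum back to the value
    return parts

# Three parties each secret-share their private salary.
salaries = [4200, 5100, 3900]
shares = [share(s, 3) for s in salaries]

# Party i sums the i-th share of every input -- it sees only random values.
partial_sums = [sum(col) % M for col in zip(*shares)]

# Combining the partial sums reveals ONLY the total, not any individual input.
total = sum(partial_sums) % M
assert total == sum(salaries)  # 13200
```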

Why can’t we use these technologies separately, and why do we need to combine them with zero-knowledge and blockchain? Because we don’t want to merely trust that those using them are acting in good faith; we want to be able to verify it.

Efficient coordination

| Use case | Blockchain | Privacy | Verifiability |
| --- | --- | --- | --- |
| Proof of eligibility (for participation, access, etc.)<br>Example 1: Access to a building (e.g. an office).<br>Example 2: Access to military data regarding an ongoing war, where a leak would lead to loss of life. | Optional | Need | Need |
| Proof of innocence<br>Example: Compliance (proof of not interacting with agents meeting specific criteria, e.g. North Korean hackers). | Need | Need | Need |
| A mechanism ensuring collective agreement for crucial decisions<br>Example 1: A minimum threshold for pushing the nuclear button, where the voting members fairly represent all parties that should be represented (MPC+ZK).<br>Example 2: A minimum threshold to approve board-of-directors decisions with huge financial impact. | Need | Need | Need |
| Prevention of malicious cooperation such as bribery or voting-system tampering<br>Example 1: A voting mechanism where it is impossible to prove that one has voted for a specific candidate.<br>Example 2: Voters can independently verify that their vote was fairly recorded in the election tally. | Need | Need | Need |
| “Programmable” trust: verification of specific properties of individuals (e.g. belonging to specific groups, visiting specific events, etc.) that allows the building of “safe spaces”<br>Example: If someone studied at the same university and is a member of the same charity organization, it increases the probability that we share similar values, and my default trust in this person can rationally be higher than in those who don’t meet these properties. | Need | Need | Need |
| A “temperature check” before making decisions or taking actions with extremely high general costs or a high cost of being among the first<br>Example: A government member can check how many other government members are ready to oppose the current ruling coalition, and start to act only if they are sure they will have sufficient supporters (MPC+ZK). | Need | Need | Need |
| Checking alignment or expectations regarding a specific question in a neutral way, as (i) participants don’t need to disclose sensitive information to each other and (ii) no one needs to express their preferences first, so there is no time priority<br>Example 1: Salary negotiation between employer and employee where neither party needs to reveal their preference first, which would let the other party adjust their own based on the disclosed information.<br>Example 2: Negotiating the type of relationship that works best for all participants, such as “Should friends start a romantic relationship?” (2PC+ZK or MPC+ZK, depending on the number of participants). | Need | Need | Need |
| Private DAOs as a type of legal entity<br>Example: A programmable organization with a task-based system where compensation payments fully depend on executed milestones. | Need | Need | Need |
| Neutral personal data regulation<br>Example: Instead of governments enforcing the laws and standards around personal data (e.g. the EU’s GDPR), these standards are designed and regulated by a neutral third party while their execution is verifiable. | Need | Need | Need |
| Fair incentive mechanisms<br>Example: Corporations commit to a specific maximum level of negative externality regarding the greenhouse effect. If the maximum level is exceeded even by a tiny amount, the agent is “slashed” – monetarily or otherwise (e.g. “reputational slashing” through public reporting to a worldwide audience). | Need | Need | Need |
| Blended experiences between offline and online domains<br>Example: Pokémon Go, where blockchain serves as a coordination layer for a number of players. | Need | Need | Need |

Verifiable computations / truth objectivity

| Use case | Blockchain | Privacy | Verifiability (ZK) |
| --- | --- | --- | --- |
| Ensuring that an autonomous automatic system will work exactly the way it is expected to (i.e. according to the designed algorithm)<br>Example: Self-driving cars, autopilot planes, and self-navigating rockets can execute only the algorithm they committed to. | Optional | Optional | Need |
| Executing an algorithm while keeping it secret, i.e. executing any algorithm that should be run by an external party while the algorithm owner prefers not to disclose the underlying mechanism (for commercial or personal motivations)<br>Example: A dating matching algorithm (2PC/MPC+ZK). | Optional | Need | Need |
| Content authenticity: verifying specific criteria of content (pictures, videos, etc.) such as no edits, produced by a human, time and place of production, etc.<br>Example: Distinguishing between deepfakes and authentic sources. | Optional | Optional | Need |
| Ensuring that some technology was used in a proper way<br>Example 1: Ensuring that FHE encryption for cloud data storage was done correctly.<br>Example 2: A zk-SNARK wrapped around an NFC signature from a chip’s public key, proving that one owns a valid signature without revealing the raw signature itself. | Optional | Need | Need |
| Direct data verification from a web2 service<br>Example: Verifying one’s Twitter username for use in an external (non-Twitter) service. | Optional | Need | Need |
| Optimizing subjective manual processes by converting them into objective, non-manual ones<br>Example: Today, to confirm a bank transfer over a specific threshold, one needs to confirm the transfer manually through the bank’s call center. This manual process can be eliminated by using a zero-knowledge proof to pass the bank’s security check. | Optional | Optional | Need |
| Converting post-factum verification into preliminary verification<br>Example: In the modern blockchain world, compliance and KYC information is supplied post-factum, after the transaction has been executed, when it might be too late to address it. Using zero-knowledge, the transaction can be valid only if the counterparties have proved that they meet specific criteria (e.g. possess a KYC token issued by a KYC provider). Hence, fraud attempts can be handled in time. | Need | Need | Need |
| Proof of ML models’ integrity<br>Example 1: The model owner proves that the result of running the ML model is fair without revealing any information about the model itself.<br>Example 2: Proving that the submitted model was executed on the committed data without revealing the data. Combining both examples, the model owner publicly commits to a model and generates a zk-proof that the committed model was applied to the user’s submitted input, yielding the claimed result.<br>Example 3: When an LLM makes decisions on its own (e.g. selecting military targets), proving how the LLM made the decision when the information is fragmented and different parties have access to different fragments. One should be able to see how the data was put together, and be able to question it, without revealing all the data to any single party. | Optional | Need | Need |
| Proof of treating all participants in the same way<br>Example: Proof of no price discrimination. | Need | Need | Need |
| Verifiability in supply chains<br>Example 1: Proving the origin and authenticity of goods and materials.<br>Example 2: Proving that materials meet the compliance standards of modern supply chains without revealing sensitive corporate information such as the identities of suppliers and customers. | Need | Need | Need |
| Verified rating services<br>Example: Employer reviews (e.g. Glassdoor) where all reviewers (i) stay anonymous and (ii) provide proofs of being eligible to review a specific company or role. | Need | Need | Need |
| Proof of a mistake in a decision made by a big-data machine<br>Example 1: If one’s transaction was flagged as fraud but is not, one can generate verifiable proof that the transaction is legitimate.<br>Example 2: If one was marked as affiliated with a specific person (e.g. a terrorist) when one is not, one can generate verifiable proof that there are no connections or interactions with this party. | Optional | Optional | Need |
| A list of attested events (created client-side) based on a data stream<br>Example 1: Using zk email as a data stream, based on a parking bill from a government authority, to generate a proof that the email owner has a car and lives in a specific city.<br>Example 2: Based on shipment delivery updates delivered through zk email, generating a proof of order delivery. | Optional | Need | Need |

Data immutability and robustness

| Use case | Blockchain | Privacy | Verifiability (ZK) |
| --- | --- | --- | --- |
| Robustness of interoperability<br>Example: Guaranteed access configurations for the Twitter API, so that Twitter can’t withdraw or modify an API version and thus break the operations of those using it. | Need | Need | Need |
| Games/virtual worlds ownership<br>Example: Games’ immutability protects gamers from game studios’ “dictatorship”, enforcing ownership over game assets and rules – meaningful for professional gamers who spend a huge chunk of their lives playing. | Need | Need | Optional |
| Immutability as an ownership guarantee<br>Example 1: Making it impossible for automated algorithms to suspend social media accounts (especially those with large audiences, e.g. creators) without proof of violation.<br>Example 2: Making it impossible for automated algorithms to freeze ad accounts in social media without proof of violation. | Optional | Need | Need |
| Signed internet history that is recoverable in case of web2 internet server destruction (e.g. putting every internet website on a rollup)<br>Example 1: Proving data authenticity in case of “sybil attacks”.<br>Example 2: Resisting AI DDoS attacks.<br>Example 3: Making malicious actions by those in power impossible thanks to data immutability (e.g. revocation of citizenship).<br>Example 4: Preserving core digital infrastructure in case of internet shutdown, firewalling, or censorship (e.g. during a revolution or war in countries with oppressive regimes). | Need | Need | Need |
| A “car” transferring sealed data between various web2 and web3 services (as an alternative to the current API mechanism). This allows one to (i) avoid trusting APIs (e.g. Facebook), (ii) get the data in the format one needs, and (iii) verify data authenticity.<br>Example 1: Web2 and web3 oracles.<br>Example 2: Services built on top of existing services, such as “who unfollowed me” on top of Instagram.<br>Example 3: Cross-chain verification. | Need | Need | Need |
| Physical and digital property ownership history<br>Example: Verifiable ownership history of rare antiques or pieces of art. | Need | Need | Need |
| Eliminating the risk of personal database leaks: with client-side personal data processing, there is no need to collect and store huge personal databases that can be leaked or hacked. | Need | Need | Need |

Efficiency

| Use case | Blockchain | Privacy | Verifiability (ZK) |
| --- | --- | --- | --- |
| Creating a global settlement layer for the world’s assets: digitizing real-world assets (e.g. property, pieces of art, IP, etc.) to be able to use them in the digital world<br>Example: Using physical assets as collateral for digital lending. | Need | Dearly need | Need |
| Improving the efficiency of decentralized services so that they are able to compete with centralized ones<br>Example: For democracies (which require a lot of multiparty coordination) to be able to compete with authoritarian regimes, which are highly centralized and coordinated. | Need | Need | Need |
| Neutral databases of authorized attestations<br>Example: Proof of holding a certificate issued by the International Culinary Institute that can be verified by anyone without additional requests to the issuing institution. | Need | Need | Need |
| DeFi as an alternative to traditional financial institutions, to (i) provide banking services for underbanked parts of the world and (ii) provide humans with censorship resistance and immutability, where institutions can’t limit access to assets or the number of allowed actions. One should note that even though we have been talking about DeFi for quite a while, the DeFi we have today is fully transparent: if one borrows money, everyone can see it, and all payments (e.g. salaries), savings, and assets are easily observable. To make DeFi a real alternative to traditional financial institutions, we need privacy-preserving composable blockchains; otherwise it’s a toy example.<br>Example 1: Buying stablecoins in countries where people are not allowed to buy foreign currency.<br>Example 2: Immutable private transactions or private compliant transactions.<br>Example 3: Providing basic banking services in countries with low GDP and weak institutions that can’t provide their population with access to banking.<br>Example 4: A broker-dealer network for securities trading services. | Need | Dearly need | Need |
| Reducing the amount of data that needs to be transmitted and processed: ZKPs can significantly reduce the energy demands of IoT devices, improving efficiency and reducing costs. | Optional | Optional | Need |
| Programmable spending based on on-chain income<br>Example: A recurring monthly debt payment as the first spending from one’s salary. | Need | Need | Optional |

Glimpsing how all these use cases become possible by combining zero-knowledge and privacy-preserving blockchains, one should note that even though these technologies are absolutely awesome, they are still not magic pills. We need developers, engineers, and architects to combine them in smart ways. That is to say, the real magic pills are the developers, engineers, and architects capable of doing this and caring about our common future as much as the Aztec team does.

The components (i.e. blockchain, privacy, and verifiability) have been clear at a theoretical level for years. But even when they started to be implemented, most implementations were not really feasible for real-world usage and were hard to combine with each other. This was a serious issue, as web3 applications should be competitive with the existing web2 stuff we already use.

Aztec Labs addressed this issue and has been developing Noir, an open-source, domain-specific language that makes it easy for developers to write programs with arbitrary programmable logic, zero-knowledge proofs, and composable privacy. The program logic can vary from simple “I know X such that X < Y” to complex RSA signature verification. If you’re curious about Noir, check the talk “Learn Noir in an afternoon” by José Pedro Sousa.

Privacy-preserving blockchain and zero-knowledge in spacetech and defense

For those who still find the 47 use cases above too cypherpunk-ish, the next section talks about the need for privacy-preserving blockchains and zero-knowledge cryptography in a very specific domain, solving very specific problems.

Blockchains and zero-knowledge in space

Today, operating in near-Earth space is a part of everyday life: countries, industries, and businesses rely on the day-to-day operation of satellites and the complex infrastructure created in Earth’s orbit. That is to say, a number of parties (some of them are obviously hostile towards each other) manage a number of programmable “nodes” with sensitive data. That sounds exactly like a coordination problem that privacy-preserving blockchains and zero-knowledge are able to handle!

  • Case 1: satellite coordination
    Most large countries have satellites flying in space on specific missions. This resembles car or plane traffic, except the traffic consists of satellites going from point A to point B. The goal is to manage this traffic so that there are no collisions or accidents. The coordination problem is that space belongs to everyone – so the model of plane traffic, where each national authority controls its own airspace, doesn’t work.

Instead, satellite owners have to coordinate with each other (and as a consequence trust each other) to negotiate space management. This requires pretty deep data disclosure and trust that other parties have good intentions.

We can utilize zero-knowledge to provide proof of correct code execution, meaning that the satellite will perform exactly what its owner promises. For example, it can generate “proof of route”, which is a crucial component of space coordination as one additional zero in the code can send the satellite spinning forever or force it to change its trajectory and crash into other satellites. Privacy-preserving blockchain can be used as a coordination layer to deploy protocols for specific use cases.

  • Case 2: operating over hostile areas
    Sometimes satellites need to fly over the territory of countries with whom they are “not friends”. While they do, adversaries may try to hack the satellite, interfere with its operations, or forge its data – for example, substituting AI-generated images for authentic ones. To mitigate this risk, we can use zero-knowledge to provide proofs of data authenticity or proofs of metadata.
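As a rough intuition for “proof of data authenticity”, consider tagging each raw sensor frame at capture time, before any processing, so that a later substitution (such as an AI-generated image) is detectable. The sketch below uses an HMAC with a device key as a simplified stand-in for a hardware signature or a zero-knowledge attestation; the key and framing are illustrative only:

```python
import hashlib
import hmac

# Hypothetical per-satellite key; a real design would use a hardware-backed
# signing key rather than a shared secret.
DEVICE_KEY = b"per-satellite secret provisioned at manufacture"

def attest_frame(raw_frame: bytes) -> str:
    """Tag a raw sensor frame at capture time, before any processing."""
    return hmac.new(DEVICE_KEY, raw_frame, hashlib.sha256).hexdigest()

def is_authentic(frame: bytes, tag: str) -> bool:
    """Constant-time check that a frame matches its capture-time tag."""
    return hmac.compare_digest(attest_frame(frame), tag)

frame = b"\x00\x01 raw pixel data"
tag = attest_frame(frame)
assert is_authentic(frame, tag)
assert not is_authentic(b"AI-generated substitute", tag)
```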

  • Case 3: planetary defense
    Planetary defense is the effort to monitor and protect Earth from asteroids, comets, and other objects in space. Life on Earth has been drastically altered by asteroid impacts before: a planet-shaking strike, for example, led to the extinction of the non-avian dinosaurs.
    Planetary defense combines comet and asteroid detection, trajectory assessment, tracking over time, and developing tools to prevent a possible collision (including slamming a spacecraft into the target, pulling it off course with gravity, and nuclear explosions).
    Planetary defense is operated by NASA, the European Space Agency, and other organizations, all representing different countries and dealing with sensitive data collection and secret technologies (e.g. satellite engineering mechanisms).
    ZKPs and privacy-preserving blockchains can be a coordination layer to process data collected by different parties without exposing it to others.  

Blockchains and zero-knowledge for LLMs on battlefields, enterprise, and everyone’s daily lives

LLMs are already, obviously, very valuable: they are shaping a profoundly different economy and will have a huge impact on people’s daily lives, the geopolitical balance, and enterprise operations.

One of the issues with LLMs is that they can’t tell you how they reached their conclusions. Yet some of these conclusions change the world and affect millions, if not billions, of people – they are used by businesses, by countries, on battlefields. Take the case of using LLMs to detect targets on a battlefield. The model tells the commander “This is the target”, but it provides no proof that the target was detected correctly. Here, the cost of a wrong detection is at least one life, maybe a dozen, or several hundred thousand.

Zero-knowledge proofs can act as a “neutral arbiter”, proving what data the model was trained on, what data it used to reach a conclusion, how that data was put together, what the underlying algorithm is, and so on. Furthermore, this can be done without revealing any specifics of either the data or the model.
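One concrete building block for such proofs is committing to the training set as a Merkle root: the model owner publishes a single 32-byte root, and can later prove that a specific record was among the committed leaves without revealing the rest of the dataset. A minimal sketch, not tied to any particular ZK system:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Commit to a dataset as a single 32-byte root."""
    level = [_h(l) for l in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Sibling hashes for one leaf, each flagged True if the sibling is on the right."""
    level = [_h(l) for l in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        path.append((level[sibling], sibling > index))
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_inclusion(leaf: bytes, path: list[tuple[bytes, bool]], root: bytes) -> bool:
    """Recompute the root from a leaf and its sibling path."""
    node = _h(leaf)
    for sibling, sibling_is_right in path:
        node = _h(node + sibling) if sibling_is_right else _h(sibling + node)
    return node == root

data = [b"record-0", b"record-1", b"record-2", b"record-3", b"record-4"]
root = merkle_root(data)            # published once, e.g. on a blockchain
proof = inclusion_proof(data, 2)
assert verify_inclusion(b"record-2", proof, root)
assert not verify_inclusion(b"record-x", proof, root)
```

A full zero-knowledge system would additionally hide which record is being proven; the commitment layer, however, looks much like this.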

Note that today the question is no longer whether we should use LLMs for specific use cases. The reality is that they are already used everywhere, by all major corporations, countries, and their governments. What we still can do, while LLMs flood our world, is hold model owners accountable for their models – that is to say, enforce some formal LLM compliance.

Similar to private data processing and collection today, some legislation for AI regulation will be set up as well. However, mere legislation is not enough; standards should be transparent and equal for everyone, and they should be followed.

Can a trusted third party that is credible, shares pro-democratic values, and is neutral enforce compliance with such legislation? In a domain like AI, where the stakes are at the very least huge amounts of money (if not the shape of the world’s geopolitical landscape for years to come), there are no neutral parties: everyone has their own interests and skin in the game.

With LLMs, we have no custodial relationship to the data, and we can’t prove how a decision was made. At the same time, we need to know how the data was put together and be able to question it; that is an absolute necessity for the presumption of innocence, for example. ZKPs as a source of truth and privacy-preserving blockchains as a coordination layer can solve this and enforce compliance with AI standards.

Conclusion

We are just at the very beginning. Privacy-preserving blockchains and zero-knowledge cryptography are needed in spacetech, agrotech, medtech, biotech, AI/ML, military technology, social networks, retail, robotics, big data, IoT, media and entertainment, edtech, fintech, logistics, neurotech, etc.

The 47 use cases above seem fairly obvious today, but the real landscape of ZKP and blockchain usage in 20 years will be much wider, deeper, and more diverse. Some future use cases can be predicted today; others are almost impossible to imagine (unless you’re a true visionary).

One thing is absolutely clear, however: verifiability is a required property of a world whose offline and online universes merge ever more deeply. ZKPs as a “source of truth” and privacy-preserving blockchains as a coordination layer are a very promising duo for making the world verifiable.

It will take us time. The right moment to start was yesterday. But today is also a good day: for those ready to act together with Aztec – fill in the form.

Sources:

  • An article “Planetary defense: Protecting Earth from space-based threats” by Vicky Stein.
  • A talk “On the Global Tech Race” by Alex Karp.
  • A talk “Charting Taiwan DID as a Showcase & Experiment” by Noah Yeh.
  • A talk “State of ZK ECDSA” by Gauthier.
  • A talk “2PC is for Lovers” by Barry Whitehat.
Aztec Network
24 Sep

Testnet Retro - 2.0.3 Network Upgrade

Special thanks to Santiago Palladino, Phil Windle, Alex Gherghisan, and Mitch Tracy for technical updates and review.

On September 17th, 2025, a new network upgrade was deployed, making Aztec more secure and flexible for home stakers. This upgrade, shipped with all the features needed for a fully decentralized network launch, includes a completely redesigned slashing system that allows inactive or malicious operators to be removed, and does not penalize home stakers for short outages. 

With over 23,000 operators running validators across 6 continents (in a variety of conditions), it is critical not to penalize nodes that temporarily drop due to internet connectivity issues. Users of the network are just as globally distributed, and some of them have older phones, which is why a significant effort was put into shipping a low-memory proving mode that allows older mobile devices to send transactions and use privacy-preserving apps.

The network was successfully deployed, and all active validators on the old testnet were added to the queue of the new testnet. This manual migration was only necessary because major upgrades to the governance contracts had gone in since the last testnet was deployed. The new testnet started producing blocks after the queue started to be “flushed,” moving validators into the rollup. Because the network is fully decentralized, the initial flush could have been called by anyone. The network produced ~2k blocks before an invalid block made it to the chain and temporarily stalled block production. Block production is now restored and the network is healthy. This post explains what caused the issue and provides an update on the current status of the network. 

Note: if you are a network operator, you must upgrade to version 2.0.3 and restart your node to participate in the latest testnet. If you want to run a node, it’s easy to get started.

What’s included in the upgrade? 

This upgrade was a team-wide effort that optimized performance and implemented all the mechanisms needed to launch Aztec as a fully decentralized network from day 1. 

Feature highlights include: 

  • Improved node stability: The Aztec node software is now far more stable. Users will see far fewer crashes and increased performance in terms of attestations and blocks produced. This translates into a far better experience using testnet, as transactions get included much faster.
  • Boneh–Lynn–Shacham (BLS) keys: When a validator registers on the rollup, they also provide keys that allow BLS signature aggregation. This unlocks future optimizations where signatures can be combined via p2p communication, then verified on Ethereum, while proving that the signatures come from block proposers.
  • Low-memory proving mode: The client-side proving requirements have dropped dramatically from 3.7GB to 1.3GB through a new low-memory proving mode, enabling older mobile devices to send Aztec transactions and use apps like zkPassport. 
  • AVM performance: The Aztec Virtual Machine (AVM) performance has seen major improvements with constraint coverage jumping from 0% to approximately 90-95%, providing far more secure AVM proving and more realistic proving performance numbers from provers. 
  • Flexible key management: The system now supports flexible key management through keystores, multi-EOA support, and remote signers, eliminating the need to pass private keys through environment variables and representing a significant step toward institutional readiness. 
  • Redesigned slashing: Slashing has been redesigned to provide much better consensus guarantees. Further, the new configuration allows nodes not to penalize home stakers for short outages, such as 20-minute interruptions. 
  • Slashing Vetoer: The Slasher contract now has an explicit vetoer: an address that can prevent slashing. At Mainnet, the initial vetoer will be operated by an independent group of security researchers who will also provide security assessments on upgrades. This acts as a failsafe in the event that nodes are erroneously trying to slash other nodes due to a bug.

With these updates in place, we’re ready to test a feature-complete network. 

What happened after deployment? 

As mentioned above, block production started when someone called the flush function and a minimum number of operators from the queue were let into the validator set. 

Shortly thereafter, while testing the network, a member of the Aztec Labs team spun up a “bad” sequencer that produced an invalid block proposal. Specifically, one of the state trees in the proposal was tampered with. 

Initial block production 

The expectation was that this would be detected immediately and the block rejected. Instead, a bug was discovered in the validator code where the invalid block proposal wasn't checked thoroughly enough. In effect, the proposal got enough attestations, so it was posted to the rollup. Due to extra checks in the nodes, when the nodes pulled the invalid block from Ethereum, they detected the tampered tree and refused to sync it. This is a good outcome as it prevented the attack. Additionally, prover nodes refused to prove the epoch containing the invalid block. This allowed the rollup to prune the entire bad epoch away. After the prune, the invalid state was reset to the last known good block.

Block production stalled

The prune revealed another, smaller bug: after a failed block sync, a prune does not get processed correctly and requires a node restart to clear up. This led to a 90-minute outage from the moment the invalid block proposal was posted until the testnet recovered. The time was split roughly equally between waiting for the prune to happen and restarting the nodes so they could process it.

The Fix

Validators were correctly re-executing all transactions in the block proposals and verifying that the world state root matched the one in the block proposal, but they failed to check that intermediate tree roots, which are included in the proposal and posted to the rollup contract on L1, were also correct. The attack tweaked one of these intermediate roots while proposing a correct world state root, so it went unnoticed by the attestors. 
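In schematic terms, the bug was a validity check that compared only the final world state root while trusting the intermediate roots included in the proposal. The toy model below (which does not reflect Aztec’s actual tree structure, and uses invented names throughout) shows why re-deriving every posted root from re-execution catches tampering that a world-root-only check misses:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def roots_from_txs(txs: list[bytes]) -> dict:
    """Toy stand-in for re-execution: derive intermediate tree roots and
    the world state root from the transaction list."""
    note_root = h(b"notes", *txs)
    nullifier_root = h(b"nullifiers", *txs)
    world_root = h(note_root, nullifier_root)
    return {"note": note_root, "nullifier": nullifier_root, "world": world_root}

def validate_buggy(proposal: dict, txs: list[bytes]) -> bool:
    # Pre-fix behavior: only the world state root is re-checked.
    return proposal["world"] == roots_from_txs(txs)["world"]

def validate_fixed(proposal: dict, txs: list[bytes]) -> bool:
    # Post-fix behavior: every root posted to L1 must match re-execution.
    return proposal == roots_from_txs(txs)

txs = [b"tx1", b"tx2"]
honest = roots_from_txs(txs)
# Correct world root, but a tampered intermediate root, as in the attack.
tampered = dict(honest, note=h(b"garbage"))

assert validate_buggy(tampered, txs)      # bug: slips through attestation
assert not validate_fixed(tampered, txs)  # fix: tampering is caught
```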

As mentioned above, even though the block made it through the initial attestation and was posted to L1, the invalid block was caught by the validators, and the entire epoch was never proven as provers refused to generate a proof for the inconsistent state. 

A fix was pushed that resolved this issue and ensured that invalid block proposals would be caught and rejected. A second fix was pushed that ensures inconsistent state is removed from the uncommitted cache of the world state.

Block production restored

What’s Next

Block production is currently running smoothly, and the network health has been restored. 

Operators who had previously upgraded to version 2.0.3 will need to restart their nodes. Any operator who has not upgraded to 2.0.3 should do so immediately. 

Attestation and Block Production rate on the new rollup

Slashing has also been functioning as expected. Below you can see the slashing signals for each round. A single signal can contain votes for multiple validators, but a validator's attester needs to receive 65 votes to be slashed.

Votes on slashing signals

Join us this Thursday, September 25, 2025, at 4 PM CET on the Discord Town Hall to hear more about the 2.0.3 upgrade. To stay up to date with the latest updates for network operators, join the Aztec Discord and follow Aztec on X.

Noir
18 Sep

Just write “if”: Why Payy left Halo2 for Noir

The TL;DR:

Payy, a privacy-focused payment network, just rewrote its entire ZK architecture from Halo2 to Noir while keeping its network live, funds safe, and users happy. 

Code that took months to write now takes weeks (with MVPs built in as little as 30 minutes). Payy’s codebase shrank from thousands of lines to 250, and now their entire engineering team can actually work on its privacy infra. 

This is the story of how they transformed their ZK ecosystem from one bottlenecked by a single developer to a system their entire team can modify and maintain.

Starting with Halo2

Eighteen months ago, Payy faced a deceptively simple requirement: build a privacy-preserving payment network that actually works on phones. That requires client-side proving.

"Anyone who tells you they can give you privacy without the proof being on the phone is lying to you," Calum Moore - Payy's Technical Lead - states bluntly.

To make a private, mobile network work, they needed:

  • Mobile proof generation with sub-second performance
  • Minimal proof sizes for transmission over weak mobile signals
  • Low memory footprint for on-device proving
  • Ethereum verifier for on-chain settlement

To start, the team evaluated available ZK stacks through their zkbench framework:

STARKs (e.g., RISC Zero): Memory requirements made them a non-starter on mobile. Large proof sizes are unsuitable for mobile data transmission.

Circom with Groth16: Required trusted setup ceremonies for each circuit update. It had “abstracted a bit too early” and, as a result, is not high-level enough to develop in comfortably, yet not low-level enough for fine-grained control and optimization, said Calum.

Halo2: Selected based on existing production deployments (ZCash, Scroll), small proof sizes, and an existing Ethereum verifier. As Calum admitted with the wisdom of hindsight: “Back a year and a half ago, there weren’t any other real options.”

Bus factor = 1 😳

Halo2 delivered on its promises: Payy successfully launched its network. But cracks started showing almost immediately.

First, they had to write their own chips from scratch. Then came the real fun: if statements.

"With Halo2, I'm building a chip, I'm passing this chip in... It's basically a container chip, so you'd set the value to zero or one depending on which way you want it to go. And, you'd zero out the previous value if you didn't want it to make a difference to the calculation," Calum explained, “when I’m writing in Noir, I just write ‘if’. "

With Halo2, writing an if statement (programming 101) required building custom chip infra. 
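The “container chip” pattern Calum describes is the standard way conditionals are arithmetized in circuits: since a circuit has no branching, both branches are computed and a boolean selector s picks one via result = s*a + (1-s)*b, with the constraint s*(s-1) = 0 forcing s to be 0 or 1. A sketch of the idea in plain modular arithmetic (the field modulus here is arbitrary, not any real proving system’s):

```python
P = 2**31 - 1  # small prime standing in for the circuit's native field

def assert_boolean(s: int) -> None:
    # Circuit constraint s * (s - 1) == 0 forces s into {0, 1}.
    assert s * (s - 1) % P == 0

def select(s: int, a: int, b: int) -> int:
    """The arithmetized 'if': both branches exist; the selector zeroes one out."""
    assert_boolean(s)
    return (s * a + (1 - s) * b) % P

assert select(1, 10, 20) == 10  # s = 1 takes the 'then' branch
assert select(0, 10, 20) == 20  # s = 0 takes the 'else' branch
```

In Noir the compiler emits these selector constraints for you; the developer just writes `if`.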

Binary decomposition, another fundamental operation for rollups, meant more custom chips. The Halo2 implementation quickly grew to thousands of lines of incomprehensible code.

And only Calum could touch any of it.

The Bottleneck

"It became this black box that no one could touch, no one could reason about, no one could verify," he recalls. "Obviously, we had it audited, and we were confident in that. But any changes could only be done by me, could only be verified by me or an auditor."

In engineering terms, this is called a bus factor of one: if Calum got hit by a bus (or took a vacation to Argentina), Payy's entire proving system would be frozen. "Those circuits are open source," Calum notes wryly, "but who's gonna be able to read the Halo2 circuits? Nobody."

Evaluating Noir: One day, in Argentina…

During a launch event in Argentina, "I was like, oh, I'll check out Noir again. See how it's going," Calum remembers. He'd been tracking Noir's progress for months, occasionally testing it out, waiting for it to be reliable.

"I wrote basically our entire client-side proof in about half an hour in Noir. And it probably took me - I don't know, three weeks to write that proof originally in Halo2."

Calum recreated Payy's client-side proof in Noir in 30 minutes. And when he tested the proving speed, without any optimization, they were seeing 2x speed improvements.

"I kind of internally… didn't want to tell my cofounder Sid that I'd already made my decision to move to Noir," Calum admits. "I hadn't broken it to him yet because it's hard to justify rewriting your proof system when you have a deployed network with a bunch of money already on the network and a bunch of users."

Rebuilding (Ship of Theseus-ing) Payy

Convincing a team to rewrite the core of a live financial network takes some evidence. The technical evaluation of Noir revealed improvements across every metric:

Proof Generation Time: Sub-0.5 second proof generation on iPhones. "We're obsessive about performance," Calum notes (they’re confident they can push it even further).

Code Complexity: Their entire ZK implementation compressed from thousands of lines of Halo2 to just 250 lines of Noir code. "With rollups, the logic isn't complex—it's more about the preciseness of the logic," Calum explains.

Composability: In Halo2, proof aggregation required hardwiring specific verifiers for each proof type. Noir offers a general-purpose verifier that accepts any proof of consistent size.

"We can have 100 different proving systems, which are hyper-efficient for the kind of application that we're doing," Calum explains. "Have them all aggregated by the same aggregation proof, and reason about whatever needs to be."

Migration Time

Initially, the goal was to "completely mirror our Halo2 proofs": no new features. This conservative approach meant they could verify correctness while maintaining a live network.

The migration preserved Payy's production architecture:

  • Rust core (According to Calum, "Writing a financial application in JavaScript is borderline irresponsible")
  • Three-proof system: client-side proof plus two aggregators  
  • Sparse Merkle tree with Poseidon hashing for state management

When things are transparent, they’re secure

"If you have your proofs in Noir, any person who understands even a little bit about logic or computers can go in and say, 'okay, I can kinda see what's happening here'," Calum notes.

The audit process completely transformed. With Halo2: "The auditors that are available to audit Halo2 are few and far between."

With Noir: "You could have an auditor that had no Noir experience do at least a 95% job."

Why? Most audit issues are logic errors, not ZK-specific bugs. When auditors can read your code, they find real problems instead of getting lost in implementation details.

Code Comparison

Halo2: Binary decomposition

  • Write a custom chip for binary decomposition
  • Implement constraint system manually
  • Handle grid placement and cell references
  • Manage witness generation separately
  • Debug at the circuit level when something goes wrong

Payy’s previous 383-line implementation of binary decomposition can be viewed here (pkg/zk-circuits/src/chips/binary_decomposition.rs).

Payy’s previous binary decomposition implementation

Meanwhile, binary decomposition is handled in Noir with the following single line.

pub fn to_le_bits<let N: u32>(self: Self) -> [u1; N]

(Source)
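Semantically, to_le_bits returns the N little-endian bits of a value, with an implicit range constraint that the value actually fits in N bits. A plain-language equivalent of what the circuit enforces:

```python
def to_le_bits(value: int, n: int) -> list[int]:
    """Little-endian bit decomposition, mirroring the semantics of Noir's to_le_bits."""
    assert 0 <= value < 2 ** n, "value must fit in n bits (the circuit's range check)"
    return [(value >> i) & 1 for i in range(n)]

# 13 = 0b1101, so the little-endian bits start with the least significant.
assert to_le_bits(13, 8) == [1, 0, 1, 1, 0, 0, 0, 0]
# Recomposing the bits gives back the original value.
assert sum(bit << i for i, bit in enumerate(to_le_bits(13, 8))) == 13
```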

What's Next

With Noir's composable proof system, Payy can now build specialized provers for different operations, each optimized for its specific task.

"If statements are horrendous in SNARKs because you pay the cost of the if statement regardless of its run," Calum explains. But with Noir's approach, "you can split your application logic into separate proofs, and run whichever proof is for the specific application you're looking for."

Instead of one monolithic proof trying to handle every case, you can have specialized proofs, each perfect for its purpose.

The Bottom Line

"I fell a little bit in love with Halo2," Calum admits, "maybe it's Stockholm syndrome where you're like, you know, it's a love-hate relationship, and it's really hard. But at the same time, when you get a breakthrough with it, you're like, yes, I feel really good because I'm basically writing assembly-level ZK proofs."

“But now? I just write ‘if’.”

Technical Note: While "migrating from Halo2 to Noir" is shorthand that works for this article, technically Halo2 is an integrated proving system where circuits must be written directly in Rust using its constraint APIs, while Noir is a high-level language that compiles to an intermediate representation and can use various proving backends. Payy specifically moved from writing circuits in Halo2's low-level constraint system to writing them in Noir's high-level language, with Barretenberg (UltraHonk) as their proving backend.

Both tools ultimately enable developers to write circuits and generate proofs, but Noir's modular architecture separates circuit logic from the proving system - which is what made Payy's circuits so much more accessible to their entire team, and now allows them to swap out their proving system with minimal effort as proving systems improve.

Payy's code is open source and available for developers looking to learn from their implementation.

Aztec Network
4 Sep

A New Brand for a New Era of Aztec

After eight years of solving impossible problems, the next renaissance is here. 

We’re at a major inflection point, with both our tech and our builder community going through growth spurts. The purpose of this rebrand is simple: to draw attention to our full-stack privacy-native network and to elevate the rich community of builders who are creating a thriving ecosystem around it. 

For eight years, we’ve been obsessed with solving impossible challenges. We invented new cryptography (Plonk), created an intuitive programming language (Noir), and built the first decentralized network on Ethereum where privacy is native rather than an afterthought. 

It wasn't easy. But now, we're finally bringing that powerful network to life. Testnet is live with thousands of active users and projects that were technically impossible before Aztec.

Our community evolution mirrors our technical progress. What started as an intentionally small, highly engaged group of cracked developers is now welcoming waves of developers eager to build applications that mainstream users actually want and need.

Behind the Brand: A New Mental Model

A brand is more than aesthetics—it's a mental model that makes Aztec's spirit tangible. 

Our Mission: Start a Renaissance

Renaissance means "rebirth"—and that's exactly what happens when developers gain access to privacy-first infrastructure. We're witnessing the emergence of entirely new application categories, business models, and user experiences.

The faces of this renaissance are the builders we serve: the entrepreneurs building privacy-preserving DeFi, the activists building identity systems that protect user privacy, the enterprise architects tokenizing real-world assets, and the game developers creating experiences with hidden information.

Values Driving the Network

This next renaissance isn't just about technology—it's about the ethos behind the build. These aren't just our values. They're the shared DNA of every builder pushing the boundaries of what's possible on Aztec.

Agency: It’s what everyone deserves, and very few truly have: the ability to choose and take action for ourselves. On the Aztec Network, agency is native.

Genius: That rare cocktail of existential thirst, extraordinary brilliance, and mind-bending creation. It’s fire that fuels our great leaps forward. 

Integrity: It’s the respect and compassion we show each other. Our commitment to attacking the hardest problems first, and the excellence we demand of any solution. 

Obsession: That highly concentrated insanity, extreme doggedness, and insatiable devotion that makes us tick. We believe in a different future—and we can make it happen, together. 

Visualizing the Next Renaissance

Just as our technology bridges different eras of cryptographic innovation, our new visual identity draws from multiple periods of human creativity and technological advancement. 

The Wordmark: Permissionless Party 

Our new wordmark embodies the diversity of our community and the permissionless nature of our network. Each letter was custom-drawn to reflect different pivotal moments in human communication and technological progress.

  • The A channels the bold architecture of Renaissance calligraphy—when new printing technologies democratized knowledge. 
  • The Z strides confidently into the digital age with clean, screen-optimized serifs. 
  • The T reaches back to antiquity, imagined as carved stone that bridges ancient and modern. 
  • The E embraces the dot-matrix aesthetic of early computing—when machines first began talking to each other. 
  • And the C fuses Renaissance geometric principles with contemporary precision.

Together, these letters tell the story of human innovation: each era building on the last, each breakthrough enabling the next renaissance. And now, we're building the infrastructure for the one that's coming.

The Icon: Layers of the Next Renaissance

We evolved our original icon to reflect this new chapter while honoring our foundation. The layered diamond structure tells the story:

  • Innermost layer: Sensitive data at the core
  • Black privacy layer: The network's native protection
  • Open third layer: Our permissionless builder community
  • Outermost layer: Mainstream adoption and real-world transformation

The architecture echoes a central plaza—the Roman forum, the Greek agora, the English commons, the American town square—places where people gather, exchange ideas, build relationships, and shape culture. It's a fitting symbol for the infrastructure enabling the next leap in human coordination and creativity.

Imagery: Global Genius 

From the Mughal and Edo periods to the Flemish and Italian Renaissance, our brand imagery draws from different cultures and eras of extraordinary human flourishing—periods when science, commerce, culture and technology converged to create unprecedented leaps forward. These visuals reflect both the universal nature of the Renaissance and the global reach of our network. 

But we're not just celebrating the past — we're creating the future: the infrastructure for humanity's next great creative and technological awakening, powered by privacy-native blockchain technology.

You’re Invited 

Join us to ask questions, learn more and dive into the lore.

Join Our Discord Town Hall. September 4th at 8 AM PT, then every Thursday at 7 AM PT. Come hear directly from our team, ask questions, and connect with other builders who are shaping the future of privacy-first applications.

Take your stance on privacy. Visit the privacy glyph generator to create your custom profile pic and build this new world with us.

Stay Connected. Visit the new website, and to stay up-to-date on all things Noir and Aztec, make sure you’re following along on X.

The next renaissance is what you build on Aztec—and we can't wait to see what you'll create.

Aztec Network
22 Jul

Introducing the Adversarial Testnet

Aztec’s Public Testnet launched in May 2025.

Since then, we’ve been obsessively working toward our ultimate goal: launching the first fully decentralized privacy-preserving layer-2 (L2) network on Ethereum. This effort has involved a team of over 70 people, including world-renowned cryptographers and builders, with extensive collaboration from the Aztec community.

To make something private is one thing, but to also make it decentralized is another. Privacy is only half of the story. Every component of the Aztec Network will be decentralized from day one because decentralization is the foundation that allows privacy to be enforced by code, not by trust. This includes sequencers, which order and validate transactions, provers, which create privacy-preserving cryptographic proofs, and settlement on Ethereum, which finalizes transactions on the secure Ethereum mainnet to ensure trust and immutability.

Strong progress is being made by the community toward full decentralization. The Aztec Network now includes nearly 1,000 sequencers in its validator set, with 15,000 nodes spread across more than 50 countries on six continents. With this globally distributed network in place, the Aztec Network is ready for users to stress test and challenge its resilience.

Introducing the Adversarial Testnet

We're now entering a new phase: the Adversarial Testnet. This stage will test the resilience of the Aztec Testnet and its decentralization mechanisms.

The Adversarial Testnet introduces two key features: slashing, which penalizes validators for malicious or negligent behavior in Proof-of-Stake (PoS) networks, and a fully decentralized governance mechanism for protocol upgrades.

This phase will also simulate network attacks to test its ability to recover independently, ensuring it could continue to operate even if the core team and servers disappeared (see more on Vitalik’s “walkaway test” here). It also opens the validator set to more people using ZKPassport, a private identity verification app, to verify their identity online.  

Slashing on the Aztec Network

The Aztec Network testnet is decentralized, run by a permissionless network of sequencers.

The slashing upgrade tests one of the most fundamental mechanisms for removing inactive or malicious sequencers from the validator set, an essential step toward strengthening decentralization.

Similar to Ethereum, on the Aztec Network, any inactive or malicious sequencers will be slashed and removed from the validator set. Sequencers will be able to slash any validator that makes no attestations for an entire epoch or proposes an invalid block.

Three slashes will result in being removed from the validator set. Sequencers may rejoin the validator set at any time after getting slashed; they just need to rejoin the queue.
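The three-strike rule described above can be modeled in a few lines. This is a toy sketch of the stated policy, not Aztec’s actual sequencer-selection code, and the names are illustrative:

```python
MAX_SLASHES = 3  # the third slash removes the validator from the set

class ValidatorSet:
    def __init__(self, validators: list[str]):
        self.active = set(validators)
        self.slashes = {v: 0 for v in validators}

    def slash(self, validator: str) -> None:
        """Record one slash; remove the validator after the third offense."""
        if validator not in self.active:
            return
        self.slashes[validator] += 1
        if self.slashes[validator] >= MAX_SLASHES:
            self.active.remove(validator)  # must rejoin via the queue

vs = ValidatorSet(["alice", "bob"])
for _ in range(3):
    vs.slash("alice")   # e.g. missed attestations for three epochs
assert "alice" not in vs.active
assert "bob" in vs.active
```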

Decentralized Governance

In addition to testing network resilience when validators go offline and evaluating the slashing mechanisms, the Adversarial Testnet will also assess the robustness of the network’s decentralized governance during protocol upgrades.

Adversarial Testnet introduces changes to Aztec Network’s governance system.

Sequencers now have an even more central role, as they are the sole actors permitted to deposit assets into the Governance contract.

After the upgrade is defined and the proposed contracts are deployed, sequencers will vote on and implement the upgrade independently, without any involvement from Aztec Labs and/or the Aztec Foundation.

Start Your Plan of Attack  

Starting today, you can join the Adversarial Testnet to help battle-test Aztec’s decentralization and security. Anyone can compete in six categories for a chance to win exclusive Aztec swag, be featured on the Aztec X account, and earn a DappNode. The six challenge categories include:

  • Homestaker Sentinel: Earn 1 Aztec Dappnode by maximizing attestation and proposal success rates and volumes, and actively participating in governance.
  • The Slash Priest: Awarded to the participant who most effectively detects and penalizes misbehaving validators or nodes, helping to maintain network security by identifying and “slashing” bad actors.
  • High Attester: Recognizes the participant with the highest accuracy and volume of valid attestations, ensuring reliable and secure consensus during the adversarial testnet.
  • Proposer Commander: Awarded to the participant who consistently creates the most successful and timely proposals, driving efficient consensus.
  • Meme Lord: Celebrates the creator of the most creative and viral meme that captures the spirit of the adversarial testnet.
  • Content Chronicler: Honors the participant who produces the most engaging and insightful content documenting the adversarial testnet experience.

Performance will be tracked using Dashtec, a community-built dashboard that pulls data from publicly available sources. Dashtec displays a weighted score of your validator performance, which may be used to evaluate challenges and award prizes.

The dashboard offers detailed insights into sequencer performance through a stunning UI, allowing users to see exactly who is in the current validator set and providing a block-by-block view of every action taken by sequencers.

To join the validator set and start tracking your performance, click here. Join us on Thursday, July 31, 2025, at 4 pm CET on Discord for a Town Hall to hear more about the challenges and prizes. Who knows, we might even drop some alpha.

To stay up-to-date on all things Noir and Aztec, make sure you’re following along on X.