ZK programmability adds a whole new layer to worry about
written by David Wong
Zero-knowledge (ZK) programs are a powerful new tool for developers. They allow you to write programs that can prove their execution without revealing any of the underlying data. This has a wide range of potential applications, including privacy-preserving computation, secure data sharing, and fraud prevention.
However, ZK programmability also introduces new security challenges. In this blog post, we will explore the security implications of ZK programs and discuss how developers can mitigate these risks.
From a developer’s perspective, ZK systems are most often simply about writing programs that can be proven. For that, there’s usually some high-level library or language that one can use, or at the very least an abstraction that makes writing ZK programs easier.
These ZK programs are still programs, and so they can have logic bugs, just like any other kind of program. Nothing new here; this has been a well-known problem in the crypto space since the introduction of smart contracts with Ethereum.
An example of a ZK program using the snarkyJS library.
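To give a rough idea of what such a program can look like, here is a minimal sketch using snarkyJS’s Field API (the function name knowsPreimage and the statement being proven are made up for illustration; only the Field arithmetic and assertEquals calls come from the library):

```typescript
import { Field } from 'snarkyjs';

// Minimal sketch: prove knowledge of a secret x such that x^3 + x + 5
// equals a public value y. Each operation below turns into constraints
// in the resulting circuit.
function knowsPreimage(x: Field, y: Field) {
  const x3 = x.mul(x).mul(x); // x^3
  x3.add(x).add(Field(5)).assertEquals(y); // enforce x^3 + x + 5 = y
}
```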
The same set of solutions applies. As a reminder, in Ethereum this is usually what you have access to:
- Improving the language or library: Solidity, for example, has introduced a lot of concepts and syntax to address well-known bugs.
- Improving the VM itself: this is usually harder, as you most often need to remain backward compatible, and as such a VM is less flexible.
- Safer-to-use libraries: with the number of integer overflow bugs that happened in Ethereum, best practice today is to use the libraries published by OpenZeppelin (like SafeMath); a TypeScript sketch in that spirit follows this list.
- Updatable smart contracts: while not super kosher, updatable smart contracts allow admins to fix bugs that would otherwise not be fixable. It seems common practice these days to launch your smart contract as updatable, at least for the first few years, until enough confidence has been gained in the implementation.
- Security audits: these are usually pretty impactful, as smart contracts are most often self-contained and constrained in size. Hiring a security firm is as easy as emailing [email protected].
- Formal analysis: while it’s sometimes easy to simply run a static analyzer and see what it finds, a lot of solutions are much more advanced and require you to write a high-level specification to help the analysis (the Move language offers this, for example). The overhead is often significant enough that very few people use this in practice.
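As promised above, here is a minimal checked-arithmetic sketch in the spirit of SafeMath, written with TypeScript bigints. The names safeAdd, safeSub, and UINT256_MAX are made up for this example; the real SafeMath library is Solidity code.

```typescript
// Checked 256-bit arithmetic in the spirit of OpenZeppelin's SafeMath.
// Illustration only: names and bounds are chosen for this sketch.
const UINT256_MAX = (1n << 256n) - 1n;

function safeAdd(a: bigint, b: bigint): bigint {
  const sum = a + b;
  if (sum > UINT256_MAX) throw new Error('addition overflow');
  return sum;
}

function safeSub(a: bigint, b: bigint): bigint {
  if (b > a) throw new Error('subtraction underflow');
  return a - b;
}
```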
In addition, ZK programs introduce a few novel concepts that one has to keep in mind: they allow for private inputs and non-deterministic computations.
Private inputs imply that one should be careful not to leak data that is not supposed to be public. In general, a private input should not be something that’s easy to guess, and it should not end up becoming public. For example, making y public in y = private + 1 will trivially leak private.
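As a contrived illustration of that last point (same snarkyJS-style Field API as above, with a made-up function name):

```typescript
import { Field } from 'snarkyjs';

// `secret` is a private input, but the public output y = secret + 1
// lets anyone who sees y recover it as y - 1.
function leaky(secret: Field): Field {
  const y = secret.add(Field(1));
  return y; // making y public trivially leaks `secret`
}
```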
Non-deterministic computation means that the ZK program can sometimes decide to just accept arbitrary data. This can be very useful to optimize circuits and avoid some computations. The famous example is division, which can be computed by letting the user compute the result themselves (let’s say c = a / b) and provide the result to the ZK program which can constrain it to be correct (e.g. a = b * c).
The key is in the word “constrain”. If you don’t constrain the arbitrary value, then it could be anything, and here be dragons.
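Here is a sketch of that division pattern, modeling the prover-supplied result as an extra private input (a real circuit would use the library’s witness mechanism, and this sketch deliberately ignores the b = 0 edge case):

```typescript
import { Field } from 'snarkyjs';

// Non-deterministic division: the prover computes c = a / b outside the
// circuit and passes it in; the circuit only checks that it is consistent.
function divide(a: Field, b: Field, c: Field /* prover-supplied */): Field {
  // The crucial constraint: without it, c could be absolutely anything.
  a.assertEquals(b.mul(c));
  return c;
}
```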
Now let’s go down one level. What happens to the high-level ZK program written by a developer? It most often goes through a compilation phase. The output is either a list of instructions for a zkVM, or a set of gates and wires for an arithmetic circuit (see What are zkVMs and what’s the difference with a zkEVM?).
Note: That part of a ZK system is most often referred to as the “frontend”, and as with any program this can have bugs too! Frontends are most often made out of gadgets, which are ZK programs as well (by definition) that abstract components for the developers. Think: it’s like a library in a programming language.
To ensure that what was written at a high level corresponds to the final circuit or program to prove, debugging tools are going to be extremely important, especially for low-level programmers who are used to checking the assembly code generated by their programs.
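To make that concrete, here is one purely illustrative shape for such a compiled circuit. This is not the output format of any particular frontend; the gate equation in the comment is just the common Plonk-style generic gate.

```typescript
// Illustration only: one possible in-memory shape for a compiled
// arithmetic circuit, loosely modeled on Plonk-style rows where each
// generic gate enforces qL*a + qR*b + qO*c + qM*a*b + qC = 0.
type WireIndex = number; // position in the witness vector

interface Gate {
  kind: 'generic' | 'custom';
  wires: [WireIndex, WireIndex, WireIndex]; // the a, b, c wires
  coefficients: bigint[]; // qL, qR, qO, qM, qC for a generic gate
}

interface CompiledCircuit {
  gates: Gate[];
  publicInputs: WireIndex[]; // which wires are exposed publicly
}
```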
Now we’re getting to the good part: once compiled, ZK circuits will be sent to the proof system. Most likely, if you’re using a Preprocessing SNARK (if you don’t know what that is, don’t worry too much about it), you’re going through another phase of compilation.
Note: This part is most often referred to as the backend, as a frontend can compile to different backends (proof systems), and a backend can have multiple frontends.
Proof systems are where most of the heavy logic is. Their protocols are most often underspecified, following (not to the letter) papers that are often written in a theoretical and handwavy way (in terms of implementation details). This is nothing new; most cryptographic protocols in the real world end up this way. In my book Real-World Cryptography I wrote the following:
Unfortunately, more often than cryptographers are willing to admit, you will run into trouble when your problem either meets an edge case that the mainstream primitives or protocols don’t address, or when your problem doesn’t match a standardized solution. For this reason, it is extremely common to see developers creating their own mini-protocols or mini-standards. This is when trouble starts.
There’s a multitude of details to get right in practice, and devastating proof system bugs have happened. For example:
The catastrophic Zcash bug that happened back in 2019.
The 00 bug where the value 0 ended up being a valid proof.
Within proof systems, you have more circuitry and arithmetic going on that looks like the ZK circuits I mentioned earlier. In systems like Plonk, this looks like accelerated operations (often called custom gates); in systems like zkVMs, the VM itself is encoded as a circuit.
There aren’t many interesting things to say about this layer: the same problems as with circuits on the developer side can be found. “Unsound” logic would mean that attackers could produce wrong results, and “incomplete” logic would mean that accidental self-denial-of-service could happen, simply because legitimate logic cannot be run.
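As a toy example of the difference, consider checking that a value is a boolean (again with the snarkyJS-style Field API, and made-up function names):

```typescript
import { Field } from 'snarkyjs';

// Sound and complete: exactly the values 0 and 1 satisfy b * (b - 1) = 0.
function assertBool(b: Field) {
  b.mul(b.sub(Field(1))).assertEquals(Field(0));
}

// Unsound: nothing is constrained, so a malicious prover can use any value.
function assertBoolUnsound(_b: Field) {
  // no constraint at all
}

// Incomplete: the perfectly legitimate value 1 can no longer be proven.
function assertBoolIncomplete(b: Field) {
  b.assertEquals(Field(0));
}
```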
Some proof systems also require a trusted setup to generate their parameters: secret values are used during the setup and must be destroyed afterwards, since anyone who learns them can forge proofs. Trusted setups are often organized as ceremonies where participants collaboratively generate these dangerous values using multi-party computation. They are often not fun and a nightmare to organize, and they might lead to extremely serious vulnerabilities if not run or implemented correctly.
In both approaches (zkVM or ZK circuits), the program must be deployed somewhere. In blockchains, the ZK program is deployed on-chain, and thus most likely can’t be updated (unless you allow for updatable smart contracts, as in Ethereum).
Once deployed, problems can still arise and patching leads to complicated scenarios…
What if there was a bug in the frontend? You’d have to recompile and redeploy. It’s not always possible to “patch” things automatically for everyone in your system (it’s easier when the program is a list of zkVM instructions), but in a blockchain context you don’t want that anyway.
What if there was a bug in the backend? It depends on where the bug is. It is likely that you won’t have to redeploy your program (although in recursive zero-knowledge proof systems, your proof system is your program…)
Finally, privacy at the application level can be a problem too. Look at Tornado Cash (an Ethereum mixer), which not only had a logic bug in its application logic that could have been devastating (more on that in a future post), but also got sanctioned by the U.S. Treasury for failing to prevent money laundering.
ZK proofs are a powerful new tool for developers, but as we’ve seen, they also introduce new security challenges at different layers of the stack. So don’t wait until the last minute to think about security: contact us at [email protected]!