Public report of Aleo's synthesizer

On September 11th, 2023, zkSecurity was tasked with auditing Aleo’s synthesizer for use in the Aleo blockchain. Two consultants worked over the next 3 weeks to review Aleo’s codebase for security issues.

The code was found to be thoroughly documented, rigorously tested, and well specified. A number of findings were reported.

You can find the full report here.

The report was well received by the Aleo team.

What follows is a copy/paste of the overview section of the report.


The Aleo synthesizer is the core protocol used in the deployment and execution of user programs on the Aleo blockchain. It wraps Varuna (Aleo’s Marlin-based zero-knowledge proof system) and provides a high-level interface for users to deploy and execute programs and for the network to verify correct deployments and executions.

In theory, the synthesizer supports a third flow, the fee collection process, which is significantly less complex and which we omit in this overview.

This section provides an overview of the deployment and execution flows of the synthesizer, from both the user’s and the network’s perspectives.

Aleo Programs

Aleo provides two ways of writing programs for its blockchain: using the Leo programming language or using a lower-level VM instruction set called the Aleo instruction language.

Leo programs are eventually compiled down to Aleo instructions, which is the form used to deploy and execute programs. For this reason, this is the last time we mention Leo.

At a high level, an Aleo program includes a number of structures that are similar to those of Ethereum smart contracts (sketched below):

  • A list of other (already-deployed) programs imported by the program.
  • A list of public mappings that can be mutated by the program (only through finalize functions, see later explanations).
  • A list of functions that can be executed by users.
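As a rough mental model, the pieces above can be pictured with the following sketch (plain Rust with hypothetical types, not snarkVM’s actual representation of a program):

/// A simplified picture of what an Aleo program carries (hypothetical types).
struct ProgramSketch {
    /// Other (already-deployed) programs imported by this program.
    imports: Vec<String>,
    /// Public mappings, mutable only from finalize functions (see below).
    mappings: Vec<String>,
    /// Functions that users can execute.
    functions: Vec<FunctionSketch>,
}

/// A function, optionally paired with a finalize function of the same name.
struct FunctionSketch {
    name: String,
    has_finalize: bool,
}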

So far, this shouldn’t look that much different from Ethereum’s smart contract design. The exception is that functions are only executed once, locally on a user’s machine, and then verified by the network via the use of zero-knowledge proofs.

Because user transactions execute functions before they can be ordered by the network, they introduce contention issues on the resources they consume. Imagine the following: if two transactions rely on the program’s state being A, and then try to mutate it to B and C respectively, only one of them can succeed. As they are conflicting transactions, the first one processed by the network invalidates the other.
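To make this concrete, here is a small illustrative sketch (plain Rust with hypothetical types, not Aleo code) in which two pre-executed transactions both assume the same starting state; once the first is applied, the second no longer matches the current state and is rejected:

/// A pre-executed transaction records the state it observed and the
/// state it wants to write (hypothetical; for illustration only).
struct PreExecutedTx {
    assumed_state: &'static str,
    new_state: &'static str,
}

fn apply(state: &mut String, tx: &PreExecutedTx) -> Result<(), &'static str> {
    // The transaction is only valid if the state it was executed against
    // is still the current state.
    if *state != tx.assumed_state {
        return Err("conflicting transaction: the state has already changed");
    }
    *state = tx.new_state.to_string();
    Ok(())
}

fn main() {
    let mut state = "A".to_string();
    let tx_b = PreExecutedTx { assumed_state: "A", new_state: "B" };
    let tx_c = PreExecutedTx { assumed_state: "A", new_state: "C" };
    assert!(apply(&mut state, &tx_b).is_ok());  // the first transaction succeeds
    assert!(apply(&mut state, &tx_c).is_err()); // the second one is invalidated
}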

For this reason, functions in Aleo programs are actually split into two parts.

The first part is simply called the “function”, and contains highly-parallelizable logic that should only act on single-user objects. These single-user objects are called “records” and are akin to UTXOs in Bitcoin, but stored in a shielded pool similar to Zcash. As previously explained, the user can execute a function and provide a proof of correct execution to the network.

As such, Aleo programs define their own custom records, which always have an owner field (enforcing who can consume them) and can contain application-specific data.

The second part is logic whose execution is delegated to the network (as in Ethereum). Its execution is deferred until after the wrapping transaction has been ordered and sequenced by Aleo’s consensus. A function encodes such logic by defining a finalize function with the same name.

Finalize functions cost a fee to run depending on the instructions they contain (similar to the “gas” notion of Ethereum). A finalize function is run in the clear by the network, and can perform public state transitions (specifically, mutate any mappings defined in the program). This is the only way to mutate the public state of a program.
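As an illustration of the fee model only (the instruction names below are real Aleo instructions, but the costs are made up; the actual cost table lives in snarkVM), the fee for a finalize function can be pictured as a sum of per-instruction costs:

use std::collections::HashMap;

/// Hypothetical per-instruction costs; the real values differ.
fn cost_table() -> HashMap<&'static str, u64> {
    HashMap::from([("add", 500), ("hash.psd2", 2_000), ("set", 10_000)])
}

/// The fee of a finalize function is the sum of the costs of the
/// instructions it contains; unknown instructions yield no fee (None).
fn finalize_fee(instructions: &[&str]) -> Option<u64> {
    let table = cost_table();
    instructions.iter().map(|i| table.get(i).copied()).sum()
}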

To simplify this concept of split logic, one can think of developing on Aleo as a similar experience to developing on Ethereum, if one were to host most of the logic in finalize functions. But to decrease the cost of execution, or to provide privacy features, developers can choose to move part of the logic into the non-finalize part.

Synthesizing the circuits associated with the functions

The SnarkVM synthesizer’s main goal is to produce the zero-knowledge proof circuits that encode the functions of an Aleo program. In practice, the synthesizer does that by parsing an Aleo program into its functions written in Aleo instructions, and then by converting each Aleo instruction into its matching circuit gadget.

In addition, a number of subcircuits are added at the beginning and end of a program’s function:

Request verification. As Aleo programs offer user privacy, the caller of the function cannot be leaked. For this reason, each circuit enforces that the caller of the function is in possession of their private key. While this could be done by simply witnessing the private key and enforcing that it correctly derives the caller’s address, signatures on “requests” are used instead.

These signed requests (as seen below) authorize the execution of some program’s function with specific arguments (or inputs). A user can use them to delegate the creation of the proof to a third party prover.

pub struct Request<N: Network> {
    /// The request caller.
    caller: Address<N>,
    /// The network ID.
    network_id: U16<N>,
    /// The program ID.
    program_id: ProgramID<N>,
    /// The function name.
    function_name: Identifier<N>,
    /// The input ID for the transition.
    input_ids: Vec<InputID<N>>,
    /// The function inputs.
    inputs: Vec<Value<N>>,
    /// The signature for the transition.
    signature: Signature<N>,
    /// The tag secret key.
    sk_tag: Field<N>,
    /// The transition view key.
    tvk: Field<N>,
    /// The transition secret key.
    tsk: Scalar<N>,
    /// The transition commitment.
    tcm: Field<N>,
}
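To illustrate what the request-verification subcircuit enforces, here is a rough sketch with hypothetical types and a stand-in hash (this is not snarkVM’s API or its actual signature scheme): the signed message binds the caller to one specific call, and checking the signature against the caller’s address proves possession of the private key without ever witnessing it.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// A stripped-down request (hypothetical): only the fields this sketch
/// binds into the signed message.
struct RequestSketch {
    caller: String,       // the caller's address
    program_id: String,
    function_name: String,
    input_ids: Vec<u64>,  // commitments to the function inputs
    signature: u64,       // stand-in for a Schnorr-style signature
}

/// The message the caller signs, binding them to this exact call:
/// program, function, and committed inputs. DefaultHasher is a stand-in
/// for the circuit-friendly hash used in practice.
fn signed_message(req: &RequestSketch) -> u64 {
    let mut h = DefaultHasher::new();
    (&req.caller, &req.program_id, &req.function_name, &req.input_ids).hash(&mut h);
    h.finish()
}

/// What the subcircuit enforces, with the signature check abstracted away:
/// the signature must verify under the caller's address over the message
/// above, which proves possession of the caller's private key without
/// witnessing that key directly.
fn verify_request(req: &RequestSketch, verify_sig: impl Fn(&str, u64, u64) -> bool) -> bool {
    verify_sig(&req.caller, signed_message(req), req.signature)
}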

Inputs and outputs as public inputs. Each input to the function is committed to and exposed as a public input. The same is done with any output created by the function.

Call to finalize. Finally, if the function calls a finalize function, a hash of all of the inputs to the finalize function is also produced. These values will also be sent in the clear in the Transition object below (since the network needs them to run the finalize function in the clear).
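Taken together, the public inputs of a single function circuit can be pictured roughly as follows (a conceptual sketch with hypothetical types; the actual layout in snarkVM differs):

/// A rough picture of what a single function circuit exposes publicly
/// (hypothetical; field elements are shown as u64 for simplicity).
struct FunctionPublicInputs {
    /// A commitment per function input (later referred to as input IDs).
    input_ids: Vec<u64>,
    /// A commitment per function output (later referred to as output IDs).
    output_ids: Vec<u64>,
    /// A hash of the inputs passed to finalize, if the function calls one;
    /// the inputs themselves travel in the clear in the Transition below.
    finalize_inputs_hash: Option<u64>,
}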

Encoding nested function calls

A large part of the complexity of the synthesizer comes from allowing functions to call other programs’ functions. This is akin to smart contracts calling other smart contracts in Ethereum.

In Aleo, functions are translated into circuits, and so a function call means that two circuits are created: one for the caller and one for the callee. To drive the point home, if a function ends up producing $n$ function calls, then there’ll be $n+1$ circuits that will be run (and $n+1$ proofs will be created when executing the function).

In practice, the execution of a “root” function is encoded as a list of “transitions”, where each transition represents the execution of a single function call. Transitions are ordered from most-nested calls to least-nested calls. This means that the last transition is the main (or root) function being called by the user.

pub struct Transition<N: Network> {
    /// The transition ID.
    id: N::TransitionID,
    /// The program ID.
    program_id: ProgramID<N>,
    /// The function name.
    function_name: Identifier<N>,
    /// The transition inputs.
    inputs: Vec<Input<N>>,
    /// The transition outputs.
    outputs: Vec<Output<N>>,
    /// The inputs for finalize.
    finalize: Option<Vec<Value<N>>>,
    /// The transition public key.
    tpk: Group<N>,
    /// The transition commitment.
    tcm: Field<N>,
}
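To make the callee-first ordering concrete, here is a minimal sketch (hypothetical types, not snarkVM’s) that flattens a tree of nested calls into the order described above, most-nested transitions first and the root call last:

/// A function call together with the nested calls it performs (hypothetical).
struct Call {
    program_id: String,
    function_name: String,
    nested_calls: Vec<Call>,
}

/// Flatten a call tree into transition order: callees first (post-order),
/// the root call last.
fn to_transitions(call: &Call, out: &mut Vec<String>) {
    for nested in &call.nested_calls {
        to_transitions(nested, out);
    }
    out.push(format!("{}/{}", call.program_id, call.function_name));
}

For a root function that calls a helper, which in turn calls a third function, this yields the transitions [leaf, helper, root]: three transitions and three proofs, matching the $n + 1$ count above for $n = 2$ nested calls.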

In a function’s synthesized circuit, a call to an external function is replaced by witnessing (publicly) the arguments of the function call, and then by witnessing (publicly) the outputs of the resulting call.

It is thus the role of the verifier to “glue” together the different function calls (or transitions) by ensuring that the publicly-witnessed arguments are used to verify the callee’s circuit, and that the publicly-witnessed outputs are indeed the values output by the callee’s circuit.

Note that we also don’t want to leak inputs and outputs between calls, and so inputs and outputs are committed before being exposed as public inputs. In Aleo the hiding commitments are referred to as input IDs and output IDs.
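The gluing can be sketched as follows (hypothetical types; in snarkVM this check is part of verifying each transition and its proof): for every external call a caller makes, the argument and output commitments it witnessed publicly must be exactly the input IDs and output IDs of the callee’s transition.

/// The publicly-witnessed view a caller has of one external call (hypothetical).
struct ExternalCall {
    callee_index: usize,    // which transition encodes the callee
    argument_ids: Vec<u64>, // commitments to the arguments passed
    output_ids: Vec<u64>,   // commitments to the values returned
}

/// A transition's committed inputs and outputs (hypothetical).
struct TransitionSketch {
    input_ids: Vec<u64>,
    output_ids: Vec<u64>,
    external_calls: Vec<ExternalCall>,
}

/// The verifier's gluing check across all transitions of an execution.
fn glue(transitions: &[TransitionSketch]) -> bool {
    transitions.iter().all(|caller| {
        caller.external_calls.iter().all(|call| {
            match transitions.get(call.callee_index) {
                Some(callee) => {
                    call.argument_ids == callee.input_ids
                        && call.output_ids == callee.output_ids
                }
                None => false,
            }
        })
    })
}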

Program deployment flow

Users can deploy their own Aleo programs, which are uploaded in their totality to the Aleo blockchain. This means that the program’s code (made out of Aleo instructions) is stored on-chain.

In addition, a deployment requires the user (who wishes to deploy their program) to produce as many verifier keys as there are functions. The verifier keys are then deployed on-chain.

pub struct Deployment<N: Network> {
    /// The edition.
    edition: u16,
    /// The program.
    program: Program<N>,
    /// The mapping of function names to their verifying key and certificate.
    verifying_keys: Vec<(Identifier<N>, (VerifyingKey<N>, Certificate<N>))>,
}

Since verifier keys are expensive to produce (they involve a number of large MSMs which commit to R1CS-related structures), the user produces them along with a proof of correctness (the Certificate above).

The proof is essentially a Sigma protocol showing that the commitments encode the low-degree extensions of R1CS-related vectors, which is checked by evaluating those low-degree extensions at a random point.
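Putting the deployment flow together, here is a rough sketch (hypothetical closures standing in for snarkVM’s actual synthesis and certification logic) of what the deploying user computes for each function of the program:

/// For each function, the deploying user synthesizes its circuit, derives a
/// verifying key, and produces a certificate proving the key was derived
/// honestly from the program's code.
fn deploy<Vk, Cert>(
    function_names: &[String],
    derive_vk: impl Fn(&str) -> Vk,
    certify: impl Fn(&str, &Vk) -> Cert,
) -> Vec<(String, (Vk, Cert))> {
    function_names
        .iter()
        .map(|name| {
            let vk = derive_vk(name);
            let cert = certify(name, &vk);
            (name.clone(), (vk, cert))
        })
        .collect()
}

The network can then check each certificate against the program’s code rather than re-deriving the verifying keys itself, which is what makes deployments cheap to verify despite the expensive MSMs involved in producing the keys.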

Function execution flow

To execute a function, a user runs the synthesizer in much the same way as during deployment. As such, the user will produce $n + 1$ transition proofs if they wish to execute a function call that triggers $n$ nested calls. The synthesizer is run with the actual values (as opposed to random values) for the inputs as well as for the different registers in the circuits.

In addition to producing these proofs, a user also produces $n + 1$ “inclusion proofs”. These inclusion proofs are used to prove that any record being used as input in the transition or function call indeed exists. Inclusion proofs prove that records exist either in some previous block that has been included in the blockchain, or in one of the previous transition outputs.

An inclusion proof also publicly outputs the serial numbers (also called nullifiers in Zcash-like systems) that uniquely identify the records without leaking any information about them. This way, records cannot be consumed more than once. (In addition, the network enforces that no serial number is seen twice within the same transaction.)
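The double-spend protection this enables can be sketched as follows (hypothetical types; the real checks live in the network’s transaction validation logic): the network only needs to track which serial numbers it has already seen.

use std::collections::HashSet;

/// Reject a transaction if any of its serial numbers was already seen
/// on-chain, or appears twice within the transaction itself.
fn check_serial_numbers(seen_on_chain: &mut HashSet<u64>, tx_serials: &[u64]) -> bool {
    let mut in_tx = HashSet::new();
    for &sn in tx_serials {
        if seen_on_chain.contains(&sn) || !in_tx.insert(sn) {
            return false; // this record was already consumed
        }
    }
    // Record the serial numbers only once the whole transaction is accepted.
    seen_on_chain.extend(in_tx);
    true
}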

Note that all of these different proofs are eventually aggregated together into a single proof using Varuna’s batching capabilities.