Proving a batch
If you got to this section, then most likely you are wondering how to prove and verify a batch by yourself. Since the
prover-v15.1.0
and core-v24.9.0
releases, the prover subsystem no longer needs access to the core database, which means
you can run only the prover subsystem and prove batches without running the whole core system. This guide will help you with
that.
Requirements
Hardware
The setup for running the whole process should be the same as described here, except that you need 48 GB of GPU memory, which in practice means an NVIDIA A100 80GB GPU.
Prerequisites
First of all, you need to install CUDA drivers; everything else will be handled by the zkstack
and prover_cli
tools.
For that, check the following guide (you can skip the bellman-cuda step).
Install the prerequisites, which you can find here. Note that if you are not using a Google VM instance, you also need to install gcloud.
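Before moving on, it is worth confirming that the driver installation succeeded. A minimal sanity check, assuming the NVIDIA drivers are installed:
# Should print the driver version, the CUDA version and the GPUs visible to the system
nvidia-smi
# Should report at least ~48 GB of total memory on the GPU
nvidia-smi --query-gpu=name,memory.total --format=csv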
Now, you can use the zkstack
and prover_cli
tools to set up the environment and run the prover subsystem.
First, install zkstackup
with:
curl -L https://raw.githubusercontent.com/matter-labs/zksync-era/main/zkstack_cli/zkstackup/install | bash
Then install the most recent version of zkstack
with:
zkstackup
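If everything went well, the zkstack binary should now be on your PATH (you may need to open a new shell first, depending on how the installer updated your profile). A quick check:
# Lists the available zkstack subcommands
zkstack --help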
Initializing system
After you have installed the tool, you can create an ecosystem (you only need to run this if you are outside of the zksync-era
repository) by running:
zkstack ecosystem create --l1-network=localhost --prover-mode=gpu --wallet-creation=localhost --l1-batch-commit-data-generator-mode=rollup --start-containers=true
The command will create the ecosystem and all the necessary components for the prover subsystem. You can leave the default values for all the prompts you will see. Now, you need to initialize the prover subsystem by running:
zkstack prover init --shall-save-to-public-bucket=false --setup-database=true --use-default=true --dont-drop=false
For the prompts, you can leave the default values as well.
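Since the ecosystem was created with --start-containers=true, the supporting containers (most importantly Postgres for the prover database) should already be running. A quick way to confirm, assuming Docker is used for the local containers:
# The Postgres container created for the ecosystem should appear in this list
docker ps --format '{{.Names}}\t{{.Status}}'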
Proving the batch
Getting data needed for proving
At this step, we need to get the witness inputs data for the batch you want to prove. The database information now lives in
an input file called witness_inputs_<batch>.bin
, generated by different core components.
- If the batch was produced by your system, the file is stored by the prover gateway in GCS (or your object storage of choice; check the config). At the point of getting it, most likely there is no artifacts directory created yet. If you have cloned the zksync-era repo, it should be in the root of the ecosystem directory. Create the artifacts directory by running:
mkdir -p <path/to/era/prover/artifacts/witness_inputs>
To access it from GCS (assuming you have access to the bucket), run:
gsutil cp gs://your_bucket/witness_inputs/witness_inputs_<batch>.bin <path/to/era/prover/artifacts/witness_inputs>
- If you want to prove a batch produced by zkSync, you can get the data from the
ExternalProofIntegrationAPI
using the {address}/proof_generation_data
endpoint. You need to replace {address}
with the address of the API and provide the batch number in the path to get the data for a specific batch; otherwise, you will receive the latest data for a batch that was already proven (a combined download sketch follows this list). Example:
wget --content-disposition {address}/proof_generation_data
or
wget --content-disposition {address}/proof_generation_data/{l1_batch_number}
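As mentioned above, here is a combined sketch that downloads the witness inputs for one batch from the ExternalProofIntegrationAPI and places the file where the first option expects it. The API address, batch number, and ecosystem path are hypothetical placeholders; substitute your own values.
# Hypothetical values - replace with your API address, batch number and ecosystem path
API_ADDRESS=http://external-proof-api.example.com
BATCH_NUMBER=1000
ARTIFACTS_DIR=<path/to/era>/prover/artifacts/witness_inputs

mkdir -p $ARTIFACTS_DIR
# --content-disposition keeps the server-provided file name, e.g. witness_inputs_1000.bin
wget --content-disposition -P $ARTIFACTS_DIR $API_ADDRESS/proof_generation_data/$BATCH_NUMBER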
Preparing database
After you have the data, you need to prepare the system to run the batch, so the database needs to know about the batch and the protocol version it should use. You can do that by running:
zkstack dev prover info
Example output:
===============================
Current prover setup information:
Protocol version: 0.24.2
Snark wrapper: 0x14f97b81e54b35fe673d8708cc1a19e1ea5b5e348e12d31e39824ed4f42bbca2
Database URL: postgres://postgres:notsecurepassword@localhost:5432/zksync_prover_localhost_era
===============================
This command will provide you with information about the semantic protocol version (you only need the minor and
patch versions) and the snark wrapper value. In the example, MINOR_VERSION
is 24, PATCH_VERSION
is 2, and
SNARK_WRAPPER
is 0x14f97b81e54b35fe673d8708cc1a19e1ea5b5e348e12d31e39824ed4f42bbca2
.
Now, with the prover_cli
tool, you can insert the data about the batch and protocol version into the database.
First, get the database URL (you can find it in <ecosystem_dir>/chains/<chain_name>/configs/secrets.yaml
- it is the
prover_url
value). Then, insert the information about the protocol version into the database:
prover_cli <DATABASE_URL> insert-version --version=<MINOR_VERSION> --patch=<PATCH_VERSION> --snark-wrapper=<SNARK_WRAPPER>
And finally, provide the data about the batch:
prover_cli <DATABASE_URL> insert-batch --number=<BATCH_NUMBER> --version=<MINOR_VERSION> --patch=<PATCH_VERSION>
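To make the placeholders concrete, here is what the two commands look like when filled with the example values from the prover info output above and a hypothetical batch number 1000:
# Values taken from the example `zkstack dev prover info` output above; the batch number is hypothetical
prover_cli postgres://postgres:notsecurepassword@localhost:5432/zksync_prover_localhost_era insert-version --version=24 --patch=2 --snark-wrapper=0x14f97b81e54b35fe673d8708cc1a19e1ea5b5e348e12d31e39824ed4f42bbca2
prover_cli postgres://postgres:notsecurepassword@localhost:5432/zksync_prover_localhost_era insert-batch --number=1000 --version=24 --patch=2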
Also, provers need to know which setup keys they should use. It may take some time, but you can generate them with:
zkstack prover generate-sk
Running prover subsystem
At this step, all the data is prepared and you can run the prover subsystem. To do that, run the following commands:
zkstack prover run --component=prover
zkstack prover run --component=witness-generator --round=all-rounds
zkstack prover run --component=witness-vector-generator --threads=10
zkstack prover run --component=compressor
zkstack prover run --component=prover-job-monitor
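These components are long-running services that work together, so you will typically want all of them running at the same time. One way to keep them up from a single terminal, assuming tmux is available (this is just a convenience sketch, not the required way to run them):
# Start each prover component in its own detached tmux session
tmux new-session -d -s prover 'zkstack prover run --component=prover'
tmux new-session -d -s wg 'zkstack prover run --component=witness-generator --round=all-rounds'
tmux new-session -d -s wvg 'zkstack prover run --component=witness-vector-generator --threads=10'
tmux new-session -d -s compressor 'zkstack prover run --component=compressor'
tmux new-session -d -s pjm 'zkstack prover run --component=prover-job-monitor'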
And you are good to go! The prover subsystem will prove the batch and you can check the results in the database.
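If you want to follow the progress, you can query the prover database directly. A rough sketch, assuming the default local Postgres setup; the table names below (prover_jobs_fri, proof_compression_jobs_fri) are assumptions based on the current prover schema and may differ between versions:
# Count prover jobs per status for the batch (replace <batch_number>)
psql postgres://postgres:notsecurepassword@localhost:5432/zksync_prover_localhost_era \
  -c "SELECT status, count(*) FROM prover_jobs_fri WHERE l1_batch_number = <batch_number> GROUP BY status;"
# The final SNARK proof is ready once the compression job succeeds
psql postgres://postgres:notsecurepassword@localhost:5432/zksync_prover_localhost_era \
  -c "SELECT status FROM proof_compression_jobs_fri WHERE l1_batch_number = <batch_number>;"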
Verifying zkSync batch
Now, assuming the proof is already generated, you can verify it using the ExternalProofIntegrationAPI
. Usually the proof is
stored in a GCS bucket (for which you can use the same steps as for getting the witness inputs data
here; locally you can find it in the /artifacts/proofs_fri
directory). Now, simply
send the data to the {address}/verify_proof/{l1_batch_number}
endpoint.
Example:
curl -v -F proof=@{path_to_proof_binary} {address_of_API}/verify_proof/{l1_batch_number}
The API will respond with status 200 if the proof is valid and with an error message otherwise.