ZKsync OS Server Developer Documentation

This book guides you through running, understanding, and extending the ZKsync OS Server.

Understanding the System

Deep dives into internal components and lifecycle.

Running the System

These guides help you set up and operate the server in different environments.

Setup

Prerequisites

This project requires:

  • The Foundry nightly toolchain
  • The Rust toolchain

Install Foundry (v1.5.1)

Install Foundry v1.5.1:

# Download the Foundry installer
curl -L https://foundry.paradigm.xyz | bash

# Install forge, cast, anvil, chisel
# Ensure you are using the 1.5.1 stable release
foundryup -i 1.5.1

Verify your installation:

anvil --version

The output should include anvil version 1.5.1.

Install Rust

Install Rust using rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

After installation, ensure Rust is available:

rustc --version

Linux packages

# essentials
sudo apt-get install -y build-essential pkg-config cmake clang lldb lld libssl-dev apt-transport-https ca-certificates curl software-properties-common git    

Run

Using the run_local.sh Script

⚠️ This script is a temporary solution. Do not depend on it in production.

The run_local.sh script automates starting Anvil and chain node(s):

# Run a single chain
./run_local.sh ./local-chains/v30.2/default

# Run multiple chains
./run_local.sh ./local-chains/v30.2/multi_chain

# Run with logging to files
./run_local.sh ./local-chains/v30.2/multi_chain --logs-dir ./logs

Manual setup

To run the node locally, first decompress the L1 state and launch Anvil:

gzip -dfk ./local-chains/v30.2/l1-state.json.gz
anvil --load-state ./local-chains/v30.2/l1-state.json --port 8545

then launch the server:

cargo run

To restart the chain, erase the local DB and re-run anvil:

rm -rf db/*

By default, fake (dummy) proofs are used for both FRI and SNARK proofs.

Rich account:

PRIVATE_KEY=0x7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110
ACCOUNT_ID=0x36615Cf349d7F6344891B1e7CA7C72883F5dc049

Example transaction to send:

cast send -r http://localhost:3050 0x5A67EE02274D9Ec050d412b96fE810Be4D71e7A0 --value 100 \
  --private-key 0x7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110
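As a quick sanity check, you can query the rich account's balance with cast (assuming the default RPC port 3050):

cast balance 0x36615Cf349d7F6344891B1e7CA7C72883F5dc049 -r http://localhost:3050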

Config options

See node/sequencer/config.rs for config options and defaults. Use a YAML configuration file to override the defaults, e.g.:

cargo run --release -- --config ./local-chains/v30.2/default/config.yaml

Explore the local-chains folder for additional chain configs grouped by protocol version. Detailed information is available in local-chains/README.md.

You can also use environment variables to override the default settings:

prover_api_fake_provers_enabled=false cargo run --release

If both the configuration file and environment variables are set, the environment variables take precedence.

Ephemeral mode

Ephemeral mode runs the node with a temporary, isolated state directory, allowing you to spin up one or more local chains without them interfering with each other's data. When enabled, the node creates a temporary base directory for RocksDB and the file-backed object store; this directory is automatically removed on shutdown. To remain as lightweight as possible, ephemeral mode disables all APIs except JSON-RPC (the status and Prometheus APIs, among others, are unavailable). It is useful for quick local testing and multi-chain setups.

The ephemeral setting is part of the general config and can be set like any other config value:

general_ephemeral=true cargo run --release

Docker

sudo docker build -t zksync_os_sequencer .
sudo docker run -d --name sequencer \
  -p 3050:3050 -p 3124:3124 -p 3312:3312 \
  -e batcher_maximum_in_flight_blocks=15 \
  -v /mnt/localssd/db:/db \
  zksync_os_sequencer

External node

Setting the general_node_role=external environment variable puts the node in external node mode: it receives block replays from another node instead of producing its own blocks. The node still fetches priority transactions from L1 and checks that they match the ones in the replay, but it does not change L1 state.

To run the external node locally, enable networking on both the main node and the external node, and set the external node's service ports so they don't overlap with the main node's.

Main node:

network_enabled=true \
network_secret_key=0af6153646bbf600f55ce455e1995283542b1ae25ce2622ce1fda443927c5308 \
network_boot_nodes=enode://246e07030b4c48b8f28ab1fdf797a02308b0ca724696b695aabee48ea48298ff221144a0c0f14ebf030aea6d5fb6b31bd3a02676204bb13e78336bb824e32f1d@127.0.0.1:3060,enode://d2db8005d59694a5b79b7c58d4d375c60c9323837e852bbbfd05819621c48a4218cefa37baf39a164e2a6f6c1b34c379c4a72c7480b5fbcc379d1befb881e8fc@127.0.0.1:3060 \
cargo run

EN:

RUST_LOG="info,zksync_os_storage_api=debug" \
network_enabled=true \
network_secret_key=c2c8042b03801e2e14b395ed24f970ead7646a9ff315b54f747bcefdb99afda7 \
network_address=127.0.0.1 \
network_port=3061 \
network_boot_nodes="enode://246e07030b4c48b8f28ab1fdf797a02308b0ca724696b695aabee48ea48298ff221144a0c0f14ebf030aea6d5fb6b31bd3a02676204bb13e78336bb824e32f1d@127.0.0.1:3060,enode://d2db8005d59694a5b79b7c58d4d375c60c9323837e852bbbfd05819621c48a4218cefa37baf39a164e2a6f6c1b34c379c4a72c7480b5fbcc379d1befb881e8fc@127.0.0.1:3060" \
general_main_node_rpc_url="http://127.0.0.1:3050" \
general_node_role=external \
observability_prometheus_port=3313 \
network_port="3061" \
general_rocks_db_path="db/en" \
status_server_enabled=false \
rpc_address=0.0.0.0:3051 \
cargo run 

Batch verification (2FA)

Batch verification, also referred to as 2FA, adds independent approval to batch commits. Instead of trusting the main node alone, you require external nodes (ENs) to confirm that they can reproduce a batch and sign it before the batch is treated as ready to commit.

2FA serves two different purposes depending on how it is configured:

  • L2-only: a data-availability and recoverability safeguard. The main node only moves batches forward after independent ENs have reproduced them.
  • Settlement-layer backed: if a MultisigCommitter is present on the settlement layer, the same signatures are also used by the settlement-layer commit path. This adds an execution-correctness check that is separate from the proof system.

L2-only mode

L2-only mode is for data availability. In this setup:

  • the main node runs the batch verification server;
  • selected ENs connect to it and sign approvals;
  • the main node requires enough EN approvals before the batch can move forward.

The value of this mode is that committed batches must already be reproducible by the participating ENs. If the main node is lost, committed L2 data should already exist on those ENs.

Settlement-layer-backed mode

If the chain’s settlement layer uses a MultisigCommitter for ValidatorTimelock, batch verification also has a settlement-layer-backed component.

In that case the main node reads the settlement-layer validator set and threshold on startup and uses them for batch commit submission. This changes the behavior in two important ways:

  • the settlement-layer validator set becomes the signer allowlist used for commit submission;
  • the effective threshold is the higher of the local batch_verification_threshold and the settlement-layer threshold.

This mode is not primarily about data availability. It adds assurance that batch execution was accepted by the configured validator set, independently of the proof system.

How To Use 2FA

Use 2FA with ENs that are operationally independent from the main node. The point is not just to run extra processes, but to require approval from separate nodes that can independently replay the same batch.

Each participating EN should have its own signing key. The corresponding signer addresses should match the allowlist that the main node accepts:

  • in L2-only mode, that allowlist comes from batch_verification_accepted_signers;
  • in settlement-layer-backed mode, it comes from the settlement-layer validator set.

The EN signing keys do not submit transactions themselves. The main node collects signatures and, when settlement-layer-backed mode is active, includes them in the settlement-layer commit flow.

Main Node Configuration

Enable and configure the main node / sequencer with these options:

  • batch_verification_server_enabled Enables the batch verification server on the main node. Without this, the main node does not collect EN signatures.
  • batch_verification_listen_address Address the main node listens on for batch verification client connections, for example 0.0.0.0:3072.
  • batch_verification_threshold Minimum number of EN signatures required by the main node. If settlement-layer-backed mode is active, the effective threshold is max(local threshold, settlement-layer threshold).
  • batch_verification_accepted_signers Comma-separated list of accepted signer addresses for L2-only mode. These should correspond to the EN signing keys. If the settlement layer provides a non-empty validator set or a non-zero threshold through MultisigCommitter, that settlement-layer validator set takes precedence over this local list.
  • batch_verification_request_timeout How long the main node waits for responses during a single signature collection attempt.
  • batch_verification_retry_delay Delay between collection attempts when the main node retries.
  • batch_verification_total_timeout Overall time budget for collecting enough signatures for a batch.
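For illustration, these options can be supplied via environment variables like any other config value. The threshold and signer addresses below are placeholders, not recommended values:

batch_verification_server_enabled=true \
batch_verification_listen_address=0.0.0.0:3072 \
batch_verification_threshold=2 \
batch_verification_accepted_signers=0x<en1-signer-address>,0x<en2-signer-address> \
cargo run --release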

2FA EN Configuration

Each EN that participates in 2FA needs these options:

  • batch_verification_client_enabled Enables the EN batch verification client.
  • batch_verification_connect_address URL of the main node batch verification server, for example http://10.10.1.1:3072. This EN will connect to that server and return approvals from its local replay state.
  • batch_verification_signing_key Private key used by the EN to sign batch approvals. Its address must be present in the local accepted signer list for L2-only mode, or in the settlement-layer validator set for settlement-layer-backed mode.
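For illustration, an EN could enable the client like this (the signing key is a placeholder; the connect address matches the example above):

batch_verification_client_enabled=true \
batch_verification_connect_address=http://10.10.1.1:3072 \
batch_verification_signing_key=0x<en-signing-private-key> \
cargo run --release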

Otterscan (Local Explorer)

The server supports the ots_ namespace and can therefore be used with the Otterscan block explorer. To run a local instance as a Docker container (bound to http://localhost:5100):

docker run --rm -p 5100:80 --name otterscan -d --env ERIGON_URL="http://127.0.0.1:3050" otterscan/otterscan

See Otterscan’s docs for other running options.

Exposed Ports

  • 3050 - L2 JSON RPC
  • 3060 - P2P communication (e.g. replay transport)
  • 3124 - Prover API (e.g. 127.0.0.1:3124/prover-jobs/v1/status; only enabled if prover_api_component_enabled is set to true)
  • 3312 - Prometheus

FAQ

Failed to read L1 state: contract call to getAllZKChainChainIDs returned no data (“0x”); the called address might not be a contract

Something went wrong with L1 - check that Anvil is actually running with the proper state on the right port.

Failed to deserialize context

If you hit this error on startup, check whether stale RocksDB data is left over in the db/node1 directory.

Design principles

  • Minimal, async persistence
    • to meet throughput and latency requirements, we avoid synchronous persistence on the critical path. Additionally, we aim to store only the data that is strictly needed, minimizing the potential for state inconsistency
  • Easy to replay arbitrary blocks
    • Sequencer: components are idempotent
    • Batcher: batcher component skips all blocks until the first uncommitted batch. Thus, downstream components only receive batches that they need to act upon
  • State - strong separation between
    • Actual state - data needed to execute VM: key-value storage and preimages map
    • Receipts repositories - data only needed in API
    • Data related to Proofs and L1 - not needed by sequencer / JSON RPC - only introduced downstream from batcher

Subsystems

  • Sequencer subsystem — mandatory for every node. Executes transactions in VM, sends results downstream to other components.
    • Handles Produce and Replay commands in a uniform way (see model/mod.rs and execution/block_executor.rs)
    • For each block: (1) persists it in WAL (see block_replay_storage.rs), (2) pushes to state (see state crate), (3) exposes the block and tx receipts to API (see repositories/mod.rs), (4) pushes to async channels for downstream subsystems. Waits on backpressure.
  • API subsystem — optional (not configurable at the moment). Has shared access to state. Exposes an Ethereum-compatible JSON RPC
  • Batcher subsystem — runs for the main node - most of it is disabled for ENs.
    • Turns a stream of blocks into a stream of batches (1 batch = 1 proof = 1 L1 commit); exposes Prover APIs; submits batches and proofs to L1.
    • For each batch, computes the Prover Input (runs RiscV binary (app.bin) and records its input as a stream of Vec<u32> - see batcher/mod.rs)
    • This process requires a Merkle tree with materialized root hashes and proofs at every block boundary.
    • Runs L1 senders for each of commit / prove / execute
    • Runs Priority Tree Manager that applies new L1->L2 transactions to the dynamic Merkle tree and prepares execute commands. It runs on both the main node and ENs. ENs don't send execute txs to L1, but they need to keep the tree up to date so that, if the node becomes the main node, it doesn't have to build the tree from scratch.

Note on Persistent Tree — it is only necessary for the Batcher subsystem. The sequencer doesn't need the tree — block hashes don't include the root hash. Still, even when the batcher subsystem is not enabled, we want to run the tree for potential failover.

Component Details


See individual components and state recovery details in the table below. Note that most components have little to no internal state or persistence — this is one of the design principles.

| Component | In-memory state | Persistence | State Recovery |
|---|---|---|---|
| Command Source | starting_block (only used on startup) | none | starting_block is the first block after the compacted block stored in state, i.e., starting_block = highest_block - blocks_to_retain_in_memory |
| BlockContextProvider | next_l1_priority_id; block_hashes_for_next_block (last 256 block hashes) | none | next_l1_priority_id: take from ReplayRecord of starting_block - 1; block_hashes_for_next_block: take from the 256 ReplayRecords before starting_block |
| L1Watcher | Gapless list of priority transactions, starting from the last committed to L1 | none | none - recovers itself from L1 |
| L2Mempool (RETH crate) | Prepared list of pending L2 transactions | none | none (consider persisting mempool transactions in the future) |
| BlockExecutor | none 🔥 | none | none |
| Repositories (API subsystem) | BlockHeaders and Transactions for ~blocks_to_retain_in_memory blocks | Historical BlockHeaders and Transactions | none - recovers naturally when replaying blocks from starting_block |
| State | All storage logs and preimages for the last blocks_to_retain_in_memory blocks | Compacted state at some older block (highest_block - blocks_to_retain_in_memory): full state map and all preimages | none - recovers naturally when replaying blocks from starting_block |
| Merkle Tree | Only persistence | Full Merkle tree, including previous values on each leaf | none |
| ⬇️ Batcher Subsystem Components | ⬇️ Components below operate on batches, not blocks | ⬇️ Components below must not rely on persistence, otherwise failover is not possible | ⬇️ |
| Batcher | startup: starting_batch and batcher_starting_block; operation: trailing batch's CommitBatchInfo | none | first_block_to_process: block after the last block in the last committed L1 batch; last_persisted_block: the block after which we start checking for batch timeouts; StoredBatchInfo in run_loop: currently from FRI cache; todo - load the last committed StoredBatchInfo from L1 OR reprocess the last committed batch |
| Prover Input Generator | none | none | |
| FRI Job Manager | Gapless list of unproved batches with ProverInput and prover assignment info | none | none - batches before starting_batch are guaranteed to have FRI proofs; batches after will go through the pipeline again |
| FRI Store/Cache | none | Map<BatchNumber, FRIProof> (todo: extract from the node process to enable failover) | none |
| L1 Committer | none* | none | none - recovers itself from L1 |
| L1 Proof Submitter | none* | none | none - recovers itself from L1 |
| L1 Executor | none* | none | none - recovers itself from L1 |
| SNARK Job Manager (TODO - missing) | Gapless list of batches with their FRI proofs and prover assignment info | none | Load batches that are committed but not yet proved on L1; load their FRI proofs from the FRI cache (TODO) |
| Priority Tree Manager | Dynamic Merkle tree with L1->L2 transaction hashes | Compressed data needed to rebuild the tree; see CachedTreeData for more details | none - recovers itself from replay storage |

RPC

  • All standard eth_ methods are supported (except those specific to EIP-2930, EIP-4844 and EIP-7702). Block tags have a special meaning:
    • earliest - not supported yet (will return genesis or first uncompressed block)
    • pending - the latest produced block
    • latest - same as pending (consider taking consensus into account here)
    • safe - the latest block that has been committed to L1
    • finalized - not supported yet (will return the latest block that has been executed on L1)
  • zks_ namespace is kept to a minimum right now to avoid inheriting legacy from Era. Only the following methods are supported:
    • zks_getBridgehubContract
  • ots_ namespace is used for Otterscan integration (meant for local development only)
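For example, the block tags above can be queried with Foundry's cast (assuming the default RPC port 3050):

# latest produced block
cast block pending -r http://localhost:3050

# latest block committed to L1
cast block safe -r http://localhost:3050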

Prover API

        .route("/prover-jobs/v1/status", get(status))
        .route("/prover-jobs/v1/FRI/pick", post(pick_fri_job))
        .route("/prover-jobs/v1/FRI/submit", post(submit_fri_proof))
        .route("/prover-jobs/v1/SNARK/pick", post(pick_snark_job))
        .route("/prover-jobs/v1/SNARK/submit", post(submit_snark_proof))

Database Schema Overview

Persistent data is stored across multiple RocksDB databases, plus a JSON-file store for proofs:

  • block_replay_wal
  • preimages
  • repository
  • state
  • tree
  • proofs (JSON files, not RocksDB)

1. block_replay_wal

Write-ahead log containing recent (non-compacted) block data.

| Column | Key | Value |
|---|---|---|
| block_output_hash | block number | Block output hash |
| context | block number | Binary-encoded BlockContext (BlockMetadataFromOracle) |
| last_processed_l1_tx_id | block number | ID (u64) of the last processed L1 tx in the block |
| txs | block number | Vector of EIP-2718 encoded transactions |
| node_version | block number | Node version that produced the block |
| latest | 'latest_block' | Latest block number |

2. preimages

| Column | Key | Value |
|---|---|---|
| meta | 'block' | Latest block ID |
| storage | hash | Preimage for the hash |

3. repository

Canonical blocks and transactions.

| Column | Key | Value |
|---|---|---|
| initiator_and_nonce_to_hash | address (20 bytes) + nonce (u64) | Transaction hash |
| tx_meta | transaction hash | Binary TxMeta (hash, number, gas used, etc.) |
| block_data | block hash | Alloy-serialized block |
| tx_receipt | transaction hash | Binary EIP-2718 receipt |
| meta | 'block_number' | Latest block number |
| tx | transaction hash | EIP-2718 encoded bytes |
| block_number_to_hash | block number | Block hash |

4. state

Data compacted from the write-ahead log.

| Column | Key | Value |
|---|---|---|
| meta | 'base_block' | Base block number for this state snapshot |
| storage | key | Value (compacted storage) |

5. tree

Merkle-like structure.

| Column | Key | Value |
|---|---|---|
| default | composite (version + nibble + index) | Serialized Leaf or Internal node |
| key_indices | hash | Key index |

Note: The ‘default’ column also stores a serialized Manifest at key ‘0’.


6. proofs

Stored as JSON files in a separate directory: ../shared/fri_batch_envelopes

State

The global state is a set of key/value pairs:

  • key = keccak(address, slot)
  • value = a single 32-byte word (U256)

All such pairs are stored (committed) in the Merkle tree (see tree.md).

Account metadata (AccountProperties)

Account-related data (balance, nonce, deployed bytecode hash, etc.) is grouped into an AccountProperties struct. We do NOT store every field directly in the tree. Instead:

  • We hash the AccountProperties struct.
  • That hash (a single U256) is what appears in the Merkle tree.
  • The full struct is retrievable from a separate preimage store.

Special address: ACCOUNT_STORAGE (0x8003)

We reserve the synthetic address 0x8003 to map account addresses to their AccountProperties hash. Concretely:

value at key = keccak(0x8003, user_address) = hash(AccountProperties(user_address))

Example: fetching the nonce for address 0x1234

  1. Compute key = keccak(0x8003, 0x1234)
  2. Read the U256 value H from the Merkle tree at that key
  3. Look up preimage(H) to get AccountProperties
  4. Take the nonce field from that struct

This indirection:

  • Keeps the Merkle tree smaller (one leaf per account metadata bundle)
  • Avoids multiple leaf updates when several account fields change at once.
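To make the indirection concrete, here is a minimal Rust sketch of the lookup flow. The AccountProperties shape, the read_tree/read_preimage interfaces, and the exact operand encoding of keccak(0x8003, address) are illustrative stand-ins (keccak-256 via the sha3 crate), not the node's actual API:

use sha3::{Digest, Keccak256};

/// Hypothetical stand-in for the real AccountProperties struct.
struct AccountProperties {
    nonce: u64,
    // balance, bytecode_hash, observable_bytecode_hash, ...
}

/// The reserved synthetic address 0x8003, left-padded to 20 bytes.
const ACCOUNT_STORAGE: [u8; 20] = [
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x80, 0x03,
];

fn get_nonce(
    read_tree: impl Fn(&[u8; 32]) -> [u8; 32],              // key -> committed value
    read_preimage: impl Fn(&[u8; 32]) -> AccountProperties, // hash -> stored struct
    address: &[u8; 20],
) -> u64 {
    // 1. key = keccak(ACCOUNT_STORAGE, address) -- byte encoding is illustrative
    let mut hasher = Keccak256::new();
    hasher.update(ACCOUNT_STORAGE);
    hasher.update(address);
    let key: [u8; 32] = hasher.finalize().into();

    // 2. the tree stores hash(AccountProperties), not the fields themselves
    let props_hash = read_tree(&key);

    // 3. resolve the full struct through the preimage store
    let props = read_preimage(&props_hash);

    // 4. read the field we need
    props.nonce
}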

Bytecodes

We track two related things:

  1. What the outside world sees (the deployed / observable bytecode).
  2. An internal, enriched form that adds execution helpers (artifacts).

Terminology

  • Observable (deployed) bytecode: The exact bytes you get from an RPC call like eth_getCode or cast code.
  • Observable bytecode hash (observable_bytecode_hash): keccak256(observable bytecode). This matches Ethereum conventions.
  • Internal extended representation: the observable bytecode, followed by
    • padding (if any, e.g. to align)
    • artifacts (pre‑computed data used to speed execution, e.g. a jumpdest map).
  • Internal bytecode hash (bytecode_hash): blake2 hash of the full extended representation above. The extended blob itself lives in the preimage store; only the blake hash is stored in AccountProperties.

Stored fields in AccountProperties

  • bytecode_hash (Bytes32): blake2 hash of [observable bytecode | padding | artifacts].
  • unpadded_code_len (u32): Length (in bytes) of the original observable bytecode, before any internal padding or artifacts.
  • artifacts_len (u32): Length (in bytes) of the artifacts segment appended after padding.
  • observable_bytecode_hash (Bytes32): keccak256 of the observable (deployed) bytecode.
  • observable_bytecode_len (u32): Length of the observable (deployed) bytecode. (Currently mirrors unpadded_code_len; kept explicitly for clarity / future evolution.)

Why two hashes?

  • keccak (observable_bytecode_hash) is what external tooling expects and can independently recompute.
  • blake (bytecode_hash) commits to the richer internal representation the node actually executes against (including acceleration data), avoiding recomputing artifacts on every access.

Lookup workflow (simplified)

  1. From AccountProperties get:
    • bytecode_hash → fetch extended blob via preimage store.
    • observable_bytecode_hash → verify against externally visible code if needed.
  2. Use the lengths (unpadded_code_len, artifacts_len) to slice the blob:
    • [0 .. unpadded_code_len) → observable code
    • [end of padding .. end) → artifacts

This separation keeps the Merkle tree lean while enabling fast execution.
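As an illustration of how the stored lengths are used, here is a minimal Rust sketch of slicing the extended blob, assuming the [code | padding | artifacts] layout described above (the function and its signature are hypothetical):

/// Split an extended bytecode blob of the form
/// [ observable code | padding | artifacts ]
/// into its observable-code and artifacts segments.
fn split_extended_blob(
    blob: &[u8],
    unpadded_code_len: u32,
    artifacts_len: u32,
) -> (&[u8], &[u8]) {
    // Observable (deployed) code occupies the first unpadded_code_len bytes.
    let observable = &blob[..unpadded_code_len as usize];
    // Artifacts occupy the last artifacts_len bytes; padding sits in between.
    let artifacts = &blob[blob.len() - artifacts_len as usize..];
    (observable, artifacts)
}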

Tree

State is stored in a binary Merkle‑like tree. In production the logical tree depth is 64 (root at depth 0, leaves at depth 63). We use Blake2 as the tree hashing algorithm.

Optimization:

  • Instead of persisting every individual leaf, we group (package) 8 consecutive leaves together.
  • 8 leaves form a perfectly balanced subtree of height 3 (because 2^3 = 8).
  • One such packaged subtree is stored as a single DB record.

Terminology:

  • We call each 3-level chunk of the logical tree a nibble (note: this is an internal term here).
  • Effective path length (number of nibbles) = ceil(64 / 3) = 22.

So:

  • Logical depth: 64 levels.
  • Physical traversal steps: 22 nibbles.
  • Each nibble lookup loads or updates one packaged subtree (8 leaves).

Result: fewer DB reads/writes while preserving a logical depth of 64.

Genesis and Block 0

Genesis is the one-time process of starting a new chain and producing its first block.

What happens at genesis

  1. The sequencer starts with an empty database.
  2. It loads a hardcoded file (genesis.json) that defines the initial on-chain state.
  3. Loading this file deploys exactly three system contracts:
    • GenesisUpgrade
    • L2WrappedBase
    • ForceDeployer
  4. After loading these, the sequencer begins listening to L1 events that describe the first actual L2 block:
    • Bytecodes to deploy additional contracts (ForceDeployer enables this)
    • Parameters passed to GenesisUpgrade to finish remaining initialization steps

Why these contracts exist

  • ForceDeployer: Allows forced deployment of predefined contract bytecode needed at boot.
  • GenesisUpgrade: Finalizes system configuration after initial contract deployment.
  • L2WrappedBase: Provides a required base implementation (infrastructure dependency).

Security perspective

L1 already knows the exact expected initial L2 state (the three contracts and their storage layout).
This state is defined in zksync-era’s genesis.json and is consumed by the zkStack tooling when setting up the ecosystem on L1.
Because L1 has this canonical reference, it can validate that the L2 started correctly.

Summary

Genesis = load predefined state from genesis.json -> deploy 3 core contracts -> process L1 events to finalize initialization -> produce block 0.

Base token price updater

The base token price updater is a service that periodically fetches the USD prices of the tokens required to correctly calculate fee parameters. There are three options for the token price source: CoinGecko, CoinMarketCap (third-party APIs), or Forced, which instantiates a client that returns the configured prices. By default the forced prices fluctuate slightly at random to simulate a real-world scenario (the fluctuation can be disabled).

Price source configuration

The source is configured in ExternalPriceApiClientConfig. For example, CoinGecko can be configured as follows:

external_price_api_client:
  source: "Coingecko"
  coingecko_api_key: "<key>"

For the Forced config it's essential to provide prices for all required tokens:

  • chain base token
  • base token of the settlement layer (ETH for L1, ZK for Gateway)
  • ETH

So, for a chain that uses USDC as its base token and settles on Gateway, the forced configuration can look like this:

external_price_api_client:
  source: "Forced"
  forced_prices:
    "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48": 1.0    # USDC
    "0x0000000000000000000000000000000000000001": 3000.0 # ETH
    "0x66a5cfb2e9c529f14fe6364ad1075df3a649c0a5": 0.035  # ZK

In the simple case, when the chain's base token is ETH and the settlement layer is L1, only the ETH price is required:

external_price_api_client:
  source: "Forced"
  forced_prices:
    "0x0000000000000000000000000000000000000001": 3000.0 # ETH

Token multiplier setter

For chains whose base token is different from ETH, it's recommended to configure a token multiplier setter signer; the component will then also periodically update the "ETH:token" price ratio on L1. The component and node still work without it, but a warning appears in the logs and the ratio on L1 won't change, meaning the price for L1->L2 txs can eventually become outdated.

You can use either a local private key or a GCP KMS key via the token_multiplier_setter_sk field:

# Option 1: Local private key (plain hex string)
base_token_price_updater:
  token_multiplier_setter_sk: "<private_key_in_hex>"

# Option 2: GCP KMS key (structured object)
base_token_price_updater:
  token_multiplier_setter_sk:
    type: gcp_kms
    resource: "projects/{project}/locations/{location}/keyRings/{ring}/cryptoKeys/{key}/cryptoKeyVersions/{version}"

Mainnet recommendation

For mainnet it's recommended to use one of the third-party sources, so that fees are accurate and correspond to an up-to-date token price, and to provide an API key to avoid getting rate-limited.

Also, it's highly recommended to set the fallback_prices configuration. It defines predefined fallback prices for tokens in case fetching from the external API fails on startup. If it's missing and price fetching fails on startup, block sequencing will be blocked.

The configuration is similar to forced_prices and should contain prices for all required tokens.

base_token_price_updater:
  fallback_prices:
    "0x0000000000000000000000000000000000000001": 3000.0 # ETH

Testnet recommendation

For testnets it's usually acceptable to use the Forced source with reasonable prices configured. If you want fees to behave as on mainnet, you can still use a third-party source and set:

  • base_token_addr_override - a mainnet token address that the source can provide a price for; can be omitted if the base token is ETH.
  • base_token_decimals_override - the token decimals (since the token is on mainnet but the node connects to a testnet, it cannot read the decimals from L1); can be omitted if the base token is ETH or ZK.

Similarly, you can set gateway_base_token_addr_override to the ZK mainnet address if the settlement layer is Gateway.

Example configuration for a chain that settles to Gateway, and chain’s base token is USDC:

base_token_price_updater:
  base_token_addr_override: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" # USDC
  base_token_decimals_override: 6 # USDC decimals
  gateway_base_token_addr_override: "0x66a5cfb2e9c529f14fe6364ad1075df3a649c0a5" # ZK

ZKsync OS fee model

The ZKsync OS fee model is designed to capture L2-specific costs (pubdata costs, ZK proving costs) while staying simple and close to the Ethereum model. Internally the VM keeps track of three resources: gas (similar to EVM), native (a resource that reflects proving costs), and pubdata (the number of bytes to be posted on L1).

There are three parameters in block context that define fees:

  • native_price – price for one unit of native
  • eip1559_basefee – price for one unit of gas
  • pubdata_price – price for one byte of pubdata

All three are specified in base token units, e.g. wei for ETH-based chains.

The VM uses the following reasoning when calculating gas_used. First, it computes the EVM gas used and the effective gas price; evm_gas_used * effective_gas_price gives the number of base token units that would be charged in the EVM case. Second, it computes the native and pubdata costs: native_price * native_used + pubdata_price * pubdata_used. Finally, it takes the maximum of the two values, uses it as the total fee, and returns gas_used such that gas_used * effective_gas_price equals the total fee.
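A worked example with purely illustrative numbers: suppose evm_gas_used = 100_000, effective_gas_price = 10 gwei, native_used = 5_000_000, native_price = 0.5 gwei, pubdata_used = 1_000 bytes, and pubdata_price = 100 gwei.

EVM-style charge:        100_000 * 10                    = 1_000_000 gwei
native + pubdata charge: 5_000_000 * 0.5 + 1_000 * 100   = 2_600_000 gwei
total fee:               max(1_000_000, 2_600_000)       = 2_600_000 gwei
reported gas_used:       2_600_000 / 10                  = 260_000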

Fee configuration and calculation

Native price

Since native reflects ZK proving costs, it should be calculated based on two things:

  • prover machine cost
  • prover performance (how many native units it processes per second)

The native price is configured with FeeConfig::native_price_usd (fee_native_price_usd). The node converts the config parameter to base token units and uses the result as native_price.

Base fee

The idea behind calculating the base fee is to choose it such that, in most cases:

  • the resulting gas used should be equal to evm gas used
  • total fee should not be much higher than what the operator spends in reality

Luckily, for most opcodes the ratio between EVM gas cost and native cost does not differ much. However, for some precompiles, e.g. modexp, the native:gas ratio is higher than for regular opcodes.

So, the base fee is calculated as eip1559_basefee = native_price * native_per_gas, where native_per_gas can be configured via FeeConfig::native_per_gas (fee_native_per_gas). The default value is chosen such that the two properties above hold in most cases, that is, if a transaction doesn't use many precompiles that are expensive in terms of native and doesn't require publishing a lot of pubdata.

Pubdata price

The pubdata price depends on which DA layer the chain uses. If the chain is a validium, the price is set to 0. For rollups that settle to L1:

  • if blobs are used, the L1 blob price is used for the calculation
  • if calldata is used, the L1 gas price is used for the calculation

If the rollup settles to Gateway, the Gateway pubdata price is used.

The blob case is special because the calculation of blob commitments is proven, which results in additional proving costs; thus the pubdata price also depends on the native price. Also, if the chain settles frequently and posts a blob that is not full, the operator still pays for the full blob. The node tracks a fill-ratio statistic for submitted blobs and uses it to adjust the pubdata price accordingly.

Blob prices are not stable on Ethereum testnets and can sometimes grow a lot. At some point the pubdata price can get high enough that the gas cost of even small transactions exceeds the block gas limit. To circumvent this, the node has a configuration parameter FeeConfig::pubdata_price_cap (fee_pubdata_price_cap). If set, the pubdata price is capped at this value, allowing the node to operate normally even when the blob price is very high. For ETH-based testnet chains we recommend setting 10_000_000_000_000. If the base token price differs from ETH, the value should be adjusted accordingly.
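Following the environment-variable override convention shown earlier, the cap could be set like this:

fee_pubdata_price_cap=10000000000000 cargo run --release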

Config overrides

The config allows setting constant overrides for base_fee, native_price, and pubdata_price via fee_base_fee_override, fee_native_price_override, and fee_pubdata_price_override respectively. If set, the node uses the override values instead of calculating the parameters as described above. This is useful if the operator prefers not to have a dynamic fee model, or for testing purposes.

EIP-1559

The EIP-1559 rules for base fee calculation do not make much sense for ZKsync OS, because operator costs don't depend on block gas usage and the sequencer supports a high max TPS that should be enough for most situations. However, a small part of EIP-1559 is still applied:

  • native_price and base_fee can change between subsequent blocks by at most 12.5%
  • pubdata_price can increase between subsequent blocks by at most 50%

This is needed for smooth transitions in case of sudden changes in the parameters that affect fee calculation; otherwise it may lead to poor UX, e.g. transactions can get stuck in the mempool, fail with out-of-gas, etc.

zks_getProof

Returns a Merkle proof for a given account storage slot, verifiable against the L1 batch commitment.

Parameters

| # | Name | Type | Description |
|---|---|---|---|
| 1 | address | Address | The account address. |
| 2 | keys | H256[] | Array of storage keys to prove. |
| 3 | l1BatchNumber | uint64 | The L1 batch number against which the proof should be generated. The proof is for the state after this batch. |
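For example, the method can be called over JSON-RPC like this (address and storage key are placeholders; batch number 2 mirrors the sample response below):

curl -X POST http://localhost:3050 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"zks_getProof","params":["0x<address>",["0x<storage-key>"],2]}'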

Response

{
  "address": "0x...",
  "stateCommitmentPreimage": {
    "nextFreeSlot": "0x...",
    "blockNumber": "0x...",
    "last256BlockHashesBlake": "0x...",
    "lastBlockTimestamp": "0x..."
  },
  "storageProofs": [
    {
      "key": "0x...",
      "proof": { ... }
    }
  ],
  "l1VerificationData": {
    "batchNumber": 2,
    "numberOfLayer1Txs": 0,
    "priorityOperationsHash": "0x...",
    "dependencyRootsRollingHash": "0x...",
    "l2ToL1LogsRootHash": "0x...",
    "commitment": "0x..."
  }
}

address

The account address, as provided in the request. Included in the response so the verifier can derive the flat storage key (blake2s(address_padded32_be || key)) without external context.

stateCommitmentPreimage

The preimage fields needed to recompute the L1 state commitment from the Merkle root. These are constant per batch and shared across all storage proofs in the response.

| Field | Type | Description |
|---|---|---|
| nextFreeSlot | uint64 | The next available leaf index in the state tree after this batch. Part of the tree commitment. |
| blockNumber | uint64 | The last L2 block number in this batch. |
| last256BlockHashesBlake | H256 | blake2s of the concatenation of the last 256 block hashes (each as 32 bytes). |
| lastBlockTimestamp | uint64 | Timestamp of the last L2 block in this batch. |

storageProofs[i]

Each entry corresponds to one requested storage slot.

| Field | Type | Description |
|---|---|---|
| key | H256 | The storage slot (as provided in the input). The verifier derives the tree key as blake2s(address_padded32_be \|\| key). |
| proof | object | The proof object. The type field discriminates between existing and non-existing proofs (see below). |

The proof object always contains a type field:

  • "existing" — the slot exists in the tree. Additional fields: index, value, nextIndex, siblings.
  • "nonExisting" — the slot has never been written to (value is implicitly zero). Additional fields: leftNeighbor, rightNeighbor.

proof when type = "existing"

Returned when the storage slot has been written to at least once.

| Field | Type | Description |
|---|---|---|
| type | string | "existing" |
| index | uint64 | The leaf index in the tree. |
| value | H256 | The storage value. |
| nextIndex | uint64 | The linked-list pointer to the next leaf (by key order). |
| siblings | H256[] | The Merkle path (see Siblings below). |

The leaf key used in the tree is not included explicitly — the verifier derives it as blake2s(address_padded32_be || key) from the address and key fields in the response.

proof when type = "nonExisting"

Returned when the storage slot has never been written to (value is implicitly zero). Proves non-membership by showing two consecutive leaves in the key-sorted linked list that bracket the queried key.

| Field | Type | Description |
|---|---|---|
| type | string | "nonExisting" |
| leftNeighbor | LeafWithProof | The leaf with the largest key smaller than the queried key. |
| rightNeighbor | LeafWithProof | The leaf with the smallest key larger than the queried key. leftNeighbor.nextIndex must equal rightNeighbor.index. |

LeafWithProof

Used within non-existing proofs to represent a neighbor leaf and its Merkle path.

| Field | Type | Description |
|---|---|---|
| index | uint64 | The leaf index in the tree. |
| leafKey | H256 | The leaf's key (the blake2s-derived flat storage key). |
| value | H256 | The leaf's value. |
| nextIndex | uint64 | The linked-list pointer to the next leaf. |
| siblings | H256[] | The Merkle path (see Siblings below). |

l1VerificationData

The remaining fields of StoredBatchInfo that, together with the state commitment derived from the proof, allow the caller to reconstruct the full struct and verify it against L1:

struct StoredBatchInfo {
    uint64  batchNumber;                  // l1VerificationData
    bytes32 batchHash;                    // = stateCommitment (derived from proof)
    uint64  indexRepeatedStorageChanges;   // always 0 (ZKsync OS)
    uint256 numberOfLayer1Txs;            // l1VerificationData
    bytes32 priorityOperationsHash;       // l1VerificationData
    bytes32 dependencyRootsRollingHash;   // l1VerificationData
    bytes32 l2ToL1LogsRootHash;           // l1VerificationData
    uint256 timestamp;                    // always 0 (ZKsync OS)
    bytes32 commitment;                   // l1VerificationData
}

| Field | Type | Description |
|---|---|---|
| batchNumber | uint64 | The L1 batch number. |
| numberOfLayer1Txs | uint256 | Number of priority (L1 → L2) transactions in this batch. |
| priorityOperationsHash | H256 | Rolling hash of priority operations. |
| dependencyRootsRollingHash | H256 | Rolling hash of dependency roots. |
| l2ToL1LogsRootHash | H256 | Root hash of the L2 → L1 log Merkle tree. |
| commitment | H256 | Batch auxiliary commitment. |

Two fields of StoredBatchInfo are fixed constants in ZKsync OS and therefore omitted from the response: indexRepeatedStorageChanges is always 0 and timestamp is always 0.

Tree Structure

The state tree is a fixed-depth (64) binary Merkle tree using Blake2s-256 as the hash function. Leaves are allocated left-to-right by insertion order and linked together in a sorted linked list by key.

Key derivation

The flat storage key for a slot is derived as:

flat_key = blake2s(address_padded32_be || storage_key)

where address_padded32_be is the 20-byte address zero-padded on the left to 32 bytes.

Leaf hashing

leaf_hash = blake2s(key || value || next_index_le8)

where key and value are 32 bytes each, and next_index_le8 is the next pointer encoded as 8 bytes little-endian.

An empty (unoccupied) leaf has key = 0, value = 0, next = 0.

Node hashing

node_hash = blake2s(left_child_hash || right_child_hash)

Siblings

The siblings array is an ordered list of sibling hashes forming the Merkle path from leaf to root.

Order. siblings[0] is the sibling at the leaf level (depth 64). Subsequent entries move toward the root. A full (uncompressed) path has 64 entries, with the last entry being the sibling at depth 1 (one level below the root). At each level, if the current index is even the node is a left child; if odd it is a right child. The index is halved (integer division) after each level.

Empty subtree compression. The tree has depth 64 but is sparsely populated — most subtrees are entirely empty. The hash of an empty subtree at each level is deterministic:

emptyHash[0] = blake2s(0x00{32} || 0x00{32} || 0x00{8})    // empty leaf hash (72 zero bytes)
emptyHash[i] = blake2s(emptyHash[i-1] || emptyHash[i-1])    // for i = 1..63

If trailing siblings (toward the root) are equal to the corresponding emptyHash for that level, they are omitted. The verifier reconstructs them: if siblings has fewer than 64 entries, the missing entries at positions len(siblings) through 63 are filled with emptyHash[len(siblings)], emptyHash[len(siblings)+1], etc.

For example, if a leaf is at index 5 in a tree with 100 occupied leaves, siblings at levels ~7 and above will all be empty subtree hashes, so the array will contain only ~7 entries instead of 64.
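The padding rule and path walk can be sketched in Rust as follows. This is a minimal sketch using the blake2 crate's Blake2s256 with default (unkeyed) parameters; the node's actual hasher configuration may differ:

use blake2::{Blake2s256, Digest};

fn blake2s(parts: &[&[u8]]) -> [u8; 32] {
    let mut h = Blake2s256::new();
    for p in parts {
        h.update(p);
    }
    h.finalize().into()
}

/// emptyHash[0] = hash of an empty leaf (72 zero bytes: key, value, next pointer);
/// emptyHash[i] = blake2s(emptyHash[i-1] || emptyHash[i-1]).
fn empty_hashes() -> [[u8; 32]; 64] {
    let mut out = [[0u8; 32]; 64];
    out[0] = blake2s(&[&[0u8; 72]]);
    for i in 1..64 {
        out[i] = blake2s(&[&out[i - 1], &out[i - 1]]);
    }
    out
}

/// Walk the Merkle path from a leaf hash to the root, padding a
/// compressed sibling list with empty-subtree hashes as described above.
fn walk_merkle_path(leaf_hash: [u8; 32], mut index: u64, siblings: &[[u8; 32]]) -> [u8; 32] {
    let empty = empty_hashes(); // recomputed per call; cache in real code
    let mut current = leaf_hash;
    for level in 0..64 {
        // Missing trailing siblings are the empty-subtree hash for that level.
        let sibling = siblings.get(level).copied().unwrap_or(empty[level]);
        current = if index % 2 == 0 {
            blake2s(&[&current, &sibling]) // current node is a left child
        } else {
            blake2s(&[&sibling, &current]) // current node is a right child
        };
        index /= 2;
    }
    assert_eq!(index, 0);
    current
}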

Verification

deriveFlatKey (address, storageKey) → H256 :=
    blake2s(leftPad32(address) || storageKey)

hashLeaf (leafKey, value, nextIndex) → H256 :=
    blake2s(leafKey || value || nextIndex.to_le_bytes(8))

emptyHash (0) → H256 := blake2s(0x00{72})
emptyHash (i) → H256 := blake2s(emptyHash(i-1) || emptyHash(i-1))

padSiblings (siblings) → H256[64] :=
    siblings ++ [emptyHash(i) for i in len(siblings)..63]

walkMerklePath (leafHash, index, siblings) → H256 :=
    fullPath ← padSiblings(siblings)
    current ← leafHash
    idx ← index
    for sibling in fullPath:
        current ← if even(idx) then blake2s(current || sibling)
                                else blake2s(sibling || current)
        idx ← idx / 2
    assert idx = 0
    current

verifyExistingProof (address, storageProof) → (H256, H256) :=
    let flatKey = deriveFlatKey(address, storageProof.key) in
    let p = storageProof.proof in
    let stateRoot = walkMerklePath(hashLeaf(flatKey, p.value, p.nextIndex),
                                   p.index, p.siblings) in
    (stateRoot, p.value)

verifyNonExistingProof (address, storageProof) → (H256, H256) :=
    let flatKey = deriveFlatKey(address, storageProof.key) in
    let left = storageProof.proof.leftNeighbor in
    let right = storageProof.proof.rightNeighbor in
    let leftRoot = walkMerklePath(hashLeaf(left.leafKey, left.value, left.nextIndex),
                                  left.index, left.siblings) in
    let rightRoot = walkMerklePath(hashLeaf(right.leafKey, right.value, right.nextIndex),
                                   right.index, right.siblings) in
    assert leftRoot = rightRoot
    assert left.leafKey < flatKey < right.leafKey
    assert left.nextIndex = right.index
    (leftRoot, 0x00{32})

computeStateCommitment (stateRoot, preimage) → H256 :=
    blake2s(
        stateRoot
        || preimage.nextFreeSlot.to_be_bytes(8)
        || preimage.blockNumber.to_be_bytes(8)
        || preimage.last256BlockHashesBlake
        || preimage.lastBlockTimestamp.to_be_bytes(8)
    )

Full verification

A client verifies a storage proof end-to-end in three steps:

  1. Verify the Merkle proof — walk storageProofs to recover the tree root, hash it with stateCommitmentPreimage to get the stateCommitment.

  2. Reconstruct StoredBatchInfo — place stateCommitment into the batchHash field, fill the remaining fields from l1VerificationData, set indexRepeatedStorageChanges = 0 and timestamp = 0.

  3. Compare against L1 — compute keccak256(abi.encode(StoredBatchInfo)) and compare with the hash fetched from L1 by calling storedBatchHash(batchNumber) on the diamond proxy contract. This is a single eth_call, no event scanning required.

If the hashes match, the storage values are proven to be part of the state committed on L1.

verify (response, onChainHash) :=
    -- onChainHash = diamondProxy.storedBatchHash(batchNumber)  (fetched by caller via eth_call)

    -- Step 1: verify each Merkle proof and collect the tree root
    let stateRoots = []
    forall storageProof in response.storageProofs:
        let (stateRoot, value) =
            match storageProof.proof.type with
            | "existing"    => verifyExistingProof(response.address, storageProof)
            | "nonExisting" => verifyNonExistingProof(response.address, storageProof)
        in
        stateRoots.append(stateRoot)

    -- All proofs must agree on the same tree root
    assert all elements of stateRoots are equal
    let stateRoot = stateRoots[0]

    -- Step 2: compute state commitment from tree root + preimage
    let stateCommitment = computeStateCommitment(stateRoot, response.stateCommitmentPreimage)

    -- Step 3: reconstruct StoredBatchInfo and check against L1
    let storedBatchInfo = StoredBatchInfo {
        batchNumber:                 response.l1VerificationData.batchNumber,
        batchHash:                   stateCommitment,
        indexRepeatedStorageChanges: 0,
        numberOfLayer1Txs:           response.l1VerificationData.numberOfLayer1Txs,
        priorityOperationsHash:      response.l1VerificationData.priorityOperationsHash,
        dependencyRootsRollingHash:  response.l1VerificationData.dependencyRootsRollingHash,
        l2ToL1LogsRootHash:          response.l1VerificationData.l2ToL1LogsRootHash,
        timestamp:                   0,
        commitment:                  response.l1VerificationData.commitment,
    }

    let computedHash = keccak256(abi.encode(storedBatchInfo))
    assert computedHash = onChainHash

Where onChainHash is obtained by the caller via diamondProxy.storedBatchHash(batchNumber) — a single eth_call, no event scanning required.

Alternatively, the caller can obtain onChainHash by scanning BlockCommit events emitted by the diamond proxy for the relevant batch number.

Guides

Updating local chains with new genesis and L1 state

This guide describes how to update a local chains setup for a particular protocol version with a new genesis, L1 state, and all required dependencies, including chain configs, wallets, and contracts YAML files.

There are two ways to update the local setup:

  1. using GitHub Actions (the recommended way for most users)
  2. running the update scripts locally (for self-hosted setups, next protocol version development, or when experimenting with custom contracts)

Info

Use automated update through GitHub Actions if:

  • you’re iterating on the next protocol version and need to update the server setup with the new version of contracts
  • you need to regenerate the local chains setup due to tooling changes, e.g. anvil updates, new scripts, etc.

Run scripts locally if:

  • you are experimenting with custom contracts locally and want to update your local setup
  • you are a self-hosted user wanting to update your server setup with a custom version of contracts

Follow the appropriate section below depending on your use case.


Updating with GitHub Actions

For general guidelines about the GitHub Actions workflow used to perform the local chains update, please refer to the General GitHub Actions guidelines.

Then, follow the instructions in the Server Update GitHub workflow guide to perform the update of the local chains setup through GitHub Actions.

As a result of the workflow execution, you will get the updated local chains setup available as:

  • a git patch published as a workflow artifact that you can download and apply locally using the git apply command
  • a new commit on your custom development branch
  • a new pull request with the update

Tip

If you are planning to merge your local chains update into the main branch, please prefer the automated update through GitHub Actions, as it is more reliable, tested in CI, and less error-prone than a manual update.


Updating locally

To perform a local update of the local chains setup, you can run the update scripts locally on your machine.

Compatibility between protocol versions

Before performing the update, please check the protocol compatibility tables to make sure that you are using compatible versions of the protocol (contracts), server, and tooling.

Warning

Using incompatible versions of the protocol, server, or tooling may lead to unexpected errors during the update process. Carefully check the compatibility tables and make sure to use compatible versions of all components before proceeding with the update.

These tables are the source of truth for the compatibility of different versions of the protocol, server, and tooling. If you notice that the tables are outdated, please report it to the team, and/or contribute to updating the documentation with the correct information.

Prerequisites and environment

The scripts are located in the matter-labs/zksync-os-scripts repository.

Go through the Prerequisites guide to set up your environment.

Info

There is a Quick install section that walks through setting up all required dependencies on Unix-like systems; it is recommended to use it as a reference.

Re-generate local state

It is recommended to go through the full Server Update guide first.

Tip

You can directly jump to the Local Use section of the guide if you already know what the script does.

As a result, you will get the updated local chains setup in the local-chains/ directory of your server repository.