ZKsync OS Server Developer Documentation

This book guides you through running, understanding, and extending the ZKsync OS Server.

Understanding the System

Deep dives into internal components and lifecycle.

Running the System

These guides help you set up and operate the server in different environments.

Setup

Prerequisites

This project requires:

  • The Foundry toolchain (v1.3.4 stable)
  • The Rust toolchain

Install Foundry (v1.3.4)

Install Foundry v1.3.4 (newer stable versions are likely to work too but not guaranteed):

# Download the Foundry installer
curl -L https://foundry.paradigm.xyz | bash

# Install forge, cast, anvil, chisel
# Ensure you are using the 1.3.4 stable release
foundryup -i 1.3.4

Verify your installation:

anvil --version

The output should include anvil Version: 1.3.4-v1.3.4.

Install Rust

Install Rust using rustup:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

After installation, ensure Rust is available:

rustc --version

Linux packages

# essentials
sudo apt-get install -y build-essential pkg-config cmake clang lldb lld libssl-dev apt-transport-https ca-certificates curl software-properties-common git    

Run

Local

To run the node locally, first launch anvil:

anvil --load-state zkos-l1-state.json --port 8545

then launch the server:

cargo run

To restart the chain, erase the local DB and re-run anvil:

rm -rf db/*
anvil --load-state zkos-l1-state.json --port 8545

By default, fake (dummy) proofs are used for both FRI and SNARK proofs.

Rich account:

PRIVATE_KEY = 0x7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110
ACCOUNT_ID = 0x36615Cf349d7F6344891B1e7CA7C72883F5dc049

Example transaction to send:

cast send -r http://localhost:3050 0x5A67EE02274D9Ec050d412b96fE810Be4D71e7A0 --value 100 \
  --private-key 0x7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110

Config options

See node/sequencer/config.rs for config options and defaults. Use env variables to override, e.g.:

prover_api_fake_provers_enabled=false cargo run --release

Docker

sudo docker build -t zksync_os_sequencer .
sudo docker run -d --name sequencer -p 3050:3050 -p 3124:3124 -p 3312:3312 \
  -e batcher_maximum_in_flight_blocks=15 -v /mnt/localssd/db:/db zksync_os_sequencer

External node

Setting the block_replay_download_address environment variable puts the node in external node mode: it receives block replays from another node instead of producing its own blocks. The node still gets priority transactions from L1 and checks that they match the ones in the replay, but it won’t change L1 state.

To run the external node locally, you need to set its services’ ports so they don’t overlap with the main node’s.

For example:

block_replay_download_address=localhost:3053 \
block_replay_server_address=0.0.0.0:3054 \
sequencer_rocks_db_path=./db/en sequencer_prometheus_port=3313 rpc_address=0.0.0.0:3051 \
cargo run --release

Batch verification (2FA)

Batch verification requires each batch generated by the main node to be signed off by a certain number of designated ENs before it is committed to L1. To enable it, configure both sides as follows (a combined example follows the lists below).

Main node / sequencer:

  • batch_verification_server_enabled=true – enable
  • batch_verification_threshold – required number of ENs to sign each batch
  • batch_verification_accepted_signers – comma-separated list of Ethereum addresses corresponding to the EN keys

Participating ENs:

  • batch_verification_client_enabled=true – enable
  • batch_verification_connect_address – IP and port of the main node’s verification server (e.g. 10.10.1.1:1234)
  • batch_verification_signing_key – the EN’s private key
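
For illustration, a hypothetical local setup with a threshold of one signer might look like this (the address, key placeholder, and port are illustrative, not defaults):

# Main node / sequencer
batch_verification_server_enabled=true \
batch_verification_threshold=1 \
batch_verification_accepted_signers=0x36615Cf349d7F6344891B1e7CA7C72883F5dc049 \
cargo run --release

# Participating EN (in addition to the usual EN settings)
batch_verification_client_enabled=true \
batch_verification_connect_address=127.0.0.1:1234 \
batch_verification_signing_key=<EN_PRIVATE_KEY> \
cargo run --release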

Otterscan (Local Explorer)

The server supports the ots_ namespace and can therefore be used with the Otterscan block explorer. To run a local instance as a Docker container (bound to http://localhost:5100):

docker run --rm -p 5100:80 --name otterscan -d --env ERIGON_URL="http://127.0.0.1:3050" otterscan/otterscan

See Otterscan’s docs for other running options.

Exposed Ports

  • 3050 - L2 JSON RPC
  • 3053 - Block replay server (transport for EN)
  • 3124 - Prover API (e.g. 127.0.0.1:3124/prover-jobs/status; only enabled if prover_api_component_enabled is set to true)
  • 3312 - Prometheus
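
As a quick smoke test against a locally running node (assuming default ports; /metrics is the usual Prometheus path and is an assumption here):

# L2 JSON RPC - any standard eth_ method works
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  http://127.0.0.1:3050

# Prometheus metrics
curl -s http://127.0.0.1:3312/metrics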

FAQ

Failed to read L1 state: contract call to getAllZKChainChainIDs returned no data (“0x”); the called address might not be a contract

Something went wrong with L1 - check that anvil is really running with the proper state on the right port.
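
A quick way to verify (assuming anvil’s default port):

# Should return the L1 chain id if anvil is up and reachable
cast chain-id -r http://localhost:8545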

Failed to deserialize context

If you hit this error on startup, check whether you have stale (‘old’) RocksDB data in the db/node1 directory.
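
If stale data is the culprit, the simplest fix is usually the same as a chain restart - wipe the local DB (this deletes all local chain data):

rm -rf db/*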

Design principles

  • Minimal, async persistence
    • to meet throughput and latency requirements, we avoid synchronous persistence on the critical path. Additionally, we aim to store only the data that is strictly needed, minimizing the potential for state inconsistency
  • Easy to replay arbitrary blocks
    • Sequencer: components are idempotent
    • Batcher: batcher component skips all blocks until the first uncommitted batch. Thus, downstream components only receive batches that they need to act upon
  • State - strong separation between
    • Actual state - data needed to execute VM: key-value storage and preimages map
    • Receipts repositories - data only needed in API
    • Data related to Proofs and L1 - not needed by sequencer / JSON RPC - only introduced downstream from batcher

Subsystems

  • Sequencer subsystem — mandatory for every node. Executes transactions in the VM and sends results downstream to other components.
    • Handles Produce and Replay commands in a uniform way (see model/mod.rs and execution/block_executor.rs)
    • For each block: (1) persists it in the WAL (see block_replay_storage.rs), (2) pushes to state (see the state crate), (3) exposes the block and tx receipts to the API (see repositories/mod.rs), (4) pushes to async channels for downstream subsystems. Waits on backpressure.
  • API subsystem — optional (not configurable at the moment). Has shared access to state. Exposes an Ethereum-compatible JSON RPC
  • Batcher subsystem — runs for the main node - most of it is disabled for ENs.
    • Turns a stream of blocks into a stream of batches (1 batch = 1 proof = 1 L1 commit); exposes Prover APIs; submits batches and proofs to L1.
    • For each batch, computes the Prover Input (runs the RISC-V binary (app.bin) and records its input as a stream of Vec<u32> - see batcher/mod.rs)
    • This process requires a Merkle tree with materialized root hashes and proofs at every block boundary.
    • Runs L1 senders for each of commit / prove / execute
    • Runs the Priority Tree Manager, which applies new L1->L2 transactions to the dynamic Merkle tree and prepares execute commands. It runs on both the main node and ENs. ENs don’t send execute txs to L1, but they need to keep the tree up to date so that if the node becomes the main node, it doesn’t need to build the tree from scratch.

Note on the Persistent Tree — it is only needed by the Batcher subsystem. The Sequencer doesn’t need the tree — block hashes don’t include the root hash. Still, even when the batcher subsystem is not enabled, we want to run the tree for potential failover.

Component Details

See individual components and state recovery details in the table below. Note that most components have little to no internal state or persistence — this is one of the design principles.

| Component | In-memory state | Persistence | State recovery |
| --- | --- | --- | --- |
| Command Source | starting_block (only used on startup) | none | starting_block is the first block after the compacted block stored in state, i.e., starting_block = highest_block - blocks_to_retain_in_memory |
| BlockContextProvider | next_l1_priority_id; block_hashes_for_next_block (last 256 block hashes) | none | next_l1_priority_id: take from the ReplayRecord of starting_block - 1; block_hashes_for_next_block: take from the 256 ReplayRecords before starting_block |
| L1Watcher | Gapless list of priority transactions, starting from the last one committed to L1 | none | none - recovers itself from L1 |
| L2Mempool (RETH crate) | Prepared list of pending L2 transactions | none | none (consider persisting mempool transactions in the future) |
| BlockExecutor | none 🔥 | none | none |
| Repositories (API subsystem) | BlockHeaders and Transactions for ~blocks_to_retain_in_memory blocks | Historical BlockHeaders and Transactions | none - recovers naturally when replaying blocks from starting_block |
| State | All storage logs and preimages for the last blocks_to_retain_in_memory blocks | Compacted state at some older block (highest_block - blocks_to_retain_in_memory): full state map and all preimages | none - recovers naturally when replaying blocks from starting_block |
| Merkle Tree | Only persistence | Full Merkle tree, including previous values on each leaf | none |
| ⬇️ Batcher Subsystem components | ⬇️ Components below operate on batches, not blocks | ⬇️ Components below must not rely on persistence - otherwise failover is not possible | ⬇️ |
| Batcher | Startup: starting_batch and batcher_starting_block; operation: the trailing batch’s CommitBatchInfo | none | first_block_to_process: the block after the last block in the last committed L1 batch; last_persisted_block: the block after which we start checking for batch timeouts; StoredBatchInfo in run_loop: currently from the FRI cache; todo - load the last committed StoredBatchInfo from L1 OR reprocess the last committed batch |
| Prover Input Generator | none | none | none |
| FRI Job Manager | Gapless list of unproved batches with ProverInput and prover assignment info | none | none - batches before starting_batch are guaranteed to have FRI proofs; batches after will go through the pipeline again |
| FRI Store/Cache | none | Map<BatchNumber, FRIProof> (todo: extract from the node process to enable failover) | none |
| L1 Committer | none* | none | none - recovers itself from L1 |
| L1 Proof Submitter | none* | none | none - recovers itself from L1 |
| L1 Executor | none* | none | none - recovers itself from L1 |
| SNARK Job Manager (TODO - missing) | Gapless list of batches with their FRI proofs and prover assignment info | none | Load batches that are committed but not proved on L1 yet; load their FRI proofs from the FRI cache (TODO) |
| Priority Tree Manager | Dynamic Merkle tree with L1->L2 transaction hashes | Compressed data needed to rebuild the tree; see CachedTreeData for more details | none - recovers itself from replay storage |

RPC

  • All standard eth_ methods are supported (except those specific to EIP-2930, EIP-4844 and EIP-7702). Block tags have a special meaning (see the cast examples after this list):
    • earliest - not supported yet (will return genesis or first uncompressed block)
    • pending - the latest produced block
    • latest - same as pending (consider taking consensus into account here)
    • safe - the latest block that has been committed to L1
    • finalized - not supported yet (will return the latest block that has been executed on L1)
  • The zks_ namespace is kept to a minimum right now to avoid legacy from Era. Only the following methods are supported:
    • zks_getBridgehubContract
  • The ots_ namespace is used for Otterscan integration (meant for local development only)
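
For example, the block tags above work with standard tooling (assuming the default RPC port):

# Latest produced block (same as pending)
cast block-number -r http://localhost:3050

# The newest block that has been committed to L1
cast block safe -r http://localhost:3050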

Prover API

        .route("/prover-jobs/status", get(status))
        .route("/prover-jobs/FRI/pick", post(pick_fri_job))
        .route("/prover-jobs/FRI/submit", post(submit_fri_proof))
        .route("/prover-jobs/SNARK/pick", post(pick_snark_job))
        .route("/prover-jobs/SNARK/submit", post(submit_snark_proof))

Database Schema Overview

All persistent data is stored across multiple databases (RocksDB unless noted otherwise):

  • block_replay_wal
  • preimages
  • repository
  • state
  • tree
  • proofs (JSON files, not RocksDB)

1. block_replay_wal

Write-ahead log containing recent (non-compacted) block data.

| Column | Key | Value |
| --- | --- | --- |
| block_output_hash | block number | Block output hash |
| context | block number | Binary-encoded BlockContext (BlockMetadataFromOracle) |
| last_processed_l1_tx_id | block number | ID (u64) of the last processed L1 tx in the block |
| txs | block number | Vector of EIP-2718 encoded transactions |
| node_version | block number | Node version that produced the block |
| latest | ‘latest_block’ | Latest block number |

2. preimages

| Column | Key | Value |
| --- | --- | --- |
| meta | ‘block’ | Latest block ID |
| storage | hash | Preimage for the hash |

3. repository

Canonical blocks and transactions.

| Column | Key | Value |
| --- | --- | --- |
| initiator_and_nonce_to_hash | address (20 bytes) + nonce (u64) | Transaction hash |
| tx_meta | transaction hash | Binary TxMeta (hash, number, gas used, etc.) |
| block_data | block hash | Alloy-serialized block |
| tx_receipt | transaction hash | Binary EIP-2718 receipt |
| meta | ‘block_number’ | Latest block number |
| tx | transaction hash | EIP-2718 encoded bytes |
| block_number_to_hash | block number | Block hash |

4. state

Data compacted from the write-ahead log.

| Column | Key | Value |
| --- | --- | --- |
| meta | ‘base_block’ | Base block number for this state snapshot |
| storage | key | Value (compacted storage) |

5. tree

Merkle-like structure.

| Column | Key | Value |
| --- | --- | --- |
| default | composite (version + nibble + index) | Serialized Leaf or Internal node |
| key_indices | hash | Key index |

Note: The ‘default’ column also stores a serialized Manifest at key ‘0’.


6. proofs

Stored as JSON files in a separate directory: ../shared/fri_batch_envelopes

State

The global state is a set of key/value pairs:

  • key = keccak(address, slot)
  • value = a single 32-byte word (U256)

All such pairs are stored (committed) in the Merkle tree (see tree.md).

Account metadata (AccountProperties)

Account-related data (balance, nonce, deployed bytecode hash, etc.) is grouped into an AccountProperties struct. We do NOT store every field directly in the tree. Instead:

  • We hash the AccountProperties struct.
  • That hash (a single U256) is what appears in the Merkle tree.
  • The full struct is retrievable from a separate preimage store.

Special address: ACCOUNT_STORAGE (0x8003)

We reserve the synthetic address 0x8003 to map account addresses to their AccountProperties hash. Concretely: the value stored at key = keccak(0x8003, user_address) is hash(AccountProperties(user_address)).
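
As a rough illustration with cast, assuming the (0x8003, user_address) pair is hashed as a plain concatenation of the two 20-byte addresses (the exact byte encoding is an implementation detail):

# Hypothetical key under which 0x...1234's AccountProperties hash would live
cast keccak $(cast concat-hex 0x0000000000000000000000000000000000008003 0x0000000000000000000000000000000000001234)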

Example: fetching the nonce for address 0x1234

  1. Compute key = keccak(0x8003, 0x1234)
  2. Read the U256 value H from the Merkle tree at that key
  3. Look up preimage(H) to get AccountProperties
  4. Take the nonce field from that struct

This indirection:

  • Keeps the Merkle tree smaller (one leaf per account metadata bundle)
  • Avoids multiple leaf updates when several account fields change at once.

Bytecodes

We track two related things:

  1. What the outside world sees (the deployed / observable bytecode).
  2. An internal, enriched form that adds execution helpers (artifacts).

Terminology

  • Observable (deployed) bytecode: The exact bytes you get from an RPC call like eth_getCode or cast code.
  • Observable bytecode hash (observable_bytecode_hash): keccak256(observable bytecode). This matches Ethereum conventions.
  • Internal extended representation: the observable bytecode, followed by:
    • padding (if any, e.g. to align)
    • artifacts (pre‑computed data used to speed execution, e.g. jumpdest map).
  • Internal bytecode hash (bytecode_hash): blake2 hash of the full extended representation above. The extended blob itself lives in the preimage store; only the blake hash is stored in AccountProperties.

Stored fields in AccountProperties

  • bytecode_hash (Bytes32): blake2 hash of [observable bytecode | padding | artifacts].
  • unpadded_code_len (u32): Length (in bytes) of the original observable bytecode, before any internal padding or artifacts.
  • artifacts_len (u32): Length (in bytes) of the artifacts segment appended after padding.
  • observable_bytecode_hash (Bytes32): keccak256 of the observable (deployed) bytecode.
  • observable_bytecode_len (u32): Length of the observable (deployed) bytecode. (Currently mirrors unpadded_code_len; kept explicitly for clarity / future evolution.)

Why two hashes?

  • keccak (observable_bytecode_hash) is what external tooling expects and can independently recompute.
  • blake (bytecode_hash) commits to the richer internal representation the node actually executes against (including acceleration data), avoiding recomputing artifacts on every access.

Lookup workflow (simplified)

  1. From AccountProperties get:
    • bytecode_hash → fetch extended blob via preimage store.
    • observable_bytecode_hash → verify against externally visible code if needed.
  2. Use the lengths (unpadded_code_len, artifacts_len) to slice the blob:
    • [0 .. unpadded_code_len) → observable code
    • [end of padding .. end) → artifacts

This separation keeps the Merkle tree lean while enabling fast execution.
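
A minimal sketch of that slicing, assuming the extended blob has already been fetched from the preimage store into a local file and using placeholder lengths:

# Placeholder lengths; real values come from AccountProperties
UNPADDED_CODE_LEN=1024
ARTIFACTS_LEN=256

# [0 .. unpadded_code_len) -> observable code
head -c "$UNPADDED_CODE_LEN" blob.bin > observable_code.bin

# [blob_len - artifacts_len .. blob_len) -> artifacts
tail -c "$ARTIFACTS_LEN" blob.bin > artifacts.bin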

Tree

State is stored in a binary Merkle‑like tree. In production, the logical tree depth is 64 (root at depth 0, leaves at depth 63). We use Blake2 as the tree hashing algorithm.

Optimization:

  • Instead of persisting every individual leaf, we group (package) 8 consecutive leaves together.
  • 8 leaves form a perfectly balanced subtree of height 3 (because 2^3 = 8).
  • One such packaged subtree is stored as a single DB record.

Terminology:

  • We call each 3-level chunk of the logical tree a nibble (note: this is an internal term here).
  • Effective path length (number of nibbles) = ceil(64 / 3) = 22.

So:

  • Logical depth: 64 levels.
  • Physical traversal steps: 22 nibbles.
  • Each nibble lookup loads or updates one packaged subtree (8 leaves).

Result: fewer DB reads/writes while preserving a logical depth of 64.
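
Purely to illustrate the arithmetic (the traversal order of chunks within a key is an implementation detail), a 64-bit leaf index splits into 22 three-bit chunks, with the last chunk covering the single remaining bit:

# Prints the 22 three-bit chunks of an example 64-bit index
index=$(( 0x0123456789abcdef ))
for (( i = 0; i < 22; i++ )); do
  printf '%d ' $(( (index >> (i * 3)) & 0x7 ))
done
echo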

Genesis and Block 0

Genesis is the one-time process of starting a new chain and producing its first block.

What happens at genesis

  1. The sequencer starts with an empty database.
  2. It loads a hardcoded file (genesis.json) that defines the initial on-chain state.
  3. This file deploys exactly three system contracts:
    • GenesisUpgrade
    • L2WrappedBase
    • ForceDeployer
  4. After loading these, the sequencer begins listening to L1 events that describe the first actual L2 block:
    • Bytecodes to deploy additional contracts (ForceDeployer enables this)
    • Parameters passed to GenesisUpgrade to finish remaining initialization steps

Why these contracts exist

  • ForceDeployer: Allows forced deployment of predefined contract bytecode needed at boot.
  • GenesisUpgrade: Finalizes system configuration after initial contract deployment.
  • L2WrappedBase: Provides a required base implementation (infrastructure dependency).

Security perspective

L1 already knows the exact expected initial L2 state (the three contracts and their storage layout).
This state is defined in zksync-era’s genesis.json and is consumed by the zkStack tooling when setting up the ecosystem on L1.
Because L1 has this canonical reference, it can validate that the L2 started correctly.

Summary

Genesis = load predefined state from genesis.json -> deploy 3 core contracts -> process L1 events to finalize initialization -> produce block 0.

Guides

Running with L1

Simplest (no contract changes etc)

If you’re not making any contract changes and simply want to hook up to L1, start anvil with the pre-created state.

This repo includes a pre-built L1 state, zkos-l1-state.json, that can be loaded into anvil. The state was generated by zkstack init and essentially consists of all L1 contracts deployed and initialized with the L2 genesis. It comes with some L1 priority transactions that were generated by the old genesis logic and therefore fail in the new implementation. It also comes with a deposit transaction that makes 0x36615cf349d7f6344891b1e7ca7c72883f5dc049 a rich account (>10k ETH). To regenerate the state, see “Regenerate L1 state” below.

Before you run an L1 node, make sure you have a 1.x version of anvil installed (see the Foundry guide above). Then:

anvil --load-state zkos-l1-state.json --port 8545
...
Listening on 127.0.0.1:8545
...

Advanced (contract changes, multi setup etc)

Use this path if you want a more custom setup (for example, you made some changes to the L1 contracts, or you want to run multiple sequencers hooked up to the same L1).

The high-level steps are:

  • Start L1 (anvil)
  • Set up the ecosystem and configs using the zkstack CLI from zksync-era
  • Update the necessary config
  • Start the sequencer(s)

Start L1

Start a local L1 by running anvil.

Setup ecosystem and chain configs

Use the zksync-os-integration branch of zksync-era.

IMPORTANT: the contracts deployed will come from the zksync-era/contracts directory. So if you want to test any changes to contracts, you have to put them there.

Make sure that your zkstack was compiled from the ‘main’ branch of zksync-era and is relatively fresh (after September 10).

Run this from the directory above zksync-era.

mkdir zkstack-playground && cd zkstack-playground
zkstack ecosystem create --ecosystem-name local-v1 --l1-network localhost --chain-name era1 --chain-id 270 --prover-mode no-proofs --wallet-creation random --link-to-code ../../zksync-era --l1-batch-commit-data-generator-mode rollup --start-containers false   --base-token-address 0x0000000000000000000000000000000000000001 --base-token-price-nominator 1 --base-token-price-denominator 1 --evm-emulator false

For validium, use --l1-batch-commit-data-generator-mode validium instead.

This will create a ‘local-v1’ ecosystem directory, with one chain ‘era1’.

Fund L1 accounts

Now we’re ready to compile contracts and deploy them to L1.

Before the step below, you might want to fund some of the wallet accounts created above. If you’re running on a local L1, you can use the script below. Do not forget to use a different PRIVKEY if you initialized anvil with a different mnemonic. If you’re running on Sepolia, zkstack will tell you which accounts to fund.

RPC_URL=http://localhost:8545
PRIVKEY=0xac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80
find . -type f -name 'wallets.yaml' | while read -r file; do
  echo "Processing $file …"

  # extract all addresses (strips leading spaces and the "address:" prefix)
  grep -E '^[[:space:]]*address:' "$file" \
    | sed -E 's/^[[:space:]]*address:[[:space:]]*//' \
    | while read -r addr; do

      if [[ $addr =~ ^0x[0-9a-fA-F]{40}$ ]]; then
        echo "→ Sending 10 ETH to $addr"
        cast send "$addr" \
          --value 10ether \
          --private-key "$PRIVKEY" \
          --rpc-url "$RPC_URL"
      else
        echo "⚠️  Skipping invalid address: '$addr'" >&2
      fi

    done
done

Deploy L1 contracts

cd local_v1
zkstack ecosystem init --deploy-paymaster=false --deploy-erc20=false --observability=false \
  --deploy-ecosystem --l1-rpc-url=http://localhost:8545 --chain era1 --zksync-os

Start sequencer

After this, you can finally run the sequencer:

general_zkstack_cli_config_dir=../zkstack-playground/local_v1/chains/era1 cargo run --release

The general_zkstack_cli_config_dir config option reads the YAML files and sets the proper addresses and private keys. Alternatively, you need to set the following (see the example after this list):

  • l1_sender_operator_commit_pk to the operator private key from wallets.yaml of the zkstack tool output,
  • l1_sender_operator_prove_pk and l1_sender_operator_execute_pk to the respective wallets from wallets.yaml,
  • l1_sender_bridgehub_address to bridgehub_proxy_addr in contracts.yaml of the zkstack tool output,
  • (if running validium) l1_sender_da_input_mode to validium
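
Spelled out as a single invocation, the manual alternative might look like this (all values are placeholders to be taken from your zkstack output):

l1_sender_operator_commit_pk=<COMMIT_PK> \
l1_sender_operator_prove_pk=<PROVE_PK> \
l1_sender_operator_execute_pk=<EXECUTE_PK> \
l1_sender_bridgehub_address=<BRIDGEHUB_PROXY_ADDR> \
cargo run --release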

Restarting

If you restart anvil, you have to repeat a subset of the steps above to re-create the bridgehub contracts:

  • fund the accounts (shell script)
  • re-run ecosystem init
  • you might also want to restart the sequencer - it will figure out the state on L1, and commit missing batches.

Regenerate L1 state

Note: There is an experimental tool that can run these commands for you. If it turns out to be useful, we might make it more permanent.

The L1 state is checked into this repo as zkos-l1-state.json. To regenerate it from scratch, run the following commands:

anvil -m "stuff slice staff easily soup parent arm payment cotton trade scatter struggle" --state zkos-l1-state.json

Note that we pass this mnemonic so that 0x36615cf349d7f6344891b1e7ca7c72883f5dc049 is a rich wallet - a legacy from Era.

Then deploy the contracts using the legacy tooling (see above). After that, add a deposit transaction to the state - integration and load tests expect 0x36615cf349d7f6344891b1e7ca7c72883f5dc049 to have L2 funds. For this, use the generate-deposit tool in this repo. Make sure to provide the correct bridgehub address (you can find it in configs/contracts.yaml):

> cargo run --bin zksync_os_generate_deposit -- --bridgehub <BRIDGEHUB_ADDRESS>
L1 balance: 9879999865731420184000
Successfully submitted L1->L2 deposit tx with hash '0xb8544a2a9bc55713f1f94acf3711c23d07e02917f44885b05e20b13af1402283'

Process finished with exit code 0

Now stop anvil (ctrl+c) - the state will be saved to the file. Rerun it with --load-state zkos-l1-state.json (--load-state, not --state, otherwise the state will be overwritten). Commit the new file to git.

Update values in L1SenderConfig:

  • bridgehub_address -> bridgehub_proxy_addr in contracts.yaml of zkstack tool output
  • operator_commit_pk -> operator_private_key in wallets.yaml
  • operator_prove_pk, operator_execute_pk -> prove_operator and execute_operator keys from wallets.yaml

Running multiple chains

Create a new chain (era2)

zkstack ecosystem create --ecosystem-name local-v1 --l1-network localhost --chain-name era2 --chain-id 271 --prover-mode no-proofs --wallet-creation random --link-to-code ../../zksync-era --l1-batch-commit-data-generator-mode rollup --start-containers false   --base-token-address 0x0000000000000000000000000000000000000001 --base-token-price-nominator 1 --base-token-price-denominator 1 --evm-emulator false

Make sure to fund the accounts again (see the script in the docs above).

Init the new chain (deploying contracts etc.):

zkstack chain init --deploy-paymaster=false  \
  --l1-rpc-url=http://localhost:8545 --chain era2 \
  --server-db-url=postgres://invalid --server-db-name=invalid

And start the sequencer:

general_zkstack_cli_config_dir=../zkstack-playground/local_v1/chains/era2 cargo run --release

Updates

Verification keys

If you made any change to the zkos binary (for example, included a binary from a new version of zkos), you should take the following steps:

  • commit it here and inside zksync-airbender-prover (you’ll be committing the multiblock_batch.bin binary).
  • generate verification keys and update era-contracts
    • you can use the tool from https://github.com/mm-zk/zksync_tools/tree/main/zkos/generate_vk
    • you need to find the latest era-contracts tag that we used (probably on top of the zksync-os-stable branch)
    • once the script generates the change, commit it to the era-contracts repo.

Then follow the instructions below for era-contracts updates.

Updating era contracts

If you make any change to era-contracts, you should update zkos-l1-state.json (especially if it is a breaking change – be careful with those once we’re in production).

  • commit your change to era-contracts and create a new release/tag (we name them like zkos-v0.29.3, for example)
  • go to zksync-era, checkout zksync-os-integration, and update the contracts dependency there (this step will hopefully disappear soon)
  • then you can run the tool from: https://github.com/mm-zk/zksync_tools/tree/main/zkos/update_state_json
    • this tool will generate state.json (and genesis.json if needed), and if you run it with COMMIT_CHANGES=true, it will also create a branch in zksync-os-server.
  • check that the server is still working (start anvil with the new state, and run the server from a clean DB).
  • commit your change to zksync-os-server and optionally create a new release.

WARNING: the instructions above assume that you didn’t change the genesis hash (any change to the L2 Upgrade Handler, ComplexUpgrader, or WrappedBaseToken might change it). If you did, you have to regenerate the hashes, which is a longer process.

Updating genesis

There are 3 contracts that are part of genesis – L2ComplexUpgrader, L2GenesisUpgrade, and L2WrappedBaseToken. If any of them has changed, you’ll have to regenerate genesis.

Currently this is a somewhat frustrating process, but we plan to improve it in the near future.

  • Step 1: run the relevant parts of “Updating era contracts”: run the tool above and confirm that genesis.json was really updated.
  • Step 2: compute the “genesis hash”: when you start the server with the new genesis.json created in the step above, add a print here: https://github.com/matter-labs/zksync-os-server/blob/main/node/bin/src/batcher/util.rs#L36 to get the hash value.
  • Step 3: put the new hash value into: https://github.com/matter-labs/zksync-era/blob/zksync-os-integration/etc/env/file_based/genesis.yaml
  • Step 4: re-run Step 1. Make sure to use zksync-era with the change from Step 3, as the new genesis is used inside CTM registration, so it impacts the state.json contents.
  • Step 5: check that everything works – you should be able to run anvil with the new state (anvil --load-state zkos-l1-state.json) and zksync-os-server with the new genesis.json (it normally loads it from the local directory).
