ZKsync OS Server Developer Documentation
This book guides you through running, understanding, and extending the ZKsync OS Server.
Understanding the System
Deep dives into internal components and lifecycle.
Running the System
These guides help you set up and operate the server in different environments.
- Updating local chains: genesis and L1 state - a guide to updating your local chain setup, genesis, and L1 state.
Setup
Prerequisites
This project requires:
- The Foundry nightly toolchain
- The Rust toolchain
Install Foundry (v1.5.1)
Install Foundry v1.5.1:
# Download the Foundry installer
curl -L https://foundry.paradigm.xyz | bash
# Install forge, cast, anvil, chisel
# Ensure you are using the 1.5.1 stable release
foundryup -i 1.5.1
Verify your installation:
anvil --version
The output should include anvil Version: 1.5.1.
Install Rust
Install Rust using rustup:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
After installation, ensure Rust is available:
rustc --version
Linux packages
# essentials
sudo apt-get install -y build-essential pkg-config cmake clang lldb lld libssl-dev apt-transport-https ca-certificates curl software-properties-common git
Run
Using the run_local.sh Script
⚠️ This script is a temporary solution. Do not depend on it in production.
The run_local.sh script automates starting Anvil and chain node(s):
# Run a single chain
./run_local.sh ./local-chains/v30.2/default
# Run multiple chains
./run_local.sh ./local-chains/v30.2/multi_chain
# Run with logging to files
./run_local.sh ./local-chains/v30.2/multi_chain --logs-dir ./logs
Manual setup
To run the node locally, first decompress the L1 state and launch anvil:
gzip -dfk ./local-chains/v30.2/l1-state.json.gz
anvil --load-state ./local-chains/v30.2/l1-state.json --port 8545
then launch the server:
cargo run
To restart the chain, erase the local DB and re-run anvil:
rm -rf db/*
By default, fake (dummy) proofs are used for both FRI and SNARK proofs.
Rich account:
PRIVATE_KEY=0x7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110
ACCOUNT_ID=0x36615Cf349d7F6344891B1e7CA7C72883F5dc049
Example transaction to send:
cast send -r http://localhost:3050 0x5A67EE02274D9Ec050d412b96fE810Be4D71e7A0 --value 100 --private-key 0x7726827caac94a7f9e1b160f7ea819f172f7b6f9d2a97f992c38edeab82d4110
Config options
See node/sequencer/config.rs for config options and defaults. Use a configuration file to override the defaults, e.g.:
cargo run --release -- --config ./local-chains/v30.2/default/config.yaml
Explore the local-chains folder for additional chain configs grouped by protocol version. Detailed information is available in local-chains/README.md.
You can also use environment variables to override the default settings:
prover_api_fake_provers_enabled=false cargo run --release
If both a config file and environment variables are set, the environment variables take precedence.
Ephemeral mode
Ephemeral mode runs the node with a temporary, isolated state directory, allowing you to spin up one or more local chains without them contending for the same folder. When enabled, the node creates a temporary base directory for RocksDB and the file-backed object store; this directory is automatically removed on shutdown. To stay as lightweight as possible, ephemeral mode disables all APIs except JSON-RPC (the status, Prometheus, and other APIs are unavailable). It is useful for quick local testing and multi-chain setups.
The ephemeral setting is part of the general config and can be set like any other config value:
general_ephemeral=true cargo run --release
Docker
sudo docker build -t zksync_os_sequencer .
sudo docker run -d --name sequencer -p 3050:3050 -p 3124:3124 -p 3312:3312 -e batcher_maximum_in_flight_blocks=15 -v /mnt/localssd/db:/db zksync_os_sequencer
External node
Setting the general_node_role=external environment variable puts the node in external node mode: it receives block replays from another node instead of producing its own blocks. The node still fetches priority transactions from L1 and checks that they match the ones in the replay, but it won't change L1 state.
To run the external node locally, you need to enable networking on both the main node and the external node, and set the external node's service ports so they don't overlap with the main node's.
For example:
network_enabled=true \
network_secret_key=9cc842aaeb1492e567d989a34367c7239d1db21bad31557689c3d9d16e45b0b3 \
network_address=127.0.0.1 \
network_port=3061 \
network_boot_nodes=enode://dbd18888f17bad7df7fa958b57f4993f47312ba5364508fd0d9027e62ea17a037ca6985d6b0969c4341f1d4f8763a802785961989d07b1fb5373ced9d43969f6@127.0.0.1:3060 \
sequencer_rocks_db_path=./db/en \
sequencer_prometheus_port=3313 \
rpc_address=0.0.0.0:3051 \
cargo run --release
Batch verification (2FA)
Batch verification requires each batch generated by the main node to be signed off by a certain number of designated ENs before it is committed to L1. To enable it, configure as follows.
Main node / sequencer:
- batch_verification_server_enabled=true - enable batch verification
- batch_verification_threshold - required number of ENs that must sign each batch
- batch_verification_accepted_signers - comma-separated list of ETH addresses corresponding to EN keys
Participating ENs:
- batch_verification_client_enabled=true - enable batch verification
- batch_verification_connect_address - IP and port of the main node's verification server (e.g. 10.10.1.1:1234)
- batch_verification_signing_key - EN private key
Otterscan (Local Explorer)
The server supports the ots_ namespace and can therefore be used in combination with the Otterscan block explorer. To run a local instance as a Docker container (bound to http://localhost:5100):
docker run --rm -p 5100:80 --name otterscan -d --env ERIGON_URL="http://127.0.0.1:3050" otterscan/otterscan
See Otterscan’s docs for other running options.
Exposed Ports
- 3050 - L2 JSON-RPC
- 3060 - P2P communication (e.g. replay transport)
- 3124 - Prover API (e.g. 127.0.0.1/prover-jobs/status); only enabled if prover_api_component_enabled is set to true
- 3312 - Prometheus
FAQ
Failed to read L1 state: contract call to getAllZKChainChainIDs returned no data (“0x”); the called address might
not be a contract
Something went wrong with L1: check that anvil is really running with the proper state on the right port.
Failed to deserialize context
If you hit this error on startup, check whether you have stale RocksDB data in the db/node1 directory.
Design principles
- Minimal, async persistence
- to meet throughput and latency requirements, we avoid synchronous persistence on the critical path. Additionally, we aim to store only the data that is strictly needed, minimizing the potential for state inconsistency
- Easy to replay arbitrary blocks
- Sequencer: components are idempotent
- Batcher: the batcher component skips all blocks until the first uncommitted batch. Thus, downstream components only receive batches that they need to act upon
- State - strong separation between
- Actual state - data needed to execute VM: key-value storage and preimages map
- Receipts repositories - data only needed in API
- Data related to proofs and L1 - not needed by the sequencer / JSON-RPC; only introduced downstream of the batcher
Subsystems
- Sequencer subsystem — mandatory for every node. Executes transactions in VM, sends results downstream to other
components.
- Handles Produce and Replay commands in a uniform way (see model/mod.rs and execution/block_executor.rs)
- For each block: (1) persists it in the WAL (see block_replay_storage.rs), (2) pushes it to state (see the state crate), (3) exposes the block and tx receipts to the API (see repositories/mod.rs), (4) pushes it to async channels for downstream subsystems. Waits on backpressure.
- API subsystem - optional (not configurable at the moment). Has shared access to state. Exposes Ethereum-compatible JSON-RPC.
- Batcher subsystem - runs on the main node; most of it is disabled for ENs.
- Turns a stream of blocks into a stream of batches (1 batch = 1 proof = 1 L1 commit); exposes Prover APIs; submits batches and proofs to L1.
- For each batch, computes the Prover Input: runs the RISC-V binary (app.bin) and records its input as a stream of Vec<u32> (see batcher/mod.rs)
- This process requires a Merkle tree with materialized root hashes and proofs at every block boundary.
- Runs L1 senders for each of commit/prove/execute
- Runs the Priority Tree Manager, which applies new L1->L2 transactions to the dynamic Merkle tree and prepares execute commands. It runs on both the main node and ENs. ENs don't send execute txs to L1, but they keep the tree up to date so that, if the node becomes the main node, it doesn't have to rebuild the tree from scratch.
Note on the persistent tree: it is only needed by the batcher subsystem. The sequencer doesn't need the tree, since block hashes don't include the root hash. Still, even when the batcher subsystem is not enabled, we run the tree to allow for failover.
Component Details
See individual components and state recovery details in the table below. Note that most components have little to no internal state or persistence — this is one of the design principles.
| Component | In-memory state | Persistence | State Recovery |
|---|---|---|---|
| Command Source | starting_block (only used on startup) | none | starting_block is the first block after the compacted block stored in state, i.e., starting_block = highest_block - blocks_to_retain_in_memory. |
| BlockContextProvider | next_l1_priority_id; block_hashes_for_next_block (last 256 block hashes) | none | next_l1_priority_id: take from ReplayRecord of starting_block - 1; block_hashes_for_next_block: take from 256 ReplayRecords before starting_block |
| L1Watcher | Gapless list of Priority transactions - starting from the last committed to L1 | none | none - recovers itself from L1 |
| L2Mempool (RETH crate) | prepared list of pending L2 transactions | none | none (consider persisting mempool transactions in the future) |
| BlockExecutor | none 🔥 | none | none |
| Repositories (API subsystem) | BlockHeaders and Transactions for ~blocks_to_retain_in_memory blocks | Historical BlockHeaders and Transactions | none - recovers naturally when replaying blocks from starting_block |
| State | All Storage Logs and Preimages for blocks_to_retain_in_memory last blocks | Compacted state at some older block (highest_block - blocks_to_retain_in_memory): full state map and all preimages | none - recovers naturally when replaying blocks from starting_block |
| Merkle Tree | none (persistence only) | Full Merkle tree, including previous values on each leaf | none |
| ⬇️ Batcher Subsystem Components | ⬇️ Components below operate on Batches - not Blocks️ | ⬇️ Components below must not rely on persistence - otherwise failover is not possible | ⬇️ |
| Batcher | startup: starting_batch and batcher_starting_block; operation: trailing batch's CommitBatchInfo | none | first_block_to_process: block after the last block in the last committed L1 batch; last_persisted_block: the block after which we start checking for batch timeouts. StoredBatchInfo in run_loop: currently from the FRI cache; TODO: load the last committed StoredBatchInfo from L1 OR reprocess the last committed batch |
| Prover Input Generator | none | none | |
| FRI Job Manager | Gapless List of unproved batches with ProverInput and prover assignment info | none | none - batches before starting_batch are guaranteed to have FRI proofs, batches after will go through the pipeline again |
| FRI Store/Cache | none | Map<BatchNumber, FRIProof> (todo: extract from the node process to enable failover) | none |
| L1 Committer | none* | none | none - recovers itself from L1 |
| L1 Proof Submitter | none* | none | none - recovers itself from L1 |
| L1 Executor | none* | none | none - recovers itself from L1 |
| SNARK Job Manager (TODO - missing) | Gapless list of batches with their FRI proofs and prover assignment info | none | Load batches that are committed but not proved on L1 yet. Load their FRI proofs from FRI cache (TODO) |
| Priority Tree Manager | Dynamic Merkle tree with L1->L2 transaction hashes | Compressed data needed to rebuild the tree, see CachedTreeData for more details | none - recovers itself from replay storage |
RPC
- All standard eth_ methods are supported (except those specific to EIP-2930, EIP-4844 and EIP-7702). Block tags have a special meaning:
  - earliest - not supported yet (will return genesis or the first uncompressed block)
  - pending - the latest produced block
  - latest - same as pending (consider taking consensus into account here)
  - safe - the latest block that has been committed to L1
  - finalized - not supported yet (will return the latest block that has been executed on L1)
- The zks_ namespace is kept to a minimum for now to avoid legacy from Era. The only supported method is zks_getBridgehubContract.
- The ots_ namespace is used for the Otterscan integration (meant for local development only).
Prover API
.route("/prover-jobs/v1/status", get(status))
.route("/prover-jobs/v1/FRI/pick", post(pick_fri_job))
.route("/prover-jobs/v1/FRI/submit", post(submit_fri_proof))
.route("/prover-jobs/v1/SNARK/pick", post(pick_snark_job))
.route("/prover-jobs/v1/SNARK/submit", post(submit_snark_proof))
Database Schema Overview
All persistent data is stored across multiple RocksDB databases:
- block_replay_wal
- preimages
- repository
- state
- tree
- proofs (JSON files, not RocksDB)
1. block_replay_wal
Write-ahead log containing recent (non-compacted) block data.
| Column | Key | Value |
|---|---|---|
| block_output_hash | block number | Block output hash |
| context | block number | Binary-encoded BlockContext (BlockMetadataFromOracle) |
| last_processed_l1_tx_id | block number | ID (u64) of the last processed L1 tx in the block |
| txs | block number | Vector of EIP-2718 encoded transactions |
| node_version | block number | Node version that produced the block |
| latest | ‘latest_block’ | Latest block number |
2. preimages
| Column | Key | Value |
|---|---|---|
| meta | ‘block’ | Latest block ID |
| storage | hash | Preimage for the hash |
3. repository
Canonical blocks and transactions.
| Column | Key | Value |
|---|---|---|
| initiator_and_nonce_to_hash | address (20 bytes) + nonce (u64) | Transaction hash |
| tx_meta | transaction hash | Binary TxMeta (hash, number, gas used, etc.) |
| block_data | block hash | Alloy-serialized block |
| tx_receipt | transaction hash | Binary EIP-2718 receipt |
| meta | ‘block_number’ | Latest block number |
| tx | transaction hash | EIP-2718 encoded bytes |
| block_number_to_hash | block number | Block hash |
4. state
Data compacted from the write-ahead log.
| Column | Key | Value |
|---|---|---|
| meta | ‘base_block’ | Base block number for this state snapshot |
| storage | key | Value (compacted storage) |
5. tree
Merkle-like structure.
| Column | Key | Value |
|---|---|---|
| default | composite (version + nibble + index) | Serialized Leaf or Internal node |
| key_indices | hash | Key index |
Note: The ‘default’ column also stores a serialized Manifest at key ‘0’.
6. proofs
Stored as JSON files in a separate directory: ../shared/fri_batch_envelopes
State
The global state is a set of key/value pairs:
- key = keccak(address, slot)
- value = a single 32-byte word (U256)
All such pairs are stored (committed) in the Merkle tree (see tree.md).
Account metadata (AccountProperties)
Account-related data (balance, nonce, deployed bytecode hash, etc.) is grouped into an AccountProperties struct. We do NOT store every field directly in the tree. Instead:
- We hash the AccountProperties struct.
- That hash (a single U256) is what appears in the Merkle tree.
- The full struct is retrievable from a separate preimage store.
Special address: ACCOUNT_STORAGE (0x8003)
We reserve the synthetic address 0x8003 to map account addresses to their AccountProperties hash. Concretely, the value stored at key = keccak(0x8003, user_address) is hash(AccountProperties(user_address)).
Example: fetching the nonce for address 0x1234
- Compute key = keccak(0x8003, 0x1234)
- Read the U256 value H from the Merkle tree at that key
- Look up preimage(H) to get AccountProperties
- Take the nonce field from that struct
This indirection:
- Keeps the Merkle tree smaller (one leaf per account metadata bundle)
- Avoids multiple leaf updates when several account fields change at once.
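The lookup workflow above can be sketched in a few lines. This is an illustration only: sha3_256 stands in for keccak256 (they are different hashes), and the Merkle tree and preimage store are plain in-memory dicts.

```python
import hashlib

# Stand-in for keccak256: Python's hashlib only ships sha3_256, which is NOT
# keccak256, but it keeps this sketch dependency-free. Illustration only.
def h(*parts: bytes) -> bytes:
    return hashlib.sha3_256(b"".join(parts)).digest()

ACCOUNT_STORAGE = (0x8003).to_bytes(20, "big")   # synthetic address 0x8003
user = (0x1234).to_bytes(20, "big")              # example address 0x...1234

# Toy preimage store and tree (hypothetical in-memory stand-ins).
account_props = {"nonce": 7, "balance": 10**18}
props_hash = h(repr(sorted(account_props.items())).encode())
preimages = {props_hash: account_props}
tree = {h(ACCOUNT_STORAGE, user): props_hash}    # one leaf per account bundle

# Lookup: key = keccak(0x8003, address) -> H, then preimage(H) -> properties.
key = h(ACCOUNT_STORAGE, user)
nonce = preimages[tree[key]]["nonce"]
```

Note how changing several fields of account_props at once would still update only the single tree leaf holding props_hash.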
Bytecodes
We track two related things:
- What the outside world sees (the deployed / observable bytecode).
- An internal, enriched form that adds execution helpers (artifacts).
Terminology
- Observable (deployed) bytecode: The exact bytes you get from an RPC call like eth_getCode or cast code.
- Observable bytecode hash (observable_bytecode_hash): keccak256(observable bytecode). This matches Ethereum conventions.
- Internal extended representation:
  - observable bytecode
  - padding (if any, e.g. to align)
  - artifacts (pre-computed data used to speed up execution, e.g. a jumpdest map)
- Internal bytecode hash (bytecode_hash): blake2 hash of the full extended representation above. The extended blob itself lives in the preimage store; only the blake hash is stored in AccountProperties.
Stored fields in AccountProperties
- bytecode_hash (Bytes32): blake2 hash of [observable bytecode | padding | artifacts].
- unpadded_code_len (u32): length (in bytes) of the original observable bytecode, before any internal padding or artifacts.
- artifacts_len (u32): Length (in bytes) of the artifacts segment appended after padding.
- observable_bytecode_hash (Bytes32): keccak256 of the observable (deployed) bytecode.
- observable_bytecode_len (u32): Length of the observable (deployed) bytecode. (Currently mirrors unpadded_code_len; kept explicitly for clarity / future evolution.)
Why two hashes?
- keccak (observable_bytecode_hash) is what external tooling expects and can independently recompute.
- blake (bytecode_hash) commits to the richer internal representation the node actually executes against (including acceleration data), avoiding recomputing artifacts on every access.
Lookup workflow (simplified)
- From AccountProperties get:
- bytecode_hash → fetch extended blob via preimage store.
- observable_bytecode_hash → verify against externally visible code if needed.
- Use lengths (unpadded_code_len, artifacts_len) to slice: [0 .. unpadded_code_len) → observable code [end of padding .. end) → artifacts
This separation keeps the Merkle tree lean while enabling fast execution.
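The length-based slicing described above can be sketched as follows. All byte values and the concrete blob layout here are hypothetical; only the use of unpadded_code_len and artifacts_len mirrors the text.

```python
# Hypothetical extended blob: observable code, then padding, then artifacts.
observable = bytes.fromhex("60806040")   # deployed bytecode (4 bytes, made up)
padding = b"\x00" * 4                    # alignment padding (assumed)
artifacts = b"\xaa\xbb"                  # e.g. a jumpdest map (assumed)
extended_blob = observable + padding + artifacts

# Lengths as stored in AccountProperties.
unpadded_code_len = len(observable)
artifacts_len = len(artifacts)

# [0 .. unpadded_code_len) -> observable code
code = extended_blob[:unpadded_code_len]
# trailing artifacts_len bytes -> artifacts
arts = extended_blob[len(extended_blob) - artifacts_len:]
```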
Tree
State is stored in a binary Merkle-like tree. In production the logical tree depth is 64 (root at depth 0, leaves at depth 63). We use Blake2 as the tree hashing algorithm.
Optimization:
- Instead of persisting every individual leaf, we group (package) 8 consecutive leaves together.
- 8 leaves form a perfectly balanced subtree of height 3 (because 2^3 = 8).
- One such packaged subtree is stored as a single DB record.
Terminology:
- We call each 3-level chunk of the logical tree a nibble (note: this is an internal term here).
- Effective path length (number of nibbles) = ceil(64 / 3) = 22.
So:
- Logical depth: 64 levels.
- Physical traversal steps: 22 nibbles.
- Each nibble lookup loads or updates one packaged subtree (8 leaves).
Result: fewer DB reads/writes while preserving a logical depth of 64.
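The nibble arithmetic can be sketched as follows. The chunking scheme below is an illustration of the numbers above, not the actual storage code.

```python
import math

LOGICAL_DEPTH = 64
NIBBLE_BITS = 3
NUM_NIBBLES = math.ceil(LOGICAL_DEPTH / NIBBLE_BITS)   # 22 traversal steps

def path_nibbles(leaf_index: int) -> list[int]:
    """Split a 64-bit leaf path into 3-bit chunks, most significant first.
    Each chunk selects one packaged 8-leaf subtree on the way down (the last
    chunk is shorter, since 64 is not a multiple of 3)."""
    bits = format(leaf_index, f"0{LOGICAL_DEPTH}b")
    return [int(bits[i:i + NIBBLE_BITS], 2)
            for i in range(0, LOGICAL_DEPTH, NIBBLE_BITS)]

chunks = path_nibbles(0x1234_5678_9ABC_DEF0)
```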
Genesis and Block 0
Genesis is the one-time process of starting a new chain and producing its first block.
What happens at genesis
- The sequencer starts with an empty database.
- It loads a hardcoded file (genesis.json) that defines the initial on-chain state.
- This file deploys exactly three system contracts:
- GenesisUpgrade
- L2WrappedBase
- ForceDeployer
- After loading these, the sequencer begins listening to L1 events that describe the first actual L2 block:
- Bytecodes to deploy additional contracts (ForceDeployer enables this)
- Parameters passed to GenesisUpgrade to finish remaining initialization steps
Why these contracts exist
- ForceDeployer: Allows forced deployment of predefined contract bytecode needed at boot.
- GenesisUpgrade: Finalizes system configuration after initial contract deployment.
- L2WrappedBase: Provides a required base implementation (infrastructure dependency).
Security perspective
L1 already knows the exact expected initial L2 state (the three contracts and their storage layout).
This state is defined in zksync-era’s genesis.json and is consumed by the zkStack tooling when setting up the ecosystem on L1.
Because L1 has this canonical reference, it can validate that the L2 started correctly.
Summary
Genesis = load predefined state from genesis.json -> deploy 3 core contracts -> process L1 events to finalize initialization -> produce block 0.
Base token price updater
The base token price updater is a service that periodically fetches the USD prices of tokens required to calculate fee parameters correctly. There are three options for the token price source: CoinGecko, CoinMarketCap (third-party APIs), or Forced, which instantiates a client that returns the configured prices; by default the prices fluctuate a little at random to simulate a real-world scenario (the fluctuation can be disabled).
Price source configuration
The source is configured in ExternalPriceApiClientConfig. For example, CoinGecko can be configured as follows:
external_price_api_client:
source: "Coingecko"
coingecko_api_key: "<key>"
For the Forced source it is essential to provide prices for all required tokens:
- chain base token
- base token of the settlement layer (ETH for L1, ZK for Gateway)
- ETH
So, for a chain that uses USDC as its base token and settles on Gateway, the forced configuration can look like this:
external_price_api_client:
source: "Forced"
forced_prices:
"0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48": 1.0 # USDC
"0x0000000000000000000000000000000000000001": 3000.0 # ETH
"0x66a5cfb2e9c529f14fe6364ad1075df3a649c0a5": 0.035 # ZK
In the simple case, when the chain's base token is ETH and the settlement layer is L1, only the ETH price is required:
external_price_api_client:
source: "Forced"
forced_prices:
"0x0000000000000000000000000000000000000001": 3000.0 # ETH
Token multiplier setter
For chains whose base token is not ETH, it is recommended to configure token_multiplier_setter_sk; the component will then also periodically update the ETH:token price ratio on L1. The component and node will still work without it, but there will be a warning in the logs and the ratio on L1 won't change, meaning that the price for L1->L2 txs can eventually become outdated.
base_token_price_updater:
token_multiplier_setter_sk: "<private_key_in_hex>"
Mainnet recommendation
For mainnet it is recommended to use one of the third-party sources so that fees are accurate and correspond to an up-to-date token price, and to provide an API key to avoid getting rate-limited.
It is also highly recommended to set the fallback_prices configuration. It provides predefined fallback prices for tokens in case fetching from the external API fails on startup. If it is missing and price fetching fails on startup, block sequencing will be blocked. The configuration is similar to forced_prices and should contain prices for all required tokens.
base_token_price_updater:
fallback_prices:
"0x0000000000000000000000000000000000000001": 3000.0 # ETH
Testnet recommendation
For testnets it is usually acceptable to use the Forced source with reasonable prices configured. If you want fees to behave as on mainnet, you can still use a third-party source and set the following config:
- base_token_addr_override - mainnet token address that the source can provide a price for; can be omitted if the base token is ETH.
- base_token_decimals_override - token decimals (since the token is on mainnet but the node connects to a testnet, it cannot get the decimals from L1); can be omitted if the base token is ETH or ZK.

Similarly, you can set gateway_base_token_addr_override to the ZK mainnet address in case the settlement layer is Gateway.
Example configuration for a chain that settles to Gateway, and chain’s base token is USDC:
base_token_price_updater:
base_token_addr_override: "0xa0b86991c6218b36c1d19d4a2e9eb0ce3606eb48" # USDC
base_token_decimals_override: 6 # USDC decimals
gateway_base_token_addr_override: "0x66a5cfb2e9c529f14fe6364ad1075df3a649c0a5" # ZK
ZKSync OS Fee model
The ZKsync OS fee model is designed to capture L2-specific costs (pubdata costs, ZK proving costs) well, while staying both simple and close to the Ethereum model. Internally the VM keeps track of three resources: gas (similar to EVM gas), native (a resource that reflects proving costs), and pubdata (the number of bytes to be posted on L1).
There are three parameters in block context that define fees:
- native_price - price of one unit of native
- eip1559_basefee - price of one unit of gas
- pubdata_price - price of one byte of pubdata

All three are specified in base token units, e.g. wei for ETH-based chains.
The VM uses the following reasoning when calculating gas_used. First, it computes the EVM gas used and the effective gas price; evm_gas_used * effective_gas_price gives the number of base token units that would be charged in the pure-EVM case. Second, it computes the native and pubdata costs: native_price * native_used + pubdata_price * pubdata_used. Finally, it takes the maximum of the two values, uses it as the total fee, and returns gas_used such that gas_used * effective_gas_price equals the total fee.
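A minimal sketch of this reasoning, with hypothetical numbers; the integer division used to back-derive gas_used is a simplification of whatever rounding the node actually applies.

```python
# Sketch of the gas_used calculation described above (names follow the text).
def compute_fee(evm_gas_used: int, effective_gas_price: int,
                native_used: int, native_price: int,
                pubdata_used: int, pubdata_price: int) -> tuple[int, int]:
    evm_cost = evm_gas_used * effective_gas_price
    resource_cost = native_used * native_price + pubdata_used * pubdata_price
    total_fee = max(evm_cost, resource_cost)
    # gas_used is back-derived so that gas_used * effective_gas_price ~ total_fee
    gas_used = total_fee // effective_gas_price
    return total_fee, gas_used

# A transfer dominated by EVM gas (all numbers hypothetical):
total, gas = compute_fee(21_000, 10**9, 30_000, 10**6, 0, 0)
```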
Fee configuration and calculation
Native price
Since native reflects ZK proving costs, it should be calculated based on two things:
- prover machine cost
- prover performance (how many native units it processes per second)
Native price is configured with FeeConfig::native_price_usd (fee_native_price_usd).
Node converts config parameter to base token units and uses the result as native_price.
Base fee
The idea behind the base fee is to choose it such that, in most cases:
- the resulting gas used is equal to the EVM gas used
- the total fee is not much higher than what the operator actually spends
Luckily, for most opcodes the ratio between EVM gas cost and native cost does not differ much. However, for some precompiles, e.g. modexp, the native:gas ratio is higher than for regular opcodes.
So, base fee is calculated as eip1559_basefee = native_price * native_per_gas,
where native_per_gas can be configured via FeeConfig::native_per_gas (fee_native_per_gas).
The default value is chosen such that the two properties above hold in most cases, that is, when a transaction doesn't use many precompiles that are expensive in terms of native and doesn't require publishing a lot of pubdata.
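With hypothetical numbers, the formula above works out as follows (both values below are made up for illustration, not recommended settings):

```python
# Suppose one native unit costs 250 base-token units and fee_native_per_gas
# is configured to 40 (both hypothetical).
native_price = 250
native_per_gas = 40

# eip1559_basefee = native_price * native_per_gas
eip1559_basefee = native_price * native_per_gas
```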
Pubdata price
The pubdata price depends on which DA the chain uses. If the chain is a validium, the price is set to 0. For rollups that settle to L1:
- if blobs are used, the L1 blob price is used for the calculation
- if calldata is used, the L1 gas price is used for the calculation

If the rollup settles to Gateway, the Gateway pubdata price is used.
Pricing for the blobs case is special because the calculation of blob commitments is proven, which results in additional proving costs; thus the pubdata price also depends on the native price. Also, if the chain settles frequently and posts blobs that are not full, the operator still pays for the full blob. The node tracks a fill-ratio statistic for submitted blobs and uses it to adjust the pubdata price accordingly.
Blob prices are not stable on Ethereum testnets and can sometimes grow a lot. At some point the pubdata price can be high enough that the gas cost of even small transactions exceeds the block gas limit. To circumvent this, the node has a configuration parameter FeeConfig::pubdata_price_cap (fee_pubdata_price_cap). If set, the pubdata price is capped at that value, allowing the node to operate normally even when the blob price is very high. For ETH-based testnet chains we recommend setting it to 10_000_000_000_000. If the base token is not ETH, the value should be adjusted accordingly.
Config overrides
The config allows setting constant overrides for base_fee, native_price, and pubdata_price.
The config variables are fee_base_fee_override, fee_native_price_override, and fee_pubdata_price_override, respectively.
If set, the node uses the override values instead of calculating the parameters as described above.
This can be used if the operator prefers not to have a dynamic fee model, or for testing purposes.
EIP-1559
The EIP-1559 rules for base fee calculation do not make much sense for ZKsync OS, because operator costs don't depend on block gas usage, and the sequencer supports a high max TPS that should be enough for most situations. However, a small part of EIP-1559 is still applied:
- native_price and base_fee can change between subsequent blocks by at most 12.5%
- pubdata_price can increase between subsequent blocks by at most 50%

This is needed for smooth transitions in case of sudden changes in the parameters that affect fee calculation; otherwise it may lead to poor UX, e.g. txs can get stuck in the mempool, fail with out-of-gas, etc.
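A sketch of the clamping these caps imply. The cap values come from the text; the clamping form, and the behavior for pubdata price decreases, are assumptions.

```python
# Clamp a target price to the allowed per-block change band around prev.
def clamp(prev: int, target: int, max_up: float, max_down: float) -> int:
    hi = int(prev * (1 + max_up))
    lo = int(prev * (1 - max_down))
    return max(lo, min(target, hi))

# native_price and base_fee: at most +/-12.5% per block.
next_base_fee = clamp(prev=1_000, target=5_000, max_up=0.125, max_down=0.125)
# pubdata_price: at most +50% per block (decrease limit assumed unbounded here).
next_pubdata_price = clamp(prev=1_000, target=5_000, max_up=0.5, max_down=1.0)
```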
Guides
Updating local chains with new genesis and L1 state
This guide describes how to update the local chains setup for a particular protocol version with new genesis, L1 state, and all required dependencies, including chain configs, wallets, and contracts YAML files.
There are two ways to update the local setup:
- using GitHub Actions (recommended way for most users)
- running the update scripts locally (on self-hosted setups, next protocol version development, or when experimenting with custom contracts)
Use automated update through GitHub Actions if:
- you’re iterating on the next protocol version and need to update the server setup with the new version of contracts
- you need to regenerate local chains setup due to the tooling changes, e.g. anvil updates, new scripts, etc.
Run scripts locally if:
- you are experimenting with custom contracts locally and want to update your local setup
- you are a self-hosted user wanting to update your server setup with a custom version of the contracts
Follow the appropriate section below depending on your use case.
Updating with GitHub Actions
For the general guidelines about the GitHub Actions workflow to perform the local chains update, please refer to the General GitHub Actions guidelines. Especially important are the sections about:
Then, follow the instructions in the Server Update GitHub workflow guide to perform the update of the local chains setup through GitHub Actions.
As a result of the workflow execution, the updated local chains setup will be available as:
- a git patch published as a workflow artifact that you can download and apply locally using the git apply command
- a new commit on your custom development branch
- a new pull request with the update
If you are planning to merge your local chains update into the main branch, please prefer the automated update through GitHub Actions, as it is more reliable, tested in CI, and less error-prone than the manual update.
Updating locally
To perform a local update of the local chains setup, you can run the update scripts locally on your machine.
Compatibility between protocol versions
Before performing the update, please check the protocol compatibility tables to ensure that you are using compatible versions of the protocol (contracts), server, and tooling.
Using incompatible versions of the protocol, server, or tooling may lead to unexpected errors during the update process. Carefully check the compatibility tables and make sure to use compatible versions of all components before proceeding with the update.
These tables are the source of truth for the compatibility of different versions of the protocol, server, and tooling. If you notice that the tables are outdated, please report it to the team, and/or contribute to updating the documentation with the correct information.
Prerequisites and environment
The scripts are located in the matter-labs/zksync-os-scripts repository.
Go through the Prerequisites guide to set up your environment.
There is a Quick install section that walks through setting up all required dependencies on Unix-like systems; it is recommended to use it as a reference.
Re-generate local state
It is recommended to go through the full Server Update guide first.
You can directly jump to the Local Use section of the guide if you already know what the script does.
As a result, the updated local chains setup will be available in the local-chains/ directory of your server repository.