Erigon Client · April 28, 2026

Erigon v3.4 Splashing Saga: Built for Production Stability

Infrastructure reliability and new capabilities for node operators, validators, and RPC providers

By Erigon Team · 8 min read

4× smaller chaindata, faster startup, a native MCP server, and new RPC methods together with correctness and reliability fixes for operators on Ethereum and Gnosis.


Every milestone release is defined by its core focus. Erigon v3.4 Splashing Saga is engineered specifically for stability, performance, and efficiency at the chain tip.

This release systematically eliminates infrastructure fragility, closing failure modes that only emerge under the pressure of scale and time. Specifically, v3.4 addresses critical stability vectors: resolving phantom disk growth on archive nodes, bounding silent blob storage expansion, preventing validator client crashes at startup, clearing gossip-layer stalls on Gnosis, and coordinating clean database shutdowns to prevent process hangs.

On the new capabilities side, v3.4 introduces a native MCP server for AI and developer tooling, five new RPC methods, 4× smaller chaindata, and precise resource controls for public endpoints.

The result is a release that stakers, validators, and RPC providers on both Ethereum and Gnosis run with greater confidence and build on with greater reach.

Performance, Storage, and Stability

The most visible improvement in v3.4 for Ethereum mainnet operators is the size of chaindata, which is now up to 4× smaller, dropping from roughly 80 GB to around 20 GB. This directly improves chain-tip performance, reduces storage costs, and makes it faster to spin up and maintain nodes. The reduction is available via a fresh sync or by running a single rebase command on an existing node (the exact command is in the Operator Notes below). Storage efficiency improves further with a rework of how internal data files are merged, prioritising Domain over History and cutting the disk space required for background maintenance in half. The impact of optional heavy flags such as --persist.receipts and --prune.include-commitment-history on chain-tip performance is also reduced.
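
As a quick sanity check, the reduction can be verified directly on disk before and after a rebase or fresh sync; the datadir path below is illustrative:

    du -sh /path/to/erigon/datadir/chaindata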

Startup is faster. Indexing and pruning no longer block the node from becoming operational on restart, and Caplin no longer loses its historical download progress when restarting at the chain tip, reducing time-to-operational for operators running frequent maintenance cycles.

v3.4 Splashing Saga also recovers disk space that operators didn't know they were losing. A zero-copy mmap optimisation introduced earlier in the 3.x cycle inadvertently skipped the unmap step in the temp-file cleanup path. On long-running stage_exec runs, deleted-but-still-mapped sortable buffers accumulated up to 2.8 TB of phantom disk space, a growing gap between reported and actual usage that could trigger out-of-space errors with no obvious cause. Temp files are now correctly unmapped before deletion, closing the gap immediately.

Shutdown is more reliable. A read-transaction leak in the updateForkChoice path, where goroutines held open DB read transactions past database close, caused the process to hang on exit. Separately, the history download stage ran against context.Background() rather than the stage's cancellation context, making it unresponsive to Ctrl+C during long downloads. Both are resolved: shutdown is now coordinated and clean, with the process waiting up to 5 seconds each for WaitIdle and warmup goroutines to finish.

Further reliability improvements: a Pectra requests-hash validation bug that generated an invalid requests root hash on mainnet re-sync is corrected; the transaction pool correctly evicts stale queued transactions that exceed the max nonce gap; and a per-word heap allocation pattern in the debug trace JSON logger is resolved with zero-alloc memory word encoding, preventing out-of-memory conditions on large block traces.

RPC Layer: Correctness and New Methods

For RPC providers, v3.4 delivers correctness across the board alongside a set of new methods.

debug_traceCallMany now correctly applies global block and state overrides across multi-block simulations. Previously, overrides were silently ignored, producing incorrect trace results. eth_getBlockReceipts is protected against concurrency-driven latency spikes and out-of-memory conditions under high request rates. eth_feeHistory gains a performance optimisation and pending block support. An interaction between state and block overrides in debug_traceCall is fixed, and eth_blobBaseFee now returns the correct value.
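
For illustration, pending block support means a standard eth_feeHistory call can now pass "pending" as the newest-block parameter; the endpoint and values below are illustrative:

    curl -s -X POST -H 'Content-Type: application/json' \
      --data '{"jsonrpc":"2.0","id":1,"method":"eth_feeHistory","params":["0x5","pending",[25,75]]}' \
      http://localhost:8545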

New RPC methods in v3.4:

  • trace_rawTransaction: trace a signed transaction without broadcasting it to the network (see the example after this list)
  • eth_getStorageValues: retrieve multiple storage slots across multiple accounts in a single call
  • admin_addTrustedPeer / admin_removeTrustedPeer: add or remove trusted peers at runtime without restarting the node
  • engine_getBlobsV3: blob data retrieval for EIP-4844 tooling
  • Flat trace output: support for a flat trace output format across the trace_call family of methods
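
To illustrate the first method above: the Parity-style trace API defines trace_rawTransaction as taking a signed raw transaction plus a list of trace types. Assuming Erigon's implementation follows that shape, a call looks like the following; the endpoint and payload are illustrative:

    curl -s -X POST -H 'Content-Type: application/json' \
      --data '{"jsonrpc":"2.0","id":1,"method":"trace_rawTransaction","params":["<signed_raw_tx_hex>",["trace"]]}' \
      http://localhost:8545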

Operators running public endpoints gain three new parameters to control resource consumption: a block range cap (--rpc.blockrange.limit), a log results cap (--rpc.logs.maxresults), and a concurrency limit (--rpc.max.concurrency). These provide precise load control without a separate reverse proxy. Note that the concurrency limit uses a dynamic default if set to zero, and admission control does not cover WebSocket requests.
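
A minimal launch sketch for a public endpoint using the new limits; the flag values here are illustrative, not recommended defaults:

    erigon --datadir=/data/erigon \
      --http --http.api=eth,erigon,trace \
      --rpc.blockrange.limit=10000 \
      --rpc.logs.maxresults=20000 \
      --rpc.max.concurrency=64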

A More Resilient Consensus Layer

Caplin, Erigon's embedded consensus client, delivers its most substantial reliability update in v3.4 — across blob retention, validator availability, peer management, and specification compliance.

Blob storage now stays bounded. Three compounding bugs in the blob retention pipeline caused Caplin storage to grow without limit on long-running nodes, reaching 1.6 TB on observed Hoodi deployments before intervention. The root causes: the CleanupAndPruning stage was never reached, because fork choice transitioned directly to SleepForSlot; even when it did run, the prune range was computed from the wrong starting slot; and a hardcoded constant of 1,000,000 slots, meant for beacon block index pruning rather than blob sidecar pruning (which uses 128,600 slots), kept the effective window so wide that pruning was inactive regardless. All three are fixed: pruning runs correctly on every fork choice cycle, the range is accurate, and the retention window for PeerDAS data column sidecars is now configurable, giving operators precise control over long-term blob disk usage.

Validators connect without interruption. A nil-pointer panic in the validator attestation data handler, triggered when a validator client polled before Caplin had synced to chain head, is resolved. The fix adds access locking and nil-guarding on the head state, ensuring the handler responds correctly at any point in the sync lifecycle.

Peer quality improves automatically. Caplin was incorrectly penalising peers for a class of gossip messages that the Ethereum specification says should simply be ignored. Over time, this silently degraded the peer set by banning valid peers. v3.4 Splashing Saga aligns Caplin's behaviour fully with the specification: false bans stop, and peer set quality recovers automatically without operator action. A separate crash triggered by malformed data received from peers is also resolved.

MCP Server: Erigon as an AI-Native Node

v3.4 ships a Model Context Protocol (MCP) server, making Erigon the first Ethereum execution client with native AI tool integration. The MCP server exposes node data such as blocks, transactions, state, logs and traces through a structured interface that AI assistants, developer tools, and automation pipelines can query directly.

The MCP server is enabled by default and exposes a read-only interface that lets users query the node in natural language. This opens up a range of use cases for node runners: interactive blockchain analysis, troubleshooting, node debugging, and more.

Gnosis Chain

Erigon v3.4 ships two Gnosis-specific improvements. The Chiado bootstrap nodes were updated to match Lighthouse’s built-in Chiado configuration, adding support for the Fulu consensus layer component ahead of the Chiado testnet Fusaka activation. And a liveness issue that caused Gnosis nodes to stall under load is resolved: redundant ENR renewal calls on every expiry were flooding the gossip layer with repeated log entries, creating mutex contention in the logging stack that degraded throughput under pressure. The ENR lifecycle now fires correctly, the offending log is demoted to debug level, and logger mutex contention under high request rates is addressed.

Operator Notes

A summary of changes that require action or awareness before upgrading:

Go 1.25 required. Erigon v3.4 sets Go 1.25 as the minimum build/toolchain version for source builds. Operators building from source or managing their own Go installation should upgrade before deploying v3.4.

Peer discovery change on mainnet. discv5 is now the default peer discovery protocol on Ethereum mainnet; discv4 is disabled by default. For most operators this is seamless. Those with firewall rules or static peer configurations tied to discv4 should review their network setup before upgrading.

New RPC rate-limiting defaults. Block range, log result, and concurrency limits are now active by default with conservative values. Operators running public endpoints that currently accept large block ranges or high log result counts should review these defaults before upgrading to avoid breaking existing integrations.

Historical eth_getProof is now stable. Introduced as experimental in v3.3, it graduates to production-ready in v3.4. Requirements are unchanged: a node synced with --prune.include-commitment-history and a minimum of 32 GB RAM.
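
For reference, eth_getProof follows the standard EIP-1186 signature, with a historical block number in the final parameter; the address, storage slot, and block below are illustrative:

    curl -s -X POST -H 'Content-Type: application/json' \
      --data '{"jsonrpc":"2.0","id":1,"method":"eth_getProof","params":["0xd46e8dd67c5d32be8058bb8eb970870f07244567",["0x0000000000000000000000000000000000000000000000000000000000000001"],"0x112a880"]}' \
      http://localhost:8545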

4× smaller chaindata for Ethereum mainnet, available on demand. The storage reduction applies to Ethereum mainnet and is not automatic: it requires either a fresh sync or running the rebase command

    ./build/bin/erigon seg step-rebase --datadir=<your_path> --new-step-size=390625

Operators on other chains can apply the same command manually for a proportional reduction. Existing nodes continue to work as normal without any action required.

No data migration required. v3.4 is a drop-in upgrade from v3.3. No resync is needed unless you intend to take advantage of the new storage features or want to apply the latest data correctness fixes for historical eth_getProof; in that case, re-sync with --prune.include-commitment-history.

See the full release notes here.

The Bigger Picture

Erigon v3.4 ships with a clear orientation: production confidence first. The storage improvements are real and immediate: smaller chaindata, no phantom disk growth, bounded blob retention, half the merge overhead. The reliability improvements are equally concrete: clean shutdown, crash-free validator startup, correct peer management, and accurate RPC results across the board.

Layered on top: a node open to a new class of AI tools via the MCP server, five new RPC methods, and precise resource controls for public infrastructure.

For stakers and validators, that means a consensus layer that holds up under edge conditions and connects reliably from first startup. For RPC providers, it means query results you can trust and controls that protect your infrastructure under load. For all operators, it means a foundation that takes up less space, restarts faster, and stays stable exactly when it matters most.