
Commit 620ba2d

Changelog 26.3.0 (#540)
1 parent 70bc179 commit 620ba2d

3 files changed

Lines changed: 98 additions & 17 deletions


website/docs/About/Changelog.md

Lines changed: 33 additions & 0 deletions
@@ -16,6 +16,39 @@ instead.
 
 > On 6/21/2025, Eth Docker's repository name changed. Everything should work as it did.
 > If you do wish to manually update your local reference, run `git remote set-url origin https://github.com/ethstaker/eth-docker.git`
+
+## v26.3.0 2026-03-02
+
+*This is an optional release*
+
+**Breaking changes**
+
+- Requires Geth `v1.17.0` or later when using the built-in Grafana via `grafana.yml` or `grafana-rootless.yml`
+
+**Changes**
+
+- `./ethd prune-history` now offers a menu of expiry options, depending on which ones the chosen client supports. This ranges from "pre-merge" to "pre-cancun", "rolling" and even "aggressive".
+- Support `MAX_BLOBS` with Geth and Nimbus EL. `./ethd config` will attempt to compute `MAX_BLOBS` from upload bandwidth. This can allow bandwidth-constrained nodes to still build blocks locally.
+- Initial support for Nimbus Verified Proxy. This is useful when expiring more than pre-merge history while also needing receipts, e.g. when running RocketPool, Nodeset or SSV. Note this client is still in alpha.
+- Geth sends traces to Tempo by default, when using `grafana.yml` or `grafana-rootless.yml`
+- Support EraE file import with Geth. Note there aren't many EraE files yet; this functionality requires more testing.
+- Remove Era/Era1 import from Nimbus EL. Only EraE is supported going forward.
+- Support Reth `v1.11.0` and later
+- Support optional pre- and post-update hooks when running `./ethd update`. The optional files `pre-ethd-update.sh` and/or `post-ethd-update.sh` are executed just before and after `./ethd update`, and should be bash scripts. Thanks @erl-100!
+- Remove deprecated `--in-process-validators=false` from Nimbus
+- All Dockerfiles explicitly add `adduser` and `bash`, even if these are currently already shipped with the client image
+- From this release, Eth Docker uses a calendar versioning scheme
+
+**Bug fixes**
+
+- Fixed a bug that kept Lighthouse from starting when using the new graffiti append option. Thanks @victorelec14!
+- Fixed a bug in the Lighthouse jwtsecret ownership check. Thanks @Olexandr88!
+- Ensure the `ping` utility is installed before testing IPv6 connectivity
+- Fixed a bug that broke Geth telemetry. Thanks @marcovc!
+- Remove a duplicate `gosu` install from the Nimbus-EL Dockerfile
+
 ## v2.19.1.0 2026-02-10
 
 *This is an optional release. It is required when using Lodestar `v1.39.0` or later*
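The hook scripts themselves are not part of this diff; beyond the file names and the before/after timing, their contents are up to the operator. A minimal sketch of a hypothetical `post-ethd-update.sh`, assuming it lives where `./ethd` looks for it:

```bash
#!/usr/bin/env bash
# post-ethd-update.sh - run by `./ethd update` right after the update completes.
# Illustrative contents only; put whatever your setup needs here.
set -euo pipefail

# Example: keep a simple audit trail of update times
echo "eth-docker updated at $(date -u '+%Y-%m-%dT%H:%M:%SZ')" >> "${HOME}/ethd-update.log"
```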
Lines changed: 48 additions & 0 deletions
@@ -0,0 +1,48 @@
+---
+title: "RPC Proxy"
+sidebar_position: 10
+sidebar_label: RPC Proxy
+---
+
+## Verified RPC Proxy
+
+When expiring more than pre-merge history and running protocols such as RocketPool, Nodeset or SSV, the receipts needed for these protocols are missing from the local execution layer, and the `eth_getLogs` calls for these receipts will fail. This issue will become even more pronounced once validators can run without an execution layer at all and just consume proofs instead.
+
+The obvious solution is to use a third-party RPC provider for these queries. But this requires trusting that provider.
+
+A "verified RPC proxy" solves this by checking responses against a trusted block root.
+
+### Setup
+
+- Make an account with Alchemy or any other provider that supports `eth_getProof`. Infura does not, as of March 2026. A free account should be fine for occasional receipts queries.
+- Add `nimbus-vp.yml` to `COMPOSE_FILE` in `.env` via `nano .env`; see the sketch after this list
+- While in `.env`, set `RPC_URL` to your RPC provider's `wss://` endpoint, including API key / auth
+- Run `./ethd update` and `./ethd up`
+- RocketPool, **only** if using hybrid mode with the execution layer client in Eth Docker: `rocketpool service config` and set "Execution Client" `HTTP URL` to `http://rpc-proxy:48545` and `Websocket URL` to `ws://rpc-proxy:48546`
+- Nodeset, **only** if using hybrid mode with the execution layer client in Eth Docker: `hyperdrive service config` and set "Execution Client" `HTTP URL` to `http://rpc-proxy:48545` and `Websocket URL` to `ws://rpc-proxy:48546`
+- SSV Node: `nano ssv-config/config.yaml` and change `ETH1Addr` to `ws://rpc-proxy:48546`, then `./ethd restart ssv-node`
+- SSV Anchor: `nano .env` and change `EL_RPC_NODE` to `http://rpc-proxy:48545` and `EL_WS_NODE` to `ws://rpc-proxy:48546`, then `./ethd restart anchor`
+- Lido SimpleDVT with Obol: `nano .env` and change `OBOL_EL_NODE` to `http://rpc-proxy:48545`, then `./ethd restart validator-ejector`
+
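To make the `.env` steps concrete, a minimal sketch; the `COMPOSE_FILE` client entries and the provider URL are illustrative, so keep whatever your stack already uses and append `nimbus-vp.yml`:

```bash
# .env excerpt - illustrative values; keep your existing COMPOSE_FILE entries
COMPOSE_FILE=nethermind.yml:nimbus.yml:nimbus-vp.yml
# wss:// endpoint of a provider that supports eth_getProof, including auth
RPC_URL=wss://eth-mainnet.example-provider.com/v2/YOUR_API_KEY
```

Once the proxy is up, a standard JSON-RPC query from a container on the same Docker network is a quick sanity check, e.g. `curl -s -X POST -H 'Content-Type: application/json' --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' http://rpc-proxy:48545`.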
+### Adjusting defaults
+
+On startup, Nimbus Verified Proxy gets a trusted root from `CL_NODE`, connects to `RPC_URL`, and proxies all RPC requests while verifying them against the trusted root. This works for `http://` and `ws://` queries.
+
+You can change the ports the proxy listens on with `PROXY_RPC_PORT` and `PROXY_WS_PORT`.
+
+These ports can be exposed to the host via `proxy-shared.yml`, or encrypted to `https://` and `wss://` via `proxy-traefik.yml`, in which case you also want `DOMAIN`, `PROXY_RPC_HOST`, `PROXY_WS_HOST`, and either `traefik-cf.yml` or `traefik-aws.yml` with their attendant parameters.
+
+If running multiple Eth Docker stacks on one host, each with an RPC proxy and using the same Docker bridge network via `ext-network.yml`, you can use `RPC_PROXY_ALIAS` and `WS_PROXY_ALIAS` to give the proxies distinctive names. In that case, use the alias names when configuring other protocols to connect to the proxy; do **not** use `rpc-proxy`, as it would round-robin between the multiple instances of the proxy on the bridge network.
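A sketch of distinct aliases across two stacks; the alias values are made up, and the port is the default `48546` used above:

```bash
# Stack A .env - illustrative alias values
RPC_PROXY_ALIAS=rpc-proxy-mainnet
WS_PROXY_ALIAS=ws-proxy-mainnet

# Stack B .env
RPC_PROXY_ALIAS=rpc-proxy-hoodi
WS_PROXY_ALIAS=ws-proxy-hoodi

# A protocol on the shared bridge network then points at the alias,
# e.g. an SSV node on stack A: ETH1Addr ws://ws-proxy-mainnet:48546
```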

website/docs/Usage/ResourceUsage.md

Lines changed: 17 additions & 17 deletions
@@ -36,11 +36,11 @@ DB Size is shown with values for different types of nodes: Full, and different l
 |--------|---------|------|---------|---------------|----------------|------------|---------------|-----|-------|
 | Geth | 1.15.11 | May 2025 | ~1.2 TiB | ~830 GiB | n/a | n/a | n/a | ~ 8 GiB | |
 | Nethermind | 1.36.0 | February 2026 | ~1.1 TiB | ~740 GiB | ~600 GiB | ~240 GiB | n/a | ~ 7 GiB | With HalfPath, can prune online automatically at ~350 GiB free |
-| Besu | v25.8.0 | August 2025 | ~1.35 TiB | ~850 GiB | n/a | tbd | ~290 GiB | ~ 10 GiB | |
-| Reth | 1.5.0 | July 2025 | ~1.6 TiB | ~950 GiB | tbd | tbd | tbd | ~ 9 GiB | |
-| Erigon | 3.0.3 | May 2025 | ~1.0 TiB | ~650 GiB | n/a | tbd | tbd | See comment | Erigon will have the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with 32 GiB or less |
-| Nimbus | 0.1.0-alpha | May 2025 | tbd | 755 GiB | n/a | n/a | n/a | With Era1 import |
-| Ethrex | 4.0.0 | October 2025 | n/a | 450 GiB | n/a | n/a | n/a | |
+| Besu | v26.1.0 | February 2026 | ~1.35 TiB | ~850 GiB | n/a | ~560 GiB | ~290 GiB | ~ 10 GiB | |
+| Reth | 1.11.1 | February 2026 | tbd | tbd | tbd | tbd | tbd | ~ 9 GiB | Storage v2 |
+| Erigon | 3.3.8 | February 2026 | ~1.0 TiB | ~650 GiB | n/a | ~640 GiB | ~355 GiB | See comment | Erigon will have the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with 32 GiB or less |
+| Nimbus | 0.1.0-alpha | May 2025 | tbd | 755 GiB | n/a | n/a | n/a | | With Era1 import |
+| Ethrex | 4.0.0 | October 2025 | n/a | 450 GiB | n/a | n/a | n/a | | |
 
 Notes on disk usage
 - Reth, Besu, Geth, Erigon, Ethrex and Nimbus continuously prune
@@ -65,11 +65,10 @@ Cache size default in all tests.
 | Client | Version | Date | Node Type | Test System | Time Taken | Notes |
 |--------|---------|------|-----------|-------------|------------|--------|
 | Geth | 1.15.10 | April 2025 | Full | OVH Baremetal NVMe | ~ 5 hours | |
-| Nethermind | 1.24.0 | January 2024 | Full | OVH Baremetal NVMe | ~ 5 hours | Ready to attest after ~ 1 hour |
 | Nethermind | 1.36.0 | February 2026 | post-Cancun | Netcup RS G11 | ~ 2 hours | Ready to attest after ~ 1 hour |
-| Besu | v25.8.0 | August 2025 | post-merge | OVH Baremetal NVMe | ~ 13 hours | |
-| Erigon | 3.0.3 | May 2025 | post-merge | OVH Baremetal NVMe | ~ 2 hours | |
-| Reth | beta.1 | March 2024 | Full | OVH Baremetal NVMe | ~ 2 days 16 hours | |
+| Besu | v26.1.0 | February 2026 | rolling | Netcup RS G11 | ~ 13 hours | |
+| Erigon | 3.3.8 | February 2026 | rolling | Netcup RS G11 | ~ 12 hours | |
+| Reth | 1.11.1 | February 2026 | Full | Legacy miniPC | ~ 5 days | |
 | Nimbus | 0.1.0-alpha | May 2025 | Full | OVH Baremetal NVMe | ~ 5 1/2 days | With Era1 import |
 | Ethrex | 4.0.0 | October 2025 | post-merge | OVH Baremetal NVMe | ~ 2 hours | |
 
@@ -88,24 +87,25 @@ Specifically `fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --
 Servers have been configured with [noatime](https://www.howtoforge.com/reducing-disk-io-by-mounting-partitions-with-noatime) and [no swap](https://www.geeksforgeeks.org/how-to-permanently-disable-swap-in-linux/) to improve latency.
 
 
-| Name | RAM | SSD Size | CPU | r/w IOPS | r/w latency | Notes |
-|----------------------|--------|----------|------------|------|-------|--------|
-| [OVH](https://ovhcloud.com/) Baremetal NVMe | 32 GiB | 1.9 TB | Intel Hexa | 177k/59k | 150us max | This is in line with any good NVMe drive |
-| [Netcup](https://netcup.eu) RS G11 | 96 GiB | 3 TB | 20 vCPU on an AMD 84-core | | 400us avg / 1.1ms max | This is an example of a system with storage that is fast enough to attest, but too slow to get best rewards |
+| Name | RAM | SSD Size | CPU | r/w latency | Notes |
+|----------------------|--------|----------|------------|-------------|-------|
+| [OVH](https://ovhcloud.com/) Baremetal NVMe | 32 GiB | 1.9 TB | Intel Hexa | 150us max | Datacenter-class NVMe drive |
+| [Netcup](https://netcup.eu) RS G11 | 96 GiB | 3 TB | 20 vCPU on an AMD 84-core | 400us avg / 1.1ms max | Storage is fast enough to attest, but too slow to get best rewards |
+| Legacy miniPC | 32 GiB | 2 TB | Intel Quad 6th gen | 230 us avg / 320 us max | Home staker setup with PCIe 3 NVMe and older CPU |
 
 ## Getting better latency
 
-Ethereum execution layer clients need decently low latency. IOPS can be used as a proxy for that. HDD will not be sufficient.
+Ethereum execution layer clients need decently low latency. Measure latency with `ioping` when the system is under load. NVMe SSD is highly recommended; HDD will not be sufficient.
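Since the text now points at `ioping` rather than raw IOPS, a quick sketch of a measurement; the path is illustrative, so point it at the volume that actually holds your client databases:

```bash
# Take 20 latency samples from the volume holding client data,
# while the node is running under normal load.
ioping -c 20 /var/lib/docker
```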

-For cloud providers, here are some results for syncing Geth.
-- AWS, gp2 or gp3 with provisioned IOPS have both been tested successfully.
+For cloud providers, here are some results for syncing Geth. In a nutshell, use baremetal instead.
+- AWS, gp2 or gp3 with provisioned IOPS delivered sub-par performance during sync committees.
 - Linode block storage, make sure to get NVMe-backed storage.
 - Netcup RS G11 works, but rewards are not optimal.
 - There are reports that Digital Ocean block storage is too slow, as of late 2021.
 - Strato V-Server is too slow as of late 2021.
 
 Dedicated servers with NVMe SSD will always have sufficiently low latency. Do avoid hardware RAID though, see below.
-OVH Advance line is a well-liked dedicated option; Linode or Strato or any other provider will work as well.
+OVH Advance line is a well-liked dedicated option; latitude.sh, Linode, Vultr, Strato, or any other baremetal provider will work as well.
 
 For your own hardware, we've seen three causes of high latency:
 - DRAMless or QLC SSD. Choose a ["mainstream" SSD](https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038)
