Storage Provider Tutorial
This guide provides practical steps for storage providers to start an es-node instance and connect to the existing EthStorage testnet.
Before Starting
Minimum Hardware Requirements
CPU: a minimum of 4 cores and 8 threads
RAM: 8 GB
Disk:
We recommend using an NVMe disk to support the full speed of sampling
At least 550 GB of available storage space for the runtime and the sync of one data shard
Internet: at least 8 MB/s download speed
System Environment
MacOS Version 14+, Ubuntu 20.04+, or Windows with WSL (Windows Subsystem for Linux) version 2
(Optional) Docker 24.0.5+ (simplifies the process)
(Optional) Go 1.21+ and Node.js 16+ (can be installed by following the steps below)
ℹ️ Note: The steps assume the use of the root user for all command line operations. If using a non-root user, you may need to prepend some commands with "sudo".
Preparing miner and signer accounts
We recommend preparing two specific Ethereum accounts for this test.
The miner account will be set to receive rewards once the storage provider successfully submits a storage proof to the EthStorage contract. Each storage provider must use a unique miner account.
The signer account will act as a transaction signer and should contain a balance of test ETH. A signer account can be used by multiple storage providers.
ℹ️ Note: As Sepolia is used as L1 for the testnet, the test ETH can be requested from https://sepoliafaucet.com/.
⚠️ Warning: For safety reasons, we strongly suggest creating new wallets for the accounts to avoid the loss of any personal assets.
Remember to use the signer's private key (the account with the ETH balance) to replace <private_key> in the following steps, and use the miner account's address to replace <miner>.
Preparing Ethereum API endpoints
While the es-node is running, it makes frequent Ethereum API calls to both the execution layer and the consensus layer (the Beacon Chain), so you need to prepare an endpoint for each type of call. We recommend using BlockPI for the execution layer endpoint and QuickNode for the beacon endpoint.
For details on the application process for endpoints, please refer to this section.
In the following tutorial, you will need to replace <el_rpc> with your execution layer endpoint and <cl_rpc> with your beacon endpoint.
About run.sh and init.sh
The run.sh script serves as the entry point for launching the es-node with predefined parameters. By default, mining is enabled through the --miner.enabled flag in run.sh, which means you take on the role of a storage provider when you start an es-node with the default settings.
However, before the es-node can be launched successfully, you must execute init.sh first. The primary function of this script is to verify the system environment, download and install dependencies, and initialize the data files in preparation for mining.
For specific usage and examples of the two scripts, refer to the steps outlined in Options for running es-node.
ℹ️ Note: Some of the flags/parameters used in run.sh may change over time. Refer to configuration for a full list.
Mining multiple shards
Only the first shard (shard 0) is mined with the default options in the scripts, but you can initialize and run your es-node to mine multiple selected shards by passing additional options.
The --shard_index flag can be used multiple times with init.sh to generate data files for multiple shards on the es-node.
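For example, the following is a minimal sketch; any other parameters your setup requires (such as the miner address) are passed exactly as in the normal init step.

```sh
# Initialize data files for shard 0 and shard 1; repeat --shard_index once per shard.
./init.sh --shard_index 0 --shard_index 1
```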
Please take note of the following:
The shard files will be generated in the ./es-data directory with the naming convention shard-$(shard_index).dat under the default settings in init.sh.
A shard will be skipped if its corresponding data file already exists.
shard-0.dat will be created if no --shard_index is specified.
After initialization in this way, the run.sh script will attempt to operate on the data files located at ./es-data/shard-*.dat. If you have relocated these data files or added files in another location, you can specify them by repeating the --storage.files option after ./run.sh.
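For example, as an illustrative sketch (the paths below are made up, and any other required parameters are passed as usual):

```sh
# Point run.sh at data files that have been moved to /data.
./run.sh --storage.files /data/shard-0.dat --storage.files /data/shard-1.dat
```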
About the zk prover implementation option
The --miner.zk-prover-impl flag specifies the type of zkSNARK implementation. Its default value is 1, which means zk proofs are generated with snarkjs; the value 2 means go-rapidsnark is used instead. Since --miner.zk-prover-impl interacts closely with the environment, it is crucial to use the same value when running both init.sh and run.sh.
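For example, to use go-rapidsnark consistently across both steps (a sketch; all other parameters are passed exactly as in the normal init and run steps):

```sh
# Pass the same zk prover implementation to both scripts.
./init.sh --miner.zk-prover-impl 2
./run.sh --miner.zk-prover-impl 2
```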
ℹ️ Note: If you have to run an es-node pre-built with --miner.zk-prover-impl 2 on Ubuntu 20.04, you will need to install extra packages.
Options for running es-node
You can run es-node from a pre-built executable, a pre-built Docker image, or from the source code.
If you choose the pre-built es-node executable, you may need to install Node.js and snarkjs if you use the default zk prover implementation.
If you have Docker version 24.0.5 or above installed, the quickest way to get started is by using a pre-built Docker image.
If you prefer to build from the source code, you will also need to install Go in addition to the other dependencies.
From pre-built executables
Before running es-node from the pre-built executables, ensure that you have installed Node.js and snarkjs, unless the --miner.zk-prover-impl flag is set to 2.
ℹ️ Note: Ensure that you run the executables on WSL 2 if you are using Windows, and both Node.js and snarkjs are installed on WSL instead of Windows.
Download the pre-built package suitable for your platform: Linux x86-64 (or WSL), macOS x86-64, or macOS ARM64.
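The exact download links are published with each release; as a hedged sketch, with <package_url> standing in for the link that matches your platform, downloading and unpacking looks like this:

```sh
# Replace <package_url> with the release asset URL for your platform.
curl -L <package_url> | tar -xz
```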
In the folder es-node.v0.1.16, init es-node by running:
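The sketch below is illustrative only: the environment variable names are assumptions, so check init.sh (and the configuration reference) for the exact way your version takes the miner address and endpoints, and substitute the placeholders prepared earlier.

```sh
cd es-node.v0.1.16
# Variable names here are assumed; verify them against init.sh.
env ES_NODE_L1_ETH_RPC=<el_rpc> \
    ES_NODE_STORAGE_MINER=<miner> \
    ./init.sh
```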
Run es-node
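Again as a hedged sketch (the variable names are assumptions; verify against run.sh), start the node with the placeholders filled in:

```sh
# Variable names here are assumed; verify them against run.sh.
env ES_NODE_L1_ETH_RPC=<el_rpc> \
    ES_NODE_L1_BEACON_URL=<cl_rpc> \
    ES_NODE_STORAGE_MINER=<miner> \
    ES_NODE_SIGNER_PRIVATE_KEY=<private_key> \
    ./run.sh
```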
From a Docker image
First init an es-node environment with the following command (If you are using Windows, execute the command in WSL):
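The published command for the current release is authoritative; the sketch below only illustrates the shape of this step, and the image name, mount target, and entrypoint path are all assumptions.

```sh
# Sketch only: mount a host data directory and run init.sh inside the container.
# <es-node_image> stands for the published es-node image; pass <miner> and
# <el_rpc> the way the published command specifies.
docker run --rm \
  -v ./es-data:/es-node/es-data \
  --entrypoint /es-node/init.sh \
  <es-node_image>
```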
Then start an es-node container:
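Similarly hedged (the image name and paths are assumptions), the run step mounts the same data directory and starts run.sh inside a long-running container:

```sh
# Sketch only: "es" is simply the container name chosen here. Pass <miner>,
# <private_key>, <el_rpc>, and <cl_rpc> the way the published command specifies.
docker run -d \
  --name es \
  -v ./es-data:/es-node/es-data \
  --entrypoint /es-node/run.sh \
  <es-node_image>
```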
After launch, you can check docker logs using the following command:
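Assuming the container was named es as in the sketch above:

```sh
# Follow the container's log output.
docker logs -f es
```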
Mount data location using Docker volume option
Docker volumes (-v) are a mechanism for storing data outside containers. In the docker run command above, you have the flexibility to change the data file location on your host machine so that the disk space requirements are met. For example:
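Only the host side of the -v mapping (left of the colon) changes; the host path below is illustrative, and the container-side path and other options are carried over from the earlier sketch.

```sh
# Keep the data files on a large NVMe-backed host path instead of ./es-data.
docker run -d \
  --name es \
  -v /mnt/nvme/es-data:/es-node/es-data \
  --entrypoint /es-node/run.sh \
  <es-node_image>
```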
ℹ️ Note: Absolute host paths do not work well on Windows; for more details, please refer here.
From source code
You will need to install Go to build es-node from source code.
If you intend to build es-node on Ubuntu, be sure to verify some dependencies.
As with running a pre-built executable, if you plan to use the default zkSNARK implementation, ensure that you have installed Node.js and snarkjs.
Now download the source code and switch to the latest release branch:
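The repository lives on GitHub under ethstorage/es-node; the tag below is an assumption, so check the releases page for the latest one.

```sh
git clone https://github.com/ethstorage/es-node.git
cd es-node
git checkout v0.1.16
```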
Build es-node:
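A hedged sketch, assuming the repository ships a Makefile with a default build target:

```sh
# Build the es-node binary from the repository root.
make
```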
Init es-node
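Initialization mirrors the pre-built step, run from the repository root (variable names are assumptions; verify against init.sh):

```sh
env ES_NODE_L1_ETH_RPC=<el_rpc> \
    ES_NODE_STORAGE_MINER=<miner> \
    ./init.sh
```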
Start es-node
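Starting the node also mirrors the pre-built run step (again, variable names are assumptions; verify against run.sh):

```sh
env ES_NODE_L1_ETH_RPC=<el_rpc> \
    ES_NODE_L1_BEACON_URL=<cl_rpc> \
    ES_NODE_STORAGE_MINER=<miner> \
    ES_NODE_SIGNER_PRIVATE_KEY=<private_key> \
    ./run.sh
```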
With the source code, you can also build a Docker image yourself and run an es-node container:
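A rough sketch, assuming a Dockerfile at the repository root (the image tag is arbitrary); run the resulting image the same way as the pre-built image in the Docker section above.

```sh
# Build a local image from the source tree.
docker build -t es-node:local .
```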
If you want to run the Docker container in the background and keep all the logs:
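One possible approach (not necessarily the project's own command) is to run the container detached, as shown above, and stream its logs into a file on the host:

```sh
# Assumes the container is named "es"; the log file path is arbitrary.
docker logs -f es > es-node.log 2>&1 &
```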
Applying for Ethereum API endpoints
Applying for a free Sepolia execution layer endpoint from BlockPI
Go to the BlockPI website, click Get Started. After signing in, you will get your Free Package Gift. Click Generate API Key, remember to select Sepolia, and you will get your API endpoint.
Finally, access the details page of your API endpoint and activate the Archive Mode under the Advanced Features section.
ℹ️ Note: The free plan of BlockPI provides sufficient usage as execution layer RPC for es-node, but it cannot be used as a beacon endpoint.
Applying for a free Sepolia beacon endpoint from QuickNode
Go to the QuickNode website, click Get started for free. After signing in, you can create an endpoint; remember to select Ethereum and Sepolia to continue. In the Compliance & Safety category, select Endpoint Armor, and select the free plan. After completing the required information, you will receive the endpoint along with an API key.
ℹ️ Note: The free plan of QuickNode is adequate for use as a beacon endpoint (<cl_rpc>) for running es-node, but is NOT sufficient as an execution layer endpoint.
Install dependencies
ℹ️ Note: Not all steps in this section are required; they depend on your choice.
Install Go
Download a stable Go release, e.g., go1.21.4.
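For example, on Linux x86-64 (pick the archive matching your platform from go.dev/dl):

```sh
curl -OL https://go.dev/dl/go1.21.4.linux-amd64.tar.gz
```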
Extract and install
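Following the standard Go installation procedure (prepend sudo if you are not root):

```sh
# Remove any previous Go installation and unpack the new one into /usr/local.
rm -rf /usr/local/go && tar -C /usr/local -xzf go1.21.4.linux-amd64.tar.gz
```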
Update $PATH
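For example, for a bash login shell:

```sh
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.profile
source ~/.profile
```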
Install Node.js
Install Node Version Manager
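Using the official nvm install script (v0.39.7 is one published release; check the nvm repository for the current one):

```sh
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
```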
Close and reopen your terminal to start using nvm or run the following to use it now:
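This is the standard loader snippet from the nvm installer:

```sh
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # loads nvm into the current shell
```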
Choose a Node.js version above 16 (e.g. v20.*) and install
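For instance, to install the latest Node.js 20 release:

```sh
nvm install 20
```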
Activate the Node.js version
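To make it the active version in the current shell:

```sh
nvm use 20
```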
Install snarkjs
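snarkjs is distributed through npm and is typically installed globally:

```sh
npm install -g snarkjs
```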
Install RapidSNARK dependencies
Check whether the build-essential and libomp-dev packages are installed on your Ubuntu system:
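For example, using dpkg:

```sh
dpkg -l | grep build-essential
dpkg -l | grep libomp-dev
```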
Install the build-essential and libomp-dev packages if nothing is printed:
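For example (prepend sudo if you are not root):

```sh
apt update
apt install -y build-essential libomp-dev
```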
Install libc6_2.35
This installation is intended for scenarios where you encounter errors related to the GLIBC version while running the pre-built es-node on Ubuntu 20.04.
To prevent the error, add the following line to your /etc/apt/sources.list:
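The exact line may differ from the one in the original guide; one package source that ships libc6 2.35 is the Ubuntu 22.04 (jammy) main repository:

```
# One possible source for libc6 2.35 (Ubuntu 22.04 "jammy"); verify against the
# guide's original instructions.
deb http://archive.ubuntu.com/ubuntu jammy main
```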
Next, install libc6_2.35 by running the following commands:
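A minimal sketch (prepend sudo if needed); after refreshing the package lists, apt can pull libc6 2.35 from the newly added source:

```sh
apt update
apt install -y libc6
```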
Check the status after launching the es-node
It's important to monitor the node closely until it successfully submits its first storage proof. Typically, the process encompasses three main stages.
Data sync phase
The es-node will synchronize data from other peers in the network. You can check from the console the number of peers the node is connected to and, more importantly, the estimated syncing time.
During this phase, the CPUs are expected to be fully occupied with data processing. If not, please refer to the FAQ for performance tuning in this area.
A typical log entry in this phase appears as follows:
Sampling phase
Once data synchronization is complete, the es-node will enter the sampling phase, also known as mining.
A typical log entry of sampling during a slot looks like this:
When you see "Sampling done with all nonces", it indicates that your node has successfully completed all the sampling tasks within a slot. The "samplingTime" value informs you of the duration, in seconds, it took to complete the sampling process.
If the es-node doesn't have enough time to complete sampling within a slot, the log will display "Mining tasks timed out". For further actions, please refer to the FAQ.
Proof submission phase
Once the es-node calculates a valid storage proof, it will submit the proof to the EthStorage contract and receive the rewards.
A typical log entry of submitting proof looks like this:
Once you see this log, congratulations on completing the entire process as a storage provider. You can also check how many storage proofs you've submitted, your ETH profit, and your ranking on the dashboard.