Storage Provider FAQ
What can I do about "Mining tasks timed out"?
This is the most common issue storage providers encounter after finishing the data sync phase and entering the sampling phase.
Firstly, it's crucial to understand why resolving this issue matters. The node attempts to perform 1,048,576 disk samplings every 12 seconds. For each sampling, it calculates a number based on the sample data to determine whether it constitutes a valid proof (similar to Proof of Work, PoW). Thus, if the node cannot complete the 1,048,576 samplings within 12 seconds, you will see this warning log. As you can deduce, fewer samplings mean a lower chance of producing a valid storage proof.
The underlying cause of this issue is that the disk cannot provide the required read speed, which is why we recommend an NVMe SSD operating on PCIe 4.0. You should verify whether your disk meets this specification. Additionally, you can use fio to test the read speed.
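For example, a random-read test with fio might look like the following; the target path, block size, and runtime are only illustrative assumptions, so point --directory at the disk that holds (or will hold) your data file:

```bash
# Illustrative 4KiB random-read benchmark; adjust --directory to the disk
# that stores (or will store) shard-0.dat.
fio --name=es-read-test --directory=/path/to/es-data --rw=randread \
    --bs=4k --direct=1 --ioengine=libaio --iodepth=64 --numjobs=4 \
    --size=4G --runtime=30 --time_based --group_reporting
```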
If the results indicate that the read IOPS are higher than 160k, then it should suffice.
If your disk meets the requirements but you still receive this warning, consider using the --miner.threads-per-shard flag. It specifies the number of threads used to sample each shard, potentially increasing the number of samplings completed (see the sketch below). Check here for the detailed operation.
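For instance, assuming you launch the node via run.sh, appending the flag could look like this; the value 2 is purely illustrative, so choose a number that fits your CPU:

```bash
# Illustrative value only: use 2 sampling threads per shard.
./run.sh --miner.threads-per-shard 2
```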
What should I do when the log frequently shows "i/o deadline reached" during data syncing?
This problem is often caused by a poor internet connection between your node and some peers, a common issue given that peers are located worldwide. Thus, it's likely you'll experience connection problems with some of them. The key detail to determine is the estimated syncing time shown in the log.
If it's less than 24 hours, it should be acceptable. Additionally, it's possible that the data request size is too large compared to network performance, preventing all requested data from being returned in time.
You can change the initial request size for each peer to a smaller value using --p2p.request.size; the actual request size then adapts to the network conditions between each remote peer and the local node. The current value in run.sh is 4194304. You can try lowering it to 1048576. Check here for the detailed operation.
How can I tune syncing performance?
To enhance syncing performance or to limit CPU usage, you can raise or lower the values of the following flags:
The --p2p.sync.concurrency flag determines the number of threads that simultaneously retrieve shard data, with a default of 16.
The --p2p.fill-empty.concurrency flag specifies the number of threads used to concurrently fill encoded empty blobs, with a default of the number of CPUs minus 2.
Check here for the detailed operation.
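As a rough sketch, assuming you start the node via run.sh, the flags could be raised (or lowered, to save CPU) like this; the values are only examples:

```bash
# Example values only: 32 sync threads, 4 threads for filling empty blobs.
./run.sh --p2p.sync.concurrency 32 --p2p.fill-empty.concurrency 4
```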
Why is the CPU fully utilized when syncing data? Does this imply that running an es-node requires high-end hardware? Additionally, is there a way to configure it to use less CPU power?
During data synchronization, the CPUs are fully utilized as the es-node is engaged in intensive data processing tasks like encoding and writing.
However, this high-intensity processing occurs primarily when large volumes of data need to be synchronized, such as during the initial startup of the es-node. Once synchronization is complete and the es-node enters the mining phase, CPU usage decreases significantly. So, generally speaking, the es-node does not require high-end hardware, particularly for the CPU.
If you still need to conserve CPU power, you can tune down the values of the syncing-performance-related flags, namely --p2p.sync.concurrency and --p2p.fill-empty.concurrency. See here for detailed information.
How do I know whether I've got a mining reward?
You can observe the latest log that summarizes the mining status every minute, where succeeded indicates a successful mining submission.
The real-time logs that record detailed information such as the reward and profit look like the following:
You can also visit the EthStorage dashboard for real-time mining statistics.
Finally, pay attention to the balance changes of your miner address, which reflect the mining income.
Can I change the data file location?
Yes, you can.
By default, when executed, the run.sh script will generate a data file named shard-0.dat to store shard 0 in the data directory specified by --datadir, located in the same folder as the run.sh script.
If necessary, you can choose an alternative location for data storage by specifying the full path of the file as the value of the --storage.files flag in the run.sh script.
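For example, assuming you want the data file on a disk mounted at /mnt/nvme (a hypothetical path), the flag in run.sh could be set as follows:

```bash
# Hypothetical path: keep shard 0's data file on a dedicated NVMe disk.
--storage.files /mnt/nvme/es-data/shard-0.dat
```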
Please refer to configuration for more details.
What can I do about "The zkey file was not downloaded" error?
When you see the following message while running run.sh, you can manually download blob_poseidon.zkey / blob_poseidon2.zkey to the ./build/bin/snarkjs/ folder and run it again.
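For example, assuming the files are fetched with wget (the download URL below is a placeholder; use the link given in the error message or the official docs):

```bash
# <zkey-download-url> is a placeholder, not a real address.
mkdir -p ./build/bin/snarkjs
wget -P ./build/bin/snarkjs/ <zkey-download-url>/blob_poseidon.zkey
wget -P ./build/bin/snarkjs/ <zkey-download-url>/blob_poseidon2.zkey
```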
If the error occurs while using Docker, you can mount the files from the host into the container using the -v parameter of docker run; just adjust the paths accordingly.
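A sketch only, assuming the files live under /host/path on the host and the container expects them under /es-node/build/bin/snarkjs; adjust both sides of each mapping, the image version, and add your usual docker run options:

```bash
docker run \
  -v /host/path/blob_poseidon.zkey:/es-node/build/bin/snarkjs/blob_poseidon.zkey \
  -v /host/path/blob_poseidon2.zkey:/es-node/build/bin/snarkjs/blob_poseidon2.zkey \
  ghcr.io/ethstorage/es-node:<version>
```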
How can I change the default configurations?
Just append the flag and its value to the end of the command that executes run.sh.
Taking --p2p.request.size as an example, the following command sets its value to 1048576.
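A minimal sketch, assuming run.sh is in the current directory; keep any environment variables and other flags you normally pass:

```bash
./run.sh --p2p.request.size 1048576
```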
If you start es-node with Docker, please note that the flag and value should be added after the Docker image name.
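For instance (everything before the image name stands for whatever options you normally use):

```bash
docker run <your-usual-options> ghcr.io/ethstorage/es-node:<version> --p2p.request.size 1048576
```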
Why does the "missing trie node" error occur?
If you encounter the missing trie node error as described, it is likely that your es-node attempted to access L1 state older than 256 blocks without success. This may be due to your execution layer endpoint not being an archive node.
If you are using a BlockPI endpoint, please navigate to the dashboard, access the detailed page of your API endpoint, and enable Archive Mode within the Advanced Features section.
Why should I use WSL to run Docker instead of Windows cmd?
If you are using a Windows system, make sure to use WSL to run the docker command instead of running it directly from the command prompt (cmd). The reason is that in Windows cmd, the volume option (-v) with a relative path does not seem to take effect, which causes the data to be stored inside the Docker container. This can lead to data loss when the container needs to be upgraded. If an absolute path is provided instead, an "operation not supported" error occurs when creating the data file, because the program cannot allocate space for the file across operating systems.
When running es-node in Docker on Windows, I want to store data files on a disk other than C:. How can I achieve this?
First of all, make sure you are running the docker run command on WSL 2 (Ubuntu distro by default) instead of Windows cmd.
Secondly, when starting es-node with the Docker image, we use the volumes option (-v) to store data files outside the container and achieve persistent storage. However, the following attempt to mount an absolute path (e.g., D:\es-data) into the container fails with an "operation not supported" error.
The following approach relocates the Ubuntu distro to the target volume (e.g., D:) so that -v ./es-data:/es-node/es-data can work properly.
By default the WSL system files are mapped to a specific directory on the C drive. To relocate the system to another volume (e.g., D:), execute the following command in PowerShell:
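One possible sequence, assuming the default Ubuntu distro and a hypothetical target folder D:\wsl, is to export, unregister, and re-import the distro (the exact command in the original guide may differ):

```powershell
# Assumes the distro is named "Ubuntu" and D:\wsl already exists.
wsl --shutdown
wsl --export Ubuntu D:\wsl\ubuntu.tar
wsl --unregister Ubuntu
wsl --import Ubuntu D:\wsl\Ubuntu D:\wsl\ubuntu.tar
```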
and reboot the computer at the end.
Now you can open a WSL window (Ubuntu on Windows) and start the Docker container with the command here.
I am already running es-node. How can I update it to a newer version?
Firstly, you can review the changes between releases to decide whether to upgrade or not. Then, depending on how you are currently running the es-node instance, different steps may need to be taken.
From pre-built executables
"Ctrl C" to stop the es-node process.
Download the new version (e.g., es-node.v0.1.16) of the pre-built package suitable for your platform using the commands here.
Some files, including the data folder, need to be moved from the directory of the old build (e.g., es-node.v0.1.15) to the new one.
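For example, assuming the data folder inside the old build directory is named es-data and both builds sit side by side (adjust the names and paths to your layout):

```bash
# Move the data folder (and any other files you customized) into the new build.
mv ./es-node.v0.1.15/es-data ./es-node.v0.1.16/
```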
Launch es-node using the same command as the one previously used.
From a Docker image
As the data folder is located on the host where you execute the docker run command, you can safely stop and remove the old version of the container.
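For instance, assuming the container was started with --name es (substitute your actual container name):

```bash
docker stop es && docker rm es
```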
Then start a new container based on the new version of the es-node Docker image with the following command; just make sure the <version> in ghcr.io/ethstorage/es-node:<version> is correct.
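A minimal sketch that reuses the data volume mapping mentioned earlier; your real command should include the same environment variables, ports, and flags as before, with an updated version tag:

```bash
# Sketch only: add back the environment variables, port mappings, and flags
# from your previous docker run command; v0.1.16 is just an example tag.
docker run -d --name es \
  -v ./es-data:/es-node/es-data \
  ghcr.io/ethstorage/es-node:v0.1.16
```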
From source code
"Ctrl C" to stop the es-node process.
Switch to the correct branch of the source code (e.g., if you want to move to v0.1.16).
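A sketch, assuming v0.1.16 is the branch or tag you want (adjust the name accordingly):

```bash
git fetch --all
git checkout v0.1.16
```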
Build and launch es-node.