Gitter Bridge
@nethermind-bridge
[discord] <luk> btw this is not the intended use case for FP
[discord] <luk> As we had some issues with in-memory pruning, we currently disable it while full pruning runs; this adds a few GB for that run. This could be revisited now if it's still a problem. I would rather focus on state DB layout changes, though
[discord] <DanielC> Nobody is talking here about changing this behavior but about understanding the source of these additional GB. That was a concern of @santamansanta
[discord] <luk> yeah, don't start FP unless you've accumulated at least 100+ GB of garbage
[discord] <luk> it's more a once-every-few-months thing, not once per week
[discord] <kaparnos> how did it go? I am also having the issue
[discord] <Micah | serv.eth> I resynced with 1.13.6 using snap sync and it fixed things.
[discord] <kaparnos> thanks
[discord] <DanielC> Did you delete all DBs before resyncing? How long did Snap Sync take?
[discord] <Micah | serv.eth> I deleted the nethermind_data folder and syncing took some number of hours (I didn't pay close attention, less than 12 I think).
[discord] <DanielC> I would expect the state snap sync to take between 3 and 4 hours. Anyway, good to hear it works now.
Thanks for the update.
[discord] <kamilsadik> I've been experiencing the same for a while. Just deleted my database and started fresh with SnapSync. So far so good. Fingers crossed!
[discord] <kamilsadik> Good to hear. That's what I'm doing now. You weren't kidding @MarekM, SnapSync is freaking fast
[discord] <Yorick | cryptomanufaktur.io> Makes sense. Someone on Reddit suggested that hybrid mode is now the default: What’s the envisioned pruning method now? Is “run hybrid, kick off full prune” safe because this disables in-memory prune, or is the previous recommendation of running with memory prune, restarting with full, then restarting with memory prune still the way to go?
[discord] <kamilsadik> My health check showed

{"status":"Unhealthy","totalDuration":"00:00:00.0418311","entries":{"node-health":{"data":{},"description":"The node is now fully synced with a network. Peers: 150. The node stopped processing blocks.","duration":"00:00:00.0159409","status":"Unhealthy","tags":[]}}}

and my node has been stuck on a single block all night. My terminal killed the process at one point yesterday, and I think that might have corrupted the database. I'm running a Goerli Nethermind node and two Teku instances (Prater and Mainnet), which I think is too heavy a load for my 32GB RAM while my mainnet node is syncing.

I've cleared out my mainnet database, restarted my node, and am syncing my mainnet node from scratch while not running any other processes on my NUC. Here's my config:

sudo ./Nethermind.Runner --config mainnet --JsonRpc.Enabled true --JsonRpc.JwtSecretFile=/tmp/jwtsecret --JsonRpc.AdditionalRpcUrls=http://localhost:8551 --Sync.DownloadBodiesInFastSync true --Sync.DownloadReceiptsInFastSync true --Sync.SnapSync true --Sync.AncientBodiesBarrier 11052984 --Sync.AncientReceiptsBarrier 11052984 --HealthChecks.Enabled true --HealthChecks.UIEnabled true

Will fire up the other processes once the mainnet node is fully synced.
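That health-check payload is plain JSON, so it can be inspected with a few lines of stdlib Python. A sketch using the exact response quoted above (fetching the endpoint itself is out of scope here; the variable names are mine):

```python
import json

# The health-check response pasted above, verbatim.
payload = (
    '{"status":"Unhealthy","totalDuration":"00:00:00.0418311","entries":'
    '{"node-health":{"data":{},"description":"The node is now fully synced '
    'with a network. Peers: 150. The node stopped processing blocks.",'
    '"duration":"00:00:00.0159409","status":"Unhealthy","tags":[]}}}'
)

health = json.loads(payload)
print("overall:", health["status"])
for name, entry in health["entries"].items():
    # Each entry carries its own status plus a human-readable description.
    print(f"{name}: {entry['status']} - {entry['description']}")
```

Here the "node-health" entry's description ("The node stopped processing blocks.") is what points at the actual problem, not just the overall "Unhealthy" flag.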

[discord] <dinomuuk> Hi. My main drive is only 500 GB; my secondary drive is 2 TB. I just started installing Nethermind and can't figure out how to point the database directory at the secondary drive.
[discord] <dinomuuk> Does anyone have a guide I can refer to? The Nethermind docs only show the default directories
[discord] <kamilsadik> Does this flag not work? You can pass it at run-time, or in your config
--Init.BaseDbPath /PATH
[discord] <Bing² | serv.eth> Hi, make sure your secondary drive is fast (SSD, not HDD). Then you can just set the datadir configuration variable to the secondary drive: https://docs.nethermind.io/nethermind/ethereum-client/configuration/#datadir
[discord] <Bing² | serv.eth> I believe you can also set it via Init module BaseDbPath https://docs.nethermind.io/nethermind/ethereum-client/configuration/init
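For reference, the equivalent `.cfg` entry would look roughly like this - a sketch only: the `Init`/`BaseDbPath` naming follows the flag quoted above, and the path is a placeholder:

```json
{
  "Init": {
    "BaseDbPath": "/mnt/secondary/nethermind_db"
  }
}
```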
[discord] <dinomuuk> Yup, just installed a secondary SSD
[discord] <dinomuuk> Thank you guys, will try tomorrow
[discord] <kamilsadik> Good luck!
[discord] <Marcin> "I was getting frustrated because it showed only 0.45% progress and 450MB downloaded after several hours." - our snap sync has two stages. In the first one you should see logs like SNAP - progress of State Ranges (Phase 1): x%; it should take 3-4 hours. When it is finished, we start the healing phase, and here the logs might be misleading - it's this StateSync with the low percentage value. The healing phase will be refactored in 1-2 months, most likely after The Merge. And about RAM usage - it should drop once the download of Old Bodies and Old Receipts is finished.
[discord] <estebaneu> I decided to completely re-sync Nethermind before the merge, now I run into an issue where it never downloads the last "elements", saying "downloaded 153450xx/15345040" where xx is always 40 behind
[discord] <Sherie | serv.eth> Can you please share the full logs here for the devs to check?
[discord] <kamilsadik> Makes total sense. Thank you!
[discord] <Marcin> It's hard to say what is going on without more context - logs would be very helpful
[discord] <estebaneu> This is how it looks after almost finishing syncing. It syncs very fast until it reaches about 20-40 blocks before the latest one. My settings are fairly standard, although I run with 25 peers. Internet speed isn't great (4G, 30 Mbit down / 10 Mbit up), but it hasn't been an issue so far (1.5 years)
[discord] <undoubted08 | serv.eth> Hi, let's wait for @luk or @MarekM to check on this.
https://cdn.discordapp.com/attachments/629004402170134537/1008694995789811792/log
[discord] <Marcin> Peering doesn't look good. Would you be able to send the full logs? I would like to look for the root cause of this behaviour, so I need logs from the time before the headers download got stuck
[discord] <estebaneu> hm, my log files seem to be weird. They were last updated on Friday. I do find them in Nethermind/logs, right? Sorry for the noob question.
Gitter Bridge
@nethermind-bridge
[discord] <Marcin> Don't worry, there are no bad questions 🙂 If you are running it from our release package then right, main_folder/logs
[discord] <estebaneu> thank you! So, this is a log file from a few days ago, but the issue is the same (no idea why there are no newer logs in Nethermind/logs). This was when I still tried to prune the db, before I decided to sync from 0.
https://cdn.discordapp.com/attachments/629004402170134537/1008705245410504764/mainnet.logs.txt
[discord] <Marcin> How did you remove the db before syncing from 0? In the logs of the newest sync I see
2022-08-12 19:11:06.8373|INFO|11|Initializing 11 plugins
2022-08-12 19:11:06.8260|INFO|11|Block tree initialized, last processed is 0 (0xd4e567...cb8fa3), best queued is 15216000, best known is 15216000, lowest inserted header 15215425, body , lowest sync inserted block number

What is important here: lowest inserted header 15215425 is not null, so when the node started there was already something in the db. It is not necessarily the cause of this problem, but it might be.
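That null check can also be scripted; a hypothetical sketch (the regex and variable names are mine, only the log line is quoted from above):

```python
import re

# The "Block tree initialized" log line quoted above.
line = (
    "2022-08-12 19:11:06.8260|INFO|11|Block tree initialized, last processed is 0 "
    "(0xd4e567...cb8fa3), best queued is 15216000, best known is 15216000, "
    "lowest inserted header 15215425, body , lowest sync inserted block number"
)

m = re.search(r"lowest inserted header (\d+)", line)
lowest_header = int(m.group(1)) if m else None

# A non-null value means the db already contained sync data at startup.
db_had_data = lowest_header is not None
print("lowest inserted header:", lowest_header, "- db had data:", db_had_data)
```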

The other thing: downloading headers is extremely slow on your machine. After 3h we have an average of only 284 headers per second. For context, I locally get ~7000. But let's not give up!

I think we can try:

  1. Turn off the node, remove the whole folder nethermind_db (main_folder/nethermind_db)
  2. Update the fast sync pivot. In main_folder/configs you can find mainnet.cfg - let's replace the existing pivot values with newer ones:

"PivotNumber": 15335000,
"PivotHash": "0xde3bca488cce75bee9aabbfab6fc15aa56bc04e715f3197537a1d411af7b9708",
"PivotTotalDifficulty": "56285312641338818488948",

Why do I want to do this? Our headers download has 2 parts - downloading headers after the pivot, and downloading headers before the pivot (Old Headers). To start snap sync we only need to finish downloading the newest headers (after the pivot). By updating the pivot we will need to download far fewer headers, which can be crucial with your internet speed. In 1.13.6 our default pivot is 15216000, so updating it to 15335000 means downloading 119000 fewer headers before snap sync can start! And once snap sync starts, Old Headers will be downloaded in the background.
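Applied to mainnet.cfg, the edit would look roughly like this (a sketch assuming the pivot fields sit in the Sync section, as in the shipped configs; the values are the ones quoted above):

```json
{
  "Sync": {
    "PivotNumber": 15335000,
    "PivotHash": "0xde3bca488cce75bee9aabbfab6fc15aa56bc04e715f3197537a1d411af7b9708",
    "PivotTotalDifficulty": "56285312641338818488948"
  }
}
```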

[discord] <estebaneu> You are helping a lot and I am learning plenty in the process 🙂 So, I shut down Nethermind, deleted nethermind_db and started over (just as I did before), this time with the edited mainnet.cfg. I'm sending you the logs of the current process; let's see if it gives you more insight. I'll just let it run and sync for now. Fingers crossed. And yes, the internet connection is horrendous at the moment, a lot of load on the 4G net around here this time of the year (living in the middle of nowhere)
https://cdn.discordapp.com/attachments/629004402170134537/1008741611603230725/log
[discord] <Marcin> It's super slow, but otherwise everything in the logs looks fine. We need it to download everything (currently Downloaded 15335016 / 15346448) and then snap sync will start. Let's wait, fingers crossed 🙂
[discord] <mightypenguin> Feature request:
It would be VERY helpful to have a log message on startup indicating whether NM at least has all the configuration options specified that are required for the Ethereum mainnet merge.
[discord] <Marcin> Thank you for the feedback, created an issue: NethermindEth/nethermind#4423
[discord] <santamansanta> Noticed something odd today. My internet went down and I got a notification that I was missing attestations. Restarting the modem fixed the internet issue, but for some reason Nethermind was then slow processing blocks. For example, I could see it discover the latest blocks but it would not process them immediately; instead it would take a sweet 3-4 minutes and then process all the blocks discovered in the previous 3-4 minutes in one go. It behaved that way for probably 15-20 mins. Now it seems to be back to normal. Just seemed odd. It was connected to the max peers within a minute after the internet connection was restored, so I'm not sure what caused the issue
[discord] <Sherie | serv.eth> Please wait for the devs to provide their feedback. @luk @MarekM
[discord] <Marcin> Which network is it? With which CL?
[discord] <MarekM> and which Nethermind version?
[discord] <estebaneu> Alright, a little update. Seems like the download always has a distance of exactly 32 blocks. Is that expected behaviour? I attached the logs.
https://cdn.discordapp.com/attachments/629004402170134537/1009112117003173979/mainnet.logs.txt
https://cdn.discordapp.com/attachments/629004402170134537/1009112117384859768/mainnet.logs2.txt
[discord] <Marcin> This Downloaded with a distance of 32 blocks looks weird; I don't know why it is always exactly 32. I need to look a bit deeper into the code. But otherwise, SnapSync has started!

2022-08-16 11:11:09.0143|INFO|128|SNAP - progress of State Ranges (Phase 1): 10.9375% [* ]

It is super slow (on a "standard" connection the whole SnapSync takes 3-4h), but we are going in a good direction! It will take a lot more time - it's only at 11% - but we are constantly progressing. We moved from 0% at about midnight to 11% at 11 AM, so so far it is going at a speed of about 1% per hour. After Phase 1 we will move into Phase 2 - healing - which at this speed can take a few more hours. At that point the node will be fully synced and operational, and in the background we will start downloading Old Bodies and Old Receipts, which will take a few more days
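The pace estimate above is simple arithmetic; as a sanity check (numbers taken from the message, nothing Nethermind-specific):

```python
# ~0% at midnight to ~11% at 11 AM: about 1 percentage point per hour.
progress_pct = 11.0
elapsed_hours = 11.0
rate_pct_per_hour = progress_pct / elapsed_hours
remaining_hours = (100.0 - progress_pct) / rate_pct_per_hour
print(f"Phase 1 rate: {rate_pct_per_hour:.1f} %/h, "
      f"time remaining: ~{remaining_hours:.0f} h")
```

At 1%/h that leaves roughly 89 hours of Phase 1, which is why "a lot more time" is an understatement on this connection.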

[discord] <Enes Zeren> Hi, I am using Ropsten and I can't receive the latest blocks. My log file includes this: "Numbers resolved, level = Max(12289999, 12350738), header = Max(12289999, 12350738), body = Max(0, 12350738)".
Can you help me figure out what I am doing wrong?
[discord] <Enes Zeren> How can I update the max value, and is that the correct solution?
[discord] <Bing² | serv.eth> Hi, can you share your configuration and logs for the team to check when available?