[discord] <Yorick | cryptomanufaktur.io> Nethermind docker is nethermindeth/nethermind
[discord] <Yorick | cryptomanufaktur.io> That's where the dev stuff lives
[discord] <santamansanta> It is 176 GB now. It was around 140 GB after the nethermind full/first sync 12 days ago. Before starting pruning it was 159 GB
[discord] <santamansanta> Makes me wonder this - a fresh new install of Nethermind has the /nethermind_db/mainnet/state directory start at 140 GB. So when it grows to 200 GB in 3 weeks (let's assume) and we trigger a full prune, shouldn't it go back to the 140-145 GB range when pruning is done? Or am I not understanding this correctly? @luk @MarekM @DanielC
[discord] <DanielC> In theory, you're right. I don't know how efficient the full pruning algorithm is. I wonder what the numbers look like when, let's say, 140 -> 300 -> full pruning -> ?
[discord] <santamansanta> So my attempt to prune at 159 GB (after 10 days) is probably not a good test? Pruning took 3 days and I was at 170 GB after it was done 😦
[discord] <santamansanta> So it didn't really prune anything, it feels like?
[discord] <DanielC> It looks like it's not efficient enough to ideally prune all obsolete trie nodes. I wonder if the process moves in the meantime from one root hash to another...
Full pruning probably makes sense when the numbers are much bigger, for example 2x the full state
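To put rough numbers on this exchange, a sanity check using only the figures quoted above (and assuming, per luk's note further down, that in-memory pruning is disabled while a full prune runs, so state grows faster during it):

state at prune start:           159 GB
state at prune end (3 days):    170 GB  -> net +11 GB
if the prune reclaimed the ~19 GB of garbage (back to ~140 GB), the node wrote ~30 GB of new state in 3 days;
if it reclaimed nothing, it wrote ~11 GB - still well above the ~1.6 GB/day seen in the first 12 days (140 -> 159 GB).

Either way, the post-prune size is dominated by growth during the multi-day prune, which is consistent with the advice below to trigger full pruning only once much more garbage has accumulated.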
[discord] <luk> Bodies, Receipts and State always grow
[discord] <DanielC> 11 GB in 3 days only for State? And we're talking about a clean state tree without obsolete nodes. Naaaaaah...
[discord] <luk> hm....
[discord] <luk> btw this is not the intended use case for FP (full pruning)
[discord] <luk> As we had some issues with in-memory pruning, we currently disable it while full pruning runs, which adds a few GB for that run. This could be revisited now if it's still a problem. I would rather focus on state db layout changes though
[discord] <DanielC> Nobody here is talking about changing this behavior, but about understanding the source of these additional GB. That was a concern of @santamansanta
[discord] <luk> yeah, don't start FP unless you've accumulated at least 100+ GB of garbage
[discord] <luk> it's more of a once-every-few-months thing, not once per week
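For reference, a minimal sketch of the pruning options under discussion, expressed as run-time flags. Pruning.Mode and its Hybrid value appear in the conversation itself; the trigger and threshold option names and the value are assumptions recalled from the Nethermind docs of this era, so verify them against your version:

--Pruning.Mode Hybrid                      # in-memory pruning, with full pruning available on top
--Pruning.FullPruningTrigger StateDbSize   # assumption: alternatives Manual / VolumeFreeSpace
--Pruning.FullPruningThresholdMb 256000    # illustrative ~250 GB trigger, matching the "100+ GB of garbage" advice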
[discord] <kaparnos> how did it go? I am also having the issue
[discord] <Micah | serv.eth> I resynced with 1.13.6 using snap sync and it fixed things.
[discord] <kaparnos> thanks
[discord] <DanielC> Did you delete all DBs before resyncing? How long did Snap Sync take?
[discord] <Micah | serv.eth> I deleted the nethermind_data folder and syncing took some number of hours (I didn't pay close attention, less than 12 I think).
[discord] <DanielC> I would expect the state snap sync to take between 3 and 4 hours. Anyway, good to hear it works now.
Thanks for the update.
[discord] <kamilsadik> I've been experiencing the same for a while. Just deleted my database and started fresh with SnapSync. So far so good. Fingers crossed!
[discord] <kamilsadik> Good to hear. That's what I'm doing now. You weren't kidding @MarekM, SnapSync is freaking fast
[discord] <Yorick | cryptomanufaktur.io> Makes sense. Someone on Reddit suggested that hybrid mode is now the default: What’s the envisioned pruning method now? Is “run hybrid, kick off full prune” safe because this disables in-memory prune, or is the previous recommendation of running with memory prune, restarting with full, then restarting with memory prune still the way to go?

[discord] <kamilsadik> My health check showed

{"status":"Unhealthy","totalDuration":"00:00:00.0418311","entries":{"node-health":{"data":{},"description":"The node is now fully synced with a network. Peers: 150. The node stopped processing blocks.","duration":"00:00:00.0159409","status":"Unhealthy","tags":[]}}}

and my node has been stuck on a single block all night. My terminal killed the process at one point yesterday, and I think that might have corrupted the database. I'm running a Goerli Nethermind node and two Teku instances (Prater and Mainnet), which I think is too heavy a load for my 32GB RAM while my mainnet node is syncing.

I've cleared out my mainnet database, restarted my node, and am syncing my mainnet node from scratch while not running any other processes on my NUC. Here's my config:

sudo ./Nethermind.Runner --config mainnet --JsonRpc.Enabled true --JsonRpc.JwtSecretFile=/tmp/jwtsecret --JsonRpc.AdditionalRpcUrls=http://localhost:8551 --Sync.DownloadBodiesInFastSync true --Sync.DownloadReceiptsInFastSync true --Sync.SnapSync true --Sync.AncientBodiesBarrier 11052984 --Sync.AncientReceiptsBarrier 11052984 --HealthChecks.Enabled true --HealthChecks.UIEnabled true

Will fire up the other processes once the mainnet node is fully synced.
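Since that command enables HealthChecks.Enabled and HealthChecks.UIEnabled, the payload quoted above can be polled directly; a sketch, assuming the default JSON-RPC port 8545 and Nethermind's default /health path:

curl http://localhost:8545/health
# returns JSON like the {"status":"Unhealthy",...} object shown above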

[discord] <dinomuuk> Hi. My main drive is only 500 GB; my secondary drive is 2 TB. I'm just starting to install Nethermind and can't figure out how to set up the database directory on the secondary drive.
[discord] <dinomuuk> Anyone have a guide I can refer to? The Nethermind docs only show the default directories
[discord] <kamilsadik> Does this flag not work? You can pass it at run-time, or in your config
--Init.BaseDbPath /PATH
[discord] <Bing² | serv.eth> Hi, make sure your secondary drive is fast (SSD, not HDD). Then you can just set the datadir configuration variable to the secondary drive: https://docs.nethermind.io/nethermind/ethereum-client/configuration/#datadir
[discord] <Bing² | serv.eth> I believe you can also set it via Init module BaseDbPath https://docs.nethermind.io/nethermind/ethereum-client/configuration/init
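Both suggestions come down to the same setting; a minimal sketch (the mount point is an example):

# at run time:
./Nethermind.Runner --config mainnet --Init.BaseDbPath /mnt/ssd2/nethermind_db

# or as a fragment of the config file (e.g. configs/mainnet.cfg):
"Init": {
  "BaseDbPath": "/mnt/ssd2/nethermind_db"
}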
[discord] <dinomuuk> Yup, just installed a secondary SSD
[discord] <dinomuuk> Thank you guys, will try tomorrow
[discord] <kamilsadik> Good luck!
[discord] <Marcin> "I was getting frustrated because it showed only 0.45% progress and 450MB downloaded after several hours." - our snap sync has two stages - in first one you should have logs like SNAP - progress of State Ranges (Phase 1): x%, it should take 3-4 hours. When is finished, we are starting healing phase and here logs might be misleading - it's this StateSync with low percentage value. This healing phase will be refactored in 1-2 months, most likely after The Merge. And about RAM usage - is should drop when download of Old Bodies and Old Receipts will be finished.
[discord] <estebaneu> I decided to completely re-sync Nethermind before the merge; now I run into an issue where it never downloads the last "elements", saying "downloaded 153450xx/15345040" where xx is always 40 behind
[discord] <Sherie | serv.eth> Can you please share the full logs here for the devs to check?
[discord] <kamilsadik> Makes total sense. Thank you!
[discord] <Marcin> It's hard to say what is going on without more context - logs would be very helpful
[discord] <estebaneu> This is how it looks after almost finishing syncing. It syncs very fast until it reaches about 20-40 blocks before the latest one. My settings are fairly standard, although I run with 25 peers. Internet speed isn't great (4G, 30 Mbit down / 10 Mbit up), but it hasn't been an issue so far (1.5 years).
[discord] <undoubted08 | serv.eth> Hi, let's wait for @luk or @MarekM to check on this.
[discord] <estebaneu> https://cdn.discordapp.com/attachments/629004402170134537/1008694995789811792/log
[discord] <Marcin> Peering doesn't look good. Would you be able to send full logs? I would like to look for the root cause of this behaviour, so I need logs from the time before the headers download got stuck
[discord] <estebaneu> hm, my log files seem to be weird. They were last updated on Friday. I do find them in Nethermind/logs, right? Sorry for the noob question.
[discord] <Marcin> Don't worry, there are no bad questions 🙂 If you are running it from our release package then that's right: main_folder/logs
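A quick way to confirm whether the log file is actually being written (the filename matches the attachment shared below; adjust paths to your setup):

ls -lt main_folder/logs/                     # newest files first - check the modification times
tail -f main_folder/logs/mainnet.logs.txt    # follow the live log, if it is being written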
[discord] <estebaneu> thank you! So, this is a log file from a few days ago, but the issue is the same (no idea why there are no newer logs in Nethermind/logs). This was when I was still trying to prune the db, before I decided to sync from 0.
https://cdn.discordapp.com/attachments/629004402170134537/1008705245410504764/mainnet.logs.txt

[discord] <Marcin> How did you remove the db before syncing from 0? In the logs of the newest sync I see
2022-08-12 19:11:06.8373|INFO|11|Initializing 11 plugins
2022-08-12 19:11:06.8260|INFO|11|Block tree initialized, last processed is 0 (0xd4e567...cb8fa3), best queued is 15216000, best known is 15216000, lowest inserted header 15215425, body , lowest sync inserted block number

What is important here: lowest inserted header 15215425 is not null, so when the node was started there was already something in the db. It is not necessarily the cause of this problem, but it might be.

Another thing: downloading headers is extremely slow on your machine. After 3h we have an average of only 284 headers per second. For context, I locally get ~7000. But let's not give up!

I think we can try:

  1. Turn off the node and remove the whole nethermind_db folder (main_folder/nethermind_db)
  2. Update the fast sync pivot. In main_folder/configs you can find mainnet.cfg - let's replace the existing pivot values with newer ones:

"PivotNumber": 15335000,
"PivotHash": "0xde3bca488cce75bee9aabbfab6fc15aa56bc04e715f3197537a1d411af7b9708",
"PivotTotalDifficulty": "56285312641338818488948",

Why do I want to do this? Our headers download has 2 parts: downloading headers after the pivot, and downloading headers before the pivot (Old Headers). To start snap sync we only need to finish downloading the newest headers (after the pivot). By updating the pivot we will need to download far fewer headers, which can be crucial with your internet speed. In 1.13.6 our default pivot is 15216000, so updating it to 15335000 means downloading 119000 fewer headers before snap sync can start! And once snap sync starts, Old Headers will be downloaded in the background.
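As a sketch, those two steps in shell form (paths assume the release-package layout mentioned earlier; $EDITOR stands in for your editor of choice):

cd main_folder
rm -rf nethermind_db          # step 1: removes all chain databases; the node re-syncs from scratch
$EDITOR configs/mainnet.cfg   # step 2: replace the three Pivot* values with the ones quoted above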

[discord] <estebaneu> You are helping a lot and I am learning plenty in the process 🙂 So, I shut down Nethermind, deleted nethermind_db and started over (just as I did before), this time with the edited mainnet.cfg. I'm sending you the logs of the current process; let's see if it gives you more insight. I'll just let it run and sync for now. Fingers crossed. And yes, the internet connection is horrendous at the moment - a lot of load on the 4G net around here this time of year (living in the middle of nowhere)
https://cdn.discordapp.com/attachments/629004402170134537/1008741611603230725/log
[discord] <Marcin> It's super slow, but otherwise everything in the logs looks fine. We need to download it all (Downloaded 15335016 / 15346448) and then snap sync will start. Let's wait and fingers crossed 🙂