Gitter Bridge
@nethermind-bridge
[discord] <Micah | serv.eth> Why isn't snap sync default if fast sync is broken?
[discord] <DanielC> Snap Sync never starts immediately; the headers have to catch up to the pivot hardcoded in the config file.
[discord] <DanielC> I think it is default in the mainnet.cfg
[discord] <DanielC> It cannot be default for all networks
[discord] <DanielC> because only Geth supports it and there are networks without Geth.
[discord] <DanielC> So the default config files for mainnet and goerli have it switched to true but probably you're using your own config file without this value set to true.
[discord] <Micah | serv.eth> Ah, looks like it changed in 1.13.6.
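For reference, Snap Sync can be switched on explicitly either on the command line or in a custom .cfg file. The flag name below matches the one used in a command quoted later in this chat; the .cfg layout is a sketch of the usual Nethermind JSON config (and Snap Sync is generally described as requiring Fast Sync), so verify against the docs for your version:

```shell
# Enable Snap Sync explicitly when launching with a custom config
# (flag name as used elsewhere in this chat; the path is an example):
./Nethermind.Runner --config /path/to/custom.cfg --Sync.SnapSync true

# or set it in the .cfg file itself (sketch of the JSON config layout):
#   "Sync": {
#     "FastSync": true,
#     "SnapSync": true
#   }
```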
[discord] <santamansanta> How did this go?
[discord] <santamansanta> So have to start syncing from the beginning?
[discord] <Micah | serv.eth> Terribly. Removing only the state folder doesn't work. However, I learned you can trigger a full prune via RPC instead.
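For anyone searching later: to the best of my knowledge the manual full-prune trigger mentioned here is exposed through the admin JSON-RPC module as `admin_prune` (it requires the Admin module to be enabled and, I believe, the full-pruning trigger set to Manual — check your version's docs). A minimal invocation sketch against a locally running node:

```shell
# Ask a locally running node to start a full prune. Assumes the Admin RPC
# module is enabled and the full-pruning trigger is set to Manual.
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"admin_prune","params":[],"id":1}'
```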
[discord] <santamansanta> I did a full prune too. It took 3 days. state/0 directory is gone and only state/1 exists. But the size is like 13 gb more than what it was before the prune started. But I only had it running for 10 days before I tried to prune
[discord] <santamansanta> I am confused lol
[discord] <Micah | serv.eth> How big is the state directory?
[discord] <TobiWo> Hi, I'm new to Nethermind and trying to connect to our node via the Nethermind.CLI. While connecting I'm receiving an Authentication error. This also happens when trying to send a command like eth.blockNumber. On the latter I furthermore receive 0.0, indicating that the connection does not work.
Do I need the JWT for those RPC calls as well, or how can I authenticate?
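A sketch of how the JWT authentication works, in case it helps: the authenticated (engine) port expects an HS256 token derived from the shared secret file in a Bearer header, while the plain JSON-RPC port usually needs no token. Everything below — the secret path, the port, and the hand-rolled token builder — is illustrative, not official Nethermind tooling:

```shell
# Illustrative only: build an HS256 JWT from the shared secret file and call
# the authenticated port with it. JWT_FILE and the port are assumptions.
JWT_FILE=/tmp/jwtsecret
# For this sketch, create a throwaway secret if none exists (use your node's real one).
[ -f "$JWT_FILE" ] || python3 -c 'import secrets; print("0x" + secrets.token_hex(32))' > "$JWT_FILE"

TOKEN=$(python3 - "$JWT_FILE" <<'EOF'
import base64, hashlib, hmac, json, sys, time

def b64url(data: bytes) -> bytes:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=")

secret_hex = open(sys.argv[1]).read().strip()
if secret_hex.startswith("0x"):
    secret_hex = secret_hex[2:]
secret = bytes.fromhex(secret_hex)

# header.payload, each base64url-encoded JSON; iat must be close to "now"
signing_input = b".".join([
    b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode()),
    b64url(json.dumps({"iat": int(time.time())}).encode()),
])
sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
print((signing_input + b"." + b64url(sig)).decode())
EOF
)

# Send the token as a Bearer header; if no node is listening this just
# reports that instead of failing the script.
curl -s -X POST http://localhost:8551 \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  || echo "node not reachable"
```

If the CLI is pointed at the plain JSON-RPC port and still returns 0.0, the connection itself is likely failing rather than the authentication.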
[discord] <TobiWo> Same here
[discord] <TobiWo> Same here for that combi as well.
[discord] <Yorick | cryptomanufaktur.io> This recovered with a goerli_testing for Nethermind and unstable for Nimbus
[discord] <TobiWo> Mhh, no develop image tag for nethermind available 🤔
[discord] <TobiWo> Also no unstable or develop image tag for nimbus available.
[discord] <TobiWo> Then I need to build 👷
[discord] <Yorick | cryptomanufaktur.io> goerli_testing docker tag for Nethermind; unstable source build for Nimbus
[discord] <Yorick | cryptomanufaktur.io> That did the trick here
[discord] <Yorick | cryptomanufaktur.io> Nethermind docker is nethermindeth/nethermind
[discord] <Yorick | cryptomanufaktur.io> That's where the dev stuff lives
[discord] <santamansanta> It is 176 GB now. It was around 140 GB after the nethermind full/first sync 12 days ago. Before starting pruning it was 159 GB
[discord] <santamansanta> Makes me wonder this - a fresh new install of nethermind has the /nethermind_db/mainnet/state directory start at 140 GB. So when it grows to 200 GB in 3 weeks (let's assume) and we trigger a full prune, shouldn't it go back to the 140-145 GB range when pruning is done? Or am I not understanding this correctly? @luk @MarekM @DanielC
[discord] <DanielC> In theory, you're right. I don't know how efficient the full pruning algorithm is. I wonder what the numbers look like when, let's say, 140 -> 300 -> full pruning -> ?
@luk
[discord] <santamansanta> So my attempt to prune at 159 GB (after 10 days) is probably not a good test? Pruning took 3 days and I was at 170 GB after it was done 😦
[discord] <santamansanta> So it didn't really prune anything, it feels like?
[discord] <DanielC> It looks like it's not efficient enough to ideally prune all obsolete trie nodes. I wonder if the process moves in the meantime from one root hash to another...
Probably full pruning makes sense when the numbers are much bigger, for example 2x the full state.
[discord] <luk> Bodies, Receipts and State always grow
[discord] <DanielC> 11 GB in 3 days only for State? And we're talking about a clean state tree without obsolete nodes. Naaaaaah...
[discord] <luk> hm....
[discord] <luk> btw this is not intended use case for FP
[discord] <luk> As we had some issues with in-memory pruning, we are currently disabling it while full pruning runs; this adds a few GB for that run. This could be revisited now if it's still a problem. I would rather focus on state DB layout changes though
[discord] <DanielC> Nobody is talking here about changing this behavior but about understanding the source of these additional GB. That was a concern of @santamansanta
[discord] <luk> yeah don't start FP unless you accumulated at least 100+GB of garbage
[discord] <luk> its more once per few months thing, not once per week
[discord] <kaparnos> how did it go? I am also having the issue
[discord] <Micah | serv.eth> I resynced with 1.13.6 using snap sync and it fixed things.
[discord] <kaparnos> thanks
[discord] <DanielC> Did you delete all DBs before resyncing? How long did Snap Sync take?
[discord] <Micah | serv.eth> I deleted the nethermind_data folder and syncing took some number of hours (I didn't pay close attention, less than 12 I think).
[discord] <DanielC> I would expect the state snap sync to take between 3 and 4 hours. Anyway, good to hear it works now.
Thanks for the update.
[discord] <kamilsadik> I've been experiencing the same for a while. Just deleted my database and started fresh with SnapSync. So far so good. Fingers crossed!
[discord] <kamilsadik> Good to hear. That's what I'm doing now. You weren't kidding @MarekM, SnapSync is freaking fast
[discord] <Yorick | cryptomanufaktur.io> Makes sense. Someone on Reddit suggested that hybrid mode is now the default: What’s the envisioned pruning method now? Is “run hybrid, kick off full prune” safe because this disables in-memory prune, or is the previous recommendation of running with memory prune, restarting with full, then restarting with memory prune still the way to go?

[discord] <kamilsadik> My health check showed

{"status":"Unhealthy","totalDuration":"00:00:00.0418311","entries":{"node-health":{"data":{},"description":"The node is now fully synced with a network. Peers: 150. The node stopped processing blocks.","duration":"00:00:00.0159409","status":"Unhealthy","tags":[]}}}

and my node has been stuck on a single block all night. My terminal killed the process at one point yesterday, and I think that might have corrupted the database. I'm running a Goerli Nethermind node and two Teku instances (Prater and Mainnet), which I think is too heavy a load for my 32GB RAM while my mainnet node is syncing.

I've cleared out my mainnet database, restarted my node, and am syncing my mainnet node from scratch while not running any other processes on my NUC. Here's my config:

sudo ./Nethermind.Runner --config mainnet --JsonRpc.Enabled true --JsonRpc.JwtSecretFile=/tmp/jwtsecret --JsonRpc.AdditionalRpcUrls=http://localhost:8551 --Sync.DownloadBodiesInFastSync true --Sync.DownloadReceiptsInFastSync true --Sync.SnapSync true --Sync.AncientBodiesBarrier 11052984 --Sync.AncientReceiptsBarrier 11052984 --HealthChecks.Enabled true --HealthChecks.UIEnabled true

Will fire up the other processes once the mainnet node is fully synced.
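On reading that health-check output: with --HealthChecks.Enabled true the checks are served over HTTP on the JSON-RPC port (the /health path and port below are assumptions based on the flags in the command above), and the status field is easy to pull out:

```shell
# Query the health endpoint and extract the overall status; the fallback JSON
# keeps this sketch runnable even when no node is listening.
HEALTH_JSON=$(curl -s http://localhost:8545/health || echo '{"status":"unknown"}')
printf '%s' "$HEALTH_JSON" | python3 -c 'import sys, json; print(json.load(sys.stdin)["status"])'
```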

[discord] <dinomuuk> Hi. My main drive is only 500 GB; my secondary drive is 2 TB. I've just started to install Nethermind and can't figure out how to set the database directory to the secondary drive.
[discord] <dinomuuk> Does anyone have a guide I can refer to? The Nethermind docs only show the default directories.
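For the directory question, a sketch: the base DB location is configurable, to the best of my knowledge via Init.BaseDbPath (or the matching BaseDbPath entry in a .cfg file) — verify the exact option name against your version's docs. The mount point below is an example:

```shell
# Point the databases at the larger secondary drive (path is an example;
# the option name is my recollection of Nethermind's Init config section):
./Nethermind.Runner --config mainnet --Init.BaseDbPath /mnt/bigdrive/nethermind_db

# Alternatively, move the existing data and symlink the default location:
#   mv nethermind_db /mnt/bigdrive/nethermind_db
#   ln -s /mnt/bigdrive/nethermind_db nethermind_db
```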