Gitter Bridge
@nethermind-bridge
[discord] <Micah | serv.eth> I can switch mid sync?
[discord] <Micah | serv.eth> I would rather not re-download all of the headers. 😬
[discord] <DanielC> Although SnapSync:true overrides FastSync to true under the hood 😉
[discord] <DanielC> You can try. If you don't delete the databases, then even if you upgrade to 1.13.6 the workaround will use the existing RocksDB version (the one from 1.13.5), so it's going to be slower, but still it's going to be max 8 h for the Snap Sync (instead of 3 h). Not terrible.
[discord] <Micah | serv.eth> 8h is better than 100 days.
[discord] <DanielC> Yes 😄
[discord] <Micah | serv.eth> How can I tell if it is working other than waiting 8 hours and seeing if I still have 99.5 days left?
[discord] <Micah | serv.eth> The logs look the same after turning on snap sync and restarting.
[discord] <DanielC> When Snap Sync is enabled you'll see new types of log entries.
[discord] <DanielC> You should see SNAP entries at some point.
[discord] <Micah | serv.eth> I still just see
Syncing state nodes 
State Sync 00.06:59:26 | ~0.90 % | 892.62MB / ~98820.00MB | branches: 0.00 % | kB/s:     0 | accounts 323414 | nodes 3131015 | diagnostics: 0.658.40ms 
Changing state StateNodes to FastSync, StateNodes at processed:0|state:0|block:15321161|header:15321166|peer block:15321199 
Sync mode changed from StateNodes to FastSync, StateNodes 
Downloaded 15321167 / 15321199 | current     0.09bps | total     0.03bps 
State Sync 00.06:59:26 | ~0.90 % | 892.62MB / ~98820.00MB | branches: 0.00 % | kB/s:     0 | accounts 323414 | nodes 3131015 | diagnostics: 0.662.25ms 
Changing state FastSync, StateNodes to StateNodes at processed:0|state:0|block:15321161|header:15321167|peer block:15321199 
Sync mode changed from FastSync, StateNodes to StateNodes 
State Sync 00.06:59:26 | ~0.90 % | 892.65MB / ~98820.00MB | branches: 0.00 % | kB/s:    14 | accounts 323433 | nodes 3131083 | diagnostics: 0.665.61ms
[discord] <DanielC> It seems we have some kind of feature implemented that checks if some process already started and continues with it :/
[discord] <DanielC> It wasn't like that a few months ago when we delivered SNAP sync, but it seems it's changed.
[discord] <Micah | serv.eth> Why isn't snap sync the default if fast sync is broken?
[discord] <DanielC> Snap Sync never starts immediately; the headers have to catch up to the pivot hardcoded in the config file.
[discord] <DanielC> I think it is default in the mainnet.cfg
[discord] <DanielC> It cannot be default for all networks
[discord] <DanielC> because only Geth supports it and there are networks without Geth.
[discord] <DanielC> So the default config files for mainnet and goerli have it switched to true but probably you're using your own config file without this value set to true.
[discord] <Micah | serv.eth> Ah, looks like it changed in 1.13.6.
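(Note for later readers: those Sync flags can also be passed as command-line overrides instead of editing a config file. A minimal sketch, assuming Nethermind's usual --Section.Key override syntax and the Sync.FastSync / Sync.SnapSync key names from the shipped mainnet.cfg; verify against your version's docs.)

    # Sketch: enable snap sync on top of the stock mainnet config
    # via CLI overrides rather than a custom config file.
    ./Nethermind.Runner --config mainnet \
      --Sync.FastSync true \
      --Sync.SnapSync true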
[discord] <santamansanta> How did this go?
[discord] <santamansanta> So you have to start syncing from the beginning?
[discord] <Micah | serv.eth> Terribly. Removing only the state folder doesn't work. However, I learned you can trigger a full prune via RPC instead.
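(A sketch of that RPC trigger, assuming the Admin JSON-RPC module is enabled and the node's Pruning.Mode permits full pruning; admin_prune is the method name in Nethermind's Admin module docs, but double-check it against your version.)

    # Sketch: manually kick off a full prune over JSON-RPC.
    curl -s -X POST http://localhost:8545 \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","method":"admin_prune","params":[],"id":1}'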
[discord] <santamansanta> I did a full prune too. It took 3 days. The state/0 directory is gone and only state/1 exists. But the size is like 13 GB more than what it was before the prune started. But I only had it running for 10 days before I tried to prune.
[discord] <santamansanta> I am confused lol
[discord] <Micah | serv.eth> How big is the state directory?
[discord] <TobiWo> Hi, I'm new to Nethermind and trying to connect to our node via the Nethermind.CLI. While connecting I'm receiving an authentication error. This also happens when trying to send a command like eth.blockNumber. On the latter I furthermore receive 0.0, indicating that the connection does not work.
Do I need the JWT for those RPC calls as well, or how can I authenticate?
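(Context for later readers: the JWT secret guards only the Engine API endpoint, which is meant for the consensus client; ordinary user RPC such as eth_blockNumber goes to the plain JSON-RPC port with no token, provided JsonRpc.Enabled is true. A sketch, assuming the default ports 8545 and 8551:)

    # Sketch: plain user RPC needs no JWT on the JSON-RPC port.
    curl -s -X POST http://localhost:8545 \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
    # Only the Engine API port (8551 by default) expects the JWT, sent as
    # an Authorization: Bearer <signed token> header by the consensus client.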
[discord] <TobiWo> Same here
[discord] <TobiWo> Same here for that combination as well.
[discord] <Yorick | cryptomanufaktur.io> This recovered with the goerli_testing tag for Nethermind and unstable for Nimbus
[discord] <TobiWo> Mhh, no develop image tag available for Nethermind 🤔
[discord] <TobiWo> Also no unstable or develop image tag available for Nimbus.
[discord] <TobiWo> Then I need to build 👷
[discord] <Yorick | cryptomanufaktur.io> goerli_testing docker tag for Nethermind; unstable source build for Nimbus
[discord] <Yorick | cryptomanufaktur.io> That did the trick here
[discord] <Yorick | cryptomanufaktur.io> Nethermind docker is nethermindeth/nethermind
[discord] <Yorick | cryptomanufaktur.io> That's where the dev stuff lives
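(A sketch of pulling that image with the tag named above; dev tags like this are moving targets, so confirm the tag still exists on Docker Hub first.)

    # Sketch: pull the dev image/tag mentioned in the messages above.
    docker pull nethermindeth/nethermind:goerli_testing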
[discord] <santamansanta> It is 176 GB now. It was around 140 GB after Nethermind's full/first sync 12 days ago. Before pruning started, it was 159 GB.
[discord] <santamansanta> Makes me wonder this: a fresh new install of Nethermind has the /nethermind_db/mainnet/state directory start at 140 GB. So when it grows to 200 GB in 3 weeks (let's assume) and we trigger a full prune, shouldn't it go back to the 140-145 GB range when pruning is done? Or am I not understanding this correctly? @luk @MarekM @DanielC
[discord] <DanielC> In theory, you're right. I don't know how efficient the full pruning algorithm is. I wonder what the numbers look like when, let's say, 140 -> 300 -> full pruning -> ? @luk
[discord] <santamansanta> So my attempt to prune at 159 GB (after 10 days) is probably not a good test? Pruning took 3 days and I was at 170 GB after it was done 😦
[discord] <santamansanta> So it didn't really prune anything, it feels like?
[discord] <DanielC> It looks like it's not efficient enough to ideally prune all obsolete trie nodes. I wonder if the process moves in the meantime from one root hash to another...
Probably full pruning makes sense when the numbers are much bigger, for example 2x the full state
[discord] <luk> Bodies, Receipts and State always grow
[discord] <DanielC> 11 GB in 3 days only for State? And we're talking about a clean state tree without obsolete nodes. Naaaaaah...
[discord] <luk> hm....
[discord] <luk> btw this is not the intended use case for FP
[discord] <luk> As we had some issues with in-memory pruning, we currently disable it while full pruning runs; this adds a few GB for that run. This could be revisited now if it's still a problem. I would rather focus on state db layout changes though
[discord] <DanielC> Nobody is talking here about changing this behavior but about understanding the source of these additional GB. That was a concern of @santamansanta
[discord] <luk> yeah, don't start FP unless you've accumulated at least 100+ GB of garbage
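(To close the loop on that advice: full pruning can also be armed to fire only past a size threshold. A sketch, assuming the Pruning section's documented FullPruningTrigger and FullPruningThresholdMb keys; names and defaults vary between versions, and the threshold below is illustrative, not a recommendation.)

    # Sketch: trigger a full prune automatically once the state DB
    # exceeds the configured size, instead of starting it by hand.
    ./Nethermind.Runner --config mainnet \
      --Pruning.Mode Full \
      --Pruning.FullPruningTrigger StateDbSize \
      --Pruning.FullPruningThresholdMb 256000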