Gitter Bridge
@nethermind-bridge
[discord] <Micah | serv.eth> I would rather not re-download all of the headers. 😬
[discord] <DanielC> Although SnapSync:true overrides FastSync to true under the hood 😉
[discord] <DanielC> You can try. If you don't delete the databases, then even if you upgrade to 1.13.6 the workaround will use the existing RocksDB version (the one from 1.13.5), so it's going to be slower, but still a maximum of 8 h for the Snap Sync (instead of 3 h). Not terrible.
[discord] <Micah | serv.eth> 8h is better than 100 days.
[discord] <DanielC> Yes 😄
[discord] <Micah | serv.eth> How can I tell if it is working other than waiting 8 hours and seeing if I still have 99.5 days left?
[discord] <Micah | serv.eth> The logs look the same after turning on snap sync and restarting.
[discord] <DanielC> When Snap Sync is enabled you'll see new types of log entries.
[discord] <DanielC> You should see SNAP entries at some point.
[discord] <Micah | serv.eth> I still just see
Syncing state nodes 
State Sync 00.06:59:26 | ~0.90 % | 892.62MB / ~98820.00MB | branches: 0.00 % | kB/s:     0 | accounts 323414 | nodes 3131015 | diagnostics: 0.658.40ms 
Changing state StateNodes to FastSync, StateNodes at processed:0|state:0|block:15321161|header:15321166|peer block:15321199 
Sync mode changed from StateNodes to FastSync, StateNodes 
Downloaded 15321167 / 15321199 | current     0.09bps | total     0.03bps 
State Sync 00.06:59:26 | ~0.90 % | 892.62MB / ~98820.00MB | branches: 0.00 % | kB/s:     0 | accounts 323414 | nodes 3131015 | diagnostics: 0.662.25ms 
Changing state FastSync, StateNodes to StateNodes at processed:0|state:0|block:15321161|header:15321167|peer block:15321199 
Sync mode changed from FastSync, StateNodes to StateNodes 
State Sync 00.06:59:26 | ~0.90 % | 892.65MB / ~98820.00MB | branches: 0.00 % | kB/s:    14 | accounts 323433 | nodes 3131083 | diagnostics: 0.665.61ms
[discord] <DanielC> It seems we have some kind of feature implemented that checks if some process already started, and it continues with it :/
[discord] <DanielC> It wasn't like that a few months ago when we delivered SNAP sync, but it seems it's changed.
[discord] <Micah | serv.eth> Why isn't snap sync default if fast sync is broken?
[discord] <DanielC> Snap Sync never starts immediately; the headers have to catch up to the pivot hardcoded in the config file.
[discord] <DanielC> I think it is default in the mainnet.cfg
[discord] <DanielC> It cannot be default for all networks
[discord] <DanielC> because only Geth supports it and there are networks without Geth.
[discord] <DanielC> So the default config files for mainnet and goerli have it switched to true, but you're probably using your own config file without this value set to true.
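As context for the config discussion above: the switch being referred to is the Sync.SnapSync option. A minimal sketch of what a custom config file would need (the fragment shows only the snap-related keys; a real file contains many other sections, and exact defaults vary by version):

```json
{
  "Sync": {
    "FastSync": true,
    "SnapSync": true
  }
}
```

The same option can typically be overridden on the command line in the `--Section.Key value` style, e.g. `--Sync.SnapSync true`.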
[discord] <Micah | serv.eth> Ah, looks like it changed in 1.13.6.
[discord] <santamansanta> How did this go?
[discord] <santamansanta> So have to start syncing from the beginning?
[discord] <Micah | serv.eth> Terribly. Removing only the state folder doesn't work. However, I learned you can trigger a full prune via RPC instead.
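A sketch of what "trigger a full prune via RPC" can look like, assuming the node exposes the `admin` JSON-RPC namespace locally; the endpoint URL and port are assumptions about the setup, and whether the call succeeds depends on your Pruning configuration:

```python
import json
import urllib.request

def build_prune_request(request_id: int = 1) -> bytes:
    # JSON-RPC 2.0 envelope for the admin_prune method
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "admin_prune",
        "params": [],
        "id": request_id,
    }).encode()

def trigger_full_prune(url: str = "http://localhost:8545") -> dict:
    # POST the request to the node's JSON-RPC endpoint (URL is an assumption)
    req = urllib.request.Request(
        url,
        data=build_prune_request(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```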
[discord] <santamansanta> I did a full prune too. It took 3 days. The state/0 directory is gone and only state/1 exists. But the size is like 13 GB more than it was before the prune started. And I only had it running for 10 days before I tried to prune
[discord] <santamansanta> I am confused lol
[discord] <Micah | serv.eth> How big is the state directory?
[discord] <TobiWo> Hi, I'm new to Nethermind and trying to connect to our node via the Nethermind.CLI. While connecting I'm receiving an authentication error. This also happens while trying to send a command like eth.blockNumber. On the latter I furthermore receive 0.0, indicating that the connection does not work.
Do I need the JWT for those RPC calls as well, or how can I authenticate?
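For background on the JWT question: the authenticated endpoint is normally the engine port (8551 by default), which expects an HS256 JWT carrying an `iat` claim signed with the node's hex-encoded JWT secret, while the plain JSON-RPC port usually takes no token. A stdlib-only sketch of building such a bearer token (the function names here are made up for illustration):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWTs use unpadded base64url segments
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_engine_jwt(hex_secret: str, now: int = None) -> str:
    # HS256 JWT with an 'iat' claim, signed with the shared secret
    header = _b64url(json.dumps({"typ": "JWT", "alg": "HS256"}).encode())
    iat = now if now is not None else int(time.time())
    claims = _b64url(json.dumps({"iat": iat}).encode())
    signing_input = f"{header}.{claims}".encode()
    sig = hmac.new(bytes.fromhex(hex_secret), signing_input, hashlib.sha256).digest()
    return f"{header}.{claims}.{_b64url(sig)}"
```

The token would then be sent as an `Authorization: Bearer <token>` header on each request.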
[discord] <TobiWo> Same here
[discord] <TobiWo> Same here for that combi as well.
[discord] <Yorick | cryptomanufaktur.io> This recovered with a goerli_testing for Nethermind and unstable for Nimbus
[discord] <TobiWo> Mhh, no develop image tag for nethermind available 🤔
[discord] <TobiWo> Also no unstable or develop image tag for nimbus available.
[discord] <TobiWo> Then I need to build 👷
[discord] <Yorick | cryptomanufaktur.io> goerli_testing docker tag for Nethermind; unstable source build for Nimbus
[discord] <Yorick | cryptomanufaktur.io> That did the trick here
[discord] <Yorick | cryptomanufaktur.io> Nethermind docker is nethermindeth/nethermind
[discord] <Yorick | cryptomanufaktur.io> That's where the dev stuff lives
[discord] <santamansanta> It is 176 GB now. It was around 140 GB after the Nethermind full/first sync 12 days ago. Before starting the pruning it was 159 GB.
[discord] <santamansanta> Makes me wonder about this: a fresh new install of Nethermind has the /nethermind_db/mainnet/state directory start at 140 GB. So when it grows to 200 GB in 3 weeks (let's assume) and we trigger a full prune, shouldn't it go back to the 140-145 GB range when pruning is done? Or am I not understanding this correctly? @luk @MarekM @DanielC
[discord] <DanielC> In theory, you're right. I don't know how efficient the full pruning algorithm is. I wonder what the numbers look like when, let's say, 140 -> 300 -> full pruning -> ?
@luk
[discord] <santamansanta> So my attempt to prune at 159 GB (after 10 days) is probably not a good test? Pruning took 3 days and I was at 170 GB after it was done 😦
[discord] <santamansanta> So it didn't really prune anything, it feels like?
[discord] <DanielC> It looks like it's not efficient enough to ideally prune all the obsolete trie nodes. I wonder if the process moves in the meantime from one root hash to another...
Full pruning probably makes sense when the numbers are much bigger, for example 2x the full state
[discord] <luk> Bodies, Receipts and State always grows
[discord] <DanielC> 11 GB in 3 days only for State? And we're talking about a clean state tree without obsolete nodes. Naaaaaah...
[discord] <luk> hm....
[discord] <luk> btw this is not the intended use case for FP
[discord] <luk> As we had some issues with in-memory pruning, we currently disable it while full pruning runs, which adds a few GB for that run. This could be revisited now if it's still a problem. I would rather focus on state DB layout changes, though.
[discord] <DanielC> Nobody is talking here about changing this behavior, but about understanding the source of these additional GB. That was @santamansanta's concern.
[discord] <luk> yeah, don't start FP unless you've accumulated at least 100+ GB of garbage
[discord] <luk> it's more a once-every-few-months thing, not once per week