Autobot
@status-im-auto
mratsim@discord: You have all docker-related info here: https://nimbus.guide/docker.html
mratsim@discord: The EL client is not ready for prime-time yet.
Mkkoll@discord: I'm looking to set up a Nimbus client to run with one of the merge testnets 🙂
Mkkoll@discord: I've already got a mainnet Prysm validator and beacon node set up, so I'm not unfamiliar with CLI staking
Mkkoll@discord: Could anybody point me to any guides on how to set up Nimbus on one of the merge testnets?
andrewrobbins@discord: Hey everyone. I've finished setting up Nimbus and importing my validator keys. However my nextActionWait value is saying n/a. What am I doing wrong?
https://cdn.discordapp.com/attachments/613988663034118153/976589678943809556/Screen_Shot_2022-05-18_at_3.58.27_PM.png
andrewrobbins@discord: Never mind, looks like it's just waiting to be activated, I think. Is there a quick place I can go to see the activation queue on Prater?
iicc | stakely.io@discord: It's about 4 days currently, mine just became active
jconn93@discord: I'm getting "unable to decode REST response" when trying to sync from my local prysm node 😦
attempting to follow Somer's guide to switch Prysm > Nimbus
same story trying to use Infura

firebredd@discord: Hi, I'm running goerli geth as my eth1 client using this command:
docker run -it -d -v /home/ec2-user/.geth:/root -p 8545:8545 -p 30303:30303 --name geth-node --net=host ethereum/client-go:v1.10.15 --datadir=/root --http --http.port=8545 --http.addr=0.0.0.0 --http.vhosts=* --http.api=eth,net,web3,personal --goerli

The Node is running perfectly fine and is now SYNCING

Now I'm running Nimbus Beacon and Validator Node as my eth2 client, where I'm getting this error

Eth1 chain monitoring failure, restarting topics="eth1" err="getBlockByHash(m.dataProvider,\n BlockHash(m.depositsChain.finalizedBlockHash.data)) failed 3 times. Last error: Failed to send POST Request with JSON-RPC."

But the beacon node is syncing properly; is the error because geth is not yet fully synced?

You can see the logs in the snapshot attached.
https://cdn.discordapp.com/attachments/613988663034118153/976764603105181696/Screenshot_2022-05-19_at_2.02.51_PM.png
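For readers following along: a minimal dry-run sketch of pointing the beacon node at that geth endpoint. The binary path and network flag are assumptions; only the 8545 HTTP port comes from the docker command above, and `--web3-url` is the flag used by nimbus-eth2 releases of this era.

```shell
# Dry-run sketch: print the beacon-node invocation instead of executing it.
WEB3_URL="http://127.0.0.1:8545"          # geth HTTP endpoint from the docker command above
BEACON_BIN="./build/nimbus_beacon_node"   # hypothetical install path; adjust to yours

# Remove the leading echo to actually launch the node.
echo "$BEACON_BIN" --network=goerli --web3-url="$WEB3_URL"
```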

TennisBowling@discord: are you using http for the eth1 node url?
Abisoye148@discord: Are you rewarding people that donate to you on Gitcoin please?
jconn93@discord: Just successfully switched from Prysm > Nimbus!! Happy to be part of the team
LynxLove@discord: is it okay not to install the recent updates?
mratsim@discord: We had a POAP for early contributors:
  • everyone in the very first funding round of Nimbus
  • those that contributed over $150 in subsequent funding rounds back in 2018-2020
mratsim@discord: Yes, updates that are low-urgency can be done at your convenience with little to no impact. At most you might miss slight performance improvements but those are mostly relevant for the folks overclocking their Raspberry Pi
hanniabu@discord: does the nimbus team have a gitcoin or other way to donate? and how can we verify this address?
arnetheduck@discord: our official donation address is: 0x70E47C843E0F6ab0991A3189c28F2957eb6d3842 (see https://github.com/status-im/nimbus-eth2/#donations= ) - there's also a gitcoin grant at https://gitcoin.co/grants/137/nimbus-2 (which points to the same wallet)

Mattia@discord: Hi guys, not sure what's the best channel to ask this question so I'll try here:

Lighthouse user here, I would like to switch to another ETH2 client both because of Lighthouse's high usage % and for better performance/resource usage. I've been monitoring my server and noticed that Lighthouse is writing to disk quite a lot, more than 2 GB in 20 minutes. Since I am using an SSD I would like to try to minimize disk writes as much as possible and I heard that Nimbus is great regarding resource usage, especially RAM. My main concern is not RAM though but disk I/O. Can anyone share any insight on Nimbus disk I/O usage?
From the message I'm quoting it looks like you guys optimized for disk I/O as well, I'd like to know more if possible. How can Nimbus use less RAM, less CPU and less disk I/O than all other clients?

arnetheduck@discord: One recent independent comparison is https://someresat.medium.com/ethereum-staker-migration-guide-migrating-from-prysm-to-nimbus-b802a7dcb31e - there's a section on disk usage which should give you an idea of what to expect

mratsim@discord: For disk IO, we reduce it by using caches.

Regarding CPU, we started single-threaded, and are still mostly single-threaded, meaning we had to take deep looks into the actual bottlenecks.

Also we have an unfair advantage, we wrote most of the libraries (besides SQLite and cryptography) ourselves, and cryptography doesn't use RAM and SQLite is very optimized.

Mattia@discord: That is just what I was looking for - thanks! The disk usage is very impressive.
Mattia@discord: How can you have low disk IO because of cache, but also have low RAM usage?🤔
Is the cache very small? If yes, could it be increased even more somehow (maybe in the config) if RAM is not an issue?
arnetheduck@discord: we use a particular storage format which deduplicates a lot of the data (such as the validator set) - this has the double effect of reducing RAM usage and storage / IO at the same time, at the cost of some complexity when re-constituting the data, allowing the caches to be more efficient instead of having them use more RAM - in some cases, our caches are also compressed in-memory which allows us to keep more stuff off the disk that other clients persist
Mattia@discord: Very impressive, thanks! Is that complexity you mention felt as higher CPU usage when restarting Nimbus? Just wondering
arnetheduck@discord: it's a similar story to writing: less data to read (because we wrote less) = faster read = faster startup and less CPU usage, also at startup - restarts are typically in the single-second range, with most of the startup time dedicated to looking for your UPnP configuration and decrypting your validator keys - the complexity is mainly in the code, where we have to be a bit more careful not to load stale data, as opposed to using a simpler storage format
Mattia@discord: Looks like there's no drawback then. I'm sold, thank you guys!
arnetheduck@discord: get in touch if you run into any trouble - we have a general migration guide here: https://nimbus.guide/migration.html#migrate-from-another-client
andrewrobbins@discord: What EL is recommended with Nimbus? Have all of them been tested? I'm thinking of using Erigon but want to be sure it's compatible.
iicc | stakely.io@discord: We use Erigon and Geth and both work fine
Mattia@discord: I'm now a happy Nimbus user! Used checkpoint sync, now downloading the other blocks.
Two questions:
1) Should I enable --subscribe-all-subnets? Are rewards actually higher?
2) Can I increase Nimbus' cache? On geth I can change --cache to 6144 or more and it will use more RAM and sync faster. Is there anything similar for Nimbus? I haven't seen something like that in the documentation, but it might be worth asking.
Mattia@discord: Also, this is what my resource usage looks like when syncing. 57 slots/s; the ETA is actually increasing, started from 12 hours and now it's up to 17 hours
https://cdn.discordapp.com/attachments/613988663034118153/979093589164433478/unknown.png
nop@discord: hi, is there a way to increase the number of block proposals? I know proposers are randomly selected, but I've gotten few in months...
arnetheduck@discord: 1) slightly better blocks can be produced, but it takes a bit of bandwidth and cpu - given how rare it is that you produce a block, it really depends a lot on your setup - many people run with free bandwidth and beefy machines, there it makes sense - if you're not in that category, "probably" doesn't make sense (which is why it's off by default)
2) No exposed cache options, except for historical replays via rest (--rest-statecache-size and --rest-statecache-ttl) - these are useful if you make a lot of historical queries of the state (such as when building balance reports etc)
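A dry-run sketch of how those flags might be passed; the binary path and the cache size/TTL values are illustrative assumptions, and only the flag names come from the message above.

```shell
# Dry-run sketch: print the invocation instead of running it.
BEACON_BIN="./build/nimbus_beacon_node"   # hypothetical install path
STATECACHE_SIZE=8                         # number of cached states (example value)
STATECACHE_TTL=3600                       # cache entry lifetime in seconds (example value)

# Remove the leading echo to actually launch the node with the REST cache tuned.
echo "$BEACON_BIN" --rest \
  --rest-statecache-size="$STATECACHE_SIZE" \
  --rest-statecache-ttl="$STATECACHE_TTL"
```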

LynxLove@discord: sorry guys, noob question. The command to download the new Nimbus update is:

curl -LO https://github.com/status-im/nimbus-eth2/releases/download/nimbus-eth2_Linux_amd64_22.5.1_f7eff8fc.tar.gz

Right?

Mkkoll@discord: Anybody know how big the Ropsten chain data is right now? I've set up my VM with 200GB and I'm hoping that's enough, but I saw that Ropsten was 133GB back in 2020
Mkkoll@discord: Trying to find more recent estimates of the chain-data size
Popcorn@discord: hello @liftlines ,
tobi here 👋

Armaver@discord: For backing up Nimbus blockchain data, with minimal downtime, is it sensible to do this?

  • rsync the data dir while Nimbus is running (expecting some incomplete/corrupt files in target dir maybe)
  • stop Nimbus
  • rsync again, to fix incomplete/corrupt files (should be much faster than first run)
  • start Nimbus

Thanks!
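The steps above can be sketched as a script. The data directory, backup target, and service name are assumptions; `DRY_RUN=echo` prints each command instead of executing it.

```shell
# Backup sketch: bulk rsync while the node runs, then stop, fix-up rsync, restart.
DATA_DIR="/var/lib/nimbus"        # hypothetical Nimbus data directory
BACKUP_DIR="/mnt/backup/nimbus"   # hypothetical backup target
DRY_RUN=echo                      # set to "" to actually execute the commands

$DRY_RUN rsync -a "$DATA_DIR/" "$BACKUP_DIR/"           # bulk copy, node still running
$DRY_RUN systemctl stop nimbus-eth2                     # assumed service name
$DRY_RUN rsync -a --delete "$DATA_DIR/" "$BACKUP_DIR/"  # fast fix-up pass on cold files
$DRY_RUN systemctl start nimbus-eth2
```

The second rsync runs against a stopped node so the database files are copied in a consistent state; `--delete` also removes files that disappeared between passes.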

iicc | stakely.io@discord: Checkpoint sync is much faster and simpler
Armaver@discord: are you replying to my backup question? 😅
iicc | stakely.io@discord: yes
Armaver@discord: how long does a checkpoint sync take?