Dominik Schiener
@domschiener
and after that it will continue to sync?
I'm doing --fast btw
Rocky Fikki
@rfikki
@domschiener looks ok. Noticed you will be at a conference in St. Louis that I may be attending.
Dominik Schiener
@domschiener
yeah, I'm just confused because before it was downloading blocks (first time I'm doing --fast, simply because I cannot sync normally, so I had to delete the entire chaindata)
sounds cool, I'll present IOTA there ;)
Rocky Fikki
@rfikki
I find that using the --verbosity=3 flag seems to be less problematic when syncing.
Péter Szilágyi
@karalabe
@domschiener Please try dev builds
They sync faster, by a lot
Also, if you don't mind building geth yourself, there's a PR aggregating a lot of further fixes, which also speeds things up considerably
This is the PR if you'd be willing to build manually ethereum/go-ethereum#2657
Otherwise pull a binary from our nightly build bot: https://gitter.im/ethereum/go-ethereum?at=57526dc7e8163f872c4de23c
This doesn't contain all the fixes, but it's a lot faster than the stable branch
ellis2323
@ellis2323
i have a full sync in 3h and a fast in 20 minutes
Péter Szilágyi
@karalabe
Awesome, those numbers seem to check out with my local numbers. Still higher than I'd like (for full sync), but hey, one step at a time :)
Thanks a lot for doing these benchmarks!
Really appreciate having an outside comparison
@ellis2323 Can I tweet out the results? :D
Do you have a Twitter handle I can refer to? :)
found it :)
ellis2323
@ellis2323
lol
Daniel Whitenack
@dwhitena
Hey guys. I was just wondering if any of you will be at GopherCon this year? Thought maybe those of us there could meet up and talk about what we are all doing.
ellis2323
@ellis2323
i am writing articles on Ethereum. i will publish a chapter on geth and post my benchmarks in it (https://www.gitbook.com/book/ellis2323/blckchn/details )
Péter Szilágyi
@karalabe
Really curious about it :)
ellis2323
@ellis2323
@karalabe did you read my benchmark with the RAM disk too?
i have the same number for fast
Péter Szilágyi
@karalabe
fast should be more or less the same, it's not really bound by disk
ellis2323
@ellis2323
i was assuming that disk IO was the slowdown, but i was wrong
for the full, it seems the same
Péter Szilágyi
@karalabe
we found a few ugly bottlenecks in full sync imports
ellis2323
@ellis2323
the import of the 1M blocks is equivalent
Péter Szilágyi
@karalabe
I fixed one on develop + 1 pending PR
but there's one left that requires a bit of work, and we want to push out the current fixes
since network-wise they are needed to stabilize connections
ellis2323
@ellis2323
ok
Péter Szilágyi
@karalabe
as long as the database is smaller than your available memory, it doesn't matter much
bottlenecks start to hurt when you run out of ram to cache
ellis2323
@ellis2323
the 2657 seems great
Péter Szilágyi
@karalabe
(OS cache that is)
yup, that PR is just a dump of 3 of my other PRs pending merge
ellis2323
@ellis2323
i have run 3 full and 6 fast syncs and i have similar results... that's good news. the first 1.5 was not as stable
Péter Szilágyi
@karalabe
No, my first attempt at QoS tuning did great for fast sync, but utterly screwed up full sync
since it assumed everyone was a slow peer and relied on network packet RTTs to find out who is a good peer
however since full sync does relatively little network IO, there's not much to measure
so nobody escaped the "slow peer" status :D
and I couldn't differentiate between truly slow peers and not properly measured peers
hence slow ones stalled the sync
ellis2323
@ellis2323
ok
in my case the bandwidth for fast peaks at 6 MB/s and < 1 MB/s for full
i often see < 0.4 MB/s for full :)