Martin Holst Swende
@holiman
It's just a temporary one for now; I figure we might want to reset it and try out the transition at a couple of different points. This time it was early in epoch 0; next time we might want to do it at block 30K. Or 29999.
Peter (bitfly)
@peterbitfly
would be nice to have a dedicated channel and not spam this one in case of any issues
Paweł Bylica
@chfast
@holiman @ppratscher I also want to try to add a mining node without progpow (will fork). And ethminer when ready. Might not have time to fix ethminer today.
Martin Holst Swende
@holiman
If it gets too much noise, we can move to the progpow channel I just created at the geth discord server: https://discord.gg/v7eg4KK
This docker image contains progpow-enabled geth: https://hub.docker.com/r/holiman/geth-experimental/
Hah, the poor CPU miner took 25 minutes to mine the first progpow block, then 3 minutes, 5 minutes, 4 minutes, 8 minutes... it will be interesting to see the effect when a proper miner is used
Martin Holst Swende
@holiman
It reports about 500 H/s :) (only one thread used)
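(As a rough sanity check on those block times: the expected time for a solo miner to find a block is difficulty divided by hashrate. A minimal sketch, assuming the testnet sat near geth's minimum difficulty of 131072, which the chat does not state:)

```go
package main

import "fmt"

func main() {
	// Expected block time for a solo miner is difficulty / hashrate.
	// 131072 is geth's minimum difficulty; assuming the Gangnam testnet
	// sat near it at this point is a guess, not stated in the chat.
	difficulty := 131072.0
	hashrate := 500.0 // H/s, single CPU thread as reported above
	secs := difficulty / hashrate
	fmt.Printf("expected block time: %.0f s (~%.1f min)\n", secs, secs/60)
}
```

(That gives roughly 4-minute blocks at ~500 H/s, consistent with the 3-8 minute times observed after the first block.)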
Paweł Bylica
@chfast
I have not benchmarked full DAG mining yet...
Peter Salanki
@salanki
As soon as ethminer-progpow is ready for testing I'll put some GPU miners on it!
5chdn
@5chdn
I'll put some ASICs on it ;-)
Peter Salanki
@salanki
:D
Paweł Bylica
@chfast
@salanki you can try building the reference implementation
Noel Maersk
@veox
@gcolvin Yes, it would seem there's an exponential component to it, if state/tx/receipt storage is implemented as a trie.
The upside is that it's often not one single gigantic trie but, say, one trie per contract account. Yet the upside is dwarfed by the way GasToken2 and a couple of exchanges use the account address space... :(
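(To put a number on the per-contract-trie upside: lookup depth in a hexary Merkle Patricia trie grows roughly with log base 16 of the key count, so many small tries stay shallow where one gigantic trie does not. A back-of-envelope sketch with illustrative, not measured, key counts:)

```go
package main

import (
	"fmt"
	"math"
)

// approxDepth returns the idealized depth of a hexary (16-ary) trie
// holding n random keys, ignoring extension-node path compression.
func approxDepth(n float64) float64 {
	return math.Log(n) / math.Log(16)
}

func main() {
	// One global trie with 100M entries vs. per-contract tries with
	// 10K entries each; both counts are illustrative placeholders.
	fmt.Printf("global trie, 1e8 keys:   ~%.1f levels per lookup\n", approxDepth(1e8))
	fmt.Printf("contract trie, 1e4 keys: ~%.1f levels per lookup\n", approxDepth(1e4))
}
```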
Nick Savers
@nicksavers
I found a visualization of the differences between Ethash and ProgPoW here. Are these still accurate?
Martin Holst Swende
@holiman
If anyone is wondering, btw, the reason for the gigantic drop in hashrate is that the CPU miner doesn't cache the cdag (which the light verifier does). I didn't bother optimizing the CPU miner, but concentrated more on getting the verification somewhat fast. Once that's fixed, the progpow hashrate for the CPU miner should increase quite a bit
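(For the curious, caching the cdag amounts to memoizing it per epoch, since it only changes at epoch boundaries. A minimal sketch; generateCDag below is a placeholder, not the real derivation:)

```go
package main

import "sync"

// cdagCache memoizes the ProgPoW cdag per epoch, mirroring what the
// light verifier already does and what this CPU miner did not.
type cdagCache struct {
	mu    sync.Mutex
	epoch uint64
	cdag  []uint32
}

// generateCDag stands in for the real derivation from the epoch's
// light cache; it is NOT the actual algorithm.
func generateCDag(epoch uint64) []uint32 {
	return make([]uint32, 16*1024)
}

func (c *cdagCache) get(epoch uint64) []uint32 {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.cdag == nil || c.epoch != epoch {
		c.epoch = epoch
		c.cdag = generateCDag(epoch) // regenerate only on epoch change
	}
	return c.cdag
}

func main() {
	var cache cdagCache
	_ = cache.get(0) // generated once
	_ = cache.get(0) // served from cache on every subsequent hash
}
```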
Peter (bitfly)
@peterbitfly
We have launched a stratum server for the Gangnam ProgPoW testnet at progpow.ethermine.org:4444. The server has a very low difficulty of 10000, so you should be able to test miners quite quickly; we will increase the difficulty once GPU miners are available. It is not yet connected to any statistics backend, but share submission, validation, and block generation should already work. The stratum node is currently connected to a Parity-Ethereum/v2.3.0-nightly-f589356-20181211 instance
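(A minimal sketch of probing that endpoint, assuming the ethproxy-style stratum dialect that ethermine pools commonly speak; the exact dialect is an assumption and the login address is a placeholder:)

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"net"
)

type request struct {
	ID     int           `json:"id"`
	Method string        `json:"method"`
	Params []interface{} `json:"params"`
}

func main() {
	conn, err := net.Dial("tcp", "progpow.ethermine.org:4444")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	enc := json.NewEncoder(conn) // Encode writes one JSON object per line
	enc.Encode(request{ID: 1, Method: "eth_submitLogin",
		Params: []interface{}{"0x0000000000000000000000000000000000000000"}})
	enc.Encode(request{ID: 2, Method: "eth_getWork",
		Params: []interface{}{}})

	// Print the pool's replies: login ack, then the current work
	// package (header hash, seed hash, boundary).
	sc := bufio.NewScanner(conn)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
}
```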
Peter Salanki
@salanki
Nice! Add the node to ethstats at http://boot.gangnam.ethdevops.io!
Peter (bitfly)
@peterbitfly
Sure, please pm me the ethstats secret
Peter (bitfly)
@peterbitfly
both added now
Greg Colvin
@gcolvin
@veox Thanks. To summarize my understanding so far: the whole shebang is too complicated for now, so I'm looking just at the storage database, sload and sstore. We charge upfront per sstore, we charge per sload, and we control those rates. The question is whether those ongoing charges can support the storage. We need estimates for
  • how fast storage gets cheaper (doubling time?)
  • how fast our database grows (doubling time?)
  • the proportion of sloads to the size of our database (constant? log?)
  • the time for sloads and sstores per size of our database (log?)
The effect of these estimates on the viability of "infinite" storage is:
  • if storage gets cheaper enough faster than our database grows, we don't have a problem
  • if they are close to the same, the income stream from sloads and sstores becomes critical
  • if our database grows enough faster than storage gets cheaper, we are hosed
  • if access time grows worse than the log of our database size, we are hosed
Am I getting close? Do we have estimates?
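(A sketch of the comparison in the last four bullets: if the database doubles every T_db months while storage cost halves every T_cost months, the dollar cost of holding the state scales as 2^(t/T_db - t/T_cost), so the sign of 1/T_db - 1/T_cost decides which case we are in. T_db and T_cost are this sketch's notation, and the numbers below are placeholders, not estimates:)

```go
package main

import (
	"fmt"
	"math"
)

// relativeCost returns the cost of storing the database at month t,
// normalized to 1.0 at t=0, given the database-size doubling time and
// the storage-price halving time (both in months).
func relativeCost(t, dbDoubling, costHalving float64) float64 {
	return math.Pow(2, t/dbDoubling-t/costHalving)
}

func main() {
	// Placeholder figures: database doubles every 6 months, storage
	// price halves every 24 months.
	for _, t := range []float64{0, 12, 24, 36} {
		fmt.Printf("month %3.0f: relative cost %6.1fx\n",
			t, relativeCost(t, 6, 24))
	}
	// A growing relative cost is the "we are hosed" case; a value
	// falling toward 0 means storage cheapens fast enough.
}
```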
Noel Maersk
@veox
@gcolvin That's pretty exhaustive; one can say "we don't have a dire problem" if sticking to existing use cases - e.g. fully-verifying nodes on modern laptops are a necessity, but fully-verifying nodes on tablets are not up for consideration.
I haven't seen estimates myself.
Fredrik Harrysson
@folsen
FWIW I can't run an ethereum node on my laptop
Noel Maersk
@veox
The "trie saturation" I've mentioned before I first became aware of after reading @AlexeyAkhunov's https://medium.com/@akhounov/more-on-ethereum-accounts-trie-structure-8383a9fd4c93 (where one approach to solving it is described).
Greg Colvin
@gcolvin
The doubling time on disk drives we discussed - I'd guess more like 2 years in the real world? We must have some guess at the doubling time of our storage database?
Fredrik Harrysson
@folsen
IMO we're past that point already
Greg Colvin
@gcolvin
Past which point?
Noel Maersk
@veox
@folsen I don't try these days either - but my laptop is a 5-year-old model (I think).
Fredrik Harrysson
@folsen
Being able to run a node on a laptop
Danny Ryan
@djrtwo

> FWIW I can't run an ethereum node on my laptop

I'm currently fast-syncing a geth node that I started 24 hours ago. At block 6M. Laptop is 4 years old but was premium when I bought it. (just a data point)

Fredrik Harrysson
@folsen
my connection seems a bit weird, I sent those two messages right next to each other ^^
Danny Ryan
@djrtwo
[gitter sucks]
Noel Maersk
@veox
Did Microsoft buy them out, too?..
Greg Colvin
@gcolvin
@veox We must have some guess at the doubling time of our storage database?
Noel Maersk
@veox
@gcolvin If we assume "full disk use" follows the same trend as "just storage", then it's on the order of months. :(
https://etherscan.io/chart2/chaindatasizefast - very anecdotal! - shows roughly a 4x increase within the last year.
I've had to double a particular remote VM node's storage allowance twice within the last year, so my anecdotal evidence shows the same (also running geth).
However, that assumption of mine is wrong: a geth "fast" sync has just the recent states, but all of the blocks/receipts AFAIK.
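(For reference, a 4x increase over 12 months corresponds to a doubling time of about 6 months, which is where "on the order of months" comes from:)

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	growth := 4.0  // observed chaindata growth factor over the period
	months := 12.0 // length of the observation period
	// The doubling time T satisfies 2^(months/T) = growth.
	T := months * math.Ln2 / math.Log(growth)
	fmt.Printf("doubling time: %.1f months\n", T) // prints 6.0 months
}
```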
Noel Maersk
@veox
Also, this increase happened in the last year, which saw unprecedented demand (in terms of numbers of users), including some of the most egregious contracts (GasToken2, LivePeer's "shoot everybody in the feet with a machine gun" MerkleMine...), as well as UTXO-style exchange deposit addresses...
Greg Colvin
@gcolvin
So if we keep going at this rate we are hosed. (Much) higher prices might put a damper on the growth rate?
ledgerwatch
@AlexeyAkhunov
I had an idea, but I have not gotten around to producing it yet. It is called "a short history of Ethereum state". It would be a static HTML page hosted somewhere which, for each month since the launch of Ethereum, would list how much the state has grown and what was the main cause of the growth. And it would be updated not just once a month, but also as we refine our classification of accounts and contracts. For example, at the current level of refinement, it would list which exchanges created how many new token sweeper contracts, how many GasToken2 contracts have been minted, how many new ERC20 tokens launched, how many new holders, etc. This would hopefully show that, unlike in Bitcoin, Ethereum state growth is more nuanced: it is not just the number of UTXOs, it is new classes of contracts appearing (like DEXs, NFTs) and changing the growth rate
Fredrik Harrysson
@folsen
Chain growth and state growth should be discussed separately really, since they can be dealt with separately. The problem with state is what @karalabe has said many times: it affects IOPS and syncing. Both fast-sync and warp-sync are already broken for the current state size, and we need to invent something new just to deal with what we already have; if we don't limit it, we might have to invent something new again next year. IOPS is the main limiting factor: magnetic drives went out the window this past year and it's only possible to run ETH on an SSD today; soon enough the IOPS will require an NVMe drive... and then what? This can be mitigated to some degree by better databases, better caching, and trieless data structures explored in turbo-geth and other places. Major engineering efforts with no really promising solution in sight, engineering efforts that no one is capable of undertaking because we're just trying to keep the thing alive and spending all our resources on that.
Greg Colvin
@gcolvin
We need more resources.
ledgerwatch
@AlexeyAkhunov
You mean cognitive resources?
Greg Colvin
@gcolvin
More people, more money. Whatever it takes to support major engineering efforts.
Or maybe you will wake up tomorrow morning with the answer clear as day ;)
But if our need for storage is growing at twice the rate that storage is getting cheaper, we are hosed.