It's just a temporary one, for now. I figure we might want to reset it and try out the transition at a couple of different points. This time it was in the early epoch 0; next time we might want to do it at block 30K. Or 29999.
would be nice to have a dedicated channel & not spam this one in case of any issues
@holiman @ppratscher I also want to try to add a mining node without progpow (it will fork). And ethminer when ready. Might not have time to fix ethminer today.
Martin Holst Swende
If it gets too much noise, we can move to the progpow channel I just created at the geth discord server: https://discord.gg/v7eg4KK
Hah, the poor cpu-miner took 25 minutes to mine the first progpow block, then 3 minutes, 5 minutes, 4 minutes, 8 minutes ... will be interesting to see the effect when a proper miner is used
Martin Holst Swende
It reports about 500 H/s :) (only one thread used)
I have not benchmarked full DAG mining yet...
As soon as ethminer-progpow is ready for testing I'll put some GPU miners on it!
I'll put some ASICs on it ;-)
@salanki you can try building the reference implementation
@gcolvin Yes, it would seem there's an exponential component to it, if state/tx/receipt storage is implemented as a trie.
The upside is that it's often not one single gigantic trie, but, say, one trie per contract account. Yet that upside is dwarfed by the way GasToken2 and a couple of exchanges use the account address space... :(
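The per-trie lookup cost mentioned above grows with trie depth, which in turn grows logarithmically with key count. A rough back-of-the-envelope sketch (all numbers hypothetical, assuming uniformly distributed hashed keys in a hexary Patricia trie):

```python
import math

def expected_trie_depth(num_keys: int, branching: int = 16) -> float:
    """Approximate depth of a radix-`branching` trie holding `num_keys`
    uniformly distributed keys: depth grows like log_b(n)."""
    return math.log(num_keys, branching)

# Each 10x growth in state adds under one extra level of depth,
# but every extra level is one more random read per trie lookup.
for n in (10**6, 10**7, 10**8):
    print(f"{n:>11,} keys -> depth ~{expected_trie_depth(n):.2f}")
```

The takeaway is that access cost degrades slowly (logarithmically) per lookup, but multiplied across every sload in every block, each extra level of depth is felt directly in IOPS.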
I found a visualization of the differences between Ethash and ProgPoW here. Are these still accurate?
If anyone is wondering, btw, the reason for the gigantic drop in hashrate is that the CPU miner doesn't cache the cdag (which the light verifier does). I didn't bother optimizing the cpu miner, but concentrated more on getting the verification somewhat fast. Once that's fixed, the progpow hashrate for the cpu miner should increase quite a bit
We have launched a stratum server for the Gangnam ProgPoW testnet at progpow.ethermine.org:4444. The server has a very low difficulty of 10000, so you should be able to test miners quite quickly. We will increase the difficulty once GPU miners are available. It is not yet connected to any statistics backend, but share submission, validation & block generation should already work. The stratum node is currently connected to a Parity-Ethereum/v2.3.0-nightly-f589356-20181211 instance.
@veox Thanks. To summarize my understanding so far. The whole shebang is too complicated for now, so I’m looking just at the storage database—sload & sstore. We charge upfront per sstore, we charge per sload, and we control those rates. The question is whether those ongoing charges can support the storage. We need estimates for
how fast storage gets cheaper (doubling time?)
how fast our database grows (doubling time?)
the proportion of sloads to the size of our database (constant? log?)
the time for sloads and sstores per size of our database (log?)
The effect of these estimates on the viability of "infinite" storage is
if storage gets cheaper enough faster than our database grows we don’t have a problem
if they are close to the same the income stream from sloads and sstores becomes critical
if our database grows enough faster than storage gets cheaper we are hosed
if access time grows worse than the log of our database size we are hosed
Am I getting close? Do we have estimates?
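The doubling-time comparison above can be made concrete with a toy model (all parameters hypothetical): whether fixed per-op gas income keeps pace with storage cost depends only on the ratio of the database's doubling time to the storage price's halving time.

```python
def cost_pressure(db_doubling_years: float,
                  price_halving_years: float,
                  years: float) -> float:
    """Ratio of (bytes we must store) to (bytes one unit of income buys)
    after `years`. >1 means storage costs are outrunning fixed gas income;
    <1 means hardware cheapening is winning."""
    db_growth = 2 ** (years / db_doubling_years)        # database size
    cheapening = 2 ** (years / price_halving_years)     # $/byte falls
    return db_growth / cheapening

# Storage halves in price faster than the DB doubles -> pressure shrinks:
print(cost_pressure(db_doubling_years=2, price_halving_years=1.5, years=6))   # 0.5
# DB doubles faster than prices fall -> pressure explodes:
print(cost_pressure(db_doubling_years=1, price_halving_years=2, years=6))     # 8.0
```

This is just the "which exponent wins" argument in code form: a small gap between the two rates compounds into a large gap over a few years, which is why the doubling-time estimates matter so much.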
@gcolvin That's pretty exhaustive; one can say "we don't have a dire problem" if sticking to existing use cases - e.g. fully-verifying nodes on modern laptops are a necessity, but fully-verifying nodes on tablets are not up for consideration.
I've had to double a particular remote VM node's storage allowance twice within the last year - so my anecdotal evidence shows the same (also running geth).
However, that assumption of mine is wrong: a geth "fast" sync has just the recent states, but all of the blocks/receipts AFAIK.
Also, this increase happened in the last year, which saw unprecedented demand (in terms of numbers of users), including some of the most egregious contracts (GasToken2, LivePeer's "shoot everybody in the feet with a machine gun" MerkleMine...), as well as UTXO-style exchange deposit addresses...
So if we keep going at this rate we are hosed. (Much) higher prices might put a damper on the growth rate?
I had an idea, but I did not get around to producing it yet. It is called "short history of Ethereum state". It would be a static HTML page hosted somewhere, which, for each month since the launch of Ethereum, would list how much the state has grown, and what was the main cause of the growth. And update it not just once a month, but also as we refine our classification of accounts and contracts. For example, at the current level of refinement, it would list which exchanges created how many new token sweeper contracts, how many GasToken2 contracts have been minted, how many new ERC20 tokens launched, how many new holders, etc. This would hopefully show that, unlike in Bitcoin, Ethereum state growth is more nuanced. It is not just the number of UTXOs; it is new classes of contracts appearing (like DEXs, NFTs) and changing the growth rate.
Chain growth and state growth should be discussed separately really, since they can be dealt with separately. The problem with state is what @karalabe has said many times; it affects IOPS and syncing. Both fast-sync and warp-sync are already broken for the current state size, and we need to invent something new just to deal with what we already have; if we don't limit it we might have to invent something new again next year. IOPS is the main limiting factor: magnetic drives went out the window this past year and it's only possible to run ETH on an SSD today; soon enough the IOPS will require an NVMe drive... and then what? This can be mitigated to some degree by better databases, better caching, and trieless data structures explored in turbo-geth and other places. Major engineering efforts with no really promising solution in sight, engineering efforts that no-one is capable of undertaking because we're just trying to keep the thing alive and spending all our resources on that.
We need more resources.
You mean cognitive resources?
More people, more money. Whatever it takes to support major engineering efforts.
Or maybe you will wake up tomorrow morning with the answer clear as day ;)
But if our need for storage is growing twice as fast as the cost of storage is falling, we are hosed.