SSTORE, but the alternative option is scaling every non-reduced opcode upwards. That would mean the block gas limit would need to rise, and these would likely be affected:
One note, though: if we introduce versioning, then reductions can coexist on the same network without any change to the block gas limit (or to dapps) needed.
But an increase still can't really work with versioning, because you cannot just raise the block gas limit for a specific version :)
@sorpass Issues with raising the cost of storage, SSTORE especially, were hinted at in https://eips.ethereum.org/EIPS/eip-2035:
> The most problematic cases would be with the contracts that assume certain gas costs of SLOAD and SSTORE and hard-code them in their internal gas computations. For others, the cost of interacting with the contract storage will rise and may make some dApps based on such interactions, non-viable. This is a trade off to avoid even bigger adverse effect of the rent proportional to the contract storage size. However, more research is needed to more fully analyse the potentially impacted contracts.
- We surely do not need to wait for the entire State rent to be rolled out before increasing the block gas limit. Can we just make state-expanding operations (SSTORE, CREATE, etc.) more expensive? Then, recommend the block gas limit increase in approximately the same proportion? For example, make state expansion 3 times more expensive, and recommend raising the block gas limit by 3 times? Yes, but there are issues to overcome.
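The arithmetic behind that proportional change can be sketched as follows. This is a toy model, not part of any proposal: the 20,000-gas SSTORE cost for creating a new slot is the current price, but the block gas limit used here is only illustrative.

```python
SSTORE_SET_COST = 20_000      # current gas to create a new storage slot
BLOCK_GAS_LIMIT = 8_000_000   # illustrative block gas limit

def max_new_slots(sstore_cost, block_limit):
    """Upper bound on freshly created storage slots in one block."""
    return block_limit // sstore_cost

# Today: at most 400 new slots per block.
before = max_new_slots(SSTORE_SET_COST, BLOCK_GAS_LIMIT)

# Make state expansion 3x more expensive AND raise the limit 3x:
# state growth per block is unchanged, while three times as much
# gas is available for pure computation.
after = max_new_slots(3 * SSTORE_SET_COST, 3 * BLOCK_GAS_LIMIT)

assert before == after == 400
```

The point of the sketch: scaling the state-expansion cost and the block gas limit by the same factor leaves the worst-case state growth per block constant, which is why the two changes are recommended together.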
(please continue discussion there so I don't keep pasting between ACD)
And sorry I missed this request! Will do for future discussions!
@shamatar Thanks! The repricing is proposed because there doesn't seem to be a good mapping between the actual time an instruction takes and the current gas costs. (While there should be a close mapping between the two.)
Let's say it takes 2.3ms on evmone; it may still be priced as if it would take 300-600ms, which means it is entirely unfeasible to do with current prices.
I must also note that in some cases, evmone was very close to “native speed”. These speeds, however, depend heavily on the implementations, both the implementation in the EVM and “the native code”.
The main benefit of keeping these as EVM libraries is that the “trusted computing base” (i.e. all the code which is part of consensus) is not extended with precompiles. The more precompiles we add, the more code the clients need to carry.
@shamatar bn128mul in evmone takes 580 microseconds; on the same machine, native Rust takes 309 microseconds (see the benchmarks here). The EVM bn128mul implementation, called Weierstrudel, was optimized for gas cost, not speed (so it uses MULMOD, which is underpriced, rather than MUL, which is much faster than MULMOD). If we (or rather @zac-williamson, who wrote it) optimize Weierstrudel for speed and use Montgomery multiplication (i.e. use MUL instead of MULMOD), it will be much faster (we mentioned this on the previous AllCoreDevs call; Zac estimated it would be twice as fast). And there's an evmone PR to make evmone even faster. So that should bring evmone + Weierstrudel pretty close to native Rust/Parity speed.
The native Go/geth speed is 127 microseconds, but there are some EVM changes we could make to get even more speedup than the Montgomery multiplication one above (e.g. setting the modulus once in memory with a SETMOD opcode, and then using a MULMONT opcode that reads the modulus from memory instead of the stack; in the current implementations there's a lot of overhead from repeatedly DUP'ing the modulus to keep it on the stack).
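For readers unfamiliar with the trick being referenced: Montgomery multiplication replaces the expensive per-multiplication modular reduction (the MULMOD pattern) with a reduction by a power of two, at the cost of converting operands into "Montgomery form" once. A minimal sketch in plain Python, not EVM bytecode; the tiny modulus is chosen only so the demo is easy to follow:

```python
def montgomery_setup(n, bits):
    """Precompute R = 2^bits and n' = -n^(-1) mod R for an odd modulus n."""
    r = 1 << bits
    n_prime = (-pow(n, -1, r)) % r   # modular inverse (Python 3.8+)
    return r, n_prime

def mont_mul(a, b, n, r, n_prime):
    """REDC(a*b): returns a*b*R^(-1) mod n, for a, b in Montgomery form."""
    t = a * b
    m = (t * n_prime) % r            # cheap: r is a power of two
    u = (t + m * n) // r             # exact division, again by a power of two
    return u - n if u >= n else u

# Tiny demo: compute 5*7 mod 97 entirely in Montgomery form.
n, bits = 97, 8
r, n_prime = montgomery_setup(n, bits)
a_m, b_m = (5 * r) % n, (7 * r) % n              # convert into Montgomery form
prod_m = mont_mul(a_m, b_m, n, r, n_prime)       # product, still in Montgomery form
assert mont_mul(prod_m, 1, n, r, n_prime) == 35  # convert back: 5*7 mod 97
```

The relevance to the thread: the reduction steps only ever divide or reduce by a power of two, so a bytecode implementation can use MUL plus shifts/masks instead of the underpriced-but-slow MULMOD.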
Implementing these optimizations is a lot of work and takes time, but the speed of an optimized EVM engine + optimized EVM bytecode can be quite fast (perhaps close to native, for some workloads, though we haven't proved this yet).
The wiki at https://github.com/ethereum/wiki/wiki is currently locked, presumably due to spam. An official substitute has not been designated (i.e. linked from https://github.com/ethereum/wiki/wiki). May I appeal for write access to the wiki?
A) I would like to correct some things; and
B) I volunteer to act as an editor for the time being.
So some decisions were made on the ACD Call #63, and I've tracked those in the wiki and the EIP dependency diagram as best I can. One of the higher impact decisions was to go ahead with Account Versioning 1702-Design-1. This allows multiple other EIPs to make progress based on this decision.
I was thinking about what is needed as a next step for EIP advocates and champions. There are some issues I've thought about:
TODO, including those where I have seen elsewhere that the author has a plan.
- 10 Probable, if implementation and tests are completed (trivial or polished EIPs)
- 8 Possible, with refinements based on feedback, implementation, and testing
- 10 Could happen, with some very clear advocacy, implementation, and testing
- 8 Will not happen, unless significant advocacy is seen
I was able to edit the wiki with the edit link on the page (rather than through GitHub).
@GuthL Thanks for the update. Basically what you have just done is super valuable. That is:
1) Keeping everyone informed of your plans, and
2) Making it clear that you're driving your EIP forward with concrete next steps, which I see you have already listed clearly in your EIP.
I've moved 2028 to Probable in the wiki, which is really part-guide, part-sentiment-probe and by no means binding.
If anyone else sees their EIP lower down on the priority lists or not getting the traction they want, speak up too!
If anyone with an EIP is interested in participating, we could collaborate and reuse the infrastructure.
sent a DM
One EIP that I think needs a good discussion is 615.
Three ideologies seem to arise:
I'm not saying that a decision should happen now, but I have seen this EIP brought up mostly in fragmented discussions.