CREATE of the same version, and a code prefix is not required) should also work well for EIP-615. Variant 1 can still have a code prefix, but compared with variant 2 (
CREATE uses the code prefix to determine the version), it just makes the prefix optional. So for EIP-615 we can still use a code prefix if we want under variant 1. There are situations (like a simpler EVM upgrade) where we don't necessarily want to change or add a code prefix, but may still want to use account versioning. This may be an advantage of variant 1 over variant 2.
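To make the difference between the two variants concrete, here is a minimal sketch of how a client might resolve the VM version of an account's code under each. All names here (`Account`, `VERSION_PREFIXES`, the prefix bytes) are illustrative assumptions, not values from any spec:

```python
# Hypothetical prefix -> version mapping; the bytes are made up for illustration.
VERSION_PREFIXES = {b"\xef\x00": 2}

class Account:
    def __init__(self, code, version=0):
        self.code = code
        self.version = version  # recorded in state at CREATE time (variant 1)

def resolve_version_variant1(account):
    # Variant 1: the version lives in the account's state field; a code
    # prefix MAY still be present but is not required to resolve it.
    return account.version

def resolve_version_variant2(account):
    # Variant 2: the version is derived from a mandatory code prefix.
    for prefix, version in VERSION_PREFIXES.items():
        if account.code.startswith(prefix):
            return version
    return 0  # legacy code with no recognized prefix
```

Under variant 1 the prefix is purely optional metadata, which is why an upgrade that does not want to touch code prefixes at all can still use account versioning.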
CREATE specifies a version, and the VM’s validator replies whether the code is valid for that version
For this we probably need to define two new EVM opcodes (VCREATE and VCREATE2), while the legacy CREATE and CREATE2, as in variant 1, just use the current call frame's VM version.
I think it may be good to do this step by step. Have base account versioning first, and then we can deploy VCREATE and VCREATE2 separately.
Currently we almost certainly either just want the newest version, or want to use the current version of the VM context -- all of our current upgrades are improvements to the EVM. So with the base layer of account versioning, a contract creation transaction always creates the newest version, and
CREATE2 always uses the version of the VM context. I tried to do some editing on the spec to make what I mean clearer: http://eips.ethereum.org/EIPS/eip-1702
And in the future we can define VCREATE2 so that any particular version is deployable at any time.
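The version-selection rule described above can be sketched as a single dispatch function. This is illustrative only; the opcode names for versioned creation (VCREATE/VCREATE2) are proposed, not deployed, and `NEWEST_VERSION` is an assumed constant:

```python
NEWEST_VERSION = 1  # assumed current head version for illustration

def version_for_creation(opcode, frame_version, explicit_version=None):
    if opcode == "CREATE_TX":
        # A top-level contract creation transaction always deploys
        # the newest version.
        return NEWEST_VERSION
    if opcode in ("CREATE", "CREATE2"):
        # The legacy opcodes inherit the current call frame's version.
        return frame_version
    if opcode in ("VCREATE", "VCREATE2"):
        # The proposed versioned opcodes take an explicit version.
        return explicit_version
    raise ValueError("unknown creation opcode: " + opcode)
```

This shows why the base layer can ship first: the first two branches need no new opcodes, and the third branch can be added later without changing them.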
CREATE opcode internally, then the owner can deploy an upgrade (which will be of the legacy version) and use that upgrade to turn the contract into the proxy contract pattern. From that point on, the contract can upgrade its VM version. The only potential drawback is that it will cost slightly more gas (another
DELEGATECALL) and use one more level of call depth, because we have turned what was previously a single indirection into a double indirection.
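A toy model (not EVM code) of the extra indirection noted above: calls that previously went user to contract now go user to proxy to implementation via DELEGATECALL, adding exactly one call frame per invocation.

```python
def call_chain(direct):
    # Frames used before the proxy upgrade path vs. after it.
    if direct:
        return ["user", "contract"]
    return ["user", "proxy", "implementation"]  # DELEGATECALL adds one frame

# The proxy pattern costs exactly one additional frame (and one
# DELEGATECALL's worth of gas) per call.
extra_frames = len(call_chain(False)) - len(call_chain(True))
```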
version of the execution frame is always the same as the account code's version. That is, when fetching code from an account in state, we always fetch the
version field of that account together with it, and "associate" the version with the code. This should be easier to implement (we just need to change code fetching to return a
(code, version) tuple), and it should provide more sensible results for
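The "fetch code together with its version" rule can be sketched as follows. The state layout and addresses are illustrative, not any client's actual data structures:

```python
# Toy state: each account carries its code and the version it was created with.
state = {
    "0xaa": {"code": b"\x60\x00", "version": 0},  # legacy account
    "0xbb": {"code": b"\x60\x01", "version": 1},  # upgraded account
}

def fetch_code(address):
    # Everywhere the client previously returned raw bytecode, it now
    # returns a (code, version) tuple.
    acct = state[address]
    return acct["code"], acct["version"]

def new_frame(address):
    # The execution frame's version comes from the account being executed,
    # never from the caller's frame.
    code, version = fetch_code(address)
    return {"code": code, "version": version}
```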
@/all ACD Call in less than 9 hours. Please review the agenda.
SSTORE, but there is the alternative option of scaling every non-reduced opcode upwards. That would mean the block gas limit would need to rise, and these would likely be affected:
One note though: if we introduce versioning, then reductions can coexist on the same network without any change in the block gas limit (or in dapps) being needed.
But an increase still can't really work with versioning, because you cannot just raise the block gas limit for a specific version :)
@sorpass issues with raising the cost of storage, SSTORE especially, were hinted at in https://eips.ethereum.org/EIPS/eip-2035:
The most problematic cases would be with the contracts that assume certain gas costs of SLOAD and SSTORE and hard-code them in their internal gas computations. For others, the cost of interacting with the contract storage will rise and may make some dApps based on such interactions, non-viable. This is a trade off to avoid even bigger adverse effect of the rent proportional to the contract storage size. However, more research is needed to more fully analyse the potentially impacted contracts.
- We surely do not need to wait for the entire State rent to be rolled out before increasing the block gas limit. Can we just make state expanding operations (SSTORE, CREATE, etc.) more expensive? Then, recommend the block size increase approximately in the same proportions? For example, make state expansion 3 times more expensive, and recommend raising block size limit by 3 times? Yes, but there are issues to overcome.
(please continue discussion there so I don't keep pasting between ACD)
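The trade-off in the quoted proposal (reprice state expansion 3x, raise the block gas limit 3x) can be checked with back-of-the-envelope arithmetic. All numbers below are illustrative, not current mainnet values:

```python
# Assumed illustrative costs/limits, not actual mainnet parameters.
OLD_SSTORE_COST = 20_000
OLD_GAS_LIMIT = 10_000_000

factor = 3
new_sstore_cost = OLD_SSTORE_COST * factor
new_gas_limit = OLD_GAS_LIMIT * factor

# State-expanding capacity per block is unchanged...
sstores_before = OLD_GAS_LIMIT // OLD_SSTORE_COST
sstores_after = new_gas_limit // new_sstore_cost

# ...while pure computation (whose opcodes keep their old prices)
# gets `factor` times more room per block.
compute_headroom_ratio = new_gas_limit / OLD_GAS_LIMIT
```

So the proposal bounds state growth at its current rate while buying extra throughput for non-state-expanding work, which is exactly the stated goal.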
And sorry I missed this request! Will do for future discussions!
@shamatar Thanks! The repricing is proposed because there doesn’t seem to be a good mapping between the actual time an instruction takes and the current gas costs. (While there should be a close mapping between the two.)
Let’s say it takes 2.3ms on evmone; it may still be priced as if it took 300-600ms, which means it is entirely infeasible to run at current prices.
Must also note that in some cases, evmone was very close to “native speed”. These speeds, however, depend heavily on the implementations, both the implementation in the EVM and the “native” code.
The main benefit of keeping these as EVM libraries is that the “trusted computing base” (i.e. all the code that is part of consensus) is not extended with precompiles. The more precompiles we add, the more code the clients need to carry.
@shamatar bn128mul in evmone takes 580 microseconds. On the same machine, native Rust is 309 microseconds (see benchmarks here). The EVM bn128mul implementation, called Weierstrudel, was optimized for gas cost, not speed (so it uses MULMOD, which is underpriced, rather than MUL, which is much faster than MULMOD). If we (or rather @zac-williamson, who wrote it) optimize Weierstrudel for speed and use Montgomery multiplication (i.e. use MUL instead of MULMOD), it will be much faster (we mentioned this on the previous allcoredevs call; Zac estimated it would be twice as fast). And there's an evmone PR to make evmone even faster. So that should bring evmone + Weierstrudel pretty close to native Rust/Parity speed.
The native Go/geth speed is 127 microseconds, but there are some EVM changes we could make to get even more speedups than the Montgomery multiplication one above (e.g. setting the modulus once in memory with a SETMOD opcode, and then using a MULMONT opcode that reads the modulus from memory instead of the stack; in the current implementations there's a lot of overhead from repeatedly DUP'ing the modulus to keep it on the stack).
Implementing these optimizations is a lot of work and takes time, but the speed of an optimized EVM engine plus optimized EVM bytecode can be quite fast (perhaps close to native, for some workloads, though we haven't proved this yet).
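For readers unfamiliar with the Montgomery multiplication trick mentioned above, here is a rough Python sketch of the textbook REDC algorithm. This is generic illustration code, not Weierstrudel's or any client's actual implementation; the EVM win comes from replacing MULMOD-heavy inner loops with plain multiplications and shifts (which is also what a SETMOD/MULMONT opcode pair would accelerate):

```python
def montgomery_setup(n, bits=256):
    """Precompute R = 2^bits and n' = -n^-1 mod R (requires n odd, n < R)."""
    R = 1 << bits
    n_inv = pow(-n, -1, R)  # modular inverse via built-in pow (Python 3.8+)
    return R, n_inv

def redc(T, n, R, n_inv, bits):
    """Montgomery reduction: returns T * R^-1 mod n, for 0 <= T < R*n."""
    m = ((T & (R - 1)) * n_inv) & (R - 1)  # m = (T mod R) * n' mod R
    t = (T + m * n) >> bits                # exact division by R (no MULMOD)
    return t - n if t >= n else t

def mulmod_montgomery(a, b, n, bits=256):
    """Compute a*b mod n; the conversion cost into/out of Montgomery form
    is amortized when many multiplications share the same modulus."""
    R, n_inv = montgomery_setup(n, bits)
    aR = (a * R) % n                        # into Montgomery form
    bR = (b * R) % n
    abR = redc(aR * bR, n, R, n_inv, bits)  # = a*b*R mod n
    return redc(abR, n, R, n_inv, bits)     # back out of Montgomery form
```

In an elliptic-curve routine like bn128mul the setup and form conversions happen once, and every field multiplication after that avoids the expensive modular division entirely.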