Alex Beregszaszi
@axic
@AlexeyAkhunov is locking so expensive that transient storage is a requirement, or is it a bigger benefit for the proxy use cases you have listed?
Wei Tang
@sorpaas

> It's a slippery slope to force specific optimizations and caching strategies onto clients.

I actually cannot agree with this. The whole idea of having gas costs is to make them reflect the actual costs of average implementations. A lot of other EIPs wouldn't pass the test under your argument -- for example, 1108.

Martin Holst Swende
@holiman
Sure, I agree -- it's a subjective opinion about where to draw the line. I don't see how 1108 fits that same bill, though. For 1108, it's more about algorithmic complexity, whereas here it's about internal caching strategies. But I agree it's not a strong argument.
ledgerwatch
@AlexeyAkhunov
@axic I don't know if any of these changes (1283/1153) are requirements; to me they are nice-to-haves. But I am not developing large smart contract systems. Regarding re-entrancy locking using SSTORE -- it is bearable, but only if you use it sparingly. If it were something like 8 gas per lock/unlock operation, then I would assume it would be OK to use it almost everywhere. Regarding the DELEGATECALL proxies -- I have not spent a lot of time thinking about it, but I assume transient storage does create a new unique resource in this case.
ledgerwatch
@AlexeyAkhunov

My position is (and always has been since this change was proposed) that if the problem being solved (cross-frame communication for things like re-entrancy locks, reliable passing of error messages, and others) is to be solved, it is better to solve it in a more specific way (https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1153.md), with the huge added bonus of the gas cost being not 200 gas but 8 gas per operation.

> We have discussed this thoroughly in previous ACDs. You cannot justify this, because 1153 adds a whole new structure to the EVM, which affects a large portion of code, while 1283 only needs to consider gas costs.

Adding a whole new structure to the EVM (basically a mapping which does not need to be persisted or have any interaction with the trie), and two operations with a flat gas cost, is in my opinion actually a simpler change than making gas costs context-dependent.

All of this only makes sense if there is a real need for any of it.
Wei Tang
@sorpaas

Regarding the question put in the title:

> Whether to implement net gas metering directly in the EVM, or implement it once Alexey's state rent has been implemented

My observation from yesterday's state rent discussion is that we still have a long road ahead to implement state rent, if it happens at all. The EIPs proposed for Istanbul related to state rent are only basic primitives, and some of them already require account versioning to implement. I put 1283 into the proposed EIP list because over the last several months I have continued to see requests and interest from developers in having net gas metering implemented, so I think it would still be beneficial to implement 1283 and deal with state rent later.

One of the original arguments for bringing this up is that Alexey thinks those nice-to-haves might complicate the implementation of state rent. But I believe we can prevent this, and reduce the assumptions needed for state rent, by cleverly utilizing account versioning and EIPs that can be easily enabled/disabled.

Wei Tang
@sorpaas

> Adding a whole new structure to EVM (basically a mapping which does not need to be persisted or have any interactions with the trie), and two operations with a flat gas cost in my opinion is actually simpler change that making gas cost context dependent.

Actually, no. Speaking of the Parity implementation, 1153's data structure really complicates things, because in the EVM we usually have callframe-local data structures, which are easy to handle. We also have global structures -- those are complicated because they must react to callframe calls/reverts. 1153 requires a global structure that has its own unique callframe call/revert rules. I still think this is something we reached a conclusion on in past ACD calls -- our focus now should be on whether we want net gas metering at all, not on choosing between 1283/1153.

Also, practically speaking, 1283 is one of the simplest EIPs we can apply for Istanbul, because most existing clients have already implemented it, and it has been fairly thoroughly tested. 1706 only requires a few lines of code change for most implementations.

@AlexeyAkhunov Also I think a nice property of EIPs like 1283 is that you can enable / disable it multiple times on the same chain, without affecting deployed contracts. I think this might give us some flexibility in the future.
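To make the call/revert point concrete, here is a minimal Python sketch (not any client's actual code) of a transaction-global transient store in the style of EIP-1153: a mapping that never touches the trie, but that still needs its own journaling so writes made in a reverted call frame can be rolled back.

```python
# Sketch of EIP-1153-style transient storage with per-frame journaling.
# All names are illustrative; real clients integrate this with their
# existing state journal.

class TransientStorage:
    def __init__(self):
        self.data = {}         # (address, key) -> value; discarded at tx end
        self.journal = []      # (slot, previous value) entries, for undo
        self.checkpoints = []  # journal length at each frame entry

    def enter_frame(self):
        self.checkpoints.append(len(self.journal))

    def exit_frame(self, revert):
        mark = self.checkpoints.pop()
        if revert:
            # Undo this frame's writes, newest first.
            while len(self.journal) > mark:
                slot, prev = self.journal.pop()
                if prev is None:
                    del self.data[slot]
                else:
                    self.data[slot] = prev
        # On commit the entries stay in the journal, so an *outer*
        # frame's revert can still undo them later.

    def tstore(self, address, key, value):
        slot = (address, key)
        self.journal.append((slot, self.data.get(slot)))
        self.data[slot] = value

    def tload(self, address, key):
        return self.data.get((address, key), 0)
```

A frame that reverts drops only its own writes, while committed writes survive until an enclosing frame reverts; the callframe-local structures mentioned above need none of this bookkeeping, which is exactly the extra complexity under discussion.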
Martin Holst Swende
@holiman
@sorpaas regarding developers wanting this -- do you think that implementing "1283 with versioning" -- basically not retroactive -- would make them happy? It would mean it doesn't apply to old contracts, so all popular tokens and DEXes would have to transition in order to benefit.
Alex Beregszaszi
@axic
I’d argue that 1706 may make sense to implement in any case, as it may avoid future issues with repricings.
Wei Tang
@sorpaas

@holiman Right now I don't have strong opinions on whether to implement 1283 with versioning or without. That's something we need to decide soon.

Indeed, implementing it without versioning has the benefit of applying to existing contracts, and I do see that being useful -- existing contracts could immediately use structures like inter-frame locking.

Also, with any current account versioning scheme, to let existing contracts benefit we'd either need to apply net gas metering to the legacy VM (the same as first implementing 1283 without versioning and then adding versioning on top), or existing contracts would need to be upgradable on-chain.
Martin Holst Swende
@holiman

If we were to implement 1283, I'd prefer to at least not do versioning and 1283 in one go. I'd prefer either to postpone 1283 or to use 1283+1706.

And of those options, I'd prefer 1283+1706, since that's what I think most contract devs would be happy about. But I'm also not a smart-contract deployer.

Call today is postponed, right?
Wei Tang
@sorpaas
Yeah call is postponed.
Hayden Adams
@haydenadams
just wanna say as a DAPP dev working on a popular protocol I realllllyyyy want EIP 1283 to get through
massively reduced gas cost for mutex, massively reduced gas cost for multi-step token transfers necessary for synchronous cross dapp interactions, etccc
it's a big scaling improvement and I think it should be a high priority for Istanbul
will significantly impact future uniswap protocol designs
Philippe Castonguay
@PhABC

I second Hayden. EIP-1283 would have a significant impact on multiple applications, especially ones involving multiple transfers of assets (e.g. one address paying multiple other addresses, a user filling multiple 0x orders with the same payment currency, etc.).

Also, I'm sure most are aware of this by now, but almost all opcode-repricing EIPs can break existing contracts. Some contracts' functions might have tight restrictions on how much gas is used, such that an increase in the price of one opcode (e.g. SLOAD) could lead the function to always fail. Reducing prices can also lead to complications, as EIP-1283 showed. We need to tread carefully, but we shouldn't restrain ourselves when significant gains can be made.
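A toy illustration of that breakage mode (only the 2300-gas stipend forwarded by Solidity's transfer()/send() is real; the other numbers are made up): a callee running under a hard gas allowance can go from "fits" to "always fails" when a single opcode is repriced.

```python
# Hypothetical example: a callee doing a fixed amount of work under the
# 2300-gas stipend. The SLOAD counts and 'other_costs' are illustrative,
# not taken from any real contract.

STIPEND = 2300  # gas forwarded by transfer()/send()

def callee_fits(sload_cost, n_sloads=2, other_costs=800):
    """Does a callee doing n_sloads SLOADs plus ~other_costs of gas for
    logging/arithmetic fit in the stipend? (Illustrative costs only.)"""
    return n_sloads * sload_cost + other_costs <= STIPEND

assert callee_fits(sload_cost=200)      # fits at today's SLOAD price
assert not callee_fits(sload_cost=800)  # fails after a 4x SLOAD repricing
```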

Bryant Eisenbach
@fubuloubu
1283 would be very beneficial, but we aborted it because of unknown impact. Is the impact better understood now? Is there an alternative workaround we can leverage?
Philippe Castonguay
@PhABC
1706 would solve the issue raised when we first attempted to include 1283.
Hayden Adams
@haydenadams
On a high level: SSTOREs are expensive to discourage storage bloat. If a storage slot remains unchanged at the end of an operation, there is no real reason to pay such a high cost. The current status feels more like a bug than a design choice, and waiting a year for the hard fork after Istanbul feels like too long a wait for this to be fixed.
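For reference, the EIP-1283 charging rules being discussed can be sketched as follows (constants as in the EIP; this is a simplified sketch that ignores details like the overall refund cap):

```python
# EIP-1283 net gas metering: the charge for an SSTORE depends on the
# slot's original value (at transaction start), its current value, and
# the new value, so a write-then-restore nets out to cheap costs plus
# refunds instead of two full-price SSTOREs.

def sstore_gas(original, current, new):
    """Return (gas_charged, refund_delta) per EIP-1283."""
    if current == new:                     # no-op write
        return 200, 0
    if original == current:                # clean slot, first change
        if original == 0:
            return 20000, 0                # fresh: create a slot
        refund = 15000 if new == 0 else 0  # clearing an existing slot
        return 5000, refund
    # Dirty slot: already changed earlier in this transaction.
    refund = 0
    if original != 0:
        if current == 0:
            refund -= 15000                # un-clearing: take back refund
        if new == 0:
            refund += 15000                # clearing: grant refund
    if original == new:                    # restored to original value
        refund += 19800 if original == 0 else 4800
    return 200, refund
```

For example, a mutex on a zero slot: locking with sstore(0, 0, 1) charges 20000, unlocking with sstore(0, 1, 0) charges 200 and grants a 19800 refund, so the lock/unlock pair nets to roughly 400 gas (before refund caps) instead of 20000 + 5000.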
cdetrio
@cdetrio

> Regarding @cdetrio's reducing computational opcodes -- I think they're 'cheap enough'. We have one op which is only execution of a runloop without any operation being performed: JUMPDEST, which costs 1.

@holiman thanks, good idea to measure relative to JUMPDEST as the baseline opcode. My initial napkin estimates were using the data from the spreadsheet (I created an issue about the spreadsheet data here: holiman/vmstats#1), which has SLOAD as the most underpriced at 1060 gas/ms, JUMPDEST in the middle at 9008 gas/ms, and toward the other end MUL at 23287 gas/ms and SWAP{N} around 25000 gas/ms. Based on those numbers, a 4x increase for SLOAD (from 200 gas to 800 gas) would put SLOAD at ~4000 gas/ms, still half the cost of JUMPDEST. MUL could be cut in half to ~11k gas/ms (reduced from a gas cost of 5 to 3 or 2). If SWAP were cut to ~1/3, from a gas cost of 3 to 1, it would go from 25000 gas/ms to ~8000 gas/ms, in line with JUMPDEST.

The spreadsheet data might be a bad sample, but with those adjustments JUMPDEST and SWAP{N} would still be 2x more expensive than SLOAD. So to balance them, either SLOAD would need to be doubled again (from 800 gas to 1600 gas) to go from ~4000 gas/ms to ~8000 gas/ms, or JUMPDEST and SWAP{N} would need to be reduced to 0.5 gas. Of course these are all napkin estimates, and aligning to SLOAD by these gas/ms numbers might not be what we want (prices should be set by worst-case gas/ms, not average gas/ms).

But yeah, I'd agree with you that if we use these numbers, then 0.5 is "close enough" to 1 that it's not worth the complexity of fractional/particle gas costs. But these are numbers for the current geth-evm. The big gains on the table come if geth-evm borrows some of the optimizations in evmone (or perhaps revives that old vm_jit.go) to get an order-of-magnitude speedup; that would be enough to obviate the need for certain classes of precompiles (plus it'd be a huge improvement for stateless contracts, and so on).
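The scaling behind these napkin estimates is just linear in the gas price, since the ms/op of an opcode is fixed; a quick sketch reproducing the numbers above:

```python
# Napkin math from the spreadsheet numbers above: if an op takes a fixed
# amount of wall time, its gas/ms scales linearly with its gas price.

def repriced(gas_per_ms, old_price, new_price):
    return gas_per_ms * new_price / old_price

# SLOAD: 1060 gas/ms at 200 gas; a 4x bump to 800 gas gives 4240 gas/ms,
# still roughly half of JUMPDEST's 9008 gas/ms.
assert round(repriced(1060, 200, 800)) == 4240

# SWAP{N}: ~25000 gas/ms at 3 gas; cutting the price to 1 gas gives
# ~8333 gas/ms, in line with JUMPDEST.
assert round(repriced(25000, 3, 1)) == 8333
```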

Martin Holst Swende
@holiman
Thanks @cdetrio , answered on the ticket
danibotcode
@danibotcode
Can someone please help me understand what tools and software are needed for the Ethereum development stack? 😊
Andrew Redden
@androolloyd
Hi @danibotcode, this isn't quite the forum for those questions, but I'd be happy to get you sorted.
I'll send you a PM.
Martin Holst Swende
@holiman
Suggestion for an extension to statetests: https://gitter.im/ethereum/tests?at=5cfdff31bf4cbd167c5ea57d
Tomasz Kajetan Stańczak
@tkstanczak
+1
Greg Colvin
@gcolvin
@chfast @cdetrio @holiman @axic I've picked up some of my performance measurements where I left off in Cancun, comparing evmone to aleth and geth. They may also be relevant to opcode pricing. I left the results as an issue on the evmone repo, so we can track improvements to evmone there. ethereum/evmone#71
Alex Beregszaszi
@axic
Nice! Interesting that the mul64/mul128/mul256 are the same cost on evmone.
Tomasz Kajetan Stańczak
@tkstanczak
any reason for div64 being slower?
Paweł Bylica
@chfast
nice, thanks @gcolvin
@tkstanczak Depending on what div64 means -- I'm guessing it is 256-bit divided by 64-bit.
Greg Colvin
@gcolvin
The 64/128/256 versions start with constants that keep the values on the stack within that number of bits. Division is slow on most chips, @tkstanczak, and I don't know how @chfast implemented intx in a way that explains why mul* is flat in evmone, @axic.
Tomasz Kajetan Stańczak
@tkstanczak
Somehow, last time I looked at the numbers I read div64 as being slower than div256 -- it is not when I look now.
Greg Colvin
@gcolvin
If you look at ns/OP or ns/gas, div64 and mul64 are slower than div256 and mul256 for geth and aleth. For evmone they are close to the same. @chfast might be able to explain, or you could dive into the intx code.
And I just did some edits which will likely look like crap on the other side of the bridge to wherever. :)
Paweł Bylica
@chfast

@gcolvin @tkstanczak For mul, intx does not check for leading zeros and always does a full 256-bit multiplication. That's why the times are the same.

Division is complex, and you have to check for leading zeros of the divisor because division algorithms require it (indirectly, this comes from the fact that you cannot divide by 0).
The simple answer to why div64 is slower than div128 is that the complexity of the division depends on the difference in size between the dividend and the divisor. So in div64 we have 256 by 64, and in div128 we have 256 by 128.
In detail, a different algorithm is used in each case: in div64 we have "division by a single limb", in div128 division by 2 limbs, and in div256 Knuth division.
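A rough Python sketch (not intx's actual code) of that dispatch-by-divisor-size idea, including the simple limb-by-limb loop used in the single-limb (div64-style) case:

```python
# Dispatch division by the number of significant 64-bit limbs in the
# divisor, after stripping leading zero limbs. Names are illustrative.

LIMB = 2**64

def significant_limbs(x):
    n = 0
    while x:
        x >>= 64
        n += 1
    return n

def div_by_single_limb(u, d):
    """256-bit u divided by a one-limb d: walk limbs most-significant
    first, carrying the remainder down -- the 'division by a single
    limb' case."""
    limbs = [(u >> (64 * i)) % LIMB for i in range(3, -1, -1)]
    q, rem = 0, 0
    for limb in limbs:
        acc = rem * LIMB + limb
        q = q * LIMB + acc // d
        rem = acc % d
    return q, rem

def udivrem(u, d):
    k = significant_limbs(d)
    if k == 1:
        return div_by_single_limb(u, d)  # cheap div64-style case
    # k == 2 would use a specialized two-limb routine; the general case
    # uses Knuth division. Python's big integers stand in for both here.
    return divmod(u, d)
```

The larger the divisor relative to the dividend, the fewer quotient limbs there are to produce, which is why div128 (256 by 128) beats div64 (256 by 64) despite the bigger divisor.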

Tomasz Kajetan Stańczak
@tkstanczak
this is detailed, thank you
Daniel Ellison
@zigguratt
I can't moderate today because of a company meeting. Everyone's been pretty well-behaved lately, so it shouldn't be too much of an issue.
Danny Ryan
@djrtwo
no problem! thanks @zigguratt
call in 15 minutes
zoom: https://zoom.us/j/765545257
0age
@0age
Does anyone have a source on the total number of deployed contracts and/or the total number of unique EOAs with outbound txs?
Bryant Eisenbach
@fubuloubu
That's a big number. One of the members of the security community has an indexed database of non-trivial contracts I think. @holiman might know more.
Greg Colvin
@gcolvin