Péter Szilágyi
@karalabe
@AlexeyAkhunov Feel free to open a PR to enforce block propagation to 4 peers :)
Martin Holst Swende
@holiman

So yeah, I think the general idea was that it doesn't matter much if we have consensus issues on Ropsten (we don't need to wake anybody up at 2 am). And with that reasoning, we figured we could fork it even without all tests being done.

Now, if this is not a correct sentiment, feel free to yell about it.

ledgerwatch
@AlexeyAkhunov
@karalabe Great, I will do it (today most probably). I would also open another PR to prioritise block propagation over tx propagation, i.e. swap the order of these case statements: https://github.com/ethereum/go-ethereum/blob/cc21928e1246f860ede5160986ec3a95956fc8d4/eth/peer.go#L116
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[W Dimitry] are we doing this or not?
ethereum/tests#488
Péter Szilágyi
@karalabe
@AlexeyAkhunov select statements are theoretically random, if multiple channels are ready
but we can prioritize via 2 selects if it's important
ledgerwatch
@AlexeyAkhunov
I am thinking of a situation where we have a "transaction storm" that stops block propagation for a while, because queuedTxs never goes empty. Could this happen?
but having 2 selects (first for blocks, and then a second for transactions, only if there are no blocks to propagate) sounds like a good idea
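A minimal sketch of the two-select priority pattern being proposed: the first, non-blocking select drains any queued block before the second select is allowed to pick up a transaction, so a "transaction storm" cannot starve block propagation. Channel and variable names are illustrative, not the actual ones in eth/peer.go, and the demo returns when both queues are empty where the real loop would block.

```go
package main

import "fmt"

// drainPriority sends queued blocks before queued transactions.
func drainPriority(queuedProps, queuedTxs chan string) []string {
	var sent []string
	for {
		// First select: if a block is queued, send it and loop again,
		// without ever looking at the transaction queue.
		select {
		case block := <-queuedProps:
			sent = append(sent, "block:"+block)
			continue
		default:
		}
		// Second select: a transaction may be sent only now, since no
		// block was pending; a newly arrived block would still win.
		select {
		case block := <-queuedProps:
			sent = append(sent, "block:"+block)
		case tx := <-queuedTxs:
			sent = append(sent, "tx:"+tx)
		default:
			return sent // both queues drained (demo only; real loop blocks)
		}
	}
}

func main() {
	queuedProps := make(chan string, 2)
	queuedTxs := make(chan string, 2)
	queuedProps <- "block-1"
	queuedTxs <- "tx-1"
	queuedProps <- "block-2"
	// Both blocks go out before the transaction, even though tx-1 was
	// queued earlier.
	fmt.Println(drainPriority(queuedProps, queuedTxs))
}
```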
Jason Carver
@carver

Propagating to all peers would lead to O(nm) messages being sent, where n is the number of nodes and m the number of peers per node - this devolves to O(n^2) in a fully connected network.
If we propagate to sqrt(m) peers, then in a fully connected network we send O(n * sqrt(n)) messages, which grows much slower.

Is a fully-connected network a desired/desirable goal? It surprised me that it was a consideration.
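A back-of-envelope check of the message counts above, for a fully connected network of n nodes where each node has m = n-1 peers: full fan-out costs n*m = O(n^2) messages per round, while a sqrt(m) fan-out costs n*sqrt(m) = O(n*sqrt(n)). The concrete numbers are illustrative only.

```go
package main

import (
	"fmt"
	"math"
)

// fullFanout: every node relays to all m = n-1 peers.
func fullFanout(n int) int { return n * (n - 1) }

// sqrtFanout: every node relays to sqrt(m) peers (truncated to an int).
func sqrtFanout(n int) int { return n * int(math.Sqrt(float64(n-1))) }

func main() {
	for _, n := range []int{100, 10000} {
		fmt.Printf("n=%d: full=%d sqrt=%d\n", n, fullFanout(n), sqrtFanout(n))
	}
}
```

At n = 10000 the gap is already two orders of magnitude, which is the point of the sqrt fan-out.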

Andrei Maiboroda
@gumb0
As we approach the Ropsten fork, it would be nice to see more clients on Ropsten's ethstats, see http://ropsten.ethstats.ethdevops.io/
I think @nonsense can provide details on how to connect to it
Anton Evangelatov
@nonsense
@gumb0 yes, I can do that. If someone wants to connect, please PM me.
Noel Maersk
@veox
@gumb0 Mine are listed on https://ropsten-stats.parity.io/. :/
Alex Beregszaszi
@axic
What is the current version of the CREATE2 instruction? EIP-1014 lists a couple of different versions.
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[W Dimitry] Martin ran the create2 tests on geth. So it looks like geth implemented the one I sent you, axic
Martin Holst Swende
@holiman
I've made a PR against the EIP
To make it align with what we've agreed upon
ledgerwatch
@AlexeyAkhunov
@karalabe I have done the PR to put a lower bound of 4 on the block propagation peers: ethereum/go-ethereum#17725
I will do another one a bit later to prioritise blocks over transactions
Jacek Sieka
@arnetheduck
morning folks - when's the next PM call? ethereum/pm#56 is still open (there's also plenty of recent commentary on that issue) @lrettig @Souptacular
Lane Rettig
@lrettig
Hey @arnetheduck! Thanks for the reminder. Just opened a new one. Should be a week from today.
Jacek Sieka
@arnetheduck
@lrettig sweet! I got as far as putting my headphones on charge to be prepped for a meeting today before someone pointed out the date for me :)
Lane Rettig
@lrettig
Haha sorry about that. I've done that a few times
ledgerwatch
@AlexeyAkhunov
Hi! I have reviewed the ProgPow pull request and also played around with the code a bit. I concluded that the reason it is so slow is simply that it accesses the dataset around 100 times more often than Hashimoto. Not sure if that is intentional. So I would send it back to the original authors to think about
See my comments in the PR
ledgerwatch
@AlexeyAkhunov
I have just realised (perhaps it has been discussed before though) that the current spec of the CREATE2 opcode allows the original creator of a contract to revive it after it has been self-destructed. Obviously, with the same code. Self-destruct moves (or burns) the residual ETH, so it cannot be used for fund recovery. And the storage will also be cleared on such a revival, right? It is not necessarily bad, but a quirk worth knowing about.
Danny Ryan
@djrtwo
revive the previously existing code or create an entirely new contract at the same address with different bytecode?
ledgerwatch
@AlexeyAkhunov
revive the previously existing code at the same address. Essentially, it would make self-destruct not really a terminal state for a contract. It can be created and destroyed many times
I do not see a problem with that (so far)
Danny Ryan
@djrtwo
makes auditing code and dependencies a little trickier because it is not immediately obvious how the code will be or has been deployed
and seems to break the semantics of self-destruct
ledgerwatch
@AlexeyAkhunov
Don't think it breaks the semantics. From my understanding, the purpose and semantics of self-destruct is to remove a contract from the state (and therefore shrink the state).
as for auditing, I would say if I see a SELF-DESTRUCT possibility in the contract, then I know that at this point all bets are off :) so I don't think it poses a problem IMO
Danny Ryan
@djrtwo
which changes your guarantees with SELF-DESTRUCT, which imo changes the semantics. regardless, thanks for bringing this up. Definitely worth discussing more
Martin Holst Swende
@holiman
It's been discussed before, back when EIP 86/208 was discussed. A brand new, different contract can take its place. So it can be used to make truly upgradeable contracts, for better or worse. The trick to do so is to have the initcode load the runtime code from an oracle
All storage data is lost along the way, and yes, balance too
Therefore, it's fitting that extcodehash becomes active at the same time as this
Cc @AlexeyAkhunov
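The revival quirk discussed above follows from CREATE2's deterministic addressing: per EIP-1014, the address is keccak256(0xff ++ creator ++ salt ++ keccak256(init_code))[12:], a pure function of creator, salt, and init code, so the same creator can redeploy to the same address after a SELFDESTRUCT. A minimal sketch of that determinism, using SHA-256 as a stand-in for Keccak-256 purely to keep the example stdlib-only (the real scheme uses Keccak-256):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// create2Address mimics the shape of the EIP-1014 address formula:
// hash(0xff ++ creator ++ salt ++ hash(initCode)), keeping the last
// 20 bytes. SHA-256 stands in for Keccak-256 here.
func create2Address(creator [20]byte, salt [32]byte, initCode []byte) string {
	codeHash := sha256.Sum256(initCode)
	buf := append([]byte{0xff}, creator[:]...)
	buf = append(buf, salt[:]...)
	buf = append(buf, codeHash[:]...)
	sum := sha256.Sum256(buf)
	return hex.EncodeToString(sum[12:]) // last 20 bytes, as in EIP-1014
}

func main() {
	var creator [20]byte
	var salt [32]byte
	initCode := []byte("init code loading runtime code from an oracle")
	a1 := create2Address(creator, salt, initCode)
	// After a hypothetical SELFDESTRUCT, redeploying with the same
	// creator, salt, and init code yields the same address: revival.
	a2 := create2Address(creator, salt, initCode)
	fmt.Println(a1 == a2)
}
```

Note that only the init code is pinned by the address; if the init code fetches the runtime code from elsewhere, the revived contract's runtime code can differ, which is the upgradeability trick mentioned above.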
ledgerwatch
@AlexeyAkhunov
@holiman great, thank you for the explanation
Martin Holst Swende
@holiman
More discussion on this exists in the PR for EIP 86/208. I dug up the link about a week ago; it can be found in the PM discussion for the last core devs call
Paweł Bylica
@chfast

@holiman @karalabe I tried to compare progpow verification results with https://github.com/chfast/ethash.

Single Ethash verification with the best currently available compiler is 0.9 ms.

The progpow implementation was done in this fork: https://github.com/ifdefelse/proghash/commit/dc8e918aa5cf68da0b6dc049634bde596170c14c#diff-551b1d6ef147e694c203c6e9a6dacbf7R521 but does not seem to work. At least they did not update the unit tests.

The same benchmark shows an absurd 1.3 µs.

ledgerwatch
@AlexeyAkhunov
@chfast I can see that the implementation you referenced does 96 times fewer accesses to dataItems than the Go implementation I have reviewed. No wonder it takes 0.9 ms, which is almost exactly 80 ms / 96 :)
it does not have the data item access from the progPow function itself (which contributed 2/3 of all accesses and was not actually present in the "white paper"). Also, the access in progPowLoop has been taken out of the inner loop (which has 32 iterations). Therefore we have a factor of 3*32 = 96
now it does exactly the same number of accesses as Hashimoto (the existing ethash) = 128. And because they use a simpler keccak function, it is faster than Hashimoto. Again, no wonder here either
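A quick sanity check of the access-count arithmetic above, with names of my own choosing: removing the dataset access made from the progPow function itself (2/3 of all accesses, i.e. a factor of 3) and hoisting the progPowLoop access out of a 32-iteration inner loop multiply to a combined reduction factor of 3*32 = 96.

```go
package main

import "fmt"

// reductionFactor multiplies the two independent savings: the factor
// from dropping the per-call access (2/3 of the total, i.e. 3x) and the
// factor from hoisting an access out of the 32-iteration inner loop.
func reductionFactor(callFactor, innerIters int) int {
	return callFactor * innerIters
}

func main() {
	f := reductionFactor(3, 32)
	fmt.Println("reduction factor:", f)
	// The slow Go implementation took ~80 ms, so after the fix one would
	// expect roughly 80/96 ~= 0.83 ms per verification.
	fmt.Printf("expected time: %.2f ms\n", 80.0/float64(f))
}
```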
Paweł Bylica
@chfast
Thanks for checking it out. I haven't had time yet to compare this with the EIP. The progpow verification here takes 0.0013 ms, which is suspiciously fast. The 0.9 ms is for Ethash verification (the Go time is 4 ms).
ledgerwatch
@AlexeyAkhunov
Ah, 0.9 ms is Ethash. I see
ledgerwatch
@AlexeyAkhunov
actually, ProgPow makes 64 accesses