Hudson Jameson
@Souptacular
Lane Rettig
@lrettig
@/all all core devs call in one hour!
Hudson Jameson
@Souptacular
Let's try to actually start at 14:00 UTC today and not 5 minutes after like usual as people come in late :)
Hudson Jameson
@Souptacular
eth2.0 calls organized here: https://github.com/ethresearch/eth2.0-pm
Alex Beregszaszi
@axic
… and the trollbox exploded from consensus by hudson
Lane Rettig
@lrettig
preliminary notes here, please PR with additions or corrections! https://github.com/ethereum/pm/blob/master/All%20Core%20Devs%20Meetings/Meeting%2046.md
Ghost
@ghost~55c3ed250fc9f982beac84b3
I have been looking at the block propagation code in both Geth and Parity and saw that PoW-checked blocks are only propagated to a randomly selected group of peers (the square root of the number of peers that do not yet have the block), but hashes are sent to all peers. I understand that this measure constrains bandwidth use. But does anyone know/remember why the square root, and why we cannot just propagate to all peers? Another question: why is the square root taken not of the total number of peers, but of the number of peers that do not have the hash yet? Also, Parity has a lower bound on the number of peers to propagate blocks to (4), but Geth does not. This means that if Geth has 4 peers that do not have the block, it will send to only 2, but Parity will send to all 4 (because of the lower bound)
here are relevant code lines
Ghost
@ghost~55c3ed250fc9f982beac84b3
Given that Ethereum blocks are relatively small at the moment (30k or so), it might be too much of a constraint to limit the number of peers to propagate to: 4 or 5 in Parity, and 1-5 in Geth (given a max peer count of 25)
Oh, and another thing - Geth truncates the float after the square root, but Parity rounds it. That is another reason why Parity would generally propagate to more peers
I suggest that if we have to stick with the square root thing, we should take the square root of the TOTAL number of connected peers, and not of the number of peers not knowing the block. Also, introduce a lower bound in Geth, the way it is done in Parity
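The difference between the two selection rules described above can be sketched in Go. This is a simplification: the function names are mine, the real clients compute the fan-out inline, and the exact bounds are as described in the messages above (Geth floors the square root; Parity rounds it and enforces a minimum of 4):

```go
package main

import (
	"fmt"
	"math"
)

// fanoutGeth sketches Geth's rule: floor of the square root of the
// number of peers that do not yet have the block.
func fanoutGeth(peersWithoutBlock int) int {
	return int(math.Sqrt(float64(peersWithoutBlock)))
}

// fanoutParity sketches Parity's rule: rounded square root, with a
// lower bound of 4, clipped to the number of eligible peers.
func fanoutParity(peersWithoutBlock int) int {
	n := int(math.Round(math.Sqrt(float64(peersWithoutBlock))))
	if n < 4 {
		n = 4
	}
	if n > peersWithoutBlock {
		n = peersWithoutBlock
	}
	return n
}

func main() {
	for _, p := range []int{4, 7, 25} {
		fmt.Printf("peers=%d geth=%d parity=%d\n", p, fanoutGeth(p), fanoutParity(p))
	}
}
```

With 4 eligible peers this reproduces the discrepancy above: Geth sends to 2, Parity to all 4.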
Chase Wright
@MysticRyuujin
This is blowing up; for those who haven't seen it yet: reddit post
Ghost
@ghost~55c3ed250fc9f982beac84b3
That discussion on Reddit is way too emotional to actually be going anywhere useful. I assume this was sparked by the presentation from Linzhi (an ASIC manufacturer) at the ETC summit. They will be making EtHash chips no matter what, because ETC is going to stay on PoW. I have watched the presentation and there was a question about what specific improvement made such an efficiency gain possible. The answer was: very optimised memory bandwidth. And another important thing from the presentation - when you have CPU + memory + IO, you already have a computer that can do pretty much anything. So my guess is that ProgPoW, or any other modification to EtHash, will not prevent this type of ASIC from being as efficient as it is - it will only slow down their design process somewhat.
Nick Johnson
@Arachnid

The answer was: very optimised memory bandwidth.

I'm highly skeptical that that can lead to the sort of improvements they're claiming.

But does anyone know/remember why the square root, and why cannot we just propagate to all peers?

Propagating to all peers would lead to O(nm) messages being sent, where n is the number of nodes and m the number of peers per node - this devolves to O(n^2) in a fully connected network.

If we propagate to sqrt(m) peers, then in a fully connected network we send O(n * sqrt(n)) messages, which grows much slower.

Log would probably make more sense than sqrt here, but I don't think it makes a big difference.
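Nick's counting argument is easy to check numerically. A quick sketch (message counts per full propagation round in a fully connected network, where each node has m = n-1 peers; function names are illustrative):

```go
package main

import (
	"fmt"
	"math"
)

// messagesFlood counts messages sent in one propagation round if every
// one of n nodes forwards the block to all m of its peers: O(n*m).
func messagesFlood(n, m int) int {
	return n * m
}

// messagesSqrt counts messages if every node forwards to only sqrt(m)
// peers: O(n*sqrt(m)), i.e. O(n^1.5) when the network is fully connected.
func messagesSqrt(n, m int) int {
	return n * int(math.Sqrt(float64(m)))
}

func main() {
	for _, n := range []int{100, 10000} {
		m := n - 1 // fully connected
		fmt.Printf("n=%d flood=%d sqrt=%d\n", n, messagesFlood(n, m), messagesSqrt(n, m))
	}
}
```

At n = 10000 the flood variant sends roughly a hundred times more messages than the sqrt variant, which is the quadratic-versus-n^1.5 gap described above.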
Ghost
@ghost~55c3ed250fc9f982beac84b3
It looks like Transactions are currently flooded to all peers though: https://github.com/ethereum/go-ethereum/blob/7c71e936a716794709e7a980b7da9010c4d0a98c/eth/handler.go#L737
:)
5chdn
@5chdn

Good morning, if we want to hard-fork Ropsten in October, isn't it too late to wait until the next core dev call to pick a block number?

I'm proposing 4_230_000 for Ropsten, but happy to hear other opinions if we want to relax the timeline even further towards the end of October.

paritytech/parity-ethereum#9562

Péter Szilágyi
@karalabe
@AlexeyAkhunov the reason transactions are propagated to all peers is to avoid gaps
if we start propagating transactions to sqrt peers, we'll fill up every txpool with nonce gaps, causing transactions to be dumped
I agree with enforcing a minimum of 4 nodes to propagate the entire block to btw
that could help with smaller networks
I missed the last 2 core dev calls due to being on holiday, so I don't want to have too much say, but are the tests completed yet?
AFAIK most clients have the forks EIPs implemented, but it would be really nice to have a full test suite before announcing the fork, even on ropsten
Lane Rettig
@lrettig
On Friday Dimitry said he needs a couple more months to finish the Constantinople tests completely, but we had consensus on moving forward with a testnet fork before then
Welcome back @karalabe !
Péter Szilágyi
@karalabe
Fair enough
Ghost
@ghost~55c3ed250fc9f982beac84b3
@karalabe Thanks for your comments! I will keep digging and thinking :)
Péter Szilágyi
@karalabe
@AlexeyAkhunov Feel free to open a PR to enforce block propagation to 4 peers :)
Martin Holst Swende
@holiman

So yeah, I think the general idea was that it doesn't matter much if we have consensus issues on Ropsten (we don't need to wake anybody up at 2 am). And with that reasoning, we figured we could fork it even without all tests being done.

Now, if this is not a correct sentiment, feel free to yell about it.

Ghost
@ghost~55c3ed250fc9f982beac84b3
@karalabe Great, I will do it (today most probably). I would also open another PR to prioritise block propagation over tx propagation, i.e. swap the order of these case statements: https://github.com/ethereum/go-ethereum/blob/cc21928e1246f860ede5160986ec3a95956fc8d4/eth/peer.go#L116
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[W Dimitry] are we doing this or not?
ethereum/tests#488
Péter Szilágyi
@karalabe
@AlexeyAkhunov select statements are theoretically random, if multiple channels are ready
but we can prioritize via 2 selects if it's important
Ghost
@ghost~55c3ed250fc9f982beac84b3
I am thinking of a situation where a "transaction storm" stalls block propagation for a while because queuedTxs never goes empty - could this happen?
but having 2 selects (first for blocks, then for transactions only if there are no blocks to propagate) sounds like a good idea
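The two-select idiom being discussed can be sketched as follows. This is a simplified, single-goroutine model, not Geth's actual broadcast loop; the channel names and the `drainPrioritized` helper are illustrative:

```go
package main

import "fmt"

// drainPrioritized empties two buffered queues, always preferring blocks:
// the first, non-blocking select checks the block channel alone, and a
// transaction is only taken when no block is pending.
func drainPrioritized(blocks, txs chan string) []string {
	var order []string
	for len(blocks) > 0 || len(txs) > 0 {
		// First select: blocks only (non-blocking).
		select {
		case b := <-blocks:
			order = append(order, "block:"+b)
			continue // re-check for more blocks before touching txs
		default:
		}
		// Second select: no block pending, so take a transaction.
		select {
		case tx := <-txs:
			order = append(order, "tx:"+tx)
		default:
		}
	}
	return order
}

func main() {
	blocks := make(chan string, 2)
	txs := make(chan string, 2)
	txs <- "t1" // the transaction arrives first...
	blocks <- "b1"
	fmt.Println(drainPrioritized(blocks, txs)) // ...but the block is sent first
}
```

In a single flat select, as noted above, the runtime picks randomly among ready channels, so a steady stream of transactions could starve block propagation; the two-select form makes the block channel win whenever it is ready.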
Jason Carver
@carver

Propagating to all peers would lead to O(nm) messages being sent, where n is the number of nodes and m the number of peers per node - this devolves to O(n^2) in a fully connected network.
If we propagate to sqrt(m) peers, then in a fully connected network we send O(n * sqrt(n)) messages, which grows much slower.

Is a fully-connected network a desired/desirable goal? It surprised me that it was a consideration.

Andrei Maiboroda
@gumb0
As we approach the Ropsten fork, it would be nice to see more clients on Ropsten's ethstats, see http://ropsten.ethstats.ethdevops.io/
I think @nonsense can provide details on how to connect to it
Anton Evangelatov
@nonsense
@gumb0 yes, I can do that. If someone wants to connect, please PM me.
Noel Maersk
@veox
@gumb0 Mine are listed on https://ropsten-stats.parity.io/. :/
Alex Beregszaszi
@axic
What is the current version of the CREATE2 instruction? EIP-1014 lists a couple of different versions.
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[W Dimitry] Martin ran the create2 tests on geth, so it looks like geth implemented the one I sent you, axic