James Ray
@jamesray1

make it go all out on main chain overhead (eg. 30 tps)

How would that even be possible? The main chain does 7-15 tps.

AIUI you would have to make the main chain process more transactions e.g. by reducing the difficulty, although maybe with Casper we'll be able to process more tps on the main chain.

add a layer of indirection, so the only main chain overhead is a single committee that signs off on a root (or at least a list) of headers that have signatures
so basically, the main chain committee makes a cryptoeconomic threshold signature for each of the shard committees and signs the list of threshold sigs

Sounds like Dfinity. However, BLS is prone to 51% attacks, and there isn't any better signature scheme.
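To make the quoted mechanism concrete, here is a minimal Python sketch of the indirection idea: each shard committee produces a threshold signature over its own header, and the single main-chain committee only signs the root of the resulting list. The committees, validator identities, and the hash-based "signatures" below are all stand-ins made up for illustration; a real design would use BLS aggregation.

```python
# Toy sketch of the "single main-chain committee signs off on a root of shard
# headers" idea. SHA-256 hashes stand in for BLS threshold signatures, and the
# committees/validators below are made up for illustration.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def threshold_sig(committee: list, header: bytes) -> bytes:
    # Stand-in: "signature" = hash of the header plus the signing committee.
    # A real scheme would aggregate BLS signatures from >2/3 of the committee.
    return h(header, *committee)

def merkle_root(leaves: list) -> bytes:
    nodes = list(leaves)
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# Each shard committee signs its own shard header...
shard_headers = [h(b"shard-header", bytes([i])) for i in range(100)]
shard_committees = [[h(b"validator", bytes([i, j])) for j in range(5)]
                    for i in range(100)]
shard_sigs = [threshold_sig(c, hdr)
              for c, hdr in zip(shard_committees, shard_headers)]

# ...and the only main-chain overhead is one committee signing the root of the
# list of (header, threshold signature) pairs.
root = merkle_root([h(hdr, sig) for hdr, sig in zip(shard_headers, shard_sigs)])
main_committee = [h(b"main-validator", bytes([k])) for k in range(5)]
main_sig = threshold_sig(main_committee, root)
print(root.hex()[:16], main_sig.hex()[:16])
```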

James Ray
@jamesray1
Not the same as Dfinity, just similar to it.

validators have to either accept the result of the vote or self-verify everything (unrealistic)

There are other options that of course you know of, e.g. zk-S(N/T)ARKs, a market for interactive verification (e.g. Golem with Truebit).

James Ray
@jamesray1
What I mean is that these other options mean you don't have to rely on votes or self-verify everything: you can verify zero-knowledge proofs, or increase confidence in a vote through verification markets (even if you don't directly participate yourself, and you wouldn't have to verify everything to improve the security of the votes). And these extra options don't necessarily need to be in-protocol.
And there's no reason why you can't have these extra verifications extra-protocol, even though verifications and votes also occur at the notary level.
Seems like letting the main chain work entirely on interacting with the shards is preferable to the second option.
James Ray
@jamesray1
Actually, on second thought, I don't think it would help to verify votes from a committee on the main chain; you'd need to verify the collations and blobs themselves.
But still, those are other options rather than self-verifying everything. Nevertheless, a threshold signature scheme is prone to 51% attacks.

Any gains from removing in-protocol thresholds would still have to respect the "only accept if yes votes > explicit no votes" principle, which could be replicated with explicit no votes. That said, I think that's a bad idea and we should keep with the 2/3 threshold

I assume you mean the 2/3 supermajority from Casper. Yes I agree.

James Ray
@jamesray1

Now you can make it "soft" in fancy ways, eg. by having the dependency cone of a collation grow over time, so computers with more computing power can afford to verify further back

Yeah but we want to decentralize everything, not rely on supercomputers.

Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> yeah agree there; I was basically just saying that if any single user does have more computing power, then they can seamlessly verify more and benefit from the extra security increment of not degrading to proxy validation until a larger committee has approved something
Mustafa Al-Bassam
@musalbas
TIL multi-dimensional FEC codes were proposed as early as 1954 https://sci-hub.tw/https://ieeexplore.ieee.org/document/1057464/ and http://wireless.ece.ufl.edu/eel6550/lit/chapter_banner.pdf
This paper provides a nice visualisation of 3D FEC codes: https://sci-hub.tw/https://ieeexplore.ieee.org/document/7818772
I haven't seen much analysis on algorithms for decoding these, though
I've written a Go library for building and repairing Merkle-tree-based 2D RS codes for data availability schemes (with fraud-proof support for incorrectly constructed or inconsistent rows/columns): https://github.com/musalbas/rsmt2d
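For intuition, here is a toy Python sketch of the 2D construction that library implements: extend each row and column of a small data square with Reed-Solomon parity and commit to row/column Merkle roots. It is purely illustrative (single-byte shares, the reedsolo package) and does not mirror rsmt2d's actual Go API.

```python
# Toy Python illustration of a 2D Reed-Solomon extension for data availability.
import hashlib
from reedsolo import RSCodec  # pip install reedsolo

K = 4              # original data square is K x K shares
rs = RSCodec(K)    # K parity symbols: each row/column extends to length 2K

def extend(data: bytes) -> bytes:
    """Extend K data bytes to 2K bytes (K data + K Reed-Solomon parity)."""
    return bytes(rs.encode(data))

def merkle_root(leaves) -> bytes:
    nodes = [hashlib.sha256(l).digest() for l in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [hashlib.sha256(nodes[i] + nodes[i + 1]).digest()
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

# 1. Start from a K x K square of data.
square = [bytes(range(r * K, (r + 1) * K)) for r in range(K)]

# 2. Extend every row, then every column, giving the 2K x 2K extended square.
rows = [extend(row) for row in square]                                # K x 2K
cols = [extend(bytes(row[c] for row in rows)) for c in range(2 * K)]  # 2K cols

# 3. Commit to row and column Merkle roots. Light clients sample random cells,
#    and anyone can produce a fraud proof for an inconsistent row/column.
row_roots = [merkle_root([bytes([b]) for b in row]) for row in rows]
col_roots = [merkle_root([bytes([b]) for b in col]) for col in cols]
print(merkle_root(row_roots + col_roots).hex())
```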
Mustafa Al-Bassam
@musalbas
(image attachment: image.png)
Raul Jordan
@rauljordan
Hey guys, I wanna chat about local shard state storage...specifically, what modifications we could make to the current merkle patricia tree implementation.
Do we want to have a state trie for each shard? That is, if there are 100 shards, would there be a total of 100 different state tries?
It could be interesting to modify the current merkle patricia tree so as to maintain a single state trie across the network, but instead add shard_id to the hex-prefix encoding (or introduce some modified branch/extension nodes?) so as to create an easy way of traversing the data specific to a shard contained within this large trie.
That way we can maintain a single, global rootHash that describes the entire galaxy of shard states. I'm just thinking out loud here; would love to hear some thoughts.
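A rough sketch of what that could look like. The helper names are made up, and a flat sorted-leaf commitment stands in for a real Merkle Patricia / binary trie; the point is only that prefixing keys with shard_id keeps each shard's data in one contiguous key range under a single global root.

```python
# Illustrative sketch of shard_id-prefixed keys in one global state commitment.
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

state = {}  # key = shard_id prefix + account key

def put(shard_id: int, key: bytes, value: bytes) -> None:
    state[shard_id.to_bytes(2, "big") + key] = value

def shard_items(shard_id: int) -> dict:
    # Traversing one shard's data = walking one key prefix of the global map.
    prefix = shard_id.to_bytes(2, "big")
    return {k: v for k, v in state.items() if k.startswith(prefix)}

def global_root() -> bytes:
    # One commitment over the whole (sorted) key space: a single rootHash
    # describing every shard's state.
    nodes = [h(k, v) for k, v in sorted(state.items())] or [h(b"empty")]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

put(0, b"alice", b"100")
put(7, b"bob", b"42")
print(len(shard_items(7)), global_root().hex()[:16])
```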
Raul Jordan
@rauljordan
Ah nvm, this doesn't solve the key problems. Has any other team here started working on shard state local storage?
Eth-Gitter-Bridge
@Eth-Gitter-Bridge

[discord] <Hsiao-Wei Wang (hwwhww)> @rauljordan
There are two significant codebase changes in Py-EVM:

  1. Hexary Patricia Trie -> Binary Patricia Trie
  2. Two-layer account trie inside a single-layer trie (https://ethresear.ch/t/a-two-layer-account-trie-inside-a-single-layer-trie/210)

I'm not sure how critical the 100-different-state-tries question is; what problems would it cause?

The tradeoff here is that the larger trie increases the witness data. Moreover, in this post V proposed multiple state roots within one shard to decrease the size of the branches: https://ethresear.ch/t/detailed-analysis-of-stateless-client-witness-size-and-gains-from-batching-and-multi-state-roots/862
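As a rough illustration of that tradeoff: in a binary trie, witness size per accessed account scales with trie depth, so folding 100 shards into one trie adds roughly log2(100) ≈ 7 extra levels of sibling hashes per access. The account counts below are hypothetical, not from this discussion.

```python
# Hypothetical numbers showing how witness size scales with binary-trie depth.
import math

HASH_SIZE = 32                          # bytes per sibling hash
global_accounts = 2**28                 # all 100 shards folded into one trie
per_shard_accounts = 2**28 // 100       # separate trie per shard

global_witness = math.ceil(math.log2(global_accounts)) * HASH_SIZE
shard_witness = math.ceil(math.log2(per_shard_accounts)) * HASH_SIZE
print(global_witness, shard_witness)    # ~896 vs ~704 bytes per accessed account
```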

jannikluhn
@jannikluhn
I like the idea of a single state trie from a conceptual point of view. But in practice I don't think it changes much. Maybe cross shard communication gets a bit easier because we have a global root? But we could calculate something similar by just putting all shard state roots in another trie.
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> > Hey guys, I wanna chat about local shard state storage...specifically, what modifications we could make to the current merkle patricia tree implementation.
[discord] <vbuterin> I think we want to use sparse merkle trees
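For reference, a minimal sparse Merkle tree sketch in Python: a fixed-depth binary tree over the full key space, kept cheap by precomputing the hash of an empty subtree at every level. This is just the generic construction, not a proposal-specific design.

```python
# Minimal sparse Merkle tree: 256-level binary tree over the whole key space,
# with the root of an empty subtree precomputed at every height.
import hashlib

DEPTH = 256

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

# default[i] = root of an empty subtree of height i
default = [b"\x00" * 32]
for _ in range(DEPTH):
    default.append(h(default[-1], default[-1]))

def root(leaves: dict) -> bytes:
    """Root over a mapping from 256-bit key (int) to 32-byte leaf hash."""
    level = dict(leaves)
    for height in range(DEPTH):
        nxt = {}
        for key, node in level.items():
            sib = level.get(key ^ 1, default[height])
            pair = (node, sib) if key % 2 == 0 else (sib, node)
            nxt[key >> 1] = h(*pair)
        level = nxt
    return level.get(0, default[DEPTH])

print(root({}) == default[DEPTH])                      # empty tree
print(root({5: hashlib.sha256(b"acct").digest()}).hex())
```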
Mustafa Al-Bassam
@musalbas
Haven't read the sharding proposal on Ethresearch yet, but the one on GitHub already achieves the equivalent of a global state trie: you just need the block header for the main shard, and you can get a Merkle proof of all the shard headers -> state roots from the VMC.
It's just a few extra hashes
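As a sanity check on "a few extra hashes": with 100 shard state roots committed under one root, a Merkle proof for any single shard's state root needs ceil(log2(100)) = 7 sibling hashes, about 224 bytes. A small self-contained sketch (all names made up):

```python
# Merkle proof of one shard state root against a root over all shard roots.
import hashlib

def h(a: bytes, b: bytes) -> bytes:
    return hashlib.sha256(a + b).digest()

def build_levels(leaves):
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i], cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, index):
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index >>= 1
    return proof

def verify(leaf, index, proof, root):
    node = leaf
    for sib in proof:
        node = h(node, sib) if index % 2 == 0 else h(sib, node)
        index >>= 1
    return node == root

shard_state_roots = [hashlib.sha256(bytes([i])).digest() for i in range(100)]
levels = build_levels(shard_state_roots)
root = levels[-1][0]
proof = prove(levels, 42)
print(len(proof), verify(shard_state_roots[42], 42, proof, root))  # 7 True
```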
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> yeah; in general, any "proposals and notarization verification go in the main chain" mechanism creates in-main-chain global state roots
Eth-Gitter-Bridge
@Eth-Gitter-Bridge

[discord] <JustinDrake> > any "proposals and notarization verification go in the main chain" mechanism creates in-main-chain global state roots

I'd say it creates global data roots. The corresponding state roots can be handled differently: included separately and finalised later, or even referenced and finalised implicitly.

James Ray
@jamesray1

@hwwhww

Two-layer account trie inside a single-layer trie

AIUI we won't have accounts until phase 2. I prefer to just think about phase 1 until that's done; otherwise there's too much to read related to phase 2 and beyond. Following along at a high level is OK, but not reading in detail every post related to phase 2 and afterwards.

Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <Hsiao-Wei Wang (hwwhww)> yeah, just for replying to Raul's question.
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> > I'd say it creates global data roots. The corresponding state roots can be handled differently: included separately and finalised later, or even referenced and finalised implicitly.
[discord] <vbuterin> ah yes you're right
[discord] <vbuterin> global state roots only if state becomes part of collations again
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> I'd like to get actual numbers for the difficulty of downloading 200kb of data vs executing 8 million gas
[discord] <vbuterin> I tend to be much more conservative on bandwidth than many other people, because of my experiences on phone hotspots, cafes, etc; I generally don't like assuming more than 1 MB/sec downloading capacity
[discord] <vbuterin> and that's total; p2p has various inefficiencies
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> so if we assume 1 MB/sec, then that's 0.2s to download 200kb, and on my own laptop verifying a 200kb block takes ~0.2s
[discord] <vbuterin> though that's 8m gas including transactions; maybe only 2-3m of that is actual execution
[discord] <vbuterin> so they are on the same order of magnitude
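Writing that back-of-the-envelope comparison out, using only the figures quoted above (the ~0.2 s execution time is the stated laptop measurement, not something derived here):

```python
# 200 kB of data, 1 MB/s of bandwidth, ~0.2 s to execute an 8M-gas block.
block_bytes = 200 * 1024
bandwidth_bytes_per_s = 1024 * 1024

download_s = block_bytes / bandwidth_bytes_per_s   # ~0.2 s
execute_s = 0.2                                    # quoted laptop measurement

print(f"download ~{download_s:.2f}s, execute ~{execute_s:.2f}s")
# Both land around 0.2 s: the same order of magnitude.
```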
[discord] <vbuterin> ---------------
[discord] <vbuterin> btw, how is WASM coming along?
James Ray
@jamesray1
Sure but if you have an unlimited internet connection, there's nothing more to pay for downloading, whereas you pay for gas.
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> > [gitter] <jamesray1> Sure but if you have an unlimited internet connection, there's nothing more to pay for downloading, whereas you pay for gas.
[discord] <vbuterin> I think I have free electricity more often than I have unlimited internet 😂
Preston Van Loon
@prestonvanloon
we'd also want to support areas of the world that don't have a great connection, whenever possible
Eth-Gitter-Bridge
@Eth-Gitter-Bridge
[discord] <vbuterin> and also contexts that don't have a great connection because they're using a high-overhead mechanism to evade detection or censorship