cryptogoth @jbove thanks for sharing. One thing I noticed recently is that mobile (Android at least) phones allow you to start local servers, so one could run hyperswarm peers (the discovery layer underneath DAT).
A p2p emergency broadcast system like you describe could be implemented with something like Cabal chat, where messages are signed by an official trusted source (e.g. the government), also using hyperswarm.
chamonix Basic question: when you download a hypercore file from some feed, is that file stored on your system inside a replica of the feed it's from? Or is it appended to your own dat? Where is someone else's dat, that you have downloaded, seeded from?
chamonix seeded from, on your system. Is it just a file on your hard drive with no feed info associated with it?
chamonix ok, so I think you replicate a feed in order to "seed" a hyper file, right?
substack chamonix: you can think of it like cloning a git repo
substack the specifics will vary based on which tools you use, whether the dat cli, beaker, etc
chamonix substack: What about sparse replication? If you are downloading from a peer that is not the author, does that mean that the partial merkle tree is enough to authenticate the file?
substack yes, and all blocks are individually signed by the author
chamonix substack: I have one more question: If you have the content hash, and only the content hash, can you download the file? I think that's how bittorrent works. You only need the content hash. But for hypercore, you also need the author's public key, so if you have no public keys, you can't download the file, even with the content hash, correct?
chamonix Or, you can only download the content (having the hash), only from peers that have that same file and whose public key you also know?
substack you only need the hash
chamonix substack: But you can't authenticate the file though?
substack the content hash is the public key of the archive. each secret key maps to exactly one archive
chamonix substack: So the exact same file will have a different content hash depending on the feed/author?
substack yes
chamonix substack: thanks.
substack the tradeoff here is that it makes mutability easier but deduplication is harder
substack and you need to join fewer swarms
chamonix substack: So, if you have public keys of a few feeds, can you use the contents of a particular file (that you know could be on other feeds) somehow to calculate the possible content hash of that particular file of each of the feeds (possibly "seeding" that file) if the feed was making that file available?
chamonix I mean that was an awkward sentence. I'm asking if you can check other feeds for a particular file if you know their public keys.
chamonix And you know they could have that EXACT file on them.
substack there is no deduplication and the hashes will all be different
substack if you have the hash of a hyperdrive archive, you can download any files in that archive if a peer is online and cooperative
chamonix substack: ok, didn't know that was what deduplication meant. Do you know of any file sharing protocol that allows for mutability and deduplication?
substack ipfs deduplicates on a per-file basis but mutability is more difficult
substack with ipfs last i checked you would use something like ipns to create a pointer to the root of a merkle DAG in a similar way as you might do with bittorrent using BEP44
chamonix that's great info. I appreciate the help substack. I've heard bad things about IPFS, like it's fundamentally broken or something. I think that was someone's opinion on hacker news...
substack i'm not entirely sure, but i think there was a recent extension announced in bittorrent that helps with deduplication
chamonix ok, I will investigate that, thanks again.
chamonix If I recall, the commenter criticizing IPFS said something along the lines that the design was fundamentally flawed. Something was wrong with it anyway, and the attitude was: it's used for bitcoins, and they don't care to fix it. But I could be all wrong about that...
chamonix Maybe it was IPNS, not IPFS that was broken...
okdistribute chamonix you can verify a hypercore content hash using what's called a 'strong link' but this is not implemented in any clients yet AFAICT
okdistribute so to get the benefits of content hashing, you need the author's key, the sequence number, and the content hash -- then you can get the content. but deduplication across the network will still be an issue; ipfs is designed specifically for this, which yes makes it really great for bitcoins & global consensus but not so great for dynamic apps
chamonix Can you use hyperbee on a multifeed? Or is it restricted to only creating a database from a single hypercore?
okdistribute multifeed takes a hypercore option which you could replace with hyperbee, and then all the hypercores would be hyperbees instead. https://github.com/kappa-db/multifeed/blob/master/index.js#L32
chamonix okdistribute: Does that mean that, to get some range of data from the hyperbee, you would have to query every individual hyperbee for that range?
chamonix okdistribute: Is it common practice to "extract" all the data from each feed, and add it to a single hypercore?
okdistribute no, what we do in kappadb is create an index after iterating over all the feed structures
okdistribute this index could be in any on-disk or memory storage you want
okdistribute like a leveldb or whatever. but in theory you could put it in a hyperbee
okdistribute or a hypercore
okdistribute that would allow getting stored indexes from other people. you just have to trust the index you're getting from them
okdistribute 🤷
okdistribute not sure if that answers your question 😅 but I have to go now; good luck! interested in seeing what you're working on, if you want to share?
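The kappa-db pattern okdistribute outlines — iterate over all the feeds and fold their entries into one materialized index in whatever storage you like — can be sketched with plain objects. The feed shapes are invented for illustration; a plain Map stands in for leveldb/hyperbee.

```javascript
// Sketch of the kappa-db indexing pattern: iterate over every feed's
// entries and fold them into one materialized index. A Map stands in
// for leveldb/hyperbee; the feed shapes are made up for illustration.
const feeds = [
  { key: 'feedA', entries: [{ id: 'x', body: 'hello' }] },
  { key: 'feedB', entries: [{ id: 'y', body: 'world' }] },
];

function buildIndex(allFeeds) {
  const index = new Map();
  for (const feed of allFeeds) {
    feed.entries.forEach((entry, seq) => {
      // Index by entry id, remembering which feed and seq it came from.
      index.set(entry.id, { feed: feed.key, seq, body: entry.body });
    });
  }
  return index;
}

const index = buildIndex(feeds);
console.log(index.get('y')); // { feed: 'feedB', seq: 0, body: 'world' }
```

Because the index is derived, it can be thrown away and rebuilt from the feeds at any time — which is also why a shared index (in a hypercore, say) only requires trusting the indexer, not changing the feeds.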
chamonix okdistribute: thanks
chamonix So, if all the appended messages in each feed in a multifeed are timestamped, could you create a hyperbee database that orders each multifeed message by timestamp?
okdistribute chamonix I'd recommend this talk re: timestamps. https://www.dotconferences.com/2019/12/james-long-crdts-for-mortals
okdistribute chamonix in a nutshell, yes! but it gets a bit more complicated if you want to support high-reliability in ordering
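The "yes, but" above can be sketched as a simple timestamp merge across feeds, with a deterministic tiebreaker. The feed data is invented for illustration; the complication okdistribute hints at is that wall-clock timestamps from different peers aren't reliably ordered, which is what the linked CRDT talk addresses.

```javascript
// Sketch: order multifeed messages by timestamp, breaking ties by feed
// key so every peer computes the same order. Data is invented; real
// peers' wall clocks can disagree, hence the CRDT caveat.
const feeds = [
  { key: 'A', messages: [{ ts: 3, text: 'third' }, { ts: 5, text: 'fifth' }] },
  { key: 'B', messages: [{ ts: 1, text: 'first' }, { ts: 4, text: 'fourth' }] },
];

function mergeByTimestamp(allFeeds) {
  const all = [];
  for (const feed of allFeeds) {
    for (const msg of feed.messages) {
      all.push({ ...msg, feed: feed.key });
    }
  }
  // Sort by timestamp, then feed key as a deterministic tiebreaker.
  all.sort((a, b) => a.ts - b.ts || a.feed.localeCompare(b.feed));
  return all;
}

const ordered = mergeByTimestamp(feeds);
console.log(ordered.map((m) => m.text));
// [ 'first', 'third', 'fourth', 'fifth' ]
```

Storing the merged result keyed by `(ts, feed)` in a hyperbee would give the range queries asked about earlier, at the cost of re-indexing when feeds update.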