Dat
@dat_project_twitter
sknebel sigh, no
sknebel they tried to blame privacy laws, and after it got pointed out that that's a nonsensical argument they quietly changed the messaging to "it's not offered by the networks"... (maybe because the state didn't ask for it?)
Dat
@dat_project_twitter
jbove Some interesting info here: https://en.wikipedia.org/wiki/Cell_Broadcast
jbove Until 10 minutes ago I had never heard of this either: https://en.wikipedia.org/wiki/EU-Alert
Alexander Praetorius
@serapath

jbove: i am german and grew up in germany and sadly the most likely answer to why this happened is incompetence ... but hey, at least they are trying.

  • a warn app
  • a corona app

:-) it's a start - maybe it goes somewhere

kvanstee
@kvanstee
Running dat share recognizes files, but it listens only on IPv6 addresses. See previous post. How can I listen on IPv4 as well?
matrixbot
@matrixbot
cryptogoth @jbove thanks for sharing. One thing I noticed recently is that mobile phones (Android at least) allow you to run local servers, so one could run hyperswarm peers (the discovery layer underneath Dat).
A p2p emergency broadcast system like you describe could be implemented with something like Cabal chat, where messages are signed by an official trusted source (e.g. the government), also using hyperswarm
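A minimal sketch of what cryptogoth describes, assuming the hyperswarm v2 API and Node's built-in ed25519 support; the topic string and message format here are made up for illustration:

```js
const hyperswarm = require('hyperswarm')
const crypto = require('crypto')

// Everyone derives the same 32-byte topic from a well-known string.
const topic = crypto.createHash('sha256').update('emergency-broadcast').digest()

// In a real system the broadcaster's public key would ship with the app;
// a keypair is generated here just to keep the sketch self-contained.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ed25519')

const swarm = hyperswarm()
swarm.join(topic, { lookup: true, announce: true })

swarm.on('connection', (socket) => {
  // Broadcaster side: sign the alert so any peer can authenticate it.
  const alert = 'flood warning for region X'
  const signature = crypto.sign(null, Buffer.from(alert), privateKey)
  socket.write(JSON.stringify({ alert, signature: signature.toString('base64') }))

  // Receiver side: only trust messages that verify against the trusted key.
  socket.on('data', (data) => {
    const msg = JSON.parse(data) // single-message sketch; real code would frame messages
    const ok = crypto.verify(null, Buffer.from(msg.alert), publicKey,
      Buffer.from(msg.signature, 'base64'))
    if (ok) console.log('verified alert:', msg.alert)
  })
})
```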
Dat
@dat_project_twitter
chamonix Basic question: when you download a hypercore file from some feed, is that file stored on your system inside a replica of the feed it's from? Or is it appended to your own dat? Where is someone else's dat, that you have downloaded, seeded from?
chamonix seeded from, on your system. Is it just a file on your hard drive with no feed info associated with it?
Dat
@dat_project_twitter
chamonix ok, so I think you replicate a feed in order to "seed" a hyper file, right?
Dat
@dat_project_twitter
substack chamonix: you can think of it like cloning a git repo
substack the specifics will vary based on which tools you use, whether the dat cli, beaker, etc
chamonix substack: What about sparse replication? If you are downloading from a peer that is not the author, does that mean that the partial merkle tree is enough to authenticate the file?
substack yes, and all blocks are individually signed by the author
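A hedged sketch of the sparse case chamonix asks about, using the hypercore 9-era callback API; SOME_AUTHOR_KEY stands in for the feed key you already have:

```js
const hypercore = require('hypercore')

// Open someone else's feed in sparse mode: nothing is downloaded up front.
const feed = hypercore('./remote-feed', SOME_AUTHOR_KEY, { sparse: true })

feed.ready(() => {
  // Connect to a peer somehow (hyperswarm, TCP, ...), e.g.:
  // const stream = feed.replicate(true, { live: true })

  // Requesting one block also fetches the hashes needed to check it
  // against the author-signed merkle root, so even a non-author peer
  // can serve blocks that you can verify.
  feed.get(42, (err, block) => {
    if (err) throw err
    console.log('verified block:', block.toString())
  })
})
```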
Dat
@dat_project_twitter
chamonix substack: I have one more question: If you have the content hash, and only the content hash, can you download the file? I think that's how bittorrent works. You only need the content hash. But for hypercore, you also need the author's public key, so if you have no public keys, you can't download the file, even with the content hash, correct?
chamonix Or, you can only download the content (having the hash), only from peers that have that same file and whose public key you also know?
substack you only need the hash
chamonix substack: But you can't authenticate the file though?
substack the content hash is the public key of the archive. each secret key maps to exactly one archive
chamonix substack: So the exact same file will have a different content hash depending on the feed/author?
substack yes
chamonix substack: thanks.
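A small sketch of the point substack is making: append identical bytes to two different feeds and you get two different keys, because the key comes from the author's keypair rather than from the content.

```js
const hypercore = require('hypercore')
const ram = require('random-access-memory')

const a = hypercore(ram)
const b = hypercore(ram)

a.append('hello world', () => {
  b.append('hello world', () => {
    // Same content, different archive keys.
    console.log(a.key.toString('hex'))
    console.log(b.key.toString('hex'))
  })
})
```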
Dat
@dat_project_twitter
substack the tradeoff here is that it makes mutability easier but deduplication is harder
substack and you need to join fewer swarms
chamonix substack: So, if you have public keys of a few feeds, can you use the contents of a particular file (that you know could be on other feeds) somehow to calculate the possible content hash of that particular file of each of the feeds (possibly "seeding" that file) if the feed was making that file available?
Dat
@dat_project_twitter
chamonix I mean that was an awkward sentence. I'm asking if you can check other feeds for a particular file if you know their public keys.
chamonix And you know they could have that EXACT file on them.
substack there is no deduplication and the hashes will all be different
substack if you have the hash of a hyperdrive archive, you can download any files in that archive if a peer is online and cooperative
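A sketch of that hyperdrive case, assuming the hyperdrive 10-era API; ARCHIVE_KEY stands in for the key you were given, and at least one cooperative peer is assumed to be connected (e.g. via hyperswarm):

```js
const hyperdrive = require('hyperdrive')

const drive = hyperdrive('./storage', ARCHIVE_KEY)

drive.on('ready', () => {
  // Individual files in the archive are fetched and verified on demand.
  drive.readFile('/hello.txt', 'utf-8', (err, contents) => {
    if (err) throw err
    console.log(contents)
  })
})
```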
chamonix substack: ok, didn't know that was what deduplication meant. Do you know of any file sharing protocol that allows for mutability and deduplication?
Dat
@dat_project_twitter
substack ipfs deduplicates on a per-file basis but mutability is more difficult
substack with ipfs last i checked you would use something like ipns to create a pointer to the root of a merkle DAG in a similar way as you might do with bittorrent using BEP44
chamonix that's great info. I appreciate the help substack. I've heard bad things about IPFS, like it's fundamentally broken or something. I think that was someone's opinion on Hacker News...
substack i think there was a recent extension announced in bittorrent that helps with deduplication, but i'm not entirely sure
chamonix ok, I will investigate that, thanks again.
Dat
@dat_project_twitter
chamonix If I recall, the commenter criticizing IPFS said something along the lines that the design was fundamentally flawed. Something was wrong with it anyways, and the attitude was: it's used for bitcoins, and they don't care to fix it. But I could be all wrong about that...
chamonix Maybe it was IPNS, not IPFS that was broken...
Dat
@dat_project_twitter
okdistribute chamonix you can verify a hypercore content hash using what's called a 'strong link' but this is not implemented in any clients yet AFAICT
okdistribute pp.slack.com/
okdistribute so to get the benefits of content hashing, you need the author's key, the sequence number, and the content hash -- then you can get the content. but deduplication across the network will still be an issue; ipfs is designed specifically for this, which yes makes it really great for bitcoins & global consensus but not so great for dynamic apps
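A rough sketch of how a client might consume such a 'strong link' triple (author key, sequence number, content hash); since no client implements this yet, the plain sha256-of-the-block check here is purely an assumption for illustration:

```js
const hypercore = require('hypercore')
const crypto = require('crypto')

// Hypothetical helper: fetch block `seq` from the author's feed and
// reject it if its hash doesn't match the strong link's content hash.
function fetchStrongLink (authorKey, seq, expectedHash, cb) {
  const feed = hypercore('./link-feed', authorKey, { sparse: true })
  feed.get(seq, (err, block) => {
    if (err) return cb(err)
    const hash = crypto.createHash('sha256').update(block).digest('hex')
    if (hash !== expectedHash) return cb(new Error('content hash mismatch'))
    cb(null, block)
  })
}
```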
Dat
@dat_project_twitter
chamonix Can you use hyperbee on a multifeed? Or is it restricted to only creating a database from a single hypercore?
Dat
@dat_project_twitter
okdistribute multifeed takes a hypercore option which you could replace with hyperbee, and then all the hypercores would be hyperbees instead. https://github.com/kappa-db/multifeed/blob/master/index.js#L32
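An untested sketch of that substitution, assuming the hypercore option at the linked line accepts any factory with hypercore's shape; the factory name here is made up:

```js
const multifeed = require('multifeed')
const hypercore = require('hypercore')
const Hyperbee = require('hyperbee')

// Hypothetical factory: build the underlying hypercore as usual, then
// hand back a hyperbee wrapped around it, so every feed in the
// multifeed exposes the hyperbee API instead.
function beeFactory (storage, key, opts) {
  const core = hypercore(storage, key, opts)
  return new Hyperbee(core, { keyEncoding: 'utf-8', valueEncoding: 'json' })
}

const multi = multifeed('./db', { hypercore: beeFactory })
```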
chamonix okdistribute: Does that mean that, to get some range of data from the hyperbee, you would have to query every individual hyperbee for that range?
chamonix okdistribute: Is it common practice to "extract" all the data from each feed, and add it to a single hypercore?
okdistribute no, what we do in kappadb is create an index after iterating over all the feed structures
okdistribute this index could be in any on-disk or memory storage you want
okdistribute like a leveldb or whatever. but in theory you could put it in a hyperbee
okdistribute or a hypercore
okdistribute that would allow getting stored indexes from other people. you just have to trust the index you're getting from them
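A minimal sketch of the indexing pattern okdistribute describes: iterate every feed in a multifeed and fold its entries into one index, with an in-memory Map standing in for leveldb/hyperbee; the `id` field is a hypothetical schema detail:

```js
const multifeed = require('multifeed')

const multi = multifeed('./db', { valueEncoding: 'json' })
const index = new Map()

multi.ready(() => {
  for (const feed of multi.feeds()) {
    // Replay each feed from the start and materialize a view of it.
    feed.createReadStream({ live: true }).on('data', (entry) => {
      if (entry && entry.id) index.set(entry.id, entry)
    })
  }
})
```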