Dat
@dat_project_twitter
substack yes
chamonix substack: thanks.
substack the tradeoff here is that it makes mutability easier but deduplication is harder
substack and you need to join fewer swarms
chamonix substack: So, if you have the public keys of a few feeds, can you use the contents of a particular file (that you know could be on other feeds) somehow to calculate the possible content hash of that particular file in each of the feeds (possibly "seeding" that file) if the feed was making that file available?
chamonix I mean that was an awkward sentence. I'm asking if you can check other feeds for a particular file if you know their public keys.
chamonix And you know they could have that EXACT file on them.
substack there is no deduplication and the hashes will all be different
substack if you have the hash of a hyperdrive archive, you can download any files in that archive if a peer is online and cooperative
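A minimal sketch of what substack describes, assuming the hyperdrive 10 / hyperswarm 2 era APIs; the archive key and file path are placeholders:

```
// Fetch a file from someone else's hyperdrive archive, given only its key.
// Assumes a peer holding the data is online and cooperative.
const Hyperdrive = require('hyperdrive')
const hyperswarm = require('hyperswarm')

const key = Buffer.from('<64-hex-char archive key>', 'hex') // placeholder
const drive = Hyperdrive('./storage', key)

drive.ready(() => {
  const swarm = hyperswarm()
  // Look up peers on the archive's discovery key; we have nothing to announce.
  swarm.join(drive.discoveryKey, { lookup: true, announce: false })
  swarm.on('connection', (socket, info) => {
    // Wire any peer that shows up into the archive's replication stream.
    socket.pipe(drive.replicate(info.client)).pipe(socket)
  })
  // readFile pulls the blocks it needs from connected peers on demand.
  drive.readFile('/hello.txt', 'utf-8', (err, contents) => {
    if (err) throw err
    console.log(contents)
  })
})
```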
chamonix substack: ok, didn't know that was what deduplication meant. Do you know of any file sharing protocol that allows for mutability and deduplication?
substack ipfs deduplicates on a per-file basis but mutability is more difficult
substack with ipfs last i checked you would use something like ipns to create a pointer to the root of a merkle DAG in a similar way as you might do with bittorrent using BEP44
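For reference, the IPNS version of that mutable-pointer pattern looks roughly like this on the ipfs CLI (the CIDs and peer IDs below are placeholders):

```
$ ipfs add article.txt                # immutable, content-addressed
added <cid> article.txt
$ ipfs name publish /ipfs/<cid>       # mutable pointer, signed by your node key
Published to <peer-id>: /ipfs/<cid>
$ ipfs name resolve <peer-id>         # readers re-resolve the pointer each time
/ipfs/<cid>
```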
chamonix that's great info. I appreciate the help substack. I've heard bad things about IPFS, like it's fundamentally broken or something. I think that was someone's opinion on hacker news...
substack i'm not completely sure, but i think there was a recent extension announced for bittorrent that helps with deduplication
chamonix ok, I will investigate that, thanks again.
chamonix If I recall, the commenter criticizing IPFS said something along the lines that the design was fundamentally flawed. Something was wrong with it anyway, and the attitude was: it's used for bitcoins, and they don't care to fix it. But I could be all wrong about that...
chamonix Maybe it was IPNS, not IPFS that was broken...
okdistribute chamonix you can verify a hypercore content hash using what's called a 'strong link' but this is not implemented in any clients yet AFAICT
okdistribute pp.slack.com/
okdistribute so to get the benefits of content hashing, you need the author's key, the sequence number, and the content hash -- then you can get the content. but deduplication across the network will still be an issue; ipfs is designed specifically for this, which yes makes it really great for bitcoins & global consensus but not so great for dynamic apps
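A sketch of that strong-link idea, assuming the callback-style hypercore API of this era. The plain BLAKE2b hash here is an illustration, not hypercore's exact internal hashing, and it assumes the feed is already being replicated with peers (e.g. over hyperswarm):

```
const hypercore = require('hypercore')
const sodium = require('sodium-native')

// Resolve an (author key, sequence number, expected content hash) triple.
function fetchStrongLink (authorKey, seq, expectedHash, cb) {
  const feed = hypercore('./storage', authorKey)
  feed.get(seq, (err, block) => {
    if (err) return cb(err)
    const hash = Buffer.alloc(32)
    sodium.crypto_generichash(hash, block) // BLAKE2b-256 over the block
    if (!hash.equals(expectedHash)) return cb(new Error('content hash mismatch'))
    cb(null, block) // verified: this is the content the link promised
  })
}
```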
chamonix Can you use hyperbee on a multifeed? Or is it restricted to only creating a database from a single hypercore?
okdistribute multifeed takes a hypercore option which you could replace with hyperbee, and then all the hypercores would be hyperbees instead. https://github.com/kappa-db/multifeed/blob/master/index.js#L32
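Rather than swapping the hypercore constructor itself, one after-the-fact sketch is to wrap each feed in a Hyperbee once the multifeed is ready; this assumes the feeds were written in hyperbee's block format in the first place:

```
const multifeed = require('multifeed')
const Hyperbee = require('hyperbee')

const multi = multifeed('./db', { valueEncoding: 'binary' })
multi.ready(() => {
  const bees = multi.feeds().map((feed) =>
    new Hyperbee(feed, { keyEncoding: 'utf-8', valueEncoding: 'json' })
  )
  // Each entry of `bees` is now its own key/value database.
})
```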
chamonix okdistribute: Does that mean that, to get some range of data from the hyperbee, you would have to query every individual hyperbee for that range?
chamonix okdistribute: Is it common practice to "extract" all the data from each feed, and add it to a single hypercore?
okdistribute no, what we do in kappadb is create an index after iterating over all the feed structures
okdistribute this index could be in any on-disk or memory storage you want
okdistribute like a leveldb or whatever. but in theory you could put it in a hyperbee
okdistribute or a hypercore
okdistribute that would allow getting stored indexes from other people. you just have to trust the index you're getting from them
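Continuing the multifeed sketch above, the kappa-style pattern might look like this: replay every feed into one local leveldb and treat it as a disposable index that can always be rebuilt from the feeds (the key scheme and JSON message shape are assumptions):

```
const level = require('level')

const db = level('./index')
multi.feeds().forEach((feed) => {
  let seq = 0
  // live: true keeps the stream open so new appends get indexed too
  feed.createReadStream({ live: true }).on('data', (msg) => {
    db.put(feed.key.toString('hex') + '!' + seq++, JSON.stringify(msg))
  })
})
```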
okdistribute 🤷
okdistribute not sure if that answers your question 😅 but I have to go now; good luck! interested in seeing what you're working on, if you want to share?
chamonix okdistribute: thanks
chamonix So, if all the appended messages in each feed, in a multifeed, are timestamped, could you create a hyperbee database that orders each multifeed message by timestamp?
okdistribute chamonix I'd recommend this talk re: timestamps. https://www.dotconferences.com/2019/12/james-long-crdts-for-mortals
okdistribute chamonix in a nutshell, yes! but it gets a bit more complicated if you want to support high-reliability in ordering
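A sketch of that timestamp-ordered index, assuming each message carries a numeric timestamp field: padding the timestamp makes lexicographic key order match numeric time order, and a feed id breaks ties. The complication okdistribute mentions is that wall clocks skew across authors, which is what hybrid logical clocks (per the talk above) address:

```
// Index one multifeed message into a local Hyperbee, keyed by timestamp.
async function indexMessage (bee, feedId, msg) {
  const key = String(msg.timestamp).padStart(15, '0') + '!' + feedId
  await bee.put(key, msg)
}

// Read everything back in time order:
// for await (const { key, value } of bee.createReadStream()) { ... }
```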
chamonix okdistribute: thanks for the link
chamonix okdistribute: I don't understand what you mean by "high-reliability". Doesn't the B-Tree order things perfectly chronologically?
chamonix Assuming every timestamp is different.
chamonix And when I talk about multifeeds, I'm always trying to put all the append-only data into one hyperbee.
chamonix I'm watching the part of the video you linked about unreliable ordering, but I don't think this applies. The feeds are separate. I think I misunderstood what multifeed is. I thought it was feeds of different authors, but you are suggesting that they are the same author...
chamonix In my use case, all the feeds are from different authors, and they're independent of each other. So if the multifeed puts these separate feeds together, can you create a single hyperbee from it?
Ender Minyard
@genderev
How do you run a persistent local server? localtunnel and pm2 help, but pm2 shuts down if your computer shuts down
(It's relevant to hyperswarm)
okdistribute genderev it's in the pm2 docs https://pm2.keymetrics.io/docs/usage/startup/
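In short, the resurrect-on-boot recipe from those docs (the script name is a placeholder):

```
$ pm2 start server.js   # run the process under pm2
$ pm2 startup           # prints the init-system command that hooks pm2 into boot
$ pm2 save              # snapshot the current process list for resurrection
```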
pasqui23
@pasqui23
I'd appreciate it if anyone could help with RangerMauve/hyperswarm-web#7
It would be a real help
hp8wvvvgnj6asjm7
@hp8wvvvgnj6asjm7

```
Sharing dat: 1 files (126 B)

0 connections | Download 0 B/s Upload 0 B/s

Checking for file updates...
```

no matter how many files in the folder, it's always just 1 file

it doesn't detect any file updates
GECHO.EXE
@gecho_maniac_twitter
hey guys! I was wondering if any of you have examples of emerging art-related work (web art or distribution platforms) using dat. Do you know any? I'm a design grad student working on an essay about the future of art for my future studies class!