Dat
@dat_project_twitter
substack i'm not completely sure but i think there was a recent extension announced in bittorrent that helps with deduplication but i'm not entirely sure
chamonix ok, I will investigate that, thanks again.
chamonix If I recall, the commenter criticizing IPFS said something along the lines that the design was fundamentally flawed. Something was wrong with it anyway, and the attitude was: it's used for bitcoins, and they don't care to fix it. But I could be all wrong about that...
Dat
@dat_project_twitter
chamonix Maybe it was IPNS, not IPFS that was broken...
Dat
@dat_project_twitter
okdistribute chamonix you can verify a hypercore content hash using what's called a 'strong link' but this is not implemented in any clients yet AFAICT
okdistribute pp.slack.com/
okdistribute so to get the benefits of content hashing, you need the authors key, the sequence number, and the content hash -- then you can get the content. but still, deduplication across the network will still be an issue; ipfs is designed specifically for this which yes makes it really great for bitcoins & global consensus but not so great for dynamic apps
Dat
@dat_project_twitter
chamonix Can you use hyperbee on a multifeed? Or is it restricted to only creating a database from a single hypercore?
Dat
@dat_project_twitter
okdistribute multifeed takes a hypercore option which you could replace with hyperbee, and then all the hypercores would be hyperbees instead. https://github.com/kappa-db/multifeed/blob/master/index.js#L32
chamonix okdistribute: Does that mean that, to get some range of data from the hyperbee, you would have to query every individual hyperbee for that range?
chamonix okdistribute: Is it common practice to "extract" all the data from each feed, and add it to a single hypercore?
okdistribute no, what we do in kappadb is create an index after iterating over all the feed structures
okdistribute this index could be in any on-disk or memory storage you want
okdistribute like a leveldb or whatever. but in theory you could put it in a hyperbee
okdistribute or a hypercore
okdistribute that would allow getting stored indexes from other people. you just have to trust the index you're getting from them
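The kappa-style indexing described above — scan every feed, materialize a queryable view — can be sketched without any kappa-db dependency. The feed data here is made up; the point is only the shape of the iteration:

```javascript
// Illustrative sketch (plain JS, no kappa-db): iterate over several
// append-only feeds and build an in-memory index, the way a kappa-style
// view materializes a queryable structure from raw feed entries.
const feeds = [
  [{ author: 'alice', seq: 0, text: 'hi' }, { author: 'alice', seq: 1, text: 'bye' }],
  [{ author: 'bob', seq: 0, text: 'yo' }]
]

// The index could live in leveldb, a hyperbee, or (here) a Map.
const byAuthor = new Map()
for (const feed of feeds) {
  for (const entry of feed) {
    if (!byAuthor.has(entry.author)) byAuthor.set(entry.author, [])
    byAuthor.get(entry.author).push(entry)
  }
}
```

Because the index is derived, it can be thrown away and rebuilt from the feeds at any time — which is also why a received index only needs to be trusted, not canonical.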
Dat
@dat_project_twitter
okdistribute 🤷
okdistribute not sure if that answers your question 😅 but I have to go now; good luck! interested in seeing what you're working on, if you want to share?
chamonix okdistribute: thanks
chamonix So, if all the appended messages in each feed, in a multifeed, are time stamped, could you create a hyperbee database that orders each multifeed message by time stamp?
Dat
@dat_project_twitter
okdistribute chamonix I'd recommend this talk re: timestamps. https://www.dotconferences.com/2019/12/james-long-crdts-for-mortals
okdistribute chamonix in a nutshell, yes! but it gets a bit more complicated if you want to support high-reliability in ordering
Dat
@dat_project_twitter
chamonix okdistribute: thanks for the link
Dat
@dat_project_twitter
chamonix okdistribute: I don't understand what you mean by "high-reliability". Doesn't the B-Tree order things perfectly chronologically?
chamonix Assuming every timestamp is different.
chamonix And when I talk about multifeeds, I'm always trying to put all the append-only data into one hyperbee.
Dat
@dat_project_twitter
chamonix I'm watching the unreliable order part of the video you linked, but I don't think this applies. The feeds are separate. I think I misunderstand what multifeed is. I thought it was feeds of different authors, but you are suggesting that they are the same author...
chamonix In my use case, all the feeds are from different authors, and they are independent of each other. So if the multifeed puts these separate feeds together, can you create a single hyperbee from it?
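The timestamp-ordering question above can be sketched without hyperbee itself. The one real gotcha is that hyperbee keys sort lexicographically, so a numeric timestamp must be encoded fixed-width (or as ISO 8601) for byte order to match chronological order. The feed contents here are invented:

```javascript
// Sketch: merge timestamped messages from independent feeds into one
// chronological sequence. Keys sort lexicographically (as in a hyperbee),
// so plain number-strings would sort wrongly: '9' > '10'.
const feedA = [{ ts: 1593000000, msg: 'a1' }, { ts: 1593000200, msg: 'a2' }]
const feedB = [{ ts: 1593000100, msg: 'b1' }]

// Fixed-width key so lexicographic order matches numeric order.
const tsKey = ts => String(ts).padStart(16, '0')

const merged = [...feedA, ...feedB]
  .map(e => ({ key: tsKey(e.ts), ...e }))
  .sort((x, y) => (x.key < y.key ? -1 : 1))
```

This gives a total order only if timestamps are unique and trustworthy — which is exactly the caveat the linked CRDT talk is about.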
Ender Minyard
@genderev
How do you run a persistent local server? localtunnel and pm2 help but pm2 shuts down if your computer shuts down
(It's relevant to hyperswarm)
Dat
@dat_project_twitter
okdistribute genderev it's in the pm2 docs https://pm2.keymetrics.io/docs/usage/startup/
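For reference, the pm2 survive-a-reboot flow from the linked docs boils down to registering pm2 as a system service and snapshotting the process list. The app name below is illustrative:

```shell
# Start the app under pm2 (name is illustrative)
pm2 start server.js --name my-server

# Print (then run) the platform-specific boot-time init command
pm2 startup

# Snapshot the current process list so pm2 restores it on reboot
pm2 save
```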
pasqui23
@pasqui23
I'd appreciate it if anyone could help with RangerMauve/hyperswarm-web#7
It would be a real help
hp8wvvvgnj6asjm7
@hp8wvvvgnj6asjm7

```
Sharing dat: 1 files (126 B)

0 connections | Download 0 B/s Upload 0 B/s

Checking for file updates...
```

no matter how many files are in the folder, it's always just 1 file

hp8wvvvgnj6asjm7
@hp8wvvvgnj6asjm7
it doesn't detect any file updates
GECHO.EXE
@gecho_maniac_twitter
hey guys! I was wondering if any of you have any examples of emerging art-related stuff (webart or distribution platforms) using dat. Do you know any? I'm a design grad student and I'm working on an essay for my future studies class, about the future of art!
Fatih Mert Doğancan
@fatihmert
Can a dat stream be used instead of an RTMP stream? If yes, I wonder whether it would even be necessary.
Gene Vayngrib
@urbien
@gecho_maniac_twitter maybe this will fit your definition of art https://medium.com/@pvh/pixelpusher-real-time-peer-to-peer-collaboration-with-react-7c7bc8ecbf74
Dat
@dat_project_twitter
Soni do we have a maven/gradle plugin for dat yet?
Soni for setting up maven mirrors over dat
Dat
@dat_project_twitter
Soni or like... how can one use dat with gradle?
Dat
@dat_project_twitter
substack what does it look like to download/seed with hyperdrive from the command-line?
substack i am a bit lost in all the changes
substack i see there is some daemon now. is there an analog to ipfs daemon/ipfs add?
Dat
@dat_project_twitter
substack i click on links and there are deprecation notices for example on hyperdrive-daemon-client which links me to hyperspace, but that appears to be another daemon
Dat
@dat_project_twitter
cblgh the only thing i've managed to use to download stuff is dat-store, which is also a daemon
Zach Dwiel
@dwiel
should I be able to access all of the versions of the files ever added to a dat? I am trying to use the dat command line --http option with a ?version=xyz in the address, and it works on some files but not on recently changed files. On recently changed files the http server hangs the request and never returns any data
Zach Dwiel
@dwiel
do I need to be running a dat cli with non-standard command line switches for it to record all versions?
Zach Dwiel
@dwiel
alternatively, is there another service/tool I can use to record the history and not just the most recent version of each file?