Martin Heidegger
@martinheidegger
datbase keeps a copy on the server but with limited size and time afaik.
dat-bot
@dat-bot
fleeky can it properly follow dat links?
fleeky for instance i am running something called webamp that uses a dat url in the dat url to seek dat archives to play music from
fleeky checks out datbase
Martin Heidegger
@martinheidegger
ahh, it probably doesn't do content rewriting.
dat-bot
@dat-bot
fleeky mostly looking for something that can follow dat links and forward/proxy that as well
fleeky so you go to a dat link, and if you click on another it will keep going
Martin Heidegger
@martinheidegger
fleeky: yeah, that might require content rewriting.
dat-bot
@dat-bot
konobi was someone mentioning making changes at the protocol/schema level recently to accommodate annotated messages and content? I saw it recently go by, but my laptop has been crashing more often of late =0(
RangerMauve
@RangerMauve
fleeky: you can load any arbitrary Dat URL with dat-gateway, so you can follow links just fine.
RangerMauve
@RangerMauve
Ohhhhh. Did you mean a proxy that would rewrite dat:// URLs to point to the proxy? I had something like that working with dat-polyfill
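To make the rewriting idea concrete: below is a minimal sketch of turning dat:// links on a page into gateway links, assuming a gateway running locally on port 3000 that serves archives as /<key-or-domain>/<path>. The port and URL layout are assumptions for illustration, and this is not how dat-polyfill actually does it.

```js
// Illustrative only: rewrite dat:// links so clicking them stays on the
// gateway/proxy. Assumes a gateway at http://localhost:3000 that serves
// archives as /<archive-key-or-domain>/<path> (real layouts may differ).
const GATEWAY = 'http://localhost:3000'

function rewriteDatUrl (datUrl) {
  const { host, pathname, search } = new URL(datUrl)
  return `${GATEWAY}/${host}${pathname}${search}`
}

// Rewrite every dat:// anchor on the page.
for (const a of document.querySelectorAll('a[href^="dat://"]')) {
  a.href = rewriteDatUrl(a.getAttribute('href'))
}
```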
dat-bot
@dat-bot
fleeky ill check that out
fleeky in this case though, webamp needs to link to the dat url in the https url, it does http://webamp.site/?=url:dat:blaaaa
dat-bot
@dat-bot
pfrazee konobi: yeah, new version of hyperdrive will have metadata key-values
dat-bot
@dat-bot
fleeky RangerMauve : dat://520a7da1ac2d230f4e341801e52d229ffde3cf3e55852ae6d930b21657cecdbb/
fleeky so that works in beaker but not dat-gateway, any ideas?
dat-bot
@dat-bot
pfrazee okay good news, I managed to track down and solve the bug that was causing the latest release of Beaker to freeze
dat-bot
@dat-bot
Frando mafintosh andrewosh: trying out hyperswarm for hyperdrive replication. should i use @hyperswarm/guts or @hyperswarm/network? unclear on the relation between the two
Frando i guess /network is better, as it does allow for both lookup and announce simultaneously
dat-bot
@dat-bot
Frando the more concrete question is: say i have a "server" that wants to share many hyperdrives. should i create a new @hyperswarm/network() for each drive and announce a single topic in each, or create one network() for the server and network.join() multiple topics? i tried the latter first, but in network.on('connection') the details.peer is null, so i can't seem to get back the topic key (which i would need to replicate onto the right drive)
Frando oh i see, that's intended behavior. also if a client connects to the same server on two topics, the messages arrive on the same socket, even in the same on('data') event. that means the suggested way is to create a new network for every topic, which would mean one for every independently shared hyperdrive? but does that scale to e.g. a few thousand drives?
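For reference, the "one network(), many topics" variant Frando tried might look roughly like this. The API shapes (join() with announce/lookup flags, a 'connection' event receiving (socket, details)) are assumed from the hyperswarm of that era, so treat the option names as a sketch rather than a verified drop-in.

```js
// Sketch of "one network, many topics": one swarm instance announces the
// discovery key of every drive the server shares. API names are assumed
// from 2019-era hyperswarm; treat as pseudocode.
const hyperswarm = require('hyperswarm')

const swarm = hyperswarm()

function shareDrive (drive) {
  // Topics are 32-byte buffers; the drive's discovery key is the usual choice.
  swarm.join(drive.discoveryKey, { announce: true, lookup: false })
}

swarm.on('connection', (socket, details) => {
  // As noted above, details.peer is null for incoming connections, so the
  // topic isn't available here -- the stream has to be matched to the right
  // drive some other way (see the discovery-key sketch further down).
})
```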
dat-bot
@dat-bot
Frando or, option 3, use a single network on the "server" and match onto the right archives by parsing the first message in the received hypercore protocol stream?
dat-bot
@dat-bot
Frando but option 3 is not really viable with the current state of hypercore protocol without basically reimplementing it, i think. prepending the discovery key might work though
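One way the "match by discovery key" idea could look: hand every incoming socket a single hypercore-protocol stream and pick the archive when the remote announces a feed. This assumes the protocol stream of that era emits a 'feed' event with the discovery key and that a hyperdrive can replicate onto an existing stream via a stream option; both are assumptions from memory, not a verified implementation.

```js
// Sketch: route an incoming connection to the right hyperdrive by discovery
// key. Assumes hypercore-protocol's 'feed' event and replicate({ stream })
// support from that era; not a verified drop-in.
const protocol = require('hypercore-protocol')

// drives: Map from hex discovery key -> hyperdrive the server shares
function handleConnection (socket, drives) {
  const stream = protocol({ live: true })

  stream.on('feed', discoveryKey => {
    const drive = drives.get(discoveryKey.toString('hex'))
    if (!drive) return // unknown archive: ignore the request
    drive.replicate({ stream, live: true })
  })

  socket.pipe(stream).pipe(socket)
}
```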
dat-bot
@dat-bot
pfrazee new version of beaker is out (0.8.4) with a fix to the freezing issue plus better process-level sandboxing
dat-bot
@dat-bot
ctOS Top 10 Dat websites (from crawling the top 2.4 million websites and domains in January 2019): https://gist.github.com/da2x/885806c253daf24c51861d7bfae7d375 (there were only ten; 28 IPFS sites for comparison)
James D
@jamesgecko
Thank you for the freeze fix, pfrazee!
dat-bot
@dat-bot
pfrazee @jamesgecko unfortunately it looks like it's still happening but in another case. Currently debugging
Martin Heidegger
@martinheidegger
ctOS: Wow, awesome effort!
dat-bot
@dat-bot
ctOS here is the crawler for anyone who’s interested: https://gist.github.com/da2x/033dad3631f0622b8ccbf7e44b269808 (it’s kinda slow by design to not abuse public DNS resources)
dat-bot
@dat-bot
ctOS wait, what – dat uses Rabin fingerprinting? https://github.com/datproject/rabin
dat-bot
@dat-bot
okdistribute ctOS: an older version did
dat-bot
@dat-bot
ctOS so no longer used? that’s a shame. it’s really good at deduplicating mixed data. maybe more applicable to IPFS, though.
dat-bot
@dat-bot
noffle okdistribute: why was it removed?
okdistribute noffle: there was a rewrite of dat (many times) i guess it didn't make it back in
okdistribute & hyperdrive
noffle ah ok, so it wasn't that there was a problem with it
dat-bot
@dat-bot
Frando noffle: IIRC i read a comment somewhere in an old issue where someone said it caused significantly increased import times for large files, but i don't remember how significant that was
okdistribute yeah it def should be optional, if implemented
Frando still, at least having the option to do rolling hashes for deduplicated storage should really come back in at some point, at some layer (i'd say hypercore)
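To make the rolling-hash idea concrete, here is a toy content-defined chunker using a gear-style rolling hash. It only illustrates why this deduplicates well (an insertion early in a file shifts only the chunk boundaries near it, so most chunks stay identical); it is not the datproject/rabin module, and the constants are arbitrary.

```js
// Toy content-defined chunking with a gear-style rolling hash -- illustrates
// the idea behind Rabin-style dedup, not the actual datproject/rabin code.
const crypto = require('crypto')

// Per-byte "gear" table; old bytes fall out of the 32-bit hash as it shifts,
// giving an implicit window of ~32 bytes. A real chunker would use a fixed
// table so boundaries are stable across runs.
const GEAR = new Uint32Array(256).map(() => crypto.randomBytes(4).readUInt32LE(0))

const MASK = (1 << 13) - 1      // a boundary roughly every 8 KiB on average
const MIN = 2048, MAX = 65536   // hard limits on chunk size

function chunk (buf) {
  const chunks = []
  let start = 0
  let hash = 0
  for (let i = 0; i < buf.length; i++) {
    hash = ((hash << 1) + GEAR[buf[i]]) >>> 0
    const len = i - start + 1
    if ((len >= MIN && (hash & MASK) === 0) || len >= MAX) {
      chunks.push(buf.slice(start, i + 1))
      start = i + 1
      hash = 0
    }
  }
  if (start < buf.length) chunks.push(buf.slice(start))
  return chunks
}
```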
dat-bot
@dat-bot
konobi even at that layer you may want some extra space so that the deduplication can take things like parity into account
dat-bot
@dat-bot
ctOS have got statistics on fixed-size versus rolling Rabin chunking for a very large mixed dataset lying around somewhere
dat-bot
@dat-bot
ctOS Use a random DNS provider out of Cloudflare, Google, or Quad9 (instead of only relying on Google) – datprotocol/dat-dns#11
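The change ctOS links is about not always hitting one resolver; the selection part is trivial and could look like the sketch below. The DNS-over-HTTPS endpoint URLs are placeholders for the three providers named, not necessarily what dat-dns ended up using.

```js
// Not the dat-dns patch itself -- just the "pick a random provider" idea
// from the linked issue, with placeholder DNS-over-HTTPS endpoints.
const DOH_PROVIDERS = [
  'https://cloudflare-dns.com/dns-query',
  'https://dns.google/resolve',
  'https://dns.quad9.net/dns-query'
]

function pickResolver () {
  return DOH_PROVIDERS[Math.floor(Math.random() * DOH_PROVIDERS.length)]
}
```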
Rujak Ironhammer
@SirRujak
Hey everyone, I have been looking into the possibility of porting the dat protocol over to golang for a bit. I've started using the new "how dat works" documentation, but I am curious about the state of hyperswarm. Is it at the point where I should try implementing it instead of dns-discovery, or should I just go ahead and implement dns-discovery and wait for hyperswarm to stabilize? Any thoughts would be appreciated!
dat-bot
@dat-bot
okdistribute SirRujak have you seen the rust impl? https://datrs.yoshuawuyts.com/
RangerMauve
@RangerMauve
mafintosh, pfrazee: What were the main motivating factors for switching over to hyperswarm?
dat-bot
@dat-bot
jhand @RangerMauve, a brief description here: https://github.com/datproject/planning#networking-improvements. I remember seeing a more detailed one but will need to find...
RangerMauve
@RangerMauve
jhand: So, is it correct to say that hole punching is an important part of it?
dat-bot
@dat-bot
jhand yep!
Rujak Ironhammer
@SirRujak
okdistribute: I have yes! I haven't spent as much time with it as the nodejs version but I have been using it as a secondary implementation source when I get stuck somewhere