<rangermauve> Having a DHT to discover feeds related to pogs and then using a materialized view for the actual data seems a bit easier
<okdistribute> Then it's like AOL
<okdistribute> Lots of issues with that in real life, though.
<nettle> unless you have superpeers (like the Internet Archive or something) with many resources seeding that long tail
<rangermauve> nettle: could you elaborate on what you mean by long tail?
<nettle> rangermauve: the huge amount of relatively unpopular stuff
<nettle> like the 10ks of npm modules almost nobody uses
<nettle> lots of resources needed to host them
<nettle> but little demand
<rangermauve> Yeah, I think it'd be cool for stuff to be more ephemeral by default. Like, in this world I'd like to see people actively advertise that they're into pogs and then remove their entries from the DHT afterward. It kinda works with hypercore and hyperswarm right now if you create fake hypercores by hashing topics into keys and discovering peers that way. After you close the swarm it'll remove itself from the DHT to prevent false positives
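The "fake hypercores by hashing topics for keys" trick above can be sketched in a few lines. This is a minimal illustration, not hyperswarm's actual key-derivation scheme; the `topic-discovery:` prefix is an assumed namespace chosen for the example.

```python
import hashlib

def topic_key(topic: str) -> bytes:
    """Derive a 32-byte discovery key from a human-readable topic.

    Peers that hash the same topic string get the same key, so they can
    find each other on the DHT without any real hypercore existing.
    The namespace prefix is an illustrative choice, not a fixed protocol.
    """
    return hashlib.sha256(b"topic-discovery:" + topic.encode("utf-8")).digest()

# Everyone interested in pogs joins the swarm for this derived key.
key = topic_key("pogs")
```

Joining the swarm on `key` and leaving it when you lose interest gives the ephemeral-by-default behavior described: once the swarm is closed, the DHT entry expires and stops producing false positives.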
<rangermauve> And then finding data about others through links, and hopefully being able to load data from the peers you got the links through
<rangermauve> substack: So if I understand correctly, this would be a layer sitting on top of the raw peer discovery / connection code, and people would need to do some sort of query to pull data from the overlay?
<substack> connecting to peers based on topics would happen separately from queries, I think
<substack> but also, implementing some of those papers is a large amount of work
<substack> the combination of techniques is going to vary a lot based on the different types of apps people make
<substack> and these algorithms let you calculate a set of peers to connect to who are likely interested in the same topics
<substack> and with those sets you can also generate meta-topics to use with tools like hyperswarm
<substack> you can do each without the other
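One plausible reading of "generate meta-topics" is collapsing a computed set of shared topics into a single swarm key. This is a speculative sketch of that idea, not anything substack specified; the prefix and separator are assumptions of the example.

```python
import hashlib

def meta_topic(topics: list[str]) -> bytes:
    """Collapse a set of related topics into one 32-byte meta-topic key.

    Sorting first makes the result order-independent, so peers who
    arrive at the same topic set land on the same swarm regardless of
    the order they discovered the topics in. The prefix and NUL
    separator are illustrative choices, not a fixed protocol.
    """
    joined = "\x00".join(sorted(topics)).encode("utf-8")
    return hashlib.sha256(b"meta-topic:" + joined).digest()
```

A peer could then join the swarm for `meta_topic(...)` with hyperswarm just as it would for a single hashed topic.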
<substack> kappa-view-query was mentioned earlier, but I think kappa-sparse-query was meant?
<substack> for some applications, the payload for topic-discovery may be small enough to publish bits of topic metadata to the swarm tech
<substack> like for example, for peermaps each peer could publish a quadtree bitmap to the swarm with a 1 set for each geographical bucket that the peer has some data for
<substack> I haven't looked at dat swarm tech in a while but I remember with signal-hub you could publish small metadata payloads
<substack> looking forward to doing these kinds of experiments soon, after I finish up a few database things and get the ingest working
<rangermauve> 512 layers of buckets, right?
<substack> for 64 bytes
<substack> multiple levels could make sense but I'd need to think about it more
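The 64-byte bitmap idea above (512 bits, one per geographic bucket) can be sketched as follows. The 32x16 lon/lat grid here is purely an assumption for illustration; it is not peermaps' actual bucket layout, which substack notes was still being worked out.

```python
def bucket_index(lon: float, lat: float, cols: int = 32, rows: int = 16) -> int:
    """Map a coordinate to one of cols*rows (here 512) fixed buckets.

    The flat 32x16 grid is an assumed layout for this example, not the
    quadtree scheme peermaps would actually use.
    """
    x = min(int((lon + 180.0) / 360.0 * cols), cols - 1)
    y = min(int((lat + 90.0) / 180.0 * rows), rows - 1)
    return y * cols + x

def set_bucket(bitmap: bytearray, idx: int) -> None:
    """Set the bit for a bucket the peer has data for."""
    bitmap[idx // 8] |= 1 << (idx % 8)

def has_bucket(bitmap: bytes, idx: int) -> bool:
    """Check whether a peer's published bitmap covers a bucket."""
    return bool(bitmap[idx // 8] & (1 << (idx % 8)))

bitmap = bytearray(64)  # 512 bits: one per geographic bucket
set_bucket(bitmap, bucket_index(-122.4, 37.8))  # peer holds data near SF
```

A peer publishes its 64-byte bitmap as swarm metadata; others test `has_bucket` against the regions they care about before bothering to connect.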
<rangermauve> This might be relevant to folks here. Found it in the dat comm comm notes. :P https://blog.mozilla.org/blog/2020/03/30/were-fixing-the-internet-join-us/
<rangermauve> Basically, Mozilla is offering to pay teams of up to 4 people 2500 each to work on some sort of distributed web project
<decentral1se> just shows lack of funding opportunities once again... too little cash to spread around...
<decentral1se> (cash gone elsewhere, no doubt)