Website: https://solid.mit.edu/ and https://solidproject.org/ - Repos: https://github.com/solid/ - Forum: https://forum.solidproject.org/
May 27 2019 06:08
User @Mitzi-Laszlo unbanned @in1t3r
May 23 2019 06:49
@Mitzi-Laszlo banned @in1t3r
May 16 2019 09:49
@Mitzi-Laszlo banned @mediaprophet
Feb 01 2019 22:04
User @melvincarvalho unbanned @namedgraph_twitter
Feb 01 2019 21:49
@melvincarvalho banned @namedgraph_twitter
So yes, you're right, it can do hundreds of GET requests, but it only does those requests for the resources that matched the user's request, and it will only be hundreds if there were hundreds of matching files.
after quite some time of non-tech distractions, i was refreshing what i know of solid... i figured out some of my difficulties. i don't clearly understand the relation of URIs in the "knowledge graph" rdf space, the separation in turtle files in the solid "filesystem", and HTTP "transport" URLs. i have read the docs several times, and i still don't have it clear. i was trying to read the fetcher sourcecode, but i figured i can also try to ask here =)
I think the best way to think about it is as nodes in a graph: each node is a turtle resource, and each node can reference other nodes by way of a URI. You, as a user or client, interact with nodes and click through to other nodes using hyperlinks; if you fetch all of these nodes by following all the hyperlinks, you get a graph
@biappi Yes. The data web of RDF is like the hypertext web of HTML, except nodes are conceptual things instead of parts of a document, and links have well-defined semantics. They both use the ‘#’. In both cases the use of a URI a#b is an invitation to load document a and find out what it chooses to say about a#b. The document tries to have useful data about a#b and related things that those interested in a#b would also be interested in.
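To make the a#b convention concrete, here is a tiny sketch (the helper name and URIs are made up for illustration): everything before the ‘#’ names the document you fetch, and the fragment names a thing that document describes.

```typescript
// Hypothetical helper: for a hash URI like a#b, the document to load is the
// part before '#'; the fragment 'b' names something described inside it.
function documentPart(uri: string): string {
  const hash = uri.indexOf("#");
  return hash === -1 ? uri : uri.slice(0, hash);
}

// "#me" is a conceptual thing (a person); the card document describes it.
console.log(documentPart("https://alice.example/card#me"));
// → https://alice.example/card
```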
The Fetcher fetches each document into the Store. It's a quad store, so it keeps track of which document each (s,p,o) triple came from by in fact storing (s,p,o,document). The store acts as a local cache of part of the data web. So once it has loaded a few documents, you can query and navigate the RDF graph locally in memory.
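The quad idea can be sketched in a few lines. This is a toy model, not rdflib's actual internals: each triple is stored together with the document it was loaded from, and queries can filter on any of the four positions.

```typescript
// Toy quad store: every triple carries its source document as a 4th element.
type Quad = { s: string; p: string; o: string; doc: string };

class QuadStore {
  private quads: Quad[] = [];

  // Record a triple together with the document it came from.
  add(s: string, p: string, o: string, doc: string): void {
    this.quads.push({ s, p, o, doc });
  }

  // Wildcard query: null matches anything in that position.
  match(s?: string | null, p?: string | null,
        o?: string | null, doc?: string | null): Quad[] {
    return this.quads.filter(q =>
      (s == null || q.s === s) &&
      (p == null || q.p === p) &&
      (o == null || q.o === o) &&
      (doc == null || q.doc === doc));
  }
}

const store = new QuadStore();
const card = "https://alice.example/card"; // made-up example document
store.add("https://alice.example/#me", "foaf:name", "Alice", card);
store.add("https://alice.example/#me", "foaf:knows", "https://bob.example/#me", card);

// All triples that came from Alice's card:
console.log(store.match(null, null, null, card).length); // → 2
```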
You update the data web using UpdateManager.update() which sends a patch to the server, and makes sure the local Store is kept in sync. The UpdateManager can also subscribe to any changes in a document made by others and let each client app do live updates.
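The "send a patch and keep the local Store in sync" idea can be sketched like this. It's an assumption-laden simplification (the types and function are hypothetical, not rdflib's API): a patch is a set of deletions plus a set of insertions, and applying it locally mirrors what was sent to the server.

```typescript
// Hypothetical patch model: deletions and insertions applied to a local store.
type Quad = { s: string; p: string; o: string; doc: string };
type Patch = { deletions: Quad[]; insertions: Quad[] };

function applyPatch(store: Quad[], patch: Patch): Quad[] {
  const key = (q: Quad) => `${q.s}|${q.p}|${q.o}|${q.doc}`;
  const dead = new Set(patch.deletions.map(key));
  // Drop deleted quads, then append the inserted ones.
  return store.filter(q => !dead.has(key(q))).concat(patch.insertions);
}

const doc = "https://alice.example/card"; // made-up example document
let store: Quad[] = [{ s: "#me", p: "foaf:name", o: "Alice", doc }];

// Rename Alice to Alicia: delete the old triple, insert the new one.
store = applyPatch(store, {
  deletions: [{ s: "#me", p: "foaf:name", o: "Alice", doc }],
  insertions: [{ s: "#me", p: "foaf:name", o: "Alicia", doc }],
});

console.log(store.length, store[0].o); // → 1 Alicia
```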
@fabiancook @timbl thanks! you gave me the key piece of understanding i was missing. i didn't consider the concept of "document" as being part of the "rdf data web"... i was confused because, ignoring documents, i thought every turtle file could be about every resource, and of course now i have more understanding of the fourth element of the rdf quads =)
i love that "i am an idiot" feeling when the right piece of knowledge fits the slot and everything makes more sense =)
@timbl are you by any chance going to be in town for our event at May 13? I have received tremendous interest from local developers here
Hi Li Lu, Tim will be speaking in Toronto around that time but best not to plan around his attendance of Solid Toronto, happy to set up another chat and go through the Solid Toronto plans and possible content with you if that's helpful
Hey Mitzi! Thank you for the offer! We are planning the event assuming Tim’s absence. Just can’t help the excitement that he might be in town. We will have the content flow figured out by midweek and would love to get your feedback once the flow is set! Thank you!
Hey, just got off a call with John and Kelly from @RubenVerborgh connecting us over POD sync/replication (across different POD providers) and POD encryption (so POD providers can't read data). I have things ready to do this now, I just need to pair with somebody who can get me acquainted with codebase nuances... who is interested in Solid sync & encryption!?
oh hey... it's mark from gundb
What do you think about caching private data in localStorage? The way I understand localStorage, it shares its data with all apps that have the same domain. This would mean that if the user has installed two different apps on his pod (or uses them from some kind of app store), they could access each other's data regardless of their unique access constraints. Furthermore, they could use the login credentials stored by solid-auth-client even if they weren't given any permission
I've read here that "Two actors in the Web platform that share an origin are assumed to trust each other and to have the same authority", but I don't really think this applies to Solid, as giving unique permission control per app is (imo) a key feature of it. That they share the same host doesn't mean that they even know of each other, if we think of storing apps on one's own pod or using some kind of app store
But Solid relies on the Origin as well, so if two apps share an origin they are actually one app from Solid's point of view
Yes, I forgot that. But this suggests
... that we should host each app on its own (sub) domain. So hosting all the apps I use on my pod is a bad idea then...?
It is reasonable to host apps made by the same person on the same domain, as you are trusting that person, I suppose. Unless you give the two apps different access; then they each have to be on different domains
That design is forced on us by the browser Same Origin Policy
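The point about the Same Origin Policy can be illustrated with a toy model of per-origin localStorage (a deliberate simplification; the function and origins are made up): storage is keyed by origin only, so two apps served from the same origin share one bucket no matter what Solid-level permissions say.

```typescript
// Toy model: browser storage is partitioned by origin, nothing finer.
const storageByOrigin = new Map<string, Map<string, string>>();

function localStorageFor(origin: string): Map<string, string> {
  if (!storageByOrigin.has(origin)) storageByOrigin.set(origin, new Map());
  return storageByOrigin.get(origin)!;
}

// App A and App B are both hosted at https://pod.example
const appA = localStorageFor("https://pod.example");
const appB = localStorageFor("https://pod.example");
appA.set("secret-token", "abc123");
console.log(appB.get("secret-token")); // → abc123 — B sees A's data

// The same app hosted on its own subdomain gets a separate bucket.
const appC = localStorageFor("https://appc.pod.example");
console.log(appC.get("secret-token")); // → undefined
```

This is why giving two apps different access implies giving them different origins.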
Ah no it's basically my undergrad students being assigned the task to build a blog app on Solid
Bonus point: demonstrate consuming of data created by another application :)
@Otto-AA the per-origin localStorage cache is working now in solid-rest-browser
Where can I ask a fundamental newbie question such as: why distribute data instead of creating a coop to own and control data, and sell memberships in it? (solve data ownership with ownership rather than technology). Food for thought, or indigestion (sorry)?
Better yet, support overpayments and change the available solutions for those who do some sort of work online, then use transactional incomes to ensure no one needs to sell personal data whatsoever... Billions of people, trillions of things; shouldn't take much to get an hourly wage for good work, other than the investment required to make the technology needed for said new forms of taxable revenue option. Obviously, the issues people have had since exodus in trying to get paid for old types of work, like that of a stonemason, are now fairly well resolved. Perhaps the problem is that we need to bring about a global web strike!
gitter is a bit difficult to follow when discussions get lengthy
Should say, micropayments, not overpayments. So many jobs to do, there's a glut of jobs, no shortage of them. Just a complete lack of suitable financial instruments and related support infrastructure.
Floor price for a micropayment should be as low as possible, incorporating the cost of the energy consumed to support it.
https://digiconomist.net/bitcoin-energy-consumption won't work. I've been working to figure out how to share the revenue sourced from, say, a 2c app across hundreds of contributors providing thousands of hours of work to make it happen, who, say, want to get paid $50 per hour (only) for having done the work (a very basic example); then the app might be made free, via an approach I called last year "software as a utility"...
Noting, many models can be defined using rdf / semweb goodness...