Eric Prud'hommeaux
@ericprud
@acoburn , if i understand, Trellis won't respect containment triples on a Container PUT even if the client sends an appropriate If-Match header?
Aaron Coburn
@acoburn
that is correct. containment triples are server-managed, so a client is not able to manipulate them at all. Hence, the server just ignores any attempt by a client to set them

for instance: for the resource:
</container/> ldp:contains </container/a>, </container/b>, </container/c> .

if a client sends a PUT with:
</container/> dc:title "My container" ;
ldp:contains </container/d> .

The resulting resource will be:

</container/> dc:title "My container" ;
ldp:contains </container/a>, </container/b>, </container/c> .

(the containment triples will be unchanged)
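As a rough sketch of the full exchange (the ETag value and the 204 status here are illustrative, and the usual dc: and ldp: prefixes are assumed):

PUT /container/ HTTP/1.1
Host: example.org
Content-Type: text/turtle
If-Match: "abc123"

</container/> dc:title "My container" ;
    ldp:contains </container/d> .

HTTP/1.1 204 No Content

A subsequent GET would show the new dc:title alongside the original containment triples.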
Henry Story
@bblfish

That seems a reasonable difference. Would probably be useful to provide feedback and see if that MUST is really necessary.

(I can't remember precisely, but I have the feeling that at the end of the LDP WG's life there was a discussion about containment triples living in a separate, linked-to resource, to avoid having to PUT anything into the container.)

Aaron Coburn
@acoburn
Internally, containment data is managed in a separate named graph, so there is already some level of partitioning happening. When a user manipulates data via PUT/POST/PATCH those operations only apply to the user-managed named graph.
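Purely for illustration (the graph names are hypothetical, not Trellis's actual internals), the partitioning looks like this in TriG:

@prefix dc: <http://purl.org/dc/terms/> .
@prefix ldp: <http://www.w3.org/ns/ldp#> .

# client-managed graph: the only one PUT/POST/PATCH can touch
<#user-managed> {
    </container/> dc:title "My container" .
}

# server-managed graph: updated only when child resources are created or deleted
<#server-managed> {
    </container/> ldp:contains </container/a>, </container/b>, </container/c> .
}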
Eric Prud'hommeaux
@ericprud
yeah, i like a bright line there
it could barf if it sees membership triples inconsistent with its metadata, but that's a fair amount of effort
Aaron Coburn
@acoburn
on write, there are consistency checks around ldp membership triples; if those checks fail, a 409 Conflict is triggered
Eric Prud'hommeaux
@ericprud
gotcha, tx
Sarven Capadisli
@csarven
Servers MUST NOT allow HTTP POST, PUT and PATCH to update a container’s containment triples; if the server receives such a request, it MUST respond with a 409 status code. [Source]
Sarven Capadisli
@csarven
So, @acoburn I thought we agreed that the server will reject a request that is trying to change the containment triples through the container. The only way to change the containment is indirect: by either adding or removing a member of the container.
Especially with PUT, it shouldn't do a partial update. That's different from ignoring.
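To contrast with the "ignore" exchange above, a sketch of the behaviour the spec text calls for (same hypothetical container and prefixes):

PUT /container/ HTTP/1.1
Host: example.org
Content-Type: text/turtle

</container/> dc:title "My container" ;
    ldp:contains </container/d> .

HTTP/1.1 409 Conflict

The request is refused as a whole because its body would change the containment triple set.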
Sarven Capadisli
@csarven
I'm hungry and sleepy. Is it weekend yet?
Justin Bingham
@justinwb
:laughing:
Eric Prud'hommeaux
@ericprud
not for justin
Quit slacking, @justinwb !
Aaron Coburn
@acoburn

@csarven the key phrase is “trying to change the containment triples”. A client may very well not be “trying to change” these triples. It may be that between a GET and a PUT the containment triples have changed because of the actions of some other agent.

The issue largely boils down to architecture — if you have a distributed storage layer and if you want to be strict about this, you kind of have to lock your entire cluster, which results in a huge performance penalty.

Alternatively, the server can ignore containment triples and there is no need for locking.

If a client explicitly wants to avoid the “lost update problem”, that’s what ETags are for
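To illustrate (ETag values made up), the conditional-request pattern is:

GET /container/ HTTP/1.1
Host: example.org

HTTP/1.1 200 OK
ETag: "v1"

PUT /container/ HTTP/1.1
Host: example.org
If-Match: "v1"
Content-Type: text/turtle

</container/> dc:title "My container" .

If another agent has modified the container in the meantime, the ETag no longer matches and the server answers 412 Precondition Failed, so the client can re-fetch and retry instead of silently overwriting.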

Henry Story
@bblfish
that makes sense. So people are trying to do a PUT to add metadata? I would tend to think that that should be fetched from the resources themselves. A simple idea would be to add a SPARQL query to fill in those details.
Perhaps the metadata of those resources should be in their Link: <doc.meta>; rel=meta resource.
The only thing that I think makes sense when editing the container is adding properties on the ldp:Container itself, the way it is done with indirect containers.
Sarven Capadisli
@csarven

@acoburn The request semantics of PUT are simply that the client intends to replace the resource state. What may happen before their PUT (e.g. a change of containment information since their last GET) is orthogonal. I used "trying" informally in chat for "requesting", but let's stick to the language in the current spec for now -- am open to paraphrasing.

Re your example requests in https://gitter.im/solid/specification?at=600ae96c36db01248a95544b , servers must reject the PUT request because it is an attempt to modify/update the containment triple set: 409. If the server were to allow changes to the containment (with a direct update to the container), the resource state would end up having either a) orphaned resources (by removal of containment statements), b) references to non-existent resources (by addition of containment statements), or both. a) conflicts with another requirement, and it is also unclear (unspecified) whether those dangling resources are completely unmanageable (re lifecycle) from here on; b), although it seems harmless, creates an additional burden on the server, e.g. it needs to respond to requests for these non-existent resources (and possibly their non-existent auxiliary resources).

My understanding of the prior discussions (especially in the PR) was based on the above - happy to rephrase if the choice of words can be better. What I'd like to know right now, for starters, is 1) whether we have the same interpretation of the current spec text, and 2) whether there is a request to change the current requirement in light of implementation experience or a better understanding of the matter.

If we need to distinguish between the kinds of update: 1) modifying existing containment statements, 2) adding new containment statements or removing containment statements, I'm open to that as well. The text "update a container’s containment triples" was intended to cover both cases, i.e. any change to the set of containment statements.

Alain Bourgeois
@bourgeoa
@csarven I feel that part of the problem lies in the use of the word "containment". Could something like "Turtle image/description/representation of the container's content" be easier to understand?
Sarven Capadisli
@csarven
I don't follow. Containment triples are only really about statements involving ldp:contains - those are deemed to be server-managed. A client can't directly alter them.
Aaron Coburn
@acoburn

@csarven my position is that a resource’s state consists of two things: server-managed data and client-managed data. A client can manipulate all of the client-managed data (with some restrictions) and none of the server managed data.

For the most part, server-managed data becomes part of HTTP headers while client-managed data is RDF. The challenge with containment triples is that they appear in the body of the RDF, so they look like client-managed triples.

If we dealt with quads, it would be more obvious that the data were partitioned, but it’s ambiguous with triples.

What I think we both agree on is this: a client MUST NOT be able to change a target container’s containment triples via PUT. The difference is really about whether attempting to do so results in a 4xx or just ignoring that part.

I would argue that both patterns should be possible. Requiring a 4xx response is basically a non-starter for implementations with a distributed storage layer because you’d need to lock on the entire container on every write. In the worst case, you’d need to lock the entire server on every write.

Sarven Capadisli
@csarven

@acoburn "Ignoring" may not be accurate, especially if 200 or 204 are used for the response. That tells the client that the representation it provided is now the latest resource state - which would include the requested changes. I can see that "ignoring" could work if the response status was along the lines of a 202 so that there is an out of band step that can make sure the integrity of the resource state is maintained - I'm not saying this is a good idea or making it a proposal.. just want to illustrate but perhaps something of a consideration for implementations that need to lock the container state.

Another approach is that if servers allow PUT to update containers then they are willing to handle potential conflicts. If servers don't want to get into that, they can simply omit PUT on containers.

"Ignoring" may require a 200 with payload as response because server may need (or want?) to indicate the state as the result. Perhaps in addition to ETag.

Sarven Capadisli
@csarven

Does anyone have any new suggestions or preferences re solid/specification#215 -- need to make this happen very soon.
Sarven Capadisli
@csarven

@/all To get the most out of spec/panel meetings, I propose to prioritise meeting agenda items along these lines:

  • Announcements: General announcements, agenda review, call for scribe..
  • ReviewMinutes: Review/approve previous meeting minutes
  • ContinueDiscussion: Continue unresolved items from previous meeting
  • PullRequests: Review open pull requests
  • Issues: Take up existing issues
  • Discussion: Community feedback and discussion

This is not a strict order and there is no strict time allocation for each. The group should make a reasonable effort to touch all items with sufficient time, and make sure to mark unfinished discussions to be taken up in future meetings.

If there is something else that should be covered or handled differently, please say so. We can update when there are significant changes to the way meetings are held.

Fred Gibson
@gibsonf1
@acoburn & @csarven I think state level permissions are needed to solve problems like system vs user data on the same resource, where a state is a uri representing a triple. For example, the following triples (and others) are standard for a solid user:
<https://frederick.trinpod.us/@>  
    solid:account <https://frederick.trinpod.us/> ;
    solid:oidcIssuer "https://trinpod.us"^^<xsd:string> ;
    solid:privateTypeIndex frederick:t_72 ;
    solid:publicTypeIndex frederick:t_6x ;
    space:preferencesFile frederick:t_8d ;
    space:storage <https://frederick.trinpod.us/> ...
A user can easily destroy their pod by editing triples like these, so in our case we put system acl control on these states and give public read permission
Sarven Capadisli
@csarven
@gibsonf1 That's fair. I've noted https://github.com/solid/specification/issues/67#issuecomment-766962934 -- @acoburn WDYT?
Fred Gibson
@gibsonf1
On a similar note - how would a system set an ACL on system-required containers like / and /inbox/ etc. such that the user could not delete the container and destroy their pod?
It would be great if the user could have append/read on those containers, but the current Solid spec would not allow the user to create subcontainers etc. in that case
Fred Gibson
@gibsonf1
Our workaround for now is that we'll have a system Control ACL on system-required containers, and in all cases where the system has a Control ACL, the user will not be permitted to make any changes
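Roughly, in WAC terms (the system agent WebID is hypothetical, and the owner WebID is taken from the earlier example):

@prefix acl: <http://www.w3.org/ns/auth/acl#> .

# pod owner can read and append to the inbox, but not rewrite or delete it
<#owner> a acl:Authorization ;
    acl:agent <https://frederick.trinpod.us/@> ;
    acl:accessTo </inbox/> ;
    acl:mode acl:Read, acl:Append .

# only the system agent holds Write/Control, so only it can delete the container or change this ACL
<#system> a acl:Authorization ;
    acl:agent <https://trinpod.us/system#agent> ;
    acl:accessTo </inbox/> ;
    acl:mode acl:Read, acl:Write, acl:Control .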
Yvo Brevoort
@ylebre
I have a setup where the containers are created if they don't exist, so that the pod stays in a sane state. Not sure what the best approach is to this though.
Aaron Coburn
@acoburn
@csarven mandating particular containers seems fine for a particular app or Pod server, but I can’t see how that would be something for the spec. A linked data client should just “follow its nose” to find these locations
Sarven Capadisli
@csarven
@acoburn Hmm? Sorry, not sure what you're referring to. Maybe the threads are getting mixed up?
Fred Gibson
@gibsonf1
@acoburn I think there are some mandatory containers, like inbox?
If not, that's great - so discovery of that URI is via the ldp:inbox predicate. I guess the issue would then be for the implementation to ensure that resource stays there
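For illustration (the profile URI is from the earlier example; the inbox URI is made up), the discovery is just one link to follow:

@prefix ldp: <http://www.w3.org/ns/ldp#> .
<https://frederick.trinpod.us/@> ldp:inbox <https://frederick.trinpod.us/inbox/> .

so a client never has to assume a fixed /inbox/ path - it dereferences the profile and follows ldp:inbox.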
Aaron Coburn
@acoburn
@gibsonf1 there are conventions for various containers. They certainly aren’t mandatory
as for protecting the resources in those locations, that would be entirely an implementation decision. One could also argue that users should be in control of their own data, and if they want to delete a container called /inbox/, they should be able to do that
Justin Bingham
@justinwb

as for protecting the resources in those locations, that would be entirely an implementation decision. One could also argue that users should be in control of their own data, and if they want to delete a container called /inbox/, they should be able to do that

agree - this isn’t something for the core protocol to determine, but a matter of user choice.

if the user chooses to maintain some enforcement, a scheme based on resource name should be avoided. better to focus on the composition of the data itself
@gibsonf1 if you’re interested in tree-centric validation you should take a read through https://shapetrees.org/TR/specification/index.html
Fred Gibson
@gibsonf1
@justinwb Yes, the shapetrees are good, but if the user deletes the designated ldp:inbox container, let's say by accident, then that user will be pretty unhappy that suddenly all their correspondence is gone
When it comes to the masses in the millions, most people have very little geek ability, and they will click buttons and things will happen that they didn't plan, which is why all major mass-user services protect against inadvertent user actions
Justin Bingham
@justinwb
my point is that a shape tree gives you the structure to identify the container for an inbox as required in a given tree hierarchy
Sarven Capadisli
@csarven
Inbox (for anything) is not a mandatory container in the Solid Protocol. The only required container in the Solid Protocol is the root container, which is of type pim:Storage. Root containers can't be deleted.
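For reference, a sketch of how a client can recognise the storage root (the host is the hypothetical one from earlier; the Link relation is the one the Solid Protocol uses to advertise pim:Storage):

GET / HTTP/1.1
Host: frederick.trinpod.us

HTTP/1.1 200 OK
Link: <http://www.w3.org/ns/pim/space#Storage>; rel="type"
Content-Type: text/turtle

A client walking up the path hierarchy can stop as soon as it sees that rel="type" link, and per the above, the server will refuse to delete that container.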