csarven on minutes-template
Init minutes template
for instance, for the resource:
</container/> ldp:contains </container/a>, </container/b>, </container/c> .
if a client sends a PUT with:
</container/> dc:title "My container" ;
ldp:contains </container/d> .
The resulting resource will be:
</container/> dc:title "My container" ;
ldp:contains </container/a>, </container/b>, </container/c> .
That seems a reasonable difference. Would probably be useful to provide feedback and see if that MUST is really necessary.
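The replace-but-preserve behaviour in the example above could be sketched roughly like this (triples as plain tuples; the function and variable names are illustrative, not from any spec):

```python
# Hypothetical sketch: on PUT, the server replaces client-managed triples
# with the payload but keeps its own server-managed containment triples.

LDP_CONTAINS = "ldp:contains"

def apply_put(server_triples, client_triples):
    """Replace resource state with the client's payload, except that
    containment triples stay server-managed and are carried over."""
    containment = {t for t in server_triples if t[1] == LDP_CONTAINS}
    client_managed = {t for t in client_triples if t[1] != LDP_CONTAINS}
    return containment | client_managed

server = {
    ("</container/>", "ldp:contains", "</container/a>"),
    ("</container/>", "ldp:contains", "</container/b>"),
    ("</container/>", "ldp:contains", "</container/c>"),
}
client = {
    ("</container/>", "dc:title", "My container"),
    ("</container/>", "ldp:contains", "</container/d>"),  # dropped by the server
}
result = apply_put(server, client)
```

This matches the example: dc:title comes through, the client's extra ldp:contains triple does not.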
(((I can't remember precisely, but I have the feeling that at the end of the LDP WG's life there was a discussion about containment triples living in a separate, linked-to resource, to avoid having to PUT anything into the container itself.)))
@csarven the key phrase is “trying to change the containment triples”. A client may very well not be “trying to change” these triples. It may be that between a GET and a PUT the containment triples have changed because of the actions of some other agent.
The issue largely boils down to architecture — if you have a distributed storage layer and if you want to be strict about this, you kind of have to lock your entire cluster, which results in a huge performance penalty.
Alternatively, the server can ignore containment triples and there is no need for locking.
If a client explicitly wants to avoid the “lost update problem”, that’s what ETags are for
Link: <doc.meta>; rel=metaresource.
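The ETag-based guard against the lost update problem could be sketched like this on the server side (the function name and in-memory state are illustrative; 412 and 428 are the standard Precondition Failed / Precondition Required status codes):

```python
# Hypothetical sketch: the client sends If-Match with the ETag from its
# last GET; the server refuses the PUT if the resource changed since.

def conditional_put(current_etag, if_match, apply_update):
    if if_match is None:
        return 428  # Precondition Required: force clients to send If-Match
    if if_match != current_etag:
        return 412  # Precondition Failed: resource changed since last GET
    apply_update()
    return 204

state = {"etag": '"v1"'}
ok = conditional_put('"v1"', '"v1"', lambda: state.update(etag='"v2"'))
stale = conditional_put('"v2"', '"v1"', lambda: None)  # lost update avoided
```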
@acoburn The request semantics of PUT is simply that the client intends to replace the resource state. Whatever may happen before their PUT (e.g. a change of containment information since their last GET) is orthogonal. I used "trying" informally in chat for "requesting", but let's stick to the language in the current spec for now -- I'm open to paraphrasing.
Re your example requests in https://gitter.im/solid/specification?at=600ae96c36db01248a95544b , servers must reject the PUT request because it is an attempt to modify/update the containment triple set. 409. If the server were to allow changes to the containment (with a direct update to the container), the resource state would end up having either a) orphaned resources (by removal of containment statements), b) references to non-existent resources (by addition of containment statements), or both. a) conflicts with another requirement, and it is also unclear (unspecified) whether those dangling resources become completely unmanageable (re lifecycle) from then on; b), although it seems harmless, creates an additional burden on the server, e.g. it needs to respond to these non-existent resources (and possibly their non-existent auxiliary resources).
My understanding of the prior discussions (especially in the PR) was based on the above - happy to rephrase if choice of words can be better. What I'd like to know right now for starters is 1) if we have the same interpretation of the current spec text, and 2) whether there is a request to change the current requirement in light of implementation experience or better understanding of the matter.
If we need to clarify between the updates: 1) modifying existing containment statements, 2) adding new containment statements or removing containment statements, I'm open to that as well. The text "update a container's containment triples" was intended to cover both cases, i.e. any change to the set of containment statements.
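The strict behaviour described above (409 whenever the payload's containment set differs from the server's, whether by modification, addition, or removal) could be sketched as follows; all names are illustrative:

```python
# Hypothetical sketch: compare the containment triple set in the PUT
# payload against the server's current set, and reject any difference.

LDP_CONTAINS = "ldp:contains"

def check_put(server_triples, payload_triples):
    server_containment = {t for t in server_triples if t[1] == LDP_CONTAINS}
    payload_containment = {t for t in payload_triples if t[1] == LDP_CONTAINS}
    if payload_containment != server_containment:
        return 409  # Conflict: attempt to change the containment triple set
    return 204  # payload accepted as the new resource state

server = {("</c/>", LDP_CONTAINS, "</c/a>")}
same = check_put(server, server | {("</c/>", "dc:title", "C")})     # unchanged set
added = check_put(server, server | {("</c/>", LDP_CONTAINS, "</c/b>")})  # addition
removed = check_put(server, {("</c/>", "dc:title", "C")})            # removal
```

A set comparison like this covers both cases in one check, which is what "update a container's containment triples" was read to mean above.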
@csarven my position is that a resource’s state consists of two things: server-managed data and client-managed data. A client can manipulate all of the client-managed data (with some restrictions) and none of the server managed data.
For the most part, server-managed data becomes part of HTTP headers while client-managed data is RDF. The challenge with containment triples is that they appear in the body of the RDF, so they look like client-managed triples.
If we dealt with quads, it would be more obvious that the data were partitioned, but it’s ambiguous with triples.
What I think we both agree on is this: a client MUST NOT be able to change a target container’s containment triples via PUT. The difference is really about whether attempting to do so results in a 4xx or just ignoring that part.
I would argue that both patterns should be possible. Requiring a 4xx response is basically a non-starter for implementations with a distributed storage layer because you'd need to lock the entire container on every write. In the worst case, you'd need to lock the entire server on every write.
@acoburn "Ignoring" may not be accurate, especially if 200 or 204 is used for the response. That tells the client that the representation it provided is now the latest resource state - which would include the requested changes. I can see that "ignoring" could work if the response status was along the lines of a 202, so that there is an out-of-band step that can make sure the integrity of the resource state is maintained. I'm not saying this is a good idea or making it a proposal - just trying to illustrate, but perhaps it's a consideration for implementations that need to lock the container state.
Another approach is that if servers allow PUT to update containers then they are willing to handle potential conflicts. If servers don't want to get into that, they can simply omit PUT on containers.
"Ignoring" may require a 200 with a payload as the response, because the server may need (or want?) to indicate the resulting state. Perhaps in addition to an ETag.
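That "ignore but tell the client" variant could be sketched like this: the server drops the payload's containment triples, merges in its own, and answers 200 with the resulting representation and a fresh ETag so the client is not misled about the final state. The serialization and ETag scheme here are purely illustrative:

```python
# Hypothetical sketch: PUT that ignores containment triples but returns
# the actual resulting state (200 + body + ETag) instead of 204.

import hashlib

LDP_CONTAINS = "ldp:contains"

def put_ignoring_containment(server_triples, payload_triples):
    containment = {t for t in server_triples if t[1] == LDP_CONTAINS}
    result = containment | {t for t in payload_triples if t[1] != LDP_CONTAINS}
    # Toy serialization: one "s p o" line per triple, sorted for stability.
    body = "\n".join(sorted(" ".join(t) for t in result))
    etag = '"%s"' % hashlib.sha256(body.encode()).hexdigest()[:8]
    return 200, body, etag

status, body, etag = put_ignoring_containment(
    {("</c/>", LDP_CONTAINS, "</c/a>")},
    {("</c/>", "dc:title", "C"), ("</c/>", LDP_CONTAINS, "</c/b>")},
)
```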
@/all To get the most out of spec/panel meetings, I propose to prioritise meeting agenda items along these lines:
This is not a strict order and there is no strict time allocation for each. The group should make a reasonable effort to touch all items with sufficient time, and make sure to mark unfinished discussions to be taken up in future meetings.
If there is something else that should be covered or handled differently, please say so. We can update when there are significant changes to the way meetings are held.
<https://frederick.trinpod.us/@>
    solid:account <https://frederick.trinpod.us/> ;
    solid:oidcIssuer "https://trinpod.us"^^<xsd:string> ;
    solid:privateTypeIndex frederick:t_72 ;
    solid:publicTypeIndex frederick:t_6x ;
    space:preferencesFile frederick:t_8d ;
    space:storage <https://frederick.trinpod.us/> ...
as for protecting the resources in those locations, that would be entirely an implementation decision. One could also argue that users should be in control of their own data, and if they want to delete a container called /inbox/, they should be able to do that
agree - this isn’t something for the core protocol to determine, but a matter of user choice.