Sarven Capadisli
@csarven

@/all To get the most out of spec/panel meetings, I propose to prioritise meeting agenda items along these lines:

  • Announcements: General announcements, agenda review, call for scribe.
  • ReviewMinutes: Review/approve previous meeting minutes
  • ContinueDiscussion: Continue unresolved items from previous meeting
  • PullRequests: Review open pull requests
  • Issues: Take up existing issues
  • Discussion: Community feedback and discussion

This is not a strict order, and there is no strict time allocation for each. The group should make a reasonable effort to touch on all items with sufficient time, and make sure to mark unfinished discussions to be taken up in future meetings.

If there is something else that should be covered or handled differently, please say so. We can update when there are significant changes to the way meetings are held.

Fred Gibson
@gibsonf1
@acoburn & @csarven I think state level permissions are needed to solve problems like system vs user data on the same resource, where a state is a uri representing a triple. For example, the following triples (and others) are standard for a solid user:
<https://frederick.trinpod.us/@>
    solid:account <https://frederick.trinpod.us/> ;
    solid:oidcIssuer "https://trinpod.us"^^xsd:string ;
    solid:privateTypeIndex frederick:t_72 ;
    solid:publicTypeIndex frederick:t_6x ;
    space:preferencesFile frederick:t_8d ;
    space:storage <https://frederick.trinpod.us/> ...
A user can easily destroy their pod by editing triples like these, so in our case we put system ACL control on these states and give public read permission.
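For illustration, a WAC ACL along the lines Fred describes might be sketched as follows. Only the WAC vocabulary terms (acl:Authorization, acl:agent, acl:accessTo, acl:mode, acl:agentClass) are standard; the system agent IRI is hypothetical:

```turtle
@prefix acl:  <http://www.w3.org/ns/auth/acl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Hypothetical system agent keeps full control over the protected resource.
<#systemControl>
    a acl:Authorization ;
    acl:agent <https://trinpod.us/system#agent> ;    # assumed system WebID
    acl:accessTo <https://frederick.trinpod.us/@> ;
    acl:mode acl:Read, acl:Write, acl:Control .

# Everyone else, including the pod owner, gets read-only access.
<#publicRead>
    a acl:Authorization ;
    acl:agentClass foaf:Agent ;
    acl:accessTo <https://frederick.trinpod.us/@> ;
    acl:mode acl:Read .
```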
Sarven Capadisli
@csarven
@gibsonf1 That's fair. I've noted https://github.com/solid/specification/issues/67#issuecomment-766962934 -- @acoburn WDYT?
Fred Gibson
@gibsonf1
On a similar note, how would a system set an ACL on system-required containers like / and /inbox/ etc. such that the user could not delete the container and destroy their pod?
It would be great if the user could have append/read on those containers, but the current Solid spec would not allow the user to create subcontainers etc. in that case
Fred Gibson
@gibsonf1
Our workaround for now is that we'll have a system control ACL on system-required containers, and in all cases where the system has a control ACL, the user will not be permitted to make any changes
Yvo Brevoort
@ylebre
I have a setup where the containers are created if they don't exist, so that the pod stays in a sane state. Not sure what the best approach is to this though.
Aaron Coburn
@acoburn
@csarven mandating particular containers seems fine for a particular app or Pod server, but I can’t see how that would be something for the spec. A linked data client should just “follow its nose” to find these locations
Sarven Capadisli
@csarven
@acoburn Hmm? Sorry, not sure what you're referring to. Maybe the threads are getting mixed up?
Fred Gibson
@gibsonf1
@acoburn I think there are some mandatory containers, like inbox?
If not, that's great - so discovery of that URI from the ldp:inbox predicate. I guess the issue would then be for the implementation to protect that resource being there
Aaron Coburn
@acoburn
@gibsonf1 there are conventions for various containers. They certainly aren’t mandatory
as for protecting the resources in those locations, that would be entirely an implementation decision. One could also argue that users should be in control of their own data, and if they want to delete a container called /inbox/, they should be able to do that
Justin Bingham
@justinwb

as for protecting the resources in those locations, that would be entirely an implementation decision. One could also argue that users should be in control of their own data, and if they want to delete a container called /inbox/, they should be able to do that

agree - this isn’t something for the core protocol to determine, but a matter of user choice.

if the user chose to maintain some enforcement, a scheme based on resource name should be avoided. better to focus on the composition of the data itself
@gibsonf1 if you’re interested in tree-centric validation you should take a read through https://shapetrees.org/TR/specification/index.html
Fred Gibson
@gibsonf1
@justinwb Yes, the shape trees are good, but if a user deletes the designated ldp:inbox container, let's say by accident, then that user will be pretty unhappy that suddenly all their correspondence is gone
When it comes to the masses in the millions, most people have very little geek ability, and they will click buttons and things will happen that they didn't plan, which is why all major mass-user services protect against inadvertent user actions
Justin Bingham
@justinwb
my point is that a shape tree gives you the structure to identify the container for an inbox as required in a given tree hierarchy
Sarven Capadisli
@csarven
Inbox (for anything) is not a mandatory container in Solid Protocol. The only required container in Solid Protocol is the root container which is of type pim:Storage. Root containers can't be deleted.
Sarven Capadisli
@csarven
Having said that, Inbox could be required in the Solid ecosystem in context of certain specifications eg. https://www.w3.org/TR/activitypub/ requires LDN inbox for server-server interactions (for federated servers). When AP is introduced into the Solid ecosystem, we can set constraints on a resource (eg. WebID/actor profile document) with a shape to persist the inbox relation, as well as the Inbox container/collection itself.
Sarven Capadisli
@csarven

@/all What does everyone think about only using quoted access modes for the WAC-Allow header? The current ABNF for WAC-Allow makes the quotes optional, e.g. user="read" and user=read are both valid. This was intentional. However, as brought up by @edwardsph in testing, if we take the ABNF as is, a parser that's following the ABNF as gospel could potentially generate mismatched quotes, and so it is not fun for clients to bother with something like user="read public="". This would be silly and in most situations it would probably be considered a bug by implementations... but it might be better to tighten this up, so as to simplify clients' parsing.

Aside: when I originally wrote the ABNF, I followed Content-Type's lead, i.e. allowing both quoted and unquoted values. However, those don't say anything about mismatched cases, or factor them in.

ABNF is supposed to be "aspirational".. so, we have a couple of choices. We can do one of the following (relatively complex to simple order):

  1. Update current ABNF to address unmatching quotes
  2. Update current ABNF to use quoted values only
  3. Leave ABNF as is (allowing quoted and unquoted) but add surrounding text for implementers to make sure not to generate unmatched quotes.

I think 2 or 3 will be fine. Slight preference for 2. 1 is overkill... and may still need some handholding with supporting text.
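To see what option 2 buys clients, here is a minimal sketch of a client-side parser assuming access modes are always quoted (function and regex names are illustrative, not from any spec or library):

```python
import re

# Matches one permission group under option 2, e.g. user="read write".
# Quoted-only values mean a single unambiguous pattern suffices.
WAC_ALLOW_PARAM = re.compile(r'(\w+)\s*=\s*"([^"]*)"')

def parse_wac_allow(header_value: str) -> dict:
    """Return a mapping like {'user': {'read', 'write'}, 'public': {'read'}}."""
    return {
        group: set(modes.split())
        for group, modes in WAC_ALLOW_PARAM.findall(header_value)
    }

print(parse_wac_allow('user="read write append", public="read"'))
# {'user': {'read', 'write', 'append'}, 'public': {'read'}}
```

With optional quotes (the current ABNF), the parser would need a second branch for bare tokens and some policy for mismatched quotes, which is exactly the complexity option 2 removes.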

Sarven Capadisli
@csarven
If I can get a few quick opinions.. I can make a PR right away.
Ruben Verborgh
@RubenVerborgh
As the initial implementer of WAC-Allow, I have no preference.
Justin Bingham
@justinwb
i like 2 - simplest and most explicit
Aaron Coburn
@acoburn
no strong opinion
Sarven Capadisli
@csarven
OKie dokie. good enough for me. Let's go with bachelor/bachelorette number 2.
thanks all

WOW, I guess I don't remember anything... since we already had this text:

The quoted and unquoted values for <code>access-modes</code> are equivalent. Servers are recommended to use quoted values in the response. Clients are recommended to be able to parse both quoted and unquoted values.

Going to remove that line and update ABNF
Pete Edwards
@edwardsph
#2 had my vote too
Sarven Capadisli
@csarven
Alain Bourgeois
@bourgeoa

Solid has the notion of containers to represent a collection of linked resources to help with resource discovery and lifecycle management.

Are there any references in the Solid spec to this lifecycle management? Is it within the containment triples, with dates or ETags? Is there a W3C spec? NSS uses dates.

Sarven Capadisli
@csarven

@bourgeoa In that context, "lifecycle" is as described in https://www.w3.org/TR/ldp/#dfn-containment . Happy to clarify this in the Protocol spec though. The intention is that a container gets to be aware of what happens to its resources, e.g. deleting a resource also entails a cleanup task in which the containment statement is removed from the deleted resource's container. There are also related requirements, like disallowing a request to delete a non-empty container.

I'm curious to know what hinted at dates/etags for you though..
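The lifecycle described above can be sketched as a toy in-memory model: deleting a resource removes its containment triple from the parent, and deleting a non-empty container is refused. All names and the status-code choices are illustrative, not mandated by the Solid Protocol:

```python
class Server:
    """Toy model of LDP containment lifecycle; not a real Solid server."""

    def __init__(self):
        # container IRI -> set of contained resource IRIs
        self.containment = {"/": set()}

    def create(self, container: str, resource: str):
        self.containment[container].add(resource)
        if resource.endswith("/"):          # convention: containers end in "/"
            self.containment[resource] = set()

    def delete(self, resource: str) -> int:
        # Refuse to delete a non-empty container (a 409 in practice).
        if self.containment.get(resource):
            return 409
        self.containment.pop(resource, None)
        # Cleanup task: remove the containment triple from the parent.
        for contained in self.containment.values():
            contained.discard(resource)
        return 204

s = Server()
s.create("/", "/inbox/")
s.create("/inbox/", "/inbox/msg1")
assert s.delete("/inbox/") == 409      # non-empty container refused
assert s.delete("/inbox/msg1") == 204  # resource gone, triple cleaned up
assert s.delete("/inbox/") == 204      # now empty, delete succeeds
```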

Alain Bourgeois
@bourgeoa

@csarven there were two questions: the first one came from the SolidOS meeting, where @timbl stated that containment triples should carry creation and modification dates. I did not find any reference to that in the Solid spec. And the second was around the discussion in the app-development chat about how to know that a resource has changed, where the first response was to check the body content.

If I may suggest, the Solid specification should give not only a global reference link but also a paragraph-level link into the W3C specification.

Sarven Capadisli
@csarven

Suggestion noted, thanks. I'm aware of the date info consideration about container resources in the container description.. will come back to this. (It is currently not a requirement).

For resource changes, yes, well, if authorized, and if present, Last-Modified or ETags are good indicators on each resource. To detect those changes from the container, yes, resource description (but again, if it is available.. and right now it is not required).
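The change-detection logic Sarven describes can be sketched as a small client-side helper that compares validators from a cached response against a fresh HEAD/GET, assuming the server exposes them at all (which, as noted, is not currently required):

```python
def has_changed(cached_headers: dict, fresh_headers: dict) -> bool:
    """Decide whether a resource changed, using ETag / Last-Modified validators."""
    # Prefer the ETag when both responses carry one.
    if "ETag" in cached_headers and "ETag" in fresh_headers:
        return cached_headers["ETag"] != fresh_headers["ETag"]
    if "Last-Modified" in cached_headers and "Last-Modified" in fresh_headers:
        return cached_headers["Last-Modified"] != fresh_headers["Last-Modified"]
    # No validators available: assume changed and refetch the body.
    return True

assert not has_changed({"ETag": '"abc"'}, {"ETag": '"abc"'})
assert has_changed({"ETag": '"abc"'}, {"ETag": '"def"'})
assert has_changed({}, {})  # falls back to checking the body content
```

In practice a client would send If-None-Match with the cached ETag and let the server answer 304 Not Modified, which amounts to the same comparison done server-side.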

Sarven Capadisli
@csarven

re the exchange with the Credentials CG, can we perhaps commit to Feb 24 (Wednesday), with two sessions? One of those will be the authz panel's slot, and the other will be the CG's slot later on. And let's see what works for them?
See https://gitter.im/solid/specification?at=6001ada781c55b09c70d2da8 for details from earlier. Everyone is welcome to attend. Make sure to be a member of at least one of the CGs.
Justin Bingham
@justinwb
@csarven confirming 24th would be for the Authorization Panel’s presentation specifically? if so we could reuse the authz panel session time slot since its same day (probably why you suggested that day)
Sarven Capadisli
@csarven
Right. Both groups have a meeting on that day so reusing both slots. We'll figure out the rest as we go.
Justin Bingham
@justinwb
:+1:
Sarven Capadisli
@csarven
Ours starts at 16:00 CET and theirs at 19:00 CET. So, it'd be good to give a confirmation at least from our end for those two times.
Justin Bingham
@justinwb
16:00 CET should be a safe bet since it overlaps the current time. i can make both slots :white_check_mark: can raise second slot w/ panel on weds unless we need answer sooner
Henry Story
@bblfish
Ok. that gives me 3 weeks to prepare some implementation of credentials parsing to be really up to scratch on what is going on there.
Sarven Capadisli
@csarven
@bourgeoa I don't quite understand your question or relevance in https://github.com/solid/specification/issues/227#issuecomment-773402869 . If that's a separate need, can we discuss here? Perhaps delete the comment?
Alain Bourgeois
@bourgeoa
@csarven Maybe I do not understand your process. Are containment triples not the object of your issue?
These containment triples are produced from information available somehow on the server. Some of this information should be available because Solid follows the HTTP server specification; for those there should be no real cost. This does not imply that they must be available in containment triples, and other information may be Solid-specific, not available from an existing spec.
It is just a piece of information for the discussion.
If you still feel it is inappropriate, I shall delete my comment.
Sarven Capadisli
@csarven
No, not containment triples. What information is available in a container representation? Containment triples are expected but what other information, if at all, should it include?
Alain Bourgeois
@bourgeoa
That is exactly my point. I suppose my wording above was bad, then.
Is the container representation something different from a collection of triples?
Were you referring to something different, like the .meta being included by NSS in the container representation?
Aaron Coburn
@acoburn

This is definitely something to clarify at the spec level, since different servers behave differently in this regard. Requiring child descriptions in the container listing, however, is problematic.

Consider a structure such as </container/> ldp:contains <a>, <b>, <c> .

In order to view that data, an agent needs read access to /container/, but may not necessarily have read access to a, b or c. Including descriptions of a, b and c in that container listing, however, will mean that the server will need to perform access checks on each of those child resources. In this simple case, that means 4 authZ checks.

Containers, however, can include an arbitrary number of child resources, and once that number grows, that means that every GET request to a large container could, potentially, be its own DoS attack.

One can achieve the same goals by using a query endpoint, without the scalability issues.
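Aaron's cost argument is simple arithmetic: returning child descriptions forces one access check per contained resource, so a GET on a container with N children needs N + 1 authorization checks instead of 1. A trivial sketch (function name is illustrative):

```python
def authz_checks_for_get(n_children: int, include_child_descriptions: bool) -> int:
    """Checks needed for one GET: 1 for the container, plus 1 per child
    if the listing must embed child descriptions."""
    return 1 + n_children if include_child_descriptions else 1

# The </container/> ldp:contains <a>, <b>, <c> example: 4 checks vs 1.
assert authz_checks_for_get(3, True) == 4
assert authz_checks_for_get(3, False) == 1
# A large container turns every GET into a heavyweight request.
assert authz_checks_for_get(100_000, True) == 100_001
```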