    Austin Wright
    @awwright
    @namedgraph_twitter And in case you're curious, what's going on there is it's converting SPARQL to an object the same way SPARQL.js does, but with additional attributes (similar to JSON-LD) so js3 can produce an RDF graph out of it
    Martynas Jusevicius
    @namedgraph_twitter
    @awwright is it generated somehow or hand-coded?
    Austin Wright
    @awwright
    @namedgraph_twitter The grammar is coded to output the correct property names and values
    Thomas Bergwinkl
    @bergos
    @elf-pavlik @RubenVerborgh i don't see a review button in this PR: rdfjs/stream-spec#11 but i thought i should see it, cause i'm a member of the parent team. any ideas if there is something missing in the settings?
    Thomas Bergwinkl
    @bergos
    ah, there i can see it. when i'm directly added as a reviewer, i can also see a button on the start page: rdfjs/stream-spec#11
    Blake Regalia
    @blake-regalia
    Has there been any discussion on the prospect of putting a getSpecVersion method into the specification? I imagine it could be future-proof to have some way of determining at run time which version an impl supports, maybe it's just silly.. thought I'd ask
    Thomas Bergwinkl
    @bergos
    i see your point. i'm not sure if it makes sense now, as developers would choose the version of the spec by choosing the package + version of the implementation. once we have native RDF/JS support in the browser, a member variable or getter for the version would be a useful feature.
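
A minimal sketch of the run-time version getter being discussed. Note that `specVersion` is not part of any RDF/JS spec; the name, value, and placement on a factory are purely illustrative here:

```javascript
// Hypothetical sketch: exposing the supported spec version on a data factory.
// `specVersion` is an assumption for illustration, not an existing spec member.
class DataFactory {
  get specVersion () {
    return '1.1'; // the Data Model spec version this implementation targets
  }
  namedNode (value) {
    return { termType: 'NamedNode', value };
  }
}

const factory = new DataFactory();
// Consumers could then feature-detect at run time instead of relying on
// the package version alone:
const supported = factory.specVersion || '1.0';
```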
    Austin Wright
    @awwright
    What other ECMAScript API has version numbers?
    Most get regular updates/publications, but that's not really the same thing.
    The DOM API has "levels", but I find that really terrible and not something to duplicate.
    Blake Regalia
    @blake-regalia
    i haven't spent a lot of time watching the stream initiatives evolve, but was there a reason for wanting to use .import() as opposed to the conventional .pipe() ?
    Adrian Gschwend
    @ktk
    @blake-regalia interesting library (graphy.link), cli is useful, any plans for json-ld support? that's like the one thing missing from the ones I know
    Blake Regalia
    @blake-regalia
    @ktk thanks! yes, JSON-LD will be after next major, i.e., probably 4.2
    Thomas Bergwinkl
    @bergos
    we just have a readable stream in the spec. that allows us to implement streams without the back-pressure pain. that's based on @RubenVerborgh's experience and performance tests with full-featured streams.
    anyway, many of us use the node stream / readable-stream package. also i started a package to wrap the .import() construct into a duplex stream. it's a quick implementation. i think it can be done with better performance, but needs some more tests: https://github.com/rdfjs/sink-to-duplex
    elf Pavlik
    @elf-pavlik

    @/all rdfjs/data-model-spec#158 a month later still has 2 approvals. I think we may need to adjust our practice: maybe require at least 2 approvals and wait one full week for objections, then merge.

    Any objections to me merging that PR tomorrow, unless a 3rd person approves and merges it today?

    Thomas Bergwinkl
    @bergos
    did we define how many people need to approve it?
    if i should have already merged it, sorry, i got totally used to merging my stuff myself once there are enough approvals, which i think is also ok after one week, like you proposed.
    also i think there are not that many members in the spec teams, so not many people get automatically assigned to PRs.
    @/all if you would like to review PRs for the specs please tell me, i will assign you to a team or to the team for all specs.
    elf Pavlik
    @elf-pavlik

    :+1:

    did we define how many people need to approve it?

    I think we agreed on 3, but based on my experience since then, 2 seems more realistic. I also think that non-fundamental changes can get merged after a week, and for more significant ones we should give 2 weeks to make sure everyone has had a chance to review.

    Thomas Bergwinkl
    @bergos
    sounds good. and should somebody else merge it, or do we allow merging your own PRs?
    elf Pavlik
    @elf-pavlik
    I would say preferably people listed as Editors of the spec will take responsibility for merging PRs in a timely fashion.

    https://www.w3.org/2019/Process-20190301/#general-requirements

    Every Technical Report published as part of the Technical Report development process is edited by one or more editors appointed by a Group Chair. It is the responsibility of these editors to ensure that the decisions of the Group are correctly reflected in subsequent drafts of the technical report. An editor must be a participant, per section 5.2.1 in the Group responsible for the document(s) they are editing.

    For each spec at least one person (probably no more than 3) should act as editor and many more people can act as authors.
    elf Pavlik
    @elf-pavlik
    Preferably only editors merge PRs while all authors can make PRs and review PRs
    Thomas Bergwinkl
    @bergos
    @/all please have a look and give feedback to the following issues: rdfjs/dataset-spec#52 rdfjs/dataset-spec#53
    it's about replacing the code in https://github.com/rdfjs/dataset with an indexed implementation based on @RubenVerborgh N3Store
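
For readers unfamiliar with what an "indexed implementation" means here, a heavily simplified sketch of the approach (N3Store keeps several index orders and encodes terms as integers; this toy version uses just one subject → predicate → object nesting, with all names illustrative):

```javascript
// Toy sketch of an indexed dataset: quads filed under nested
// subject -> predicate -> object maps, so lookups avoid a full scan.
// Real implementations (e.g. N3Store) keep multiple index orderings.
class IndexedDataset {
  constructor () { this.index = new Map(); this.size = 0; }
  add (s, p, o) {
    let preds = this.index.get(s);
    if (!preds) this.index.set(s, preds = new Map());
    let objs = preds.get(p);
    if (!objs) preds.set(p, objs = new Set());
    if (!objs.has(o)) { objs.add(o); this.size++; }
    return this;
  }
  has (s, p, o) {
    const objs = this.index.get(s)?.get(p);
    return !!objs && objs.has(o);
  }
}

const ds = new IndexedDataset();
ds.add('ex:s', 'ex:p', 'ex:o').add('ex:s', 'ex:p', 'ex:o'); // duplicate ignored
```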
    elf Pavlik
    @elf-pavlik
    Who works with Web Workers (including Service Workers)? I think we should make sure that applications can easily use them, which means keeping https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm in mind. If we define interfaces which don't allow structured cloning, I think we should at least have clear methods for round-trip conversion.
    Thomas Bergwinkl
    @bergos
    @elf-pavlik right now the solution would be serializer -> string/object -> parser. for sure this should be easier. one option would be a package that wraps all that stuff and provides just two methods. this could be done outside the specs. if we want to add it to the spec, the alternative would be adding .toJSON() and .fromJSON() methods. for the last option i always fear that we start defining a new serialization and people start to use it outside js.
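
A sketch of the serializer → string → parser round trip described above. The `serializeDataset`/`parseDataset` helpers are placeholders for any real RDF serializer/parser pair, faked here with N-Triples-like lines:

```javascript
// Stand-ins for a real serializer/parser pair: the string is what would
// cross the worker boundary via postMessage today.
function serializeDataset (quads) {
  return quads
    .map(q => `<${q.subject}> <${q.predicate}> <${q.object}> .`)
    .join('\n');
}

function parseDataset (text) {
  return text.split('\n').map(line => {
    const [, subject, predicate, object] =
      line.match(/^<([^>]*)> <([^>]*)> <([^>]*)> \.$/);
    return { subject, predicate, object };
  });
}

const quads = [{ subject: 'ex:s', predicate: 'ex:p', object: 'ex:o' }];
const text = serializeDataset(quads);   // string crosses the boundary
const roundTripped = parseDataset(text);
```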
    Thom van Kalkeren
    @fletcher91

    @elf-pavlik We've used them at one point, but passing the data to the render thread (IIRC JSON.stringify would output a large [{termtype: x, value: y}, ...] string -> put it in .postMessage -> call JSON.parse (which yields plain-old-JS objects) -> iterate new Statement(new S, etc)) created a larger hit than benefit for mobile devices (in rdflib the render thread also has to perform indexing again). The same problem occurred with a POC when reading and writing data via indexedDB

    Some more liberal thoughts; from a performance perspective, using a SharedArrayBuffer with HDT formatting would seem the end-game. Libraries can transform the buffer representation to a spec-compliant one, skipping the cloning algorithm altogether (though possibly still needing to iterate to instantiate)

    Thomas Bergwinkl
    @bergos
    @fletcher91 i haven't checked in detail, but on the linked page it looks like objects and arrays would also be supported. so at least the JSON.stringify() and JSON.parse() steps could be skipped.
    if there were a dataset/data model wrapper around binary formats that converts stuff on the fly, the binary format could have a benefit, but if the complete content gets decoded i expect it will use even more memory, cause with plain objects the value strings could be reused for the data model objects.
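
The point about skipping the stringify/parse steps can be shown with the global `structuredClone()` (Node ≥ 17, modern browsers), which applies the same algorithm `postMessage` uses implicitly:

```javascript
// Plain objects and arrays survive the structured clone algorithm as-is,
// so no JSON round trip is needed; worker.postMessage(quads) does the
// equivalent of this internally.
const quads = [
  { subject:   { termType: 'NamedNode', value: 'http://example.org/s' },
    predicate: { termType: 'NamedNode', value: 'http://example.org/p' },
    object:    { termType: 'Literal',   value: '42' } }
];

const cloned = structuredClone(quads); // deep copy, no JSON.stringify/parse
```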
    Thomas Bergwinkl
    @bergos
    HDT only supports triples, but anyway a binary format for RDF is something we should have a look at. HDT's focus is archives, but there are many more use cases (like this one) which have different requirements.
    Dmitri Zagidulin
    @dmitrizagidulin
    @bergos triples meaning, not quads?
    elf Pavlik
    @elf-pavlik

    https://html.spec.whatwg.org/multipage/structured-data.html#safe-passing-of-structured-data mentions

    Platform objects can be serializable objects if they implement only interfaces decorated with the [Serializable] IDL extended attribute.

    So I don't think we should conflate it with JSON, which does stringify and parse. The recently added DataFactory#fromTerm and DataFactory#fromQuad can already upgrade Serializable objects to the full RDF/JS interface (pretty much adding the .equals() method). I would like to reconsider having something like a toSerializable() method which would make it symmetric and provide a clear way to do the round trip. I think this would also be helpful for Dataset, so a WebWorker (including ServiceWorker) can fetch data and pass the whole dataset to the main thread. IMO one should never need to run any RDF parser on the main thread.

    Some use cases could use [Transferable] objects https://html.spec.whatwg.org/multipage/structured-data.html#transferable-objects but I think we still have to support [Serializable] since it seems to cover a broader set of use cases.
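
A sketch of the symmetry being proposed. `fromTerm` exists in the RDF/JS Data Model spec; `toSerializable` is only a proposal, and this toy factory is illustrative:

```javascript
// fromTerm() upgrades a plain object to a full term (adding .equals());
// the proposed toSerializable() would be its inverse, dropping methods so
// the object survives structured cloning.
const factory = {
  fromTerm (plain) {
    return {
      termType: plain.termType,
      value: plain.value,
      equals: other => !!other &&
        other.termType === plain.termType && other.value === plain.value
    };
  }
};

// Proposed inverse (hypothetical, not in the spec):
function toSerializable (term) {
  return { termType: term.termType, value: term.value };
}

const term = factory.fromTerm({ termType: 'NamedNode', value: 'http://example.org/x' });
const wire = toSerializable(term);   // safe to postMessage: methods stripped
const back = factory.fromTerm(wire); // full term again on the other side
```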
    elf Pavlik
    @elf-pavlik

    in rdflib the render thread also has to perform indexing again

    @fletcher91 would [Transferable] help with passing indexed rdfjs store ?

    Thom van Kalkeren
    @fletcher91

    @elf-pavlik That's a tough question to answer from pure reasoning, I unfortunately don't currently have a good setup to test it. When using Transferable the sender loses access to the object, so the worker can't just pass its own index if it wants to reconcile data from future requests (it'd need both). The worker would have to make a deep copy of all the indices and pass those copies to the render thread which can replace them in-memory.

    When the UI-related state is in the store, those changes will have to be communicated back to the worker as well or else that state will be lost (a postMessage -> worker reindex -> transfer cycle could easily take 20ms)

    Jacopo Scazzosi
    @jacoscaz
    Hi all! Would anyone have something against quadstore moving to a different persistence method in the next major release? We'd still be guaranteeing both node.js and browser compatibility, of course.
    Dmitri Zagidulin
    @dmitrizagidulin
    @jacoscaz +1 to that. (I myself have been wondering about the level of effort required to port it to use Pouch/Couch DB as a backend,
    since i have a master-master replication requirement)
    Jacopo Scazzosi
    @jacoscaz
    @dmitrizagidulin would multi-backend support be a useful feature for you?
    Dmitri Zagidulin
    @dmitrizagidulin
    @jacoscaz definitely, yeah
    Jacopo Scazzosi
    @jacoscaz
    I'm researching alternative backend and index implementation for quadstore to support multi-term range queries with decent performance. Is anyone using quadstore with a dataset that is too big to (comfortably) fit in memory?
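
Roughly, the index layout behind this kind of store writes each quad under several sorted key orderings so a pattern match becomes a key-prefix range scan. quadstore actually sits on LevelDB; a sorted array stands in for it below, and the separator and ordering names are illustrative:

```javascript
// Each quad is stored under multiple orderings ('spo', 'pos', ...); a
// pattern with a bound predicate scans the 'pos' ordering by key prefix.
const SEP = '\u0000'; // separator that sorts before any term character
const keys = [];

function put (s, p, o) {
  keys.push(['spo', s, p, o].join(SEP));
  keys.push(['pos', p, o, s].join(SEP));
  keys.sort(); // a real backend (LevelDB) keeps keys sorted for us
}

// Match { predicate: p } via a prefix range scan over the 'pos' ordering.
function matchPredicate (p) {
  const prefix = ['pos', p].join(SEP) + SEP;
  return keys
    .filter(k => k.startsWith(prefix))
    .map(k => { const [, , o, s] = k.split(SEP); return { s, p, o }; });
}

put('ex:s1', 'ex:p', 'ex:o1');
put('ex:s2', 'ex:p', 'ex:o2');
```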
    Adrian Gschwend
    @ktk
    we wanted to use it recently but noticed it does not do SPARQL UPDATE yet
    @jacoscaz if you need larger datasets, I have some statistical data cubes that could be worth playing with
    never tried them with quadstore though
    Jacopo Scazzosi
    @jacoscaz
    @ktk thank you for that offer, I'll take you up on it in a few weeks after settling on a new indexing method/backend.
    Adrian Gschwend
    @ktk
    @jacoscaz ok ping me