Tim McIver
@tmciver
If I want to create my own ontology, what is the best way to document it? I'm thinking of perhaps a template in the style of schema.org or something similar. Currently, I'm creating a markdown file with ad-hoc formatting to document my ontology.
4 replies
@tclasen I think @bblfish has done some work in this area for Scala.
bblfish
@bblfish:matrix.org [m]
yes @tclasen, banana-rdf covered Jena, Sesame, and a native JS DB.
I am rewriting it for Scala 3: banana-rdf/banana-rdf#372
So I now have a first wrapper for rdflib.js there.
Iwan Aucamp
@aucampia
@tclasen I agree, it is Java, but RDFLib development is more active now, and it is decent enough. So it's worth a try.
Tory Clasen
@tclasen
I love RDFLib itself, but it's lower level than what I'm looking for, and there isn't much more than that for Python
Martynas Jusevicius
@namedgraph_twitter
@tclasen so what are you looking for?
Tory Clasen
@tclasen
No idea, just trying to get the lay of the land of all the different languages and where each sits.
Martynas Jusevicius
@namedgraph_twitter
use SPARQL to have more portable code
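(A minimal sketch of that portability in Python, assuming a store that exposes a standard SPARQL 1.1 endpoint; the endpoint URL and the SPARQLWrapper client are illustrative choices, not anything from the conversation. The same query runs unchanged against any conforming endpoint.)

```python
# Illustrative sketch: the endpoint URL is hypothetical, and SPARQLWrapper is
# just one common Python client for SPARQL 1.1 endpoints.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:3030/ds/sparql")
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label WHERE { ?s rdfs:label ?label } LIMIT 10
""")
sparql.setReturnFormat(JSON)

# The query itself is what stays portable: the client and the store behind
# the endpoint can change without touching it.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["s"]["value"], row["label"]["value"])
```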
Tory Clasen
@tclasen
I have a couple of ontologies that I want to try to build an application around. And while RDFLib is great for generic RDF data, I find it a bit lacking for ontologies specifically. Owlready2 has some parsing issues with the ontologies I'm working with, and OWL-RL is niche but does what it advertises.
And yeah, whatever server I end up building, the API between that and the client will likely be something SPARQL-based.
Martynas Jusevicius
@namedgraph_twitter
Jena has a nice OWL API (but OWL 1 only)
Tory Clasen
@tclasen
I plan on running this on something similar to a Raspberry Pi, so I can't really afford the overhead of the JVM.
Tends to be very RAM hungry.
Martynas Jusevicius
@namedgraph_twitter
someone asked about similar stuff recently
Tory Clasen
@tclasen
But I also don't expect there to be a lot of data, maybe a few GB total.
Martynas Jusevicius
@namedgraph_twitter
can't remember in which channel
Tory Clasen
@tclasen
If you have more channels I should check out, please let me know.
Jeff Zucker
@jeff-zucker
Anyone have a list of RSS feeds related to linked data?
some more linked data channels
Iwan Aucamp
@aucampia
@tclasen Python is a bit of a slacker with performance; a few GB total may be a few GB too many (likely is).
So for that, Java is most definitely best
Iwan Aucamp
@aucampia
If anyone has time for a review: RDFLib/rdflib#1452 - Migrate from nosetest to pytest
Iwan Aucamp
@aucampia
Anyone interested in helping out with reviews on RDFLib (Python)?
I will review on other projects in exchange
Dirk Roeleveld
@dirkesquire
Yes, sure! I can't promise anything, but Python, RDF, and Turtle are my interests
Iwan Aucamp
@aucampia
I would say the current biggest blocker is this move to pytest (RDFLib/rdflib#1452); once that is merged I will make a couple more PRs
Tim McIver
@tmciver
Hey folks. I think I have a gap in my understanding of linked data. Is there an expectation that IRIs used as identifiers in my triple store are URLs where an HTTP request could return triples where that IRI is the subject? It seems like a burden to have a triple store and to also have a server to service such requests.
Jeff Zucker
@jeff-zucker
@tmciver document fragments point to things within documents or potentially within triplestores. E.g. in Turtle https://example.com/foo.ttl#Bar points to the Bar object in the foo.ttl document. Presumably you'd have the equivalent in a triplestore where the triplestore is the main address and subjects in it are relative to its address. If you are just working offline and don't need/want https, you can use any scheme e.g. chrome:Session.
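(A small sketch of the fragment idea in Python with rdflib; foo.ttl and #Bar are just the hypothetical names from Jeff's example.)

```python
# Parse a Turtle document as if it were served at https://example.com/foo.ttl;
# the relative <#Bar> then resolves against that document URI.
from rdflib import Graph, URIRef

turtle = """
@prefix ex: <https://example.com/ns#> .
<#Bar> a ex:Widget ; ex:name "Bar" .
"""

g = Graph()
g.parse(data=turtle, format="turtle", publicID="https://example.com/foo.ttl")

# The fragment names a thing *within* the document.
bar = URIRef("https://example.com/foo.ttl#Bar")
for p, o in g.predicate_objects(subject=bar):
    print(p, o)
```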
Tim McIver
@tmciver
@jeff-zucker So do triplestores respond to GET requests appropriately or are you saying that the file https://example.com/foo.ttl should exist?
Jeff Zucker
@jeff-zucker
Frankly, I don't know squat about triplestores :-) but yes, my impression is that the store should respond to fragment URLs without external files.
Tim McIver
@tmciver
@jeff-zucker out of curiosity, if you don't use a triple store to store triples, do you just keep them in ttl files?
Jeff Zucker
@jeff-zucker
I mostly work in Solid, so yes turtle or json-ld
Tim McIver
@tmciver
Thanks!
Tomasz Pluskiewicz
@tpluscode
there is loose coupling between triple stores and HTTP requests

Is there an expectation that IRI's used as identifiers in my triple store are URLs where an HTTP request could return triples where that IRI is the subject

that is only one possible way. I, on the other hand, prefer to partition my store so that a request to GET X returns the contents of a named graph X

it is but a choice, just like you do not expect that a request is backed by, say, exactly one row in a relational database
and ultimately the triplestore behind an HTTP interface is just an implementation detail. the client who operates the uniform interface must not make any assumptions about how the representation is populated (see Fielding's REST dissertation)
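(One way to sketch that named-graph-per-resource partitioning in Python with rdflib; the URIs and the handle_get helper are made up for illustration, and a real server would of course sit behind an HTTP framework.)

```python
from rdflib import Dataset, URIRef, Literal
from rdflib.namespace import FOAF

ds = Dataset()

# Partition the store so that everything about a resource lives in a named
# graph whose name is the URL a client would GET.
doc = URIRef("https://example.com/people/alice")
g = ds.graph(doc)
g.add((URIRef("https://example.com/people/alice#me"), FOAF.name, Literal("Alice")))

def handle_get(request_url: str) -> str:
    """Serve the Turtle representation of the named graph matching the URL."""
    return ds.graph(URIRef(request_url)).serialize(format="turtle")

print(handle_get("https://example.com/people/alice"))
```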
Martynas Jusevicius
@namedgraph_twitter
@tmciver triplestores can contain any URIs, but only those that are document URIs (URLs), i.e. ones with no fragment identifier and with an http:// or https:// scheme, have any hope of being successfully dereferenced
on top of that you need a Linked Data server which is backed by that triplestore, with the base URI of the server aligned with the base URI of the data
then you're good to go
but it doesn't mean that such a Linked Data API cannot return descriptions of non-document resources (#Bar as above). they simply have to be connected to some of the document URIs (e.g. via foaf:isPrimaryTopicOf), and then you can reach them via a SPARQL pattern and include them in the response, if you want
that is what Linked Data Templates are for: https://atomgraph.github.io/Linked-Data-Templates/
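(A sketch of that pattern in Python with rdflib: the non-document resource #Bar is connected to the document URI via foaf:isPrimaryTopicOf, and a SPARQL pattern pulls its description into the response for that document. The URIs are the hypothetical ones from the thread.)

```python
from rdflib import Graph, URIRef

data = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <https://example.com/ns#> .

<https://example.com/foo.ttl#Bar>
    a ex:Widget ;
    foaf:isPrimaryTopicOf <https://example.com/foo.ttl> .
"""

g = Graph()
g.parse(data=data, format="turtle")

# For the document URI being dereferenced, construct the description of
# whatever names that document as the thing it is primarily about.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
CONSTRUCT { ?thing ?p ?o }
WHERE { ?thing foaf:isPrimaryTopicOf ?doc ; ?p ?o . }
"""

for s, p, o in g.query(query, initBindings={"doc": URIRef("https://example.com/foo.ttl")}):
    print(s, p, o)
```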
Tim McIver
@tmciver
@tpluscode @namedgraph_twitter Thanks, that's very helpful.
Martynas Jusevicius
@namedgraph_twitter
Graph Store Protocol is another option, but the idea is the same
you could say Linked Data Templates is a generalization of the GSP
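(And a sketch of the Graph Store Protocol option: a plain HTTP GET against the store's graph store endpoint, identifying the named graph with the protocol's graph= parameter. The endpoint URL and graph URI here are hypothetical.)

```python
import requests

GSP_ENDPOINT = "http://localhost:3030/ds/data"  # hypothetical graph store endpoint

# SPARQL 1.1 Graph Store Protocol: indirect graph identification via ?graph=
resp = requests.get(
    GSP_ENDPOINT,
    params={"graph": "https://example.com/people/alice"},
    headers={"Accept": "text/turtle"},
)
resp.raise_for_status()
print(resp.text)  # the Turtle serialization of that named graph
```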
Jeff Zucker
@jeff-zucker
Hopefully Solid will one day have triplestore backends, in which case the fragments would be addressable ... but even without a triplestore, the foaf:isPrimaryTopicOf kind of pointer from the document to the subject of the triples is a good thing
Martynas Jusevicius
@namedgraph_twitter
the filesystem backend is one of the weirdest things in the Solid implementations
Tim McIver
@tmciver
I just signed up for the "Web of Data" Linked Data course on Coursera: https://www.coursera.org/learn/web-data/. It's probably too basic for most folks here, but I thought I'd mention it.