Thanks @aucampia! I am starting from scratch here, so if RDF* were available I'd probably go with that, since it's the next big standard, but good old RDF 1.1 would be OK too.
import rdflib
from rdflib.namespace import RDF
lit2019 = rdflib.Literal('2019-01-01', datatype=rdflib.XSD.date)
lit2020 = rdflib.Literal('2020-01-01', datatype=rdflib.XSD.date)
bob = rdflib.URIRef("http://example.org/people/Bob")
google = rdflib.URIRef("http://example.org/companies/Google")
workedAt = rdflib.URIRef("http://example.org/stuff/0.1/workedAt")
startTime = rdflib.URIRef("http://example.org/stuff/0.1/startTime")
endTime = rdflib.URIRef("http://example.org/stuff/0.1/endTime")
g = rdflib.Graph()
position = (bob, workedAt, google)
reified_position = (position, RDF.type, RDF.Statement)
g.add(position)
g.add(reified_position)
g.add((reified_position, startTime, lit2019))
I tried running this but can't add that second triple, I guess I'd have to make it a BNode?
default1:foobar for :foobar
Dear rdflib maintainers,
I am a user of rdflib. When I use rdflib.Graph to parse a .nt file, the terminal output is
rdflib.exceptions.ParserError: Invalid line: http://dbpedia.org/resource/2015_African_Rugby_Under-19_Cup_Division_"A" .
When I remove the double quotation marks around "A", i.e., "A" -> A, the program runs fine. I want to know whether there is another way to parse the .nt file rather than removing the double quotation marks around "A".
Thanks a lot.
Yours Sincerely.
Appendix
Data{ <a:> <http://www.w3.org/2002/07/owl#sameAs> <http://dbpedia.org/resource/2015_African_Rugby_Under-19_Cup_Division_"A"> . }
Code{
from rdflib import Graph
g = Graph()
g.parse("./data/example.nt")
print(f"Graph g has {len(g)} statements.")
print(g.serialize(format="turtle"))
}
https://dbpedia.org/resource/2015_African_Rugby_Under-19_Cup_Division_%22A%22
- which itself fails to resolve. OTOH, Wikipedia returns a corresponding redirect, https://en.wikipedia.org/wiki/2015_African_Rugby_Under-19_Cup_Division_%22A%22, which is found. The DBpedia page is essentially blank; it has none of the info for that entry on Wikipedia.