In my mental model, a CONSTRUCT creates a graph of objects whereas a SELECT yields more of a table, so using a SELECT to build a graph of objects sounds unnatural to me.
Yes, you are right. The point is that KOMMA uses the underlying RDF store as "the graph" and not a subset that is extracted via CONSTRUCT (which is used only for prefetching bean properties, as discussed before).
Oh, btw, can I implement my own equals() and hashCode() on beans via the behaviours (which I call Support classes)?
Yes, you can. But I would avoid this, since KOMMA already implements RDF's resource identity by comparing URIs or BNode IDs.
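For illustration, a URI-based equals()/hashCode() could look roughly like the sketch below. This does not use the real KOMMA API: `IdentifiableBean`, `getUri()`, and `OrganisationBean` are hypothetical stand-ins for a bean interface that exposes its resource URI; the point is only that equality delegates to the URI, which is what KOMMA effectively does already.

```java
// Hypothetical stand-in for a bean interface exposing its RDF resource URI;
// KOMMA's actual bean/entity interfaces differ.
interface IdentifiableBean {
    String getUri();
}

// Sketch of a behaviour ("Support" class) that bases equality on the resource
// URI, mirroring RDF identity (URIs / BNode IDs) rather than bean state.
abstract class UriEqualitySupport implements IdentifiableBean {
    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof IdentifiableBean)) return false;
        return getUri().equals(((IdentifiableBean) other).getUri());
    }

    @Override
    public int hashCode() {
        return getUri().hashCode();
    }
}

// Concrete bean used only for demonstration.
class OrganisationBean extends UriEqualitySupport {
    private final String uri;

    OrganisationBean(String uri) {
        this.uri = uri;
    }

    public String getUri() {
        return uri;
    }
}
```

Since this just re-implements what the framework already guarantees, rolling your own mostly adds a place for equality to diverge from the store's notion of identity.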
I'm Fredy, currently working on https://linkedopenactors.org & https://www.iese.fraunhofer.de/de/innovation_trends/sra/smarte-landregionen.html.
I use RDF4J to handle RDF data in both projects. This weekend I was playing with KOMMA and got it running for writing/loading my LOA organisations.
In LOA I use an RDF4J HTTP repository, and maybe this is the cause of the really bad performance: creating and reading a really small entity
(String addressCountry, String addressLocality, String addressRegion, String streetAddress, String postalCode) takes ~1 minute.
Is it possible in principle to use KOMMA with HTTP repositories, or is this rather not a use case for KOMMA?
Btw, @kenwenzel, thanks for your really fast answer!
KOMMA normally uses lazy loading. This results in one query per bean for its RDF types and one query for each of its properties. The alternative is to use prefetching with construct queries.
You can find an example for this at https://github.com/numerateweb/numerateweb/blob/60154f1b543049695c3363f71736a7ee45571ae7/bundles/core/org.numerateweb.math/src/main/java/org/numerateweb/math/rdf/ObjectSupport.java#L21
Usually it is perfectly possible to use an HTTP repo but you have to reduce the overall number of queries (by prefetching and/or caching).
Maybe you can give some more detail on your use case and the related SPARQL queries.