    Tom Kralidis
    @tomkralidis
    @ingenieroariel ah, ok, yeah.
    Ariel Núñez
    @ingenieroariel
    can GHC be applied to the same lucene records?
    right now it is done via tables by Paolo
    and while it is super nice, I believe it is better if we annotate the backend directly
    Tom Kralidis
    @tomkralidis
    @ingenieroariel sure, easy enough
    @ingenieroariel "annotate the backend"?
    Ariel Núñez
    @ingenieroariel
    if we consider lucene (solr | es) our database now
    once GHC runs, where will it store what it learned about?
    Tom Kralidis
    @tomkralidis
    right now it stores anywhere supported by SQLAlchemy
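Since GHC persists its results through SQLAlchemy, the backing store is picked with a connection URI. A minimal sketch, assuming GHC follows the usual Flask-SQLAlchemy configuration convention (the setting name and paths below are illustrative, not taken from the actual GHC config):

```python
# Hypothetical config sketch: any SQLAlchemy-supported backend can be
# selected by swapping the connection URI. Paths/credentials are examples.
SQLALCHEMY_DATABASE_URI = 'sqlite:////var/ghc/data.db'
# or, for PostgreSQL:
# SQLALCHEMY_DATABASE_URI = 'postgresql://user:password@localhost/ghc'
```

As noted below, Lucene-backed stores (Solr/Elasticsearch) are not on that list, so pointing GHC at the search backend directly would need new work.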
    Ariel Núñez
    @ingenieroariel
    right, but lucene (solr | es) is not in that list I suppose
    Tom Kralidis
    @tomkralidis
    it's not; work is needed to do that.
    Ariel Núñez
    @ingenieroariel
    I think the question is, where should we store what GHC learned about the datasets once it runs?
    Tom Kralidis
    @tomkralidis
    "in the database" :)
    Ariel Núñez
    @ingenieroariel
    right :)
    I would think it is okay to denormalize it next to the record
    Paolo has been talking about using reliability as part of the search items
    After the brainstorming, let me go back to why we created this channel :)
    We need to work on a very scalable and performant search for geonode
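The denormalization idea above can be sketched as follows. This is a hypothetical illustration, assuming Elasticsearch/Solr-style JSON documents; the field names (`reliability`, `last_checked`) are assumptions, not an actual GHC or GeoNode schema:

```python
# Hypothetical sketch: copy what GHC learned about a dataset into the
# search record itself, so reliability can be used directly at query time.
# Field names are illustrative, not from any real schema.

def annotate_record(record: dict, health: dict) -> dict:
    """Return a copy of the search document with healthcheck results merged in."""
    annotated = dict(record)  # leave the original document untouched
    annotated["reliability"] = health.get("reliability", 0.0)
    annotated["last_checked"] = health.get("checked_datetime")
    return annotated

doc = {"id": "layer:roads", "title": "Roads"}
health = {"reliability": 98.5, "checked_datetime": "2016-04-01T12:00:00Z"}
print(annotate_record(doc, health))
```

With the score stored next to the record, search can boost or filter on it without a join back to GHC's own database.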
    Tom Kralidis
    @tomkralidis
    good idea. Then again, reliability usually also feeds into admin workflows to suppress the resource
    @ingenieroariel yes, in the context of federation/aggregation?
    Ariel Núñez
    @ingenieroariel
    we believe the way to do that is by not trying to do it right into geonode but to make a search that works as well with only one geonode or many (i.e. continue our push to treat local layers and remote layers equally)
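"Treating local and remote layers equally" can be sketched as a merge step over result sets: whether hits come from one GeoNode or many, they land in a single ranked list. This is a toy illustration; the field names and score-based ranking are assumptions:

```python
# Hypothetical sketch: merge search hits from any number of endpoints
# (one GeoNode or many) into one ranked list. The "score" field and
# the ranking rule are assumptions for illustration.

def merge_results(*result_sets):
    """Flatten hit lists from several catalogues and rank them by a
    shared score, regardless of which endpoint each hit came from."""
    merged = [hit for results in result_sets for hit in results]
    return sorted(merged, key=lambda hit: hit.get("score", 0.0), reverse=True)

local = [{"title": "Roads", "score": 0.90, "source": "local"}]
remote = [{"title": "Rivers", "score": 0.95, "source": "http://example.org/csw"}]
print(merge_results(local, remote))
```

The point of the design is that the search layer never special-cases "local": a remote layer with a better score simply ranks higher.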
    Tom Kralidis
    @tomkralidis
    so harvesting is a workflow. and healthchecking is another.
    Ariel Núñez
    @ingenieroariel
    right
    ingenieroariel @ingenieroariel thinking
    Jeffrey Johnson
    @jj0hns0n
    healthchecking is kind of out of scope for this client
    not our responsibility
    that stuff is to be handled by this project https://github.com/venicegeo
    for a much wider variety of services than just OGC stuff
    Tom Kralidis
    @tomkralidis
    nice
    Jeffrey Johnson
    @jj0hns0n
    this is a kind of ‘service broker’ for bespoke analytic services
    so you can pass jobs around like “I want to run this analysis on this dataset and put the results here"
    and not bound by what WPS defines in any way shape or form :)
    Tom Kralidis
    @tomkralidis
    TRUE
    Jeffrey Johnson
    @jj0hns0n
    so, our focus for this client is on ‘search’ and behind that aggregation, harvesting etc
    brb
    Ariel Núñez
    @ingenieroariel
    Okay, what I was going to say was that now we know what really nice search will look like (thanks to Paolo and Ben at Harvard)
    the question is how to efficiently do federation
    is that right Jeff?
    we want pycsw output just so that we can do federation?
    ingenieroariel @ingenieroariel walks out of the room to get lunch
    Jeffrey Johnson
    @jj0hns0n
    yes
    Jeffrey Johnson
    @jj0hns0n
    capooti, there is currently no API for HHypermap insofar as adding new services etc., right?
    @capooti ^
    Paolo Corti
    @capooti
    no @jj0hns0n (sorry was out for a while now)
    Jeffrey Johnson
    @jj0hns0n
    ok
    Paolo Corti
    @capooti
    it is just a prototype, with a lot of code to refactor
    an API would be interesting
    Jeffrey Johnson
    @jj0hns0n
    cool
    hopefully we can help :)
    Paolo Corti
    @capooti
    would be nice!