    paul van genuchten
    @pvgenuchten
    Hi processes gitter, does the group have ideas on the concept of job runs? Can a job be configured to run at intervals, and can the results of individual job runs be listed and retrieved?
    Or is a job actually intended as a 'run' of a process? I'd see a job more as a specific configuration of a process, e.g. the process is calculate-height, the job would be calculating the height of Mont Blanc, and the runs would be the height in 1950, 1980, 2000 and 2020...
    Francis Charette Migneault
    @fmigneault
    Hi. The current definition of a Job in OGC API Processes is closer to representing an execution run of a process.
    The results of that job can be retrieved with this method: https://docs.ogc.org/DRAFTS/18-062.html#sc_retrieve_job_results
    Processes are typically intended for a single run per call, but you could have a process implementation whose execution steps start further processes with different values, or simply call the desired process multiple times with each of the desired values.
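To make the job-per-run model concrete, here is a minimal sketch (Python with requests) of executing a process asynchronously and then reading back the results of that one run. The base URL, process id and input names are invented placeholders, and the endpoint paths follow the draft OGC API - Processes spec linked above; adjust them to whatever your server actually advertises.

```python
# Hypothetical sketch: execute a process and fetch its job results via
# OGC API - Processes. Base URL, process id and inputs are placeholders.
import time
import requests

BASE = "https://example.org/ogcapi"          # placeholder server
PROCESS_ID = "calculate-height"              # placeholder process

# Ask for asynchronous execution; each call creates one job (one "run").
resp = requests.post(
    f"{BASE}/processes/{PROCESS_ID}/execution",
    json={"inputs": {"feature": "Mont Blanc", "year": 2020}},
    headers={"Prefer": "respond-async"},
)
resp.raise_for_status()
job_url = resp.headers["Location"]           # URL of the newly created job

# Poll the job status until it reaches a terminal state.
while True:
    status = requests.get(job_url, headers={"Accept": "application/json"}).json()
    if status["status"] in ("successful", "failed", "dismissed"):
        break
    time.sleep(5)

# Retrieve the results of this particular run.
if status["status"] == "successful":
    results = requests.get(f"{job_url}/results").json()
    print(results)
```

Each POST creates a new job, so repeated "runs" of the same configuration are simply repeated executions with the same inputs.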
    KoalaGeo
    @KoalaGeo
    Hi, thinking out loud about a new API we want to implement - it'd have two functions: validating that borehole data files (.txt) conform to standard schemas & libraries, and converting between file formats (txt <> xlsx). Is this the sort of thing that could come under OGCAPI-Processes?
    Looking to use OGCAPI wherever possible
    Tom Kralidis
    @tomkralidis
    @KoalaGeo we’ve done exactly this (in WPS) in the context of quality assessment (file checking) and format translation, so I would say this is in scope.
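As an illustration of how those two functions could map onto processes, here is a hypothetical execute request for a format-conversion process, with the input file passed by reference. The process id, input names and server URL are invented for the example and not part of any existing implementation.

```python
# Hypothetical sketch: expose "validate" and "convert" as two separate
# processes and call the conversion one with a file passed by reference.
# Process id, input names and the server URL are invented placeholders.
import requests

BASE = "https://example.org/ogcapi"

execute_request = {
    "inputs": {
        # borehole data file passed by reference (href), following the
        # OGC API - Processes execute request structure
        "data": {"href": "https://example.org/files/borehole-001.txt"},
        "target_format": "xlsx",
    },
}

resp = requests.post(f"{BASE}/processes/convert-borehole/execution",
                     json=execute_request)
resp.raise_for_status()
print(resp.json())   # synchronous execution returns the outputs directly
```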
    paul van genuchten
    @pvgenuchten
    Thanks @fmigneault. Our use case is a metadata harvester which runs at intervals, grabbing metadata from a remote CSW, OAI-PMH or OpenSearch endpoint. I hoped to use the concept of a process for a harvester, with a certain CSW target as a parameter to define the job, and then schedule that job to run at intervals. But your approach could also work: set up a new process dynamically containing the parameters (harvest target, run schedule) and then use the jobs and job results for retrieving individual runs, e.g. /api/processes/worldbank-csw-harvest/jobs/2021-05-12 18:00/results
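One way to get the "job as configuration, runs at intervals" behaviour under the current draft is to keep the schedule on the client side and create a new job per run. The sketch below assumes a hypothetical harvest-style process and invented input names, since the draft itself does not define recurring or scheduled jobs; listing the jobs of that process then gives the per-run history described above.

```python
# Hypothetical sketch: schedule a harvest process to run at intervals by
# creating a new job per run. Process id, input names and base URL are
# placeholders; OGC API - Processes itself does not define scheduling.
import time
import requests

BASE = "https://example.org/ogcapi"
PROCESS_ID = "csw-harvest"                    # placeholder harvest process
INTERVAL_SECONDS = 24 * 60 * 60               # e.g. run once a day

while True:
    resp = requests.post(
        f"{BASE}/processes/{PROCESS_ID}/execution",
        json={"inputs": {"target": "https://example.org/csw"}},
        headers={"Prefer": "respond-async"},
    )
    resp.raise_for_status()
    print("created job:", resp.headers["Location"])   # one job per run

    # Each run is listed under /jobs and its output under /jobs/{id}/results,
    # which gives the history of individual harvest runs.
    time.sleep(INTERVAL_SECONDS)
```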