The pywps.processing module could be a common interface for the different job execution implementations. I think you already figured out how the scheduler part works: dump the job status and run the joblauncher
with this status document (JSON) on a remote batch node. A shared file system and the Postgres DB are used to retrieve the outputs and update the job status. The drmaa
library is only an interface to schedulers like Slurm. We might even skip it, because it doesn't look well maintained; we would then call Slurm directly (dropping support for other scheduler systems like GridEngine).
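The status-document hand-off described above can be sketched roughly like this. This is only an illustration of the flow, not pywps' actual API: the function names dump_job_status and joblauncher, and the JSON fields, are made up here; in the real setup the launcher would execute the process and update the Postgres DB rather than rewriting the file.

```python
import json
import tempfile
from pathlib import Path


def dump_job_status(job_id, process, status_dir):
    """Serialize a job's state to a JSON status document on a shared
    file system, so a remote batch node can pick it up."""
    doc = {"job_id": job_id, "process": process, "status": "queued"}
    path = Path(status_dir) / f"{job_id}.json"
    path.write_text(json.dumps(doc))
    return path


def joblauncher(status_file):
    """Runs on the batch node: load the status document, execute the
    process, and update the status (stand-in for the DB update)."""
    doc = json.loads(Path(status_file).read_text())
    # ... actual process execution would happen here ...
    doc["status"] = "succeeded"
    Path(status_file).write_text(json.dumps(doc))
    return doc


# Example round trip through a temporary "shared" directory.
with tempfile.TemporaryDirectory() as shared:
    status_file = dump_job_status("job-1", "hello-world", shared)
    result = joblauncher(status_file)
```

In the scheduler setup, dump_job_status would run inside the PyWPS service and joblauncher would be the command submitted to the batch system (e.g. via Slurm), with the shared file system carrying the status document between them.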
@jachym thanks for the 4.2.4 release :)
https://github.com/geopython/pywps/releases/tag/4.2.4
Could you please upload it to pypi?
The conda-forge package is built from GitHub, and a build for 4.2.4 has been triggered:
https://github.com/conda-forge/pywps-feedstock
@jachym @ldesousa @tomkralidis I have released pywps 4.2.6:
https://pypi.org/project/pywps/4.2.6/
It has a patch for the scheduler extension and backports from the master branch (4.2.5). 4.2.6 also fixes the Travis tests and an import issue introduced by a backport for the GPX validator.