A `+` character in a filename gets converted to `%2B`. This results in an error:
```
Cannot make job: Invalid filename: 'P233%2B35_structure.txt' contains illegal characters
```
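For reference, here's a minimal sketch (not cwltool's actual code) of how percent-encoding a filename rewrites `+` as `%2B`; if the encoded form is then reused verbatim as a filename, the `%` trips the illegal-character check:
```
from urllib.parse import quote, unquote

# '+' is not in the URI "unreserved" set, so quoting a filename for a file:// URI
# rewrites it as '%2B'. Reusing the encoded string as a literal path component then
# fails filename validation because of the '%'.
name = "P233+35_structure.txt"
encoded = quote(name)
print(encoded)            # P233%2B35_structure.txt
print(unquote(encoded))   # P233+35_structure.txt
```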
The two lines with no test coverage are annotated at https://github.com/common-workflow-language/cwltool/pull/1446/files#annotation_2008443310
For local checking you'll need to run all the tests with `make diff-cover`.
Hm, that fails.
```
$ make diff-cover
python --version 2>&1 | grep "Python 3"
python -m pytest -rs --cov --cov-config=.coveragerc --cov-report=
ERROR: usage: main.py [options] [file_or_dir] [file_or_dir] [...]
main.py: error: unrecognized arguments: -n --cov --cov-config=.coveragerc --cov-report=
Makefile:155: recipe for target 'testcov' failed
make: *** [testcov] Error 4
```
`make install-deps` seemed to do the trick.
So when I run `make diff-cover` on that PR locally I get:
```
cwltool/command_line_tool.py (62.5%): Missing lines 207,256-257
```
That's `revmap_file` and the new test. I get the impression that the test (in its current setup) can only check that what you put in as a filename also comes out (i.e. the external filename representation). I guess that's why only the `if` clause is covered by the test. I guess the `else` clause will only be executed if you supply an internal filename representation (at least, that's what I'm guessing right now). I'm not sure how I would supply an internal filename representation in the current test, because it uses a `CommandLineTool`, which is an external thingy.
`internal` in this case refers to a path within a software (Docker) container.
Maybe adding a `DockerRequirement` at https://github.com/common-workflow-language/cwltool/pull/1446/files#diff-39c8c56d7c38aab05d7eb4a8a765fcc4ea98d28bc4d0fedd22bce834e28dc843R123 is enough?
A `file:///` reference to a tmpdir:
`(tmp_path / "outdir").as_uri()`
I encounter NFS issues when running Toil with Slurm on one of our clusters, causing jobs to fail. Two typical tracebacks follow below:
Traceback (most recent call last): File "/project/rapthor/Software/rapthor/bin/_toil_worker", line 8, in <module> sys.exit(main()) File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/worker.py", line 710, in main with in_contexts(options.context): File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/worker.py", line 684, in in_contexts with manager: File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/batchSystems/abstractBatchSystem.py", line 505, in __enter__ self.arena.enter() File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/lib/threading.py", line 438, in enter with global_mutex(self.workDir, self.mutex): File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/lib/threading.py", line 340, in global_mutex fd_stats = os.fstat(fd) OSError: [Errno 116] Stale file handle
Traceback (most recent call last): File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/deferred.py", line 215, in cleanupWorker robust_rmtree(os.path.join(stateDirBase, cls.STATE_DIR_STEM)) File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/lib/io.py", line 51, in robust_rmtree robust_rmtree(child_path) File "/project/rapthor/Software/rapthor/lib/python3.6/site-packages/toil/lib/io.py", line 64, in robust_rmtree os.unlink(path) OSError: [Errno 16] Device or resource busy: b'/project/rapthor/Share/prefactor/L667520/working/f7a704078c8f54fc8a7ccb44a8d5d5f6/deferred/.nfs00000000000e74070000e57a'
Both types of error seem to occur during clean-up.
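A workaround people sometimes reach for (a hypothetical sketch, not Toil's actual cleanup code) is to tolerate and retry the two errnos seen above, since the `.nfsXXXX` placeholder files only disappear once the last open handle is released:
```
import errno
import shutil
import time

def rmtree_nfs_tolerant(path: str, retries: int = 5, delay: float = 1.0) -> None:
    # Retry removal on ESTALE (stale NFS file handle) and EBUSY (.nfsXXXX files
    # left behind for files that are deleted while still open somewhere).
    for attempt in range(retries):
        try:
            shutil.rmtree(path)
            return
        except OSError as e:
            if e.errno in (errno.ESTALE, errno.EBUSY) and attempt < retries - 1:
                time.sleep(delay)
                continue
            raise
```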
Could you kindly suggest why this happens? I do not see any errors as such in the execution. I am currently using the Slurm batchSystem.
```
[2021-11-01T11:50:40+0530] [MainThread] [W] [toil.leader] Job failed with exit value 1: 'JobFunctionWrappingJob' kind-JobFunctionWrappingJob/instance-l0jq0ypl Exit reason: None
[2021-11-01T11:50:40+0530] [MainThread] [W] [toil.leader] No log file is present, despite job failing: 'JobFunctionWrappingJob' kind-JobFunctionWrappingJob/instance-l0jq0ypl
[2021-11-01T11:50:54+0530] [MainThread] [W] [toil.job] Due to failure we are reducing the remaining try count of job 'JobFunctionWrappingJob' kind-JobFunctionWrappingJob/instance-l0jq0ypl with ID kind-JobFunctionWrappingJob/instance-l0jq0ypl to 2
```
Anybody ever see anything like this from a Toil worker? Maybe @mr-c:matrix.org ?
Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/toil/worker.py", line 376, in workerScript job = Job.loadJob(jobStore, jobDesc) File "/usr/local/lib/python3.6/dist-packages/toil/job.py", line 2251, in loadJob job = cls._unpickle(userModule, fileHandle, requireInstanceOf=Job) File "/usr/local/lib/python3.6/dist-packages/toil/job.py", line 1876, in _unpickle runnable = unpickler.load() AttributeError: 'Comment' object has no attribute '_end'
I'm trying to run some CWL CI tests, which broke when we rebuilt our GitLab, with a local leader against our Kubernetes, and I'm getting this. I'd say it's a `cwltool` version mismatch, but as far as I can tell I have `cwltool==3.1.20211020155521` in both my container and my leader virtualenv. Does CWL have a `Comment` object that recently grew or lost an `_end` attribute?
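If it is version skew, the failure mode would look like this toy sketch (not the real `Comment` class): unpickling does not rerun `__init__`, so an object pickled against an older class definition never gains attributes that only a newer `__init__` sets.
```
import pickle

class Comment:                      # "old" definition at pickling time
    def __init__(self) -> None:
        self.text = "hi"

blob = pickle.dumps(Comment())

class Comment:                      # "new" definition at unpickling time
    def __init__(self) -> None:
        self.text = "hi"
        self._end = None            # attribute the newer code expects

obj = pickle.loads(blob)
print(hasattr(obj, "_end"))         # False: reading obj._end raises AttributeError
```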