Can the result of `oneFile.first()` be used as input to other processes too?
```nextflow
oneFile_ch = oneFile.first()

process foo {
    input:
    file x from oneFile_ch
}

process bar {
    input:
    file x from oneFile_ch
}
```
Will a consuming process wait on `oneFile_ch = ...` until it is available?
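For what it's worth, my understanding is that `first()` returns a value channel, which caches its single item and can be consumed by any number of processes, and each consumer simply blocks until that item has been emitted. A minimal sketch (the `Channel.fromPath('my.txt')` source and the dummy script blocks are placeholders I've added, not part of the original question):

```nextflow
// first() turns the queue channel into a value channel:
// the single cached item can be read by any number of processes,
// and each consumer waits until the item is available.
oneFile_ch = Channel.fromPath('my.txt').first()

process foo {
    input:
    file x from oneFile_ch

    script:
    """
    wc -l $x
    """
}

process bar {
    input:
    file x from oneFile_ch

    script:
    """
    head -n 1 $x
    """
}
```

Both `foo` and `bar` read the same cached file; neither starts before the upstream channel has emitted it.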
`templates/mapping_kallisto.nf` - should this instead be `.sh`? I was confused at first why there were two different files with almost the same name, `mapping_kallisto.sh` and `mapping_kallisto.nf`, which I think is a potential source of confusion for readers.
Hi @pditommaso, do you know anything about how Slurm associates `JobId`s with tasks? I've run into a situation where I see a process hanging in Nextflow:

```
~> TaskHandler[jobId: 11762; id: 11728; name: splitInfernalFastaBySize (4888); status: SUBMITTED; exit: -; error: -; workDir: /mnt/efs/nextflow/run.0a0a85ee-f4cb-4c90-ae7e-ce277ba9e016/work/18/c05f904d729e317733bf842686acd7 started: -; exited: -; ]
```
Now, if I look in the Slurm `job_comp.log` for that same job id, I find this entry:

```
JobId=11762 UserId=ubuntu(1000) GroupId=ubuntu(1000) Name=nf-splitInfernalFastaBySize_(4893) JobState=COMPLETED Partition=normal TimeLimit=UNLIMITED StartTime=2017-11-06T08:54:38 EndTime=2017-11-06T08:54:40 NodeList=ip-172-20-22-86 NodeCnt=1 ProcCnt=1 WorkDir=/mnt/efs/nextflow/run.0a0a85ee-f4cb-4c90-ae7e-ce277ba9e016/work/07/1a9b9b6be2811383bcbf3c48db4c21
```
However, if you look closely you'll see that what's in the `job_comp.log` is for a different task id (4893 instead of 4888), and the work dir is different. In the `slurmctl.log` it looks like task 4888 is allocated to job id 11762 first, but before the job runs, task 4893 comes along and *it* then seems to get assigned to job id 11762. Have you ever seen anything like this? Any idea what might be going on?
`jobId` attribute is written, so I don't see any potential race condition that could mess things up.