Your channels all feed into `ataqc`, but nothing guarantees that `Sample` is the same sample each time. The `ataqc` process is triggered when there is something on each of the specified input channels, but not necessarily from the same sample. You can't guarantee that the earlier processes output things in the correct order, because they may be running concurrently and thus finish out of order. To fix this you'll need something like `groupTuple` on the input channels to `ataqc` to make sure that all input files are aligned with the same sample.
I usually add a `view` to check that the input channel is properly constructed. Probably overkill, but it helps me stay sane...
Has anyone seen `n.processor.TaskPollingMonitor - No more task to compute -- Execution may be stalled (see the log file for details)`? It seems my tasks have been killed by the scheduler because they ran too long, but Nextflow does not seem to notice this and just prints this warning every 5 minutes forever... I have `errorStrategy = { task.exitStatus == Integer.MAX_VALUE ? 'retry' : 'finish' }` in my config, could that be the reason?
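For what it's worth, that closure only retries when the exit status is exactly `Integer.MAX_VALUE`, so a scheduler kill (which is usually reported as a real, signal-based exit code) falls through to `'finish'`. A sketch of a more forgiving config, assuming your scheduler reports over-limit kills with exit codes like 137 (SIGKILL) or 143 (SIGTERM), which are the values commonly seen; check your own scheduler's behavior:

```groovy
// a sketch, not your exact setup: retry on signal-based exit codes that
// schedulers commonly return when killing over-limit jobs, and give each
// retry a larger time budget
process {
    errorStrategy = { task.exitStatus in [137, 140, 143] ? 'retry' : 'finish' }
    maxRetries    = 2
    time          = { 2.h * task.attempt }
}
```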
`sshfs`!
My colleagues are mostly skeptical about the whole pipeline/reproducibility story, but the uni policy is that one should not rely on external services. They want to develop everything in-house.
This is totally insane! :confounded:
`/bin/bash: .command.sh: No such file or directory`
What could be the cause of this error? I'm trying to run a NF pipeline inside Docker (CI/CD stuff) and I get this error.
`bash -x .command.run`
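That error usually means the task wrapper runs fine but cannot find its sibling `.command.sh` script, which in a Docker/CI setup typically points at the task work directory not being volume-mounted inside the container at the same path. You can reproduce the message itself outside Nextflow (a hypothetical minimal setup, not your actual files):

```shell
# Stand-in for a Nextflow work dir where .command.sh is missing,
# as happens when the host work dir isn't mounted in the container.
workdir=$(mktemp -d)
cd "$workdir"
# a fake wrapper that, like Nextflow's, invokes the task script
printf '%s\n' '/bin/bash .command.sh' > .command.run
/bin/bash .command.run 2>&1 || true
# -> /bin/bash: .command.sh: No such file or directory
```

If that matches what you see, check that the container sees the same absolute work path as the host (e.g. the bind mounts in your CI's Docker invocation).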
`Sample`, since it could end up being a different sample.
Hi @DoaneAS here's what I'm suggesting:

```
Channel.from( [1, "file1"], [2, "file2"], [3, "file3"] ).set{ a }
Channel.from( [1, "result1"], [2, "result2"], [3, "result3"] ).set{ b }
Channel.from( [1, "whatever1"], [2, "whatever2"], [3, "whatever3"] ).set{ c }
Channel.from( [1, "something1"], [2, "something2"], [3, "something3"] ).set{ d }

a.mix(b)
    .mix(c)
    .mix(d)
    .groupTuple(sort: true)
    .view()
    .set{ inch }

process xyz {
    input:
    set val(sample), val(file_list) from inch

    output:
    stdout into ouch

    script:
    """
    echo ${sample} ${file_list}
    """
}

ouch.view()
```
Channels `a`, `b`, `c`, and `d` are the outputs of your upstream processes, where each emits a tuple containing the sample id and a result. `mix` those channels to combine them into a single channel containing all the results, and then use `groupTuple` to group things by sample id. Then your process `xyz` has a single input channel where each result in `file_list` is properly associated with `sample`. Note that in my canned example `val(file_list)` is a `val` because my examples are just strings, but since I believe you have actual files you should use `file(file_list)` instead.
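With `sort: true`, the elements within each group are sorted, so the `view` above should print each sample's results in a deterministic order, roughly like this (though the order in which the keys themselves appear may vary):

```
[1, [file1, result1, something1, whatever1]]
[2, [file2, result2, something2, whatever2]]
[3, [file3, result3, something3, whatever3]]
```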