rpetit3 (Robert Petit)
Ignore me! Answer is yes. Looks like my issue is something else
rpetit3 (Robert Petit)
The issue is related to using Groovy's `File().exists()` from pre-Nextflow code.
Moritz E. Beber
Has anyone put together a script that allows you to delete all work directories of a particular process in a pipeline run using nextflow log and some clever bash transformations?
Moritz E. Beber
Okay, quick thing that I came up with is:

```shell
nextflow log <run name> -F 'process =~ /<process name>/' | while read -r line; do rm -r "$line"; done
```
If using Singularity, is there a way to get it to look for files in a specific local folder without specifying the full path in each process.container directive? I tried setting singularity.cacheDir to a directory containing my image files, but unfortunately that didn't seem to work. I specified the image file name including the extension (image_name.img); perhaps that is the issue?
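If it helps, one configuration that may achieve this is pointing Nextflow at a local image directory. This is a sketch: I'm assuming the singularity.libraryDir setting and an absolute container path, both of which should be checked against the docs for your Nextflow version; the paths are illustrative.

```nextflow
// nextflow.config -- sketch; paths are illustrative
singularity {
    enabled = true
    // directory searched for existing local images
    libraryDir = '/data/singularity-images'
}

// alternatively, reference the image file by absolute path
process.container = '/data/singularity-images/image_name.img'
```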
6 replies
rpetit3 (Robert Petit)
Is there a way to ignore or hide the first() warning (WARN: The operator first is useless when applied to a value channel which returns a single value by definition)?
Sam Birch

I'm having a strange issue, and wonder if the way I'm using DSL2 modules is not correct. I have a workflow defined in one file with the following structure:

```nextflow
workflow foo {
    take: inputsPath
    main:
    a(inputsPath)
    b(inputsPath)
    c(inputsPath, a.out, b.out)
}
```

a, b and c are just processes that run a python script over each file in the directory inputsPath and store the results in an output directory

Then in main.nf I have:

```nextflow
include { foo as foo1 } from './file.nf'
include { foo as foo2 } from './file.nf'
include { foo as foo3 } from './file.nf'

workflow {
    foo1(Channel.fromList(['inputs1-1', 'inputs1-2']))
    foo2(Channel.fromList(['inputs2-1', 'inputs2-2']))
    foo3(Channel.fromList(['inputs3-1', 'inputs3-2']))
}
```

The problem is that when the foo workflow executes the c process, the inputs are mixed up, i.e. it might be called with the output of a run on inputs1 and b run on inputs2.

This seems like it should be impossible, as the processes run by foo1 should only be looking at inputs/outputs associated with inputs1, correct?

Unless when I import foo it imports separate instances of the workflow, but not separate instances of the processes, and so a.out doesn't necessarily refer to the same a process that was started by the current workflow instance?
Sam Birch
There's a runnable, minimal test case here: https://github.com/hotplot/nf-wf-issues
1 reply
Brandon Cazander
Is there a way to use the count() operator for control flow? I am struggling to make it work. What I would like to do is to ensure that a channel has at least one item; otherwise, I'll throw an error.
3 replies
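An alternative to count() that may achieve this is the ifEmpty operator, which only fires when the channel emits nothing. A sketch; the channel names and error message are illustrative:

```nextflow
// fail fast if the channel emits no items at all
items_ch
    .ifEmpty { error "Input channel is empty: at least one item is required" }
    .set { checked_items_ch }
```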

hi all

I am having an issue with staging multiple files, which I have reported here - https://github.com/nextflow-io/nextflow/issues/1364#issuecomment-999285314

Wondering if anyone else in the Gitter community has faced this issue and what the workaround was.

I am using NF with AWS Batch and, as suggested there, I have tried beforeScript: 'ulimit -s unlimited' but it does not seem to work.

Thanks in advance

Asaf Peer
A basic question: if a process is called with an empty channel it won't be executed, while if a subworkflow is called with an empty channel it will be processed. Is this correct?
1 reply
Paolo Di Tommaso
the sub-workflow is just a grouping of processes, therefore it gets executed if any of its processes gets executed
Asaf Peer

Hello all, I am facing an error trying to use Nextflow with Azure Batch. Currently I am using the nf-core/sarek pipeline, and at the beginning of the pipeline it gives me the error below:

```
Error executing process > 'MapReads (1543_18-1)'

Caused by:
  Cannot find a VM for task 'MapReads (1543_18-1)' matching this requirements: type=Standard_E8_v3, cpus=16, mem=60 GB, location=brazilsouth
```

Any help will be appreciated,

4 replies
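For reference, Standard_E8_v3 provides 8 vCPUs, so a task requesting cpus = 16 cannot be placed on it. Either lower the process cpus request or select a larger VM size in the pool configuration. A sketch; the pool settings shown are assumptions to verify against the Azure Batch section of the Nextflow docs:

```nextflow
// nextflow.config -- sketch; pool name and VM size are illustrative
azure {
    batch {
        location = 'brazilsouth'
        autoPoolMode = true
        pools {
            auto {
                // the VM size must satisfy the largest cpus/memory request
                vmType = 'Standard_E16_v3'
            }
        }
    }
}
```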
I suppose that somebody here has used Singularity with Nextflow. I'm trying to mount some local dirs in the container, but reading the documentation I came across this:
Nextflow expects that data paths are defined system wide, and your Singularity images need to be created having the mount paths defined in the container file system.
Does this mean that I need to create the Singularity image with the mount points defined inside? Or can I "map" the dirs that I want to mount from outside, Docker-style with -v /something:/something?
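For what it's worth, a runtime bind can usually be requested without baking the mount point into the image. A sketch; the paths are illustrative and this assumes the host's Singularity installation allows user-defined binds:

```nextflow
// nextflow.config -- sketch
singularity {
    enabled = true
    autoMounts = true
    // equivalent in spirit to Docker's -v /host:/container
    runOptions = '-B /data/db:/db'
}
```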
Sablin AMON
Hello, I am new to Nextflow and would like to know how to send each task (process) via ssh for execution on a remote cluster. Thanks!

Hello people,
I could solve the previous error on Azure Batch, and now another problem has appeared:

Caused by:
  At least one value of specified task container settings is invalid

Command executed:

  alleleCounter --version &> v_allelecount.txt 2>&1 || true
  bcftools --version &> v_bcftools.txt 2>&1 || true
  bwa version &> v_bwa.txt 2>&1 || true
  cnvkit.py version &> v_cnvkit.txt 2>&1 || true
  configManta.py --version &> v_manta.txt 2>&1 || true
  configureStrelkaGermlineWorkflow.py --version &> v_strelka.txt 2>&1 || true
  echo "2.7.1" &> v_pipeline.txt 2>&1 || true
  echo "21.10.6" &> v_nextflow.txt 2>&1 || true
  snpEff -version &> v_snpeff.txt 2>&1 || true
  fastqc --version &> v_fastqc.txt 2>&1 || true
  freebayes --version &> v_freebayes.txt 2>&1 || true
  freec &> v_controlfreec.txt 2>&1 || true
  gatk ApplyBQSR --help &> v_gatk.txt 2>&1 || true
  msisensor &> v_msisensor.txt 2>&1 || true
  multiqc --version &> v_multiqc.txt 2>&1 || true
  qualimap --version &> v_qualimap.txt 2>&1 || true
  R --version &> v_r.txt 2>&1 || true
  R -e "library(ASCAT); help(package='ASCAT')" &> v_ascat.txt 2>&1 || true
  samtools --version &> v_samtools.txt 2>&1 || true
  tiddit &> v_tiddit.txt 2>&1 || true
  trim_galore -v &> v_trim_galore.txt 2>&1 || true
  vcftools --version &> v_vcftools.txt 2>&1 || true
  vep --help &> v_vep.txt 2>&1 || true

  scrape_software_versions.py &> software_versions_mqc.yaml

Command exit status:

Command output:

Work dir:

Any ideas?
Thanks in advance,

Paolo Di Tommaso
What version of Nextflow are you using?
version 21.10.6 build 5660
Paolo Di Tommaso
I've had this same problem recently, however it's not clear what is happening
please move this discussion here

I'm using DSL2, and I've been building a workflow where I want to allow the user to use different "flavors" of the workflow by specifying command line parameters. I started by using the if()/else() blocks described in the documentation, but I found that when I have multiple configurable steps this pattern explodes into nested conditional logic that becomes hard to read.

With a little fiddling I found that I could define a variable that holds a reference to a runnable step, and then use this variable in the workflow block as though it was a regular process or workflow. I haven't seen this described in the documentation, but it was a concise way of describing what I wanted. Is there a more conventional (syntactic sugar) "nextflow-way" of doing this?

to give a more concrete example, the pattern I was using was:

```nextflow
process variant1 {
    // ...
}

process variant2 {
    // ...
}

def variable_step = params.flag ? variant1.&run : variant2.&run

workflow {
    // ... variable_step(...) used like a regular process ...
}
```

Do you think this is a "safe" thing to do? will it be stable-ish/compatible with future versions of nextflow?
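For comparison, the conventional way to express this without touching internals is to branch inside the workflow body. A sketch; the channel and parameter names are illustrative, the process names follow the example above:

```nextflow
workflow {
    data_ch = Channel.fromPath(params.input)  // illustrative input
    if( params.flag )
        variant1(data_ch)
    else
        variant2(data_ch)
}
```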


When I don't define variable_step so that it points to the run method, or bind call() to run(), e.g.:

variable_step.metaClass.call = variable_step.&run

then I get this error:

Missing process or function with name 'call'

When a process definition is "called" in a workflow context, is there something that does the call() -> run() binding at runtime? Why does assigning it to an intermediate variable change how the binding happens?

Paolo Di Tommaso
hacking internal structures is not guaranteed to work. isn't an if/else enough to achieve the same?
Does anyone know what to do if a job finishes with exit code 0 but is still marked as failed? The .command.err and .command.out files are empty, and the job indeed finished successfully.
Alex Mestiashvili
How can I print a message if a channel is empty? I've tried something like files_ch.ifEmpty("empty").view(), but it prints the channel content when it is not empty. Also, is there a way to exit the workflow if a channel is empty?
Tim Dudgeon

I'm having a problem defining optional outputs in DSL2. I have a workflow where part of it is optional:

```nextflow
if (params.run_extra) {
    // ... run the optional processes ...
}
```

That works fine. But now I want to emit the outputs. I've tried:

```nextflow
if (params.run_extra) {
    // ... emit the optional outputs ...
}
```

But that doesn't seem to be allowed.

Any ideas?

Paolo Di Tommaso
you need a conditional expression
1 reply
some_ch = params.run_extra ? process2.out : some_default
Jong Ha Shin
Hello, I was wondering if there is a way to ask Nextflow to resubmit the run automatically, again and again. Sometimes the job crashes due to an incompatibility with the system, but it is a random error; running the job again with -resume solves the problem.
3 replies
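Within a single run, transient failures can often be retried automatically via process directives. A sketch; the process name, command, and retry count are illustrative:

```nextflow
process flaky_task {
    // resubmit the task automatically on failure, up to three times
    errorStrategy 'retry'
    maxRetries 3

    script:
    """
    run_unstable_tool.sh
    """
}
```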
#2546 was closed so I guess that functionality is not supported with DSL2 anymore. As an alternative I am attempting to execute Groovy code in a Closure provided to the subscribe method. Is it possible to catch an error that is thrown at that point? Basically something like this:
```nextflow
Channel.of(1, 2, 3).toList().subscribe({ list ->
    // Raise error
    throw new Exception("Something in the list is not valid")
})
// Would like to catch the error and stop execution here
println("Execution continues anyway") // This shouldn't be printed but it is
```
Hi, I'm trying to bind one directory from the local host into a Singularity container. I tried both containerOptions --volume /data/db:/db in the process section and also singularity.runOptions = '-B /data/db:/db', but I cannot mount it properly.
Do I need to modify something in the Singularity configuration to allow this?
singularity.autoMounts = true is defined in my profile section too.
I'm trying something like this, just for test:
```nextflow
process test {
    echo true
    containerOptions '--volume /tmp:/tmp/tmp-mounted'

    script:
    """
    cat /etc/*release >> /tmp/tmp-mounted/testing-bind.txt
    echo "Hello world! From Singularity container" >> /tmp/tmp-mounted/testing-bind.txt
    touch /tmp/tmp-mounted/thepipelinehasrun
    ls /tmp/tmp-mounted/
    cat /tmp/tmp-mounted/testing-bind.txt
    """
}
```

with a profile:
```nextflow
profiles {
    singularity {
        singularity.enabled = true
        singularity.autoMounts = true
        process.container = 'alpine.3.8.simg'
    }
}
```
1 reply

Hi all,

It seems I still get a 'configuration conflict' when I run with awsbatch, like the following bug: nextflow-io/nextflow#2370.

Configuration conflict
This value was submitted using containerOverrides.memory which has been deprecated and was not used as an override. Instead, the MEMORY value found in the job definition’s resourceRequirements key was used instead. More information about the deprecated key can be found in the AWS Batch API documentation.

Nextflow version

  Version: 21.10.6 build 5660
  Created: 21-12-2021 16:55 UTC 
  System: Linux 5.11.0-1022-aws
  Runtime: Groovy 3.0.9 on OpenJDK 64-Bit Server VM 1.8.0_312-8u312-b07-0ubuntu1~20.04-b07
  Encoding: UTF-8 (UTF-8)

How to solve this?

Issue does not seem to happen with NXF_VER=21.04.1 nextflow run main.nf
Jeffrey Massung
Is there a process directive I can use to fail a workflow? Maybe I can just throw an exception, but I'm not sure if there's something nicer I should do instead. I basically have an if block in the directives section of a process, and if it's false I want to stop running and fail.
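One idiom that may fit is Nextflow's built-in error function, which aborts the run with a message and a non-zero exit status. A sketch at workflow scope; the parameter name and message are illustrative:

```nextflow
workflow {
    // abort the run early if a required condition does not hold
    if( !params.required_option )
        error "Missing required parameter: --required_option"
}
```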
Nathan Spix

I have a workflow that involves splitting up files per chromosome and then merging them later. To make the workflow a bit more flexible, I first pull the chromosomes out of the reference file using a bit of grep, so I have a channel with all the chromosome names. I can then do something like this:

```nextflow
input:
tuple val(id), path(file) from channel_a
each chr from chromosomes

output:
tuple val(id), val(chr), path(outfile) into channel_b
```

then I can group things up:

channel_b.groupTuple(by: 0)

and use that as input for the next process.
My question is, since the number of chromosomes is constant for any given run of the workflow, can I extract that value (e.g. map{ it.readLines().size() }) and feed that into groupTuple? I thought perhaps I could assign this value to a variable and then pass that variable to the groupTuple call but this doesn't work (the type of the variable is something fancy, not an Int).

1 reply
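One documented approach for a dynamic group size is the groupKey helper, which tells groupTuple how many items to expect per key so it can emit each group as soon as it is complete. A sketch reusing the channel names above:

```nextflow
chromosomes
    .count()                                    // single value: number of chromosomes
    .combine(channel_b)                         // pair it with every [id, chr, outfile] tuple
    .map { n, id, chr, outfile ->
        tuple( groupKey(id, n), chr, outfile )  // the key now carries the expected group size
    }
    .groupTuple()
```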
The docs say that the merge operator will be removed soon. What's the replacement?