Jeremy Leipzig
@leipzig
i'm using nextflow-21.10.5 - any workflow does this including hello-world
Jeremy Leipzig
@leipzig
one clue might be
gsutil cat gs://mygcpbucket/nextflow/f4/72c513e7a923fb0c80b30fc74c669d/google/logs/output
/bin/bash: /nextflow/f4/72c513e7a923fb0c80b30fc74c669d/.command.log: Permission denied
+ trap 'err=$?; exec 1>&2; gsutil -m -q cp -R /nextflow/f4/72c513e7a923fb0c80b30fc74c669d/.command.log gs://truwl-internal-inputs/nextflow/f4/72c513e7a923fb0c80b30fc74c669d/.command.log || true; [[ $err -gt 0 || $GOOGLE_LAST_EXIT_STATUS -gt 0 || $NXF_DEBUG -gt 0 ]] && { ls -lah /nextflow/f4/72c513e7a923fb0c80b30fc74c669d || true; gsutil -m -q cp -R /google/ gs://truwl-internal-inputs/nextflow/f4/72c513e7a923fb0c80b30fc74c669d; } || rm -rf /nextflow/f4/72c513e7a923fb0c80b30fc74c669d; exit $err' EXIT
+ err=1
+ exec
+ gsutil -m -q cp -R /nextflow/f4/72c513e7a923fb0c80b30fc74c669d/.command.log gs://truwl-internal-inputs/nextflow/f4/72c513e7a923fb0c80b30fc74c669d/.command.log
+ [[ 1 -gt 0 ]]
+ ls -lah /nextflow/f4/72c513e7a923fb0c80b30fc74c669d
total 40K
drwxr-xr-x 3 root root 4.0K Dec 10 23:51 .
drwxr-xr-x 3 root root 4.0K Dec 10 23:51 ..
-rw-r--r-- 1 root root 3.3K Dec 10 23:51 .command.log
-rw-r--r-- 1 root root 5.3K Dec 10 23:51 .command.run
-rw-r--r-- 1 root root   36 Dec 10 23:51 .command.sh
drwx------ 2 root root  16K Dec 10 23:50 lost+found
+ gsutil -m -q cp -R /google/ gs://mygcpbucket/nextflow/f4/72c513e7a923fb0c80b30fc74c669d
1 reply
seems weird to see that generic permissions error
pmtempy
@pmtempy

I was running a Nextflow job with about 2k tasks on AWS Batch. Unfortunately, the Docker container for one of the processes contained an error (Exception in thread "Thread-1" java.awt.AWTError: Assistive Technology not found: org.GNOME.Accessibility.AtkWrapper), and I had to kill the nextflow run. I guess I must have hit CTRL+C twice, because while the interactive nextflow CLI progress stopped, I'm still left with thousands of RUNNABLE jobs in AWS Batch.

Is there any quick way to remove them without potentially affecting other nextflow runs using the same compute queue?
How can I avoid similar issues in the future? I.e. how should I properly cancel a running nextflow run and make it clean up its jobs in Batch?

1 reply
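One way to clear the stuck jobs without touching other runs (an untested sketch; the queue name is a placeholder, and the `nf-` job-name prefix is an assumption based on Nextflow typically naming Batch jobs after the process):

```shell
QUEUE=my-compute-queue   # hypothetical queue name
# Cancel every RUNNABLE job in the queue whose name looks like it came from this pipeline
aws batch list-jobs --job-queue "$QUEUE" --job-status RUNNABLE \
    --query "jobSummaryList[?starts_with(jobName, 'nf-')].jobId" --output text \
  | tr '\t' '\n' \
  | while read -r id; do
      [ -n "$id" ] && aws batch cancel-job --job-id "$id" --reason "orphaned nextflow run"
    done
```

As for the clean way to stop a run: a single CTRL+C should let Nextflow shut down gracefully and delete its own Batch jobs; pressing it a second time kills the launcher immediately and leaves the jobs behind.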
Yasset Perez-Riverol
@ypriverol
hi all, i have code like this:
ch_spectra_summary.map { tuple_summary ->
                         def key = tuple_summary[0]
                         def summary_file = tuple_summary[1]
                         def list_spectra = tuple_summary[1].splitCsv(skip: 1, sep: '\t')
                         .flatten{it -> it}
                         .collect()
                         return tuple(key.toString(), list_spectra) 
                        }
                   .groupTuple()
                   .set { ch_spectra_tuple_results}
is returning something like
[supp_info.mzid.gz, [[supp_info.mzid, 2014-06-24, Velos005137.mgf, MGF, Velos005137.mgf, ftp://ftp.ebi.ac.uk/pride-archive/2014/06/PXD001077/Velos005137.mgf]]]
but I would like to select only the last column of the CSV
ftp://ftp.ebi.ac.uk/pride-archive/2014/06/PXD001077/Velos005137.mgf
the result should be:
[supp_info.mzid.gz, [ftp://ftp.ebi.ac.uk/pride-archive/2014/06/PXD001077/Velos005137.mgf]]
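One way to keep only the last CSV field (an untested sketch of the same snippet; `row[-1]` takes the final column of each parsed row):

```nextflow
ch_spectra_summary
    .map { tuple_summary ->
        def key = tuple_summary[0]
        // splitCsv yields one list per row; keep only the last field of each row
        def list_spectra = tuple_summary[1]
            .splitCsv(skip: 1, sep: '\t')
            .collect { row -> row[-1] }
        tuple(key.toString(), list_spectra)
    }
    .groupTuple()
    .set { ch_spectra_tuple_results }
```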
Sofia Stamouli
@sofstam

Hello,

I have the following python script in a nextflow process.

process update_image {

script:
"""
#!/usr/bin/env python3
import os, subprocess

subprocess.check_call(['singularity', 'pull', 'docker://busybox'])
"""
}

Singularity is installed and is in the $PATH. The config file looks like:

singularity {
    singularity.enabled = true 
    singularity.autoMounts = true
}

However, I get the error: No such file or directory: 'singularity'. Any ideas what might be wrong here?

ChillyMomo
@ChillyMomo709
Hi Nextflow community, I was wondering what exactly determines what's cached and what's not? It seems there are some processes of mine that always start when I resume the pipeline, even though the process has finished before.
ChillyMomo
@ChillyMomo709

Hello,

I have the following python script in a nextflow process.

process update_image {

script:
"""
#!/usr/bin/env python3
import os, subprocess

subprocess.check_call(['singularity', 'pull', 'docker://busybox'])
"""
}

Singularity is installed and is in the $PATH. The config file looks like:

singularity {
    singularity.enabled = true 
    singularity.autoMounts = true
}

However, I get the error: No such file or directory: 'singularity'. Any ideas what might be wrong here?

Try the following:

singularity {
    enabled = true
    autoMounts = true
}
Alex Mestiashvili
@mestia
is there a way to assign the output of a process to a variable which can be evaluated later in the workflow?
4 replies
Brandon Cazander
@brandoncazander

I have a regular expression in my parameters

params {
    normal_name = /^NF-.*-3.*/
}

that I use to match in my workflow elsewhere

def split_normal = branchCriteria {
    normal: it.name =~ params.normal_name
}

I'm trying to override this parameter as a CLI argument with --normal_name '/^NF-.*-4.*/' but then it's treated as a string in the workflow instead. Is there a good way to handle this, perhaps by compiling the parameter in the workflow?

1 reply
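A possible workaround (untested sketch): values typed on the command line always arrive as plain strings, so the slashes in '/^NF-.*-4.*/' become part of the pattern. You can strip an optional /.../ wrapper and compile the rest yourself:

```nextflow
def raw = params.normal_name.toString()
// Accept both a bare pattern and a /.../-wrapped one from the CLI
def src = (raw.startsWith('/') && raw.endsWith('/')) ? raw[1..-2] : raw
def normal_pattern = java.util.regex.Pattern.compile(src)

def split_normal = branchCriteria {
    normal: it.name =~ normal_pattern
}
```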
ramakrishnas
@ramakrishnas

Hi, I'm trying to get the sarek pipeline work on our hpc cluster.

here is my command
nextflow run nf-core/sarek -r 2.7.1 -profile singularity -c nfcore_ui_hpc.config --input '/Users/rsompallae/projects/janz_lab_wes/fastqs_1.tsv' --genome mm10

and the error I get is

There is insufficient memory for the Java Runtime Environment to continue.

Native memory allocation (malloc) failed to allocate 32 bytes for AllocateHeap

An error report file with more information is saved as:

I feel like this has to do with some parameter adjustment but I'm not sure how to fix it.

Thanks in advance for your help

KyleStiers
@KyleStiers

I would like to use split -l 1 on an input file and then emit each small file out on its own with a tuple maintaining metadata for the initial file that was split, instead of having all of them contained in that one field of the tuple.

something like this:

process split {

    input:
    tuple path(run), val(plateid), path(file) from ex_ch

    output:
    tuple path(run), val(plateid), path("file_*") into parse_ch

    script:
    """
    tail -n +17 $file  > sample_lines.csv
    split -l 1 -d sample_lines.csv file_
    """
}

But this should ideally emit the number of lines in sample_lines.csv as tasks. With this setup they're all caught into a single channel and you get a tuple that looks like:

['/path/to/run', 'plate_id', 'file_1, file_2, file_3, file_4']

Anyone have a quick way to do this? Maybe it's just a .multimap / .map but I can't seem to get it right.

1 reply
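The transpose operator may do exactly this: it unrolls the list-valued element of a tuple against the other fields (untested sketch applied to the output channel above; the downstream channel name is illustrative):

```nextflow
parse_ch
    .transpose()
    .set { per_file_ch }
// emits [run, plateid, file_00], [run, plateid, file_01], ... one item per split file
```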
Tobias Neumann
@t-neumann
Hi - I have a script that demultiplexes a fastq file based on an input barcodes file. Now I want to start one mapping process for each demultiplexed fastq file. I do this by writing a CSV file with the file locations for each demultiplexed file, which I then supply to a process that reads a channel based upon the .splitCsv operator. Now this works nicely when running this locally or on a cluster, but now I tried to move this to AWS S3. Is there a way to retrieve the s3 directory location where the files are stored and put it in the CSV file? Or how would I approach this?
David Mas-Ponte
@davidmasp
Hello all, I am not sure if this is a silly question. If I have a process that generates 10 files in the same folder, can I somehow transform them into a channel that emits each file separately? I can get a list of the files like file "*.rds" into group_chunks_list. I tried then fromList but without luck. I am not sure if I am missing something.
5 replies
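If the process output arrives as a single list of files, the flatten operator should split it into one emission per file (untested sketch; the channel names are illustrative):

```nextflow
group_chunks_list
    .flatten()
    .set { per_chunk_ch }   // one .rds file per emitted item
```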
anoronh4
@anoronh4
I sometimes get Error: Could not find or load main class run immediately when running nextflow, and no files are produced by nextflow. What does this error actually mean? I can see that the .nf script I am calling is accessible and the nextflow executable is also accessible.
rpetit3 (Robert Petit)
@rpetit3:matrix.org
[m]
Hi! Can I feed a non-process file() a path to gs://? It's not liking it at the moment, and I'm guessing it might be user error. Works fine for file() in a process
Basically trying to read a TSV of inputs from gs
rpetit3 (Robert Petit)
@rpetit3:matrix.org
[m]
Ignore me! Answer is yes. Looks like my issue is something else
rpetit3 (Robert Petit)
@rpetit3:matrix.org
[m]
Issue is related to using "File().exists()" from Groovy pre-Nextflow
Moritz E. Beber
@Midnighter
Has anyone put together a script that allows you to delete all work directories of a particular process in a pipeline run using nextflow log and some clever bash transformations?
Moritz E. Beber
@Midnighter
Okay, quick thing that I came up with is:
nextflow log <run name>  -F 'process =~ /<process name>/' | while read line; do rm -r "$line"; done
awgymer
@awgymer
If using singularity is there a way to get it to look for files in a specific local folder without specifying the full path in each process.container directive? I tried setting the singularity.cacheDir to one which contained my image files but unfortunately that didn't seem to work. I specified the image file name including the extension (image_name.img) perhaps that is the issue?
6 replies
rpetit3 (Robert Petit)
@rpetit3:matrix.org
[m]
Is there a way to ignore or hide the first() warning (WARN: The operator first is useless when applied to a value channel which returns a single value by definition)?
Sam Birch
@hotplot

I'm having a strange issue, and wonder if the way I'm using DSL2 modules is not correct. I have a workflow defined in one file with the following structure:

workflow foo {
    take:
        inputsPath

    main:
        a(inputsPath)
        b(inputsPath)
        c(inputsPath, a.out, b.out)
}

a, b and c are just processes that run a python script over each file in the directory inputsPath and store the results in an output directory

Then in main.nf I have:

include { foo as foo1 } from './file.nf'
include { foo as foo2 } from './file.nf'
include { foo as foo3 } from './file.nf'

workflow {
    foo1(Channel.fromList(['inputs1-1', 'inputs1-2']))
    foo2(Channel.fromList(['inputs2-1', 'inputs2-2']))
    foo3(Channel.fromList(['inputs3-1', 'inputs3-2']))
}

The problem is that when the foo workflow executes the c process, the inputs are mixed up, i.e. it might be called with the output of a run on inputs1 and b run on inputs2.

This seems like it should be impossible, as the processes run by foo1 should only be looking at inputs/outputs associated with inputs1, correct?

Unless when I import foo it imports separate instances of the workflow, but not separate instances of the processes, and so a.out doesn't necessarily refer to the same a process that was started by the current workflow instance?
Sam Birch
@hotplot
There's a runnable, minimal test case here: https://github.com/hotplot/nf-wf-issues
1 reply
Brandon Cazander
@brandoncazander
Is there a way to use the count() operator for control flow? I am struggling to make it work. What I would like to do is to ensure that a channel has at least one item; otherwise, I'll throw an error.
3 replies
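A common alternative to count() here is ifEmpty, which only fires when the channel emits nothing (untested sketch following the usual nf-core pattern; channel names are illustrative):

```nextflow
my_ch
    .ifEmpty { exit 1, "Expected at least one item in the channel" }
    .set { checked_ch }
```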
tamuanand
@tamuanand

hi all

I am having an issue with staging multiple files and I have reported it here - https://github.com/nextflow-io/nextflow/issues/1364#issuecomment-999285314

Wondering if anyone else in the gitter community has faced this issue and what was the workaround

I am using NF with AWSBatch and as suggested there, I have tried beforeScript: 'ulimit -s unlimited' but it does not seem to work

Thanks in advance

Asaf Peer
@asafpr
A basic question: if a process is called with an empty channel it won't be executed, while if a subworkflow is called with an empty channel it will still be processed. Is this correct?
1 reply
Paolo Di Tommaso
@pditommaso
the sub-workflow is just a grouping of processes, therefore it gets executed if any of its processes gets executed
Asaf Peer
@asafpr
Thanks
ziltonvasconcelos
@ziltonvasconcelos

Hello all, I am facing an error trying to use nextflow with azure batch. Currently, I am using the nf-core/sarek pipeline and at the beginning of the pipeline it gives me the error below:

Error executing process > 'MapReads (1543_18-1)'

Caused by:
Cannot find a VM for task 'MapReads (1543_18-1)' matching this requirements: type=Standard_E8_v3, cpus=16, mem=60 GB, location=brazilsouth

Any help will be appreciated,
Zilton.

4 replies
9d0cd7d2
@9d0cd7d2:matrix.org
[m]
I suppose somebody here has used Singularity with Nextflow. I'm trying to mount some local dirs on the container, but reading the documentation I faced this:
Nextflow expects that data paths are defined system wide, and your Singularity images need to be created having the mount paths defined in the container file system.
Does this mean that I need to create the Singularity image with the mount points defined inside? Or can I "map" the dirs I want to mount externally, Docker-style, as -v /something:/something?
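For ad-hoc binds you don't have to bake the paths into the image; the singularity config scope accepts extra runtime flags (untested sketch; the paths are placeholders):

```nextflow
singularity {
    enabled    = true
    autoMounts = true
    // equivalent of docker's -v /something:/something
    runOptions = '-B /data/refs:/data/refs -B /scratch'
}
```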
Sablin AMON
@wiztrust
Hello, I am new to Nextflow and would like to know how to send each task (process) via ssh for execution on a remote cluster with Nextflow? Thanks
ziltonvasconcelos
@ziltonvasconcelos

Hello people,
I could solve the previous error on Azure Batch and now another problem appeared:

Caused by:
  At least one value of specified task container settings is invalid

Command executed:

  alleleCounter --version &> v_allelecount.txt 2>&1 || true
  bcftools --version &> v_bcftools.txt 2>&1 || true
  bwa version &> v_bwa.txt 2>&1 || true
  cnvkit.py version &> v_cnvkit.txt 2>&1 || true
  configManta.py --version &> v_manta.txt 2>&1 || true
  configureStrelkaGermlineWorkflow.py --version &> v_strelka.txt 2>&1 || true
  echo "2.7.1" &> v_pipeline.txt 2>&1 || true
  echo "21.10.6" &> v_nextflow.txt 2>&1 || true
  snpEff -version &> v_snpeff.txt 2>&1 || true
  fastqc --version &> v_fastqc.txt 2>&1 || true
  freebayes --version &> v_freebayes.txt 2>&1 || true
  freec &> v_controlfreec.txt 2>&1 || true
  gatk ApplyBQSR --help &> v_gatk.txt 2>&1 || true
  msisensor &> v_msisensor.txt 2>&1 || true
  multiqc --version &> v_multiqc.txt 2>&1 || true
  qualimap --version &> v_qualimap.txt 2>&1 || true
  R --version &> v_r.txt 2>&1 || true
  R -e "library(ASCAT); help(package='ASCAT')" &> v_ascat.txt 2>&1 || true
  samtools --version &> v_samtools.txt 2>&1 || true
  tiddit &> v_tiddit.txt 2>&1 || true
  trim_galore -v &> v_trim_galore.txt 2>&1 || true
  vcftools --version &> v_vcftools.txt 2>&1 || true
  vep --help &> v_vep.txt 2>&1 || true

  scrape_software_versions.py &> software_versions_mqc.yaml

Command exit status:
  -

Command output:
  (empty)

Work dir:
  az://genomas-raros/work/11/f701b3eb21426f510b935f9e75da58

Any ideas?
Thanks in advance,
Zilton.

Paolo Di Tommaso
@pditommaso
What version of Nextflow are you using?
ziltonvasconcelos
@ziltonvasconcelos
version 21.10.6 build 5660
Paolo Di Tommaso
@pditommaso
I've had this same problem recently, however it's not clear what is happening
please move this discussion here