sureshhewa
@sureshhewabi
Screen Shot 2021-01-18 at 4.32.27 PM.png
Screen Shot 2021-01-18 at 4.32.56 PM.png
I have installed the click library on my machine
is conda like Docker or a VM? does it maintain a separate environment? if so, do I need to install the "click" library in the bioconda environment as well?
sureshhewa
@sureshhewabi
OK, I understood.
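For reference on the conda question above: conda is not a VM, but like a container it provides an isolated environment, and Nextflow can build one per process from a conda directive, so a dependency such as click can be declared there rather than installed system-wide. A minimal sketch (the package spec below is illustrative):

process checkClick {

   // Nextflow creates an isolated conda environment just for this process,
   // so the packages listed here do not need to be installed on the host
   conda 'python=3.8 click'

   script:
   """
   python3 -c 'import click; print(click.__version__)'
   """
}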
Michael Milton
@TMiguelT
Hi, I asked a Nextflow question over here which has received a few votes, so I gather other people are interested too: https://bioinformatics.stackexchange.com/questions/9099/passing-around-complex-metadata-in-nextflow. Answers much appreciated.
Petr Danecek
@pd3
Hi, I am a Nextflow newbie and was wondering if the following is possible: in my workflow I would like to keep an object defined in Groovy (imagine a table) which keeps growing as steps of the pipeline finish (imagine each step adds a new column). I tested this concept but it does not work, because Nextflow executes things in parallel rather than waiting until the previous step finishes. This is what I'd like to do:
data = create_data()
Channel
     .fromList( data.list_columns_relevant_for_process1() )
     .set{ data_for_process1 }
run_process1( data_for_process1 )

// here I'd like the workflow to wait until process1 finishes

data.add_data( process1.out.data )

Channel
     .fromList( data.list_columns_relevant_for_process2() )
     .set{ data_for_process2 }
run_process2( data_for_process2 )
Paolo Di Tommaso
@pditommaso
use a map, and make a clone (copy) in each task execution
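A minimal sketch of that suggestion (the map keys and channel values are illustrative): carry the metadata as a map on the channel and clone it before modifying it, so parallel executions never mutate the same shared object:

data = [ run: 'run1', note: 'initial' ]

Channel
     .fromList( ['col1', 'col2'] )
     .map { col ->
         // shallow-clone the shared map so each emission gets its own copy
         def meta = data.clone()
         meta.column = col
         return meta
     }
     .set { data_for_process1 }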
Flic Anderson
@FlicAnderson
Hi there, does anyone know of any good resources about handling outputs in Nextflow for R flexdashboards (https://rmarkdown.rstudio.com/flexdashboard/)? Given that successfully running a flexdashboard R Markdown file automatically opens a browser to explore the data interactively, does anyone have any ideas on how to approach this? I'm still quite new to Nextflow and flexdashboard, so I can't see any obvious solutions to something like this, or any similar things to investigate further.
Alexander Peltzer
@apeltzer
I'd be interested too. The way I've done it is to precompute something using Nextflow and then, in a separate process, fill in an R Markdown file, render it as HTML and produce that as the output. That sometimes makes sense, as the precompute steps are somewhat time-consuming, but we can explore the final output in a more interactive fashion / allow filtering in the dashboard HTML etc ...
But there are no nice resources about these types of things, as far as I can tell
Flic Anderson
@FlicAnderson
@apeltzer ah, that's a shame. We do a lot of precompute analysis in a previous step of our pipeline too, but a student working with us has created a really nice flexdashboard which runs on the outputs of those previous steps and lets us select samples (it's a genomics data pipeline) to view plots for. We haven't got the flexdashboard part running seamlessly from the command line yet, so I'm still not 100% sure about the output format of flexdashboards myself, but I'm attempting to pull together a process in our .nf to integrate it as it comes along. Is there any chance your project is open and I could have a peek at how you're handling it? It sounds like you're doing something similar...
Alexander Peltzer
@apeltzer
I'm having this open sourced soon, but unfortunately need to wait for approval first :-(
But yes, it sounds very similar to what we are doing, and there are some other projects where I can check whether this is already open
A similar approach could be to generate a standardized output and provide an app in RShiny to read that file and make interaction possible
(In another project we do this: we write an RData object and then parse it in the Shiny app for visual / interactive exploration)
Flic Anderson
@FlicAnderson
@apeltzer No worries, I understand re: approval needed :) Good to know about other folks doing this sort of thing though; we're still at the development stage for this part, so our approach might evolve depending on what we bump into along the way!
Alexander Peltzer
@apeltzer
Feel free to reach out, I'm hoping to have it approved sooner rather than later (maybe Feb/March)
Leon Kuchenbecker
@lkuchenb
Hi all. I was wondering, is there a way to assign a closure for use with multiMap to a variable and re-use it?
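One option, assuming the multiMapCriteria helper is available in your Nextflow version (the criteria below are illustrative): define the criteria once as a variable and pass it to multiMap on several channels:

// define the multi-map criteria once...
def criteria = multiMapCriteria {
     small: it
     large: it * 10
}

// ...and reuse it on as many channels as needed
Channel.of( 1, 2, 3 ).multiMap( criteria ).set { first }
Channel.of( 10, 20 ).multiMap( criteria ).set { second }

first.small.view { "first/small: $it" }
second.large.view { "second/large: $it" }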
Edmund Miller
@Emiller88
Hello all, I was hoping to get some help figuring out why the bin dir isn't getting copied over correctly when I'm using pytest-workflow to test some workflows (it makes symlinks under the hood): nf-core/rnaseq#546
Vahid
@VJalili
Hello everyone. I'm fairly new to the Nextflow ecosystem and I'm trying to learn the basic dynamics. I'd appreciate it if you could point me to the source code of the nextflow run command.
2 replies
BarryDigby
@BarryDigby

Hi all,

Wondering if anyone has observed this behaviour when pulling a container via a withLabel process selector in the config.

Config:

process {
  ..
  ..
  withLabel: multiqc {
    container = 'docker://barryd237/week1:test'
  }
}

singularity.enabled = true
singularity.autoMounts = true

The container is successfully pulled; however, the naming of the file triggers a java.nio.file.NoSuchFileException: /data/../work/singularity/barryd237-week1-test.img.pulling.1611136739603 error. The container is downloaded as barryd237-week1-test.pulling.1611136739603.img.

I can see the discrepancy in how the container was named and how nextflow looks for it, but I don't know what the root cause is.

sureshhewa
@sureshhewabi
Screen Shot 2021-01-20 at 10.51.57 AM.png
Hi all, I have two processes. The first one, findFileRelationships, produces three text files,
and the second one, launchSubmissionValidation, should take each file and run a command.
I get this error when I pass the input variable assay_file:
Error: Got unexpected extra arguments (7_assay_relation.txt 8_assay_relation.txt)
It seems my assay_file variable contains all three file names.
How can I make it a channel and execute them in parallel?
sureshhewa
@sureshhewabi
/*
 * Find the relationship between RESULT file(s) and PEAK file(s) from Submission.px file
 */
process findFileRelationships {

   output:
   file '*_assay_relation.txt' into assay_files

   script:
   """
   python3 $workflow.launchDir/scripts/readPX.py rel -i $params.submission_px
   """
}

/*
 * Launch the validation for all the results
 */
process launchSubmissionValidation {

   conda 'submission-tool-validator=1.0.1 click'
   errorStrategy 'retry'
   maxErrors 3

   input:
   file assay_file from assay_files

   output:
   file 'stv_output.txt' into validation_results

   script:
   """
    python3 $workflow.launchDir/scripts/readPX.py validation -f $assay_file -d $params.data_base_dir
   """
}
sureshhewa
@sureshhewabi
assay_files.view { "value: $it" } gives me value: [/Users/hewapathirana/Desktop/validation/work/5f/866e63d592a56dd5ecbe92c9e7622e/6_assay_relation.txt, /Users/hewapathirana/Desktop/validation/work/5f/866e63d592a56dd5ecbe92c9e7622e/7_assay_relation.txt, /Users/hewapathirana/Desktop/validation/work/5f/866e63d592a56dd5ecbe92c9e7622e/8_assay_relation.txt]
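One way to get the parallel execution asked about above (a sketch against the DSL1 snippet posted earlier): the output channel emits all three files as a single list, so flattening it turns each *_assay_relation.txt into its own emission and launchSubmissionValidation then runs once per file:

   input:
   file assay_file from assay_files.flatten()

Equivalently, the channel can be flattened up front, e.g. assay_files.flatten().set { assay_files_flat }, and the process can read from assay_files_flat instead.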
Firas
@FirasSadiyah
Hi all, is there a way to define the working directory (where the temp files are stored) in the main workflow file e.g. main.nf rather than the config file e.g. nextflow.config?
Paolo Di Tommaso
@pditommaso
no, it's not possible
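For reference, the work directory can still be set outside main.nf, either in the config or at launch time (the path below is illustrative):

// nextflow.config
workDir = '/scratch/nf-work'

// or on the command line
// nextflow run main.nf -w /scratch/nf-work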
Firas
@FirasSadiyah
Thanks @pditommaso
Steven P. Vensko II
@spvensko_gitlab
Is there a Nextflow DSL2 module linter/sanitizer? I've got a module that seems to work in 20.04, but when I attempt to use it in 20.10 I get an error regarding an unexpected { in the workflow declaration.
sureshhewa
@sureshhewabi
Could you please comment on my question? I am really stuck there...
Benjamin Wingfield
@nebfield
I have a lot of path inputs I want to define programmatically (FTP URLs, mostly). I have made an ArrayList in Nextflow containing the paths as strings. Is it possible to feed this into Channel.fromPath? Or am I best to try a different approach?
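One approach that should work (a sketch; the URLs below are illustrative): emit the strings with Channel.fromList and map each one to a file object, which Nextflow can stage from ftp/http as well as from local paths. Channel.fromPath may also accept a list of paths directly.

// an ArrayList of remote paths held as strings
def urls = [
     'ftp://ftp.example.org/data/sample1.fastq.gz',
     'ftp://ftp.example.org/data/sample2.fastq.gz'
]

Channel
     .fromList( urls )
     .map { file(it) }        // turn each string into a path object
     .set { remote_files }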
YPHa
@YPHa
I have 4 modules with a split of related processes, but some steps in the 4th file can start as soon as some basic ones in the 1st finish, while others have to first pass through 2 & 3.
So I was wondering, is there a way to import all the functions from a module? So how I can directly access individual process output from a module without first having to wait for the entire module to finish?
Is there an import * from './module' like syntax? Because the alternative would be something like:
include { func1, func2, func3, funcN, func20+ } from './moduleName'
Another issue I am having is that some steps always cache successfully, but many always fail to cache.
This happens even though the only thing they do is download a file based on the input, and that file is guaranteed to never change for the same input. I download it as an archive and output the decompressed file.
Benjamin Wingfield
@nebfield

I think anonymous imports have been deprecated @YPHa

Check the DSL2 migration notes here: https://www.nextflow.io/docs/latest/dsl2.html

For my complicated modules I just have one include per line

Try cache = 'deep' to cache based on file input instead of timestamps (this is slower)

5 replies
Check cache directives for a full list and explanation: https://www.nextflow.io/docs/latest/process.html
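For reference, a sketch of both suggestions (module path and process names are illustrative): several processes can be listed in a single include statement, separated by semicolons, and cache 'deep' keys the cache on input file content rather than path and timestamp:

include { FUNC1; FUNC2; FUNC3 } from './moduleName'

process decompressArchive {

   // hash the input file's content instead of its path + timestamp
   cache 'deep'

   input:
   path archive

   output:
   path 'extracted'

   script:
   """
   mkdir extracted
   tar -xzf $archive -C extracted
   """
}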
sureshhewa
@sureshhewabi
I am new to Nextflow; I would really appreciate it if anyone could help me with my question posted above.
2 replies
Jacques Dainat
@Juke34
Is there a way to access a data set on a protected GitLab using the access token?
e.g.:
params.annotation = 'https://gitlab.address.com/bioinfo/test/raw/homo_sapiens.test.gff?private_token=<MyToken>'
I end up with this error: Glob pattern not allowed for files with scheme: https
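One thing that may help (a hedged sketch, not verified against a protected GitLab): the ? in the query string is being interpreted as a glob wildcard, so disabling glob interpretation lets the URL pass through as a literal path:

Channel
     .fromPath( params.annotation, glob: false )
     .set { annotation_ch }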
Alaa Badredine
@AlaaBadredine_twitter
Hello
I am using Docker with Nextflow and I am getting: docker: Error response from daemon: Duplicate mount point: /media.
Nextflow version: v20.10.0, build 5430
Alaa Badredine
@AlaaBadredine_twitter
I don't understand why, in .command.run, I see the mount point declared twice when I only specified it once in nextflow.config.
Steven P. Vensko II
@spvensko_gitlab
I've got a fairly complicated workflow (somatic and germline variant calling, transcription quantification, MHC binding predictions, etc.) that seems to work well on my local SLURM cluster. I'm trying to run the same workflow on AWS Batch and am running into an error saying a process, get_fastqs, has already been used and to use an alias (e.g. include { foo as bar } from ...) if using it again. This doesn't make any sense to me because get_fastqs is being called in a different context each time. Are there any known limitations on AWS Batch regarding reusing processes that don't exist on other executors? I'll reply to this comment with relevant information.
6 replies
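For reference, a sketch of the alias pattern the error points at (module path and channel names are illustrative): in DSL2 an included process can only be invoked once per workflow scope, so importing it twice under different aliases lets each context call its own copy:

include { get_fastqs as get_fastqs_somatic }  from './modules/get_fastqs'
include { get_fastqs as get_fastqs_germline } from './modules/get_fastqs'

workflow {
   somatic_ch  = Channel.fromPath( 'somatic_manifest.csv' )
   germline_ch = Channel.fromPath( 'germline_manifest.csv' )

   get_fastqs_somatic( somatic_ch )
   get_fastqs_germline( germline_ch )
}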
Brent Pedersen
@brentp
hi all, anyone seen this error:
Missing 'bind' declaration in input parameter
6 replies
and any idea how to debug/fix?
Ashley S Doane
@DoaneAS
Hi, I recently started getting an error when running Nextflow pipelines that use Singularity. The error occurs when singularity exec is called with the --vm-cpu flag. The error occurs outside of Nextflow, so it may be a Singularity issue. Has Nextflow always called Singularity with the --vm-cpu flag? I'm on CentOS 7. Thanks.