Is it possible to create files with the native execution mode of a process? For example, I attempted the following:
process WRITE_FASTP_METRICS {

    input:
    val(rna_result)
    val(adt_result)

    output:
    path "fastp_metrics.csv"

    exec:
    write_out = file("fastp_metrics.csv")
    rna_result.forEach { key, value ->
        write_out << key << ',' << value << '\n'
    }
    adt_result.forEach { key, value ->
        write_out << key << ',' << value << '\n'
    }
}
But the fastp_metrics.csv is not created in the work directory, causing this error: Missing output file(s) `fastp_metrics.csv` expected by process `WRITE_FASTP_METRICS (1)`
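A likely fix, based on how exec: tasks resolve paths: file("fastp_metrics.csv") points at the launch directory, while Nextflow looks for declared outputs inside the task work directory, which is reachable as task.workDir. A minimal sketch keeping the names from the question:

process WRITE_FASTP_METRICS {

    input:
    val(rna_result)
    val(adt_result)

    output:
    path "fastp_metrics.csv"

    exec:
    // resolve against the task work directory, where the output is expected
    def write_out = task.workDir.resolve('fastp_metrics.csv')
    rna_result.each { key, value ->
        write_out << "${key},${value}\n"
    }
    adt_result.each { key, value ->
        write_out << "${key},${value}\n"
    }
}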
I'm wondering if we can pass in a container as a variable, as I want to test the same process over various versions of a tool. Something like this:
process A {
    container = container_label

    input:
    tuple val(container_label), path(inputFile)
    ...
}
This code did not work, however. Can it be done in another way?
process A {
    container = params.container_label

    input:
    path(inputFile)
    ...
}

process A {
    input:
    tuple val(container_label), path(inputFile)
    ...

    script:
    task.container = container_label
    ...
}
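Another option that may be worth trying (not confirmed in this thread): process directives accept dynamic values evaluated per task, so the container directive can reference a task input directly. A sketch reusing the names from the question:

process A {
    // dynamic directive: the closure is evaluated for each task,
    // after the input values are bound
    container { container_label }

    input:
    tuple val(container_label), path(inputFile)

    script:
    """
    echo "processing ${inputFile} inside ${container_label}"
    """
}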
process generate_readset {
    tag "$sample_id"
    cpus 48

    input:
    tuple val(read_name), val(chromosome1), val(chromosome2), val(cuteSV_pos1), val(cuteSV_pos2),
          val(sniffle_pos1), val(sniffle_pos2),
          path(cuteSV_vcf), path(sniffles_vcf) from vcf_input

    output:
    path 'complete_read_set.txt' into receiver

    script:
    """
    ${bcftools_1_11} view --threads ${task.cpus} $cuteSV_vcf -r chr$chromosome1:$cuteSV_pos1-$cuteSV_pos2 > complete.txt
    """
}
Remote resource not found: https://api.github.com/repos/PATH/TO/contents/main.nf. What am I doing wrong?
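That 404 usually means Nextflow could not find a main.nf at the repository root when pulling the project. If the entry script lives elsewhere or has a different name, it can be declared in the repository's nextflow.config; a minimal sketch, with the script path as a hypothetical placeholder:

manifest {
    // hypothetical location; point this at the pipeline's actual entry script
    mainScript = 'workflows/main.nf'
}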
Hey all,
I have an error related to Nextflow on Azure Batch. The first process, using a default Standard_D4_v3 VM, works fine, but for the second process I fail to request a larger VM (I set it via the queue directive, but apparently it is not working). Am I making some naive mistake?
Error executing process > 'secondprocess'
Caused by:
  Cannot find a VM for task 'secondprocess' matching this requirements: type=Standard_D4_v3, cpus=16, mem=14 GB, location=eastus
The config file I used:
process {
    executor = 'azurebatch'
}

docker {
    enabled = true
}

azure {
    batch {
        location = 'eastus'
        accountName = 'xxxbatch'
        accountKey = 'xxx'
        autoPoolMode = true
        allowPoolCreation = true
        deletePoolsOnCompletion = true
        deleteJobsOnCompletion = true
        pools {
            small {
                autoScale = true
                vmType = 'Standard_D4_v3'
                vmCount = 5
                maxVmCount = 50
            }
            large {
                autoScale = true
                vmType = 'Standard_D16_v3'
                vmCount = 5
                maxVmCount = 50
            }
        }
    }
    storage {
        accountName = "xxx"
        accountKey = "xxx"
    }
}

process {
    withName: firstprocess {
        queue = 'small'
    }
    withName: secondprocess {
        queue = 'large'
    }
}
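One thing that may be worth checking (an educated guess, not confirmed in this thread): with autoPoolMode = true Nextflow creates pools and picks VM types on its own, which can clash with the named pools selected via queue. Letting the pools block drive VM selection would look like:

azure {
    batch {
        // guess: disable automatic pool creation so the named pools
        // (and their vmType settings) are used for each queue
        autoPoolMode = false
        allowPoolCreation = true
        pools {
            large {
                vmType = 'Standard_D16_v3'
                vmCount = 5
            }
        }
    }
}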
I get Disk quota exceeded from nextflow run, despite specifying a path without a quota with -w (aka -work-dir). Any ideas? In this thread, Paolo suggests -w is the solution… https://groups.google.com/g/nextflow/c/401Tp_6H57k/m/va8ACNeTAQAJ
$ nextflow run nf-core/viralrecon -w /users/xxx/test --help
N E X T F L O W ~ version 21.04.0
Pulling nf-core/viralrecon ...
Disk quota exceeded
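Worth noting: -w only sets the task work directory, while the failure above happens during the pull, which writes the pipeline into $NXF_HOME (default ~/.nextflow). Pointing NXF_HOME at the filesystem without a quota may help; a sketch reusing the path from the command above:

$ export NXF_HOME=/users/xxx/test/.nextflow
$ nextflow run nf-core/viralrecon -w /users/xxx/test --help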
How can I use this as the 'size' part of a groupTuple? I tried:
aligned_bams.groupTuple(by: 0, size: lane_calc)
But it did not like it and complained about the value type. All thoughts gladly received!
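For what it's worth, size: expects a plain integer, so a per-key count usually goes through groupKey instead. A sketch under the assumption that a lane count can be looked up per sample (lanes_per_sample is an invented name):

// embed the expected group size in the key itself,
// so groupTuple emits each group as soon as it is complete
aligned_bams
    .map { sample, bam -> tuple( groupKey(sample, lanes_per_sample[sample]), bam ) }
    .groupTuple()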
.collect() the files from each tuple at the same time. Has anyone done this before? I've already read about each and combine, so my input channel is of the right format. The problem I have is .collect() itself: I am not sure how to incorporate it into the input tuple.
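Hard to be sure without the start of the message, but a common pattern when every task needs all the files from another channel is to collect that channel once and pass it as its own input. A sketch with invented names:

process USE_ALL {
    input:
    tuple val(sample_id), path(reads)
    path refs

    script:
    """
    echo "sample ${sample_id}: reads ${reads}, all refs: ${refs}"
    """
}

workflow {
    samples  = Channel.of( ['s1', file('s1.fq')], ['s2', file('s2.fq')] )
    // a value channel holding every file at once, re-used by each task
    all_refs = Channel.fromPath('refs/*.fa').collect()
    USE_ALL(samples, all_refs)
}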
Hi, in DSL2, can channels be passed into functions? I was hoping to clean up a workflow where a similar sequence of operations is applied to several channels, but it looks like channels passed into functions are demoted to plain Java collections.
The error reads:
No signature of method: java.util.LinkedHashMap.collectFile() is applicable for argument types: (LinkedHashMap, Script_77e02758$_collect_file_tuples_closure1) values: [[storeDir:null, sort:hash], Script_77e02758$_collect_file_tuples_closure1@54e2fe]
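For what it's worth, channels are ordinary objects in DSL2, so a def function can take one and keep applying operators to it; the LinkedHashMap in the trace suggests the value reaching collectFile was a map rather than a channel. A minimal sketch with invented names:

// a helper that receives a channel and returns a transformed channel
def cleanNames( ch ) {
    ch.map { it.toLowerCase() }
      .filter { it.size() > 3 }
}

workflow {
    cleanNames( Channel.of('Alpha', 'Bo', 'Gamma') ).view()
}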
process {
    withName: structural_alignment {
        if (task.exitStatus in 140..143)
            """
            errorStrategy = 'retry'
            cpus = { 2 * task.attempt }
            maxRetries = 5
            """
        else
            errorStrategy = 'retry'
            maxRetries = 10
    }
}
process {
    withName: structural_alignment {
        if (task.exitStatus in 140..143) {
            errorStrategy = 'retry'
            cpus = { 2 * task.attempt }
            maxRetries = 5
        }
        else {
            errorStrategy = 'retry'
            maxRetries = 10
        }
    }
}
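Neither version can work as written, though: task.exitStatus is only defined while an individual task is being retried, so it cannot drive an if/else that runs when the config file is parsed. Directives accept closures evaluated per task instead; a sketch of that idiom which tries to keep the intent above (scaling CPUs only after a 140-143 exit):

process {
    withName: structural_alignment {
        errorStrategy = 'retry'
        maxRetries = 10
        // the closure runs for every attempt; the exitStatus of the previous
        // attempt decides whether to scale up the CPU request
        cpus = { task.exitStatus in 140..143 ? 2 * task.attempt : 2 }
    }
}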
Hi guys.
I have a simple doubt (I think). I just need to save my process output in an external directory, so I'm using the publishDir directive for it. However, I need two things: i) use variables collected from the input tuple; ii) create the directory, because it doesn't exist.
I currently run it like this:
publishDir "${params.out_dir}/${tcga_project}/${tcga_barcode}/RSEM_quantification", mode: 'move'

input:
file STAR_bam_file from STAR_alignment_bam
set val(sample_UUID), val(tcga_barcode), val(tcga_project) from samples_ch_3
But it doesn't create the directory. So I tried switching to this:
outDir = file("${params.out_dir}/${tcga_project}/${tcga_barcode}/RSEM_quantification")
outDir.mkdirs()
publishDir outDir, mode: 'move'

input:
file STAR_bam_file from STAR_alignment_bam
set val(sample_UUID), val(tcga_barcode), val(tcga_project) from samples_ch_3
However, it fails with the error: No such variable: tcga_project
Any help with this situation? Thanks :)
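For the record, publishDir normally creates missing target directories itself, and its GString path is resolved per task after the inputs are bound, so the first version should be enough on its own. The second fails because outDir is evaluated outside the process, where tcga_project is not defined. Sticking with the directive form (names from the question):

// resolved once per task, and the target directory is
// created automatically if it does not exist
publishDir "${params.out_dir}/${tcga_project}/${tcga_barcode}/RSEM_quantification", mode: 'move'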
-profile docker at the command line, and instead I would like to have profile = docker in my nextflow.config file, such that it imports all of the config options specified by the docker profile. Thanks
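There is no profile setting that can be placed in the config file, but the contents of the docker profile can simply be promoted to the top level of nextflow.config so they always apply. A minimal sketch, assuming the profile only toggles Docker:

// equivalent of always running with a profile that just enables Docker
docker {
    enabled = true
}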