These are chat archives for nextflow-io/nextflow

30th
Oct 2018
Maxime HEBRARD
@mhebrard
Oct 30 2018 02:29
Hello. Is there a way to run a process sequentially? I have a list of files; I run a process in parallel on all my files (light computation), and after that I have a heavy process to run on each file. I feed the process with the list of files, but I wish Nextflow would wait until the first file finishes processing before the second task is launched
if I understand well, that behavior depends on the type of my input channel ... I tried to output a set, but all my tasks are still created at once
Maxime HEBRARD
@mhebrard
Oct 30 2018 02:43
#!/usr/bin/env nextflow

Channel.from('A', 'B', 'C')
  .set{chList}

process procLight {
  input: val char from chList
  output: set val(char), stdout into chLightOut
  "echo ${char}_${char}"
}

process procHeavy {
  input:
    val set from chLightOut
  output:
    stdout chHeavyOut
  """
  sleep 10s
  echo "${set[0]} => ${set[1]}"
  """
}

chHeavyOut.subscribe{println "out: $it"}
3 procLight are created instantly (as expected), then 3 procHeavy are created instantly (I wish to create 1 procHeavy at a time)
Rad Suchecki
@rsuchecki
Oct 30 2018 04:07
Well you could, for example, use the cpus directive by setting it, for the heavy process, to the number of logical cores available on your machine, e.g.
process procHeavy {
  cpus 8
  input:
One of the great things about NF is that this is something you normally don't need to worry about; just let NF deal with it. In what way is your heavy process heavy @mhebrard ?
Maxime HEBRARD
@mhebrard
Oct 30 2018 04:12
hmm I found a similar option
process procHeavy {
  maxForks 1
  input:

The process runs a software tool that allows defining the number of threads ... so I have

process procHeavy {
  maxForks 1
  script:
  """
  myParallelSoftware -Threads 8
  """

does that make sense?

Rad Suchecki
@rsuchecki
Oct 30 2018 04:16
looks fine
Maxime HEBRARD
@mhebrard
Oct 30 2018 04:17
like this I pass the number of threads as a Nextflow parameter and ensure that only 1 task is launched, but it uses the required number of threads
Maxime HEBRARD
@mhebrard
Oct 30 2018 04:23

oh but the cpus option might be better...

process procHeavy {
  cpus params.threads
  script: """ myParallelSoftware -Threads ${params.threads} """

like this I can use 1 task on 10 threads or 2 tasks on 5 threads each ...
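Putting the two ideas together, a minimal sketch (reusing the chat's `params.threads` parameter and hypothetical `myParallelSoftware` tool) might look like this; referencing `${task.cpus}` keeps the script in sync with the `cpus` directive, so the thread count is declared in exactly one place:

```nextflow
process procHeavy {
  // ask for this many cores per task; Nextflow limits how many
  // tasks run concurrently so the machine is not oversubscribed
  cpus params.threads

  input:
  set val(char), val(light) from chLightOut

  script:
  """
  myParallelSoftware -Threads ${task.cpus}
  """
}
```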

Rad Suchecki
@rsuchecki
Oct 30 2018 04:24
more generally, I opt for setting the required number of cpus for a process and let Nextflow deal with how many can be submitted at a time; this way, when I switch from a 2-core VM to a 20-core server or a 1000s-of-cores cluster, there is nothing I need to change in the pipeline
Maxime HEBRARD
@mhebrard
Oct 30 2018 04:24
yes I get that :)
Rad Suchecki
@rsuchecki
Oct 30 2018 04:25
:+1:
Maxime HEBRARD
@mhebrard
Oct 30 2018 04:26
the first time I ran my flow, Nextflow created 10 tasks that were configured to use 10 threads each ... my machine died lol
Rad Suchecki
@rsuchecki
Oct 30 2018 04:28
yep, so cpus directive should help :grin:
Winni Kretzschmar
@winni2k
Oct 30 2018 05:29
Hey, I just got the warning: WARN: The into operator should be used to connect two or more target channels -- consider to replace it with .set { pruned_for_traverse_ch }
It strikes me that the interface to nextflow could be simplified if into with one argument was made to work like set?
Pierre Lindenbaum
@lindenb
Oct 30 2018 07:52
Hi all, a dev question. I'm currently working with a server where SLURM has been wrapped into another system ( https://www.vi-hps.org/upload/material/tw09/vi-hps-tw09-Curie_Info.pdf cf. ccc_msub ). So I've started writing a subclass of SlurmExecutor (although I don't know much about groovy and I don't have a clear view of the nextflow code). So far: https://github.com/lindenb/nextflow/blob/1c60a744a3c3cd2f5f63f186d88a5abeba763476/src/main/groovy/nextflow/executor/CngExecutor.groovy . Ok, now I need to set a project name on the ccc_msub command line. From an executor method List<String> getDirectives(TaskRun task, List<String> result), is there a way to access the general config -c of the workflow? Or is there a simple way to add a new property to a process? Thanks!
Paolo Di Tommaso
@pditommaso
Oct 30 2018 08:31
hi @lindenb, groovy is essentially Java without semicolons :smile:
regarding the project-name, for that I would just use clusterOptions
alternatively, you can define custom executor settings in the config file as
executor {
  foo = 'hello'
}
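As a sketch, the clusterOptions route mentioned above would pass the project flag straight through to the underlying submit command. The process name, the `-A` flag, and the project name here are all assumptions; the actual flag depends on what ccc_msub accepts:

```nextflow
process heavyJob {
  executor 'cng'
  // raw options appended verbatim to the submission command line;
  // replace '-A my_project' with whatever ccc_msub expects
  clusterOptions '-A my_project'

  script:
  """
  echo running
  """
}
```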
Bioninbo
@Bioninbo
Oct 30 2018 08:34
Hello everyone. Sometimes I get an error but without the path to the failing folder. It can be tough to solve the problem in this case. Is there a workaround?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 08:35
and retrieve it/them using session.getExecConfigProp('cng','foo', defaultValue), see here.
@Bioninbo test case and complete error message, even better as GitHub issue
@winni2k you are right, there's already an enhancement proposal for that nextflow-io/nextflow#831
Winni Kretzschmar
@winni2k
Oct 30 2018 08:39
Right-o!
Paolo Di Tommaso
@pditommaso
Oct 30 2018 08:39
:smile:
Winni Kretzschmar
@winni2k
Oct 30 2018 08:39
:D
Bioninbo
@Bioninbo
Oct 30 2018 08:44
I see, thanks Paolo. Error message is: Process `merge_pdfs` input file name collision -- There are multiple input files for each of the following file names: ... I will see if I can make a test case
Maxime Garcia
@MaxUlysse
Oct 30 2018 09:02
@Bioninbo I've seen a similar one; most likely one of your processes takes as input several files, some of which have the same name
Winni Kretzschmar
@winni2k
Oct 30 2018 09:03
Does anyone unit test parts of their pipelines?
Bioninbo
@Bioninbo
Oct 30 2018 09:03
I see thanks @MaxUlysse
Winni Kretzschmar
@winni2k
Oct 30 2018 09:03
So far all I have are end-to-end tests, but the turn-around time for my tests is getting a bit long
Any ideas?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 09:18
if you want to test your task commands, save them as separate bash scripts, and then test individually
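For instance, a task command kept in its own script can be exercised directly, without launching the pipeline. The script and file names below are made up for illustration:

```shell
# hypothetical task command saved as a standalone script
cat > count_lines.sh <<'EOF'
#!/bin/bash
set -euo pipefail
wc -l < "$1"
EOF
chmod +x count_lines.sh

# quick unit test against a tiny fixture, no pipeline run needed
printf 'a\nb\nc\n' > fixture.txt
./count_lines.sh fixture.txt
```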
Pierre Lindenbaum
@lindenb
Oct 30 2018 09:22
@pditommaso thanks, i'll try
Paolo Di Tommaso
@pditommaso
Oct 30 2018 09:22
:+1:
Christopher Mohr
@christopher-mohr
Oct 30 2018 12:38
Hi, I would like to split a file (TSV file) from a source channel into multiple files and tried to use "splitText". However, as far as I can see there is currently no option to keep the header in each file chunk. Am I missing something here?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:11
I think you are right, you may want to open an enhancement request on GitHub
Christopher Mohr
@christopher-mohr
Oct 30 2018 13:16
Sure I will do that, thanks! So I assume at the moment the easiest solution would be to have a process that does the splitting?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:18
I guess so
Christopher Mohr
@christopher-mohr
Oct 30 2018 13:19
Ok thank you
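Until such an option exists, a splitting process could keep the header itself. A sketch of the splitting command (chunk size and file names are arbitrary):

```shell
# example TSV with a header row
printf 'id\tvalue\nA\t1\nB\t2\nC\t3\n' > input.tsv

# split the body into 2-line chunks, then prepend the header to each
head -n 1 input.tsv > header.txt
tail -n +2 input.tsv | split -l 2 - body_
for f in body_*; do
  cat header.txt "$f" > "chunk_${f#body_}.tsv"
  rm "$f"
done
# chunk_aa.tsv and chunk_ab.tsv now each start with the header line
```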
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:25

@pditommaso thanks Paolo, I think I've got something working with my custom executor. However, I keep getting a It appears you have never run this project before -- Option `-resume` is ignored when I re-run a workflow without a specific executor

[lindenbp@cobalt172 NEXTFLOW]$ java -jar nextflow-0.33.0-SNAPSHOT-all.jar run -resume test02.nf 
N E X T F L O W  ~  version 0.33.0-SNAPSHOT
Launching `test02.nf` [cheeky_curie] - revision: 6a63965461
WARN: It appears you have never run this project before -- Option `-resume` is ignored
[warm up] executor > local
[70/7051d2] Submitted process > splitLetters (run 10 seconds)
[b1/72e86b] Submitted process > splitLetters (run 30 seconds)
[37/e41cc0] Submitted process > splitLetters (run 40 seconds)
[29/af717e] Submitted process > splitLetters (run 20 seconds)
[62/86134a] Submitted process > p2
[lindenbp@cobalt172 NEXTFLOW]$ java -jar nextflow-0.33.0-SNAPSHOT-all.jar run -resume test02.nf 
N E X T F L O W  ~  version 0.33.0-SNAPSHOT
Launching `test02.nf` [nasty_lamarr] - revision: 6a63965461
WARN: It appears you have never run this project before -- Option `-resume` is ignored
[warm up] executor > local
[79/11a274] Submitted process > splitLetters (run 30 seconds)
[f5/7b53bc] Submitted process > splitLetters (run 10 seconds)
[9d/587caa] Submitted process > splitLetters (run 20 seconds)
[0c/95a24a] Submitted process > splitLetters (run 40 seconds)
[61/ee04e9] Submitted process > p2

with test02.nf:

secs = Channel.from(10,20,30,40)

process splitLetters {
    tag "run ${sec} seconds"
    input:
    val sec from secs
    output:
        file("jeter${sec}.txt") into out1
    script:
    """
    sleep ${sec} && echo "Hello ${sec}" > jeter${sec}.txt
    """
}

process p2 {
    input:
    val files  from out1.collect()
    output:
    file("done.txt") into out2
    script:
    """
    echo "${files}" > done.txt
    """
   }

The files were created.

Do you have any idea of the cause of this ?

Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:27
well, that is expected if you specify -resume and you have never run it before
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:32
no, look at my example, I ran it twice
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:33
weird, I don't see how it can be related to the custom executor
are you deleting the .nextflow directory before/after each run ?
Maxime Garcia
@MaxUlysse
Oct 30 2018 13:34
Do you have any particular thing in your config file like process.scratch = true?
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:34
I'm going to test it with the 'master' branch...
(yes, I deleted the .nextflow directory after I saw this failing)
there is no config file in my example
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:35
if you delete the .nextflow dir it's expected, because resume info are kept there
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:41

ok, I've switched to the master branch, recompiled (make pack), removed the .nextflow directory and re-ran twice:

$ rm -rf .nextflow && java -jar nextflow-0.33.0-SNAPSHOT-all.jar run -resume test02.nf   && java -jar nextflow-0.33.0-SNAPSHOT-all.jar run -resume test02.nf
N E X T F L O W  ~  version 0.33.0-SNAPSHOT
Launching `test02.nf` [boring_lovelace] - revision: 6a63965461
WARN: It appears you have never run this project before -- Option `-resume` is ignored
[warm up] executor > local
[cd/0f1542] Submitted process > splitLetters (run 40 seconds)
[c3/a7b371] Submitted process > splitLetters (run 20 seconds)
[64/549f1d] Submitted process > splitLetters (run 30 seconds)
[f0/f37d20] Submitted process > splitLetters (run 10 seconds)
[e2/ee8837] Submitted process > p2
N E X T F L O W  ~  version 0.33.0-SNAPSHOT
Launching `test02.nf` [kickass_goldberg] - revision: 6a63965461
WARN: It appears you have never run this project before -- Option `-resume` is ignored
[warm up] executor > local
[87/98ad98] Submitted process > splitLetters (run 20 seconds)
[84/6ed207] Submitted process > splitLetters (run 30 seconds)
[13/5046d6] Submitted process > splitLetters (run 10 seconds)
[3f/ca3b6f] Submitted process > splitLetters (run 40 seconds)
[ba/855f31] Submitted process > p2

could it be a 'weird' filesystem? can I enable the debug log from the command line?

Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:43
the log debug is enabled by default, check the .nextflow.log file
try also the stock nextflow, I mean the nextflow command instead of your build
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:44
Oct-30 14:39:17.004 [main] DEBUG nextflow.cli.Launcher - $> null
Oct-30 14:39:17.167 [main] INFO  nextflow.cli.CmdRun - N E X T F L O W  ~  version 0.33.0-SNAPSHOT
Oct-30 14:39:17.181 [main] INFO  nextflow.cli.CmdRun - Launching `test02.nf` [kickass_goldberg] - revision: 6a63965461
Oct-30 14:39:17.229 [main] WARN  nextflow.config.ConfigBuilder - It appears you have never run this project before -- Option `-resume` is ignored
Oct-30 14:39:17.266 [main] DEBUG nextflow.Session - Session uuid: 16dab013-cb2e-448e-9680-98b900bd41e6
Oct-30 14:39:17.266 [main] DEBUG nextflow.Session - Run name: kickass_goldberg
Oct-30 14:39:17.269 [main] DEBUG nextflow.Session - Executor pool size: 56
Oct-30 14:39:17.296 [main] DEBUG nextflow.cli.CmdRun - 
  Version: 0.33.0-SNAPSHOT build 4906
  Modified: 29-10-2018 15:14 UTC (16:14 CEST)
  System: Linux 3.10.0-693.37.4.el7.x86_64
  Runtime: Groovy 2.5.1 on Java HotSpot(TM) 64-Bit Server VM 1.8.0_192-b12
  Encoding: UTF-8 (UTF-8)
  Process: 18951@cobalt172 [10.128.3.72]
  CPUs: 56 - Mem: 125.7 GB (43.8 GB) - Swap: 0 (0)
Oct-30 14:39:17.361 [main] DEBUG nextflow.Session - Work-dir: /ccc/scratch/cont007/fg0073/lindenbp/NEXTFLOW/work [lustre]
Oct-30 14:39:17.362 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /ccc/scratch/cont007/fg0073/lindenbp/NEXTFLOW/bin
Oct-30 14:39:17.621 [main] DEBUG nextflow.Session - Session start invoked
Oct-30 14:39:17.626 [main] DEBUG nextflow.processor.TaskDispatcher - Dispatcher > start
Oct-30 14:39:17.627 [main] DEBUG nextflow.script.ScriptRunner - > Script parsing
Oct-30 14:39:18.133 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution
Oct-30 14:39:18.234 [main] DEBUG nextflow.processor.ProcessFactory - Discovered executor class: nextflow.executor.IgExecutor
Oct-30 14:39:18.301 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: null
Oct-30 14:39:18.301 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'local'
Oct-30 14:39:18.307 [main] DEBUG nextflow.executor.Executor - Initializing executor: local
Oct-30 14:39:18.309 [main] INFO  nextflow.executor.Executor - [warm up] executor > local
Oct-30 14:39:18.315 [main] DEBUG n.processor.LocalPollingMonitor - Creating local task monitor for executor 'local' > cpus=56; memory=125.7 GB; capacity=56; pollInterval=100ms; dumpInterval=5m
Oct-30 14:39:18.318 [main] DEBUG nextflow.processor.TaskDispatcher - Starting monitor: LocalPollingMonitor
Oct-30 14:39:18.319 [main] DEBUG n.processor.TaskPollingMonitor - >>> barrier register (monitor: local)
Oct-30 14:39:18.320 [main] DEBUG nextflow.executor.Executor - Invoke register for executor: local
Oct-30 14:39:18.378 [main] DEBUG nextflow.Session - >>> barrier register (process: splitLetters)
Oct-30 14:39:18.382 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > splitLetters -- maxForks: 56
Oct-30 14:39:18.438 [main] DEBUG nextflow.processor.ProcessFactory - << taskConfig executor: null
Oct-30 14:39:18.438 [main] DEBUG nextflow.processor.ProcessFactory - >> processorType: 'local'
Oct-30 14:39:18.439 [main] DEBUG nextflow.executor.Executor - Initializing executor: local
Oct-30 14:39:18.440 [main] DEBUG nextflow.Session - >>> barrier register (process: p2)
Oct-30 14:39:18.440 [main] DEBUG nextflow.processor.TaskProcessor - Creating operator > p2 -- maxForks: 56
Oct-30 14:39:18.441 [main] DEBUG nextflow.script.ScriptRunner - > Await termination 
Oct-30 14:39:18.441 [main] DEBUG nextflow.Session - Session await
Oct-30 14:39:18.542 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Oct-30 14:39:18.547 [Task submitter] INFO  nextflow.Session - [87/98ad98] Submitted process > splitLetters (run 20 seconds)
Oct-30 14:39:18.558 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler - Launch cmd line: /bin/bash -ue .command.run
Oct-30 14:39:18.558 [Task submitter] INFO  nextflow.Session - [84/6ed207] Submitted process > splitLetters (run 30 seconds)
Oct-30 14:39:18.564 [Task submitter] DEBUG nextflow.executor.LocalTaskHandler -
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:47
doesn't help much; try with the standard nextflow command
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:48

try also the stock nextflow, I mean the nextflow command instead of your build

that would be too complicated for now: my server is isolated from the internet, I can only scp things from one server to another. I tried to copy the gradle cache and run with --offline but it doesn't work (still trying to connect, some gradle files are missing, etc.). I don't want to accumulate technical problems.

anyway, I'm going to sleep on this and try later... :-)
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:49
in Spain we are supposed to have a siesta, not in France :smile:
for future reference, you can download the self-contained package from the github release page
the nextflow-xxx-all file, download it, chmod +x and run it
Pierre Lindenbaum
@lindenb
Oct 30 2018 13:50
:-)
Maxime Garcia
@MaxUlysse
Oct 30 2018 13:50
definitely useful, I had the same problem
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:50
have a good nap ! :wink:
Maxime Garcia
@MaxUlysse
Oct 30 2018 13:51
Someone should mention that more in the docs
Paolo Di Tommaso
@pditommaso
Oct 30 2018 13:52
you are never happy folks :satisfied:
Maxime Garcia
@MaxUlysse
Oct 30 2018 13:55
French people :-D
Paolo Di Tommaso
@pditommaso
Oct 30 2018 14:02
well, fortunately still no complaints :satisfied:
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 15:24

Hi, are the following examples equivalent? Or is there any reason why I should prefer one over the other?
example1:

if(params.flag == 'go' ){
   process myprocess {
      ...
   }
}

example2:

process myprocess {
   ...
   when: 
   params.flag == 'go'
   ...
}
micans
@micans
Oct 30 2018 15:29
In the first case you need to set up empty channels in some circumstances (sorry, not 100% clear which those are). The second case is more generic NF programming, as I understand it. I had a similar question recently; I wondered if the first idiom could be combined with some sort of scoping mechanism for channels. I actually use both idioms, but I'll defer to more knowledgeable minds.
Paolo Di Tommaso
@pditommaso
Oct 30 2018 15:40
the second is generally easier
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 15:45
thanks @micans and @pditommaso :)
Paolo Di Tommaso
@pditommaso
Oct 30 2018 15:46
:+1:
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:02
bigger question now.. I do not understand how a process can collect all files from all instances of the previous process.
For example here process "mapping" creates a bam for each sample. How can I write process "multisample_variantcalling" to obtain one single vcf file from all bam files together?
process mapping {
   input: 
   set val(sample), file(reads) from samplelist

   output: 
   file "${sample}.bam" into ch_bam
   file "${sample}.bam.bai" into ch_bambai

   """
   bwa mem ${bwaindex} ${reads} | samtools view -bS - > ${sample}.bam
   samtools index ${sample}.bam
   """
}

process multisample_variantcalling {
   input: 
   ???
   output: 
   file "multisample.vcf" into ch_multisamplevcf

   """
   ???
   # should be something that will be translated into this:
   # freebayes -b sample1.bam -b sample2.bam -b sample3.bam -f ${genome} -v multisample.vcf
   """
}
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:25

uh.. I found, from this Google Groups thread: https://groups.google.com/forum/#!topic/nextflow/MhK7boM2c1Y , that I should use toList()

process multisample_variantcalling {
   input: 
   file(bamlist) from ch_bam.toList()
   file(bambailist) from ch_bambai.toList()

   output: 
   file "multisample.vcf" into ch_multisamplevcf

   """
   ???
   # should be something that will be translated into this:
   # freebayes -b sample1.bam -b sample2.bam -b sample3.bam -f ${genome} -v multisample.vcf
   """
}

But I still have some problems manipulating the list from ['sample1.bam', 'sample2.bam', 'sample3.bam'] into "-b sample1.bam -b sample2.bam -b sample3.bam"

micans
@micans
Oct 30 2018 16:26
You can use collect() on a Channel to get all the files from the channel. Is that what you want?
E.g. file ('featureCounts/*') from ch_multiqc_fc.collect()
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:28
thanks @micans I just noticed the "toList()" function too; I see it works the same way as "collect()". But I still have some problems manipulating the list (see my post above)
micans
@micans
Oct 30 2018 16:29
Aha, the software actually uses -b file1 -b file2 -b file3 I see.
Mmm. I wonder if there is a shell trick, similar to echo -b{a,b,c} that you could use
micans
@micans
Oct 30 2018 16:36

Say,

foo=$(echo a b c)
echo -b ${foo// / -b }

(and use foo="$bamlist") ... but I'm not sure it will work, apart from being hideous.
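Cleaned up a little, the substitution trick reads like this (file names are placeholders). It relies on bash's `${var//pattern/replacement}` expansion, which replaces every space in the list with " -b ", so it won't work in plain POSIX sh:

```shell
# space-separated list of files, as it would arrive from "$bamlist"
files="sample1.bam sample2.bam sample3.bam"

# prefix the first word, then turn every remaining space into " -b "
args="-b ${files// / -b }"
echo "$args"   # → -b sample1.bam -b sample2.bam -b sample3.bam
```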

Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:39
Yes I could do that in the bash script that way, thanks!
I'm just wondering if there is a better, cleaner solution in groovy to manipulate it
micans
@micans
Oct 30 2018 16:40
I'm testing this, there is some issue with evaluation, won't work as is
and I agree, groovy solution needed :-)
bc-3,svd/test, list="foo bar zut tim boo"
bc-3,svd/test, ./t.sh $(echo -m ${list// / -m })
This seems to work; don't crucify me please whether it does or doesn't
It's the worst thing I've written for years.
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:44
lol ... thank you for your effort but it's really awkward XD I'll try to find a safer and cleaner one in groovy :P
micans
@micans
Oct 30 2018 16:44
:+1:
With a map operator you can construct the string easily, the thing that stumps me right now is that the things in your channel will be considered files rather than strings.
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:47
here it is!
a=["foo","bar","zut"].collect{ "-b " + it}.join()
uhm.. but how I can use it in the process ?
micans
@micans
Oct 30 2018 16:51
$a, or perhaps $(echo $a), in the script section.
I mean \$(echo $a)
NF will interpret the last $, and bash the first $
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:53
I need some tests here, because I still need file(bamlist) from ch_bam.toList(); if I change it from file to val I think I will lose the links to the bam files. But if I leave it as "file" I suppose I cannot manipulate the object with "collect". I'll need to run some tests.
micans
@micans
Oct 30 2018 16:53
Yep that was my concern.
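For the record, the inputs can stay as file inputs (so the bams are still staged into the work directory) and be turned into a flag string with Groovy inside the script block. A sketch, reusing the freebayes command from the earlier message and assuming `genome` is defined elsewhere in the pipeline:

```nextflow
process multisample_variantcalling {
  input:
  file(bamlist) from ch_bam.toList()
  file(bambailist) from ch_bambai.toList()

  output:
  file "multisample.vcf" into ch_multisamplevcf

  script:
  // bamlist is a Groovy list of staged files, so its entries can be
  // mapped to "-b name.bam" strings and joined before the shell block
  def bamArgs = bamlist.collect { "-b ${it}" }.join(' ')
  """
  freebayes ${bamArgs} -f ${genome} -v multisample.vcf
  """
}
```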
Riccardo Giannico
@giannicorik_twitter
Oct 30 2018 16:54
thank you very much @micans , I'll run some tests tomorrow (my working day is over :P )
micans
@micans
Oct 30 2018 16:55
Sorry, I was more entertaining, I hope, than helpful :grin: have a nice evening!
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 16:59
When you use the conda directive to create an environment with NextFlow, how do you invoke python in the script block? #!python?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:08
shebang is your friend
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:10
seems like #!python works
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:11
well, maybe you are lucky :)
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:12
How would /usr/bin/env work though when the conda env is created in work/conda/?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:12
conda is completely unrelated here
the mechanism is the same for conda, container, modules or whatever
it's just plain PATH env var
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:13
Huh, interesting...
I don't know enough about Unix
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:14
well, I think only Linus can claim to fully know it ..
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:16
wait, so you would write #!/usr/bin/python to use the python of the conda env?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:16
#!/usr/bin/env python
the command is /usr/bin/env, to which you pass python as the argument
it returns the path where python is
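The lookup can be seen outside Nextflow too: env simply searches PATH for its first argument and runs it, which is why the same shebang finds the conda env's python once the env's bin directory is prepended to PATH. The `fakebin`/`mytool` names below are made up to demonstrate the mechanism:

```shell
# /usr/bin/env resolves its argument through PATH and execs it
/usr/bin/env echo "resolved via PATH"

# prepending a directory to PATH changes which binary wins --
# the same thing activating a conda env does for python
mkdir -p fakebin
printf '#!/bin/sh\necho from-fakebin\n' > fakebin/mytool
chmod +x fakebin/mytool
PATH="$PWD/fakebin:$PATH" /usr/bin/env mytool
```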
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:17
aaaah ok, that definitely works
thanks so much!
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:17
welcome
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:28
Wow, either Nextflow or Conda is really smart when it comes to conda envs. I just defined two separate processes which have the exact same environment and Nextflow didn't create the same env twice
:smile:
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:29
:sunglasses:
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:30
Is NextFlow just scanning all the conda directives and deduplicating them?
Paolo Di Tommaso
@pditommaso
Oct 30 2018 17:31
hashing is the magic for everything
Tobias "Tobi" Schraink
@tobsecret
Oct 30 2018 17:35
:ok_hand: sick