Paolo Di Tommaso
@pditommaso
yes, indeed
the glob is expanded to the actual files
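A minimal sketch of that expansion (hypothetical path): each file matching the glob becomes a separate item in the channel.
// hypothetical directory; the glob resolves to the actual matching files
Channel.fromPath('data/*.fastq')
    .view()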
Daniel E Cook
@danielecook
This works:
import java.text.SimpleDateFormat

// parser for dates like 2018-12-10; parses the date pipeline parameter
date_parse = new SimpleDateFormat("yyyy-MM-dd")
date_filter = date_parse.parse(params.date)
Paolo Di Tommaso
@pditommaso
well, datetime is a mess in any programming lang
Maxime Garcia
@MaxUlysse
It is?? I'll go check my tests then, I just tried and it wasn't working
Paolo Di Tommaso
@pditommaso
well, actually I'm not sure :joy:
Maxime Garcia
@MaxUlysse
It works
I must have had another issue when I tried earlier
I'm guessing it'll work as well with s3://
Paolo Di Tommaso
@pditommaso
s3 can be traversed as a regular fs
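For instance (hypothetical bucket name), the same glob pattern works against an S3 path:
// hypothetical bucket; S3 objects are matched like files in a local directory
Channel.fromPath('s3://my-bucket/data/*.fastq')
    .view()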
Maxime Garcia
@MaxUlysse
Ok
thanks
Ashley S Doane
@DoaneAS
@rsuchecki thanks, yes pipeline is here: https://github.com/DoaneAS/realign.nf
Paolo Di Tommaso
@pditommaso
there could be something wrong in your script, NF is not supposed to use all that CPU
Daniel E Cook
@danielecook
@pditommaso agreed - but I'm curious why the groovy Date module doesn't work? Is it not imported?
Paolo Di Tommaso
@pditommaso
actually I'm not aware of it, therefore it is not imported :smile:
is there a groovy-date module?
Ashley S Doane
@DoaneAS
@rsuchecki pretty simple nextflow with only 1 process actually. Just takes a sampleindex.csv file with sample name, sample type (tumor or normal), and bam file path, and does a realignment with speedseq. The executor is sge, and the error seems to happen when nextflow is determining which results are cached.
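A minimal sketch of reading such a sample sheet with Nextflow's splitCsv operator, assuming hypothetical column names:
// hypothetical sampleindex.csv columns: sample, type, bam
Channel.fromPath('sampleindex.csv')
    .splitCsv(header: true)
    .map { row -> tuple(row.sample, row.type, file(row.bam)) }
    .view()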
Daniel E Cook
@danielecook
err maybe not - but the language spec suggests you should be able to do just Date.parse(format, input)
examples are also present here: http://groovy-lang.org/single-page-documentation.html; just curious, maybe I'm missing something here
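A minimal sketch of the call under discussion; note that since Groovy 2.5 the Date extension methods live in the optional groovy-dateutil module, so whether this works depends on what the runtime bundles.
// GDK static extension method on java.util.Date
def d = Date.parse('yyyy-MM-dd', '2018-12-10')
println d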
Paolo Di Tommaso
@pditommaso
can't check now, you may want to report an issue
Daniel E Cook
@danielecook
I can do that
thanks
Paolo Di Tommaso
@pditommaso
welcome
Maxime Garcia
@MaxUlysse
OK, so I made a mistake earlier, it is not working in fact
I'll make a minimal example and an issue
Sri Harsha Meghadri
@harshameghadri
Hey folks, I am pretty new to using docker and singularity. I want to use nf-core/rnaseq for my analysis. I consistently get errors while trying to execute this command: singularity pull --name nf-core-rnaseq-1.3.img docker://nf-core/rnaseq:1.3
Unable to pull docker://nf-core/rnaseq:1.3: conveyor failed to get: Error reading manifest 1.3 in docker.io/nf-core/rnaseq: errors:
I am trying to pull to rackham. My analysis needs to be executed on bianca. Any tips are super appreciated.
Maxime Garcia
@MaxUlysse
try singularity pull --name nf-core-rnaseq-1.3.img docker://nfcore/rnaseq:1.3 instead, without the - in nf-core
I'm guessing that if you have more questions about nf-core pipelines, it would be better to ask on our slack: https://nf-co.re/join
Sri Harsha Meghadri
@harshameghadri
I tried that as well, getting the same error @MaxUlysse
Maxime Garcia
@MaxUlysse
You tried that on bianca?
Sri Harsha Meghadri
@harshameghadri
nope on rackham, I guess it needs internet
Maxime Garcia
@MaxUlysse
Sure
Just trying to find an easy mistake, sorry ;-)
Have you tried running that on an interactive node?
Sri Harsha Meghadri
@harshameghadri
hmmm on rackham? I don't have an allocation there.
Maxime Garcia
@MaxUlysse
I'm afraid singularity might be too demanding on the regular login node
Let me see if I can help you in another way
Sri Harsha Meghadri
@harshameghadri
perfect, thank you Maxime.
Maxime Garcia
@MaxUlysse
@harshameghadri I messaged you ;-)
Marko Melnick
@Senorelegans
Is there any way to force a process to wait for another process to finish before it is started? I tried to do it with a dummy channel, but since I am concatenating files the channel counts change from one process to the next.
Maxime Garcia
@MaxUlysse
can't you use the output from the process that needs to be finished as an input for the other?
maybe with a .collect() to be sure to catch multiple executions of said process
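A minimal sketch of that pattern, assuming DSL1 syntax and hypothetical process names; the second process starts only once every task of the first has finished:
process first_step {
    output:
    file 'done.txt' into done_ch

    """
    touch done.txt
    """
}

process second_step {
    input:
    file flags from done_ch.collect()  // blocks until all first_step tasks complete

    """
    echo 'all first_step tasks are finished'
    """
}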
Marko Melnick
@Senorelegans
I am concatenating fastq files by groups (I made a parser in groovy to separate by group). Is there no way to just make one process wait for another one with a dummy channel or some null variable?
I guess my real issue is that I am reading channels from pairs earlier, and I have a list of the file names with their condition group in a sample table that I can read, but I am struggling to bring them together and operate on them by group.
Stijn van Dongen
@micans
@Senorelegans can you make a functioning toy example that illustrates your issue? Grouping is usually done by groupKey -> transpose -> groupTuple; e.g.
Channel.from(['a', [1, 2, 3]], ['b', [4, 5]], ['c', [6, 7, 8]])
    .map { tag, stuff -> tuple( groupKey(tag, stuff.size()), stuff ) }  // record each group's size in its key
    .view()
    .transpose()                              // emit one item per element: [a, 1], [a, 2], ...
    .map { tag, num -> [tag, num*num + 1] }   // per-item work
    .view()
    .groupTuple()                             // regroup; the groupKey size lets each group emit as soon as it is complete
    .view()
Ashley S Doane
@DoaneAS
@rsuchecki any ideas on this issue with java CPU use when nextflow is reading the results cache following nextflow -resume? I'm running nextflow with plenty of available resources (192 CPUs, 10 TB RAM), but I'll try submitting the nextflow command as a job; that way SGE will kill it if CPU usage is too high.