Rad Suchecki
@rsuchecki
Paolo et al. will correct me if I'm wrong, but I think one possibility could be that the JVM heap space was running low, leading to excessive GC activity. Is the pipeline code available to view? @DoaneAS
Daniel E Cook
@danielecook
Anyone know why Date.parse doesn't work?
Maxime Garcia
@MaxUlysse
Hi @pditommaso, I have a follow-up question on @drpatelh's question here
I understand why we can't traverse a directory over http, but is it possible to have something like https://raw.githubusercontent.com/maxulysse/test-datasets/sarek/file{1,2}.ext become two files, https://raw.githubusercontent.com/maxulysse/test-datasets/sarek/file1.ext and https://raw.githubusercontent.com/maxulysse/test-datasets/sarek/file2.ext, as it is with a regular path?
Paolo Di Tommaso
@pditommaso
yes, indeed
the glob is expanded to the actual files
Daniel E Cook
@danielecook
This works:
import java.text.SimpleDateFormat
date_parse = new SimpleDateFormat("yyyy-MM-dd")
date_filter = date_parse.parse(params.date)
Paolo Di Tommaso
@pditommaso
well, datetime is a mess in any programming language
Maxime Garcia
@MaxUlysse
It is?? I'll go check my tests then, I just tried and it wasn't working
Paolo Di Tommaso
@pditommaso
well, actually I'm not sure :joy:
Maxime Garcia
@MaxUlysse
It works
I must have had another issue when I tried earlier
I'm guessing it'll work as well with s3://
Paolo Di Tommaso
@pditommaso
s3 can be traversed as a regular fs
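A minimal sketch of what that looks like in practice, assuming a hypothetical bucket and file names (the pattern syntax is the same as for local paths):

```groovy
// Hypothetical bucket and files, for illustration only.
// Glob and brace patterns on s3:// paths behave like local ones.
Channel
    .fromPath('s3://my-bucket/data/file{1,2}.ext')
    .view()
```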
Maxime Garcia
@MaxUlysse
Ok
thanks
Ashley S Doane
@DoaneAS
@rsuchecki thanks, yes pipeline is here: https://github.com/DoaneAS/realign.nf
Paolo Di Tommaso
@pditommaso
there could be something wrong in your script, NF is not supposed to use all that CPU
Daniel E Cook
@danielecook
@pditommaso agreed - but I'm curious why the groovy Date module doesn't work? Is it not imported?
Paolo Di Tommaso
@pditommaso
actually I'm not aware of it, so it's not imported :smile:
is there a groovy-date module?
Ashley S Doane
@DoaneAS
@rsuchecki pretty simple nextflow with only 1 process actually. Just takes a sampleindex.csv file with sample name, sample type (tumor or normal), and bam file path, and does a realignment with speedseq. The executor is sge, and the error seems to happen when nextflow is determining which results are cached.
Daniel E Cook
@danielecook
err, maybe not - but the language spec suggests you should be able to do just Date.parse(format, input)
examples are also present here: http://groovy-lang.org/single-page-documentation.html; just curious, maybe I'm missing something here
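For what it's worth, here is a plain-Java sketch of the SimpleDateFormat route used in the snippet above, showing the parse/format round trip; the date string is just an example value:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateParseDemo {
    public static void main(String[] args) throws ParseException {
        // Pattern letters follow java.text.SimpleDateFormat conventions.
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");

        // Parse an ISO-style date string into a Date object.
        Date d = fmt.parse("2019-12-09");

        // Format it back to confirm the round trip.
        System.out.println(fmt.format(d)); // prints 2019-12-09
    }
}
```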
Paolo Di Tommaso
@pditommaso
can't check now, you may want to report an issue
Daniel E Cook
@danielecook
I can do that
thanks
Paolo Di Tommaso
@pditommaso
welcome
Maxime Garcia
@MaxUlysse
OK, so I made a mistake earlier, it is not working in fact
I'll make a minimal example and an issue
Sri Harsha Meghadri
@harshameghadri
Hey folks, I am pretty new to using docker and singularity. I want to use nf-core/rnaseq for my analysis. I consistently get errors while trying to execute this command: singularity pull --name nf-core-rnaseq-1.3.img docker://nf-core/rnaseq:1.3
Unable to pull docker://nf-core/rnaseq:1.3: conveyor failed to get: Error reading manifest 1.3 in docker.io/nf-core/rnaseq: errors:
I am trying to pull on rackham. My analysis needs to be executed on bianca. Any tips are super appreciated.
Maxime Garcia
@MaxUlysse
try singularity pull --name nf-core-rnaseq-1.3.img docker://nfcore/rnaseq:1.3 instead, without the - in nf-core
I'm guessing if you have more questions about nf-core pipelines, it would be better to ask on our slack: https://nf-co.re/join
Sri Harsha Meghadri
@harshameghadri
I tried that as well and got the same error @MaxUlysse
Maxime Garcia
@MaxUlysse
You tried that on bianca?
Sri Harsha Meghadri
@harshameghadri
nope, on rackham. I guess it needs internet access
Maxime Garcia
@MaxUlysse
Sure
Just trying to find an easy mistake, sorry ;-)
Have you tried running that on an interactive node?
Sri Harsha Meghadri
@harshameghadri
hmmm, on rackham? I don't have an allocation there.
Maxime Garcia
@MaxUlysse
I'm afraid singularity might be too demanding on the regular login node
Let me see if I can help you in another way
Sri Harsha Meghadri
@harshameghadri
perfect, thank you Maxime.
Maxime Garcia
@MaxUlysse
@harshameghadri I messaged you ;-)
Marko Melnick
@Senorelegans
Is there any way to force a process to wait for another process to finish before it is started? I tried to do it with a dummy channel, but I am concatenating files, so the number of items in the channel changes from one process to the next.
Maxime Garcia
@MaxUlysse
can't you use the output from the process that needs to be finished as an input for the other?
maybe with a .collect() to be sure to catch multiple executions of said process
Marko Melnick
@Senorelegans
I am concatenating fastq files by group (I made a parser in groovy to separate them by group). Is there no way to just make one process wait for another one with a dummy channel or some null variable?
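The pattern Maxime describes can be sketched roughly like this, with hypothetical process names; the second process takes the collected output of the first as a single input, so it cannot start until every task of the first has finished, regardless of how many items each emits:

```groovy
// Hypothetical processes for illustration only.
process first {
    output:
    file 'done.txt' into done_ch

    """
    touch done.txt
    """
}

process second {
    input:
    // .collect() gathers the outputs of ALL 'first' tasks into one
    // item, so 'second' only runs once 'first' is entirely finished.
    file ready from done_ch.collect()

    """
    echo 'first has completely finished'
    """
}
```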