Qi ZHAO
@likelet
Since I found most of my pipeline users are running the pipeline on a local server instead of a well-configured cluster.
Phil Ewels
@ewels
Sure - we can add a profile that does this. Could be a good general thing to have for all pipelines.
Do you have a reliable way to find the number of cpus available?
Qi ZHAO
@likelet
Yes, I have done this before, but I deprecated it in a later version, as I thought there would be a more elegant way to do it. I will check the former code and bring those functions back to see whether they meet our requirements.
ava_mem = (double) Runtime.getRuntime().freeMemory()
ava_cpu = Runtime.getRuntime().availableProcessors()
Qi ZHAO
@likelet
What do you think of the Java method for doing this? Runtime also has the Runtime.getRuntime().totalMemory() system method to get total memory, and the same for CPUs
Phil Ewels
@ewels
@pditommaso have you done this before? Any recommendations?
Paolo Di Tommaso
@pditommaso
Runtime.runtime.availableProcessors() and Runtime.runtime.totalMemory() are the best way so far to fetch this info
maybe we could use a more idiomatic syntax; however, the biggest issue is that you are limited to the resources of the launching machine
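As a point of reference, here is a minimal sketch of what such a profile might look like in a nextflow.config (the profile name and param names are illustrative, not from any existing pipeline; note that totalMemory() reports the JVM's current allocation rather than the host's physical RAM):

profiles {
    // hypothetical profile that sizes jobs from the launching machine
    local_auto {
        params.max_cpus   = Runtime.runtime.availableProcessors()
        // totalMemory() is the JVM's allocated heap, not total host RAM,
        // so treat it as a rough lower bound
        params.max_memory = "${Runtime.runtime.totalMemory() >> 20} MB"
        process.cpus      = params.max_cpus
    }
}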
Hugues Fontenelle
@huguesfontenelle

Hello here :-)
By any chance, would anyone have a CircleCI version 2 config to share (instead of Travis)?
I can't get it to work with containerized processes...
My .circleci/config.yml looks like this:

version: 2
jobs:
    build:
        docker:
            - image: circleci/openjdk:8-jdk-stretch-node-browsers
        working_directory: ~/repo
        steps:
            - checkout
            - run:
                name: Install nextflow
                command: |
                    cd /tmp
                    wget -qO- https://get.nextflow.io | bash
                    chmod 777 nextflow
                    sudo ln -s /tmp/nextflow /usr/local/bin/nextflow
            - setup_remote_docker:
                docker_layer_caching: true
            - run:
                name: Testing nextflow
                command: |
                    chmod 775 ~/repo
                    cd ~/repo
                    nextflow -dockerize run -w ~/repo hello

and the failure message from CircleCI:

ERROR ~ .nextflow/history.lock (No such file or directory)

("project" at https://github.com/huguesfontenelle/nextflow-circleci )
Thanks :-)

Phil Ewels
@ewels
Sorry, we're pretty locked into Travis for our pipelines now.
We played with Circle a little in the early days (more memory available), but we struggled with the interface plus random errors quite a lot before giving up and going back to Travis
Phil Ewels
@ewels
What's nextflow -dockerize? I've never seen that option before
I'm not sure if setting -w ~/repo is a great idea either. Not sure what that will do. Any reason why you can't leave that as the default? (will end up being ~/repo/work/)
  -d, -dockerize
     Launch nextflow via Docker (experimental)
Cool - so this puts nextflow itself inside docker? Any reason why this is useful?
Evan Floden
@evanfloden
I think quite a few people use it. Some people want to put Nextflow itself in a container to be ultra-reproducible.
Phil Ewels
@ewels
ah ok, so that the underlying OS for the nextflow runtime is standardised?
I've never really thought about that before
Hugues Fontenelle
@huguesfontenelle
I'm not sure about -d or -w; these were suggestions from Paolo
He suggested -d after I tried Docker-in-Docker, where the bind paths were wrong: https://groups.google.com/forum/#!topic/nextflow/bL9lZpvRFPE
I don't fully understand it, since you still need to install nextflow to be able to run nextflow -dockerize
Hugues Fontenelle
@huguesfontenelle

OK I got it:

It's not possible to use volume mounting with the docker executor, but using the machine executor it's possible to mount local directories to your running Docker containers. You can learn more about the machine executor here on our docs page.
ref

and the machine executor has become a paid feature
Hugues Fontenelle
@huguesfontenelle
(the default machine: true works though)
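For anyone hitting the same wall, here is a minimal sketch of the config rewritten for the machine executor (untested sketch; the install step is carried over from the docker-executor config above, and machine: true picks the default image):

version: 2
jobs:
    build:
        machine: true        # machine executor, so volume mounts work
        working_directory: ~/repo
        steps:
            - checkout
            - run:
                name: Install nextflow
                command: |
                    cd /tmp
                    wget -qO- https://get.nextflow.io | bash
                    chmod +x nextflow
                    sudo ln -s /tmp/nextflow /usr/local/bin/nextflow
            - run:
                name: Testing nextflow
                command: |
                    cd ~/repo
                    nextflow run hello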
Phil Ewels
@ewels
nice!
marchoeppner
@marchoeppner
hi! got a question about the deepvariant pipeline. The documentation is missing information on where exactly the reference genomes come from - does anyone know, or do I have to start digging? ;)
marchoeppner
@marchoeppner
ah never mind, it's an S3 bucket...
Alexander Peltzer
@apeltzer
iGenomes, probably
arontommi
@arontommi
the Singularity image for deepvariant is not on Singularity Hub
Alexander Peltzer
@apeltzer
Yeah, saw it
just download it from Docker Hub instead
singularity pull --name nf-core-deepvariant.simg docker://nfcore/deepvariant
I guess this Singularity Hub thing either needs to be solved soon, or we need to get rid of the entire Singularity documentation on our side as well
Current situation is confusing
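One possible workaround sketch: Nextflow's Singularity engine can pull straight from a docker:// URI, so a config along these lines would sidestep Singularity Hub entirely (the container path reuses the image named above; the rest is illustrative):

singularity {
    enabled    = true
    autoMounts = true   // bind host paths into the container automatically
}
// pulled from Docker Hub and converted by Nextflow via `singularity pull`
process.container = 'docker://nfcore/deepvariant'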
arontommi
@arontommi

Thanks for addressing this.
The "problem" with building it is that it takes CPU, so building locally is not always a possibility. And if you are working on a "GDPR-proof" cluster, you don't have that option.

Of course it is easy to get around this by having an open cluster somewhere, but that means another layer for someone to set up.

Phil Ewels
@ewels
yes exactly
using Singularity Hub has been very problematic though, so it's not as easy a solution as we originally thought
but yes, leaving it half-hanging as a solution is not ideal
@apeltzer - did we ever try building on travis? I guess it takes too long?
Could be a good thing for Travis / GitHub Actions etc. to automatically build the Docker image, convert it to Singularity, and then push it to Singularity Hub perhaps.
As I think it's just the automatic builds on Singularity Hub which won't work.
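A hedged sketch of the build-and-convert half as Travis script steps (the image name nfcore/mypipeline is a placeholder, and this assumes Singularity 3.x is already installed on the worker; the push to Singularity Hub is the missing piece):

services:
    - docker
script:
    # build the Docker image from the pipeline's Dockerfile
    - docker build -t nfcore/mypipeline .
    # convert the local Docker image into a Singularity image
    - singularity build nf-core-mypipeline.simg docker-daemon://nfcore/mypipeline:latest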
Sven F.
@sven1103
the shub API does not allow pushing from remote, I think
we had that discussion before, if I remember correctly
Phil Ewels
@ewels
ah damn
Sven F.
@sven1103
that was the issue
so I voted for running our own registry
Phil Ewels
@ewels
ah right, and then I said no, we probably don't want our own hardware involved
Sven F.
@sven1103
or we get rid of singularity container registry support