    Paolo Di Tommaso
    @pditommaso
    does the basic https://github.com/nextflow-io/hello work?
    Vlad Kiselev
    @wikiselev
    yes, just tried it on Tower and it succeeded. however, no .command.* files in the work directories again...
    Vlad Kiselev
    @wikiselev
    and nothing suspicious in the log file...
    Paolo Di Tommaso
    @pditommaso
    ummm, scroll down the runs page to the tasks table
    click on one task for the hello run
    when the task dialog opens, click on the execution logs tab
    then download the 1) task stdout, 2) task stderr, 3) task log and upload them here
    Vlad Kiselev
    @wikiselev
    thanks for your help, Paolo! I followed your instructions and downloaded task-1.command.out.txt, which had Bonjour world! inside; task-1.command.err.txt was empty, and when I tried to download task-1.command.log.txt I got an "Unable to download file: .command.log" message on the website. I've also checked the corresponding work directory here: /fusion/s3/my-bucket/scratch/5DQRTBKJsIIay1/bf/71c55281bdbbb44ead372c5acf3746 and it was empty.
    Paolo Di Tommaso
    @pditommaso
    I suspect this happens because the job role does not have enough permissions to write to that bucket
    let's follow up tomorrow
    Vlad Kiselev
    @wikiselev
    Hi Paolo, thanks! I've used this policy - https://github.com/seqeralabs/nf-tower-aws/blob/4aa2b6f913928cdf5ac9a270022b80d306d56b18/forge/forge-policy.json#L59, which only mentions s3:get and s3:list. Is that the correct one?
    though, looks like I completely missed this section... https://help.tower.nf/compute-envs/aws-batch/#access-to-s3-buckets
    Ok, let me try
    Vlad Kiselev
    @wikiselev
    haha, trying to add that S3 policy to the tower user - now AWS complains that I exceed the 2048-character limit for inline policies for the tower user...
    Vlad Kiselev
    @wikiselev
    I ended up creating a user group (it has a larger character limit on policies) and added both https://github.com/seqeralabs/nf-tower-aws/blob/master/forge/forge-policy.json (compute) and https://github.com/seqeralabs/nf-tower-aws/blob/master/launch/s3-bucket-write.json (s3 access) policies to that group. Then I added my tower user to that group and reran the Hello world pipeline, but it's still the same: the pipeline finishes without problems, yet there are no .command.* files in the work directories. Am I missing anything?
    Danilo Imparato
    @daniloimparato
    👆 I wonder if it has anything to do with this: https://github.com/seqeralabs/nf-tower/issues/327#issuecomment-956339788
    bioinfo
    @bioinfo:matrix.org
    [m]
    I wonder, could I deploy nf-tower on my local PC and skip the SMTP auth to log in?
    if that's OK, is there any guide / link for the deployment? Thanks
    Vlad Kiselev
    @wikiselev
    What is the main tactic for handling a spot instance restart (after it has been terminated by AWS)? It looks like there is no exit code when this happens... Shall I just set maxRetries to some reasonable value for all of the processes by default? (At the moment my retry strategy depends on the exit code, but as there is no exit code provided by AWS, it should apply regardless of the reason.)
    Paolo Di Tommaso
    @pditommaso
    add in your config
    process.errorStrategy = 'retry' 
    process.maxRetries = 5 // or more
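    A minimal sketch of the same config in block form; the dynamic memory line is an optional extra and the 8 GB baseline is only illustrative, in case retried attempts should also request more resources:
        process {
            errorStrategy = 'retry'                   // retry on any failure; spot reclaims may report no exit code
            maxRetries    = 5
            memory        = { 8.GB * task.attempt }   // illustrative baseline that grows with each attempt
        }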
    Vlad Kiselev
    @wikiselev
    beautiful, many thanks, Paolo!
    kkerns85
    @kkerns85
    Hello to the Nextflow Community! What is the best way to troubleshoot an issue with launching a Nextflow workflow from Tower using AWS Batch? My workflow worked great until recently, when I tried to rerun it. There is likely an issue on the AWS side, with my jobs stuck in RUNNABLE status. I have exhausted all of my resources and looked at every option to resolve this issue from AWS, Stack Overflow, etc. All my environments are healthy and functional. I migrated to Nextflow Tower thinking this would bypass or resolve my issues, but they still persist. I don't know if this is the correct place to post this, but I am desperate for help now. Thank you in advance!
    Graham Wright
    @gwright99
    Hello @kkerns85 , in my experience troubleshooting this kind of problem is "part art, part science" given the number of inter-connecting factors. We'd be happy to assist if you open an issue at https://github.com/seqeralabs/nf-tower/issues and can provide more details on your setup.
    One initial suggestion I have for you is to check the underlying ECS clusters once your Jobs become Runnable - are Worker instances able to join the cluster(s), or does the membership count remain at 0?
    4 replies
    Phil Ewels
    @ewels
    Regarding nf-core pipelines, see the docs here: https://nf-co.re/developers/adding_pipelines (basically, join slack and tell us about it in the #new-pipelines channel)
    1 reply
    Will Fondrie
    @wfondrie
    Hi all - we're launching NF Tower actions programmatically. Our workflow runs often share identical parameters for the initial processes, but differ in the parameters used at later ones. Is there a way to always use the -resume option in NF Tower, so that the outputs from these initial processes can be reused? Thanks!
    21 replies
    Kathleen Keough
    @keoughkath_twitter
    Hi all, I'm attempting to build HISAT2 indices for a large genome as part of the nf-core rnaseq pipeline. This is a high-memory, long-running type of job since it's a big genome. It's a non-reference organism, so I can't download these indices. I used Tower to set up my compute environment on AWS Batch with mainly default parameters. This particular job is getting stuck in the "submitted" state, and based on a conversation in the Slack channel, we're thinking this may be a scheduler / resource issue. Has anyone else run into something similar and know how to address it?
    2 replies
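    If this does turn out to be a resource mismatch, one thing worth checking is whether the requested memory and CPUs fit an instance type the Batch compute environment is allowed to launch. A hedged custom-config sketch that pins the index-build resources; the process name and the numbers are assumptions, so adjust them to your pipeline version and instance types:
        process {
            withName: 'HISAT2_BUILD' {    // assumption: confirm the real process name in the Tower tasks table
                cpus   = 16
                memory = 120.GB           // must fit an instance type the compute environment can launch
                time   = 48.h
            }
        }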
    suchita880
    @suchita880
    I can't see the Launchpad and workspaces when I deploy it on a local system at http://localhost:8000. How do I set this up locally?
    Phil Ewels
    @ewels
    The open source version is not the same as the version that runs at https://tower.nf - it's possible that it doesn't have these features
    suchita880
    @suchita880
    Which APIs are supported here - http://localhost:8080? I can see only GET APIs like http://localhost:8080/service-info?token, http://localhost:8080/workflow/list?token
    Phil Ewels
    @ewels
    All of the API endpoints are listed in the documentation
    moira-dillon
    @moira-dillon
    Has anyone come across (or documented) a breakdown of what capabilities are available between hosted, community and enterprise deployments of nextflow tower? Thank you! https://help.tower.nf/getting-started/deployments/
    Phil Ewels
    @ewels
    I haven't seen anything no, and to be honest it's the kind of thing that would likely be out of date almost as soon as it's written
    Off the top of my head I think that the hosted / enterprise editions can launch pipelines, have organisational structures with multi-user access and roles, workspaces for organising runs and credentials and so on. The community edition is basically just for monitoring runs that you launch yourself, and you can't really share those runs with anyone.
    3 replies
    (this is pretty much what those docs say too, just a little more verbose)
    And a bunch of other stuff that I haven't mentioned probably.
    9d0cd7d2
    @9d0cd7d2:matrix.org
    [m]
    Hi all, I'm testing Tower to see the capabilities of the tool and it's truly awesome. One doubt I have: is it possible to describe compute environments at the same level of detail for the Nextflow CLI? I mean, using Tower I can define some parameters to create a Slurm cluster, for example: login hostname, login port, etc. How could these parameters be used in a CLI Nextflow pipeline? Using executors? Thanks in advance!
    Paolo Di Tommaso
    @pditommaso
    there isn't always an equivalent setting in Nextflow for every option available in a Tower compute env
    for example, Nextflow does not need to be aware of the Slurm login host, therefore there is no such option
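    For the Slurm options that do have a Nextflow-side equivalent, a minimal config sketch (Nextflow must be launched from a node that can submit jobs to the cluster; the queue and account values are placeholders):
        process.executor       = 'slurm'
        process.queue          = 'long'                  // placeholder queue name
        process.clusterOptions = '--account=my-project'  // placeholder account string
        executor.queueSize     = 50                      // cap on jobs queued at once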
    9d0cd7d2
    @9d0cd7d2:matrix.org
    [m]
    thanks for the answer @pditommaso, and how could we approach something similar without using Tower? For example, having a central node with Nextflow that deploys processes or pipelines against remote Slurm clusters? Maybe a wrapper or a custom script?
    21 replies
    9d0cd7d2
    @9d0cd7d2:matrix.org
    [m]
    Many, many thanks for your answers again @ewels, and of course congratulations to the devs for the great work on the Nextflow/Tower tools, which are truly awesome without a doubt
    kaitlinchaung
    @kaitlinchaung
    Hi there, thanks for the great tool.
    I've noticed lately that the jobs remain in the blue running state, even though they have been cancelled and are not running anymore. Failed jobs and successfully completed jobs are accurately reflected, however. For reference, I am running all my jobs on SLURM. I was wondering if there is maybe something I am doing wrong on my part or if there is any parameter I should change to fix this? I noticed this happening about a month ago. Thanks!
    Paolo Di Tommaso
    @pditommaso
    Hi, we need more details to troubleshoot the problem. Please open an issue here,
    including the workflow id and the nextflow logs
    Patrick Hüther
    @phue
    Hi, I spotted a mention of tower pipeline reports in the 21.12.0 changelog :mag:
    Looks amazing in the community showcase, how do we enable it on our instance?
    Paolo Di Tommaso
    @pditommaso
    Hi Patrick, this is still a dev preview. We plan to include it in the Q1 release for customers
    1 reply
    Phil Ewels
    @ewels
    oooh, I hadn't seen this yet :star2:
    @pditommaso an "open in new tab" button would be nice :eyes: :sweat_smile:
    Paolo Di Tommaso
    @pditommaso
    you are right!