    G Fedewa
    @harper357
    I am having trouble launching pipelines when I select my organization's workspace. In other words, when I select a pipeline from my Launchpad while a different Workspace is selected, it gives me the error "Http failure response for https://tower.nf/api/workflow/launch?workspaceId=<>: 403 Forbidden". Is there some setting I need to change/add to let me launch pipelines in different workspaces?
    Ido Tamir
    @idot
    Hello, since yesterday I get Unexpected response code 400 for request https://api.tower.nf/trace/create . I am now desperately trying not to use Tower: I unset TOWER_ACCESS_TOKEN and don't use -with-tower (version 21.04.1), but it's always the same error
    Ido Tamir
    @idot
    Now it started and I get Unexpected response code 400 for request https://api.tower.nf/trace/MDfCwae87pju7/progress
    Error ID: 1e1h4Ib3Fd14YYvCOdQHLP
    Ido Tamir
    @idot
    Starting again I get Unexpected response code 400 for request https://api.tower.nf/trace/create and the workflow does not run at all
    Ido Tamir
    @idot
    Tower seems to work again
    Ido Tamir
    @idot
    Tower was active because I had it in ~/.nextflow/config
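    A minimal sketch of the kind of ~/.nextflow/config block that keeps Tower reporting enabled for every run, even without -with-tower (the token value is a placeholder):

    // ~/.nextflow/config (sketch)
    tower {
        enabled     = true
        accessToken = 'eyJ...'               // placeholder personal access token
        endpoint    = 'https://api.tower.nf'
    }
    // setting enabled = false, or removing the block, switches the reporting off again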
    Jeffrey Massung
    @massung

    I have a workflow I’m building for which some of the early processes take considerable CPU/memory to run, so I’m forced to run the workflow on AWS (no biggie). So far, I’ve been slowly adding steps and relaunching the same pipeline with resume and it’s all working great.

    However, the time spent provisioning EC2 instances for later processes that don’t require much CPU/memory is frustrating. I could (in theory) just run those processes locally while I build out the rest of the workflow.

    Is it possible for me to do something like copy the work/scratch directory from S3 to my local work/ directory and -resume the workflow running locally? I’ve tried the naive way of doing it, but it won’t detect that the workflow has already successfully run some of the processes. Are there any logs/files I can just manually edit - or pass something on the CLI - to get it to do so?

    Paolo Di Tommaso
    @pditommaso
    EC2 provisioning time is something we can do little about
    however, best practice for building NF pipelines is to develop on your local computer with a small test dataset and then deploy on AWS
    this is crucial for testing and troubleshooting
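    A minimal sketch of that pattern, assuming a nextflow.config with two profiles (the queue, bucket and region names are placeholders):

    // nextflow.config (sketch)
    profiles {
        // local development: small test dataset, local executor
        standard {
            process.executor = 'local'
            params.input     = 'test_data/small_samples.csv'    // hypothetical small dataset
        }
        // full-scale runs on AWS Batch
        awsbatch {
            process.executor = 'awsbatch'
            process.queue    = 'my-batch-queue'                 // placeholder queue name
            workDir          = 's3://my-bucket/work'            // placeholder bucket
            aws.region       = 'eu-west-1'
        }
    }

    Develop and troubleshoot locally with nextflow run main.nf -profile standard, then deploy the same pipeline with -profile awsbatch.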
    Combiz Khozoie
    @combiz
    Hi, is there a way to obtain a tabulated version of the nf-tower cloud cost estimates for all tasks? It's possible to see the 'cost' field when clicking on a single task; however, we need to aggregate the predicted costs for hundreds of tasks.
    Paolo Di Tommaso
    @pditommaso
    Hi, currently it's only possible via the workflow/tasks API endpoint
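    A rough Groovy sketch of that aggregation against GET /workflow/{workflowId}/tasks, assuming a personal access token in TOWER_ACCESS_TOKEN; the response shape (a tasks list whose entries carry a task.cost field) and the max/offset paging parameters are assumptions, not verified here:

    import groovy.json.JsonSlurper

    def token      = System.getenv('TOWER_ACCESS_TOKEN')
    def workflowId = '<workflowId>'                     // id of the run to aggregate
    def slurper    = new JsonSlurper()

    BigDecimal total = 0
    int offset = 0
    while( true ) {
        def url  = new URL("https://api.tower.nf/workflow/${workflowId}/tasks?max=100&offset=${offset}")
        def conn = url.openConnection()
        conn.setRequestProperty('Authorization', "Bearer ${token}")
        def page  = slurper.parse(conn.inputStream)
        def tasks = page.tasks ?: []
        if( !tasks ) break
        // sum the per-task cost estimates shown in the task details dialog
        total  += tasks.sum { it.task?.cost ?: 0 } as BigDecimal
        offset += tasks.size()
    }
    println "Estimated total cost: ${total}"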
    Combiz Khozoie
    @combiz
    Perfect, thanks
    Combiz Khozoie
    @combiz
    Does this endpoint paginate the results? I seem to obtain only the first 10 tasks. (https://tower.nf/openapi/index.html#get-/workflow/-workflowId-/tasks)
    Combiz Khozoie
    @combiz
    Filed an issue: seqeralabs/nf-tower#325
    rmeinl
    @rmeinl
    Hey! I'm looking to run a workflow in nf-tower that accesses a Postgres DB. When I run it locally I store the credentials in the nextflow config. Is there a way to securely store them somewhere in nf-tower to initiate my workflow?
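    For reference, one common way to keep such credentials out of the config file itself is to read them from the environment at launch time; a hedged sketch (the parameter and variable names are made up, and this is a workaround rather than a Tower feature):

    // nextflow.config (sketch) -- hypothetical parameter names
    params {
        db_host     = 'db.example.com'
        db_name     = 'mydb'
        db_user     = System.getenv('PG_USER')        // read from the environment instead of hardcoding
        db_password = System.getenv('PG_PASSWORD')
    }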
    Danilo Imparato
    @daniloimparato

    Hi all!

    Super hyped to be trying Tower. The user experience has been impressive so far. 🚀

    However, I could not get even a single workflow to execute on the Google Life Sciences backend.

    I have set up this very minimal example below. Can someone please enlighten me what might be wrong?

    #!/usr/bin/env nextflow
    
    nextflow.enable.dsl=2
    
    process echo_remote_file_content {
    
      container = "docker.io/taniguti/wf-cas9:latest"   // this does not work :(
      // container = "docker.io/docker/whalesay:latest" // this works!! both images are public
    
      input: path remote_file
    
      output: stdout emit: cat
    
      script: "cat $remote_file"
    }
    
    workflow {
      echo_remote_file_content(params.remote_file)
      println echo_remote_file_content.out.cat.view()
    }

    This is the error report:

    Error executing process > 'echo_remote_file_content'
    
    Caused by:
      Process `echo_remote_file_content` terminated with an error exit status (9)
    
    Command executed:
      cat str.txt
    
    Command exit status:
      9
    
    Command output:
      (empty)
    
    Command error:
      Execution failed: generic::failed_precondition: while running "nf-6f1c929e312542a7ee1699175d05f753-main": unexpected exit status 1 was not ignored
    
    Work dir:
      gs://sensitive-bucket-name/scratch/1uc7mIoqwEIZV0/6f/1c929e312542a7ee1699175d05f753
    
    Tip: view the complete command output by changing to the process work dir and entering the command `cat .command.out`
    Paolo Di Tommaso
    @pditommaso
    Hello, please open an issue including the "Nextflow console output" and the "Nextflow log file" (you can find them in the Execution logs panel)
    Danilo Imparato
    @daniloimparato
    Done, thanks! @pditommaso
    Vlad Kiselev
    @wikiselev

    Hi All, I am trying to start a pipeline using NF-tower on AWS Batch; however, the head job (Nextflow itself) fails to start... I get:

    Status reason
    Task failed to start

    And also

    CannotStartContainerError: Error response from daemon: OCI runtime create failed: runc did not terminate successfully: unknown

    The image that it is trying to start is:

    public.ecr.aws/seqera-labs/tower/nf-launcher:21.08.0-edge

    Has anyone had this before?

    Vlad Kiselev
    @wikiselev

    there were several failures as described above, and on the latest attempt it managed to start, but this is the log from the Nextflow container:

    /usr/local/bin/nf-launcher.sh: line 25: /usr/bin/tee: Cannot allocate memory
    /usr/local/bin/nf-launcher.sh: line 71:    12 Killed                  aws s3 sync --only-show-errors "$NXF_WORK/$cache_path" "$cache_path"
    Failed to launch the Java virtual machine
    NOTE: Nextflow is trying to use the Java VM defined by the following environment variables:
     JAVA_CMD: /usr/lib/jvm/java-11-amazon-corretto/bin/java
     NXF_OPTS: 
    /usr/local/bin/nf-launcher.sh: line 43:    30 Killed                  [[ "$NXF_WORK" == s3://* ]]

    obviously something is very wrong, but I feel I've checked everything and I'm not sure where else to look...

    Arghhh, ok, please ignore all of the above - I specified 8 MB of RAM instead of 8 GB!
    Paolo Di Tommaso
    @pditommaso
    :smile:
    Vlad Kiselev
    @wikiselev
    Is it normal that the hosted Tower server has API request problems? I'm seeing a lot of Oops... Unable to process request - Error ID: Ac7sDnGIoR0r7IKFYLkbR both on the website and in the NF logs
    Vlad Kiselev
    @wikiselev
    And also this: Http failure response for https://tower.nf/api/orgs: 502 Bad Gateway. I can't really use it at the moment
    Combiz Khozoie
    @combiz
    I'm finding that the tasks API (https://tower.nf/openapi/index.html#get-/workflow/-workflowId-/tasks) is returning info for a previous run of the same data (-resume was not used). Any ideas?
    Yes, @wikiselev, same here. Hopefully back up soon!
    Thankfully downtime is very rare (this is the first time I've seen it), so it's definitely not normal. :)
    Vlad Kiselev
    @wikiselev
    Good to know! I used it for the first time yesterday and was super excited, but I can't do anything today :-)
    probably the API service just requires a restart
    Paolo Di Tommaso
    @pditommaso
    um, let me check
    @wikiselev give another try please
    Combiz Khozoie
    @combiz

    I'm finding that the tasks API (https://tower.nf/openapi/index.html#get-/workflow/-workflowId-/tasks) is returning info for a previous run of the same data (-resume was not used). Any ideas?

    @pditommaso shall I file a gh issue for this or have I made an error? I'm receiving data for a previous run from the tasks API (ID 'EyoDOtC0ruDyv') when querying the latest run (ID '47kppVWxAkUEWl'), though '-resume' wasn't used for the latter.

    Paolo Di Tommaso
    @pditommaso
    weird, yes please, including your request as an example
    Vlad Kiselev
    @wikiselev
    When I run my pipeline with Tower on AWS Batch it does not write any of the .command.* files to my S3 bucket. Literally, all folders in the work directory contain only input and output files. I need to collect .command.log from several processes and at the moment cannot do that. Has anyone seen similar behaviour? When I ran AWS Batch from my laptop before, I remember seeing a couple of warnings in .nextflow.log related to S3, but in Tower I don't seem to be able to find .nextflow.log... When I set up Tower and the role policies on AWS I strictly followed the Tower documentation.
    Vlad Kiselev
    @wikiselev

    Just reran it from my local laptop on AWS and checked .nextflow.log. There don't seem to be any warnings or errors except for:

    WARN  com.amazonaws.util.Base64 - JAXB is unavailable. Will fallback to SDK implementation which may be less performant

    Though I still do not see any of the .command.* files in my work directory. The pipeline works with no problem until I specifically ask for .command.log, and then it fails because the file does not exist.

    Paolo Di Tommaso
    @pditommaso
    I need to collect .command.log from several processes and at the moment cannot do that.
    what do you mean?
    is the execution failing?
    Vlad Kiselev
    @wikiselev
    Yes, it is failing because I am trying to copy .command.log, which does not exist. The error is cp: cannot stat '.command.log': No such file or directory.
    Somehow, none of the .command.* files are written to the folders in the work directory, and the pipeline runs OK until it gets to the process where .command.log needs to be copied
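    A sketch of the kind of process that trips over this, assuming the log is captured by copying the task's .command.log into a published output (the process, script and file names are made up):

    process run_step_with_log {
        publishDir 'results/logs', mode: 'copy'

        input:
        path samples

        output:
        path 'step.log'

        script:
        """
        do_work.sh ${samples}            # hypothetical command
        cp .command.log step.log         # fails with 'cannot stat' if .command.log is never written to the task dir
        """
    }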
    Paolo Di Tommaso
    @pditommaso
    does the basic https://github.com/nextflow-io/hello work?
    Vlad Kiselev
    @wikiselev
    Yes, just tried it on Tower and it succeeded. However, no .command.* files in the work directories again...
    Vlad Kiselev
    @wikiselev
    and nothing suspicious in the log file...
    Paolo Di Tommaso
    @pditommaso
    ummm, scroll down the run page to the tasks table
    click on one task for the hello run
    when the task dialog opens, click on the Execution logs tab
    then download 1) the task stdout, 2) the task stderr, 3) the task log and upload them here