    Viren Amin
    @virenar
    Thanks @ewels, will keep using the CLI then
    Matthieu Pichaud
    @MatPich_twitter
    Hi guys, have you encountered this error before?
    No such property: inittag for class: Script_c812b1e6
    It stops the workflow before anything is run.
    Thanks for your help!
    Paolo Di Tommaso
    @pditommaso
    please file an issue including the Nextflow log file
    Bryan Lajoie
    @bryanlajoie_twitter

    Using nf + Tower + AWS Batch. Blocked by the default uploadChunkSize of 10 MB, which limits file size to 100 GB. No matter where I change this param (base.conf, nextflow.config, etc.) it always reverts to the default of 10485760 in the Tower 'resolved configuration'.

    uploadChunkSize = 10485760

    If I however enter this conf change into the UI manually, via the launch / nextflow config file option, then it does stick in the workflow resolved configuration. Any ideas?
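    For reference, the option under discussion is a Nextflow config setting. A minimal sketch of what is being set, assuming the `aws.client` scope described in the Nextflow AWS docs (the 50 MB value is illustrative):

    ```groovy
    // nextflow.config — sketch; scope name per the Nextflow AWS docs,
    // value chosen only for illustration
    aws {
        client {
            uploadChunkSize = 52428800   // bytes per S3 multipart part (50 MB)
        }
    }
    ```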

    Nextflow Tower 8-24-2021 9-07-41 AM.png
    Paolo Di Tommaso
    @pditommaso
    yes, Tower defaults that attribute to 10 MB
    Phil Ewels
    @ewels
    I think that the config parsing goes pipeline, then tower defaults, then UI inputs
    So if it's something that Tower sets as a default, it's not something that you can ever affect from the pipeline code / config
    And there is no way around this that I know of..(?)
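    The layering Phil describes can be sketched as a simple merge where later sources win; the names and values below are illustrative, not Tower's actual code:

    ```python
    # Toy sketch of config layering: sources are merged in order, later wins.
    def resolve(*layers):
        merged = {}
        for layer in layers:
            merged.update(layer)   # later layers override earlier keys
        return merged

    pipeline_config = {"uploadChunkSize": 52428800}   # what the pipeline asks for
    tower_defaults  = {"uploadChunkSize": 10485760}   # Tower's built-in default
    ui_config       = {}                              # nothing entered in the UI

    resolved = resolve(pipeline_config, tower_defaults, ui_config)
    print(resolved["uploadChunkSize"])   # 10485760: Tower's default clobbers the pipeline
    ```

    This is why only a value entered in the UI (the last layer) sticks in the resolved configuration.
    
    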
    Paolo Di Tommaso
    @pditommaso
    I think that the config parsing goes pipeline, then tower defaults, then UI inputs
    exactly, what's wrong with that?
    Bryan Lajoie
    @bryanlajoie_twitter
    I'd like to ideally set that param in the pipeline config, and not have tower override my set chunk size.
    Can I somehow set that UI option via api? I can't be setting this manually for every launch :)
    Phil Ewels
    @ewels
    Would be nice if Tower had an equivalent of ~/.nextflow/config
    eg. A config that is always applied for every launch
    (After Tower defaults πŸ˜‰)
    Bryan Lajoie
    @bryanlajoie_twitter
    ^ yes please !
    Phil Ewels
    @ewels
    @pditommaso nothing wrong with it per se, just no way to have a default that takes priority over Tower defaults. So for stuff like this you have to manually enter a specific config in the UI for every launch.
    Is there a place to see the default config that Tower gives Nextflow?
    Bryan Lajoie
    @bryanlajoie_twitter
    Why wouldn't user/pipeline conf always come last, to have the ability to override anything and everything?
    Phil Ewels
    @ewels
    I guess so that Tower can "make stuff work" on cloud etc. But yeah, I guess most pipelines don't set these things, so in the majority of cases the order would have no effect.
    Bryan Lajoie
    @bryanlajoie_twitter
    agreed, so for now perhaps Tower should only set things that the user would never need to tweak. Later, change the ordering to give the user full control?
    As a workaround, we've added the necessary aws tweaks to the launchpad default nextflow-config options. Re. uploadChunkSize, it may be better for tower/nf to use a dynamically set chunksize by default. Basespace cli does similar I believe.
    As is, Tower has a file-size limit of ~100 GB via the 10 MB default (S3 multipart uploads allow at most 10,000 parts, and 10 MB × 10,000 ≈ 100 GB).
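    Bryan's "dynamic chunk size" suggestion can be sketched in a few lines. The 10,000-part cap is S3's documented multipart limit; the function name and structure are purely illustrative:

    ```python
    MIN_CHUNK = 10 * 1024 * 1024   # the 10 MB default discussed above
    MAX_PARTS = 10_000             # S3 multipart uploads allow at most 10,000 parts

    def chunk_size_for(file_size: int) -> int:
        """Smallest chunk size (>= the default) that fits the file in 10,000 parts."""
        needed = -(-file_size // MAX_PARTS)   # ceiling division
        return max(MIN_CHUNK, needed)

    # A 200 GB file needs ~21 MB parts; small files keep the 10 MB default.
    print(chunk_size_for(200 * 1024**3))   # 21474837
    print(chunk_size_for(50 * 1024**3))    # 10485760
    ```
    
    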
    Paolo Di Tommaso
    @pditommaso
    nothing wrong with it per se, just no way to have a default that takes priority over Tower defaults. So for stuff like this you have to manually enter a specific config in the UI for every launch.
    this is a good point, there was some thought about it, but the main problem was: where should the Nextflow config default live:
    1. compute env level
    2. workspace level
    3. org level?
    don't say all of them :D
    Re. uploadChunkSize, it may be better for tower/nf to use a dynamically set chunksize by default. Basespace cli does similar I believe.
    the S3 client used by Nextflow would need to be re-engineered from scratch
    Phil Ewels
    @ewels
    don't say all of them :D
    Damn, you read my mind! πŸ˜†
    Bryan Lajoie
    @bryanlajoie_twitter
    a default is a default, no? If any conf overrides then that should take precedence?
    Jeffrey Massung
    @massung
    In the Tower API, when launching a pipeline, what's the "id" input parameter supposed to be? Is it the workspace id? for example, when using POST workflow/launch:
    {
      "launch": {
        "id": "???",
        "pipeline": "test_pipeline",
        "revision": "main",
        "configProfiles": [
          "standard"
        ]
      }
    }
    Paolo Di Tommaso
    @pditommaso
    it's not expected to be provided
    tip: open the browser developer tools, make a launch, and look at the launch request it makes in the network tab
    Jeffrey Massung
    @massung
    ok. what should the pipeline name be then? I'm getting 403 errors, but I am the owner (and have the token set, using the "try" samples), so I assume I just have some inputs wrong. Does the pipeline need to be the full "organization/workspace/pipeline" or just the pipeline name? An id?
    If there's an example somewhere launching one of the community pipelines, that would be helpful
    Paolo Di Tommaso
    @pditommaso
    something like this
    launch: 
      computeEnvId: "4woukvRfAz0cGCvZPtncho"
      configProfiles: null
      configText: null
      dateCreated: "2021-08-31T15:57:24.747Z"
      entryName: null
      id: null
      mainScript: null
      paramsText: null
      pipeline: "https://github.com/pditommaso/hello"
      postRunScript: null
      preRunScript: null
      pullLatest: null
      revision: null
      schemaName: null
      stubRun: null
      workDir: "/home/ubuntu/nf-work"
    Jeffrey Massung
    @massung
    I expected that to be the quick launch. I was assuming (perhaps incorrectly?) that if I set up a pipeline in the launchpad, I could set up all the defaults there and launch it with the API, perhaps only overriding a couple of things (like paramsText)?
    Paolo Di Tommaso
    @pditommaso
    correct
    if you want to launch a pre-configured pipeline, that id is the launch id associated with that pipeline
    but likely it's easier using the action API
    Screenshot 2021-08-31 at 18.20.26.png
    Jeffrey Massung
    @massung
    Thanks. I hope you don't mind one more question... Is there any supported, conventional way to support N pipelines in a single GitHub repo as opposed to one repo per pipeline (trying to make it easier for several people)? Aside from overriding the mainScript entry point, I haven't really found a nice way, and other things don't appear to play nicely with that.
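    For completeness, the mainScript override Jeffrey mentions looks something like this (a sketch; paths are illustrative, and as he notes, tooling around this approach can be awkward):

    ```groovy
    // nextflow.config at the repo root — picks a default entry script
    // other than main.nf (path here is illustrative)
    manifest {
        mainScript = 'pipelines/alpha.nf'
    }
    ```

    Recent Nextflow versions also accept a per-run override on the command line, e.g. `nextflow run <repo> -main-script pipelines/beta.nf`, which avoids hard-coding a single entry point in the config.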
    Jeffrey Massung
    @massung

    if you want to launch a pre-configured pipeline, that id is the launch id associated with that pipeline

    I think I'm missing something incredibly obvious, but I'm not seeing an ID anywhere for the pipeline. I see them for the workspaces, compute environments, etc. But not for the pipelines.

    Lasse Folkersen
    @lassefolkersen
    Hi nf-tower community. I'm having trouble with something as simple as specifying the input FASTQ files. I followed the AWS Batch instructions and have two small test files in an Amazon S3 bucket: s3://lasses-tower-bucket/test1/. The input field tells me I should just be able to write that ("It can also be used to specify the path to a directory on mapping step with a single germline sample only," it says), but that won't work. Logs say: No FASTQ files found in --input directory 's3://lasses-tower-bucket/test1/'
    Jeffrey Massung
    @massung
    https://tower.nf/openapi/index.html appears to be down. Also, it appears as though my personal access token can't be used to access API end-points for an organization I own? Is there something I'm missing (like passing the organization w/ the request in a header or perhaps a setting in the organization I need to enable)?
    Kevin
    @klkeys

    has anybody else come across a recent AWS ECS/Batch failure with NF Tower?

    I think that it’s related to this email to my org from 23 Aug 2021:

    Hello,
    Your action is required to avoid potential service interruption once Amazon ECS API request validation improvements take effect on September 24, 2021. We have identified the following API requests to Amazon ECS from your account that could be impacted by these changes:
    DescribeContainerInstances
    With these improvements, Amazon ECS APIs will validate that the Service and Cluster name parameters in the API match the Cluster and Service name in the ARN.