    Phil Ewels
    @ewels
    So if it's something that Tower sets as a default, it's not something that you can ever affect from the pipeline code / config
    And there is no way around this that I know of..(?)
    Paolo Di Tommaso
    @pditommaso
    I think that the config parsing goes pipeline, then tower defaults, then UI inputs
    exactly, what's wrong with that?
    Bryan Lajoie
    @bryanlajoie_twitter
    I'd like to ideally set that param in the pipeline config, and not have tower override my set chunk size.
    Can I somehow set that UI option via api? I can't be setting this manually for every launch :)
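    e.g. could I push it through the launch body's configText? (guessing here; that field shows up in launch requests)
    {
      "launch": {
        "configText": "aws.client.uploadChunkSize = '100 MB'"
      }
    }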
    Phil Ewels
    @ewels
    Would be nice if Tower had an equivalent of ~/.nextflow/config
    e.g. a config that is always applied for every launch
    (after Tower defaults 😉)
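    Something like this, say, pinning the chunk size once for every launch (a sketch; assuming aws.client.uploadChunkSize is the option in play here):
    // hypothetical always-applied config
    aws {
        client {
            uploadChunkSize = '100 MB'   // 100 MB parts -> ~1 TB ceiling at S3's 10,000-part limit
        }
    }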
    Bryan Lajoie
    @bryanlajoie_twitter
    ^ yes please !
    Phil Ewels
    @ewels
    @pditommaso nothing wrong with it per se, just no way to have a default that takes priority over Tower defaults. So for stuff like this you have to manually enter a specific config in the UI for every launch.
    Is there a place to see the default config that Tower gives Nextflow?
    Bryan Lajoie
    @bryanlajoie_twitter
    Why wouldn't user/pipeline conf always come last, to have the ability to override anything and everything?
    Phil Ewels
    @ewels
    I guess so that Tower can "make stuff work" on cloud etc. But yeah, I guess most pipelines don't set these things, so in the majority of cases the order would have no effect.
    Bryan Lajoie
    @bryanlajoie_twitter
    agreed, so for now, perhaps tower should only set things that the user would never need to tweak. Later, change the ordering to give the user full control?
    As a workaround, we've added the necessary aws tweaks to the launchpad default nextflow-config options. Re. uploadChunkSize, it may be better for tower/nf to use a dynamically set chunk size by default. The BaseSpace CLI does something similar, I believe.
    As is, tower has a ~100 GB upload limit via the 10 MB default chunk size (S3 multipart uploads max out at 10,000 parts).
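    Roughly what I have in mind (a Groovy sketch, not the actual nextflow s3 client code):
    // pick the smallest part size that still fits the file
    // within S3's 10,000-part multipart limit
    long dynamicChunkSize(long fileSizeBytes) {
        final long MAX_PARTS = 10000                // S3 multipart hard limit
        final long MIN_CHUNK = 10L * 1024 * 1024    // keep today's 10 MB default as the floor
        long needed = (fileSizeBytes + MAX_PARTS - 1).intdiv(MAX_PARTS)
        return Math.max(MIN_CHUNK, needed)
    }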
    Paolo Di Tommaso
    @pditommaso
    nothing wrong with it per se, just no way to have a default that takes priority over Tower defaults. So for stuff like this you have to manually enter a specific config in the UI for every launch.
    this is a good point, there was some thought about it, but the main problem was: where should the nextflow config default live:
    1. compute env level
    2. workspace level
    3. org level?
    don't say all of them :D
    Re. uploadChunkSize, it may be better for tower/nf to use a dynamically set chunk size by default. The BaseSpace CLI does something similar, I believe.
    the s3 client used by nextflow needs to be re-engineered from scratch
    Phil Ewels
    @ewels
    don't say all of them :D
    Damn, you read my mind! 😆
    Bryan Lajoie
    @bryanlajoie_twitter
    a default is a default, no? If any conf overrides it, then that should take precedence?
    Jeffrey Massung
    @massung
    In the Tower API, when launching a pipeline, what's the "id" input parameter supposed to be? Is it the workspace id? for example, when using POST workflow/launch:
    {
      "launch": {
        "id": "???",
        "pipeline": "test_pipeline",
        "revision": "main",
        "configProfiles": [
          "standard"
        ]
      }
    }
    Paolo Di Tommaso
    @pditommaso
    it's not expected to be provided
    tip: open the browser developer tools, make a launch, and look at the launch request it makes in the network tab
    Jeffrey Massung
    @massung
    ok. what should the pipeline name be then? I'm getting 403 errors, but am the owner (and have the token set, using the "try" samples), so I assume I just have some inputs wrong. Does the pipeline need to be the full "organization/workspace/pipeline", or just the pipeline name? An id?
    If there's an example somewhere launching one of the community pipelines, that would be helpful
    Paolo Di Tommaso
    @pditommaso
    something like this
    launch: 
      computeEnvId: "4woukvRfAz0cGCvZPtncho"
      configProfiles: null
      configText: null
      dateCreated: "2021-08-31T15:57:24.747Z"
      entryName: null
      id: null
      mainScript: null
      paramsText: null
      pipeline: "https://github.com/pditommaso/hello"
      postRunScript: null
      preRunScript: null
      pullLatest: null
      revision: null
      schemaName: null
      stubRun: null
      workDir: "/home/ubuntu/nf-work"
    Jeffrey Massung
    @massung
    I expected that to be the quick launch. I was assuming (perhaps incorrectly?) that if I set up a pipeline in the launchpad, I could set up all the defaults there and launch it with the API, perhaps only overriding a couple of things (like paramsText)?
    Paolo Di Tommaso
    @pditommaso
    correct
    if you want to launch a pre-configured pipeline, that id is the launch associated with that pipeline
    but likely it's easier using the action API
    [screenshot: Screenshot 2021-08-31 at 18.20.26.png]
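    i.e. something like this, where id is the launch record behind the pre-configured pipeline and you override only what you need (all values made up):
    {
      "launch": {
        "id": "3xYvA9pQrT2sKjW",
        "paramsText": "input: 's3://my-bucket/samples'"
      }
    }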
    Jeffrey Massung
    @massung
    Thanks. I hope you don't mind one more question... Is there any supported, conventional way to host N pipelines in a single GitHub repo, as opposed to one repo per pipeline (trying to make it easier for several people)? Aside from overriding the mainScript entry point (sketch below), I haven't really found a nice way, and other things don't appear to play nicely with that.
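    For reference, the per-launch override I'm using now looks like this (repo and script names made up):
    {
      "launch": {
        "pipeline": "https://github.com/my-org/pipelines-mono",
        "revision": "main",
        "mainScript": "align.nf"
      }
    }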
    Jeffrey Massung
    @massung

    if you want to launch a pre-configured pipeline, that id is the launch associated with that pipeline

    I think I'm missing something incredibly obvious, but I'm not seeing an ID anywhere for the pipeline. I see them for the workspaces, compute environments, etc. But not for the pipelines.

    Lasse Folkersen
    @lassefolkersen
    Hi nf-tower community. I'm having trouble with something as simple as writing the input fastq path. I followed the AWS Batch instructions, and have two small test files in an Amazon S3 bucket: s3://lasses-tower-bucket/test1/. The input field tells me I should just be able to write that ("It can also be used to specify the path to a directory on mapping step with a single germline sample only", it says), but that won't work. Logs say: No FASTQ files found in --input directory 's3://lasses-tower-bucket/test1/'
    Jeffrey Massung
    @massung
    https://tower.nf/openapi/index.html appears to be down. Also, it appears as though my personal access token can't be used to access API end-points for an organization I own? Is there something I'm missing (like passing the organization w/ the request in a header or perhaps a setting in the organization I need to enable)?
    Kevin
    @klkeys

    has anybody else come across a recent AWS ECS/Batch failure with NF Tower?

    I think that it’s related to this email to my org from 23 Aug 2021:

    Hello,
    Your action is required to avoid potential service interruption once Amazon ECS API request validation improvements take effect on September 24, 2021. We have identified the following API requests to Amazon ECS from your account that could be impacted by these changes:
    DescribeContainerInstances
    With these improvements, Amazon ECS APIs will validate that the Service and Cluster name parameters in the API match the Cluster and Service name in the ARN.

    a recent launch into our Tower Forge infrastructure on AWS yielded this notice from AWS:

    Hello,
    On Wed, 1 Sep 2021 08:57:30 GMT, all EC2 instances in your Batch compute environment “arn:aws:batch:us-west-2:478885234993:compute-environment/TowerForge-2y3V6L8gnk6kM09yoB0vmS-head“ were scaled down. The compute environment is now in an INVALID state due to a misconfiguration preventing the EC2 instances from joining the underlying ECS Cluster. While in this state, the compute environment will not scale up or run any jobs. Batch will continue to monitor your compute environments and will move any compute environment whose instances do not join the cluster to INVALID.
    To fix this issue, please review and update/recreate the compute environment configuration. Common compute environment misconfigurations which can prevent instances from joining the cluster include: a VPC/Subnet configuration preventing communication to ECS, incorrect Instance Profile policy preventing authorization to ECS, or a bad custom Amazon Machine Image or LaunchTemplate configuration affecting the ECS agent.

    Kevin
    @klkeys
    something about a new longer ARN maybe? does the Tower Forge launch template account for this?
    Kevin
    @klkeys
    FWIW, I didn’t know what else to do, so I played “did you reboot the computer” by disabling and re-enabling the ECS compute environment, and suddenly things work again 👀 I’m still curious to know what caused this error
    Paolo Di Tommaso
    @pditommaso
    never seen this before, if it's happening again it's worth investigating the root cause with AWS support
    Jeffrey Massung
    @massung
    Is there a way, from within a process block, to determine whether -with-tower was passed or not?
    Jeffrey Massung
    @massung
    To be more precise, I guess I care whether or not it's being run locally. I'd like to use the containerOptions only if running locally
    Moritz E. Beber
    @Midnighter
    How about managing that with a profile?
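    e.g. (a sketch; the container options themselves are placeholders):
    // nextflow.config: apply the options only when running locally
    profiles {
        local {
            process.containerOptions = '--volume /data:/data'   // placeholder
        }
    }
    then run with -profile local on your machine, and leave the profile off in Tower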
    Paolo Di Tommaso
    @pditommaso
    indeed, that info is not exposed; the pipeline should not depend on tower execution