Using nf + tower + aws-batch. I'm blocked by the default AWS `uploadChunkSize` of 10 MB, which caps uploads at ~100 GB per file (S3 multipart uploads allow at most 10,000 parts). No matter where I set this param (base.config, nextflow.config, etc.), it always reverts to the default of 10485760 in the Tower 'resolved configuration':
```
uploadChunkSize = 10485760
```
If, however, I enter this config change manually in the UI, via the launch / Nextflow config file option, then it does stick in the workflow's resolved configuration. Any ideas?
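For reference, this is roughly the override I'm trying to apply in nextflow.config, under the `aws.client` scope (the 100 MB value here is just an example):

```
// nextflow.config -- sketch; raises the multipart chunk size
aws {
    client {
        // 100 MB parts -> up to ~1 TB per file under the 10,000-part S3 limit
        uploadChunkSize = 104857600
    }
}
```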
Re. uploadChunkSize: it may be better for Tower/Nextflow to use a dynamically chosen chunk size by default, scaled to the size of the file being uploaded. I believe the BaseSpace CLI does something similar.
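To illustrate the idea (a sketch only, not how Tower or the BaseSpace CLI actually implement it): pick the smallest chunk size that keeps the upload within S3's 10,000-part limit, with the current 10 MB default as a floor.

```python
import math

S3_MAX_PARTS = 10_000           # hard S3 limit on parts per multipart upload
MIN_CHUNK = 10 * 1024 * 1024    # current 10 MB default, kept as a floor

def dynamic_chunk_size(file_size: int) -> int:
    """Smallest chunk (in bytes) that fits `file_size` into <= 10,000 parts."""
    needed = math.ceil(file_size / S3_MAX_PARTS)
    return max(MIN_CHUNK, needed)

# Small files keep the 10 MB default; a 500 GB file gets a larger chunk
# so it still fits within the 10,000-part limit.
small = dynamic_chunk_size(1024)            # -> 10485760 (the floor)
big = dynamic_chunk_size(500 * 1024**3)     # > 10485760
```

With this scheme nobody has to hand-tune the parameter: files under ~100 GB behave exactly as today, and larger files just work.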
```yaml
launch:
  computeEnvId: "4woukvRfAz0cGCvZPtncho"
  configProfiles: null
  configText: null
  dateCreated: "2021-08-31T15:57:24.747Z"
  entryName: null
  id: null
  mainScript: null
  paramsText: null
  pipeline: "https://github.com/pditommaso/hello"
  postRunScript: null
  preRunScript: null
  pullLatest: null
  revision: null
  schemaName: null
  stubRun: null
  workDir: "/home/ubuntu/nf-work"
```
`id` is the launch id associated with that pipeline.