    Jesse Marks
    @jaamarks
    Hi all, is this the best place to ask questions about the amazon-genomics-cli?
    Mark Schreiber
    @markjschreiber
    Yes, ask away
    Jesse Marks
    @jaamarks
    I'm getting the following error message—in the log files on CloudWatch—when I try to use a file on S3 as input.
    An error occurred (403) when calling the HeadObject operation: Forbidden
    Mark Schreiber
    @markjschreiber
    Is the object in an S3 bucket that you have declared in the data section of your context? See https://aws.github.io/amazon-genomics-cli/docs/concepts/data/ for examples.
    You should also check that the bucket isn't restricted by an access control list on the bucket itself.
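    For example, a declared bucket in the data section of agc-project.yaml looks roughly like this (a sketch along the lines of that docs page; the bucket name is a placeholder):

    data:
      - location: s3://my-input-bucket
        readOnly: true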
    Julien Lafaye
    @jlafaye
    Hi everyone, the announcement blog post (https://aws.amazon.com/blogs/aws/new-amazon-genomics-cli-is-now-open-source-and-generally-available/) mentions WDL, Nextflow, CWL & Snakemake. After looking through the GitHub repo, it seems that only WDL & Nextflow are supported. Are there any plans or community efforts to add support for the other two?
    1 reply
    Grigoriy Sterin
    @grsterin
    Hey! Is it possible to run workflows located in private repositories? For example, if I would like to run a Nextflow workflow that is located in a private GitLab repo, is there a way to supply GitLab credentials?
    1 reply
    Jesse Marks
    @jaamarks

    On the Amazon Genomics CLI Context page there is a warning "Because the status command will only show contexts that are listed in the project YAML you should take care to destroy any running contexts before deleting them from the project YAML. "

    Suppose someone deletes the project YAML before destroying the context. Is there a way to see which contexts are deployed and have the ability to destroy them from the AWS console? I'm just thinking ahead about how I can destroy contexts that were accidentally left deployed and forgotten about. Thanks

    2 replies
    Arlind Nocaj
    @ArlindNocaj
    Hi AGC team, how can I add a custom workflow, e.g. a Nextflow folder from an S3 bucket? Is it enough to modify "mainWorkflowURL": "https://github.com/nextflow-io/rnaseq-nf.git"? Do I have to put a public workflow here?
    1 reply
    Arlind Nocaj
    @ArlindNocaj
    Dear AGC team, enterprises (as recommended by AWS) enforce S3 server-side encryption, which is currently not supported by CDK. Please +1 the first entry in this issue to raise awareness for CDK so that more customers can use AGC: aws/aws-cdk#11265
    1 reply
    fabianpeterg
    @fabianpeterg
    Dear team,
    My question is regarding the installation of version 1.2.
    Is it necessary to run a "cdk bootstrap accountId/region" command when installing v1.2 of the CLI, as the docs mention?
    When running agc account activate it looks like another CDK bootstrap is performed.
    So now I have three similar S3 buckets: cdk-randomnumber-assets-myaccountid-myregion, cdk-agc-assets-myaccountid-myregion and agc-myaccountid-myregion.
    4 replies
    mstone-modulus
    @mstone-modulus

    Hi all,

    Thanks for providing the agc resources; it looks very promising. Are there any examples or suggested practices for running a workflow in an automated or serverless fashion? Looking through the docs, it seems like I could set up a Lambda function that creates a context and adds a workflow, then invokes agc workflow run, and run that Lambda in response to some trigger (e.g. an S3 event upon upload of a samplesheet).
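    Concretely, something like this rough sketch is what I have in mind (the handler, the trigger wiring, and the assumption that the agc binary and project files are bundled into the Lambda image are all mine, not from the docs):

    import subprocess

    # Rough sketch only: assumes the agc binary and agc-project.yaml are bundled
    # into the Lambda image and that the function's role has the permissions agc
    # needs. Workflow and context names below are placeholders.
    def handler(event, context):
        # e.g. triggered by an S3 event when a samplesheet is uploaded
        samplesheet_key = event["Records"][0]["s3"]["object"]["key"]
        subprocess.run(["agc", "context", "deploy", "--context", "onDemandCtx"], check=True)
        subprocess.run(["agc", "workflow", "run", "my-workflow", "--context", "onDemandCtx"], check=True)
        return {"submitted_for": samplesheet_key}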

    Just want to make sure I'm on the right track here. As far as I can tell from the docs, agc workflow run will create an on-demand EC2 instance to act as the head node and run the workflow manager, but I'm not sure how workflow failures are handled if the machine running agc is no longer available.

    I'm also not sure if there are technical or cost reasons not to create a context for each workflow run, or what the best approach for shutting down contexts and cleaning up after workflow completion would look like.

    Any thoughts or suggestions would be much appreciated. (And we're currently interested in using agc with nextflow, in case that's relevant.)

    Thank you!

    4 replies
    Juan Felipe Ortiz
    @jfortiz_gitlab
    Hi! I am getting to know agc, and it looks very interesting. Currently I am working with Snakemake pipelines and was having errors when trying to run them with agc. Looking at the Dockerfile that (I think) agc uses for Snakemake, a very old Snakemake version is listed (6.4.0). Is there a way of running Snakemake pipelines with a more current version?
    6 replies
    Juan Felipe Ortiz
    @jfortiz_gitlab

    Hi, guys!
    I am trying to run a Snakemake workflow on agc, but when I run agc workflow run workflow_name -c context_name, it gives me one of two errors. Either:

    Error: an error occurred invoking 'workflow run'
    with variables: {WorkflowName:variants Arguments: ContextName:ctx1}
    caused by: unable to run workflow: write /tmp/workflow_825108273/.snakemake/conda/3c9b45427f57fad9ce68dd60d0fdf8ac/lib/icu/current: copy_file_range: is a directory

    or

    2022-03-31T19:55:17+08:00 𝒊  Running workflow. Workflow name: 'variants', InputsFile: '', Context: 'ctx1'
    2022-03-31T19:55:17+08:00 ⚠️  Failed to delete temporary folder ''
    2022-03-31T19:55:17+08:00 ✘   error="unable to run workflow: open /tmp/workflow_2021675201/.snakemake/conda/3c9b45427f57fad9ce68dd60d0fdf8ac/include/python3.10/pystrcmp.h: too many open files"
    Error: an error occurred invoking 'workflow run'
    with variables: {WorkflowName:variants Arguments: ContextName:ctx1}
    caused by: unable to run workflow: open /tmp/workflow_2021675201/.snakemake/conda/3c9b45427f57fad9ce68dd60d0fdf8ac/include/python3.10/pystrcmp.h: too many open files

    Interestingly enough, I don't change anything, yet sometimes one error appears and other times the other one shows up. I tried destroying and redeploying the context, but no dice. Has anyone encountered (and fixed) such behavior?
    Thanks a lot! :)

    3 replies
    Yoshiki Vázquez Baeza
    @ElDeveloper
    Hi AGC team, I'm trying to figure out how to set CPU and memory requirements for individual Snakemake rules. As best as I can tell from the source code, the resources tag is read, and if there are properties for "_cpus" and "mem_mb", those values will be used. I set these values in the rules of my Snakemake workflow, but they are always ignored and the Batch jobs all run with VCPU=1 and MEMORY=1024, which are the defaults set in the job definition for when these values are missing. Any idea of what I might be doing wrong? I've also tried running the "countries example" with these tags and see that all the jobs are executed with 1 CPU and 1024 MB of memory (happy to share that Snakefile if that's of interest).
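    For reference, the rules are annotated roughly like this (rule name, files, and values are just an example):

    rule align:
        input:
            "data/{sample}.fastq"
        output:
            "results/{sample}.bam"
        resources:
            _cpus=4,
            mem_mb=8192
        shell:
            "bwa mem ref.fa {input} > {output}"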
    4 replies
    Hyunmin Kim (Brandon)
    @hmkim
    Hi all,
    Dear AGC team,
    Do you have a plan for supporting DRAGEN from the Marketplace with agc?
    1 reply
    Matt Olm
    @MrOlm

    Hi team! I've read through almost the entirety of the agc documentation and am excited about working with it in my day-to-day work. I'm having a problem getting the nf-core-project example workflows to work, however.

    Specifically, I'm unable to pass any input files to the nextflow workflow. For example, I am able to get the following example manifest to successfully launch and run:

    {
      "mainWorkflowURL": "https://github.com/nextflow-io/rnaseq-nf.git",
      "inputFileURLs": [
        "inputs.json"
      ],
      "engineOptions": "-resume"
    }

    But when I do so, none of the inputs in inputs.json are actually passed to the command; the default values are used. I have tried this a couple of ways and with the example nf-core workflows as well, but I have not been able to successfully pass workflow inputs with the nextflow engine. Any advice, or just an example command of this working on your end would be very appreciated! Happy to provide more detail if needed

    2 replies
    fabianpeterg
    @fabianpeterg
    Hi all,
    I've a question about --usePublicSubnets. When I run agc account activate with --usePublicSubnets, it fails saying that the subnets it tries to create conflict with another subnet (I have an existing VPC made by an earlier agc version). As usePublicSubnets "involves creating a minimal VPC", how is this possible?
    2 replies
    fabianpeterg
    @fabianpeterg
    Just as a quick test, I'm trying to run parts of the demo-wdl-project (using v1.4).
    When I try to create the context myContext (agc context deploy --context myContext), it never finishes. I only put the usePublicSubnets: true line into the myContext definition part of agc-project.yaml.
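    (Roughly, the relevant part of agc-project.yaml looks like this; only the usePublicSubnets line was added, and the engine block is the one I'd expect from the demo project:)

    contexts:
      myContext:
        usePublicSubnets: true
        engines:
          - type: wdl
            engine: cromwell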
    In ECS, the corresponding task stays in pending status forever.
    I see this error in CloudFormation; any hints on repairing it?
    Resource template validation failed for resource cromwellEngineTaskDefD21A4E86 as the template has invalid properties. Please refer to the resource documentation to fix the template. Properties validation failed for resource cromwellEngineTaskDefD21A4E86 with message: #/Volumes/0: extraneous key [EfsVolumeConfiguration] is not permitted
    3 replies
    fabianpeterg
    @fabianpeterg
    I'm trying to run the hello workflow from the demo-wdl-project (using v1.4); in my context I'm using the usePublicSubnets: true directive.
    However, running hello (agc workflow run hello --context myContext)
    fails with the error below. Any hints? The endpoint below is OK.
    Call to WES endpoint 'https://xxxxx-api.eu-central-1.amazonaws.com/prod/yyy/wes/v1' failed: 504 Gateway Timeout
    2 replies
    Yoshiki Vázquez Baeza
    @ElDeveloper
    Sharing in case someone has any ideas. I ran a Snakemake workflow (with ~450 jobs) that eventually failed; as the system moved further along, each job started taking longer and longer (jobs went from taking 10 minutes to taking an hour and a half). I checked the input data and there was no reason why this should happen (in fact it didn't happen once I refactored the workflow to submit fewer jobs). After the workflow failed it left 19 lingering instances running for 20 hours, which led to a ~300 dollar charge. I got in touch with support and sorted this out, but I have no way to explain how it happened. This is particularly odd because of issue aws/amazon-genomics-cli#446, where I haven't been able to parallelize my workflows beyond 2 simultaneous instances.
    5 replies
    W. Lee Pang, PhD
    @wleepang
    Hi Everyone, We've enabled Github Discussions on the Amazon Genomics CLI repo. This will make it easier to engage with the Amazon Genomics CLI community by having source code and community conversations in one place. Check it out here:
    https://github.com/aws/amazon-genomics-cli/discussions
    Asier Gonzalez
    @AsierGonzalez
    Hello everyone, first of all kudos to you all for the amazing tool you have created! Unfortunately, I am coming here asking for help.
    I have been testing some of the GATK examples that use Cromwell that come with the AGC installation and it all went well. Then I tried to run the Whole Genome Germline Single Sample workflow (v. 3.1.3) developed by the Broad Institute. I have run this workflow with a small test dataset locally without any issues, but it fails when I change the input config files to point to S3 buckets and try to run it using AGC. I have looked into the engine logs but I could not interpret the errors I get:
    Thu, 14 Jul 2022 16:00:37 +0200    2022-07-14 14:00:37,535 cromwell-system-akka.dispatchers.engine-dispatcher-29 INFO  - WorkflowManagerActor: Workflow 383f4817-24ed-4bbe-8914-b9d915b1176b failed (during ExecutingWorkflowState): cromwell.engine.io.IoAttempts$EnhancedCromwellIoException: [Attempted 1 time(s)] - IOException: Could not read from s3://<bucket_name>/project/<project_name/userid/<user_id>/context/<Cromwell_context_name>/cromwell-execution/WholeGenomeGermlineSingleSample/<run_id>/call-UnmappedBamToAlignedBam/UnmappedBamToAlignedBam/299223f5-f740-4ead-94c8-2f788161ee8a/call-SamToFastqAndBwaMemAndMba/shard-0/SamToFastqAndBwaMemAndMba-0-rc.txt: s3://s3.amazonaws.com/<bucket_name>/project/<project_name/userid/<user_id>/context/<Cromwell_context_name>/cromwell-execution/WholeGenomeGermlineSingleSample/<run_id>/call-UnmappedBamToAlignedBam/UnmappedBamToAlignedBam/299223f5-f740-4ead-94c8-2f788161ee8a/call-SamToFastqAndBwaMemAndMba/shard-0/SamToFastqAndBwaMemAndMba-0-rc.txt
    Caused by: java.io.IOException: Could not read from s3://<bucket_name>/project/<project_name/userid/<user_id>/context/<Cromwell_context_name>/cromwell-execution/WholeGenomeGermlineSingleSample/<run_id>/call-UnmappedBamToAlignedBam/UnmappedBamToAlignedBam/299223f5-f740-4ead-94c8-2f788161ee8a/call-SamToFastqAndBwaMemAndMba/shard-0/SamToFastqAndBwaMemAndMba-0-rc.txt: s3://s3.amazonaws.com/<bucket_name>/project/<project_name/userid/<user_id>/context/<Cromwell_context_name>/cromwell-execution/WholeGenomeGermlineSingleSample/<run_id>/call-UnmappedBamToAlignedBam/UnmappedBamToAlignedBam/299223f5-f740-4ead-94c8-2f788161ee8a/call-SamToFastqAndBwaMemAndMba/shard-0/SamToFastqAndBwaMemAndMba-0-rc.txt
        at cromwell.core.path.EvenBetterPathMethods$$anonfun$fileIoErrorPf$1.applyOrElse(EvenBetterPathMethods.scala:117)
        at cromwell.core.path.EvenBetterPathMethods$$anonfun$fileIoErrorPf$1.applyOrElse(EvenBetterPathMethods.scala:116)
        at map @ cromwell.engine.io.nio.NioFlow.handleSingleCommand(NioFlow.scala:68)
        at map @ cromwell.engine.io.nio.NioFlow.$anonfun$processCommand$5(NioFlow.scala:54)
        at flatMap @ cromwell.engine.io.nio.NioFlow.$anonfun$processCommand$1(NioFlow.scala:53)
    Caused by: java.nio.file.NoSuchFileException: s3://s3.amazonaws.com/<bucket_name>/project/<project_name/userid/<user_id>/context/<Cromwell_context_name>/cromwell-execution/WholeGenomeGermlineSingleSample/<run_id>/call-UnmappedBamToAlignedBam/UnmappedBamToAlignedBam/299223f5-f740-4ead-94c8-2f788161ee8a/call-SamToFastqAndBwaMemAndMba/shard-0/SamToFastqAndBwaMemAndMba-0-rc.txt
    21 replies
    Asier Gonzalez
    @AsierGonzalez

    Hello, I'm extracting this message from the thread as I am clueless as to how to debug it:

    This message: failed to copy s3://bios-test-agc/scripts/b60e6d52a1d7b22e33a6dfcc3538fd16 after 5 attempts. aborting means that the workflow task was not able to copy its task script from S3. Can you check if that object exists? Can you also ensure that the role used by the EC2 workers would allow read access to that object. You may also want to check if the bucket or object have any policy or ACL that would prevent access.
    If someone knows how to debug it I would greatly appreciate it since it's blocking me.

    What really confuses me is that when this happens there are three tasks running and only one of them fails with this error; the others complete successfully. I have checked that the object exists and I believe there are no policy or ACL restrictions, but I am not sure I looked into it correctly.
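    (For concreteness, the checks I have in mind are along these lines, using the object from the error above:)

    aws s3api head-object --bucket bios-test-agc --key scripts/b60e6d52a1d7b22e33a6dfcc3538fd16
    aws s3api get-object-acl --bucket bios-test-agc --key scripts/b60e6d52a1d7b22e33a6dfcc3538fd16
    aws s3api get-bucket-policy --bucket bios-test-agc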
    Asier Gonzalez
    @AsierGonzalez

    Let me summarise what I have found out so far:

    • This is what the /aws/batch/joblog of the failing task looks like:
      [17] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      [22] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      attempt 1 to copy s3://bios-test-agc/scripts/4f5c67f114d2310921919ec9f4975446 failed
      [28] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      [33] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      attempt 2 to copy s3://bios-test-agc/scripts/4f5c67f114d2310921919ec9f4975446 failed
      [39] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      [44] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      attempt 3 to copy s3://bios-test-agc/scripts/4f5c67f114d2310921919ec9f4975446 failed
      [50] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      [55] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      attempt 4 to copy s3://bios-test-agc/scripts/4f5c67f114d2310921919ec9f4975446 failed
      [61] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      [66] Error loading Python lib '/usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0': dlopen: Error relocating /usr/local/aws-cli/v2/2.7.17/dist/libpython3.9.so.1.0: __wcstol_internal: symbol not found
      attempt 5 to copy s3://bios-test-agc/scripts/4f5c67f114d2310921919ec9f4975446 failed
      failed to copy s3://bios-test-agc/scripts/4f5c67f114d2310921919ec9f4975446 after 5 attempts. aborting

    My understanding is that this happens in the fetch_and_run.sh script while trying to download the script from S3.
    @markjschreiber mentioned that these messages suggest that the AGC context may not be correctly set up and that destroying and redeploying it could help. Unfortunately, this is not the case; the error appears every time I have tried it.

    Now I wonder if this may indicate there was an error while installing the AWS CLI on the machine where this runs?
    The pattern of two Error loading Python lib messages followed by one failed to copy s3://... is consistent with the two aws cli calls (aws s3api head-object ... first and aws s3 cp ... then) done in the _s3_localize_with_retry() function, where the copy doesn't work.
    I mean, if the AWS CLI was not installed correctly, then it would make sense that an error appears every time an aws cli command is invoked. This would also explain why the download doesn't work.
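    In other words, my rough mental model of _s3_localize_with_retry() is something like the following (a sketch of the pattern only, not the actual script; variable names are made up):

    for attempt in 1 2 3 4 5; do
        # two aws cli calls per attempt, matching the two "Error loading Python lib" lines in the log
        if aws s3api head-object --bucket "$bucket" --key "$key" &&
           aws s3 cp "s3://$bucket/$key" "$local_path"; then
            break
        fi
        echo "attempt $attempt to copy s3://$bucket/$key failed"
    done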
    Asier Gonzalez
    @AsierGonzalez
    I have checked all the CloudWatch log groups looking for any hints of what may have gone wrong, but I didn't see anything. If someone has any suggestions of where I should look, I would be very grateful.
    Asier Gonzalez
    @AsierGonzalez
    What really puzzles me is why this happens in this particular task whereas there are no problems with the other two. @markjschreiber suggested it could be related to permissions but I have not seen anything odd, although I don't really understand how roles are assigned to the resources created by AGC.
    As an alternative explanation, I wonder if this could be related to the Docker image used? I don't really understand how the "Fetch and Run Strategy" mentioned in the Cromwell page of the AGC documentation works, but if it's trying to install the AWS CLI in the container used to run the task, perhaps there could be problems there?
    If someone could explain to me how this works, or point me to the relevant documentation, that would also be helpful.
    mflynn-lanl
    @mflynn-lanl
    Our workflows specify directories for reference data. This works fine when we run Cromwell/WDL on our own infrastructure: we can set up the Cromwell conf file to bind-mount the reference data directories. We specify the paths as a String in the WDL file. From what I understand about amazon-genomics-cli, this doesn't work since it needs to localize all of the input files, including the reference data, by copying them to an S3 tmp bucket. Is there some way we can tell amazon-genomics-cli to localize all of the files in a directory?
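    (What I mean by specifying the path as a String, roughly; the workflow name and path are just an example:)

    version 1.0
    workflow my_pipeline {
      input {
        # a directory path, bind-mounted via the Cromwell conf on our own infrastructure
        String reference_dir = "/refdata/hg38"
      }
    }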
    13 replies
    Daniel Donovan
    @spitfiredd

    I am running into an issue where my containers keep running out of memory, even when I specify an instance with 16 GB of memory.

    name: foo
    schemaVersion: 1
    workflows:
      foo:
        type:
          language: nextflow
          version: dsl2
        sourceURL: workflows/foo
    contexts:
      dev:
        instanceTypes:
          - "r5.large"
        engines:
          - type: nextflow
            engine: nextflow

    When I check the Batch management console, the child processes are spawning with 1 vCPU and 1024 MB of memory.

    Main Process

    2022-11-17T14:00:01.866-08:00    Version: 22.04.3 build 5703
    2022-11-17T14:00:01.866-08:00    Created: 18-05-2022 19:22 UTC
    2022-11-17T14:00:01.866-08:00    System: Linux 4.14.294-220.533.amzn2.x86_64
    2022-11-17T14:00:01.866-08:00    Runtime: Groovy 3.0.10 on OpenJDK 64-Bit Server VM 11.0.16.1+9-LTS
    2022-11-17T14:00:01.866-08:00    Encoding: UTF-8 (ANSI_X3.4-1968)
    2022-11-17T14:00:01.866-08:00    Process: 47@ip-redacted.compute.internal [redacted]
    2022-11-17T14:00:01.866-08:00    CPUs: 2 - Mem: 2 GB (1.5 GB) - Swap: 2 GB (2 GB)
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.780 [main] WARN com.amazonaws.util.Base64 - JAXB is unavailable. Will fallback to SDK implementation which may be less performant.If you are using Java 9+, you will need to include javax.xml.bind:jaxb-api as a dependency.
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.799 [main] DEBUG nextflow.file.FileHelper - Can't check if specified path is NFS (1): redacted
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.799 [main] DEBUG nextflow.Session - Work-dir: redacted
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.799 [main] DEBUG nextflow.Session - Script base path does not exist or is not a directory: /root/.nextflow/assets/redacted/bin
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.871 [main] DEBUG nextflow.executor.ExecutorFactory - Extension executors providers=[AwsBatchExecutor]
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.886 [main] DEBUG nextflow.Session - Observer factory: DefaultObserverFactory
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.954 [main] DEBUG nextflow.cache.CacheFactory - Using Nextflow cache factory: nextflow.cache.DefaultCacheFactory
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:57.975 [main] DEBUG nextflow.util.CustomThreadPool - Creating default thread pool > poolSize: 3; maxThreads: 1000
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:58.123 [main] DEBUG nextflow.Session - Session start invoked
    2022-11-17T14:00:01.866-08:00    Nov-17 21:53:59.049 [main] DEBUG nextflow.script.ScriptRunner - > Launching execution

    Child Process

    2022-11-17T14:00:01.867-08:00    Essential container in task exited - OutOfMemoryError: Container killed due to memory usage
    2022-11-17T14:00:01.867-08:00    Command executed:
    2022-11-17T14:00:01.867-08:00    fastp     -i USDA_soil_C35-5-1_1.fastq.gz     -I USDA_soil_C35-5-1_2.fastq.gz     -o "USDA_soil_C35-5-1.trim.R1.fq.gz"     -O "USDA_soil_C35-5-1.trim.R2.fq.gz"     --length_required 50     -h "USDA_soil_C35-5-1.html"     -w 16
    2022-11-17T14:00:01.867-08:00    Command exit status:
    2022-11-17T14:00:01.867-08:00    137
    2022-11-17T14:00:01.867-08:00    Command output:
    2022-11-17T14:00:01.867-08:00    (empty)
    2022-11-17T14:00:01.867-08:00    Command error:
    2022-11-17T14:00:01.867-08:00      .command.sh: line 2:   188 Killed                  fastp -i USDA_soil_C35-5-1_1.fastq.gz -I USDA_soil_C35-5-1_2.fastq.gz -o "USDA_soil_C35-5-1.trim.R1.fq.gz" -O "USDA_soil_C35-5-1.trim.R2.fq.gz" --length_required 50 -h "USDA_soil_C35-5-1.html" -w 16

    How do I either manually request more vCPUs and memory, or set it up so that it autoscales and doesn't run out of memory?
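    Would per-process directives along these lines be the right way to request resources? (The process name and values are just an example of what I'd try, not something from the docs.)

    process FASTP {
        cpus 8            // hoping this maps to the Batch job's vCPUs
        memory '14 GB'    // and this to the Batch job's memory

        input:
        tuple val(sample), path(reads)

        script:
        """
        fastp -i ${reads[0]} -I ${reads[1]} \
              -o ${sample}.trim.R1.fq.gz -O ${sample}.trim.R2.fq.gz \
              --length_required 50 -w ${task.cpus}
        """
    }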

    3 replies