    Bram van Dijk
    @bramvandijk88_twitter

    Ah that's actually great. I couldn't figure this out sorry.

    Yeah, this is great for a free service. I think our institute would be willing to pay for dedicated servers though. You should consider this.

    Björn Grüning
    @bgruening
    @bramvandijk88_twitter we do. Please contact me on this issue.
    For the time being, the only spades jobs that I see queued have been actually resubmitted because they crashed.
    Helena Rasche
    @hexylena
    I guess in general it has been "institutes adding capacity for everyone" and not just specific groups/institutes, but maybe EU's policies can be changed.
    Björn Grüning
    @bgruening
    Crash, most likely due to out of memory.
    Bram van Dijk
    @bramvandijk88_twitter

    For the time being, the only spades jobs that I see queued have been actually resubmitted because they crashed.

    Yeah, I cancelled the ones that weren't crashing about an hour ago

    Björn Grüning
    @bgruening

    I guess in general it has been "institutes adding capacity for everyone"

    That is still true.

    10 Spades jobs are currently running, each requesting 400 GB of memory.
    Bram van Dijk
    @bramvandijk88_twitter
    Both could be the case. I'm sure we can make all ships rise with the same tide.
    Oh really?
    I thought I cancelled them
    Also: it should take no more than 2 GB :')
    It's really a small number of reads
    Where can I find these jobs though? Or do you mean there not my jobs?
    they're (sorry, not native :')))
    Björn Grüning
    @bgruening
    Sorry, I meant overall spades jobs
    @bramvandijk88_twitter sorry that we are slow in responding, not a good week here.
    Bram van Dijk
    @bramvandijk88_twitter
    No worries, I totally understand.
    I enjoy the tool, so please know it's deeply appreciated
    (IT being your work XD)
    Arthur Eschenlauer
    @eschen42

    Regarding my Query Tabular issue, I found a workaround: when I rerun it, if I expand the Table Options for each input table when re-invoking the tool, then the options are correctly selected. If I don't, they seem to be lost more often than not. See e.g.
    https://usegalaxy.eu/u/eschen42/h/forgetfulquerytabular
    where dataset 7 was created from datasets 1-6. Dataset 8 failed when the "rerun" button was pressed followed immediately by the Execute button. Using the "circle-i" button, inspection shows that Use first line as column names is set to false for all tables for Dataset 8 whereas it was set to true for all tables for Dataset 7.

    Right now I am thwarted in my effort to set up another Galaxy instance to reproduce it there.

    Arthur Eschenlauer
    @eschen42
    I just pulled 20.05 using https://galaxyproject.org/admin/get-galaxy/#cloning-new and don't seem to see the issue yet.
    Nicola Soranzo
    @nsoranzo
    UseGalaxy.eu is using the (still unreleased) 20.09
    Marius van den Beek
    @mvdbeek
    @eschen42 galaxyproject/galaxy#10584 should fix the problem
    Arthur Eschenlauer
    @eschen42
    I upgraded to release_20.09 as described on https://galaxyproject.org/admin/get-galaxy/#updating-existing and reproduced the problem. Thanks @nsoranzo and @mvdbeek . I will try the PR and let you know how it goes.
    Arthur Eschenlauer
    @eschen42
    @mvdbeek I am delighted to report that after update-existing with mvdbeek:do_render_sections the tool does not fail when I rerun it!
    Marius van den Beek
    @mvdbeek
    cool, thanks for confirming!
    and sorry for breaking it in the first place :/
    Arthur Eschenlauer
    @eschen42
    You are welcome, and I know the feeling...
    Bram van Dijk
    @bramvandijk88_twitter

    I've got a fasta file of 2 lines. A header, and a 3 billion base pair long genome. BWA mem / index somehow trips over this long genome, so I'm thinking it may be resolved if I split it into chunks. I wrote a bash oneliner to do that:

    i=0; tail -n +2 myfile.fa | while IFS= read -r -n 300000 chars; do i=$((i+1)); printf ">Chunk_${i}\n%s\n" "$chars"; done

    As far as I can understand, the faSplit tool does something similar, but when I feed it my file it crashes and states "this list is empty". Help appreciated.
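A sketch of the same chunking with awk, in case faSplit keeps refusing the file. It assumes, as in the message above, that myfile.fa is one header line followed by a single long sequence line, and reuses the chunk size and naming from the bash loop:

```shell
# Split one long sequence line into >Chunk_N records of at most
# 300000 bases each; the original header line is skipped.
awk -v size=300000 '
  /^>/ { next }
  {
    n = 0
    for (i = 1; i <= length($0); i += size) {
      n++
      printf(">Chunk_%d\n%s\n", n, substr($0, i, size))
    }
  }' myfile.fa > chunks.fa
```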

    Björn Grüning
    @bgruening
    @bramvandijk88_twitter try this tool
    Bram van Dijk
    @bramvandijk88_twitter
    Not sure I understand what it does
    Could you explain?
    Björn Grüning
    @bgruening
    it reformats the FASTA file, maybe that is already the problem
    if this is not the problem you can then easily split the resulting text file into smaller chunks, simply by doing some text operations
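Outside Galaxy, the reformat-then-split idea described here can be sketched with standard text tools (long_genome.fasta and wrapped.fasta are placeholder names; fold wraps the single long line at a fixed width, which is often enough for downstream tools to cope):

```shell
# Keep only the sequence, wrap it to 80 columns, and put a header
# back on top: the result is an ordinary multi-line FASTA file.
printf '>All_Contigs\n' > wrapped.fasta
grep -v '^>' long_genome.fasta | fold -w 80 >> wrapped.fasta
```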
    Bram van Dijk
    @bramvandijk88_twitter
    I've tried doing that with the text operations, but it keeps crashing on me
    The reformat also crashes ^^
    Björn Grüning
    @bgruening
    too long :)
    how do you create those files?
    Bram van Dijk
    @bramvandijk88_twitter
    Yeah, it's soo long. I'll fix how I created them :')
    too*
    Björn Grüning
    @bgruening
    how do you do this?
    Bram van Dijk
    @bramvandijk88_twitter
    It's a bunch of concatenated files, so it's totally preventable
    No worries
    Björn Grüning
    @bgruening
    you can concat in Galaxy if this helps
    Bram van Dijk
    @bramvandijk88_twitter
    I did, but that ends up with the same problem
    Björn Grüning
    @bgruening
    can you share this with me?
    Bram van Dijk
    @bramvandijk88_twitter
    grep -v "^>" seperate_contigs.fasta | awk 'BEGIN { ORS=""; print ">All_Contigs\n" } { print }' > contigs_all_one_sequence.fasta
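One possible tweak, sketched below: with ORS="" the one-liner above writes a file that ends without a trailing newline, which some tools reject. This variant produces the same single-sequence FASTA in one awk pass (no grep needed) but terminates the last line; file names are taken from the message above:

```shell
# Concatenate all sequence lines under one header, and end the
# file with a newline (the original one-liner leaves it off).
awk 'BEGIN { print ">All_Contigs" }
     !/^>/ { printf "%s", $0 }
     END   { print "" }' seperate_contigs.fasta > contigs_all_one_sequence.fasta
```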