    Marius
    @mvdbeek:matrix.org
    [m]
    If the Slurm job fails, it certainly killed the metadata script (or it didn't even start running ...) that runs at the end of the job
    pvanheusden
    @pvanheusden:matrix.org
    [m]
    Gotcha, will check
    Morgan Ludwig
    @mjbludwig
    Has anyone run into issues importing history archives from one Galaxy to another via the link they make? I am running into somewhat generic tarfile.ReadError: file could not be opened successfully errors
    bgruening
    @bgruening:matrix.org
    [m]
    @mjbludwig: please make sure your history is public and accessible when you try to access it from outside.
    Morgan Ludwig
    @mjbludwig
    @bgruening:matrix.org Thanks for the response! I have the histories set to published and shared so they should be open? Do I need to allow anonymous access to the server maybe?
    bgruening
    @bgruening:matrix.org
    [m]
    We have this discussion here to make it more clear: galaxyproject/galaxy#14447
    but I assume that for exporting, simple public sharing should be enough
    Martin Wolstencroft
    @martinwolst:matrix.org
    [m]
    It's probably in front of me, but I can't see it - can Pulsar be configured to delete staging files once it has successfully returned the outputs to the galaxy server? Thank you in advance :-)
    Kasun Buddika
    @KasunBuddika7_twitter

    Hi All. We had a problem with our Galaxy instance. When I try to reboot it, it keeps generating this error message. I tried googling but had no luck. Can someone please help us troubleshoot this issue?

    uWSGI running as root, you can use --uid/--gid/--chroot options
    *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
    *** WARNING: you are running uWSGI without its master process manager ***
    your memory page size is 4096 bytes
    detected max file descriptor number: 65535
    building mime-types dictionary from file /etc/mime.types...1060 entry found
    lock engine: pthread robust mutexes
    thunder lock: disabled (you can enable it with --thunder-lock)
    probably another instance of uWSGI is running on the same address (0.0.0.0:80).
    bind(): Address already in use [core/socket.c line 769]

    We are using Galaxy v21.09. Thanks in advance.

    Nuwan Goonasekera
    @nuwan_ag:matrix.org
    [m]
    @KasunBuddika7_twitter: Looks like another process is already running on port 80. Can you try using sudo lsof -n -i :80 | grep LISTEN to check which application it might be? If it's another uWSGI process, try killing it and restarting Galaxy.
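    For reference, a minimal sequence along those lines (the kill step and the restart command are assumptions; adjust to however this Galaxy is normally started):

    # see what is listening on port 80
    sudo lsof -n -i :80 | grep LISTEN
    # if it is a stale uWSGI instance, stop it using the PID shown by lsof
    sudo kill <PID>
    # then start Galaxy again the usual way, e.g.
    sh run.sh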
    Kasun Buddika
    @KasunBuddika7_twitter
    @nuwan_ag:matrix.org : Thanks for the prompt response. It worked!!!
    M Bernt
    @bernt-matthias:matrix.org
    [m]
    Does anyone have a hint where to dig when I get this:
    Executing: galaxyctl start
    celery                           BACKOFF   unknown error making dispatchers for 'celery': EACCES
    celery-beat                      BACKOFF   unknown error making dispatchers for 'celery-beat': EACCES
    gunicorn                         BACKOFF   unknown error making dispatchers for 'gunicorn': EACCES
    I also see "Log files are in /var/log/galaxy/gravity-dev/" ... but the dir is empty
    We are moving our Galaxy to new hardware.
    M Bernt
    @bernt-matthias:matrix.org
    [m]
    OK, solved: it was a permission problem for the log folder ... LOL.
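    For anyone hitting the same EACCES from galaxyctl, a sketch of the kind of fix involved, assuming the directory from the message above and a Galaxy system user named galaxy (adjust both to your setup):

    # make sure the Galaxy user owns and can write the gravity log directory
    sudo mkdir -p /var/log/galaxy/gravity-dev
    sudo chown -R galaxy:galaxy /var/log/galaxy/gravity-dev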
    Siva Chudalayandi
    @Sivanandan
    Hello! My name is Siva, I am a bioinformatician at Iowa State University, and along with others in the HPC group we run an instance of Galaxy on a USDA HPC.
    Users run jobs on our Galaxy from the GUI, and on Slurm we have set it up so that an automated user, galaxy-user, runs those jobs and thus owns those files. This leads to some problems with downloading data etc., due to permissions. Is there a way around this? The sysadmins of the HPC tell me that mapping the email address to the user names and freely letting a user download data etc. could lead to security vulnerabilities. Do you have any suggestions to make this easier for the individual user?
    Nolan Woods
    @innovate-invent
    @Sivanandan You might want to set up Galaxy to authenticate users using your LDAP server
    although, how are you accessing files outside of Galaxy?
    er, I mean, why are users trying to access files owned by galaxy-user?
    Siva Chudalayandi
    @Sivanandan
    Let's say a user is running a job on Galaxy. However, those users also have an account on the HPC. The outputs of that job are accessible to the user via the GUI (they could potentially copy the link and wget it to their folder on the HPC, but it isn't very intuitive). Some users feel that if they had access to those outputs directly on the command line, they would be able to copy them to their folder of choice. However, like I said earlier, those files are, by default in our case, owned by an automated user called galaxy-user, so they don't have access to those files on the command line. I hope this clarifies my question.
    Nolan Woods
    @innovate-invent
    Galaxy generally maintains its datasets with cryptic file names. How are users locating the files within Galaxy's datastore?
    Siva Chudalayandi
    @Sivanandan
    That's true. The files are located in a folder called datasets, with each run getting a new number. If the user has the Slurm job ID, they can locate the relevant folder.
    Nolan Woods
    @innovate-invent
    Galaxy needs to own those files in order to have access to them; I am not sure how you would get around that
    You could add everyone to the galaxy group, but that means that anyone can access all of the files
    You could potentially create a tool that exports the data to a folder owned by the user, kind of the reverse of the ftp upload tool
    Nolan Woods
    @innovate-invent
    this would mean giving the tool elevated privileges though
    Siva Chudalayandi
    @Sivanandan
    Ya! That's right!
    Your last suggestion is clever, I will run it by our sys admins.
    martenson
    @martenson:matrix.org
    [m]
    @Sivanandan: Various Galaxies have at some point implemented some clone of the "export to cluster" tool, but be aware of the implications of allowing a tool to write to a remote file system or similar.
    The more restrictively the tool behaves, the better. Also consider locking this tool to only a subset of users -- not everyone using the Galaxy instance.
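    To make the idea concrete, the core of such an export tool usually boils down to one copy into a user-owned, restricted directory; everything below (variable names, paths, the owning user) is hypothetical, and this copy is exactly the step that needs the elevated privileges Nolan mentions:

    # copy the selected dataset into the requesting user's export area,
    # owned by that user and readable only by them and their group
    install -o "$cluster_user" -m 0640 "$dataset_path" "$user_export_dir/"

    The destination should be limited server-side to a fixed set of directories, and the tool itself restricted to trusted users, as suggested above.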
    Linelle
    @abueg

    hello! :wave: we are trying to set up a Galaxy instance on our SLURM cluster on RHEL 7.8, following the general workflow of the GAT training, avoiding root permissions as much as possible
    I have gotten up to the systemd part of the tutorial, and have added the following to my group_vars/galaxyservers.yml:

    # systemd
    galaxy_manage_systemd: yes
    galaxy_systemd_root: false

    I have added galaxy_systemd_root: false based on the guidance in the systemd section of the ansible-galaxy readme. When I run the playbook, though, I get this error: fatal: [vglgalaxy.rockefeller.edu]: FAILED! => {"changed": false, "checksum": "c8a6b5aa307cef7953a5d3cddedb269a329a9550", "msg": "Destination /etc/systemd/system not writable"}
    the full output and contents of my group_vars/galaxyservers.yml is here: https://gist.github.com/abueg/0ce8c93e4cfe904bc261a8da85761e2e
    I would appreciate any feedback on how I might address this error, thank you for your time!
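    Whatever galaxy_systemd_root controls, the role is still trying to write unit files into /etc/systemd/system, and only root can write there, so that part of the run needs privilege escalation. A minimal sketch, assuming a playbook laid out roughly like the GAT one (host group and role name as in the training material):

    # playbook: escalate so the role can write systemd unit files
    - hosts: galaxyservers
      become: true
      roles:
        - galaxyproject.galaxy

    Run it with a user allowed to sudo (e.g. with --ask-become-pass). If root really is unavailable, setting galaxy_manage_systemd: no and starting Galaxy another way avoids the write to /etc/systemd/system entirely.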

    Fred
    @FredericBGA
    Hi, I'm trying to test my Galaxy 22.01 installed using Ansible.
    I've installed NCBI BLAST+ using the ToolShed.
    I've installed planemo and I'm trying to launch the tests.
    It does not work.
    I see in the logs: /api/tools/ncbi_blastn_wrapper/test_data_download?filename=rhodopsin_nucs.fasta&tool_version=2.10.1+galaxy2 HTTP/1.1" 404
    What am I missing?
    On the interface, using the same API URL, I see err_msg "Specified test data path not found."
    A lot of things have changed since 20.05, I guess; I need a little help here, I'm afraid.
    Fred
    @FredericBGA
    so the path is not good (bad access rights? does it not exist at all?) https://docs.galaxyproject.org/en/master/_modules/galaxy/webapps/galaxy/api/tools.html
    Marius
    @mvdbeek:matrix.org
    [m]
    Those tools are pretty old and do not follow the standard location of having test-data next to the tool
    3 replies
    yes, but the standard location is database/shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/0e3cf9594bb7/ncbi_blast_plus/tools/ncbi_blast_plus/test-data
    Fred
    @FredericBGA
    can I try a symbolic link (to validate my install)?
    Marius
    @mvdbeek:matrix.org
    [m]
    yes
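    For the record, a sketch of what that symlink could look like, assuming the repository's real test-data directory sits at the repository root rather than next to the tool XML (check where it actually lives first; the path and revision hash are the ones quoted earlier in the thread):

    # from the directory where Galaxy expects test-data next to the tool XML
    cd database/shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/0e3cf9594bb7/ncbi_blast_plus/tools/ncbi_blast_plus
    # link the repository-level test-data into the expected location
    ln -s ../../test-data test-data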
    Fred
    @FredericBGA
    ^_^