    Peter Cock
    @peterjc
    Thanks both - we're not starting immediately (currently in a grant silly season), but that's very reassuring.
    Peter Cock
    @peterjc
    There is room for improvement, but galaxyproject/galaxy-hub#732 makes an initial stab at updating the old Galaxy Admins pages and points people here and/or the working group.
    Nikolay Vazov
    @vazovn
    @natefoo:matrix.org Hi, could anybody authorise the following pull request: galaxyproject/galaxy#11917? Thank you.
    Nicola Soranzo
    @nsoranzo
    @vazovn Replied in the issue.
    Nikolay Vazov
    @vazovn
    Thank you
    M Bernt
    @bernt-matthias
    Does anyone have experience with conda 4.8.x? I would need to upgrade from 4.6.14.
    bgruening
    @bgruening:matrix.org
    we are running 4.8.3 on EU
    M Bernt
    @bernt-matthias
    Thanks @bgruening:matrix.org .. just updated
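For the record, a pre-flight check before an in-place conda upgrade can be sketched like this; the version numbers come from the thread, but the commented-out upgrade command is only a guess at the right invocation for your prefix:

```shell
# Sketch: decide whether an upgrade is needed before touching the install.
# Relies on GNU sort's -V (version sort).
installed=4.6.14
target=4.8.3
oldest=$(printf '%s\n%s\n' "$installed" "$target" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$target" ]; then
    echo "upgrade needed: $installed -> $target"
    # conda install -n base "conda=$target"   # run inside your Galaxy conda prefix
fi
```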
    Maiken Pedersen
    @maikenp
    Hi there. Joining for the first time. I need some help with a "galaxy_ext module not found" error, probably related to the Galaxy installation not being on the cluster's shared filesystem. I described the issue here: https://help.galaxyproject.org/t/no-module-named-galaxy-ext/5906
    Nikolay Vazov
    @vazovn
    Hi, trying to install the usegalaxy_eu.gie_proxy role (0.0.2) on RHEL8. It fails with the following error:
    2073 verbose stack Error: sqlite3@4.0.4 install: `node-pre-gyp install --fallback-to-build`
    2073 verbose stack spawn ENOENT
    2073 verbose stack     at ChildProcess.<anonymous> (/srv/galaxy/gie-proxy/venv/lib/node_modules/npm/node_modules/npm-lifecycle/lib/spawn.js:48:18)
    2073 verbose stack     at ChildProcess.emit (events.js:182:13)
    2073 verbose stack     at maybeClose (internal/child_process.js:962:16)
    2073 verbose stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:251:5)
    2074 verbose pkgid sqlite3@4.0.4
    2075 verbose cwd /srv/galaxy/gie-proxy/proxy
    2076 verbose Linux 4.18.0-240.22.1.el8_3.x86_64
    2077 verbose argv "/srv/galaxy/gie-proxy/venv/bin/node" "/srv/galaxy/gie-proxy/venv/bin/npm" "install"
    2078 verbose node v10.13.0
    2079 verbose npm  v6.4.1
    2080 error file sh
    2081 error code ELIFECYCLE
    2082 error errno ENOENT
    2083 error syscall spawn
    2084 error sqlite3@4.0.4 install: `node-pre-gyp install --fallback-to-build`
    2084 error spawn ENOENT
    2085 error Failed at the sqlite3@4.0.4 install script.
    2085 error This is probably not a problem with npm. There is likely additional logging output above.
    2086 verbose exit [ 1, true ]
    Nikolay Vazov
    @vazovn
    I managed to install sqlite3@5.0.2 manually (npm install sqlite3). Is this a game of versions? What about nodeenv being replaced by a package?
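For what it's worth, `spawn ENOENT` from npm means a child binary it tried to exec was not on PATH; with node-pre-gyp falling back to a source build, that is usually a build tool rather than sqlite3 itself. A quick sanity check (the candidate list is my assumption for a RHEL8 build environment):

```shell
# spawn ENOENT: npm tried to exec a binary that is not on PATH.
missing=""
for cmd in sh make g++ python3 node; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
done
if [ -z "$missing" ]; then
    echo "build toolchain looks complete"
else
    echo "missing:$missing"
fi
```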
    wm75 (Wolfgang Maier)
    @wm75:matrix.org
    Any idea why https://usegalaxy.org/histories/list is really slow compared to the same thing on .eu?
    slugger70
    @slugger70:matrix.org
    Ours was really slow for ages, then it got faster when I moved across the country. DB artifacts?
    Lcornet
    @Lcornet

    Hello all,

    I have a problem with singularity while installing galaxy with ansible.

    TASK [cyverse-ansible.singularity : download the singularity release] **************************************************************************************
    fatal: [galaxy.inbios.uliege.be]: FAILED! => {"changed": false, "dest": "/tmp/singularity-3.7.0.tar.gz", "elapsed": 0, "msg": "Request failed", "response": "HTTP Error 404: Not Found", "status_code": 404, "url": "https://github.com/sylabs/singularity/releases/download/v3.7.0/singularity-3.7.0.tar.gz"}

    The URL for Singularity has changed; how can I fix that?

    Lcornet
    @Lcornet
    Should I use a more recent version of `- src: cyverse-ansible.singularity`?
    I used: `version: 048c4f178077d05c1e67ae8d9893809aac9ab3b7`
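One possible cause: the release assets moved around when the project split into Singularity CE and Apptainer, so the role's generated URL can 404. A sketch for building and checking a candidate URL by hand before overriding the role's variable (the hpcng/singularity location is an assumption to verify, not something confirmed in the thread):

```shell
# Build a candidate tarball URL and print it for manual checking.
version=3.7.0
url="https://github.com/hpcng/singularity/releases/download/v${version}/singularity-${version}.tar.gz"
echo "$url"
# curl -fsIL "$url" >/dev/null && echo "URL resolves"   # uncomment to probe it
```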
    Giuseppe Profiti
    @profgiuseppe
    Hello, does anyone have experience using the Ansible role to install Galaxy on a shared NFS filesystem, so it can be run in a cluster? I would rather not double-check every "become" to see what should be done by the (shared) galaxy user and what needs to be performed by root (hence the issues with NFS). Thanks!
    slugger70
    @slugger70:matrix.org
    Hi profgiuseppe (Giuseppe Profiti), we used to run Galaxy off an NFS share, but it was a bit slow once we got a few users. Instead we split the system up: user data, job working dirs etc. live on NFS, while the Galaxy web app is on a local disk with a synced copy shared with the cluster workers. Hope this makes sense.
    Greg Von Kuster
    @gregvonkuster
    image.png
    I'm running galaxy version 20.09, which does not support importing a list:paired collection from a history into a data library. I haven't seen any mentions of this in the 21.01 release notes. Is it still the case that I should flatten the list:paired collection into a list to import into the data library? And then when exporting a selection of them back to a history for analysis, re-build the list:paired collection?
    Giuseppe Profiti
    @profgiuseppe
    Hi @slugger70:matrix.org thanks for the info. So if I understand correctly, I should install galaxy on the frontend machine, mount the NFS share so the data directories are the shared ones and then periodically sync the environments/tools etc from the front-end machine to the cluster workers. Is this what you did?
    slugger70
    @slugger70:matrix.org
    Sort of. We ran lsync from the galaxy app on the head node to an NFS share of the galaxy app that the workers all see
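The synced-copy idea can be sketched with plain rsync (lsyncd essentially runs the same rsync on file events). Demonstrated here on throwaway dirs; in production the source would be the Galaxy app dir on the head node and the destination the NFS export the workers mount (both paths are hypothetical):

```shell
# One-way sync of the app dir, excluding mutable state under database/.
src=$(mktemp -d)/app/     # stand-in for e.g. /srv/galaxy/server/
dest=$(mktemp -d)/copy/   # stand-in for the NFS-exported copy
mkdir -p "${src}database"
touch "${src}galaxy.yml"
rsync -a --delete --exclude 'database/' "$src" "$dest"
ls "$dest"
```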
    Lcornet
    @Lcornet
    How can I force Ansible to reinstall everything without skipping tasks?
    Jennifer Hillman-Jackson
    @jennaj
    Hi -- this looks like a possible permissions problem, but I'm not sure. Does anyone recognize it? It is about sending jobs to a cluster and having the env info accessible to jobs: https://help.galaxyproject.org/t/no-module-named-galaxy-ext/5906
    Maiken Pedersen
    @maikenp
    Hi, when using Galaxy for things other than biology-related computation, is there a way to remove the dataset and genome options related to biology? E.g. the "Upload File from your computer" dialog has a Type drop-down menu with a lot of file types that are not relevant for the non-biology case, plus a Genome drop-down menu.
    M Bernt
    @bernt-matthias
    @maikenp I guess you can edit config/datatypes_conf.xml.sample .. in particular the display_in_upload attribute
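For reference, the attribute in question looks like this in datatypes_conf.xml (the entry shown is illustrative; only datatypes carrying display_in_upload="true" appear in the upload dialog's Type drop-down, so dropping the attribute hides them there):

```xml
<!-- illustrative entry from a datatypes_conf.xml registration -->
<datatype extension="txt" type="galaxy.datatypes.data:Text" display_in_upload="true"/>
```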
    Maiken Pedersen
    @maikenp
    Hi there, I have created a support-request in https://help.galaxyproject.org/t/avoid-one-history-item-per-output-for-tool-execution-with-many-output-files/5938 related to history and getting 1 history box/item per output file. This clutters the history if there are like hundreds of output files. I assume there is a way to avoid this? See link for more details :)
    mvdbeek
    @mvdbeek:matrix.org
    Hey @maikenp, are those datasets hidden?
    If not, can you post the full outputs section of the wrapper ?
    The section you've posted there should not produce unhidden datasets
    mvdbeek
    @mvdbeek:matrix.org
    might be a copy-paste mistake, but as it is written in that post it's not correct
    I'd suggest adding a test section to the tool and testing it with planemo
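On the original clutter question, a discovered output collection is the usual way to get many files into a single history item; a minimal outputs sketch (the collection name, label, and directory are illustrative, not taken from the post):

```xml
<outputs>
    <!-- one collection item in the history instead of hundreds of datasets -->
    <collection name="split_output" type="list" label="${tool.name} on ${on_string}">
        <discover_datasets pattern="__name_and_ext__" directory="outdir"/>
    </collection>
</outputs>
```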
    deoliveiralf
    @deoliveiralf
    Hi there! I'm trying to install Galaxy on a local server via Docker, following these links: https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04 (to install Docker) and https://hub.docker.com/r/bgruening/galaxy-stable/ (to install Galaxy). However, the installation doesn't work well: I can't open localhost (it returns 403 Forbidden), and when I run 'docker exec <container-name> supervisorctl status', some parts seem not to be working, as in the image below. Has anyone installed it this way or had a problem like this? I chose this option because the IT guy updated our server, where Galaxy had been installed the simple way, and after the Ubuntu upgrade Galaxy was gone! So I figured Docker could be a safer way to preserve Galaxy on our server. Thanks for the help.
    image.png
    IjonTich
    @IjonTich
    What is your favourite reference book, guide or website for ansible?
    martenson
    @martenson:matrix.org
    @IjonTich Galaxy training materials, I may be biased though 🤣
    gmauro
    @gmauro:matrix.org

    I'm having a hard time with the mothur tools. The binary crashes badly, consuming all the memory and starting a huge number of processes.

    top - 16:34:53 up 22:10,  1 user,  load average: 19391.86, 19368.43, 17192.83
    Tasks: 25727 total, 157 running, 20337 sleeping,   0 stopped, 5233 zombie
    %Cpu(s):  2.1 us,  7.0 sy,  0.0 ni, 90.8 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
    MiB Mem : 428015.6 total,  35536.1 free,  13283.9 used, 379195.5 buff/cache
    MiB Swap:      0.0 total,      0.0 free,      0.0 used. 411225.3 avail Mem
    vgcnbwc-worker-c125m425-9515:~$ ps aux | grep mothur|wc -l
    24621

    Anyone had a similar experience?

    mvdbeek
    @mvdbeek:matrix.org
    I meant to comment on this, how come condor doesn't kill the job?
    gmauro
    @gmauro:matrix.org
    The cgroups limit is able to kill the process, but in the meantime mothur creates a lot of trouble for the others
    ...
    [Fri May 14 17:04:30 2021] SLUB: Unable to allocate memory on node -1, gfp=0x6000c0(GFP_KERNEL)
    [Fri May 14 17:04:30 2021]   cache: nfs_inode_cache(23785:condor_var_lib_condor_execute_slot1_4@vgcnbwc-worker-c125m425-9515.novalocal), object size: 1136, buffer size: 1144, default order: 3, min order: 0
    [Fri May 14 17:04:30 2021]   node 0: slabs: 83, objs: 849, free: 0
    [Fri May 14 17:04:37 2021] SLUB: Unable to allocate memory on node -1, gfp=0x6000c0(GFP_KERNEL)
    [Fri May 14 17:04:37 2021]   cache: nfs_inode_cache(23785:condor_var_lib_condor_execute_slot1_4@vgcnbwc-worker-c125m425-9515.novalocal), object size: 1136, buffer size: 1144, default order: 3, min order: 0
    [Fri May 14 17:04:37 2021]   node 0: slabs: 83, objs: 849, free: 0
    [Fri May 14 17:04:37 2021] SLUB: Unable to allocate memory on node -1, gfp=0x6000c0(GFP_KERNEL)
    [Fri May 14 17:04:37 2021]   cache: nfs_inode_cache(23785:condor_var_lib_condor_execute_slot1_4@vgcnbwc-worker-c125m425-9515.novalocal), object size: 1136, buffer size: 1144, default order: 3, min order: 0
    [Fri May 14 17:04:37 2021]   node 0: slabs: 83, objs: 849, free: 0
    and a lot of this in the mothur log
    ...
    Using 8 processors.
    153     1
    
    Using 8 processors.
    [ERROR]: std::bad_allocRAM used: 0.00512314Gigabytes . Total Ram: 417.984Gigabytes.
    
    has occurred in the DistanceCommand class function driver. This error indicates your computer is running out of memory. This is most commonly caused by trying to process a dataset too large, using multiple processors, or a file format issue. If you are running our 32bit version, your memory usage is limited to 4G. If you have more than 4G of RAM and are running a 64bit OS, using our 64bit version may resolve your issue. If you are using multiple processors, try running the command with processors=1, the more processors you use the more memory is required. Also, you may be able to reduce the size of your dataset by using the commands outlined in the Schloss SOP, http://www.mothur.org/wiki/Schloss_SOP. If you are unable to resolve the issue, please contact Pat Schloss at mothur.bugs@gmail.com, and be sure to include the mothur.logFile with your inquiry.
    Using 8 processors.
    [ERROR]: std::bad_allocRAM used: 0.00512314Gigabytes . Total Ram: 417.984Gigabytes.
    
    has occurred in the DistanceCommand class function driver. This error indicates your computer is running out of memory. This is most commonly caused by trying to process a dataset too large, using multiple processors, or a file format issue. If you are running our 32bit version, your memory usage is limited to 4G. If you have more than 4G of RAM and are running a 64bit OS, using our 64bit version may resolve your issue. If you are using multiple processors, try running the command with processors=1, the more processors you use the more memory is required. Also, you may be able to reduce the size of your dataset by using the commands outlined in the Schloss SOP, http://www.mothur.org/wiki/Schloss_SOP. If you are unable to resolve the issue, please contact Pat Schloss at mothur.bugs@gmail.com, and be sure to include the mothur.logFile with your inquiry.
    Using 8 processors.
    168     1
    
    Using 8 processors.
    38      1
    
    Using 8 processors.
    59      1
    ...
    mvdbeek
    @mvdbeek:matrix.org
    I have never really used cgroups, is it expected that a program can go beyond the memory limit?
    gmauro
    @gmauro:matrix.org
    It shouldn't
    Nolan Woods
    @innovate-invent
    Is mothur written in Java? I know earlier JVMs failed to respect cgroups and tried to over-allocate resources, even though the OS prevents them from actually doing it
    M Bernt
    @bernt-matthias
    C++
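A quick way to see what limit, if any, condor actually applied to a running job, assuming the cgroup v1 memory hierarchy under /sys/fs/cgroup/memory (paths differ under cgroup v2):

```shell
# Print the memory-cgroup path of a process and its byte limit (cgroup v1).
pid=$$   # in practice: the PID of the runaway mothur process
cgpath=$(awk -F: '$2 ~ /(^|,)memory(,|$)/ {print $3; exit}' /proc/$pid/cgroup)
echo "memory cgroup: ${cgpath:-<none, probably cgroup v2>}"
if [ -n "$cgpath" ]; then
    cat "/sys/fs/cgroup/memory${cgpath}/memory.limit_in_bytes" 2>/dev/null || true
fi
```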
    Lcornet
    @Lcornet

    I have an error with the cvmfs apt key:

    Does anyone know how to solve it?

    TASK [galaxyproject.cvmfs : Install CernVM apt key] ****************************************************************************************************************************
    task path: /home/galaxyluc/galaxy/roles/galaxyproject.cvmfs/tasks/init_debian.yml:9
    <galaxy.inbios.uliege.be> ESTABLISH LOCAL CONNECTION FOR USER: root
    <galaxy.inbios.uliege.be> EXEC /bin/sh -c 'echo ~root && sleep 0'
    <galaxy.inbios.uliege.be> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920 `" && echo ansible-tmp-1621340712.8928573-196280123386920="` echo /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920 `" ) && sleep 0'
    Using module file /usr/lib/python3/dist-packages/ansible/modules/packaging/os/apt_key.py
    <galaxy.inbios.uliege.be> PUT /root/.ansible/tmp/ansible-local-481402if90ck3o/tmpssnsd67q TO /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920/AnsiballZ_apt_key.py
    <galaxy.inbios.uliege.be> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920/ /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920/AnsiballZ_apt_key.py && sleep 0'
    <galaxy.inbios.uliege.be> EXEC /bin/sh -c '/usr/bin/python3 /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920/AnsiballZ_apt_key.py && sleep 0'
    <galaxy.inbios.uliege.be> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1621340712.8928573-196280123386920/ > /dev/null 2>&1 && sleep 0'
    The full traceback is:
      File "/tmp/ansible_apt_key_payload_z9pwgkho/ansible_apt_key_payload.zip/ansible/modules/packaging/os/apt_key.py", line 214, in download_key
      File "/usr/lib/python3.8/http/client.py", line 471, in read
        s = self._safe_read(self.length)
      File "/usr/lib/python3.8/http/client.py", line 612, in _safe_read
        data = self.fp.read(amt)
      File "/usr/lib/python3.8/socket.py", line 669, in readinto
        return self._sock.recv_into(b)
      File "/usr/lib/python3.8/ssl.py", line 1241, in recv_into
        return self.read(nbytes, buffer)
      File "/usr/lib/python3.8/ssl.py", line 1099, in read
        return self._sslobj.read(len, buffer)
    fatal: [galaxy.inbios.uliege.be]: FAILED! => {
        "changed": false,
        "invocation": {
            "module_args": {
                "data": null,
                "file": null,
                "id": null,
                "key": null,
                "keyring": null,
                "keyserver": null,
                "state": "present",
                "url": "https://cvmrepo.web.cern.ch/cvmrepo/apt/cernvm.gpg",
                "validate_certs": true
            }
        },
        "msg": "error getting key id from url: https://cvmrepo.web.cern.ch/cvmrepo/apt/cernvm.gpg",
        "traceback": "Traceback (most recent call last):\n  File \"/tmp/ansible_apt_key_payload_z9pwgkho/ansible_apt_key_payload.zip/ansible/modules/packaging/os/apt_key.py\", line 214, in download_key\n  File \"/usr/lib/python3.8/http/client.py\", line 471, in read\n    s = self._safe_read(self.length)\n  File \"/usr/lib/python3.8/http/client.py\", line 612, in _safe_read\n    data = self.fp.read(amt)\n  File \"/usr/lib/python3.8/socket.py\", line 669, in readinto\n    return self._sock.recv_into(b)\n  File \"/usr/lib/python3.8/ssl.py\", line 1241, in recv_into\n    return self.read(nbytes, buffer)\n  File \"/usr/lib/python3.8/ssl.py\", line 1099, in read\n    return self._sslobj.read(len, buffer)\nsocket.timeout: The read operation timed out\n"
    }
    Helena Rasche
    @hexylena:matrix.org
    socket.timeout: The read operation timed out
    looks like a connection issue
    Lcornet
    @Lcornet
    Ok, thanks, I will try again later then
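If it keeps timing out, fetching the key by hand with a couple of retries helps tell a transient blip from a proxy/firewall issue. The URL is the one from the error message; the retry helper is mine, not part of any tool:

```shell
# Retry a command up to N times before giving up; useful for flaky downloads.
retry() {
    n=$1; shift
    i=1
    while [ "$i" -le "$n" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}
retry 3 curl -fsSL -m 10 -o /tmp/cernvm.gpg \
    https://cvmrepo.web.cern.ch/cvmrepo/apt/cernvm.gpg \
    && echo "key fetched" \
    || echo "still timing out; check connectivity or a proxy"
```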
    Nikolay Vazov
    @vazovn
    Hi, if my Galaxy instance is running on a socket at 127.0.0.1:8080, should I use the uwsgi_* directives when configuring Interactive Tools, or can I use the proxy_* ones as defined in the tutorial for the nginx proxy config file?
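For reference, my understanding (worth verifying against the current tutorial): uwsgi_* directives speak the uWSGI binary protocol, so they only apply if Galaxy exposes a uwsgi-protocol socket; for a plain-HTTP listener on 127.0.0.1:8080 the proxy_* family is the right one. A minimal sketch, header lines illustrative:

```nginx
# plain-HTTP upstream -> proxy_pass; uwsgi_pass is only for uwsgi-protocol sockets
location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```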