sveinugu
@sveinugu
Does anyone have any info on the current and projected size of the galaxyproject CVMFS reference data servers, over the lifetime of a new disk rack for a stratum 1 server? @natefoo? @erasche? @bgruening?
Björn Grüning
@bgruening
mh, if I count correctly it's 16TB already?
Oo
it gets substantially bigger every 6 months -> because of the new Bioconductor release
so maybe 3TB more every 6 months?
sveinugu
@sveinugu
@bgruening Thanks! Exactly what I needed. I suppose one should add a bit on top of that to allow for unplanned extensions?
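A rough worked estimate from those numbers, assuming a five-year rack lifetime and ~30% headroom (both are assumptions, not figures from the thread):

    current size:                        16 TB
    growth: 3 TB per 6 months         =   6 TB per year
    after 5 years: 16 TB + 5 × 6 TB   =  46 TB
    with ~30% headroom:                 ~60 TB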
Oleksandr Moskalenko
@moskalenko

In our instance, the featureCounts tool seems to expect its built-in references to be inside the dependency environment (container or conda env):

    ## Export fc path for its built-in annotation
    export FC_PATH=\$(command -v featureCounts | sed 's@/bin/featureCounts$@@') &&

and the annotation directory is indeed present in the environment, e.g. database/dependencies/_conda/envs/mulled-v1-39786c1966303ef2ef27f1708fd92087f19ec27505a1cad54a74932345e20ab0/annotation/hg38_RefSeq_exon.txt . However, the tool interface tells the user that no references are available. https://usegalaxy.org has the same issue. Where should I post a bug report? I'm unsure which GitHub repo or support forum would be appropriate.
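A hedged way to check what the wrapper will see, reusing its own sed logic (the resolved path will differ per install; note the wrapper escapes $ for Galaxy's templating, a plain shell does not):

    # mirror the wrapper's FC_PATH logic, then list the bundled annotations
    FC_PATH=$(command -v featureCounts | sed 's@/bin/featureCounts$@@')
    ls "$FC_PATH/annotation/"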

Björn Grüning
@bgruening
@sveinugu probably
github-ic
@github-ic

Hi - I'm running a custom Galaxy v21.01 instance with 7 job handlers. I pasted my job config file below. I noticed that if my job lands on handler0 or handler3 it is placed in the queue, but if it goes to any other handler it runs. I know that another user has jobs running on handler0 and handler3. Is there a way to skip handlers that already have two jobs running and move to an empty handler automatically? Right now I kill my paused job and run it again until it lands on a different handler and runs.

<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
    </plugins>
    <handlers assign_with="db-skip-locked" max_grab="2" />
    <destinations>
        <destination id="local" runner="local"/>
    </destinations>
    <limits>
        <limit type="registered_user_concurrent_jobs">4</limit>
        <limit type="anonymous_user_concurrent_jobs">1</limit>
    </limits>
</job_conf>

Nate Coraor
@natefoo:matrix.org
[m]
No, the handlers aren't designed to be used as a sort of "queue" like this. Even if you are running everything on one server, I would recommend installing Slurm so that you can queue jobs that way. You get some good additional benefits as well, like being able to restart Galaxy while jobs are running.
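For reference, a hedged sketch of a job_conf pointed at Slurm via the DRMAA-based runner - the library path and native specification are illustrative, and the training material linked below is the authoritative walkthrough:

    <job_conf>
        <plugins>
            <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner">
                <!-- illustrative path; point this at your slurm-drmaa build -->
                <param id="drmaa_library_path">/usr/lib/slurm-drmaa/lib/libdrmaa.so.1</param>
            </plugin>
        </plugins>
        <handlers assign_with="db-skip-locked" max_grab="2" />
        <destinations default="slurm">
            <destination id="slurm" runner="slurm">
                <!-- sbatch options for submitted jobs; Slurm now does the queueing -->
                <param id="nativeSpecification">--ntasks=1</param>
            </destination>
        </destinations>
    </job_conf>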
github-ic
@github-ic
Thank you. Can you share a URL with details on how we can do this and configure our job_conf.xml?
Nicola Soranzo
@nsoranzo:matrix.org
[m]
Look at "Connecting Galaxy to a compute cluster" slides and tutorial at https://training.galaxyproject.org/training-material/topics/admin/
github-ic
@github-ic
Thank you.
Jessica Conway
@jessicaconway__twitter
Hi, the Galaxy Project's main platform appears to be down. Whenever I go to https://usegalaxy.org/ I get the following error message: "This page isn’t working. usegalaxy.org is currently unable to handle this request. HTTP ERROR 500". Does anyone know when the site will be back up and running?
Arthur Eschenlauer
@eschen42
Is anyone successfully running MaxQuant with Thermo RAW data on Galaxy?
I'm stuck (it crashes with "file not found" while configuring, without identifying what file is not found).
Thank you (and sorry that the only connection to Galaxy is that I'm trying to run the toolshed tool on Galaxy).
thepineapplepirate
@thepineapplepirate
Does anyone know if the tool "Compound conversion" is still available in Galaxy? Or is it under a new name? I'm doing the "protein-ligand docking" tutorial from the computational chemistry training materials.
thepineapplepirate
@thepineapplepirate
Issue resolved - the tool is available on the European server only, at the moment.
Lucille Delisle
@lldelisle
Hi there,
I feel super stupid. I know there is an alternative to the '.ga' workflow format, but I forgot which one it is and how to convert '.ga' to it...
Björn Grüning
@bgruening
@lldelisle I think you are searching for https://github.com/galaxyproject/gxformat2
Lucille Delisle
@lldelisle
What is the extension for gxformat2? Is it also .ga?
Björn Grüning
@bgruening
it's a YAML file
Lucille Delisle
@lldelisle
Is there an example in the repo?
Or somewhere else...
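A minimal hedged sketch of the gxformat2 (YAML) form - the tool and input names here are purely illustrative:

    class: GalaxyWorkflow
    label: example-workflow
    inputs:
      input_dataset:
        type: data
    steps:
      first_step:
        tool_id: cat1          # illustrative tool
        in:
          input1: input_dataset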
Lucille Delisle
@lldelisle
Thanks
and I use planemo workflow_convert, right?
Björn Grüning
@bgruening
You're welcome!
Yes, that should work
Lucille Delisle
@lldelisle
Thanks, indeed it worked.
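For the record, a hedged sketch of that conversion command - the file names are illustrative, and planemo workflow_convert --help lists the exact options:

    # convert a native .ga workflow to the gxformat2 YAML representation
    planemo workflow_convert my_workflow.ga -o my_workflow.gxwf.yml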
github-ic
@github-ic
@natefoo:matrix.org
We can't get around this error - any idea what might be the problem?
job.c: In function ‘slurmdrmaa_job_control’:
job.c:117:8: error: too few arguments to function ‘slurm_kill_job2’
if(slurm_kill_job2(self->job_id, SIGKILL, 0) == -1) {
^~~~~~~
In file included from ../slurm_drmaa/job.h:29,
from job.c:38:
/usr/include/slurm/slurm.h:3531:12: note: declared here
extern int slurm_kill_job2(const char *job_id, uint16_t signal, uint16_t flags,
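For context: the call in job.c passes three arguments, while the slurm.h shipped with newer Slurm releases declares slurm_kill_job2 with at least one more trailing parameter (the quoted declaration is cut off after the flags argument). A hedged sketch of the kind of adjustment involved - passing NULL for the added argument is an assumption, not necessarily what the actual fix does:

    /* hedged sketch: newer slurm.h adds a trailing parameter to slurm_kill_job2;
     * NULL here is an assumption, not necessarily the released fix */
    if (slurm_kill_job2(self->job_id, SIGKILL, 0, NULL) == -1) {
        /* ... error handling as before ... */
    }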
Nate Coraor
@natefoo:matrix.org
[m]
It's fixed, I need to create a new slurm-drmaa release.
If you have the GNU autotools installed you can clone the main branch and run ./autogen.sh followed by the usual ./configure, make, etc.
You need ragel and gperf as well.
If not, I should be able to create a release today or tomorrow sometime.
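Putting those steps together, a hedged sketch of the from-source build (the --recursive flag pulls in the drmaa_utils submodule that comes up below; sudo and the clone URL are illustrative, the URL inferred from the release link later in the thread):

    # prerequisites: GNU autotools, ragel, gperf
    git clone --recursive https://github.com/natefoo/slurm-drmaa.git
    cd slurm-drmaa
    ./autogen.sh
    ./configure
    make
    sudo make install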
github-ic
@github-ic
we will try ragel and gperf
Nate Coraor
@natefoo:matrix.org
[m]
autodist.sh can do it for you in Docker if you would prefer that to installing system packages.
github-ic
@github-ic

We installed ragel and gperf but get these errors.

./autogen.sh
running sh autogen.sh (/usr/local/slurm-drmaa/drmaa_utils)
sh: autogen.sh: No such file or directory

./configure
=== configuring in drmaa_utils (/usr/local/slurm-drmaa/drmaa_utils)
configure: WARNING: no configuration information is in drmaa_utils

make
make[2]: Entering directory '/usr/local/slurm-drmaa/drmaa_utils'
make[2]: *** No rule to make target 'all'.  Stop.

Nate Coraor
@natefoo:matrix.org
[m]
You may have missed doing a recursive clone; drmaa_utils is a separate repo. You can fix that with git submodule init && git submodule update
However, I have just published a new release: https://github.com/natefoo/slurm-drmaa/releases/tag/1.1.3
github-ic
@github-ic
Thanks!
thepineapplepirate
@thepineapplepirate
is the EU server down?
it's under heavy load
thepineapplepirate
@thepineapplepirate
Ok, thanks!
github-ic
@github-ic
But when we restart Galaxy, since we are using Slurm, the jobs will continue running?
Nate Coraor
@natefoo:matrix.org
[m]
Correct
seahu1
@seahu1
Hello, when I click "ALL workflows" on https://usegalaxy.org/, it tells