Data intensive science for everyone. https://galaxyproject.org | https://usegalaxy.org | https://help.galaxyproject.org
When starting the Galaxy Docker container I get the error:

relation "galaxy_user" does not exist

Does anyone know how to solve this problem? Thank you.

$ docker run -it -p 8090:80 -p 8091:21 -p 8092:22 -v /galaxy-dist/galaxy-store/:/export/ bgruening/galaxy-stable:20.05
Unable to find image 'bgruening/galaxy-stable:20.05' locally
20.05: Pulling from bgruening/galaxy-stable
5d9821c94847: Pull complete
a610eae58dfc: Pull complete
a40e0eb9f140: Pull complete
c22f3d9f90a5: Pull complete
9cd1fa643178: Pull complete
ff6a2a1c50aa: Pull complete
a6ae5ce9fc6c: Pull complete
046482794b25: Pull complete
d7589383c485: Pull complete
2fbe1a09a967: Pull complete
609dee4bd11a: Pull complete
3886f006cc87: Pull complete
a004131862a0: Pull complete
a7fd2cb23184: Pull complete
521623a34944: Pull complete
cca00ea03979: Pull complete
3396c885f0ca: Pull complete
10949b53410d: Pull complete
49d17980fa8a: Pull complete
c1175adabc93: Pull complete
df25b53b7e71: Pull complete
fd05c08ec188: Pull complete
15714e7f1059: Pull complete
418dc856821c: Pull complete
6fa262cc4b17: Pull complete
5b55c4eccd99: Pull complete
72d55fecbe00: Pull complete
4fdde884cea1: Pull complete
Digest: sha256:732c679ae81ed2beb432eab3cb5963961e956e49f17d34a6d046481882e67cae
Status: Downloaded newer image for bgruening/galaxy-stable:20.05
Enable Galaxy reports authentification
Checking /export...
Disable Galaxy Interactive Environments. Start with --privileged to enable IE's.
Starting postgres
postgresql: started
Checking if database is up and running
Traceback (most recent call last):
  File "/galaxy_venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
    cursor, statement, parameters, context
  File "/galaxy_venv/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 590, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "galaxy_user" does not exist
LINE 3: FROM galaxy_user
             ^
In our instance, the featureCounts tool seems to rely on its built-in reference annotations being shipped inside its dependency environment (container or conda env):
## Export fc path for its built-in annotation
export FC_PATH=$(command -v featureCounts | sed 's@/bin/featureCounts$@@') &&
and the annotation directory is indeed present in that environment, e.g. database/dependencies/_conda/envs/mulled-v1-39786c1966303ef2ef27f1708fd92087f19ec27505a1cad54a74932345e20ab0/annotation/hg38_RefSeq_exon.txt. However, the tool interface tells the user that no references are available. https://usegalaxy.org has the same issue. Where should I post a bug report? I'm not sure which GitHub repo or support forum would be appropriate.
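To illustrate what the wrapper's path derivation does, here is a self-contained sketch of the FC_PATH logic above. The conda env path is made up for the demo; the real wrapper starts from `command -v featureCounts`:

```shell
# Stand-in for the real lookup: fc_bin="$(command -v featureCounts)"
fc_bin="/opt/conda/envs/fc-env/bin/featureCounts"   # hypothetical install path

# Strip the trailing /bin/featureCounts to get the environment root.
FC_PATH=$(printf '%s\n' "$fc_bin" | sed 's@/bin/featureCounts$@@')
echo "$FC_PATH"    # /opt/conda/envs/fc-env

# The built-in annotations are expected next to bin/, in annotation/:
ls "$FC_PATH/annotation" 2>/dev/null || echo "no annotation/ directory here"
```

If that `annotation/` directory exists but the tool form still lists no references, the mismatch is likely in how the Galaxy tool form populates its reference list rather than in the dependency itself.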
Hi - I'm running a custom Galaxy v21.01 instance with 7 job handlers; my job config file is pasted below. I noticed that if my job lands on handler0 or handler3 it sits in the queue, but if it goes to any other handler it runs. I know that another user has jobs running on handler0 and handler3. Is there a way to skip handlers that already have two jobs running and move a job to an idle handler automatically? Right now I kill my queued job and resubmit it until it lands on a different handler and runs.
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
    </plugins>
    <handlers assign_with="db-skip-locked" max_grab="2" />
    <destinations>
        <destination id="local" runner="local"/>
    </destinations>
    <limits>
        <limit type="registered_user_concurrent_jobs">4</limit>
        <limit type="anonymous_user_concurrent_jobs">1</limit>
    </limits>
</job_conf>
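One knob worth trying, given the db-skip-locked assignment in the config above: lowering max_grab so a single handler never grabs more than one queued job per assignment pass, which should spread waiting jobs across idle handlers instead of letting one busy handler accumulate them. This is a sketch to experiment with, not a tested fix:

```xml
<!-- Hypothetical tweak: each handler grabs at most one job per pass -->
<handlers assign_with="db-skip-locked" max_grab="1" />
```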
planemo workflow_convert, right?
./autogen.sh followed by the usual ./configure, make, etc. autodist.sh can do it for you in Docker if you would prefer that to installing system packages.
We installed ragel and gperf but get these errors.
./autogen.sh
running sh autogen.sh (/usr/local/slurm-drmaa/drmaa_utils)
sh: autogen.sh: No such file or directory
./configure
=== configuring in drmaa_utils (/usr/local/slurm-drmaa/drmaa_utils)
configure: WARNING: no configuration information is in drmaa_utils
make
make[2]: Entering directory '/usr/local/slurm-drmaa/drmaa_utils'
make[2]: *** No rule to make target 'all'. Stop.
drmaa_utils is a separate repository, pulled in as a git submodule, so a plain clone leaves that directory empty. You can fix that with git submodule init && git submodule update run from the top of the slurm-drmaa checkout.
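Since I can't run this against the real repositories here, the sketch below builds two throwaway local repos to demonstrate the same repair: a non-recursive clone leaves the submodule directory empty, and git submodule init && git submodule update fills it in. All repo names and paths are made up for the demo:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the drmaa_utils repository.
git init -q drmaa_utils
git -C drmaa_utils -c user.email=demo@example.invalid -c user.name=demo \
    commit -q --allow-empty -m 'drmaa_utils stub'

# Stand-in for the slurm-drmaa superproject, with drmaa_utils as a submodule.
git init -q slurm-drmaa
cd slurm-drmaa
git -c protocol.file.allow=always submodule add ../drmaa_utils drmaa_utils
git -c user.email=demo@example.invalid -c user.name=demo commit -qm 'add drmaa_utils submodule'
cd ..

# A plain (non-recursive) clone leaves drmaa_utils/ empty -- the situation in the build error.
git clone -q "$tmp/slurm-drmaa" fresh-clone
cd fresh-clone
git submodule init                                   # register URLs from .gitmodules
git -c protocol.file.allow=always submodule update   # fetch and check out the pinned commit
test -e drmaa_utils/.git && echo "submodule checked out"
```

(The protocol.file.allow=always settings are only needed because the demo uses local file paths as submodule URLs; cloning the real repositories over HTTPS, or using git clone --recursive in the first place, does not require them.)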