    Marius van den Beek
    @mvdbeek
    so one process watches the config and sends a task to all processes via kombu (so the database in most cases)
    Helena Rasche
    @erasche
    cool
    Marius van den Beek
    @mvdbeek
    so you get an all or none situation
    Helena Rasche
    @erasche
    works for me
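    A minimal kombu sketch of the broadcast pattern described above; the exchange, queue, and task names are illustrative assumptions, not Galaxy's actual queue_worker internals (Galaxy typically points kombu at its database as the broker, the memory:// URL just keeps the sketch self-contained):

    from kombu import Connection, Exchange, Queue

    control_exchange = Exchange('galaxy_control', type='direct')
    processes = ['main.web.1', 'main.job-handlers.1']
    queues = {name: Queue(name, control_exchange, routing_key=name) for name in processes}

    def handle(body, message):
        # each process would run the requested task (e.g. a toolbox reload) locally
        print('handling %(task)s' % body)
        message.ack()

    with Connection('memory://') as conn:
        producer = conn.Producer(serializer='json')
        # the watcher process publishes one message per server process, so the
        # reload is applied everywhere or nowhere ("all or none")
        for name, queue in queues.items():
            producer.publish({'task': 'reload_toolbox'},
                             exchange=control_exchange,
                             routing_key=name,
                             declare=[queue])
        # in reality every process consumes only its own queue; here we drain both
        for name, queue in queues.items():
            with conn.Consumer(queue, callbacks=[handle]):
                conn.drain_events(timeout=2)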
    M Bernt
    @bernt-matthias

    hi @natefoo and @mvdbeek: it seems that there was one unsuccessful reload for the job handler

    Executing toolbox reload on 'main.job-handlers.1'
    Exception in thread ToolConfWatcher.thread:
    Traceback (most recent call last):
      File "/global/apps/bioinf/galaxy/bin/Python-2.7.13/lib/python2.7/threading.py", line 801, in __bootstrap_inner
        self.run()
      File "/global/apps/bioinf/galaxy/bin/Python-2.7.13/lib/python2.7/threading.py", line 754, in run
        self.__target(*self.__args, **self.__kwargs)
      File "lib/galaxy/tools/toolbox/watcher.py", line 138, in check
        self.reload_callback()
      File "lib/galaxy/webapps/galaxy/config_watchers.py", line 32, in <lambda>
        self.tool_config_watcher = get_tool_conf_watcher(reload_callback=lambda: reload_toolbox(self.app), tool_cache=self.app.tool_cache)
      File "lib/galaxy/queue_worker.py", line 154, in reload_toolbox
        reload_count = app.toolbox._reload_count
    AttributeError: 'UniverseApplication' object has no attribute 'toolbox'

    before and after this there is seemingly only Executing toolbox reload on 'main.web....'

    Marius van den Beek
    @mvdbeek
    Hmm, that is a weird traceback. Maybe the reload was triggered while the handler was starting up?
    App should always have a toolbox otherwise
    M Bernt
    @bernt-matthias
    Not unlikely: galaxy.queue_worker INFO 2019-08-29 17:09:35,289 [p:50031,w:0,m:1] [MainThread] Initializing main.job-handlers.1 happened just 1min before this
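    A sketch of the kind of defensive check that would avoid this startup race (not the actual Galaxy fix; it assumes the galaxy.queue_worker.reload_toolbox and config_watchers wiring seen in the traceback, so it only runs inside a Galaxy environment):

    import logging

    from galaxy.queue_worker import reload_toolbox  # as referenced in the traceback

    log = logging.getLogger(__name__)

    def safe_reload_toolbox(app):
        # If the watcher fires while the handler is still starting up, the app
        # has no toolbox attribute yet; the toolbox being built will already
        # reflect the new config, so the reload request can simply be skipped.
        if getattr(app, 'toolbox', None) is None:
            log.warning("Toolbox reload requested before toolbox initialization; skipping")
            return
        reload_toolbox(app)

    # hypothetical wiring, mirroring config_watchers.py from the traceback:
    #   get_tool_conf_watcher(reload_callback=lambda: safe_reload_toolbox(self.app),
    #                         tool_cache=self.app.tool_cache)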
    bruggerk
    @bruggerk
    We are considering running our Galaxy installation on a virtual setup, e.g. frontend and database virtualized, with compute happening somewhere else. Does anyone have any experience with such a setup, or input on why it might be a mad idea?
    Helena Rasche
    @erasche
    most larger sites do it like this
    perfectly normal :)
    Dan Fornika
    @dfornika
    Has anyone tried running an ansible deployment of a recent galaxy version (>=19.01) using the root-dir directory layout? I've been having trouble getting tool installation & uninstallation to work reliably. I think @natefoo hinted at the issue here: galaxyproject/ansible-galaxy#58 and this issue seems to be related: galaxyproject/ansible-galaxy#72
    Tim Dudgeon
    @tdudgeon
    I'm seeing sporadic errors when submitting tasks (with the local executor). I'm executing with a collection of inputs and sometimes a task seems to fail at random; if I execute again, all tasks might start OK. This is using the bgruening/galaxy-stable:latest Docker image.
    I see this in the Docker logs, which seems to correlate with jobs not starting:
    [pid: 1453|app: 0|req: 7905/15816] 10.130.0.1 () {52 vars in 1247 bytes} [Fri Sep  6 08:19:19 2019] GET /api/histories/33b43b4e7093c91f/jobs_summary?ids=59ace41fc068d3ad&types=ImplicitCollectionJobs => generated 127 bytes in 31 msecs (HTTP/1.1 200) 3 headers in 124 bytes (1 switches on core 3)
    galaxy.jobs.runners ERROR 2019-09-06 08:19:20,017 [p:1450,w:1,m:0] [LocalRunner.work_thread-4] (18700) Failure preparing job
    Traceback (most recent call last):
      File "lib/galaxy/jobs/runners/__init__.py", line 223, in prepare_job
        modify_command_for_container=modify_command_for_container
      File "lib/galaxy/jobs/runners/__init__.py", line 257, in build_command_line
        container=container
      File "lib/galaxy/jobs/command_factory.py", line 79, in build_command
        externalized_commands = __externalize_commands(job_wrapper, external_command_shell, commands_builder, remote_command_params)
      File "lib/galaxy/jobs/command_factory.py", line 138, in __externalize_commands
        write_script(local_container_script, script_contents, config)
      File "lib/galaxy/jobs/runners/util/job_script/__init__.py", line 118, in write_script
        _handle_script_integrity(path, config)
      File "lib/galaxy/jobs/runners/util/job_script/__init__.py", line 153, in _handle_script_integrity
        raise Exception("Failed to write job script '%s', could not verify job script integrity." % path)
    Exception: Failed to write job script '/export/galaxy-central/database/job_working_directory/018/18700/tool_script.sh', could not verify job script integrity.
    M Bernt
    @bernt-matthias
    I have the following problem: Galaxy creates new files and dirs with permissions 777 (e.g. job working dirs, datasets and conda environments). I guess that I need to set the umask for uWSGI somehow (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=866058). I start Galaxy with a custom init script calling sh run.sh --daemon. Any suggestions on where I can set uWSGI options?
    Nate Coraor
    @natefoo
    @dfornika I replied on the issue but it might be easier to follow up here with more details
    @tdudgeon any messages above that? There should be a few at the debug level that would include the actual errno: https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/jobs/runners/util/job_script/__init__.py#L121
    Tim Dudgeon
    @tdudgeon
    @natefoo I don't recall there being any, but I'll need to re-run and repeat the error to be sure.
    Dan Fornika
    @dfornika
    Thanks @natefoo. I was having problems uninstalling shed tools. After a bit of digging I realized that it was due to a bug in the v19.05 (tagged) release, which is what I was deploying. I switched to the release_19.05 branch and my issue was resolved.
    Tim Dudgeon
    @tdudgeon

    @natefoo I re-ran to generate the error and there is indeed something suspicious a little earlier in the log:

    galaxy.jobs.runners.util.job_script DEBUG 2019-09-12 12:28:26,270 [p:1453,w:2,m:0] [LocalRunner.work_thread-12] Script not available yet: [Errno 26] Text file busy

    The [p:1453,w:2,m:0] is the same as the one reported in the error I posted above.
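    A rough sketch of the write-then-verify pattern behind that message (the env-var flag, exit code, and paths are assumptions for illustration, not the actual Galaxy job_script code): on a network filesystem such as GlusterFS, executing the freshly written script can fail for a short while with [Errno 26] Text file busy, so the check retries before raising the integrity error.

    import os
    import subprocess
    import time

    INTEGRITY_EXIT_CODE = 42  # hypothetical sentinel returned by the check-only branch

    def write_and_verify(path, contents, retries=35, sleep=0.25):
        with open(path, 'w') as fh:
            fh.write(contents)
            fh.flush()
            os.fsync(fh.fileno())
        os.chmod(path, 0o755)
        for _ in range(retries):
            try:
                # run the script with a flag that makes it exit immediately;
                # success proves the file is fully written and executable
                if subprocess.call([path], env={'CHECK_SCRIPT_INTEGRITY': '1'}) == INTEGRITY_EXIT_CODE:
                    return
            except OSError as exc:
                # e.g. ETXTBSY while another node/process still holds the file open
                print('Script not available yet: %s' % exc)
            time.sleep(sleep)
        raise Exception("Failed to write job script '%s', could not verify job script integrity." % path)

    # hypothetical usage: the script itself honors the check-only flag
    script = '#!/bin/sh\nif [ -n "$CHECK_SCRIPT_INTEGRITY" ]; then exit 42; fi\necho real tool command here\n'
    write_and_verify('/tmp/tool_script.sh', script)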

    Nicola Soranzo
    @nsoranzo
    @tdudgeon Is your Galaxy on a network filesystem?
    Tim Dudgeon
    @tdudgeon
    @nsoranzo This is using the bgruening/galaxy-stable:latest Docker image. The /export directory is a GlusterFS volume mounted into the container. All else is Docker ephemeral storage.
    BTW I also saw what seems to be the same problem on usegalaxy.eu (but of course can't look at the log files there).
    Björn Grüning
    @bgruening
    @tdudgeon unlikely, we do not use a LocalRunner
    But if you give me a timestamp I can maybe get you some logs
    Cristian
    @cche
    Hello, I impersonated a user and, when clicking on the link to go to their session, I got the login page. Now I cannot log in with my own credentials and get the error “Wrong session token found, denying request”. Any idea how to solve this? I restarted Galaxy and nothing changed. Thanks!
    Helena Rasche
    @erasche
    Hi @cche, have you tried clearing your cookies completely? Then going back to Galaxy is usually sufficient
    If you're regularly impersonating users, I can recommend firefox's "profiles"/"containers" feature, to open user impersonations in tabs with different cookies, so your normal session isn't affected
    Cristian
    @cche
    Hi @erasche, after removing the cookies I got rid of the error message, but I get sent straight back to the login screen.
    Helena Rasche
    @erasche
    can you then login with your own credentials?
    Cristian
    @cche
    Sorry, it is with my own credentials that I have the problem.
    Helena Rasche
    @erasche
    oh, odd, with your own credentials?
    would you mind trying to log in a second time, just in case some cookie was set that Galaxy expects?
    Cristian
    @cche
    I tried with Firefox and it worked. Still cannot log in with Chrome… Maybe I will have to change my default browser?
    Helena Rasche
    @erasche
    It should work in Chrome; I cannot explain why it doesn't in your case
    If you're still seeing the wrong session token message, this is quite odd. I've only seen it occasionally, and usually when I access /login rather than going to the home page first and then logging in
    Cristian
    @cche
    Really strange, I cleared everything related to the Galaxy site, cookies, passwords, login data, etc., and still cannot log in with Chrome.
    Since version 19.05, when I logged out of a user account after impersonating, I was presented with the login screen, and when pressing the button I got the login screen again; the second time I got logged in. It didn't bother me much so I didn't do anything about it. I guess I will uninstall all plugins and start over with Chrome.
    Firefox is working, and the “profiles” feature interests me a lot as I impersonate quite often when users come to show me their problems.
    Thanks Helena.
    Helena Rasche
    @erasche
    Sure thing! Glad you found some solutions
    Hans-Rudolf Hotz
    @hrhotz
    I have installed rnastar index2 builder (Version 2.7.1a) from the toolshed, and I have indexed hg19. This has created a new directory: ~/tool-data/rnastar/2.7.1a/hg19/hg19/dataset_130274_files but ~/tool-data/rnastar_index2_versioned.loc is still empty (hence, the index is not recognized by STAR)...is it safe to just add the information manually to the loc file?
    Hans-Rudolf Hotz
    @hrhotz
    and just for completeness: ~/tool-data/toolshed.g2.bx.psu.edu/repos/iuc/data_manager_star_index_builder/f5eb9afa8f8a/rnastar_index2_versioned.loc is also empty
    Wolfgang Maier
    @wm75
    @hrhotz thanks for the report. Did the indexing actually work? If so, then it should be ok to fill in the record in the .loc file, but, of course, this shouldn't be required :worried:
    I'll take a look at the DM
    Hans-Rudolf Hotz
    @hrhotz
    the indexing job finished (i.e. I got a green history item), and the index directory looks fine
    -rw-rw-r--. 1 galaxy galaxy 688 Sep 18 15:22 chrLength.txt
    -rw-rw-r--. 1 galaxy galaxy 1971 Sep 18 15:22 chrNameLength.txt
    -rw-rw-r--. 1 galaxy galaxy 1283 Sep 18 15:22 chrName.txt
    -rw-rw-r--. 1 galaxy galaxy 1021 Sep 18 15:22 chrStart.txt
    -rw-rw-r--. 1 galaxy galaxy 3151757312 Sep 18 15:22 Genome
    -rw-rw-r--. 1 galaxy galaxy 903 Sep 18 15:22 genomeParameters.txt
    -rw-rw-r--. 1 galaxy galaxy 23902811315 Sep 18 15:22 SA
    -rw-rw-r--. 1 galaxy galaxy 1565873619 Sep 18 15:22 SAindex
    Hans-Rudolf Hotz
    @hrhotz
    adding the information manually to the third 'instance' of rnastar_index2_versioned.loc (i.e. ~/tool-data/toolshed.g2.bx.psu.edu/repos/iuc/rgrnastar/850f3679b9b4/rnastar_index2_versioned.loc) seems to work
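    For reference, a hypothetical sketch of appending such an entry from Python; the column names and their order below are assumptions, so check the rnastar_index2_versioned table definition shipped with the tool (tool_data_table_conf.xml) for the authoritative layout before editing the .loc file by hand:

    # all column names below are assumed for illustration only
    loc_file = 'tool-data/toolshed.g2.bx.psu.edu/repos/iuc/rgrnastar/850f3679b9b4/rnastar_index2_versioned.loc'

    entry = [
        'hg19',                                  # value
        'hg19',                                  # dbkey
        'BSgenome Hsapiens (UCSC hg19) 2.7.1a',  # display name
        'tool-data/rnastar/2.7.1a/hg19/hg19/dataset_130274_files',  # path to the index directory
        '0',                                     # with-gene-model flag (assumed column)
        '2.7.1a',                                # index version
    ]

    with open(loc_file, 'a') as fh:
        fh.write('\t'.join(entry) + '\n')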
    pvanheus
    @pvanheus
    yes @hrhotz, I was thinking it might be in a file like that - it should show up in the admin menu
    Wolfgang Maier
    @wm75
    @hrhotz could you find the command line that the data manager run generated?
    pvanheus
    @pvanheus
    What are up-to-date estimates for Galaxy server CPU and RAM requirements for a 10-20 user lab? Will 8 GB RAM do? 4 vCPUs?
    this will submit to a small Slurm cluster.
    Hans-Rudolf Hotz
    @hrhotz
    @wm75 this is the command line (which successfully created the index):

    if [ -z "$GALAXY_MEMORY_MB" ] ; then GALAXY_MEMORY_BYTES=31000000000 ; else GALAXY_MEMORY_BYTES=$((GALAXY_MEMORY_MB * 1000000)) ; fi ;
    mkdir -p '/**//galaxy/database/jobs_directory/000/126/126768/dataset_130274_files/dataset_130274_files' &&
    STAR --runMode genomeGenerate --genomeFastaFiles '/work_xenon5/galaxy/helpers/R/data_extracted_from_BioCpackages/2bit_files_from_BSgenomes/3.3.2-bioc-3.4-release/BSgenome.Hsapiens.UCSC.hg19.fasta' --genomeDir '///galaxy/database/jobs_directory/000/126/126768/dataset_130274_files/dataset_130274_files' --limitGenomeGenerateRAM ${GALAXY_MEMORY_BYTES} --runThreadN ${GALAXY_SLOTS:-2} &&
    python '///shed_tools/toolshed.g2.bx.psu.edu/repos/iuc/data_manager_star_index_builder/f5eb9afa8f8a/data_manager_star_index_builder/data_manager/rna_star_index_builder.py' --config-file '//*/galaxy/database/files/000/130/dataset_130274.dat' --value 'hg19' --dbkey 'hg19' --index-version '2.7.1a' --name 'BSgenome Hsapiens (UCSC hg19) 2.7.1a' --data-table rnastar_index2_versioned --subdir 'dataset_130274_files'
    Hans-Rudolf Hotz
    @hrhotz
    @wm75 @pvanheus it does show up now (i.e. after manually fixing the file) in the admin menu ("Local data" -> "View Tool Data Table Entries" section). I assume the system got confused by / messed up the order of the tool installations I have done over the last few months. Well, it is only the development server....
    Wolfgang Maier
    @wm75
    @hrhotz maybe, but I still found a bug that I introduced into that DM. So even if the DM had written to the loc file, it would have written a wrong path. So thanks again for the report!