    Martin Cech
    @martenson
    it is not related to your PR
    Sergey Golitsynskiy
    @ic4f
    that was my guess (hope)
    Nate Coraor
    @natefoo
    So in that case uwsgi is probably building from source, since there is no uwsgi wheel for Python 3.5. We should in fact switch to using pyuwsgi, but that means we need to update scripts and so forth at the same time to execute pyuwsgi since the pyuwsgi wheel does not install uwsgi on $PATH.
    Nevermind, it looks like they ended up installing a uwsgi after all: lincolnloop/pyuwsgi-wheels@54b09d6
    pyuwsgi is built with libyaml though so it won't work with our config files =/
    Martin Cech
    @martenson
    hmm so I guess we need to figure out where libpython3.5-dev went, maybe it is in a different apt repo now
    Nicola Soranzo
    @nsoranzo
    I remember I tried to switch to pyuwsgi, but there were issues, maybe the libyaml one
    Martin Cech
    @martenson
    it seems circleci upgraded their images to new stable (codename buster)
    Nicola Soranzo
    @nsoranzo
    @martenson I think you can update galaxyproject/galaxy#8337 to just remove lines 142-144
    Nicola Soranzo
    @nsoranzo
    which means they removed Py3.5
    Martin Cech
    @martenson
    I think that circleci image still has python 3.5
    but it seems the uwsgi build does not need the libpython anymore
    Vahid
    @VJalili
    within the integration tests context, what is the best way of defining user quota?
    Nicola Soranzo
    @nsoranzo
    @martenson They install Python on top of the distribution-installed Python, that's how they provide the latest Python patch versions
    Martin Cech
    @martenson
    yeah, they compile their own, that is how they got the dev libpython too
    Nicola Soranzo
    @nsoranzo
    Exactly, so the deb package is not needed at all
    Martin Cech
    @martenson
    yep, basically a travis remnant
    Martin Cech
    @martenson
    @ic4f if you merge dev your PR will pass
    Sergey Golitsynskiy
    @ic4f
    @martenson merged and pushed. Watching the tests. Thanks!
    Nate Coraor
    @natefoo
    I've set some TCP keepalive options on leeroy and jenkins-aust-1 and reenabled that node, let's see if this makes any difference
    If not, any Jenkins admin can feel free to re-disable the node
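A rough sketch of the kind of TCP keepalive tuning Nate mentions (the actual options and values he set on leeroy and jenkins-aust-1 are not in the chat; this is a hypothetical per-socket version, and the `TCP_KEEP*` constants are Linux-specific):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=5):
    """Turn on TCP keepalive probes on an open socket.

    idle: seconds of inactivity before the first probe
    interval: seconds between probes
    count: failed probes before the connection is dropped
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The fine-grained knobs only exist on Linux; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock
```

System-wide, the same tuning would go through sysctl (`net.ipv4.tcp_keepalive_time` and friends) rather than per-socket options.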
    @martenson we'll need to cherry pick that circleci fix back to older release branches.
    Martin Cech
    @martenson
    right
    Jennifer Hillman-Jackson
    @jennaj
    Question at GHelp about using iFrames to maintain a login to more than one Galaxy account. Sounds like they are running their own galaxy... (?) Help please https://help.galaxyproject.org/t/external-authentication/1729
    Jennifer Hillman-Jackson
    @jennaj
    Also -- this isn't some problem with downloading data collections, is it? I didn't find any issues related to it. ping @mvdbeek @jmchilton https://help.galaxyproject.org/t/downloading-large-files/1717
    Enis Afgan
    @afgane
    @jmchilton Any thoughts on what might be the cause of the following exception wrt the Pulsar K8s runner? I’ve tried it with what’s in dev as well as galaxyproject/galaxy#8195
    ```
    2019-07-15 21:52:38,564 INFO [pulsar.core][MainThread] Starting the Pulsar without a toolbox to white-list. Ensure this application is protected by firewall or a configured private token.
    2019-07-15 21:52:38,564 WARNI [galaxy.tool_util.deps][MainThread] Path 'dependencies' does not exist, ignoring
    2019-07-15 21:52:38,565 WARNI [galaxy.tool_util.deps][MainThread] Path 'dependencies' is not directory, ignoring
    2019-07-15 21:52:38,589 INFO [pulsar.locks][MainThread] pylockfile module not found, skipping experimental lockfile handling.
    2019-07-15 21:52:38,596 DEBUG [pulsar.managers.staging.pre][[manager=default]-[action=preprocess]-[job=52]] Staging tool 'random_lines_two_pass.py' via FileAction[url=http://galaxy/api/jobs/c24141d7e4e77705/files?job_key=8e8c36cb22a5c6b7&path=/galaxy/server/tools/filters/random_lines_two_pass.py&file_type=tool,action_type=remote_transfer,path=/galaxy/server/tools/filters/random_lines_two_pass.py] to /pulsar_staging/52/tool_files/random_lines_two_pass.py
    2019-07-15 21:53:54,113 DEBUG [pulsar.messaging.bind_amqp][[manager=default]-[action=preprocess]-[job=52]] Publishing Pulsar state change with status failed for job_id 52
    2019-07-15 21:53:54,114 DEBUG [pulsar.client.amqp_exchange][[manager=default]-[action=preprocess]-[job=52]] [publish:09d98780-a74b-11e9-9aea-42741e650fd1] Begin publishing to key pulsarstatus_update
    2019-07-15 21:53:54,121 DEBUG [pulsar.client.amqp_exchange][[manager=default]-[action=preprocess]-[job=52]] [publish:09d98780-a74b-11e9-9aea-42741e650fd1] Have producer for publishing to key pulsarstatus_update
    2019-07-15 21:53:54,132 DEBUG [pulsar.client.amqp_exchange][[manager=default]-[action=preprocess]-[job=52]] [publish:09d98780-a74b-11e9-9aea-42741e650fd1] Published to key pulsarstatus_update
    2019-07-15 21:53:54,133 ERROR [pulsar.managers.stateful][[manager=default]-[action=preprocess]-[job=52]] Failed job preprocessing for job 52:
    Traceback (most recent call last):
      File "/usr/local/lib/python2.7/site-packages/pulsar/managers/stateful.py", line 115, in _handling_of_preprocessing_state
        yield
      File "/usr/local/lib/python2.7/site-packages/pulsar/managers/stateful.py", line 106, in do_preprocess
        preprocess(job_directory, setup_config, self.preprocess_action_executor, object_store=self.object_store)
      File "/usr/local/lib/python2.7/site-packages/pulsar/managers/staging/pre.py", line 19, in preprocess
        action_executor.execute(lambda: action.write_to_path(path), "action[%s]" % description)
      File "/usr/local/lib/python2.7/site-packages/pulsar/managers/util/retry.py", line 49, in execute
        errback=on_error,
      File "/usr/local/lib/python2.7/site-packages/pulsar/managers/util/retry.py", line 93, in _retry_over_time
        return fun(*args, **kwargs)
      File "/usr/local/lib/python2.7/site-packages/pulsar/managers/staging/pre.py", line 19, in <lambda>
        action_executor.execute(lambda: action.write_to_path(path), "action[%s]" % description)
      File "/usr/local/lib/python2.7/site-packages/pulsar/client/action_mapper.py", line 459, in write_to_path
        get_file(self.url, path)
      File "/usr/local/lib/python2.7/site-packages/pulsar/client/transport/poster.py", line 42, in get_file
        response = urlopen(request)
      File "/usr/local/lib/python2.7/urllib2.py", line 154, in urlopen
        return opener.open(url, data, timeout)
      File "/usr/local/lib/python2.7/urllib2.py", line 429, in open
        response = self._open(req, data)
      File "/usr/local/lib/python2.7/urllib2.py", line 447, in _open
        '_open', req)
      File "/usr/local/lib/python2.7/urllib2.py", line 407, in _call_chain
        result = func(*args)
      File "/usr/local/lib/python2.7/site-packages/poster/streaminghttp.py", line 142, in http_open
        return self.do_open(StreamingHTTPConnection, req)
      File "/usr/local/lib/python2.7/urllib2.py", line 1198, in do_open
        raise URLError(err)
    URLError: <urlopen error
    ```
    John Chilton
    @jmchilton
    Looks like the Pulsar container cannot talk to Galaxy at the specified URL. Do you have galaxy_url set and is it set to something that would be accessible in the container?
    Enis Afgan
    @afgane
    Defined yes; I’ll work on making sure they can communicate. Here’s the job conf definition if you don’t mind taking a look to see if something is missing:
    ```yaml
    runners:
      local:
        load: galaxy.jobs.runners.local:LocalJobRunner
        workers: 1
      pulsar_k8s:
        load: galaxy.jobs.runners.pulsar:PulsarKubernetesJobRunner
        galaxy_url: http://galaxy//
        amqp_url: amqp://galaxymq:PWD@galaxy-rabbitmq:5672/
    execution:
      default: pulsar_k8s_environment
      environments:
        pulsar_k8s_environment:
          k8s_use_service_account: true
          runner: pulsar_k8s
          docker_enabled: true
          docker_default_container_id: busybox:ubuntu-14.04
          pulsar_app_config:
            message_queue_url: amqp://galaxymq:PWD@galaxy-rabbitmq:5672/
        local_environment:
          runner: local
    tools:
      - id: upload1
        environment: local_environment
    ```
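A quick way to check the connectivity John asks about is to probe `galaxy_url` from inside the Pulsar container. A minimal sketch (the function name is mine; `http://galaxy//` is copied from the job conf above, double slash and all):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def galaxy_reachable(url, timeout=5):
    """Return True if something answers HTTP at `url` within `timeout` seconds."""
    try:
        urlopen(url, timeout=timeout)
        return True
    except HTTPError:
        # The server responded, just with an error status -- still reachable.
        return True
    except URLError:
        # DNS failure, connection refused, timeout, ...
        return False

# From inside the Pulsar container:
# galaxy_reachable("http://galaxy//")
```

If this returns False inside the container, the URLError in the traceback above is expected: Pulsar cannot stage tool files back from Galaxy.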
    Nicola Soranzo
    @nsoranzo
    @jmchilton Can you make a new galaxy-tool-util release on PyPI?
    Nicola Soranzo
    @nsoranzo
    So we can use it for Ephemeris and then Planemo
    Nicola Soranzo
    @nsoranzo
    After merging galaxyproject/galaxy#8342 :wink:
    Thanks @nsoranzo !
    Nicola Soranzo
    @nsoranzo
    Thank you!
    Vahid
    @VJalili
    @afgane have you checked DNS for rabbitmq cluster? I would also check if they're running and have joined the cluster. You may need to re-join and then restart them.
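The DNS check Vahid suggests can be done directly from the affected container; a small sketch, where `galaxy-rabbitmq` is the broker hostname taken from the `amqp_url` in the job conf above:

```python
import socket

def can_resolve(host):
    """Return True if `host` resolves via this container's DNS."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# From inside the Pulsar or Galaxy container:
# can_resolve("galaxy-rabbitmq")
```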
    Nicola Soranzo
    @nsoranzo
    @natefoo If you have time to take a look at galaxyproject/galaxy#8343 , that would help me restore BioBlend CI testing on the Galaxy dev branch.
    Nate Coraor
    @natefoo
    @nsoranzo is it a real environment variable that pip reads? I just picked it at random
    Nicola Soranzo
    @nsoranzo
    Ah, I thought you chose it on purpose! I also didn't know about this trick, which is why it took me several hours to understand why pip install --progress-bar ascii (or any other value) would still give the same error message: option progress-bar: invalid choice: '--progress-bar off' (choose from 'on', 'ascii', 'off', 'pretty', 'emoji')
    (looks like a bug that the env variable has precedence over the command line switch, BTW)
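For reference, pip derives the environment variable name from the long option: `PIP_` plus the option name upper-cased with dashes turned into underscores. A small sketch of that mapping (the helper name is mine):

```python
def pip_env_var(option):
    """Map a pip long option like '--progress-bar' to its env var name.

    pip reads PIP_<NAME> variables for all long options, which is why
    PIP_PROGRESS_BAR (set, per the chat, to a string containing the whole
    flag) produced the "invalid choice: '--progress-bar off'" error above.
    """
    return "PIP_" + option.lstrip("-").upper().replace("-", "_")

# pip_env_var("--progress-bar") -> "PIP_PROGRESS_BAR"
```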
    Nate Coraor
    @natefoo
    They need $PIP_* and $PIP_OVERRIDE_* ;)
    Nicola Soranzo
    @nsoranzo
    Eh eh, never enough env variables... like Ansible config variables...
    Nate Coraor
    @natefoo
    Hear hear! :beers:
    Martin Cech
    @martenson
    should we have a label packaging or sth like that?
    Nicola Soranzo
    @nsoranzo
    Seems a good idea :wink:
    Nicola Soranzo
    @nsoranzo
    @natefoo I've an increasing number of Jenkins failures like this: https://jenkins.galaxyproject.org/job/docker-integration/9953/consoleFull (look for OperationalError)
    Anything we can do about it?
    Nate Coraor
    @natefoo
    Interesting, I saw that on some of the AU nodes and didn't know it wasn't unique to them