Ghost
@ghost~5772e7e2c2f0db084a206e1b
@Micropathology I'd suggest updating conda and galaxy to their latest versions. Now, unicycler is a bad example because it is python3 only
Micropathology
@Micropathology
is it possible that conda broke somehow and cannot install tool dependencies? what is the channel order?
Ghost
@ghost~5772e7e2c2f0db084a206e1b
yes, conda is being actively developed, and the channel order changes
every now and then. we account for this in galaxy, but we don't automatically upgrade installed conda versions
Micropathology
@Micropathology
so periodically I have to update conda? will this sort out the channel order?
Ghost
@ghost~5772e7e2c2f0db084a206e1b
no, that is a galaxy setting
and yes, often new packages can't be installed with old conda versions
galaxybot
@galaxybot
[mrscribe] Title: galaxy/galaxy.yml.sample at dev · galaxyproject/galaxy · GitHub (at github.com)
Ghost
@ghost~5772e7e2c2f0db084a206e1b
at the time of a galaxy release we ship the current default order, but this can change as I mentioned
now old galaxy versions don't entirely respect the configured channel order, which is why I'd advise upgrading galaxy as well
that would come with the correct default channel order
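For reference, that channel order is a Galaxy configuration option rather than anything stored in conda itself. A minimal sketch of the relevant setting, assuming the order discussed above; newer releases keep it in config/galaxy.yml, while 17.09-era servers have the equivalent line (with `=` instead of `:`) in config/galaxy.ini, and the shipped default can differ between releases:

  conda_ensure_channels: iuc,bioconda,conda-forge,defaults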
Micropathology
@Micropathology
I'm using 17.09
Ghost
@ghost~5772e7e2c2f0db084a206e1b
right, and the default channel order changed after the 17.09 release
and the bug fix to respect the specified channel order was also applied after the 17.09 release
it should be as simple as git pull if you're on the 17.09 branch
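Assuming the server is a git checkout tracking the release_17.09 branch, that pull would look roughly like this (the checkout path is an assumption; use the directory Galaxy actually runs from):

  cd /srv/galaxy
  git checkout release_17.09
  git pull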
Micropathology
@Micropathology
right, ok, I think I'm understanding now. I need to keep galaxy up to date or I'm going to get tool install problems.
Ghost
@ghost~5772e7e2c2f0db084a206e1b
now for the case of unicycler you can probably install it with /mnt/galaxy/tool_dependencies/_conda/bin/conda create -y --override-channels --channel iuc --channel bioconda --channel conda-forge --channel defaults --name __unicycler@0.4.4 unicycler=0.4.4 python=3
note that I've changed the channel order and added python=3
we'll have to update the tool on our side to specify the python3 dependency so that you don't need to do this by hand
Micropathology
@Micropathology
so you have prioritised iuc over the standard channel?
Ghost
@ghost~5772e7e2c2f0db084a206e1b
yes, and we also prioritize conda-forge over defaults and we drop the r channel (whose packages should be contained in conda-forge)
Micropathology
@Micropathology
great, so I should update galaxy, and this should solve most problems, but I still might need to force certain channels and dependencies in some instances, depending on what the log says?
Ghost
@ghost~5772e7e2c2f0db084a206e1b
no, the manual stuff is only for unicycler
and I'd update conda too
/mnt/galaxy/tool_dependencies/_conda/bin/conda install -y -c conda-forge conda=4.3.34
should work for you
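As a quick sanity check afterwards, the same conda binary can report its version, which should read 4.3.34 once the install above succeeds:

  /mnt/galaxy/tool_dependencies/_conda/bin/conda --version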
Micropathology
@Micropathology
thanks
Ghost
@ghost~5772e7e2c2f0db084a206e1b
sure!
Martin Page
@doomedramen
@mvdbeek thank you, I will look into upgrading the server
If you happen to know where to check the line-ends option, that would be good so I can confirm whether it's enabled or not
Nicola Soranzo
@nsoranzo
Actually packages from the r channel are also in defaults, not conda-forge
John Chilton
@jmchilton
What do we think about just ignoring the error code on the Selenium tests, so they always come back green? That way people reviewing client changes could still go in and look at the results manually, but a random failure or two on PRs that don't touch client code wouldn't fail the whole PR. Other ideas I'm thinking about include stepping back to requiring a review request before a Jenkins run launches, and partitioning the selenium suite into more, smaller jobs that run faster and could be re-run on errors more quickly.
Dave B.
@davebx
for those who are only on irc, the rest was "partitioning the selenium suite into more, smaller jobs that run faster and could be re-run on errors more quickly."
Dannon
@dannon
Heh, just booted up my gitter tab to read the rest.
Dave B.
@davebx
silly 20th century 512-character limit
Dannon
@dannon
It'd definitely be good to cut down on the noise and load, but I worry that that sort of 'allowed failures' setup will just end in stuff staying broken.
That, or maybe worse, not noticing when changes do break unrelated features (like the tool-driven tours, or something else you might not think to check)
Dave B.
@davebx
yeah, we had that issue when I joined the team, with buildbot constantly red and nobody willing to put in the hours/days/weeks to fix the tests
Dannon
@dannon
^ exactly.
Dave B.
@davebx
(it ended up being a few days, if memory serves)
Dannon
@dannon
Is it just completely random tests you're seeing failing, or can we keep marking stuff with the flaky-fail notation until we make it more robust, while allowing robust tests to continue to be useful?
John Chilton
@jmchilton
It is a complex mix of the two - it used to be that there were a few tests that would fail at 10% frequency and then random tests would fail at like 1% - but there is now this problem of the timeouts, which I haven't gotten my head around yet and which we cannot fix with flakey annotations. The other complexity is that the transient failures correspond to actual Galaxy bugs in some cases (galaxyproject/galaxy#5692 and https://github.com/galaxyproject/galaxy/issues/3782).
Dannon
@dannon
Seems to me the decision hinges on whether or not we can sort out the timeouts problem.
If we can fix the timeouts, then we should continue to let things fail (and show the failures), at least somewhat predictably.
John Chilton
@jmchilton
Alright - thanks for the input @dannon. I'll see what I can do.
Dannon
@dannon
Sure thing, let me know if I can help.
Martin Cech
@martenson
@wookoouk hey Martin, could you please repeat your issue? I got kinda lost trying to reconstruct it from past gitter messages.
Robert Leach
@hepcat72
One of our users is complaining that her runs of fastq_paired_end_interlacer are getting killed because they're exceeding their allocated memory. Lance usually handles this stuff and I'm filling in. Is there a way to increase the memory allocation in the galaxy admin settings?
Nate Coraor
@natefoo
@hepcat72: unfortunately no, it's managed via job_conf.xml
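A minimal job_conf.xml sketch of that, assuming a Slurm-backed server; the plugin, destination ids, and --mem value are illustrative, and a Tool Shed install may need the tool's full id rather than the short name shown here:

  <job_conf>
      <plugins>
          <plugin id="slurm" type="runner" load="galaxy.jobs.runners.slurm:SlurmJobRunner"/>
      </plugins>
      <destinations default="slurm_default">
          <destination id="slurm_default" runner="slurm"/>
          <destination id="slurm_8gb" runner="slurm">
              <!-- request roughly 8 GB for jobs routed here -->
              <param id="nativeSpecification">--mem=8192</param>
          </destination>
      </destinations>
      <tools>
          <!-- send the memory-hungry interlacer to the bigger destination -->
          <tool id="fastq_paired_end_interlacer" destination="slurm_8gb"/>
      </tools>
  </job_conf>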