These are chat archives for astropy/astropy
saimn on Freenode: how does the Travis queue work, just to be sure I understand correctly:
saimn on Freenode: only one build is running, both for master (merges) and PRs?
saimn on Freenode: it's taking >3 hours per build now!
bsipocz: we have 5 threads running simultaneously (that will go up to 15 tomorrow, I hope at least).
bsipocz: and everything that is a PR or a branch push under the astropy organization goes into the same queue
saimn on Freenode: ah ok, so 5 jobs?
bsipocz: however I would suggest cancelling the OSX ones, as Travis is overloaded there, so OSX jobs wait an extra ~30-60 minutes in a separate queue
bsipocz: so it's rather bad atm, we have ~4 hours of backlog
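Dropping the macOS jobs from the build matrix might look roughly like this in `.travis.yml` (an illustrative sketch only, not the actual astropy configuration):

```yaml
# Illustrative .travis.yml fragment (assumption, not astropy's real config):
# restrict the matrix to Linux so no job sits in the separate,
# overloaded macOS queue.
os:
  - linux
```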
saimn on Freenode: I was wondering why the latest build took more than 3 hours
saimn on Freenode: it was more like 2 hours a few days ago
bsipocz: if you have something that needs to be tested and adjusted, I suggest enabling Travis on your fork so you don't need to wait the 4 hours
bsipocz: yes, I saw that too. Actually I'm not sure whether it's a one-off Travis issue that we get slower machines in the cloud, or whether more tests were added that slow things down. Sadly the benchmarks are not running yet
bsipocz: you must be right, something is going rather badly
bsipocz: @mirca are you here?
saimn on Freenode: 4 tests that take 30 sec each, times the number of jobs ...
saimn on Freenode: astropy/stats/tests/test_spatial.py::test_ripley_large_density[points0-0-1] from your K functions are burning too much CI time
bsipocz: Thanks saimn, happy to see you got to the same conclusion, but faster and nicer
bsipocz: saimn: do you plan to open an issue for it or shall I?
bsipocz: test_spatial.py needs to be sorted out; it takes 4.5 minutes to run on my computer and, as saimn points out, almost an extra hour in total on Travis
bsipocz: whatever works and is still sensible
bsipocz: nope, we got mirca's attention here :)
saimn on Freenode: ok great!
saimn on Freenode: btw it could be interesting to report the slowest tests (50?) on one of the Travis jobs
bsipocz: how much overhead does it add to get that stat? If it doesn't add much, I think it's an excellent idea
saimn on Freenode: nice graph :)
bsipocz: also the unit support in modeling probably added some extra time
saimn on Freenode: yeah probably
saimn on Freenode: I don't know exactly the overhead of collecting the test timings, but it would also let us know the real timing of each slow test
saimn on Freenode: depending on their CPU, I mean
saimn on Freenode: for me the slowest takes 30 sec per test, it's probably more on Travis ;)
bsipocz: probably it does take longer on Travis; running the file containing the four slowest tests took much longer for me:
bsipocz: real 4m23.816s  user 4m18.258s  sys 0m2.739s
saimn on Freenode: thanks @mirca !
saimn on Freenode: bsipocz: https://travis-ci.org/saimn/astropy/jobs/243453986#L763 !
saimn on Freenode: there it's ~120 s per test
saimn on Freenode: so 8 min per job
saimn on Freenode: have to go to bed, so I will push a PR with --durations later
bsipocz: that explains the +1 hour
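The `--durations` option saimn mentions is pytest's built-in report of the N slowest test phases, printed at the end of a run. Wiring it into one Travis job might look like this (a sketch; the `SETUP_CMD` layout is an assumption, not taken from astropy's actual config):

```yaml
# Illustrative .travis.yml fragment (assumed variable names, not astropy's
# real matrix): one job passes pytest's built-in --durations flag so the
# 50 slowest tests are listed in the build log.
matrix:
  include:
    - env: SETUP_CMD='test -a "--durations=50"'
```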
bsipocz: Thanks. That PR is not critical for the freeze, so no rush.
saimn on Freenode: so I added the option in a build without --mpl
saimn on Freenode: yep
saimn on Freenode: bye
bsipocz: @mirca - you know that you can opt in to run Travis on pushes, so it will run when you add a commit to any of your branches?
bsipocz: (no need to open a PR against your fork)