[Adam Parkin, Test Podcast] It occurs to me I’m probably misusing the term “integration test” here; in my case my tests are really “full system tests”. So, to tweak my question slightly: is it a good idea to generate coverage information for tests that are full system tests, and if so (or at the very least if it’s not a bad idea), how do you achieve that?
Maybe a more concrete way of thinking of this: say you have a bunch of Selenium tests that you execute against your staging environment. You want to get a sense of which parts of the overall system are exercised by those tests and which are not. How would you go about discovering that?
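For a Python backend, one approach (a sketch; `myapp.server` is a hypothetical entry point, not something from the question) is to run the service on staging under coverage.py while the Selenium suite executes against it, then report on what was reached:

```shell
# On the staging host: start the service under coverage.py instead of bare python
coverage run --branch -m myapp.server &    # myapp.server is hypothetical

# ...run the Selenium suite against staging as usual...

# Shut the service down cleanly so coverage.py writes its data file, then:
coverage report -m    # or `coverage html` for an annotated per-file view
```

If the app forks worker processes (gunicorn and friends), coverage.py's parallel mode plus the COVERAGE_PROCESS_START environment variable, followed by `coverage combine`, extends this to the subprocesses as well.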
[David Kotschessa, Test Podcast] Say I want to do something crazy with tox, like run... well... every version of Python since 2.7. (It's an experiment and possibly a blog article.)
I'm confused about the overlapping domains of: virtual environments (I just use venv), and tox itself (which, I guess, is like venv, but you can still use it with venv?). Now I'm reading I might want pyenv if I want to install all these different versions of Python.
What's the simplest way?
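One common recipe (a sketch; the exact version list is up to you): install each interpreter with pyenv (`pyenv install 2.7.18`, `pyenv install 3.5.10`, and so on), then run `pyenv local 2.7.18 3.5.10 3.9.4` so all of them are discoverable on PATH, and give tox one env per version. tox then builds a fresh virtualenv for each:

```ini
# tox.ini (illustrative): one isolated virtualenv per interpreter in envlist
[tox]
envlist = py27, py35, py36, py37, py38, py39

[testenv]
deps = pytest
commands = pytest
```

tox handles the virtual environments itself here; pyenv's only job is making the interpreters available for tox to find.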
[David Kotschessa, Test Podcast] So I guess here's where I'm confused: say I'm in an activated venv and I install a package, say Django. It installs itself into the venv folder, but not otherwise on my machine. The Python version I'm using is also in venv/bin/python (or whatever), based on what version I'm using. But it's also installed globally.
So I have Python 2.7 (because macOS still ships with it) and 3.8 (because that's what I installed).
Soooo, say I want to install Python 3.2: is there an installation method that puts it in the venv but does not install it globally?
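pyenv gives exactly that split (a sketch; paths assume pyenv's defaults, and building such an old interpreter may need extra flags on modern macOS): interpreters it builds live under ~/.pyenv/versions, not in any global location, and you then create an isolated environment from the one you want:

```shell
# pyenv builds interpreters under ~/.pyenv/versions/, never into system paths
pyenv install 3.2.6    # 3.2.6 was the final 3.2 release

# venv only ships with Python 3.3+, so for 3.2 you need virtualenv instead;
# note that recent virtualenv releases dropped support for creating 3.2 envs,
# so an older virtualenv release may be required.
virtualenv -p ~/.pyenv/versions/3.2.6/bin/python py32env
source py32env/bin/activate    # now the interpreter and packages live in py32env/
```

Nothing here touches the system Pythons; deleting ~/.pyenv/versions/3.2.6 and py32env/ removes every trace.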
[Gabriele Bonetti, Test Podcast] Hi all, I currently run some test pipelines on Jenkins, entirely on AWS, creating on-demand workers (Jenkins slaves) when they are needed. Workers have been Linux-only so far, but we will introduce Windows 10 support, which needs to be tested. Have you ever tried to manage Windows 10 images on AWS? Since there is no native image, I see you have to bring your own and manage your own licenses, but that looks cumbersome. How is that going to work with on-demand dynamic workers that then disappear?
So I was wondering if this is a good idea at all, or whether there are other solutions, for example using the supported Windows Server 2016/2019 images with desktop support, which might be good enough for testing against the equivalent Windows desktop.
For example, I was also looking at BrowserStack, and there is a server -> desktop table, except for Windows 10 :-) https://www.browserstack.com/question/478#:~:text=Home%20Support%20Live-,Is%20web%20testing%20on%20Windows%20Server%20edition%20the%20same%20as,remote%20access%20to%20our%20customers.&text=For%20Windows%2010%2C%20we%20use,which%20is%20a%20Desktop%20edition.
[Kristoffer Bakkejord, Test Podcast] Anyone have experience with this course? https://www.udemy.com/course/elegant-automation-frameworks-with-python-and-pytest/
I would like to improve my team's pytest knowledge, and addressing that through a not-too-long video course seems like it could be a good idea.
[Pax, Test Podcast] Hello! Anyone here going to PyCon US next week? I’m looking for someone to replace my spot as a volunteer for the Sponsor Workshop at this year’s PyCon US, “Blackfire: Debugging Performance in Python and Django applications” (Jérôme Vieilledent, Sümer Cip), which is happening 15:00 to 16:30 (Eastern US) on Thursday, May 13.
I will give more info (what the role generally entails) here if anyone is interested. I’m hesitant to send a super long message, especially since I haven’t posted here in a while. :see_no_evil:
[Jacob Floyd, Test Podcast] Some pytest help, please:
Is there a way to access capsys in a unittest-based test class without preventing nosetest from running the test as well? (Only use capsys if running under pytest; otherwise continue with the test's current stdout/stderr redirection logic.)
I've got a unittest based test suite that I'm working on running under pytest. I'm trying to minimize the delta of my changes to avoid integration hell as my branch won't be merged until all tests are working under the new setup (more than just pytest). In the meantime the tests will continue to run under nosetest, and I will need to periodically merge in the changes from the main branch into my pytest branch (thus, minimizing deltas is really important).
I have one TestCase class that replaces sys.stdout with open(path_to_temp_file, "w") in setUp() and then restores sys.stdout = sys.__stdout__ in tearDown(). path_to_temp_file is then inspected in the tests to make sure the output of a command isn't broken.
This, of course, conflicts with pytest's output capturing, so the test seems to hang for a couple of minutes before passing.
In a pure pytest test, I would reach for the capsys fixture. But is there a way to use that in a unittest-based test? Hopefully conditionally, based on whether it's running under pytest or not.
[Jacob Floyd, Test Podcast] When pytest runs unittest tests, it invokes the TestCase passing itself in as the result object. I have a TestCase class that extends run() (and calls super), but pytest is passing in a _pytest.unittest.TestCaseFunction instance instead of a unittest.TestResult. That run() override accesses result.errors and result.failures, which of course are not present on TestCaseFunction. Given a TestCaseFunction, how can I tell if the test failed or had errors?
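Since TestCaseFunction doesn't carry the errors/failures lists, one defensive option (a sketch; the attribute names checked are the standard unittest.TestResult ones, and the class is invented for illustration) is to duck-type: only run the post-test inspection when the result object actually exposes those lists, and record "unknown" otherwise:

```python
import unittest


class RunOverrideTest(unittest.TestCase):
    def run(self, result=None):
        # Snapshot list lengths if the result has them; () otherwise.
        errors_before = len(getattr(result, "errors", ()))
        failures_before = len(getattr(result, "failures", ()))
        outcome = super().run(result)
        if hasattr(result, "errors") and hasattr(result, "failures"):
            # Real unittest.TestResult: did *this* test add an error/failure?
            self.last_test_failed = (
                len(result.errors) > errors_before
                or len(result.failures) > failures_before
            )
        else:
            # pytest's TestCaseFunction: no public errors/failures lists,
            # so the extra bookkeeping is skipped rather than crashing.
            self.last_test_failed = None
        return outcome

    def test_passes(self):
        self.assertTrue(True)
```

Under pytest the custom logic simply doesn't run; if it must run there too, the alternative is reimplementing it with pytest hooks rather than poking at TestCaseFunction's private state.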
[Adam Parkin, Test Podcast] Anyone have experience testing FastAPI apps that use SQLAlchemy? I’ve been spoiled by Django’s TestCase class, which gives you nice test isolation by wrapping everything in a transaction and rolling back after a test completes. I’d like to do something similar in FastAPI, and while I can create a pytest fixture that starts a transaction, yields a database session, then rolls back the transaction, that means that (as an example) API testing becomes more difficult: any data a test inserts during setup won’t be visible to the API endpoint, since it was never committed.
Would appreciate any experiences, tips, or tricks from those who’ve done some testing of a FastAPI + SQLAlchemy app.