[David Kotschessa, Test Podcast] So I guess here's where I'm confused. Say I'm in an activated venv and I install a package, say Django: it installs itself inside the venv (its site-packages, with any console scripts in venv/bin), but not otherwise on my machine.
The Python interpreter I'm using is also in venv/bin/python (or wherever), based on what version I'm using. But it's also installed globally.
So I have Python 2.7 (because Mac still ships with it) and 3.8 (because that's what I installed).
Soooo, say I want to install Python 3.2: is there an installation method that puts it in the venv but does not install it globally?
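For context, here's a quick way to see what an activated venv actually contains, using only the standard library (the printed paths are illustrative): the venv has its own site-packages, but its bin/python is just a shim for an interpreter installed elsewhere, which is why a new Python version can't live only inside a venv.

```python
import sys
import sysconfig

print(sys.executable)   # e.g. /path/to/venv/bin/python (a symlink/shim)
print(sys.prefix)       # the venv directory itself
print(sys.base_prefix)  # the "real" installation the venv was created from
print(sysconfig.get_paths()["purelib"])  # where pip installs packages (venv/lib/.../site-packages)
```

(Tools like pyenv approach this by installing extra interpreters per-user, under your home directory, rather than globally or per-venv.)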
[Gabriele Bonetti, Test Podcast] Hi all, I currently run some test pipelines on Jenkins entirely on AWS, creating on-demand workers (Jenkins slaves) when they are needed. Workers have been Linux-only so far, but we will introduce Windows 10 support, which needs to be tested. Have you ever tried to manage Windows 10 images on AWS? Since there is no native image, it seems you have to bring your own and manage your own licenses, which looks cumbersome. How is that going to work with on-demand, dynamic workers that then disappear?
So I was wondering whether this is a good idea at all, or whether there are other solutions, for example using the supported Windows Server 2016/2019 images with desktop support, which might be good enough as a stand-in for testing on the equivalent Windows desktop edition.
For example, I was also looking at BrowserStack, and they document which server editions stand in for which desktop editions, except for Windows 10 (where they use an actual Desktop edition) :-) https://www.browserstack.com/question/478#:~:text=Home%20Support%20Live-,Is%20web%20testing%20on%20Windows%20Server%20edition%20the%20same%20as,remote%20access%20to%20our%20customers.&text=For%20Windows%2010%2C%20we%20use,which%20is%20a%20Desktop%20edition.
[Kristoffer Bakkejord, Test Podcast] Anyone have experience with this course? https://www.udemy.com/course/elegant-automation-frameworks-with-python-and-pytest/
I would like to improve my team's pytest knowledge, and addressing that through a not-too-long video course seems like it could be a good idea.
[Pax, Test Podcast] Hello! Anyone here going to PyCon US next week? I'm looking for someone to take over my spot as a volunteer for one of the Sponsor Workshops at this year's PyCon US, “Blackfire: Debugging Performance in Python and Django applications” (Jérôme Vieilledent, Sümer Cip), which is happening 15:00 to 16:30 (Eastern US) on Thursday, May 13.
I will give more info here (what the role generally entails) if anyone is at all interested. I'm hesitant to send a super long message, especially when I haven't posted here in a while. :see_no_evil:
[Jacob Floyd, Test Podcast] Some pytest help, please:
Is there a way to access capsys in a unittest-based test class without preventing nosetest from running the test as well? (That is: only use capsys when running under pytest, and otherwise continue with the test's current stdout/stderr redirection logic.)
I've got a unittest-based test suite that I'm working on running under pytest. I'm trying to minimize the delta of my changes to avoid integration hell, as my branch won't be merged until all tests are working under the new setup (more than just pytest). In the meantime the tests will continue to run under nosetest, and I will need to periodically merge the changes from the main branch into my pytest branch (thus, minimizing deltas is really important).
I have one TestCase class that replaces sys.stdout with open(path_to_temp_file, "w") in setUp() and then restores sys.stdout = sys.__stdout__ in tearDown(). path_to_temp_file is then inspected in the tests to make sure the output of a command isn't broken.
This, of course, conflicts with pytest's output capturing, and so the test seems to hang for a couple of minutes before passing.
In a pure pytest test, I would reach for the capsys fixture. But is there a way to use that in a unittest-based test? Hopefully conditionally, based on whether it's running under pytest or not.
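One possible approach, sketched below with made-up names: pytest's documented way to mix fixtures into unittest.TestCase subclasses is an autouse fixture method, and nose never invokes that method as a fixture, so the test can branch on whether the attribute was set. (Caveat: support for capsys specifically inside TestCase subclasses has varied across pytest versions, so verify this against yours.)

```python
import unittest

import pytest


class CommandOutputTest(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def _inject_capsys(self, capsys):
        # Only pytest runs autouse fixtures; under nose this method is
        # never called, so self.capsys stays unset.
        self.capsys = capsys

    def test_command_output(self):
        if hasattr(self, "capsys"):
            # running under pytest: let pytest's capturing do the work
            print("expected output")
            output = self.capsys.readouterr().out
        else:
            # running under nose: fall back to the existing
            # stdout-redirection-to-temp-file logic (stand-in here)
            output = "expected output\n"
        self.assertIn("expected output", output)
```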
[Jacob Floyd, Test Podcast] When pytest runs unittest tests, it does something like calling the TestCase's run() with itself (the pytest item) as the result object. I have a TestCase class that extends run() (and calls super()), but pytest is passing in a _pytest.unittest.TestCaseFunction instance instead of a unittest.TestResult.
That run() override accesses result.errors or result.failures, which of course are not present on TestCaseFunction. Given a TestCaseFunction, how can I tell if the test failed or had errors?
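A duck-typing sketch of one way to answer that (note: _excinfo is private pytest API; as far as I can tell, TestCaseFunction collects exceptions there via its addError()/addFailure() methods, but verify on your pytest version):

```python
def _test_had_problems(result):
    """Work with both unittest.TestResult and pytest's TestCaseFunction."""
    errors = getattr(result, "errors", None)
    failures = getattr(result, "failures", None)
    if errors is not None or failures is not None:
        # a real unittest.TestResult (or nose's subclass of it)
        return bool(errors or failures)
    # assume pytest's TestCaseFunction, which records exceptions from
    # addError()/addFailure() in its private _excinfo list
    return bool(getattr(result, "_excinfo", None))
```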
[Adam Parkin, Test Podcast] Anyone have experience testing FastAPI apps that use SQLAlchemy? I've been spoiled by Django's TestCase class, which gives you nice test isolation by wrapping everything in a transaction and rolling it back after each test completes. I'd like to do something similar in FastAPI, and while I can create a pytest fixture that starts a transaction, yields a database session, then rolls back the transaction, that makes (as an example) API testing more difficult: any setup a test does to insert data won't be visible to the API endpoint, since it was never committed.
Would appreciate any experiences, tips, or tricks from those who've maybe done some testing of a FastAPI + SQLAlchemy app.
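One pattern that might help, as a sketch (app, get_db, and engine are assumed names from your application): bind a session to an externally controlled transaction on a single connection, and override the endpoint's database dependency so the API shares that session. Setup data inserted by the test is then visible to the endpoint without committing, and the final rollback restores isolation.

```python
import pytest
from fastapi.testclient import TestClient
from sqlalchemy.orm import sessionmaker

from myapp.db import engine          # hypothetical
from myapp.main import app, get_db   # hypothetical


@pytest.fixture
def db_session():
    # one connection, one outer transaction that the fixture controls
    connection = engine.connect()
    transaction = connection.begin()
    session = sessionmaker(bind=connection)()
    try:
        yield session
    finally:
        session.close()
        transaction.rollback()  # undo everything the test did
        connection.close()


@pytest.fixture
def client(db_session):
    # route the endpoints to the test's session so uncommitted
    # setup data is visible to them
    app.dependency_overrides[get_db] = lambda: db_session
    try:
        yield TestClient(app)
    finally:
        app.dependency_overrides.pop(get_db, None)
```

If the code under test calls session.commit() itself, this plain version can break isolation; the SQLAlchemy docs' "joining a session into an external transaction" recipe (SAVEPOINTs) covers that case.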
Separate question: if you mark tests (e.g. with @pytest.mark.integration), you can exclude those tests by doing -m "not integration", which works, but means you now have to remember to supply that option. Is there a way to configure pytest so that by default (i.e. without specifying anything on the command line) tests with the integration mark are skipped?
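Two possibilities come to mind. The simplest is putting addopts = -m "not integration" in your pytest config file; a -m given on the command line comes after addopts, so it should take precedence when you do want the integration tests. A more flexible alternative is a conftest.py hook, sketched here, that skips integration-marked tests unless the user supplies their own -m expression:

```python
# conftest.py
import pytest


def pytest_collection_modifyitems(config, items):
    if config.getoption("markexpr"):
        # an explicit -m was given on the command line; don't interfere
        return
    skip_marker = pytest.mark.skip(reason="integration test (use -m integration to run)")
    for item in items:
        if item.get_closest_marker("integration"):
            item.add_marker(skip_marker)
```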
[David Kotschessa, Test Podcast] Man. Trying to debug a few tests that:
• Fail (sometimes) when I run my whole test suite in random order using pytest-randomly (reproducible with the random seed)
• Pass when I run them independently (even if I use the same random seed)
• Pass when I manually run them in the same order
They are very isolated unit tests.
[Chris NeJame, Test Podcast] Unsolicited advice:
Consider the transitive property. If the expected ordering is A < B < C, then A < C can be established by asserting A < B and B < C separately; transitivity covers the rest.
If you know function A should pass X to function B, you can assert that it does, and then assert that B handles X separately without needing to involve A.
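For instance, a sketch with made-up names, where process() plays the role of A and save() plays B:

```python
from unittest import mock

import myapp  # hypothetical module in which process() hands cleaned data to save()


def test_process_passes_cleaned_data_to_save():
    # assert that A passes X to B, without involving B's behavior
    with mock.patch.object(myapp, "save") as fake_save:
        myapp.process("  raw value  ")
    fake_save.assert_called_once_with("raw value")


def test_save_stores_cleaned_data():
    # assert that B handles X, without involving A at all
    assert myapp.save("raw value") is not None
```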
But consider also how these things help you (and other developers) to debug problems. "Debugging" is not a singular thing. It has many components. For example, pinpointing, i.e. isolating the condition under which the bug occurs. Another is "locating", i.e. finding the location of the bug in the code so it can be fixed. A third is "repairing", i.e. changing the code to remove the problem. These 3 components all assume, however, that the bug has already been discovered.
How would breaking things out as atomically as possible (via the transitive property) help with these three components? How would it hurt them? Are there other things that could be done to enhance them?
There can be trade-offs. For example, focusing exclusively on breaking things out as atomically as possible might help with locating a bug, and maybe even with repairing it (albeit indirectly, since the code could end up more "decoupled" in general, although that's not a guarantee either). But it could also create a lot of cruft to sift through when trying to pinpoint a bug. How might this impact the overall "debugging" process, though?
[Adam Parkin, Test Podcast] Question: can you parametrize a pytest test by a fixture? Like say I have a test that takes a fixture, and I write a second test that takes a different fixture but is otherwise identical. I’d like to write something like:
"fixture_name", [ some_fixture, some_other_fixture ] ) def test_with_parameterized_fixture(fixture): ... modify fixture in some way ... ... do some assertion about fixture ...
But that doesn't work, as some_fixture and some_other_fixture aren't defined in the test module (the fixtures are in my conftest). Any ideas?
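One known pattern (not the only one): parametrize on the fixture names as strings, then resolve them at test time with request.getfixturevalue(), which also finds fixtures defined in conftest. A sketch, with the fixtures inlined here for self-containment:

```python
import pytest


@pytest.fixture
def some_fixture():
    return {"kind": "first"}


@pytest.fixture
def some_other_fixture():
    return {"kind": "second"}


@pytest.mark.parametrize("fixture_name", ["some_fixture", "some_other_fixture"])
def test_with_parameterized_fixture(fixture_name, request):
    fixture = request.getfixturevalue(fixture_name)  # resolve the named fixture
    fixture["touched"] = True   # modify fixture in some way
    assert fixture["touched"]   # do some assertion about fixture
```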
[Erich Zimmerman, Test Podcast] Question about "best approach"
I have a test case which will be parameterized. The parameters are simple strings, but they are loaded from a text file. The question is how best to populate the parameters.
Option 1: use pytest_generate_tests to load the values from the file and parameterize the test with them, or
Option 2: call a function that loads the parameters directly in my parameterization, e.g. passing its return value to @pytest.mark.parametrize (see the sketch after this message).
Personally, I like the second approach, but I'm not sure if I'm missing something.
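A sketch of both options, assuming a params.txt file with one parameter string per line and test arguments named value/file_value (all names are made up):

```python
import pytest


def load_params():
    # hypothetical helper: one parameter string per line of params.txt
    with open("params.txt") as f:
        return [line.strip() for line in f if line.strip()]


# Option 2: call the loader directly in the decorator.
# load_params() runs once, at collection time.
@pytest.mark.parametrize("value", load_params())
def test_values(value):
    assert value  # whatever the real assertion is


# Option 1 (typically in conftest.py): hook-based parametrization.
# Handy if many tests share the data, or you need access to config/CLI options.
def pytest_generate_tests(metafunc):
    if "file_value" in metafunc.fixturenames:
        metafunc.parametrize("file_value", load_params())


def test_file_values(file_value):
    assert file_value
```

Both read the file at collection time; the hook version just centralizes the loading, so preferring the second approach for a single test seems reasonable.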