[Adam Parkin, Test Podcast] I’d be interested in knowing why you’d want to introduce black gradually. For us, we started file by file: if someone was working on a file and had black installed, it would reformat the entire file, but we found that made the review process more difficult. We also found that because not everyone had black installed (or their editor set to format on save), over time people without black set up would make changes to a file, and black would then reformat it when someone else edited the file later (again making for noisy diffs).
We ended up doing a “big bang” commit where we ran black over the entire codebase and reviewed it in one go, and that worked really well for us. As a plus, you can add that “big bang” commit to a git-blame-ignore-revs file and tell git blame to ignore it when assigning blame (this blog post outlines the idea). And to make sure black formatting stays enforced, we added black to our CI pipeline and fail the build if it finds files that haven’t been formatted by black.
[Erich Zimmerman, Test Podcast] Hey all!
I have a problem that seems to come up regularly, and I'm not really sure about the best Pytest-y way to solve this problem. This example is about the most concise I could find for the issue.
```
import pytest

@pytest.fixture(params=['device 1', 'device 2', 'device 3'])
def test_device(request):
    return request.param

def test_checking_urls_on_device(test_device):
    url_list = get_urls(test_device)
    for url in url_list:
        ...  # call the URL, make an assertion
```
This is not an ideal test, because I would like each URL in the list to be a separate parameterized test case, `def test_checking_urls_on_device(test_device, test_url)`, with `test_url` being the itemized parameters coming out of the test case.
The concise problem statement is that depending on the device, the URL parameters will be different; we look up the URL list as a function of the device. I cannot know what the `test_url` values would be until I already know what `test_device` is.
I think that `pytest_generate_tests` is my best option here, and I'm not even sure if that will really help me, because I need to know the value of `test_device` before I can create the `test_url` parameters.
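Something along these lines is what I'm picturing, assuming the device-to-URL lookup can happen at collection time (the DEVICE_URLS dict here is just a stand-in for get_urls, and with this approach the test_device fixture above wouldn't be needed):
```
import pytest

# Stand-in for get_urls(); the real lookup would come from the code under test.
DEVICE_URLS = {
    'device 1': ['http://example/1a', 'http://example/1b'],
    'device 2': ['http://example/2a'],
    'device 3': ['http://example/3a', 'http://example/3b'],
}

def pytest_generate_tests(metafunc):
    # Only parametrize tests that take both arguments.
    if {'test_device', 'test_url'} <= set(metafunc.fixturenames):
        pairs = [
            (device, url)
            for device, urls in DEVICE_URLS.items()
            for url in urls
        ]
        metafunc.parametrize(('test_device', 'test_url'), pairs)

def test_checking_urls_on_device(test_device, test_url):
    ...  # call test_url for test_device, make an assertion
```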
[Jacob Floyd, Test Podcast] I'm trying to fix a test. It's in a unittest class, but pytest is the official runner, so I'm using pytest features. Pulling the test out of the unittest class didn't make a difference.
The test is supposed to test signal handling, i.e. when the app gets SIGHUP or SIGINT, reload() should be called, as should sys.exit (or SystemExit should be raised).
If I run the one test all by itself, everything passes. But when I run it in the suite, something else seems to get the SIGHUP/SIGINT, causing pytest to exit. Whoever wrote this before tried to use threading, but that just swallowed exceptions, so no one knew this test was broken. Instead, some time after the test "passed", pytest would exit saying it got a KeyboardInterrupt.
So, is there a good way to run a pytest test with subprocess or multiprocessing to make sure the signal is only going to the process under test?
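Roughly the kind of isolation I mean: run the signal-handling code in a child process so the signal never reaches the pytest process itself (app_main below is just a stand-in for the real handler setup):
```
import multiprocessing
import os
import signal
import sys
import time

def app_main():
    # Stand-in for the real app: install handlers, then wait for a signal.
    def handle(signum, frame):
        # the real handler would call reload() before exiting
        sys.exit(0)
    signal.signal(signal.SIGHUP, handle)
    signal.signal(signal.SIGINT, handle)
    signal.pause()  # block until a signal arrives

def test_sighup_exits_child_cleanly():
    proc = multiprocessing.Process(target=app_main)
    proc.start()
    time.sleep(0.5)  # give the child time to install its handlers
    os.kill(proc.pid, signal.SIGHUP)  # delivered only to the child
    proc.join(timeout=5)
    assert proc.exitcode == 0
```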
[Pax, Test Podcast] I’m creating tests involving an email marketing service (e.g. mailchimp). I’m trying to decide whether to pass the campaign’s `recipients` explicitly, as a separate parameter, to the method that creates/schedules campaigns, which would make it safer to avoid scheduling a campaign to the actual list during a test.
On the other hand, it doesn’t seem right to pass the recipients as a separate parameter from the “email type”, since an email/campaign always goes to the same list/recipients. (Unless of course the method is used in tests.)
Based on my collective knowledge about clean code (mostly from books), passing a separate `recipients` parameter just for the sake of tests is not a good design decision; it’s the responsibility of the person writing the tests (currently me, but it could be a different person in the future) to clean up (delete campaigns) or mock appropriately (perhaps stub the `recipients` prop) to make sure no emails generated from tests are sent to actual users. But I also think there’s so much I don’t know, so I would like to hear what others think.
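To make the mocking option concrete, this is roughly what I mean by stubbing the recipients (campaign_service, get_recipients and schedule_campaign are made-up names for whatever the real code looks like):
```
from unittest import mock

import campaign_service  # hypothetical module that creates/schedules campaigns

def test_schedule_campaign_never_resolves_real_list():
    # Stub the recipients lookup so the real list is never used.
    with mock.patch.object(
        campaign_service,
        "get_recipients",
        return_value=["sandbox@example.com"],
    ) as fake_lookup:
        campaign_service.schedule_campaign("weekly-digest")

    # The campaign was scheduled against the stubbed recipients only.
    fake_lookup.assert_called_once_with("weekly-digest")
```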
[Kristoffer Bakkejord, Test Podcast] I am looking for something that lets me parametrize tests by setting a class attribute - something like this (not a working example):
```
@pytest.mark.parametrize("source_name", ["source_1.txt", "source_2.txt"])
class TestSource:
    def params_init(self, source_name):
        self.source = source_name

    def test_source(self):
        assert self.source in ["source_1.txt", "source_2.txt"]
```
Has anyone come across something like this?
This would allow me to refer to the class instance when running my tests (`self.source` in the above example).
Maybe this is what I'm looking for: https://docs.pytest.org/en/stable/example/parametrize.html#a-quick-port-of-testscenarios (thinking face emoji)
@pytest.fixture(params=["source_1.txt", "source_2.txt"], scope="class", autouse=True)
def source_file_name(request):
return request.param
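A variant of that fixture could also stash the value on the class via request.cls, so the test methods can read it as self.source without taking an extra argument - a rough sketch:
```
import pytest

@pytest.fixture(params=["source_1.txt", "source_2.txt"], scope="class", autouse=True)
def source_file_name(request):
    # Attach the parameter to the test class; the whole class runs once per param.
    # (Assumes the tests using this fixture live inside a class.)
    request.cls.source = request.param
    return request.param

class TestSource:
    def test_source(self):
        assert self.source in ["source_1.txt", "source_2.txt"]
```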
[Kristoffer Bakkejord, Test Podcast] @Salmon Mode Indeed, if the test were as simple as in the above example, that would work. But I have a test class with several methods, and I would like to run the entire test class with different test inputs.
If I parametrize the test class, I will have to pass all argnames (of pytest.mark.parametrize) to all methods of the test class. I don't want that, as it would result in many methods getting variables they're not using.
It's a set of tests which are interconnected (the second test depends on the first and so on), so using a class to tie these together seemed like a good idea. It's a bit difficult to explain here. Not so sure I want to go that route anymore. Maybe I should look into pytest-subtests or pytest-check instead (thinking face emoji)