`error != None` — seeing that, I thought "this guy might be a little newer to Python" (the idiomatic check is `error is not None`).
[Adam Parkin, Test Podcast] I’d be interested in knowing why you’d want to introduce Black gradually. We started doing it file by file: if someone was working on a file and had Black installed, it would format the entire file, but we found that made the review process more difficult. We also found that, because not everyone had Black installed (or had their editor set to format on save), people without Black set up would make changes to a file, and Black would then reformat it when someone else edited it later (again making for noisy diffs).
We ended up doing a “big bang” commit where we ran Black over the entire codebase and reviewed that in one go, and that worked really well for us. As a plus, you can also add the hash of that “big bang” commit to a git-blame-ignore-revs file and tell `git blame` to skip it (this blog post outlines that idea). And to make sure Black formatting stays enforced, we added Black to our CI pipeline and fail the build if it finds files that haven’t been formatted.
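For reference, a minimal sketch of what that setup might look like — the paths are assumptions, but `--check` and `blame.ignoreRevsFile` are real Black/git options:

```shell
# CI step: fail the build if any file would be reformatted.
# --check makes Black report and exit non-zero instead of rewriting.
black --check .

# Locally, tell git blame to skip the "big bang" commit; the file
# lists the full hashes of commits to ignore, one per line.
git config blame.ignoreRevsFile .git-blame-ignore-revs
```

With the config set, `git blame` attributes lines to the commit that last made a meaningful change rather than to the reformatting commit.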
[Erich Zimmerman, Test Podcast] Hey all!
I have a problem that seems to come up regularly, and I'm not really sure about the best Pytest-y way to solve it. This example is about as concise as I could make the issue.
```python
@pytest.fixture(params=['device 1', 'device 2', 'device 3'])
def test_device(request):
    return request.param

def test_checking_urls_on_device(test_device):
    url_list = get_urls(test_device)
    for url in url_list:
        ...  # call the URL, make an assertion
```
This is not an ideal test, because I would like each URL in the list to be a separate parameterized test case, `def test_checking_urls_on_device(test_device, test_url)`, with `test_url` being the itemized parameters coming out of the test case.
The concise problem statement is that the URL parameters differ depending on the device; we look up the URL list as a function of the device. I cannot know what the `test_url` values would be until I already know what `test_device` is.
I think that `pytest_generate_tests` is my best option here, and I'm not even sure it will really help me, because I need to know the value of `test_device` before I can create the `test_url` parameters.