```
def ensure_some_default_settings() -> bool:
    ...  # returns True or False
```
did you mean `if ensure_some_default_settings():`?
my_project.connections.redis_client, i.e. you need to patch it from the point of view of the file that uses it. If there are cross-references like that (with multiple layers of referencing), it becomes harder, since patch only replaces one reference, and a reference that exists separately in multiple files won't be replaced.
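A self-contained sketch of the "patch where it's used" point. The `my_project` modules below are faked in memory purely for illustration; the names mirror the discussion, not a real codebase:

```python
# Why you patch the name where it is *used*, not where it is defined.
import sys
import types
from unittest import mock

# Fake "my_project.connections", where redis_client is defined.
connections = types.ModuleType("my_project.connections")
connections.redis_client = "real-client"

# Fake "my_project.service", which did `from .connections import redis_client`
# at import time, so it holds its *own* reference to the same object.
service = types.ModuleType("my_project.service")
service.redis_client = connections.redis_client

package = types.ModuleType("my_project")
package.connections = connections
package.service = service
sys.modules.update({
    "my_project": package,
    "my_project.connections": connections,
    "my_project.service": service,
})

# Patching the defining module does NOT change the copy held by service:
with mock.patch("my_project.connections.redis_client", "fake"):
    assert service.redis_client == "real-client"

# Patching the reference in the module that uses it does:
with mock.patch("my_project.service.redis_client", "fake"):
    assert service.redis_client == "fake"
```

This is the same rule the `unittest.mock` docs describe as "where to patch": each `from x import y` creates a new name, and `patch` only swaps out the one name you target.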
`capsys.readouterr()` but no success ^^
`assert expected in captured.out` would work?
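For context, the usual shape of a `capsys` check looks like this (a minimal sketch; `greet` is an invented example function):

```python
# capsys is a built-in pytest fixture: it captures stdout/stderr written
# during the test, and readouterr() returns what was captured so far.
def greet():
    print("hello world")

def test_greet(capsys):
    greet()
    captured = capsys.readouterr()
    assert "hello world" in captured.out
```

Note that `readouterr()` drains the capture buffer, so it has to be called *after* the code that prints, and each call only returns output since the previous call.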
[a4z, Test Podcast] I wonder if I could get some startup help here, please!
I decided to share my helper scripts with colleagues, and the best option might be to distribute them as a pip package.
So I create a pip package, say, mytool
Of course I know that the mytool scripts work ( =@ ), but just to be sure, I would like to add some tests.
So I have this pip project layout:
```
mytool/
tests/
LICENSE
setup.py
... (rest of the packaging files)
```
Now, what do I do so that the files in tests can import mytool?
And optimally, so that even VS Code knows about mytool when editing the test files.
(You might notice from my question that Python is not my day job.)
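The common answer for this layout is an "editable" install: with a `setup.py` like the sketch below, running `pip install -e .` from the project root (ideally inside a virtualenv) makes `import mytool` work both for the tests and for VS Code, once VS Code is pointed at that interpreter. A minimal sketch; the name and version are placeholders:

```python
# setup.py (minimal sketch)
from setuptools import setup, find_packages

setup(
    name="mytool",
    version="0.1.0",
    packages=find_packages(exclude=["tests"]),
)
```

After `pip install -e .`, the installed package is a link back to the source tree, so edits to mytool are picked up without reinstalling, and pytest run from the project root can import it normally.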
[Erich Zimmerman, Test Podcast] General question -- in past testing, I have made use of delayed timing on assertions. For example, I may send an event to a system, but the assertion based on the event isn't immediate.
```
some_object = MyClass()
related_object = NextClass(some_object)
assert related_object.updated_property == 'action'
```
In NUnit and others, there is support for basically a polling assertion, where you check the predicate until a timeout is reached.
Pytest and the Python assertions don't support this directly (well, as far as I can tell), but I don't even find any conversations online about doing something like this.
So, I'm wondering if this approach doesn't "fit" in the Pytest approach to testing?
I wrote a simple function on my own called `wait_for_assert` that takes a predicate function and raises an AssertionError if the predicate is still failing after some time, so I'm good with that on the "works for me" paradigm. But I'm just curious whether pytest thinking would point me down a different road.
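For reference, such a helper can look roughly like this (a sketch; the name and defaults mirror the question, not any particular library):

```python
import time

def wait_for_assert(predicate, timeout=5.0, interval=0.1, message=None):
    """Poll `predicate` until it returns truthy or `timeout` seconds pass.

    Raises AssertionError if the predicate never succeeds in time.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return
        if time.monotonic() >= deadline:
            raise AssertionError(
                message or "condition not met within %s seconds" % timeout
            )
        time.sleep(interval)
```

Third-party retry libraries (e.g. tenacity) cover similar ground with more features, but for test code a small helper like this is a perfectly reasonable road to stay on.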
[David Kotschessa, Test Podcast] Thank you @brian for an episode back in 2019 "From python script to maintainable package."
I created my very first pip installable package, which is a community provider for faker to create airport data. It's a tiny thing, but it was a great lesson in documentation, setting up tests, packaging, etc.
`flit` was definitely the way to go too
[Tony Cappellini, Test Podcast] How do you deal with Python 3's `end=` in the `print()` function when using the Python logger?
print('Python Rocks', end='')
How can I make the logger replace print() calls like the above?
I'm adding the logger to a program where many of the print() calls use end=
From the logger documentation, I don’t see anything like end= for the logger
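Right, logging has no `end=` equivalent: each `logger.info()` call emits one complete record. One workaround is to buffer fragments yourself and only log completed lines. A hedged sketch; `LineBuffer` is invented here for illustration, not part of the stdlib:

```python
import logging

class LineBuffer:
    """Accumulate print-style fragments; emit one log record per line."""

    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level
        self._parts = []

    def write(self, text, end="\n"):
        # Mirrors print()'s end= convention: a newline-terminated end
        # flushes the buffered fragments as a single log record.
        if end.endswith("\n"):
            self._parts.append(text)
            self.logger.log(self.level, "".join(self._parts))
            self._parts = []
        else:
            self._parts.append(text + end)
```

Usage would be e.g. `out = LineBuffer(logging.getLogger("app"))`, then `out.write("Python ", end="")` followed by `out.write("Rocks")`, which logs a single "Python Rocks" record. The trade-off is that nothing appears until the line completes, which is inherent to logging's record-per-call model.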
[Adam Parkin, Test Podcast] So I want to run
coverage.py on my work project, but here’s a wrinkle: most of our tests are integration tests, so they spin up a set of Docker containers (like each REST API is in a separate running container), and then the tests make HTTP calls to the various services and assert on results.
That’s going to mean that coverage won’t see what parts of the services get exercised by the tests, doesn’t it? Like if I have a test that looks like:
result = requests.get('http://localhost:1234/some/path/on/a/container')
coverage.py is running in the container where that test is running, not where the web server is running, so all it sees is that the test made an HTTP call to some endpoint somewhere, right? Like there’s no way to tell coverage “oh wait, http://localhost:1234/some/path/on/a/container is actually part of our project, so figure out what code in another container is running and do coverage over that” or is there? Anyone have any experience or ideas on how to get coverage information in this situation?
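One approach coverage.py does support (a hedged sketch, not a turnkey recipe): measure *inside* each container rather than from the test runner. coverage.py's subprocess support activates measurement when the `COVERAGE_PROCESS_START` environment variable points at a coverage config file and `coverage.process_startup()` runs early in interpreter startup, e.g. from a `sitecustomize.py` placed on the container's `PYTHONPATH`:

```python
# sitecustomize.py inside each service container (sketch).
# Assumes coverage.py is installed in the container image and
# COVERAGE_PROCESS_START points at a config with `parallel = true`,
# so each process writes its own .coverage.* data file.
import coverage

coverage.process_startup()
```

After the test run, collect the `.coverage.*` data files from the containers (e.g. via a mounted volume), then `coverage combine` and `coverage report` on the host; if the source lives at a different path inside the container than on the host, the `[paths]` section of the config can remap them. It adds moving parts, but it directly answers "which code in the other container ran."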
[Adam Parkin, Test Podcast] Unfortunately in my specific case here that (running the server locally alongside the test) is not an option (or at least not without a lot of work to make that happen, which maybe that’s the answer :shrug).
But yes, effectively it is “test is running on one machine, code under test is running on a different machine”.
[Adam Parkin, Test Podcast] It occurs to me I’m probably misusing the term “integration test” here, in my case my tests are really “full system tests”, so maybe to tweak my question slightly: is it a good idea to generate coverage information for tests that are full system tests, and if so (or at the very least if it’s not a bad idea) how do you achieve that?
Like maybe a more concrete way of thinking of this: say you have a bunch of Selenium tests that you execute against your staging environment. You want to get a sense of which parts of the overall system are exercised by those tests and which are not; how would you go about discovering that?