    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] The problems with such a system likely aren't obvious now, and may not be for a while, but, like I said, taking on technical debt is about short-term gains with long-term consequences. The problems will become more apparent as you build out the application; as more complexity is introduced, they will grow out of control.

    The systems I worked with all had in common an inability to test individual steps in isolation. This meant that the automated tests had to operate at the end-to-end level, which made them very expensive and time-consuming to run, and they would break regularly, especially when third-party dependencies were having issues. They would also prevent developers from seeing the full picture of what was broken, because if an earlier common step was broken, everything that depended on it could not be tested.

    The end result was a lot of wasted work and very slow developer feedback loops.

    I wish you the best of luck though. I'm sure this will be quite an informative/educational experience in many ways :)

    [Erich Zimmerman, Test Podcast] Huh, almost every UI test ever.... ;)
    [Alex SAplacan, Test Podcast] Hi all, quick question: what's the major difference between running `pytest [OPTIONS]` and `python -m pytest [OPTIONS]`?
    For me the first one always returns errors (import errors, mostly) while the second runs fine.
    [gc, Test Podcast] path
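    gc's one-word answer, expanded: per the pytest docs, `python -m pytest` is almost equivalent to `pytest`, except that running via `-m` also prepends the current directory to sys.path, so top-level modules in your project root become importable. Any `python -m ...` invocation behaves this way; a small demonstration (file names made up for the demo):

    ```python
    import os
    import subprocess
    import sys
    import tempfile

    # Create a throwaway "project": a library module plus a runner that imports it.
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "mylib.py"), "w") as f:
            f.write("VALUE = 42\n")
        with open(os.path.join(tmp, "runner.py"), "w") as f:
            f.write("import mylib; print(mylib.VALUE)\n")
        # Running via -m prepends the working directory to sys.path, so the
        # sibling mylib.py is importable -- same mechanism as `python -m pytest`.
        out = subprocess.run(
            [sys.executable, "-m", "runner"],
            cwd=tmp, capture_output=True, text=True,
        ).stdout.strip()

    print(out)  # → 42
    ```

    The `pytest` console script does no such sys.path insertion, which is why imports of project-root modules fail with it but succeed with `python -m pytest`.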
    [a4z, Test Podcast] Is there some tooling that would have caught this stupid mistake?
    `def ensure_some_default_settings() -> bool: ... return  # ... True or False`
    later, somewhere else:
    `if ensure_some_default_settings:`
    which of course never did what I meant ... VS Code also showed me no problem with it.
    This is for throwaway scripts, so no real testing planned, but I wonder, is there not a tool that could say
    did you mean `if ensure_some_default_settings():`?
    Something like a static analyser in other languages ...
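    On tooling: a `def` creates an object that is always truthy, so `if ensure_some_default_settings:` takes the branch every time, regardless of what the function would return. Recent mypy versions flag exactly this under the `truthy-function` error code ("Function ... could always be true in boolean context"). A minimal reproduction (the function body is a stand-in):

    ```python
    def ensure_some_default_settings() -> bool:
        # stand-in for the real settings check
        return False

    # The bug from the message: no parentheses, so this tests the function
    # *object*, which is always truthy -- the branch runs even though the
    # function would return False.
    bug_branch_taken = bool(ensure_some_default_settings)

    # The intended check actually calls the function:
    real_result = ensure_some_default_settings()

    print(bug_branch_taken, real_result)  # → True False
    ```

    Running `mypy` over a file like this is one way to catch the mistake even in throwaway scripts, since it needs no test code at all.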
    [Jahn Thomas Fidje, Test Podcast] Hi all! I'm struggling a lot with patching, and just can't figure out how to fix it. The conclusion is that the way I'm writing the code is bad, but I don't know better ways to do this. I've created a mini-snippet showing what I hope is enough to explain my setup. If anyone has time, could you please take a quick look at this? I would really appreciate any suggestions on how to improve :) Thanks in advance! https://pastebin.com/pnW15i88
    [Israel Fruchter, Test Podcast] hi @Jahn Thomas Fidje, the one thing to keep in mind with patching is that you need to patch the name in the place where you import it
    [Israel Fruchter, Test Podcast] in your case, my_project.connections.redis_client, i.e. you need to patch it from the POV of the file that uses it. If there are cross-references like that (with multiple layers of referencing) it becomes harder, since patch only replaces the "reference", and a reference that is duplicated across multiple files won't all be replaced.
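    A runnable sketch of Israel's point, faking the two-module layout in memory (the `my_project.connections.redis_client` name comes from the thread; everything else is illustrative):

    ```python
    import sys
    import types
    from unittest import mock

    # Fake the layout described above: connections.py defines redis_client, and
    # service.py does "from my_project.connections import redis_client".
    my_project = types.ModuleType("my_project")
    connections = types.ModuleType("my_project.connections")
    service = types.ModuleType("my_project.service")

    real_client = object()  # stands in for a real redis.Redis()
    connections.redis_client = real_client
    service.redis_client = real_client  # the from-import copies the reference
    service.ping = lambda: service.redis_client.ping()

    my_project.connections = connections
    my_project.service = service
    sys.modules.update({
        "my_project": my_project,
        "my_project.connections": connections,
        "my_project.service": service,
    })

    # Patching the *defining* module leaves service.py's copy untouched:
    with mock.patch("my_project.connections.redis_client"):
        defining_patch_missed = service.redis_client is real_client

    # Patching the name from the POV of the module that uses it is what works:
    with mock.patch("my_project.service.redis_client") as fake:
        fake.ping.return_value = True
        patched_result = service.ping()

    print(defining_patch_missed, patched_result)  # → True True
    ```

    This is the "where to patch" rule from the unittest.mock docs: patch where the name is looked up, not where it was defined.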
    [Alex SAplacan, Test Podcast] Hi all :wave: a little bit of help would be appreciated :) :
    I am trying to assert if a print() function is run in the code, but I can't capture it while running the test .... I know it gets executed because if I insert a breakpoint near it, that gets triggered. I am trying using capsys.readouterr() but no success ^^
    [Alex SAplacan, Test Podcast] incoming snippet ...
    [Alex SAplacan, Test Podcast] I guess it captures only the stdout from the test but not from the code to be tested?
    [Alex SAplacan, Test Podcast] ... theoretically I just want to have that if statement covered... which it is. Currently failing because I don't know how to catch that print statement.... I don't know a different way to test it and have it covered
    [Viktor, Test Podcast] I am not sure exactly how the output capture works, but it looks like it captures output from pytest as well. The end of the message ...! ~~~~~~~~~\n" looks like it contains the ~ from your test input.
    [Viktor, Test Podcast] So maybe assert expected in captured.out would work?
    [Alex SAplacan, Test Podcast] Good point! I can see now what you mean :)
    [Alex SAplacan, Test Podcast] That did the job! thank you
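    For reference, the working pattern (function and file names hypothetical): pytest's capsys fixture captures stdout from both the test body and the code under test, and `capsys.readouterr()` returns everything captured since the last call. As Viktor suggested, asserting with `in` is more robust than an exact match:

    ```python
    # test_print_capture.py -- run with: pytest test_print_capture.py
    def announce(name):
        # stand-in for the code under test that prints
        print(f"processing {name}")

    def test_announce_prints(capsys):
        announce("orders")
        captured = capsys.readouterr()
        # assert on a substring rather than the exact string, since other
        # output (including the trailing newline) may surround it
        assert "processing orders" in captured.out
    ```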

    [a4z, Test Podcast] I wonder if I could get some startup help here, please!

    I decided to share my helper scripts with colleagues, and the best option might be to distribute them as a pip package.
    So I create some pip package, say, mytool.
    Of course I know that the mytool scripts work ( =@ ), but just to be sure, I would like to add some tests.

    So I have this pip project layout:

    - mytool
    - tests
    LICENSE
    setup.py
    ... (rest of the packaging files)

    Now, what do I do so that the files in tests can import mytool?
    And optimally, so that even VS Code knows about mytool when editing the test file.

    (You might notice from my question that python is not my day job)
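    One common answer, sketched under the assumption that the setup.py in the layout above is a working package definition: install the package in "editable" mode into the virtual environment you test with, so `import mytool` resolves both for pytest and for the editor:

    ```shell
    # From the project root (the directory containing setup.py):
    python -m venv .venv
    . .venv/bin/activate

    pip install -e .    # editable install: "import mytool" now resolves, and
                        # edits under mytool/ are picked up without reinstalling
    pip install pytest

    pytest tests/       # the tests can now import mytool
    ```

    For the VS Code half of the question: pointing VS Code at the same interpreter (the "Python: Select Interpreter" command, choosing .venv) gives the editor the same view of mytool.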


    [Erich Zimmerman, Test Podcast] General question -- in past testing, I have made use of delayed timing on assertions. For example, I may send an event to a system, but the assertion based on the event isn't immediate.

    ```
    some_object = MyClass()
    related_object = NextClass(some_object)

    some_object.take_action('action')

    # ... after some time ...

    assert related_object.updated_property == 'action'
    ```
    In NUnit and others, there is support for basically a polling assertion, where you check the predicate repeatedly until a timeout is reached.

    Pytest and the Python assertions don't support this directly (well, as far as I can tell), but I don't even find any conversations online about doing something like this.

    So, I'm wondering if this approach doesn't "fit" in the Pytest approach to testing?

    I wrote a simple function on my own, called wait_for_assert that takes a predicate function, resulting in an AssertException if the predicate is still failing after some time, so I'm good with that on the "works for me" paradigm. But I'm just curious if Pytest thinking would point me down a different road.
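    For comparison, here is a minimal version of that idea (the name `wait_for_assert` follows the message; timings are arbitrary). I'm not aware of built-in pytest support for polling assertions either; this is plain Python:

    ```python
    import time

    def wait_for_assert(predicate, timeout=5.0, interval=0.1):
        """Poll `predicate` until it returns truthy, or fail after `timeout` seconds."""
        deadline = time.monotonic() + timeout
        while True:
            if predicate():
                return
            if time.monotonic() >= deadline:
                raise AssertionError(f"condition not met within {timeout}s")
            time.sleep(interval)

    # Usage, matching the example above:
    #   wait_for_assert(lambda: related_object.updated_property == 'action')
    ```

    Using `time.monotonic()` rather than `time.time()` keeps the timeout correct even if the system clock is adjusted mid-test.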

    [David Kotschessa, Test Podcast] Looks like I forgot to add this slack back in when I got a new computer. How's everybody doing? Seems quiet!
    [David Kotschessa, Test Podcast] This is where I get up on stage and scream " I CAN'T HEAR YOU!!"
    [David Kotschessa, Test Podcast] https://www.youtube.com/watch?v=C1E10PRfhcA
    [David Kotschessa, Test Podcast] =D

    [David Kotschessa, Test Podcast] Thank you @brian for an episode back in 2019 "From python script to maintainable package."

    I created my very first pip installable package, which is a community provider for faker to create airport data. It's a tiny thing, but it was a great lesson in documentation, setting up tests, packaging, etc.
    https://pypi.org/project/faker_airtravel/

    [David Kotschessa, Test Podcast] My excitement is very disproportionate to how big or useful the package is, but it's the first thing I've ever put on PyPI
    [David Kotschessa, Test Podcast] flit was definitely the way to go too
    [David Kotschessa, Test Podcast] going through some of the other motions now, tox etc.
    [AJ Kerrigan, Test Podcast] If this affects you you've probably already been notified, but... https://about.codecov.io/security-update/

    [Tony Cappellini, Test Podcast] How do you deal with python3's end= in the print() function when using the Python logger?

    `print('Python Rocks', end='')`

    `print('Python Rocks')`

    How can I make the logger replace both of the above prints?
    I'm adding the logger to a program where many of the print() calls use end=.
    From the logger documentation, I don't see anything like end= for the logger.
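    Right, the logging API has no per-call end= parameter. One workaround, sketched here: `logging.StreamHandler` has a `terminator` attribute (default `"\n"`) that you can set per handler, so setting it to `""` and supplying newlines yourself roughly mimics `print(..., end='')`:

    ```python
    import io
    import logging

    stream = io.StringIO()        # use sys.stdout (or a file) in real code
    handler = logging.StreamHandler(stream)
    handler.terminator = ""       # don't append "\n" after each record
    handler.setFormatter(logging.Formatter("%(message)s"))

    log = logging.getLogger("demo")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("Python Rocks")            # like print('Python Rocks', end='')
    log.info(" and so does logging\n")  # supply the newline explicitly

    print(repr(stream.getvalue()))  # → 'Python Rocks and so does logging\n'
    ```

    A caveat: since the terminator applies to the whole handler, and other log records can interleave between the partial writes, it's often cleaner to buffer the pieces in your code and emit one log record per complete line instead.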


    [Adam Parkin, Test Podcast] So I want to run coverage.py on my work project, but here's a wrinkle: most of our tests are integration tests, so they spin up a set of Docker containers (like each REST API is in a separate running container), and then the tests make HTTP calls to the various services and assert on the results.

    That's going to mean that coverage won't see what parts of the services get exercised by tests, doesn't it? Like if I have a test that looks like:

    ```
    def test_thing():
        result = requests.get('http://localhost:1234/some/path/on/a/container')

        assert something_about_the_result
    ```

    Since coverage.py is running in the container where that test is running and not where the web server is running, all it sees is that the test made an HTTP call to some endpoint somewhere, right? Like there's no way to tell coverage "oh wait, http://localhost:1234/some/path/on/a/container is actually part of our project, so figure out what code in another container is running and do coverage over that", or is there? Anyone have any experience or ideas on how to get coverage information in this situation?
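    Coverage.py can't follow an HTTP call across a container boundary, but it can run inside each container: start each service under coverage, let it write a data file on shutdown, then collect and combine the files on the host. A sketch of that workflow (image, service, and path names are all hypothetical):

    ```shell
    # Inside each service container, start the app under coverage instead of
    # plain python (e.g. in the entrypoint):
    coverage run --parallel-mode -m my_service.app

    # After the test run, stop the services cleanly (coverage writes its data
    # file on normal interpreter exit), then copy the .coverage.* files out:
    docker cp api-container:/app/.coverage.abc123 ./coverage-data/

    # On the host, merge the per-service data and report:
    coverage combine coverage-data/
    coverage report
    ```

    `--parallel-mode` gives each process a uniquely named data file so they can be merged; you typically also need a `[paths]` section in .coveragerc so source paths inside the containers map back to the host checkout.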

    [David Kotschessa, Test Podcast] @Adam Parkin That got me thinking a bit about the purpose of coverage.. poking around, I think this Stack Exchange answer was actually pretty good: https://sqa.stackexchange.com/questions/24997/why-do-code-coverage-of-integration-test
    [David Kotschessa, Test Podcast] in particular
    • Unit tests are a much more effective way to exercise code than integration tests. Compared to integration tests, unit tests are lighter weight, faster, and more targeted. Using an integration test to cover code is like using a sledge hammer to drive in a nail.
    • Unit tests are about exercising code, whereas integration tests are about testing assumptions. When we write unit tests, we mock/fake/stub out the interfaces that the code depends on, and in the process we encode assumptions about how those interfaces behave. Integration tests are where we find out whether those assumptions were valid.
    [David Kotschessa, Test Podcast] "integration tests are about testing assumptions." hit home
    [Adam Parkin, Test Podcast] > Unit tests are about exercising code, whereas integration tests are about testing assumptions
    That’s… a really good way to think about it and articulates my feelings really well.
    [David Kotschessa, Test Podcast] yeah
    [Adam Parkin, Test Podcast] I dunno if that helps me answer my original question, but thanks for sharing that, super insightful.
    [David Kotschessa, Test Podcast] and yet I also get not wanting to go overboard and be hell bent on having a test for every.single.function
    [David Kotschessa, Test Podcast] sometimes a test that takes place inside the container you are hitting and runs a few of those functions - who knows what to call it - unit, component, integration? Might still serve a very good purpose
    [Brian Okken, Test Podcast] Wow. I don’t think I can disagree more.
    [Adam Parkin, Test Podcast] That’s really interesting, I’d be interested in hearing your thoughts as to why.
    [Brian Okken, Test Podcast] Code coverage using unit tests is trivial. Just write little tests for all of your code. Done.
    [Brian Okken, Test Podcast] But what I really want code coverage to tell me is what parts of the system are actually being used.
    [Brian Okken, Test Podcast] So coverage while running tests that hit API points or even system tests seem more effective for that purpose.
    [Brian Okken, Test Podcast] Your question seemed more like “how do I run coverage on code sitting on a different machine”.
    [Brian Okken, Test Podcast] The answer of “run it on the same machine, if you can” is still reasonable. I think. I actually don’t know if you can split coverage onto a different machine. But if you can spin up a local server for testing purposes, on the same machine as the tests, the integration tests still work.
    [Brian Okken, Test Podcast] YMMV, of course, depending on your architecture.