    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] Hmm, which code is too tightly coupled to what?

    It's about testing a DAG - ensuring that one node is executed, then the next. One node can't be decoupled from the next.

    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] Imagine you have function A, which first calls function B, and then, once function B completes, the result determines whether A then calls function C or function D. Would function B actually need to be executed to determine whether A would call C or D, given a particular result from function B?
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] Firstly, I agree that code should be designed in a manner where it can be broken into units that can be tested individually.

    In your example... it depends. Is it possible to mock function B's execution? If yes, then we can do just that. If no - then function B must be called.

    Just to be clear, and follow up on your example - I am mocking function B. I have unit tests/integration tests for the work that B performs.

    But what I need to test is that A can run with different inputs, and that D is run in some cases and C in others.

    I think my approach of running a test within the "main test" (the test class) has been the wrong approach. My conclusion is that test classes should be used for grouping tests and shouldn't contain too much control logic (e.g. if class property foo == bar, don't run test method B or C).

    I think my approach will move over to testing the workflow definition/DAG in a single pytest function, one per variant, and mocking the execution of the steps (which I have been doing with a test class + test methods, but bending pytest the wrong way).

    But it's not always possible to mock B, and maybe B must be run to determine the outcome needed before C/D. It's not always possible to apply the philosophy of unit testing when testing systems you don't have much control over (black box testing).
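    A minimal sketch of the approach described above (the module name workflow and the functions a, b, c, d are hypothetical stand-ins for the A/B/C/D example, not the actual Zeebe code): mock B's execution, parametrize over its result, and assert whether C or D was invoked.

    ```
    # workflow.py (hypothetical stand-in for the real workflow/DAG code)
    def b():
        ...

    def c():
        return "c"

    def d():
        return "d"

    def a():
        return c() if b() else d()

    # test_workflow.py
    import pytest
    from unittest import mock

    import workflow

    @pytest.mark.parametrize("b_result, expected", [(True, "c"), (False, "d")])
    def test_a_branches_on_b(b_result, expected):
        with mock.patch.object(workflow, "b", return_value=b_result), \
             mock.patch.object(workflow, "c", return_value="c") as fake_c, \
             mock.patch.object(workflow, "d", return_value="d") as fake_d:
            assert workflow.a() == expected
            # exactly one of the two follow-up steps should have run
            assert fake_c.called == (expected == "c")
            assert fake_d.called == (expected == "d")
    ```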

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] The system chosen (in this case, Zeebe) was a design decision, just as much as how functions, classes, and modules would be structured. If the functions, classes, and modules were designed in such a way (not to imply intent) that testing their individual components would be more difficult, or even impossible, then I would suggest changing the design.

    When you have to test the DAG, it exponentially increases the number of tests you'd have to do, and I'm guessing you're trying to reduce the very large amount of duplicate code that you might have to write because of the sheer number of tests. Testing the DAG itself is not ideal, because of how many tests there would likely be, but this is why some places use model-based testing to start exploring issues that may only present themselves at that level of complexity, in addition to their normal tests.

    [Chris NeJame, Test Podcast] for a visual regarding the number of tests, I have some animated SVGs here that show it
    Brian Okken
    @okken
    [Kristoffer Bakkejord, Test Podcast] After looking over the article provided, I see that the term "black box testing" wasn't correct (I'll check it out in detail later).
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] To be clear, Zeebe isn't "untestable", but it requires a different approach than, say, unit tests, and figuring that out isn't necessarily easy. Just because it's not easy doesn't mean it was a bad design decision.

    Here's one test implementation for Zeebe BPMNs (i.e. DAGs): https://github.com/zeebe-io/bpmn-spec (this doesn't cover our use case, which is why I'm looking into an approach in pytest).

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] In my opinion, any design-related decision that makes testing things in an atomic way less achievable is a bad design decision, as this inherently introduces large amounts of technical debt (i.e. time was borrowed from the future for a perceived speedup in the short term, and this will likely cost more time in the long term than what was perceived to be saved).

    In this case, in order to pay down the technical debt completely, you would have to go bankrupt and rebuild everything (unless you can piecemeal separate out individual components to be handled by separate, atomically testable components).

    I understand that your system would still be testable. But what I'm saying is that in having to involve so much in every one of your tests, and having to have so many more, they will take a much longer time to run and to write than they would if you could test things atomically, and you will likely spend much more time maintaining them.

    Consciously sacrificing testability, for the sake of short term speed ups, is by no means uncommon, because testing is often seen as secondary and not as important as writing the "actual" code. But I have never heard of a case where doing so didn't bite the person in the ass, and every system I've worked on that focused only on operating at the DAG level (which is surprisingly not 0) was a buggy, unmaintainable mess once it was down the road enough to be considered (by some, but not me) to be MVP.

    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] > any design-related decision that makes testing things in an atomic way less achievable, is a bad design decision
    Agreed.

    > I understand that your system would still be testable. But what I'm saying is that in having to involve so much in every one of your tests, and having to have so many more, they will take a much longer time to run and to write than they would if you could test things atomically, and you will likely spend much more time maintaining them.
    It doesn't seem we're on the same page here. I do find Zeebe to be quite testable. With very little knowledge of pytest plugins and not too much time, I managed to create a basic framework that allowed me to test Zeebe BPMNs. Wanting to iterate on this, I met some challenges with how pytest works. But that doesn't mean either pytest or Zeebe was a bad design decision – just that my assumptions about it were wrong (and I can learn from this).
    > Consciously sacrificing testability
    I don't think I am doing that,...
    > that focused only on operating at the DAG level
    Not sure I'm following you here. You may have had a bad experience with DAG systems, but I don't think you should make assumptions about every use case and the implementation considerations others may have.

    Anyways - I appreciate the discussion. You make many good points. I was looking for some input on how to use pytest with test classes, and I've come to the conclusion that this isn't the right path.

    Brian Okken
    @okken
    [Pax, Test Podcast] > It’s not for testing python code. It’s testing a workflow running through a workflow system (Zeebe). The test class was intended to test a workflow, and several runs through parametrizing it.
    That’s interesting. I’ve not heard of something like this before. I understand where Salmon Mode is coming from, given my experience with programs/scripts/applications. But I am uncertain if the same applies when testing a workflow system. I hope you get it figured out.
    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] The problems with such a system likely aren't obvious now, and may not be for a time, but, like I said, taking on technical debt is about short-term gains with long-term consequences. The problems will become more apparent as you build out the application; as more complexity is introduced, it will grow out of control exponentially.

    The systems I worked with all had in common an inability to test individual steps in isolation. This meant that the automated tests had to operate at the end-to-end level, which made them very expensive and time-consuming to run, and they would break regularly, especially when third-party dependencies were having issues. They would also prevent developers from getting the full picture of what was broken, because if an earlier common step was broken, everything that depended on it could not be tested.

    The end result was a lot of wasted work and very slow developer feedback loops, which wasted a lot of time.

    I wish you the best of luck though. I'm sure this will be quite an informative/educational experience in many ways :)

    Brian Okken
    @okken
    [Erich Zimmerman, Test Podcast] Huh, almost every UI test ever.... ;)
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] Hi all, quick question: what's the major difference between running pytest [OPTIONS] and python -m pytest [OPTIONS]?
    For me, the first one always returns errors (mostly import errors) while the second runs fine.
    Brian Okken
    @okken
    [gc, Test Podcast] path
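    For reference, the difference gc is pointing at (per the pytest docs): python -m pytest is almost equivalent to pytest, except that running through python also puts the current directory on sys.path, which is why local packages import cleanly in the second form. Roughly (a sketch, not the actual implementation):

    ```
    # rough sketch of what `python -m pytest` effectively adds over plain `pytest`
    import os
    import sys

    sys.path.insert(0, os.getcwd())   # the current directory becomes importable

    import pytest
    raise SystemExit(pytest.main())   # then pytest runs as usual
    ```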
    Brian Okken
    @okken
    [a4z, Test Podcast] Is there some tooling that would have caught this stupid mistake?
    `def ensure_some_default_settings() -> bool: ... return ...  # True or False`
    and later, somewhere else:
    `if ensure_some_default_settings:`
    which of course always took the same branch ... VS Code also showed no problem with it.
    This is for throwaway scripts, so no real testing is planned, but I wonder: isn't there a tool that could say
    did you mean `if ensure_some_default_settings():`?
    something like a static analyser in other languages ...
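    For reference, a sketch of the mistake and the kind of check that can catch it (the tool behaviour is as I understand it, not verified here): recent mypy versions have a truthy-function error code that flags a bare function used in a boolean context, which is exactly this bug.

    ```
    def ensure_some_default_settings() -> bool:
        ...            # do some work
        return True    # or False

    if ensure_some_default_settings:      # BUG: tests the function object, which is always truthy
        ...                               # mypy's truthy-function check should flag this line

    if ensure_some_default_settings():    # intended: test the returned bool
        ...
    ```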
    Brian Okken
    @okken
    [Jahn Thomas Fidje, Test Podcast] Hi all! I'm struggling a lot with patching, and just can't figure out how to fix it. My conclusion is that the way I'm writing the code is bad, but I don't know better ways to do this. I've created a mini-snippet showing what I hope is enough to explain my setup. If anyone has time, could you please take a quick look at this? I would really appreciate any suggestions on how to improve :) Thanks in advance! https://pastebin.com/pnW15i88
    Brian Okken
    @okken
    [Israel Fruchter, Test Podcast] Hi @Jahn Thomas Fidje, the one thing to keep in mind with patching is that you need to patch the name in the place where you are importing it.
    [Israel Fruchter, Test Podcast] In your case, my_project.connections.redis_client, i.e. you need to patch it from the POV of the file you are using. If there are cross references like that (with multiple layers of referencing) it becomes harder, since patch only replaces the "reference", and a reference that is different in multiple files won't be replaced.
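    A minimal sketch of what that looks like in practice; my_project/service.py and its contents are hypothetical (only the dotted path my_project.connections.redis_client comes from the thread):

    ```
    # my_project/connections.py
    redis_client = ...  # the real client is created here

    # my_project/service.py (hypothetical file under test)
    from my_project.connections import redis_client

    def fetch(key):
        return redis_client.get(key)

    # test_service.py
    from unittest import mock

    from my_project import service

    def test_fetch_uses_patched_client():
        # patch the name where `service` looks it up, not where it was defined
        with mock.patch("my_project.service.redis_client") as fake_client:
            fake_client.get.return_value = b"42"
            assert service.fetch("answer") == b"42"
    ```

    If service.py instead did `from my_project import connections` and called `connections.redis_client.get(...)`, then "my_project.connections.redis_client" would be the right patch target, because the lookup goes through the connections module at call time.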
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] Untitled
    [Alex SAplacan, Test Podcast] Hi all :wave: a little bit of help would be appreciated :)
    I am trying to assert that a print() call is run in the code, but I can't capture it while running the test... I know it gets executed, because if I insert a breakpoint near it, it gets triggered. I am trying capsys.readouterr() but no success ^^
    [Alex SAplacan, Test Podcast] incoming snippet ...
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] Untitled
    [Alex SAplacan, Test Podcast] I guess it captures only the stdout from the test but not from the code to be tested?
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] ... theoretically I just want to have that if statement covered... which it is. It's currently failing because I don't know how to catch that print statement... I don't know a different way to test it and have it covered.
    Brian Okken
    @okken
    [Viktor, Test Podcast] I am not sure exactly how the output capture works, but it looks like it captures output from pytest as well. The end of the message (...! ~~~~~~~~~\n) looks like it contains the ~ from your test input.
    [Viktor, Test Podcast] So maybe `assert expected in captured.out` would work?
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] Good point! I can see now what you mean :)
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] That did the job! thank you
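    For anyone following along, a minimal self-contained sketch of the capsys pattern that ended up working here (the function and message are made up, not Alex's actual code):

    ```
    # code under test (hypothetical)
    def maybe_warn(value):
        if value < 0:
            print("negative value! ~~~~~~~~~")

    # test file
    def test_maybe_warn_prints(capsys):
        maybe_warn(-1)
        captured = capsys.readouterr()            # everything written to stdout/stderr so far
        assert "negative value!" in captured.out  # substring check, as Viktor suggested
    ```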
    Brian Okken
    @okken

    [a4z, Test Podcast] I wonder if I could get some startup help here, please!

    I decided to share my helper scripts with colleagues, and the best option might be to distribute them as a pip package.
    So I create a pip package, say, mytool.
    Of course I know that the mytool scripts work ( =@ ), but just to be sure, I would like to add some tests.

    So I have this pip project layout:

    - mytool
    - tests
    LICENSE
    setup.py
    ... (rest of pip files)

    Now, what do I do so that the files in tests can import mytool,
    and optimally, so that even VS Code knows about mytool when editing the test files?

    (You might notice from my question that Python is not my day job.)
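    One common way to make this work (a sketch, assuming a standard setuptools setup.py; other layouts and tools exist): install the package into a virtual environment in editable mode, so that both pytest and VS Code resolve `import mytool` from the working tree.

    ```
    # from the project root (the directory containing setup.py)
    python -m venv .venv
    source .venv/bin/activate        # on Windows: .venv\Scripts\activate
    pip install -e .                 # editable install: `import mytool` resolves to ./mytool
    pip install pytest
    python -m pytest tests/          # -m also puts the current directory on sys.path
    ```

    Pointing VS Code's Python interpreter at .venv then lets the editor resolve mytool while editing the tests.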

    Brian Okken
    @okken

    [Erich Zimmerman, Test Podcast] General question -- in past testing, I have made use of delayed timing on assertions. For example, I may send an event to a system, but the assertion based on the event isn't immediate.

    ```
    some_object = MyClass()
    related_object = NextClass(some_object)

    some_object.take_action('action')

    # ...after some time...

    assert related_object.updated_property == 'action'
    ```
    In NUnit and others, there is support for basically a polling assertion, where you check the predicate until a timeout is reached.

    Pytest and the Python assert statement don't support this directly (well, as far as I can tell), and I can't even find any conversations online about doing something like this.

    So I'm wondering if this approach doesn't "fit" the pytest approach to testing?

    I wrote a simple function of my own, called wait_for_assert, that takes a predicate function and raises an AssertException if the predicate is still failing after some time, so I'm good with that on the "works for me" paradigm. But I'm just curious whether pytest thinking would point me down a different road.
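    A minimal sketch of the kind of helper Erich describes (the name wait_for_assert and its parameters are illustrative, not pytest API; third-party plugins and retry libraries cover similar ground):

    ```
    import time

    def wait_for_assert(predicate, timeout=5.0, interval=0.1, message="condition not met"):
        """Poll `predicate` until it returns truthy or `timeout` seconds elapse."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if predicate():
                return
            time.sleep(interval)
        raise AssertionError(message)

    # usage inside a test:
    # wait_for_assert(lambda: related_object.updated_property == 'action', timeout=2.0)
    ```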

    Brian Okken
    @okken
    [Kristoffer Bakkejord, Test Podcast] image.png
    [Kristoffer Bakkejord, Test Podcast] image.png
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] Looks like I forgot to add this slack back in when I got a new computer. How's everybody doing? Seems quiet!
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] This is where I get up on stage and scream " I CAN'T HEAR YOU!!"
    [David Kotschessa, Test Podcast] https://www.youtube.com/watch?v=C1E10PRfhcA
    [David Kotschessa, Test Podcast] =D
    Brian Okken
    @okken

    [David Kotschessa, Test Podcast] Thank you @brian for an episode back in 2019 "From python script to maintainable package."

    I created my very first pip installable package, which is a community provider for faker to create airport data. It's a tiny thing, but it was a great lesson in documentation, setting up tests, packaging, etc.
    https://pypi.org/project/faker_airtravel/

    [David Kotschessa, Test Podcast] My excitement is very disproportionate to how big or useful the package is, but it's the first thing I've ever put on PyPI.
    [David Kotschessa, Test Podcast] flit was definitely the way to go too
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] going through some of the other motions now, tox etc.
    Brian Okken
    @okken
    [AJ Kerrigan, Test Podcast] If this affects you you've probably already been notified, but... https://about.codecov.io/security-update/
    Brian Okken
    @okken

    [Tony Cappellini, Test Podcast] How do you deal with python3's end= in the print() function, when using the Python logger?

    print('Python Rocks', end='')

    print('Python Rocks')

    How can I make the logger replace both of the above prints?
    I'm adding the logger to a program where many of the print() calls use end=.
    From the logger documentation, I don't see anything like end= for the logger.
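    One approach that may help (a sketch, not from the thread): logging has no per-call equivalent of end=, but each handler has a terminator attribute (default "\n") that can be cleared, which roughly mimics print(..., end='') for everything going through that handler.

    ```
    import logging

    handler = logging.StreamHandler()
    logger = logging.getLogger("myapp")   # "myapp" is an arbitrary example name
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    handler.terminator = ""               # no newline appended, like print(end='')
    logger.info("Python Rocks")

    handler.terminator = "\n"             # back to normal, like plain print()
    logger.info("Python Rocks")
    ```

    Since the terminator is per handler rather than per call, code that mixes end='' and normal prints may be easier to migrate by building each line up first and logging it once.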

    Brian Okken
    @okken

    [Adam Parkin, Test Podcast] So I want to run coverage.py on my work project, but here’s a wrinkle: most of our tests are integration tests, so they spin up a set of Docker containers (like each REST API is in a separate running container), and then the tests make HTTP calls to the various services and assert on results.

    That’s going to mean that coverage won't see what parts of the services get exercised by tests, doesn't it? Like if I have a test that looks like:

    ```
    def test_thing():
        result = requests.get('http://localhost:1234/some/path/on/a/container')

        assert something_about_the_result
    ```

    Since coverage.py is running in the container where that test is running, and not where the web server is running, all it sees is that the test made an HTTP call to some endpoint somewhere, right? Like there's no way to tell coverage "oh wait, http://localhost:1234/some/path/on/a/container is actually part of our project, so figure out what code in another container is running and do coverage over that" or is there? Anyone have any experience or ideas on how to get coverage information in this situation?
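    Not an answer from the thread, but coverage.py can be made to work across containers, roughly like this sketch (paths, container and module names are made up): run the service process inside its container under coverage run, let it write parallel data files, copy those back to the host, and combine them; a [paths] mapping in .coveragerc helps when container paths differ from host paths.

    ```
    # .coveragerc shared by host and container (relative_files keeps paths portable):
    #   [run]
    #   parallel = true
    #   relative_files = true

    # inside the service container, start the app under coverage instead of plain python:
    coverage run -m myservice.app        # hypothetical entry point

    # run the integration tests as usual, then stop the service cleanly so coverage
    # writes its .coverage.* data file, and get that file onto the host
    # (docker cp, or a volume mounted at the container's working directory)

    # on the host, merge and report:
    coverage combine coverage-data/      # directory holding the copied data files
    coverage report
    ```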

    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] @Adam Parkin That got me thinking a bit about the purpose of coverage... Poking around, I think this Stack Exchange answer was actually pretty good: https://sqa.stackexchange.com/questions/24997/why-do-code-coverage-of-integration-test
    [David Kotschessa, Test Podcast] in particular
    • Unit tests are a much more effective way to exercise code than integration tests. Compared to integration tests, unit tests are lighter weight, faster, and more targeted. Using an integration test to cover code is like using a sledge hammer to drive in a nail.
    • Unit tests are about exercising code, whereas integration tests are about testing assumptions. When we write unit tests, we mock/fake/stub out the interfaces that the code depends on, and in the process we encode assumptions about how those interfaces behave. Integration tests are where we find out whether those assumptions were valid.
    [David Kotschessa, Test Podcast] "integration tests are about testing assumptions." hit home
    [Adam Parkin, Test Podcast] > Unit tests are about exercising code, whereas integration tests are about testing assumptions
    That’s… a really good way to think about it and articulates my feelings really well.
    [David Kotschessa, Test Podcast] yeah
    [Adam Parkin, Test Podcast] I dunno if that helps me answer my original question, but thanks for sharing that, super insightful.