    Brian Okken
    @okken
    [Israel Fruchter, Test Podcast] In that case, we should be more selective of customers, LOL
    [Israel Fruchter, Test Podcast] luckily for us, some of our customers even contribute code, and showed it at EuroPython 2020
    [Israel Fruchter, Test Podcast] (or at least push me to polish up our Python client fork of the Cassandra one)
    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] in unrelated news, I am currently hiring someone who nerds out about Python and Pytest for my team at StreamSets (post Series C startup focusing on interesting problems in the DataOps space): https://streamsets.bamboohr.com/jobs/view.php?id=36
    [Dima Spivak, Test Podcast] If anyone wants more info, DM me! :)
    [Dima Spivak, Test Podcast] end of hiring spam
    [Dima Spivak, Test Podcast] :)
    Jan Gazda
    @1oglop1
    Hi all, I remember one of the podcasts, either Test & Code or Python Bytes, mentioned a tool to gradually introduce black and other things to an existing codebase in order to prevent big-bang changes. I wonder if anyone could point me to it.
    Brian Okken
    @okken
    [Mikey Reppy, Test Podcast] pre-commit makes it easy to run black only on the files you work on, commit by commit
    [Mikey Reppy, Test Podcast] that's how we introduced it
    [Mikey Reppy, Test Podcast] although I confess I did black the entire test suite pretty soon in our journey so that it was very clear when a test changed in subsequent diffs
    Brian Okken
    @okken

    [Adam Parkin, Test Podcast] I’d be interested in knowing why you’d want to introduce black gradually. We started doing it file by file: if someone was working on a file and had black installed, it would format the entire file, but we found that made the review process more difficult. We also found that because not everyone had black installed (or didn’t have their editor set to format on save), people who didn’t have black set up would make changes to a file over time, which would cause black to reformat it later when someone else edited the file (again making for noisy diffs).

    We ended up doing a “big bang” commit where we ran black over the entire codebase and reviewed that in one go, and that worked really well for us. As a plus, you can also add the hash of that “big bang” commit to a git-blame-ignore-revs file and tell git blame to ignore that commit (this blog post outlines the idea). And to make sure black formatting stays enforced, we added black to our CI pipeline and fail the build if it finds files that haven’t been formatted by black.

    Brian Okken
    @okken

    [Erich Zimmerman, Test Podcast] Hey all!

    I have a problem that seems to come up regularly, and I'm not really sure about the best Pytest-y way to solve this problem. This example is about the most concise I could find for the issue.

    ```
    @pytest.fixture(params=['device 1', 'device 2', 'device 3'])
    def test_device(request):
        return request.param


    def test_checking_urls_on_device(test_device):
        url_list = get_urls(test_device)
        for url in url_list:
            # call the URL, make assertion
            ...
    ```

    This is not an ideal test, because I would like each URL in the list to be a separate parametrized test case, i.e. def test_checking_urls_on_device(test_device, test_url), with test_url being the individual URLs looked up for that device.

    The concise problem statement is that depending on the device, the URL parameters will be different; we look up the URL list as a function of the device. I cannot know what the test_url values would be until I already know what test_device is.

    I think that pytest_generate_tests is my best option here, and I'm not even sure if that will really help me, because I need to know the value of test_device before I can create the test_url parameters.
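
    A minimal sketch of the pytest_generate_tests route (not from the chat; it assumes get_urls from the snippet above is cheap enough to call at collection time, and it replaces the test_device fixture entirely):

    ```
    import pytest

    def pytest_generate_tests(metafunc):
        # Build (device, url) pairs up front so each URL becomes its own test case.
        if {"test_device", "test_url"} <= set(metafunc.fixturenames):
            pairs = [
                pytest.param(device, url, id=f"{device}-{url}")
                for device in ["device 1", "device 2", "device 3"]
                for url in get_urls(device)
            ]
            metafunc.parametrize("test_device,test_url", pairs)

    def test_checking_urls_on_device(test_device, test_url):
        # call test_url on test_device, make assertion
        ...
    ```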

    Brian Okken
    @okken
    [Liora Milbaum, Test Podcast] I'm struggling with implementing unit tests for a FastAPI service that uses SQLAlchemy. Any ideas/examples would be super helpful.
    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] Shouldn't devs be writing those as they develop, @Liora Milbaum ?
    Brian Okken
    @okken
    [Liora Milbaum, Test Podcast] Have you ever used the pytest-postgresql plugin while connecting to an already existing PostgreSQL database, and lived to tell the tale? :)
    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] In a world of SQLAlchemy I dunno why to limit yourself to plugins specific to individual dbs
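
    For the FastAPI + SQLAlchemy question above, one common pattern, sketched here under assumptions (not from the chat): override the app's session dependency with an in-memory SQLite engine in a fixture. The myservice imports, get_db, Base, and the /health endpoint are hypothetical stand-ins for the real service:

    ```
    import pytest
    from fastapi.testclient import TestClient
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.pool import StaticPool

    from myservice.main import app, get_db   # hypothetical module layout
    from myservice.models import Base        # hypothetical declarative base

    engine = create_engine(
        "sqlite://",                                   # in-memory DB instead of PostgreSQL
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
    )
    TestingSession = sessionmaker(bind=engine)

    @pytest.fixture()
    def client():
        Base.metadata.create_all(engine)               # fresh schema for each test
        session = TestingSession()

        def override_get_db():
            yield session

        app.dependency_overrides[get_db] = override_get_db
        yield TestClient(app)
        app.dependency_overrides.clear()
        session.close()
        Base.metadata.drop_all(engine)

    def test_read_health(client):
        assert client.get("/health").status_code == 200   # hypothetical endpoint
    ```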
    Brian Okken
    @okken

    [Jacob Floyd, Test Podcast] I'm trying to fix a test. It's in a unittest class, but pytest is the official runner, so I'm using pytest features. Pulling the test out of the unittest class didn't make a difference.

    The test is supposed to test signal handling. ie when the app gets SIGHUP and SIGINT, reload() should be called, as should sys.exit (or SystemExit should be raised).

    If I run the one test all by itself, everything passes. But when I run it in the suite, something else seems to get the SIGHUP/SIGINT, causing pytest to exit. Whoever wrote this before me tried to use threading, but that just swallowed exceptions, so no one knew that this test was broken. Instead, some time after the test "passed", pytest would exit saying it got a KeyboardInterrupt.

    So, is there a good way to run a pytest test with subprocess or multiprocessing to make sure the signal is only going to the process under test?

    [Jacob Floyd, Test Podcast] Here's my updated test that fixes a hidden issue with that test and drops threading: https://github.com/cognifloyd/opsdroid/blob/new-test-machinery/tests/test_core.py#L87-L101
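
    For the subprocess/multiprocessing question above, a rough, Unix-only sketch (not the opsdroid fix in the link) of running the signal-sensitive code in a child process so the signal never reaches pytest itself:

    ```
    import multiprocessing
    import os
    import signal
    import time

    def _app_main():
        # Stand-in for the application's signal handling: exit with a known code on SIGHUP.
        def handler(signum, frame):
            raise SystemExit(42)

        signal.signal(signal.SIGHUP, handler)
        os.kill(os.getpid(), signal.SIGHUP)   # the signal is delivered to this child only
        time.sleep(5)                         # should never be reached

    def test_sighup_makes_app_exit():
        proc = multiprocessing.Process(target=_app_main)
        proc.start()
        proc.join(timeout=10)
        assert proc.exitcode == 42            # SystemExit(42) surfaces as the child's exit code
    ```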
    Brian Okken
    @okken

    [Pax, Test Podcast] I’m creating tests involving an email marketing service (e.g. mailchimp). I’m trying to decide whether to pass the campaign’s recipients explicitly / as a separate parameter to a method that creates/schedules campaigns, which would make it safer not to schedule a campaign to the actual list during a test.

    On the other hand, it doesn’t seem right to pass the recipients as a separate parameter from the “email type” since an email/campaign always goes to the same list / recipients. (Unless of course the method is used in tests.)

    Based on my collective knowledge about clean code (mostly from books), passing a separate recipients parameter just for the sake of tests is not a good design decision; it’s the responsibility of the person writing the tests (currently me, but possibly someone else in the future) to clean up (delete campaigns) or mock appropriately (perhaps stub the recipients property) to make sure no email generated from tests is sent to actual users. But I also think there’s a lot I don’t know, so I would like to hear what others think.
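
    One way to get both properties described here, sketched with made-up names (schedule_campaign, LIST_IDS): keep the recipient list an internal detail of the campaign type, and have tests inject a fake client so nothing can actually be sent:

    ```
    from unittest.mock import MagicMock

    # Hypothetical production code: the campaign type determines its recipients
    # internally, so neither callers nor tests pass a recipient list.
    LIST_IDS = {"newsletter": "list-123", "promo": "list-456"}

    def schedule_campaign(client, email_type):
        return client.schedule(list_id=LIST_IDS[email_type], email_type=email_type)

    def test_schedule_campaign_targets_newsletter_list():
        client = MagicMock()   # a fake client, so no real emails can go out
        schedule_campaign(client, "newsletter")
        client.schedule.assert_called_once_with(list_id="list-123", email_type="newsletter")
    ```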

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] @Pax I agree that adding an argument to a function just for a test is probably not a good design.

    What is the role of this function exactly? It sounds like it may be both to define the campaign, and also to send it out, is that correct?

    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] I am looking for something that lets me parametrize tests that set a class property - something like this (not a working example):
    ```
    @pytest.mark.parametrize("source_name", ["source_1.txt", "source_2.txt"])
    class TestSource:
        def params_init(self, source_name):
            self.source = source_name

        def test_source(self):
            assert self.source in ["source_1.txt", "source_2.txt"]
    ```

    Has anyone come across something like this?

    This would allow me to refer to the class instance when running my tests (self.source in the above example).

    Maybe this is what I'm looking for: https://docs.pytest.org/en/stable/example/parametrize.html#a-quick-port-of-testscenarios (thinking face emoji)

    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] @kbakk you can just use a fixture. I would avoid attaching anything to the actual class, and would only use it as a structuring/scoping/organization mechanism
    [Chris NeJame, Test Podcast]
    ```
    @pytest.fixture(params=["source_1.txt", "source_2.txt"], scope="class", autouse=True)
    def source_file_name(request):
        return request.param
    ```
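
    For context, a sketch of how that class-scoped, autouse fixture might drive a whole test class (names follow Kristoffer's example; not from the chat):

    ```
    import pytest

    @pytest.fixture(params=["source_1.txt", "source_2.txt"], scope="class", autouse=True)
    def source_file_name(request):
        return request.param

    class TestSource:
        def test_source(self, source_file_name):
            # only tests that need the value request the fixture explicitly
            assert source_file_name in ["source_1.txt", "source_2.txt"]

        def test_other_step(self):
            # still runs once per param, because the fixture is autouse
            assert True
    ```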
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] @Salmon Mode Indeed, if the test were as simple as in the above example, that would work. But I have a test class with several methods, and I would like to run the entire test class with different test inputs.

    If I parametrize the test class, I will have to pass all argnames (of pytest.mark.parametrize) to all methods of the test class. I don't want that, as it would result in many methods getting variables they're not using.

    It's a set of tests which are interconnected (the second test depends on the first, and so on), so using a class to tie these together seemed like a good idea. It's a bit difficult to explain here. Not so sure I want to go that route anymore. Maybe I should look into pytest-subtests or pytest-check instead (thinking face emoji)

    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] @kbakk that's what the autouse is for. If I understand correctly, it seems like you want the parametrization to affect all test methods, but don't want to actually use that info for each method. In which case, autouse will take care of that
    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] Depending on what you're looking to do though, it may be better to just use a more deeply nested structure, i.e. another folder with a conftest and more test files with more test classes in them
    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] But interconnected tests are usually not ideal. It can be helpful to design tests that make assumptions about those interconnected bits (possibly through mocking), and then write other independent tests to verify those assumptions. If this can't be done easily, it suggests the code may be too tightly coupled
    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] @Pax what I'm thinking about is how such tests would be able to trigger an email being sent. I assume that these would be unit/component tests, and that account credentials must be used to actually send an email. If that's the case, are those credentials hard coded?
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] It's not for testing python code. It's testing a workflow running through a workflow system (Zeebe). The test class was intended to test a workflow, and several runs through parametrizing it. Then each test function in the test class would be a step in the workflow. (Sorry if the explanation isn't entirely clear.)

    But I don't think I need the reporting of each step in the workflow - a pass/fail for the entire workflow may be sufficient (then I can do asserts or subtests/checks during the test run).
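
    A sketch of that "one pytest function per workflow variant, with per-step checks" idea, assuming the pytest-subtests plugin is installed; run_step is a hypothetical stand-in for driving one workflow node:

    ```
    import pytest

    def run_step(variant, step):
        # hypothetical stand-in: drive one workflow node and report success/failure
        return True

    @pytest.mark.parametrize("workflow_variant", ["variant_a", "variant_b"])
    def test_workflow(workflow_variant, subtests):
        for step in ["start", "process", "finish"]:
            # each step is reported separately, but it's still one test function
            with subtests.test(msg=step, variant=workflow_variant):
                assert run_step(workflow_variant, step)
    ```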

    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] "if this can't be done easily, it suggests the code may be too tightly coupled" is still relevant
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] Hmm, which code is too tightly coupled to what?

    It's about testing a DAG - ensuring that one node is executed, then the next. One node can't be decoupled from the next.

    Brian Okken
    @okken
    [Chris NeJame, Test Podcast] Imagine you have function A, which first calls function B, and then, once function B completes, the result determines if A would then call function C, vs function D. Would function B actually need to be executed to determine if A would call C or D given a particular result from function B?
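
    A concrete version of that A/B/C/D scenario (illustrative stand-in functions, not anyone's real code): B is mocked, so A's branching can be tested without executing B at all:

    ```
    from unittest.mock import patch

    def function_b():
        raise RuntimeError("expensive work we don't want to run in this test")

    def function_c():
        return "C"

    def function_d():
        return "D"

    def function_a():
        # branch on B's result without caring how B computes it
        return function_c() if function_b() > 0 else function_d()

    def test_a_calls_c_when_b_is_positive():
        with patch(f"{__name__}.function_b", return_value=1):
            assert function_a() == "C"

    def test_a_calls_d_otherwise():
        with patch(f"{__name__}.function_b", return_value=0):
            assert function_a() == "D"
    ```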
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] Firstly, I agree that code should be designed in a manner where each can be broken into units that can be individually tested.

    In your example... it depends. Is it possible to mock function B's execution? If yes, then we can do just that. If no - then function B must be called.

    Just to be clear, and follow up on your example - I am mocking function B. I have unit tests/integration tests for the work that B performs.

    But what I need to test is that A can run with different inputs, and that D is run in some cases and C in others.

    I think my approach of running a test within the "main test" (the test class) has been the wrong approach. My conclusion is that test classes should be used for grouping tests, and shouldn't contain much control logic (e.g. if class property foo == bar, don't run test method B or C).

    I think my approach will shift to testing the workflow definition/DAG in a single pytest function, one for each variant, and mocking the execution of the steps (which I have been doing with test class + test methods, but bending pytest the wrong way).

    But it's not always possible to mock B, and maybe B must be run to determine the outcome needed before C/D. It's not always possible to use the philosophy of unit testing when testing systems you don't have much control over (black box testing).

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] The system chosen (in this case, Zeebe) was a design decision, just as much as how functions, classes, and modules would be structured. If the functions, classes, and modules were designed in such a way (not to imply intent) that testing their individual components would be more difficult, or even impossible, then I would suggest changing the design.

    When you have to test the DAG, it exponentially increases the number of tests you'd have to do, and I'm guessing you're trying to reduce the very large amount of duplicate code that you might have to write because of the sheer number of tests. Testing the DAG itself is not ideal, because of how many tests there would likely be, but this is why some places use model-based testing to start exploring issues that may only present themselves at that level of complexity, in addition to their normal tests.

    [Chris NeJame, Test Podcast] for a visual regarding the number of tests, I have some animated SVGs here that show it
    Brian Okken
    @okken
    [Kristoffer Bakkejord, Test Podcast] I see that the term black box testing wasn't correct after looking over the article provided (will check it out in detail later).
    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] To be clear Zeebe isn't "untestable", but it requires a different approach than say unit tests, and figuring that out isn't necessarily easy. Just because it's not, doesn't mean it's a bad design decision.

    Here's one test implementation for Zeebe BPMNs (i.e. DAGs): https://github.com/zeebe-io/bpmn-spec (this doesn't cover our use case, which is why I'm looking into an approach in pytest).

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] In my opinion, any design-related decision that makes testing things in an atomic way less achievable, is a bad design decision, as this inherently introduces large amounts of technical debt (i.e. time was borrowed from the future for a perceived speedup in the short term, and this will likely cost more time in the long term than what was perceived to be saved in the short term).

    In this case, in order to pay down the technical debt completely, you would have to go bankrupt and rebuild everything (unless you can piecemeal separate out individual components to be handled by separate, atomically testable components).

    I understand that your system would still be testable. But what I'm saying is that in having to involve so much in every one of your tests, and having to have so many more, they will take a much longer time to run and to write than they would if you could test things atomically, and you will likely spend much more time maintaining them.

    Consciously sacrificing testability, for the sake of short term speed ups, is by no means uncommon, because testing is often seen as secondary and not as important as writing the "actual" code. But I have never heard of a case where doing so didn't bite the person in the ass, and every system I've worked on that focused only on operating at the DAG level (which is surprisingly not 0) was a buggy, unmaintainable mess once it was down the road enough to be considered (by some, but not me) to be MVP.

    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] > any design-related decision that makes testing things in an atomic way less achievable, is a bad design decision
    Agreed.

    > I understand that your system would still be testable. But what I'm saying is that in having to involve so much in every one of your tests, and having to have so many more, they will take a much longer time to run and to write than they would if you could test things atomically, and you will likely spend much more time maintaining them.
    It doesn't seem we're on the same page here. I do find Zeebe to be quite testable. With very little knowledge of pytest plugins and not too much time, I managed to create a basic framework that allowed me to test Zeebe BPMNs. Wanting to iterate on this, I ran into some challenges with how pytest works. But that doesn't mean either pytest or Zeebe is a bad design decision, just that my assumptions about it were wrong (and I can learn from this).
    > Consciously sacrificing testability
    I don't think I am doing that,...
    > that focused only on operating at the DAG level
    Not sure I'm following you here. You may have had a bad experience with DAG systems, but I don't think you should make assumptions about every use case and the implementation considerations others may have.

    Anyways - I appreciate the discussion. You come with many good points. I was looking for some input on how to use pytest with test classes, and I've come to a conclusion that this isn't the right path.

    Brian Okken
    @okken
    [Pax, Test Podcast] > It’s not for testing python code. It’s testing a workflow running through a workflow system (Zeebe). The test class was intended to test a workflow, and several runs through parametrizing it.
    That’s interesting. I haven’t heard of something like this before. I understand where Salmon Mode is coming from, given my experience with programs/scripts/applications. But I’m uncertain whether the same applies when testing a workflow system. I hope you get it figured out.
    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] The problems with such a system likely aren't obvious now, and may not be for a time, but, like I said, taking up technical debt is about short term gains, with long term consequences. The problems will become more apparent as you build out the application. As more complexity is introduced, it will exponentially grow out of control.

    The systems I worked with all had in common an inability to test individual steps in isolation. This meant that the automated tests had to operate at the end-to-end level, which made them very expensive and time consuming to run, and would break regularly, especially when third party dependencies were having issues. They would also prevent developers from finding out the full picture in regards to what was broken, because if an earlier common step was broken, everything that depended on it could not be tested.

    The end result was a lot of wasted work and very slow developer feedback loops, which wasted a lot of time.

    I wish you the best of luck though. I'm sure this will be quite an informative/educational experience in many ways :)

    Brian Okken
    @okken
    [Erich Zimmerman, Test Podcast] Huh, almost every UI test ever.... ;)
    Brian Okken
    @okken
    [Alex SAplacan, Test Podcast] Hi all, quick question: what's the major difference between running pytest [OPTIONS] and python -m pytest [OPTIONS]?
    For me the first one always returns errors (mostly import errors) while the second runs fine.
    Brian Okken
    @okken
    [gc, Test Podcast] path
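
    That is, the import path handling differs: per the pytest docs, python -m pytest also adds the current directory to sys.path, while plain pytest does not. A quick way to see it for yourself (a sketch, assuming your project already has a conftest.py):

    ```
    # Put this at the top of conftest.py, then compare the output of
    # `pytest` vs `python -m pytest`: the latter prepends the invocation
    # directory to sys.path, which is usually why bare `pytest` hits import
    # errors that `python -m pytest` does not.
    import sys
    print("\n".join(sys.path))
    ```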
    Brian Okken
    @okken
    [a4z, Test Podcast] Is there some tooling that would have caught this stupid mistake?
    def ensure_some_default_settings() -> bool: ... return # ... True or False
    later, somewhere else:
    if ensure_some_default_settings:
    was of course always false ... VS Code also showed no problem with that
    this is for throwaway scripts, so no real testing planned, but I wonder, isn't there a tool that could say
    did you mean `if ensure_some_default_settings():`
    something like a static analyser in other languages ...
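
    A sketch of the mistake and one tool that may catch it: recent mypy versions include a truthy-function check that flags a bare function name used in a boolean context (older versions may need the error code enabled explicitly):

    ```
    def ensure_some_default_settings() -> bool:
        ...
        return True  # or False

    if ensure_some_default_settings:    # bug: missing (), the bare function is always truthy
        ...                             # mypy: could always be true in boolean context [truthy-function]

    if ensure_some_default_settings():  # intended call
        ...
    ```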
    Brian Okken
    @okken
    [Jahn Thomas Fidje, Test Podcast] Hi all! I'm struggling a lot with patching, and just can't figure out how to fix it. My conclusion is that the way I'm writing the code is bad, but I don't know a better way to do this. I've created a mini-snippet showing what I hope is enough to explain my setup. If anyone has time, could you please take a quick look at this? I would really appreciate any suggestions on how to improve :) Thanks in advance! https://pastebin.com/pnW15i88
    Brian Okken
    @okken
    [Israel Fruchter, Test Podcast] Hi @Jahn Thomas Fidje, the one thing to keep in mind with patching is that you need to patch the name in the module where it is imported and used, not where it is defined
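
    A self-contained illustration of that rule (patch the name where it is looked up, not where it is defined); the config path is made up:

    ```
    # This module imports `exists` directly, so a test must patch
    # "<this module>.exists", not "os.path.exists".
    from os.path import exists
    from unittest.mock import patch

    def config_present():
        return exists("/etc/hypothetical-app.conf")

    def test_config_present():
        # Patching os.path.exists would not affect config_present(), because
        # this module already holds its own reference named `exists`.
        with patch(f"{__name__}.exists", return_value=True):
            assert config_present() is True
    ```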