    Brian Okken
    @okken
    [Tibor Arpas, Test Podcast] so, venv and tox overlap a little but I think easiest is to use tox (with its standard way to create virtualenvs and install dependencies)

    [David Kotschessa, Test Podcast] So I guess here's where I'm confused. say I'm in an activated venv and I install a package, say django - it installs itself in the venv/bin/whatever folder, but not otherwise on my machine.
    The python version I'm using is also in venv/bin/python (or whatever), based on which version I'm using. But it's also installed globally.

    So I have python 2.7 (because mac still ships with it) and 3.8 (because that's what I installed)
    Soooo say I want to install python 3.2 - is there an installation method that puts it in the venv but does not install globally?

    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] because I'm only going to use it for that project.
    [Tibor Arpas, Test Podcast] tox creates all the configured virtual environments under the .tox directory and I think by default doesn’t take anything from the global/active virtualenv.
    [Tibor Arpas, Test Podcast] I think it’s fine to have tox globally… and then all the things you want to experiment with are configured in tox.ini and the corresponding virtualenvs are independent from the global install, from pyenv or venv
    [David Kotschessa, Test Podcast] oh yes I'm fine with having tox globally
    [David Kotschessa, Test Podcast] but I mean the different python versions
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] Because python isn't a 'package' but it still ends up in your environment. I can't pip install python3.2 or something (right?)
    [Tibor Arpas, Test Podcast] tox was able to find the base python versions just fine on my mac… if that’s not the case for you I think you’ll be able to help it … via config
    [David Kotschessa, Test Podcast] i'm not worried it will find them - but just wondering if there's an ideal way to install them
    [Tibor Arpas, Test Podcast] I installed everything “globally” distinguished via python3.5 python3.6 etc
    [David Kotschessa, Test Podcast] I guess it doesn't matter that they are globally installed
    [Tibor Arpas, Test Podcast] I think you can start in the easiest way and then change if needed. There are too many options and details, and I think those will be more important than a “strategy”
    [David Kotschessa, Test Podcast] It's also that I have the intention of making this easy for someone else to follow.
    [Tibor Arpas, Test Podcast] everything is configured in tox.ini and it doesn’t interfere with virtualenvs not created with tox. The python versions - easiest to have them all on PATH. I’m sure there is another way but I don’t have experience with that.
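    For reference, a minimal tox.ini along the lines being described might look like this (a sketch; the Python versions and test command are illustrative assumptions, not anything confirmed above):

        [tox]
        # one virtualenv per interpreter found on PATH; absent ones are skipped
        envlist = py27, py36, py38
        skip_missing_interpreters = true

        [testenv]
        # dependencies are installed into each .tox/<env> virtualenv, not globally
        deps =
            pytest
        commands =
            pytest {posargs}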
    Brian Okken
    @okken
    [Tibor Arpas, Test Podcast] > It’s also that I have the intention of making this easy for someone else to follow.
    Understood. tox doesn’t install python versions and I don’t have experience with any fancy tool that would do that.
    [David Kotschessa, Test Podcast] No problem. You clarified a lot there for me though
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] well, there's this... https://hub.docker.com/r/fkrull/multi-python
    [David Kotschessa, Test Podcast] kind of taking it up a notch
    [David Kotschessa, Test Podcast] I kind of wanted to write something somewhat beginner friendly but I'm now thinking with tox and multiple python versions I'm already leaving that arena
    Brian Okken
    @okken
    [Tibor Arpas, Test Podcast] nice :)
    Yeah, tox is not very intuitive, so not beginner friendly...
    Brian Okken
    @okken

    [Gabriele Bonetti, Test Podcast] Hi All, currently I run some test pipelines on jenkins completely on AWS, creating on-demand workers (jenkins slaves) when they are needed. Workers have been linux only so far, but we will introduce Win 10 support, which needs to be tested. Have you ever tried to manage Win 10 images on AWS? Since there is no native image, I see you have to bring your own and manage your own licenses, but it looks cumbersome. How is it going to work with on-demand dynamic workers that then disappear?

    So I was wondering if this is a good idea at all, or whether there are other solutions, for example using the supported Windows Server 2016 / 2019 images with desktop support, which might be good enough for testing on the equivalent Windows desktop.

    For example I was also looking at BrowserStack, and there is a server -> desktop table, except for Windows 10 :-) https://www.browserstack.com/question/478#:~:text=Home%20Support%20Live-,Is%20web%20testing%20on%20Windows%20Server%20edition%20the%20same%20as,remote%20access%20to%20our%20customers.&text=For%20Windows%2010%2C%20we%20use,which%20is%20a%20Desktop%20edition.

    Brian Okken
    @okken

    [Kristoffer Bakkejord, Test Podcast] Anyone have experience with this course? https://www.udemy.com/course/elegant-automation-frameworks-with-python-and-pytest/

    I would like to improve the pytest knowledge in my team, and addressing it through a not-too-long video course seems like it could be a good idea.

    [Kristoffer Bakkejord, Test Podcast] Or are there any other introductory video courses you recommend?
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] No, but that reminds me @brian really needs to turn his book into a course. You still have any aspirations on that, Brian?
    Brian Okken
    @okken

    [Pax, Test Podcast] Hello! Anyone here going to PyCon US next week? I’m looking for someone to replace my spot as a volunteer for the Sponsor Workshop at this year’s PyCon US, “Blackfire: Debugging Performance in Python and Django applications - (Jérôme Vieilledent, Sümer Cip)”, which is happening 15:00 to 16:30 (Eastern US) on Thursday, May 13.

    I will give more info (what the role generally entails) here if anyone is a bit interested. I’m hesitant to send a super long message especially when I haven’t posted in a while on here. :see_no_evil:

    Brian Okken
    @okken

    [Jacob Floyd, Test Podcast] Some pytest help please:
    Is there a way to access capsys in a unittest-based test class without preventing nosetest from running the test as well? (only use capsys if running under pytest, else continue with the test's current stdout/stderr redirection logic)

    I've got a unittest based test suite that I'm working on running under pytest. I'm trying to minimize the delta of my changes to avoid integration hell as my branch won't be merged until all tests are working under the new setup (more than just pytest). In the meantime the tests will continue to run under nosetest, and I will need to periodically merge in the changes from the main branch into my pytest branch (thus, minimizing deltas is really important).

    I have one TestCase class that replaces sys.stdout and sys.stderr with open(path_to_temp_file, "w") in setUp() and then sets sys.stdout = sys.__stdout__ in tearDown(). That path_to_temp_file is then inspected in the tests to make sure output of a command isn't broken.
    This, of course, conflicts with pytest's output capturing, and so the test seems to hang for a couple of minutes before passing.
    In a pure pytest test, I would reach for the capsys fixture. But is there a way to use that in a unittest-based test? Hopefully conditionally based on if running under pytest or not.
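    One pattern that might help (a sketch with made-up names, and assuming pytest's autouse-fixture hook for unittest classes works for capsys here): inject capsys onto the TestCase via an autouse fixture, and keep the temp-file redirection as the fallback when it isn't set (e.g. under nose).

        import sys
        import tempfile
        import unittest

        import pytest


        def run_command():
            # stand-in for the real code under test
            print("expected output")


        class CommandOutputTest(unittest.TestCase):
            capsys = None

            @pytest.fixture(autouse=True)
            def _inject_capsys(self, capsys):
                # only runs when pytest collects this class; under nose it never
                # executes, so self.capsys stays None
                self.capsys = capsys

            def setUp(self):
                if self.capsys is None:
                    # nose path: keep the original temp-file redirection
                    self._tmp = tempfile.NamedTemporaryFile("w+", delete=False)
                    self._old_stdout = sys.stdout
                    sys.stdout = self._tmp

            def tearDown(self):
                if self.capsys is None:
                    sys.stdout = self._old_stdout
                    self._tmp.close()

            def test_command_output(self):
                run_command()
                if self.capsys is not None:
                    out = self.capsys.readouterr().out
                else:
                    self._tmp.flush()
                    with open(self._tmp.name) as f:
                        out = f.read()
                self.assertIn("expected output", out)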

    Brian Okken
    @okken
    [Jacob Floyd, Test Podcast] or is there a mark I can use to disable pytest stdout/stderr capturing for a class?
    Brian Okken
    @okken
    [Jacob Floyd, Test Podcast] Thanks! I found a way to lightly refactor the tests so that I have one place to inject the pytest specific bits and skip the old capture bits. It works much better now.
    Brian Okken
    @okken
    [Ash, Test Podcast] Nice
    Brian Okken
    @okken

    [Jacob Floyd, Test Podcast] When pytest runs unittest tests, it does something like:

    • _pytest.unittest.TestCaseFunction.runtest()
      • self._testcase(result=self) where self._testcase = the TestCase class under test, so TestCase.__call__(result=<instance TestCaseFunction>)
        • unittest2.run(*args, **kwargs)

    I have a TestCase subclass that overrides run() (and calls super), but pytest is passing in a _pytest.unittest.TestCaseFunction instance instead of a unittest.result.TestResult.

    That run function accesses result.errors or result.failures, which of course are not present on TestCaseFunction. Given a TestCaseFunction, how can I tell if the test failed or had errors?
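    For what it's worth, a defensive sketch (a hypothetical subclass; it doesn't answer how to read the outcome off a TestCaseFunction, only how to guard the custom bookkeeping so it runs just when the result really is a classic TestResult):

        import unittest


        class MyTestCase(unittest.TestCase):  # hypothetical subclass
            def run(self, result=None):
                outcome = super().run(result)
                errors = getattr(result, "errors", None)
                failures = getattr(result, "failures", None)
                if errors is not None and failures is not None:
                    # classic unittest.TestResult: the usual lists are present
                    self.test_had_problem = bool(errors) or bool(failures)
                else:
                    # under pytest, result is a TestCaseFunction without those
                    # lists, so they can't be inspected here
                    self.test_had_problem = None
                return outcome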

    Brian Okken
    @okken

    [Adam Parkin, Test Podcast] Anyone have experience testing FastAPI apps that use Sqlalchemy? I’ve been spoiled by Django’s TestCase class which gives you that nice test isolation by isolating everything in a transaction and rolling back after a test completes. I’d like to do something similar in FastAPI, and while I can create a pytest fixture that starts a transaction, yields a database session, then rolls back the transaction, that means that (as an example) API testing becomes more difficult (any setup in a test to insert data etc won’t be visible to the API endpoint since it was never committed).

    Would appreciate any experiences, tips, or tricks from those who’ve maybe done some testing of a FastAPI + Sqlalchemy app.
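    One approach that might address the "setup isn't visible to the endpoint" problem (a sketch; app, get_db, and engine are placeholders for the project's own objects, not anything confirmed above): have the test and the app share one in-transaction session via dependency_overrides, then roll back at the end.

        import pytest
        from fastapi.testclient import TestClient
        from sqlalchemy.orm import sessionmaker

        from myproject.main import app                 # hypothetical FastAPI app
        from myproject.database import engine, get_db  # hypothetical engine/dependency


        @pytest.fixture
        def db_session():
            connection = engine.connect()
            transaction = connection.begin()
            session = sessionmaker(bind=connection)()
            try:
                yield session
            finally:
                session.close()
                transaction.rollback()  # nothing the test inserted is ever committed
                connection.close()


        @pytest.fixture
        def client(db_session):
            # the endpoint now uses the same session the test used for setup,
            # so uncommitted rows are visible to both
            app.dependency_overrides[get_db] = lambda: db_session
            with TestClient(app) as c:
                yield c
            app.dependency_overrides.clear()

    If the code under test calls session.commit() itself, SQLAlchemy's "join an external transaction" recipe (with SAVEPOINTs) is needed on top of this sketch.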

    Brian Okken
    @okken
    [Adam Parkin, Test Podcast] Pytest question: if one marks a bunch of tests with a custom mark (say @pytest.mark.integration) you can exclude those tests by doing -m "not integration" which works, but means you now have to remember to supply that option. Is there a way to configure Pytest so that by default (ie without specifying on the command line) tests with the integration mark are skipped?
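    One common recipe for this (a conftest.py sketch; the --integration flag name is just an illustration): skip integration-marked tests unless an opt-in option is passed.

        # conftest.py
        import pytest


        def pytest_addoption(parser):
            parser.addoption(
                "--integration",
                action="store_true",
                default=False,
                help="also run tests marked with @pytest.mark.integration",
            )


        def pytest_collection_modifyitems(config, items):
            if config.getoption("--integration"):
                return
            skip_integration = pytest.mark.skip(reason="needs --integration to run")
            for item in items:
                if "integration" in item.keywords:
                    item.add_marker(skip_integration)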
    Brian Okken
    @okken
    [Brian Skinn, Test Podcast] Another pytest question -- I've got a repo with a small pytest suite, which I'm using with an Actions cron job to check the contents of a Gist for freshness: https://github.com/bskinn/intersphinx-gist/blob/95f959faf11a4a2e5fd7783aac73d19761534958/test_isphx_gist.py
    Brian Okken
    @okken

    [David Kotschessa, Test Podcast] man. Trying to debug a few tests that:

    • Fail (sometimes) when I run my whole test suite in random order using randomly (can reproduce with the random seed)
    • Pass when I run them independently (even if I use the same random seed)
    • Pass when I manually run them in the same order randomly does.
    They are very isolated unit tests.

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] Unsolicited advice:

    Consider the transitive property. If the expectation is A < B < C, then A can be known to be less than C just by asserting A < B and B < C.

    If you know function A should pass X to function B, you can assert that it does, and then assert that B handles X separately without needing to involve A.

    But consider also how these things help you (and other developers) to debug problems. "Debugging" is not a singular thing. It has many components. For example, pinpointing, i.e. isolating the condition under which the bug occurs. Another is "locating", i.e. finding the location of the bug in the code so it can be fixed. A third is "repairing", i.e. changing the code to remove the problem. These 3 components all assume, however, that the bug has already been discovered.

    How would breaking things out as atomically as possible (via the transitive property) help with these three components? How would it hurt them? Are there other things that could be done to enhance them?

    There can be trade-offs. For example, focusing exclusively on breaking things out as atomically as possible might help with locating a bug, and maybe even repairing a bug (albeit indirectly, since the code could be more "decoupled" in general, although that's not a guarantee either). But it could also create a lot of cruft to sift through when trying to pinpoint a bug. How might this impact the overall "debugging" process, though?

    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] @Salmon Mode same inputs every time the test is run. It has no dependencies. It only fails on certain random number seeds when the entire test suite is run. The ordering of the tests is random within each module. Manually running the tests in the same order that caused the failure does not cause the same failure.
    [David Kotschessa, Test Podcast] Also there is more than one random number seed (and therefore test ordering) that causes the failure.
    [David Kotschessa, Test Podcast] Might be a bug with randomly itself, or some collision of the entire suite, e.g. django plus pytest plus randomly plus...
    Brian Okken
    @okken
    [Jukka Akkanen, Test Podcast] I've had a couple of situations where a single test fails depending on the randomized test order. My approach has been to note which tests are executed before the failing one on each run, and after a while a pattern emerges
    Brian Okken
    @okken
    [David Kotschessa, Test Podcast] yeah if I run the tests in the same order manually they don't break.
    [David Kotschessa, Test Podcast] and the ones that were running prior to the breaking test were completely unrelated - different modules, different fixtures, different django app using a different set of models! Just insanely weird.
    Unfortunately I just had to remove randomly because it is impossible to troubleshoot. I think I've at least established pretty good test independence practices by using it from the beginning. Maybe I'll run it occasionally locally but not in the CI pipeline
    Brian Okken
    @okken
    [Mark Harvey, Test Podcast] I’m running pytest --cov=src but it's not picking up untested files. Is there an easy way to include 0% covered files?
    Brian Okken
    @okken
    [Mark Harvey, Test Podcast] Sorry forget that, I was missing an init file
    Brian Okken
    @okken
    [Jacob Floyd, Test Podcast] There are some tests that mock a mongo database here:
    opsdroid/opsdroid#1799
    But how can we write any assertions about the code under test? Is there a better test than merely running the functions to implicitly assert that they don't raise any exceptions?
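    A generic illustration (not tied to that PR; the names are made up): with a mocked collection you can assert on the calls the code under test makes, which says more than just "it didn't raise".

        from unittest import mock


        def save_user(collection, name):
            # stand-in for the real code under test
            collection.insert_one({"name": name})


        def test_save_user_inserts_expected_document():
            collection = mock.MagicMock()
            save_user(collection, name="Ada")
            collection.insert_one.assert_called_once_with({"name": "Ada"})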
    Brian Okken
    @okken

    [Adam Parkin, Test Podcast] Question: can you parametrize a pytest test by a fixture? Like say I have a test that takes a fixture, and I write a second test that takes a different fixture but is otherwise identical. I’d like to write something like:

        "fixture_name",
        [
            some_fixture, some_other_fixture
        ]
    )
    def test_with_parameterized_fixture(fixture):
        ... modify fixture in some way ...
        ... do some assertion about fixture ...

    But that doesn’t work as some_fixture and some_other_fixture aren’t defined (the fixtures are in my conftest). Any ideas?
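    One workaround that may fit (a sketch): parametrize on the fixture name as a string and resolve it inside the test with request.getfixturevalue().

        import pytest


        @pytest.fixture
        def some_fixture():
            return {"kind": "a"}


        @pytest.fixture
        def some_other_fixture():
            return {"kind": "b"}


        @pytest.mark.parametrize("fixture_name", ["some_fixture", "some_other_fixture"])
        def test_with_parametrized_fixture(fixture_name, request):
            fixture = request.getfixturevalue(fixture_name)  # look up the fixture by name at runtime
            fixture["touched"] = True   # modify the fixture in some way
            assert fixture["touched"]   # do some assertion about the fixture

    Indirect parametrization is another option if the two fixtures can be folded into a single fixture that takes a parameter.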