    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] oh man, so nice to see pretty pytest work =D
    [Israel Fruchter, Test Podcast] yeah, it is
    [Israel Fruchter, Test Podcast] it's not mine, but I do approve most of it :)
    [Israel Fruchter, Test Podcast] the guy who wrote it, it's the first time he's touching pytest (or python for that matter)...
    [Israel Fruchter, Test Podcast] but he's a really experienced developer, like a 20x one :)
    [Dima Spivak, Test Podcast] :) seeing error != None I thought "this guy might be a little newer to Python"
    [Dima Spivak, Test Podcast] there's that muscle memory of having read pep-8 90 times that shows in code looking "pythonic"
    [Israel Fruchter, Test Podcast] his day job, he's doing C++... so yeah :)
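
    (The PEP 8 point being referenced: comparisons to None should use is / is not rather than == / !=, e.g.)

    ```
    error = None

    if error != None:        # works, but reads as non-idiomatic Python
        print("unexpected")
    if error is not None:    # the "pythonic" spelling
        print("unexpected")
    ```
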
    Brian Okken
    @okken
    [Israel Fruchter, Test Podcast] So one take I have from you @dima is approaching things from the angle of a "naive" developer (and making it "idiot" proof) :)
    [Dima Spivak, Test Podcast] absolutely
    [Dima Spivak, Test Podcast] the most common issues that surface are from people who don't rtfm
    [Dima Spivak, Test Podcast] so having your application respond to that gracefully is important
    [Israel Fruchter, Test Podcast] yeah, saw that sentence today on a slide, "we should invest more in helping our users, to not shoot themselves in the leg"
    [Dima Spivak, Test Podcast] e.g. people who pass payloads when the endpoint doesn't support it, query parameters when path parameters are expected, etc.
    [Israel Fruchter, Test Podcast] we're mainly going to use boto3 in the functional tests, kind of assuming most folks won't use HTTP directly with an AWS-like API
    [Dima Spivak, Test Podcast] eh, you'd be surprised
    [Dima Spivak, Test Podcast] a lot of folks still try to use curl for all their automation needs
    [Israel Fruchter, Test Podcast] with AWS ?
    [Dima Spivak, Test Podcast] ooh yeah.
    [Dima Spivak, Test Podcast] lol
    [Israel Fruchter, Test Podcast] that's the "crazy" developer angle :)
    [Dima Spivak, Test Podcast] the things I've seen would give you nightmares
    [Dima Spivak, Test Podcast] I mean think about it, the users of your products aren't technical enough to build CassandraDB in C++ themselves
    [Dima Spivak, Test Podcast] so they find a drop-in replacement
    [Dima Spivak, Test Podcast] and then if they know about bash scripts and using curl, they'll keep using that
    [Israel Fruchter, Test Podcast] thanks but no thanks, have plenty of nightmares of my own :)
    [Israel Fruchter, Test Podcast] In that case, we should be more selective of customers, LOL
    Brian Okken
    @okken
    [Israel Fruchter, Test Podcast] luckily for us, some of our customers even contribute code, and showed it at europython2020
    [Israel Fruchter, Test Podcast] (or at least pushed me to polish up our python client fork of the cassandra one)
    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] in unrelated news, I am currently hiring someone who nerds out about Python and Pytest for my team at StreamSets (post Series C startup focusing on interesting problems in the DataOps space): https://streamsets.bamboohr.com/jobs/view.php?id=36
    [Dima Spivak, Test Podcast] If anyone wants more info, DM me! :)
    [Dima Spivak, Test Podcast] end of hiring spam
    [Dima Spivak, Test Podcast] :)
    Jan Gazda
    @1oglop1
    Hi all, I remember one of the podcasts, either TestAndCode or pythonBytes, mentioned a tool to gradually introduce black and other things to an existing codebase in order to prevent big bang changes. I wonder if anyone could point me to it.
    Brian Okken
    @okken
    [Mikey Reppy, Test Podcast] pre-commit makes it easy to black only the files you work on, commit by commit
    [Mikey Reppy, Test Podcast] that's how we introduced it
    [Mikey Reppy, Test Podcast] although I confess I did black the entire test suite pretty soon in our journey so that it was very clear when a test changed in subsequent diffs
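
    For reference, a minimal .pre-commit-config.yaml along those lines (the rev pin is just a placeholder; pin to whatever black release you standardize on):

    ```
    repos:
      - repo: https://github.com/psf/black
        rev: 22.3.0  # placeholder pin
        hooks:
          - id: black
    ```

    By default pre-commit only runs hooks against the files touched in each commit, which is what makes the gradual roll-out work.
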
    Brian Okken
    @okken

    [Adam Parkin, Test Podcast] I'd be interested in knowing why you'd want to introduce black gradually. For us, we started doing it file by file: if someone was working on a file and had black installed, it'd format the entire file, but we found that made the review process more difficult. We also found that because not everyone had black installed (or didn't have their editor set to format on save), people who didn't have black set up would make changes to a file, which would cause black to reformat it later when someone else edited the file (again making for noisy diffs).

    We ended up doing a "big bang" commit where we ran black over the entire codebase and then reviewed that in one go, and that worked really well for us. As a plus, you can also add the SHA of that "big bang" commit to a git-blame-ignore-revs file and then tell git blame to ignore that commit (this blog post outlines that idea). And to make sure black formatting stays enforced, we added black to our CI pipeline and fail the build if it finds files that haven't been formatted by black.

    Brian Okken
    @okken

    [Erich Zimmerman, Test Podcast] Hey all!

    I have a problem that seems to come up regularly, and I'm not really sure about the best Pytest-y way to solve this problem. This example is about the most concise I could find for the issue.

    ```
    import pytest

    @pytest.fixture(params=['device 1', 'device 2', 'device 3'])
    def test_device(request):
        return request.param

    def test_checking_urls_on_device(test_device):
        url_list = get_urls(test_device)
        for url in url_list:
            # call the URL, make an assertion
            ...
    ```

    This is not an ideal test, because I would like each URL in the list to be a separate parameterized test case, def test_checking_urls_on_device(test_device, test_url), with test_url being the individual parameters fed to the test case.

    The concise problem statement is that depending on the device, the URL parameters will be different; we look up the URL list as a function of the device. I cannot know what the test_url values would be until I already know what test_device is.

    I think that pytest_generate_tests is my best option here, and I'm not even sure if that will really help me, because I need to know the value of test_device before I can create the test_url parameters.
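
    A rough sketch of the pytest_generate_tests route, assuming get_urls can be called at collection time (get_urls and the device list come from the snippet above; the conftest placement is just one option):

    ```
    # conftest.py (sketch)
    from myproject.urls import get_urls  # hypothetical import; wherever get_urls lives

    DEVICES = ['device 1', 'device 2', 'device 3']

    def pytest_generate_tests(metafunc):
        if {'test_device', 'test_url'} <= set(metafunc.fixturenames):
            # one (device, url) pair per URL, so each URL becomes its own test case
            pairs = [(device, url)
                     for device in DEVICES
                     for url in get_urls(device)]
            metafunc.parametrize(('test_device', 'test_url'), pairs)
    ```

    ```
    # test module (sketch)
    def test_checking_urls_on_device(test_device, test_url):
        # call test_url for test_device, make an assertion
        ...
    ```

    With this approach the parametrized test_device fixture isn't needed; pytest_generate_tests supplies both arguments directly.
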

    Brian Okken
    @okken
    [Liora Milbaum, Test Podcast] I am struggling with implementing unit tests for a FastAPI service with SQLAlchemy. Any ideas/examples would be super helpful.
    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] Shouldn't devs be writing those as they develop, @Liora Milbaum ?
    Brian Okken
    @okken
    [Liora Milbaum, Test Podcast] Have you ever used the pytest-postgresql plugin while connecting to an already existing postgresql database and lived to tell the tale? :)
    Brian Okken
    @okken
    [Dima Spivak, Test Podcast] In a world of SQLAlchemy I dunno why to limit yourself to plugins specific to individual dbs
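
    In that spirit, one common pattern is to override the app's DB dependency with an in-memory SQLite session via SQLAlchemy, so tests never need a real postgres. A sketch, assuming the service exposes an app and a get_db dependency (the myservice module names are hypothetical):

    ```
    import pytest
    from fastapi.testclient import TestClient
    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker
    from sqlalchemy.pool import StaticPool

    from myservice.main import app, get_db   # hypothetical module layout
    from myservice.models import Base        # hypothetical declarative base

    engine = create_engine("sqlite://",
                           connect_args={"check_same_thread": False},
                           poolclass=StaticPool)
    TestingSession = sessionmaker(bind=engine)

    @pytest.fixture
    def client():
        Base.metadata.create_all(engine)      # build the schema in the throwaway db
        def override_get_db():
            db = TestingSession()
            try:
                yield db
            finally:
                db.close()
        app.dependency_overrides[get_db] = override_get_db
        yield TestClient(app)
        app.dependency_overrides.clear()
        Base.metadata.drop_all(engine)

    def test_health_endpoint(client):
        assert client.get("/health").status_code == 200   # hypothetical endpoint
    ```
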
    Brian Okken
    @okken

    [Jacob Floyd, Test Podcast] I'm trying to fix a test. It's in a unittest class, but pytest is the official runner, so I'm using pytest features. Pulling the test out of the unittest class didn't make a difference.

    The test is supposed to test signal handling. ie when the app gets SIGHUP and SIGINT, reload() should be called, as should sys.exit (or SystemExit should be raised).

    If I run the one test all by itself, everything passes. But when I run it in the suite, something else seems to get the SIGHUP/SIGINT, causing pytest to exit. Whoever wrote this before tried to use threading, but that just swallowed exceptions, so no one knew that this test was broken. Instead, some time after the test "passed", pytest would exit saying it got a KeyboardInterrupt.

    So, is there a good way to run a pytest test with subprocess or multiprocessing to make sure the signal is only going to the process under test?

    [Jacob Floyd, Test Podcast] Here's my updated test that fixes a hidden issue with that test and drops threading: https://github.com/cognifloyd/opsdroid/blob/new-test-machinery/tests/test_core.py#L87-L101
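
    For the subprocess/multiprocessing idea, a rough sketch (POSIX-only; the lambda handler is a stand-in for the app's real signal handling) that keeps SIGHUP away from the pytest process itself:

    ```
    import multiprocessing
    import os
    import signal
    import time

    def _child(queue):
        received = []
        # stand-in handler; a real test would wire up the app's own handler instead
        signal.signal(signal.SIGHUP, lambda signum, frame: received.append(signum))
        queue.put("ready")
        time.sleep(2)               # give the parent time to deliver the signal
        queue.put(received)

    def test_sighup_only_hits_the_child_process():
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=_child, args=(queue,))
        proc.start()
        assert queue.get(timeout=5) == "ready"    # handler installed, safe to signal
        os.kill(proc.pid, signal.SIGHUP)          # only the child ever sees SIGHUP
        assert signal.SIGHUP in queue.get(timeout=5)
        proc.join(timeout=5)
    ```
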
    Brian Okken
    @okken

    [Pax, Test Podcast] I'm creating tests involving an email marketing service (e.g. mailchimp). I'm trying to decide whether to pass the campaign's recipients explicitly / as a separate parameter to the method that creates/schedules campaigns, making it safer not to schedule a campaign to the actual list during a test.

    On the other hand, it doesn't seem right to pass the recipients as a separate parameter from the "email type", since an email/campaign always goes to the same list / recipients. (Unless of course the method is used in tests.)

    Based on my collective knowledge about clean code (mostly from books), passing a separate recipients parameter just for the sake of tests is not a good design decision; it's the responsibility of the person writing tests (which is currently me… but could be a different person in the future) to clean up (delete campaigns) or mock appropriately (perhaps stub the recipients prop) to make sure no emails generated from tests are sent to actual users. But I also think there's so much I don't know, so I would like to hear what others think.
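
    One way to keep the production signature intact and still stay safe in tests is to stub whatever looks up the recipients, rather than adding a test-only parameter. A rough sketch (CampaignService, create_campaign, and marketing.recipients.get_list_for are hypothetical names standing in for the real code):

    ```
    from unittest import mock

    from marketing.campaigns import CampaignService   # hypothetical import

    def test_schedule_campaign_never_targets_the_real_list():
        # stub the recipient lookup so nothing created here can reach actual users
        with mock.patch("marketing.recipients.get_list_for",
                        return_value=["qa@example.com"]):
            service = CampaignService(api_key="test-key")
            campaign = service.create_campaign(email_type="newsletter")
            assert campaign.recipients == ["qa@example.com"]
    ```
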

    Brian Okken
    @okken

    [Chris NeJame, Test Podcast] @Pax I agree that adding an argument to a function just for a test is probably not a good design.

    What is the role of this function exactly? It sounds like it may be both to define the campaign, and also to send it out, is that correct?