    Brian Okken
    [Dima Spivak, Test Podcast] (when I see "API test" I imagine more things around correctness and error handling than load in the face of fault injection)
    [Israel Fruchter, Test Podcast] I'm planning to test the APIs both in a more controlled and specific environment
    [Israel Fruchter, Test Podcast] and also in a closer-to-reality environment, with a bigger amount of traffic going into the system
    [Dima Spivak, Test Podcast] gotcha
    [Dima Spivak, Test Podcast] so yeah, for APIs, the things most people forget are around malformed data
    [Dima Spivak, Test Podcast] payloads that are too big
    [Dima Spivak, Test Podcast] wrong datatypes being passed
    [Dima Spivak, Test Podcast] that sorta thing
    [Israel Fruchter, Test Podcast] FYI, those are the "unit tests" the development guys have been doing so far
    [Dima Spivak, Test Podcast] oh man, so nice to see pretty pytest work =D
    [Israel Fruchter, Test Podcast] yeah, it is
    [Israel Fruchter, Test Podcast] it's not mine, but I do approve most of it :)
    [Israel Fruchter, Test Podcast] the guy who wrote it, it's the first time he's touching pytest (or Python, for that matter)...
    [Israel Fruchter, Test Podcast] but he's a really experienced developer, like a 20x one :)
    [Dima Spivak, Test Podcast] :) seeing error != None I thought "this guy might be a little newer to Python"
    [Dima Spivak, Test Podcast] there's that muscle memory of having read pep-8 90 times that shows in code looking "pythonic"
    [Israel Fruchter, Test Podcast] his day job, he's doing C++... so yeah :)
    [Israel Fruchter, Test Podcast] So one take I have from you @dima is approaching from the "naive" developer angle (and making it "idiot"-proof) :)
    [Dima Spivak, Test Podcast] absolutely
    [Dima Spivak, Test Podcast] the most common issues that surface are from people who don't rtfm
    [Dima Spivak, Test Podcast] so having your application respond to that gracefully is important
    [Israel Fruchter, Test Podcast] yeah, saw that sentence today on a slide: "we should invest more in helping our users to not shoot themselves in the foot"
    [Dima Spivak, Test Podcast] e.g. people who pass payloads when the endpoint doesn't support it, query parameters when path parameters are expected, etc.
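    Dima's list of commonly forgotten API checks (malformed payloads, oversized bodies, wrong datatypes) can be sketched as a parametrized pytest suite against a toy handler. Everything here is a hypothetical stand-in: `handle_request`, the 1 KiB size limit, and the `count` field are invented for illustration, not part of any real API discussed above.

    ```python
    import json

    import pytest


    def handle_request(raw_body):
        """Toy endpoint handler (hypothetical): returns (status, message)."""
        MAX_BODY = 1024  # arbitrary size limit for the example
        if len(raw_body) > MAX_BODY:
            return 413, 'payload too large'
        try:
            payload = json.loads(raw_body)
        except json.JSONDecodeError:
            return 400, 'malformed JSON'
        if not isinstance(payload.get('count'), int):
            return 400, 'wrong datatype for "count"'
        return 200, 'ok'


    @pytest.mark.parametrize('raw_body, expected_status', [
        ('{"count": 3}', 200),        # well-formed request
        ('{"count": "three"}', 400),  # wrong datatype
        ('{not json', 400),           # malformed payload
        ('x' * 2048, 413),            # payload too big
    ])
    def test_graceful_rejection(raw_body, expected_status):
        status, _ = handle_request(raw_body)
        assert status == expected_status
    ```

    The point is that the negative cases outnumber the happy path: each "naive developer" mistake becomes its own parametrized case, so a regression in error handling fails as a single, clearly named test.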
    [Israel Fruchter, Test Podcast] we're mainly going to use boto3 in the functional tests, kind of assuming most folks won't use HTTP directly with an AWS-like API
    [Dima Spivak, Test Podcast] eh, you'd be surprised
    [Dima Spivak, Test Podcast] a lot of folks still try to use curl for all their automation needs
    [Israel Fruchter, Test Podcast] with AWS ?
    [Dima Spivak, Test Podcast] ooh yeah.
    [Dima Spivak, Test Podcast] lol
    [Israel Fruchter, Test Podcast] that's the "crazy" developer angle :)
    [Dima Spivak, Test Podcast] the things I've seen would give you nightmares
    [Dima Spivak, Test Podcast] I mean think about it, the users of your products aren't technical enough to build CassandraDB in C++ themselves
    [Dima Spivak, Test Podcast] so they find a drop in replacement
    [Dima Spivak, Test Podcast] and then if they know about bash scripts and using curl, they'll keep using that
    [Israel Fruchter, Test Podcast] thanks but no thanks, have plenty of nightmares of my own :)
    [Israel Fruchter, Test Podcast] In that case, we should be more selective of customers, LOL
    [Israel Fruchter, Test Podcast] luckily for us, some of our customers even contribute code, and showed it at EuroPython 2020
    [Israel Fruchter, Test Podcast] (or at least push me to polish up our Python client, a fork of the Cassandra one)
    [Dima Spivak, Test Podcast] in unrelated news, I am currently hiring someone who nerds out about Python and Pytest for my team at StreamSets (post Series C startup focusing on interesting problems in the DataOps space): https://streamsets.bamboohr.com/jobs/view.php?id=36
    [Dima Spivak, Test Podcast] If anyone wants more info, DM me! :)
    [Dima Spivak, Test Podcast] end of hiring spam
    [Dima Spivak, Test Podcast] :)
    Jan Gazda
    Hi all, I remember one of the podcasts, either TestAndCode or pythonBytes, mentioned a tool to gradually introduce black and other things to an existing codebase in order to prevent big-bang changes. I wonder if anyone could point me to it.
    [Mikey Reppy, Test Podcast] pre-commit makes it easy to black only the files you work on, commit by commit
    [Mikey Reppy, Test Podcast] that's how we introduced it
    [Mikey Reppy, Test Podcast] although I confess I did black the entire test suite pretty soon in our journey so that it was very clear when a test changed in subsequent diffs

    [Adam Parkin, Test Podcast] I’d be interested in knowing why you’d want to introduce black gradually. We started by doing it file by file, where if someone was working on a file and had black installed, it’d format the entire file, but we found that made the review process more difficult. We also found that because not everyone had black installed (or didn’t have their editor set to format on save), over time people without black set up would make changes to a file, which would cause black to reformat it later when someone else edited the file (again making for noisy diffs).

    We ended up doing a “big bang” commit where we ran black over the entire codebase and then reviewed that in one go, and that worked really well for us. As a plus, you can add the hash of that “big bang” commit to a git-blame-ignore-revs file and then tell git blame to ignore that commit (this blog post outlines the idea). And to ensure black formatting stays enforced, we added black to our CI pipeline and fail the build if it finds files that haven’t been formatted by black.
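    The blame-ignore trick Adam describes boils down to two git commands. The SHA below is a made-up placeholder (use the full hash of your own "big bang" Black commit), and the `git init` line only exists to make the sketch self-contained:

    ```shell
    # Create a throwaway repo so the sketch runs anywhere; in real use
    # you'd run the last two commands inside your existing project.
    git init -q blame-demo && cd blame-demo

    # Record the big-bang formatting commit (placeholder SHA) in a file...
    echo "1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b" >> .git-blame-ignore-revs

    # ...and point git blame at it, so `git blame` skips that commit
    # when attributing lines (same effect as `git blame --ignore-revs-file`).
    git config blame.ignoreRevsFile .git-blame-ignore-revs
    ```

    Committing the `.git-blame-ignore-revs` file to the repo means teammates only need the one `git config` line to get clean blame output.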


    [Erich Zimmerman, Test Podcast] Hey all!

    I have a problem that seems to come up regularly, and I'm not really sure about the best Pytest-y way to solve this problem. This example is about the most concise I could find for the issue.

    ```python
    @pytest.fixture(params=['device 1', 'device 2', 'device 3'])
    def test_device(request):
        return request.param

    def test_checking_urls_on_device(test_device):
        url_list = get_urls(test_device)
        for url in url_list:
            ...  # call the URL, make assertion
    ```

    This is not an ideal test, because I would like each URL in the list to be a separate parameterized test case, def test_checking_urls_on_device(test_device, test_url), with test_url being the itemized parameters derived from the device.

    The concise problem statement is that depending on the device, the URL parameters will be different; we look up the URL list as a function of the device. I cannot know what the test_url values would be until I already know what test_device is.

    I think that pytest_generate_tests is my best option here, and I'm not even sure if that will really help me, because I need to know the value of test_device before I can create the test_url parameters.
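    One way this can work with `pytest_generate_tests` is to drop the `test_device` fixture and parametrize the (device, url) pairs together, so the URL list is computed per device at collection time. A minimal sketch, where `URLS_BY_DEVICE` is a hypothetical stand-in for the real `get_urls` lookup:

    ```python
    # conftest.py (sketch)

    # Hypothetical lookup; in the real suite this would be get_urls(device).
    URLS_BY_DEVICE = {
        'device 1': ['http://d1/a', 'http://d1/b'],
        'device 2': ['http://d2/a'],
        'device 3': ['http://d3/a', 'http://d3/b', 'http://d3/c'],
    }


    def pytest_generate_tests(metafunc):
        # Parametrize device and URL as pairs, so each URL becomes its own
        # test case and the URL list depends on the device it belongs to.
        if {'test_device', 'test_url'} <= set(metafunc.fixturenames):
            pairs = [
                (device, url)
                for device in URLS_BY_DEVICE
                for url in URLS_BY_DEVICE[device]
            ]
            metafunc.parametrize(('test_device', 'test_url'), pairs)


    def test_checking_urls_on_device(test_device, test_url):
        # One assertion per (device, url) pair; placeholder check here.
        assert test_url.startswith('http')
    ```

    With this layout the dependency problem disappears: both values are generated together at collection time, so `test_url` never has to be derived from an already-resolved fixture. The trade-off is that the device list must be knowable at collection time rather than discovered inside a fixture.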