These are chat archives for django/django

Jul 2015
Alexey Kalinin
Jul 08 2015 07:39

Hello. I'm writing tests for an application, and I have a couple of questions:

1) When I use assertNumQueries to check a view, the number of queries differs from what the debug toolbar shows. For example, the test counts 380 queries

self.assertNumQueries(380, self.client.get, self.url)

but when I open the same view with the debug toolbar, it shows only about 50-70 queries.

2) In my model tests I check what happens when an empty model is created. For example:

with self.assertRaises(IntegrityError):
    Stack.objects.create()

self.assertEqual(len(Stack.objects.all()), 0)

... and I get this:

Traceback (most recent call last):
  File "", line 911, in test_save_empty_object
    self.assertEqual (len(Stack.objects.all ()), 0)
DatabaseError: current transaction is aborted, commands ignored until end of transaction block

Then I do this:

        sid = transaction.savepoint()

        with self.assertRaises(IntegrityError):
            Stack.objects.create()

        transaction.savepoint_rollback(sid)

        self.assertEqual(len(Stack.objects.all()), 0)

And it works, but I'm not sure it's good practice.
I'm using Postgres with the postgresql_psycopg2 backend.

Johannes Hoppe
Jul 08 2015 07:48
@Alkalit I don't quite follow. Of course you'll get a different number of queries in proportion to the number of objects in your database. So if you use the debug toolbar against a database that has more objects stored, it will most likely result in more queries, whereas tests only have the few objects you create.
If you want to create many objects at once for more realistic tests, especially for list views, you can look into model-mommy. It's a package that lets you create large numbers of model instances.
Alexey Kalinin
Jul 08 2015 07:56
@codingjoe Thanks, I'm already using it for testing and this package is awesome. I just wonder whether it's normal that in tests I get more DB queries than in production, since the production database already has hundreds of objects.
Johannes Hoppe
Jul 08 2015 08:15
@Alkalit that depends on how different your test setup is from your production setup. It shouldn't diverge too much, for that matter. My colleague and I are working on a pytest plugin that automatically detects missing prefetches. Let me know if you find any interesting insights on that topic.