Alexander Schepanovski
@Suor

> By that you mean wrap get, add, delete and add some ping checks?

You only need to wrap .execute_command()

> When it goes straight to another cache it takes forever.

What other cache? You have several caches and they all fail at once? If you mean it will fall through to the next fetch, then that fetch will be skipped immediately due to the .enable_from flag being present. So it doesn't matter how many of those fetches you have in a row, and there is no need for special ping mechanics.

Alexander Schepanovski
@Suor
You can even set up master/slave cache failover with custom redis class, I wonder if someone wrote that already
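The fail-open idea Suor describes can be sketched framework-free. Everything below is hypothetical (FlakyRedis, SafeRedis, down_for are invented names); a real implementation would subclass the redis-py client and point cacheops at it via its custom-client-class setting (check the cacheops README for the exact setting name):

```python
import time

class FlakyRedis:
    """Stand-in for a redis-py client; raises to simulate an outage."""
    def __init__(self, online=True):
        self.online = online

    def execute_command(self, *args, **options):
        if not self.online:
            raise ConnectionError("redis is down")
        return b"OK"

class SafeRedis(FlakyRedis):
    """Fail-open wrapper: after a connection error, skip Redis entirely
    for `down_for` seconds instead of waiting on every call."""
    down_for = 30

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._down_until = 0

    def execute_command(self, *args, **options):
        if time.time() < self._down_until:
            return None  # circuit open: behave like a cache miss
        try:
            return super().execute_command(*args, **options)
        except ConnectionError:
            self._down_until = time.time() + self.down_for
            return None

client = SafeRedis(online=False)
print(client.execute_command("GET", "key"))  # -> None, and the circuit opens
```

Because only `execute_command` is wrapped, every cacheops operation degrades to a miss during an outage instead of timing out one connection attempt at a time.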
Sam Buckingham
@BuckinghamIO
Okay, I'll see how it works with wrapping .execute_command(). By the other cache I mean we direct cache hits to a database cache on Redis failure.
Sam Buckingham
@BuckinghamIO
@Suor I've found the offending line, connection = pool.get_connection(command_name, **options), inside execute_command. Once I wrapped that with the active-check logic and return nothing if it's offline, the speed comes back and it fails nicely
@Suor Big thanks for helping us pinpoint where the problem was, it's really appreciated
Alexander Schepanovski
@Suor
You are welcome
ed@sharpertool.com
@kutenai

Hi all. I just discovered this package. It looks really slick and is actively maintained. I'm surprised I've never heard of it before.

Sometimes, things that seem too good to be true really are. Are there any caveats to using this module? Can it just be used in "a few places", as it seems? What are the "gotchas" -- things I'd learn only after using it for a month or so...

My use case is comment flagging (likes/dislikes). I want to avoid a lot of extra calls refreshing those values whenever something changes. I thought of keeping a running total in Redis and managing that myself, but this seems like it could just be built into a method that queries the counts and caches those values until something changes. I mean, it seems like the perfect package for my use case.

Alexander Schepanovski
@Suor
The caveats are in the README. Counts would be efficient in plain Redis, but cacheops might work for you too.
Sorry for the slow response, there is something wrong with my Gitter notifications
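The "cache the count, invalidate on change" pattern from the question above can be sketched without any framework. This is a toy in-memory model (VoteCounter and its methods are invented names); with cacheops the analogous tool would be caching a queryset's .count() so invalidation happens automatically when the underlying rows change:

```python
class VoteCounter:
    """Keep a running likes/dislikes total per comment, recomputing
    only after a vote invalidates the cached value."""
    def __init__(self):
        self._votes = {}   # comment_id -> list of +1 / -1 votes
        self._cache = {}   # comment_id -> cached total

    def vote(self, comment_id, value):
        self._votes.setdefault(comment_id, []).append(value)
        self._cache.pop(comment_id, None)  # invalidate on change

    def total(self, comment_id):
        if comment_id not in self._cache:  # miss: recompute once
            self._cache[comment_id] = sum(self._votes.get(comment_id, []))
        return self._cache[comment_id]

c = VoteCounter()
c.vote(1, +1); c.vote(1, +1); c.vote(1, -1)
print(c.total(1))  # -> 1, computed once and reused until the next vote
```

The trade-off Suor hints at: a plain Redis INCR/DECR counter avoids recomputing entirely, while the cache-and-invalidate approach keeps the database as the source of truth.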
John Anderson
@lampwins
Hi, I am a maintainer of netbox and we have been using cacheops for a while now and it works great! That said, we have begun to use subqueries in a number of places, and this is obviously causing issues with caching for us. I want to explore contributing support for subqueries in cacheops, so to that end I'd like to know if you have any suggestions for how I might go about that. Basically, do you already have any ideas about how you would want to see this implemented?
John Anderson
@lampwins
@Suor ^
Alexander Schepanovski
@Suor
Hello, sorry for missing this. Gitter is not reliable with notifications at all. If this is still relevant you can create an issue and we'll discuss it there
Osman Goni Nahid
@osmangoninahid
Hello there
can anyone help me with an issue with django-cacheops?
I'm suddenly facing a horrible issue
when I enable django-cacheops, the python manage.py migrate command fails
with an error like this:
Traceback (most recent call last):
File "/Users/osmangoninahid/.pyenv/versions/reseller/lib/python3.6/site-packages/django/apps/registry.py", line 155, in get_app_config
return self.app_configs[app_label]
KeyError: 'migrations'
when I put CACHEOPS_ENABLED = False into settings then everything works fine
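A common workaround (an assumption on my part, not something confirmed in this thread) is to flip the CACHEOPS_ENABLED setting the poster already found, but only for management commands like migrate, so caching stays on for the actual server:

```python
# settings.py -- hypothetical guard: disable cacheops while
# running migrations, keep it enabled everywhere else.
import sys

CACHEOPS_ENABLED = 'migrate' not in sys.argv
```

This sidesteps the KeyError during migrate without losing caching at runtime; the real fix would still be worth reporting as a GitHub issue with the full traceback.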
Riden Shark
@RidenShark
Hi. Does cacheops serve cached HTML? In the response headers I don't see anything even after enabling the cache. I understand it caches querysets and such, but what about HTML? Are there any settings I need to use?
Alexander Schepanovski
@Suor
Unfortunately, this chat fails to send notifications. So it's better to create an issue on GitHub.
Your stack trace is too short to say anything
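On the earlier HTML question: the cacheops README describes a view-level decorator, cached_view_as, which caches a view's whole response and invalidates it when the named model changes. A sketch, with the News model and view name being assumptions for illustration:

```python
from cacheops import cached_view_as
from myapp.models import News  # hypothetical model

@cached_view_as(News)
def news_index(request):
    # The rendered response (HTML included) is cached and
    # invalidated whenever a News row is saved or deleted.
    ...
```

Note this caches server-side; it does not add client-facing cache headers, which may be why nothing shows up in the response headers.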
Przemek
@przemekm1998
Hi everyone, I don't know if anyone has asked this before, but is there any option to combine cacheops with Celery? I'd like to move cache rebuilding to Celery: on a cache miss, return HTTP 202 immediately and then trigger the async rebuild (with cache stampede protection). Is there any way to achieve that?
Shubham Raj
@shubhamrmob
Is django-cacheops supported on Django 3.2? @Suor