Sam Buckingham
@BuckinghamIO
@Suor Funcy could potentially work, I'll take a look into that and see what I can do. Well, when you give cacheops the incorrect details it fails immediately, but if redis is online yet not responding, page load times climb to 20+ seconds... I was hoping to make that less painful.
Your service can be functional without a cache, just not as fast as it could be with one. Caches help you get the most out of the resources you already have without needing to scale out to reach the same performance. @Suor
Sam Buckingham
@BuckinghamIO
You can test our situation by running a Django app with cacheops enabled and redis set up, then connecting to redis and putting it to sleep with DEBUG SLEEP for a couple of minutes, and watching how the application reacts with graceful degradation enabled/disabled.
Alexander Schepanovski
@Suor
I see. I've always had applications that immediately died without the cache; too much load.
Sam Buckingham
@BuckinghamIO
It could potentially happen to us as well, I suppose, now that we give it more load because we expect the cache to be there. However, we are set up in a Kubernetes environment with autoscaling via Helm charts, so the extra load from redis dropping triggers a scale-out. It costs us more, but we stay online at reduced performance.
Sam Buckingham
@BuckinghamIO
@Suor I don't believe funcy will work, as it would be the same as setting socket_timeout to X seconds, which applies per redis request. The problem is that when the first redis hit fails, it still carries on with the next X redis hits. So with a 1-second socket timeout * 50 redis requests just to load an object from cache with all its relations, that's 50 seconds. I need some way to cancel the next X related requests once the first one fails for a query, or at least a check that pings redis and decides whether to revert to the DB or continue with redis.
I don't think I can achieve this myself, as I would need to redo core parts of cacheops from what I am aware, unless you have another idea?
Alexander Schepanovski
@Suor
If you don't run Django in threads, what other requests are you talking about?
Sam Buckingham
@BuckinghamIO
Well, we modified the Redis class that cacheops uses to add some logic that pings redis beforehand and, if that fails, returns our failover cache instead. But on a single page load it does this something like 90 times. My only guess is that the redis class is called once per redis command?
Alexander Schepanovski
@Suor
cacheops uses a single redis instance
This single redis instance has a pool of connections, but only one request is in flight at a time, unless you use threads
Sam Buckingham
@BuckinghamIO
Yeah, one request after another, but I can't stop the remaining requests from running after one fails, or make them hit another cache backend if needed. It's fairly hard to explain, but essentially it seems cacheops only handles complete disconnect failures, not timeouts.
For instance socket_connect_timeout and socket_timeout
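For reference, a minimal sketch of those kwargs (host/port/db values below are placeholders): cacheops forwards everything in CACHEOPS_REDIS to the redis client, which is where the two socket timeouts end up.

```python
# Hypothetical settings.py fragment; host/port/db values are placeholders.
# cacheops passes these kwargs straight through to the redis client, so
# the two socket timeouts bound how long any single command can block.
CACHEOPS_REDIS = {
    'host': 'localhost',
    'port': 6379,
    'db': 1,
    'socket_connect_timeout': 0.2,  # max seconds to establish the TCP connection
    'socket_timeout': 0.2,          # max seconds to wait for each command's reply
}
```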
Alexander Schepanovski
@Suor
  1. You can pass any kwargs to the redis class by adding them to CACHEOPS_REDIS
  2. It should be pretty simple to subclass CacheopsRedis and add the behavior you want: wrap each request; on error set some .enable_from = datetime.now() + timedelta(seconds=90); and before each request, check whether that flag is present and not yet expired
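A minimal, self-contained sketch of the flag Alexander describes. All class names here are made up, and FlakyBackend stands in for the real redis client that a cacheops subclass would wrap; it always fails, to show the breaker behavior.

```python
from datetime import datetime, timedelta

class FlakyBackend:
    """Stand-in for the real redis client; here it always times out."""
    def execute_command(self, *args, **kwargs):
        raise ConnectionError("redis not responding")

class CircuitBreakerRedis(FlakyBackend):
    """After any failure, skip redis entirely for `cooldown` seconds,
    so one slow timeout is paid once instead of once per query."""
    cooldown = 90  # seconds; the value suggested in the chat

    def __init__(self):
        self.enable_from = None  # None means redis is assumed healthy

    def execute_command(self, *args, **kwargs):
        now = datetime.now()
        if self.enable_from is not None and now < self.enable_from:
            return None  # still cooling down: fail fast, caller falls back to the DB
        try:
            result = super().execute_command(*args, **kwargs)
            self.enable_from = None  # success: close the breaker again
            return result
        except ConnectionError:
            self.enable_from = now + timedelta(seconds=self.cooldown)
            return None

client = CircuitBreakerRedis()
client.execute_command("GET", "key1")  # pays the failure once, opens the breaker
client.execute_command("GET", "key2")  # skipped instantly during the cooldown
```

The point is that only the first failed command pays the socket timeout; every subsequent command inside the cooldown window returns immediately, which is what makes 135 cache calls on one page tolerable.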
Sam Buckingham
@BuckinghamIO
By that you mean wrap get, add, delete with some ping checks on each and return another cache if redis is not online? That's exactly what we are trying.
Even if I add that active logic and set a timer so that, while we're within the retry period, it goes straight to another cache, it still takes forever.
Interestingly, the only way to speed it up is to drop the socket_timeout setting in CACHEOPS_REDIS to 0/0.1.
Which makes me wonder whether cacheops is making a redis connection even before hitting that custom class?
Since clearly that timeout setting gets hit before my ping logic in GET/ADD/DELETE does.
Sam Buckingham
@BuckinghamIO
@Suor
Sam Buckingham
@BuckinghamIO
On the page load I am testing, I am recording 135 redis class get calls. When redis is active they are super fast; when they fail over, however, each of those ~135 connections is super slow, in fact taking the full timeout of 0.5 seconds in this instance, giving roughly a one-minute page load time.
When redis is online, the page load time is 1.8 seconds, as said before...
If I set the CACHEOPS_REDIS timeout to 0.1 seconds, we get a 13-20 second page load time, which works out about right if you do 135 calls * 0.1 s = 13.5 seconds or so.
So it certainly has an impact, but how? And how does it kick in before the checks I've implemented?
Alexander Schepanovski
@Suor

> By that you mean wrap get, add, delete add some ping checks

You only need to wrap .execute_command()

> it goes straight to another cache it takes forever.

What other cache? You have several caches and they all fail at once? If you mean it will go on to the next fetch, then that next fetch will be skipped immediately because the .enable_from flag is present. So it doesn't matter how many of those fetches you have in a row, and there is no need for special ping mechanics.

Alexander Schepanovski
@Suor
You can even set up master/slave cache failover with a custom redis class. I wonder if someone has written that already.
Sam Buckingham
@BuckinghamIO
Okay, I'll see how it works with wrapping .execute_command(). And by other cache I mean we direct the cache hits to a database cache on redis failure.
Sam Buckingham
@BuckinghamIO
@Suor I've found the offending line, connection = pool.get_connection(command_name, **options), inside execute_command. Once I wrapped that with the active logic and returned nothing when redis is offline, the speed came back and it fails nicely.
@Suor Big thanks for helping us pinpoint where the problem was, it's really appreciated
Alexander Schepanovski
@Suor
You are welcome
ed@sharpertool.com
@kutenai

Hi all. I just discovered this package. It looks really slick and is actively maintained. I'm surprised I've never heard of it before.

Sometimes, things that seem too good to be true really are. Are there any caveats to using this module? Can it just be used in "a few places", as it seems? What are the "gotchas", the things I'd learn only after using it for a month or so...

My use case is comment flagging (likes/dislikes). I want to avoid a lot of extra calls refreshing those values whenever something changes. I thought of keeping a running total in redis and managing that myself, but it seems this could just be built into a method that queries the counts, caching those values until something changes. I mean, it seems like the perfect package for my use case.

Alexander Schepanovski
@Suor
The caveats are in the readme. Counts would be most efficient kept directly in redis, but cacheops might work for you too.
Sorry for the slow response, there is something wrong with my gitter notifications
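If cacheops is used for the count use case, per-model caching can be restricted to just the count-style ops. A hedged settings sketch; the app and model label below is invented:

```python
# Hypothetical settings.py fragment; 'comments.commentflag' is an invented
# app.model label. Restricting ops keeps only count()/exists() results in
# the cache, and cacheops invalidates them automatically on writes to
# the model, which matches the "cache until something changes" idea.
CACHEOPS = {
    'comments.commentflag': {'ops': {'count', 'exists'}, 'timeout': 60 * 15},
}
```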
John Anderson
@lampwins
Hi, I am a maintainer of NetBox and we have been using cacheops for a while now, and it works great! That said, we have begun to use subqueries in a number of places, and this is obviously causing issues with caching for us. I wanted to explore contributing support for subqueries in cacheops, and to that end I want to see if you have any suggestions for how I might go about it. Basically, do you already have any ideas about how you would want to see this implemented?
John Anderson
@lampwins
@Suor ^
Alexander Schepanovski
@Suor
Hello, sorry for missing this. Gitter is not reliable with notifications at all. If this is still relevant, you can create an issue and we'll discuss it there.
Osman Goni Nahid
@osmangoninahid
Hello there
can anyone help me with an issue with django-cacheops?
I'm suddenly facing a horrible issue
when I enable django-cacheops, the python manage.py migrate command fails
I'm getting an error like this:
Traceback (most recent call last):
  File "/Users/osmangoninahid/.pyenv/versions/reseller/lib/python3.6/site-packages/django/apps/registry.py", line 155, in get_app_config
    return self.app_configs[app_label]
KeyError: 'migrations'
when I put CACHEOPS_ENABLED = False into settings, everything works fine
Riden Shark
@RidenShark
Hi. Does cacheops serve cached HTML? In the response headers I don't see anything, even after enabling the cache. I understand it caches querysets and such, but what about HTML? Any settings I need to use?
Alexander Schepanovski
@Suor
Unfortunately, this chat fails to send notifications, so it's better to create an issue on GitHub.
Your stack trace is too short to say anything
Przemek
@przemekm1998
Hi everyone, I don't know if anyone has asked this before, but is there any option to combine cacheops with celery? I'd like to move cache rebuilding to celery: on a cache miss, return an instant HTTP 202 and then trigger the async rebuild (with cache stampede protection). Is there any way to achieve that?
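Nothing in this chat shows a built-in cacheops hook for that, but the pattern Przemek describes can be sketched generically. In this sketch the dict cache and enqueue_rebuild are stand-ins for redis and a real Celery task.delay():

```python
# Sketch only: `cache` is a plain dict standing in for redis, and
# enqueue_rebuild stands in for a real Celery rebuild_task.delay(key).
cache = {}
queued = []

def enqueue_rebuild(key):
    queued.append(key)  # real code would enqueue an async Celery task here

def get_or_202(key):
    """Return (status, payload): 200 with data on a hit, 202 on a miss."""
    if key in cache:
        return 200, cache[key]
    lock = "lock:" + key
    if lock not in cache:     # stampede guard: only the first miss wins the lock
        cache[lock] = True    # real code: redis SET NX with a short TTL
        enqueue_rebuild(key)  # the async worker fills the cache later
    return 202, None          # tell the client to come back shortly
```

The lock key is what provides the stampede protection: concurrent misses for the same key all get 202, but only one rebuild task is ever queued per expiry.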
Shubham Raj
@shubhamrmob
Does django-cacheops support Django 3.2? @Suor