Abhishek Menon
@mav-erick
Hey guys. Amazing project! @Suor, can I cache serializers if I am using DRF with cacheops?
Abhishek Menon
@mav-erick
I am running a multi-tenant application using Postgres schemas; the schema varies depending on the tenant. Will that cause problems when using this package?
Alexander Schepanovski
@Suor
@mav-erick as long as there are querysets inside it should work. Many people use cacheops with DRF just fine.
If you have the same SQL for different queries you will need to use a cache prefix, see https://github.com/Suor/django-cacheops#sharing-redis-instance
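For the multi-tenant case, the prefix advice above might look something like this in settings. This is a minimal sketch: `get_current_schema()` is a hypothetical helper that would return the active tenant schema (e.g. from a thread-local), and the exact signature of the `CACHEOPS_PREFIX` callable should be checked against the cacheops version in use.

```python
# settings.py -- sketch only; get_current_schema is a hypothetical helper
from myapp.tenants import get_current_schema  # hypothetical

# Prepend the tenant schema so identical SQL from different tenants
# lands under different cache keys.
CACHEOPS_PREFIX = lambda query: get_current_schema() + ':'
```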
Abhishek Menon
@mav-erick
Hi @Suor. Thanks for the info. One final question: will the cache be invalidated when there are changes in a related model (foreign keys and such)?
Alexander Schepanovski
@Suor
It will be invalidated for any models joined via conditions. It won't be invalidated if a .select_related() object changes. .prefetch_related() creates a separate query for the related objects, which is invalidated separately, so you might want to use that if the behavior with .select_related() is an issue. See all the caveats here: https://github.com/Suor/django-cacheops#caveats
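The distinction above can be illustrated with a sketch. The models here are hypothetical (a `Post` with a foreign key to `Author`); the behavior described is the caveat from the README, not verified output.

```python
# Hypothetical models: Post has a ForeignKey named 'author'.
from myapp.models import Post  # hypothetical

# A filter condition on the joined model is tracked, so this entry IS
# invalidated when a matching Author row changes:
posts = Post.objects.filter(author__name='bob').cache()

# The join is cached as one result: a later change to the Author row does
# NOT invalidate this entry (the .select_related() caveat):
posts = Post.objects.select_related('author').cache()

# .prefetch_related() issues a separate query for authors, cached and
# invalidated on its own, so author changes are picked up:
posts = Post.objects.prefetch_related('author').cache()
```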
Abhishek Menon
@mav-erick
Perfect. Thanks a lot!
Nikita Vilunov
@vilunov
Hiya all! Is it possible to keep the cache of an arbitrary function hot across timeout invalidation? I am caching some functions and a view. Currently I'm thinking about introducing a periodic Celery task which invalidates that cache and fetches the view, but maybe there is a cleaner solution.
Alexander Schepanovski
@Suor
Running a function after invalidation, possibly via Celery's .delay(), is the way to go. There is no built-in way to do that, and there is no way to introspect which function results were just invalidated.
Alexander Schepanovski
@Suor
You may take a look at the cache_invalidated signal if that helps.
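A sketch of the signal-based approach, assuming the `cache_invalidated` signal is importable from `cacheops.signals` and that `rewarm_view` is a hypothetical Celery task that re-fetches the view to repopulate the cache; the exact receiver arguments should be checked against the installed cacheops version.

```python
from cacheops.signals import cache_invalidated
from myapp.tasks import rewarm_view  # hypothetical Celery task

def on_invalidate(sender, obj_dict, **kwargs):
    # sender is the model class whose cache entries were just invalidated
    if sender is not None and sender.__name__ == 'Article':  # hypothetical model
        rewarm_view.delay()

cache_invalidated.connect(on_invalidate)
```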
BroCheng
@wang-xi
Does anyone know the icon for django-cacheops?
Sam Buckingham
@BuckinghamIO
Anyone able to shed some light on how I could get cacheops to fail gracefully but quickly? Currently it's incredibly slow if redis goes offline.
I've tried overriding the redis class, but cacheops makes so many requests that it negates it, and I have no way of pushing the other cache hits over to a fallback cache or of turning off cacheops dynamically.
It seems to be caused by the timeouts, and even a 0.1 second timeout isn't exactly fast.
This is in reference to issue #327, FYI.
Alexander Schepanovski
@Suor
I don't know why it takes so long, you should get connection refused almost immediately.
BTW, I am not a big fan of this degrade-on-failure feature; I sometimes regret that I added it. I still don't understand how people can use it: if your service is still functional without a cache, why even use one?
Sam Buckingham
@BuckinghamIO
@Suor Funcy could potentially work; I'll take a look into that and see what I can do. When you give cacheops incorrect connection details it fails immediately, but if redis is online and not responding it pushes page load times to 20 seconds plus... I was hoping to make that less slow.
Your service can be functional without a cache, just not as fast as it could be with one. Caches can help you make the most of the resources you currently have without needing to completely scale out to reach the same performance. @Suor
Sam Buckingham
@BuckinghamIO
You can test our situation by running a Django app with cacheops enabled and redis set up, then putting redis to sleep using the DEBUG SLEEP command for a couple of minutes and seeing how the application reacts, regardless of whether graceful degradation is enabled or disabled.
Alexander Schepanovski
@Suor
I see. I have always had applications that immediately died without cache: too much load.
Sam Buckingham
@BuckinghamIO
That could potentially happen to us as well, I suppose, now that we give it more load because we expect the cache to be there. However, we are set up in a Kubernetes environment with autoscaling via Helm charts, so the extra load from redis dropping will trigger a scale-out. It costs us more, but we stay online at reduced performance.
Sam Buckingham
@BuckinghamIO
@Suor I don't believe funcy will work, as it will be the same as setting socket_timeout to X seconds, which applies per redis request. The problem is that when the first redis hit fails, it still continues with the next X redis hits. So if you have a 1 second socket timeout * 50 redis requests just to load an object from cache with all its relations etc., that's 50 seconds. I need some way of cancelling the next X related requests once the first one for a query fails, or at least a check that pings redis and decides whether to revert to the DB or continue with redis.
I don't think I can achieve this myself, as I would need to redo core parts of cacheops from what I am aware, unless you have another idea?
Alexander Schepanovski
@Suor
If you don't run Django in threads, what are the other requests you are talking about?
Sam Buckingham
@BuckinghamIO
Well, we modified the Redis class that cacheops uses to implement some logic that pings redis beforehand and, if it fails, returns our failover cache. But on a single page load it will do this something like 90 times. My only guess is that the redis class is called per redis command?
Alexander Schepanovski
@Suor
cacheops uses a single redis instance.
This single redis instance has a pool of connections, but only one request is done at a time, unless you use threads.
Sam Buckingham
@BuckinghamIO
Yeah, one request after another, but I can't prevent the other requests from running after one fails, or set it to hit another cache backend if needed. It's fairly hard to explain, but essentially it seems cacheops only supports complete disconnect failure and not timeouts.
For instance via socket_connect_timeout and socket_timeout.
Alexander Schepanovski
@Suor
  1. You can pass any kwargs to the redis class by adding them to CACHEOPS_REDIS.
  2. It should be pretty simple to subclass CacheopsRedis and add the behavior you want: wrap each request; on error set some .enable_from = datetime.now() + timedelta(seconds=90); before each request, check whether the flag is present and not expired.
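The flag-based degrade described above is essentially a circuit breaker, and the timing logic can be sketched independently of redis. The class and method names here are illustrative, not part of cacheops:

```python
from datetime import datetime, timedelta

class CircuitBreaker:
    """Skip calls for `cooldown` seconds after a failure
    (the .enable_from pattern described above)."""

    def __init__(self, cooldown=90):
        self.cooldown = timedelta(seconds=cooldown)
        self.enable_from = None  # when tripped, do not retry before this time

    def call(self, func, *args, **kwargs):
        now = datetime.now()
        if self.enable_from is not None and now < self.enable_from:
            return None  # breaker open: fail fast, caller falls back to the DB
        try:
            result = func(*args, **kwargs)
        except ConnectionError:
            self.enable_from = now + self.cooldown  # trip the breaker
            return None
        self.enable_from = None  # success: close the breaker again
        return result
```

This is why a row of 50 fetches stops being 50 timeouts: only the first failed call pays the socket timeout, and every subsequent call within the cooldown returns immediately.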
Sam Buckingham
@BuckinghamIO
By that, do you mean wrap get, add, and delete, adding some ping checks to each and returning another cache if redis is not online? That's exactly what we are trying.
Even if I add that active logic, with a timer so that anything before the next retry period goes straight to another cache, it takes forever.
Interestingly, the only way to speed it up is to drop the socket_timeout setting in CACHEOPS_REDIS to 0/0.1.
Which makes me wonder whether cacheops is making a redis connection even before hitting that custom class?
Since that timeout setting clearly gets reached before my ping logic in GET/ADD/DELETE does.
Sam Buckingham
@BuckinghamIO
@Suor
Sam Buckingham
@BuckinghamIO
On the page load I am testing, I am recording 135 redis class get calls. When redis is active they are super fast; when they fail over, however, those approximately 135 connections are super slow. In fact they each take the full timeout, 0.5 seconds in this instance, giving a page load time of roughly a minute (135 * 0.5 = 67.5 seconds).
When redis is online the page load time is 1.8 seconds, as said before...
If I set the CACHEOPS_REDIS timeout to 0.1 seconds we get a 13-20 second page load time, which works out about right: 135 calls * 0.1 = 13.5 seconds or so.
So it certainly has an impact, but how? And how does it kick in before the checks I've implemented?
Alexander Schepanovski
@Suor

> By that you mean wrap get, add, delete add some ping checks

You only need to wrap .execute_command().

> it goes straight to another cache it takes forever.

What other cache? You have several caches and they all fail at once? If you mean it will go to the next fetch, then the next fetch will be skipped immediately because the .enable_from flag is present. So it doesn't matter how many of those fetches you have in a row, and there is no need for special ping mechanics.

Alexander Schepanovski
@Suor
You can even set up master/slave cache failover with a custom redis class. I wonder if someone has written that already.
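Putting the two suggestions together, a custom client might look like the sketch below. This assumes CacheopsRedis is importable from cacheops.redis and wired in via the CACHEOPS_CLIENT_CLASS setting, as discussed in this conversation; whether returning None is a safe result for every command depends on how cacheops interprets it, so treat this as a starting point rather than a drop-in implementation.

```python
# Sketch of a fail-fast client; import paths assumed from the discussion above.
from datetime import datetime, timedelta

from redis.exceptions import ConnectionError, TimeoutError
from cacheops.redis import CacheopsRedis  # assumed import path

class SafeRedis(CacheopsRedis):
    _enable_from = None                  # shared across the process
    _cooldown = timedelta(seconds=90)

    def execute_command(self, *args, **options):
        now = datetime.now()
        if SafeRedis._enable_from is not None and now < SafeRedis._enable_from:
            return None  # breaker open: skip redis, let callers fall back
        try:
            return super().execute_command(*args, **options)
        except (ConnectionError, TimeoutError):
            SafeRedis._enable_from = now + SafeRedis._cooldown
            return None
```

Only the first failing command pays the socket timeout; everything else within the cooldown window returns immediately, which is the behavior Sam was after.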
Sam Buckingham
@BuckinghamIO
Okay, I'll see how it works with wrapping .execute_command(). And by other cache I mean we direct the cache hits to a database cache on redis failure.
Sam Buckingham
@BuckinghamIO
@Suor I've found the offending line: connection = pool.get_connection(command_name, **options) inside execute_command. Once I wrapped that with the active logic, returning nothing when redis is offline, the speed comes back and it fails nicely.
@Suor Big thanks for helping us pinpoint where the problem was, it's really appreciated.
Alexander Schepanovski
@Suor
You are welcome