Subclass `CacheopsRedis` and add the behavior you want: wrap each request; on error, set something like `.enable_from = datetime.now() + timedelta(seconds=90)`; then check whether that flag is present and not yet expired before each request.
> it goes straight to another cache, it takes forever.
What other cache? Do you have several caches that all fail at once? If you mean it will go on to the next fetch, then that next fetch will be skipped immediately because the `.enable_from` flag is present. So it doesn't matter how many of those fetches you have in a row, and there is no need for special ping mechanics.
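The suggestion above can be sketched as a small wrapper class. This is a standalone illustration of the pattern, not cacheops code: the `DegradingCache` name and `backend` interface are hypothetical, and in practice you would put this logic into a `CacheopsRedis` subclass.

```python
import datetime


class DegradingCache:
    """Hypothetical sketch: wrap each cache request; on connection
    error, disable the cache for a cooldown window so every later
    request fails fast instead of waiting on a dead server."""

    COOLDOWN = datetime.timedelta(seconds=90)

    def __init__(self, backend):
        self.backend = backend      # the real cache client, e.g. redis
        self.enable_from = None     # None means the cache is enabled

    def _disabled(self):
        if self.enable_from is None:
            return False
        if datetime.datetime.now() >= self.enable_from:
            self.enable_from = None  # cooldown expired, re-enable
            return False
        return True

    def get(self, key):
        if self._disabled():
            return None             # skip the cache entirely
        try:
            return self.backend.get(key)
        except ConnectionError:
            # Cache is down: record when it may be used again, so
            # subsequent fetches in a row are skipped immediately.
            self.enable_from = datetime.datetime.now() + self.COOLDOWN
            return None
```

Because `_disabled()` is checked before touching the backend, a second fetch right after a failure never reaches the dead server, which is why no separate ping mechanism is needed.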
Hi all. I just discovered this package. It looks really slick and is actively maintained. I'm surprised I've never heard of it before.
Sometimes, things that seem too good to be true really are. Are there any caveats to using this module? Can it just be used in "a few places" as it seems? What are the "gotchas" -- things I'd learn only after using it for a month or so...
My use case is comment flagging (likes/dislikes). I want to avoid a lot of extra calls refreshing those values whenever something changes. I thought of keeping a 'running total' in redis and managing that myself, but it seems this could just be built into a method that queries the counts, caching those values until something changes. I mean, it seems like the perfect package for my use case.
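In plain Python, the "cache the counts until something changes" idea looks roughly like the sketch below. The names (`CountCache`, `compute_counts`) are hypothetical; cacheops' value is that it does the invalidation step automatically by watching the underlying queryset, whereas here it is an explicit call.

```python
class CountCache:
    """Hypothetical sketch: cache per-comment like/dislike totals and
    drop the cached value whenever that comment's flags change."""

    def __init__(self, compute_counts):
        self.compute_counts = compute_counts  # the expensive DB aggregation
        self._cache = {}

    def counts(self, comment_id):
        # Serve from cache; recompute only on a miss.
        if comment_id not in self._cache:
            self._cache[comment_id] = self.compute_counts(comment_id)
        return self._cache[comment_id]

    def invalidate(self, comment_id):
        # Call this from the code path that records a like/dislike.
        self._cache.pop(comment_id, None)
```

With cacheops the same effect comes from decorating the count query so its cache is tied to the comment's flag rows, so no manual running total is needed.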
The `python manage.py migrate` command failed.
With `CACHEOPS_ENABLED = False` in settings, everything works fine.