Paulo "JCranky" Siqueira
@jcranky
should, but why take the risk in a production system, while it is still a Milestone?
Arunav Sanyal
@Khalian
Is there documentation on how to create a Caffeine Scala cache? I am looking at this https://cb372.github.io/scalacache/docs/index.html and it only says "use caffeine if you want to use a high performance cache"
Arunav Sanyal
@Khalian
nvm, I figured it out:

  private val accountSPCache = CaffeineCache(Caffeine.newBuilder.build[String, Entry[List[String]]])

Can someone please add this (and every other version) to the documentation so that people do not have to go look at the unit tests. Thanks
Arunav Sanyal
@Khalian
In the example val result = caching("benjamin")(ttl = None), what does ttl = None mean? Does it mean the TTL is the same as the one the underlying cache was initialized with, or does it imply there is no TTL and the entry lives forever?
Sean Kwak
@cosmir17

Hi :) I was using scalacache version 0.10.0 for scalacache-core & scalacache-caffeine. I upgraded to 0.28.0.

The following code stopped compiling:

  implicit private val inMemoryCache: ScalaCache[InMemoryRepr] = ScalaCache(CaffeineCache())
  private val CacheTime = 10.seconds
  def myMethod: Future[Boolean] = memoizeSync(CacheTime) {.....}

Can I ask how I can migrate to 0.28.0?

I have changed the first line to

  implicit private val inMemoryCache: Cache[Future[Boolean]] = CaffeineCache[Future[Boolean]]

It compiles, but I don't feel that it is the right approach.
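A hedged sketch of what the 0.28 migration might look like (unverified against 0.28 exactly; loadFromDb() stands in for the original body): cache the value type rather than Future[Boolean], and let the scalaFuture mode supply the effect:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.concurrent.duration._
import scalacache._
import scalacache.caffeine._
import scalacache.memoization._
import scalacache.modes.scalaFuture._

// Cache the Boolean, not the Future: the mode handles the effect type.
implicit val inMemoryCache: Cache[Boolean] = CaffeineCache[Boolean]
val CacheTime = 10.seconds

def loadFromDb(): Future[Boolean] = Future(true) // placeholder body

def myMethod: Future[Boolean] = memoizeF(Some(CacheTime)) {
  loadFromDb()
}
```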

Bijan Chokoufe Nejad
@bijancn
Hey @here. Is there a release planned with cats-effect 2.0?
Matthew Tovbin
@tovbinm
Howdy, folks! I just started using your library and it's absolutely amazing! Simple and clean API, easy integrations with Redis and others. Thank you!! ;))
Roberto Leibman
@rleibman
Hey... I'm having an issue... I'm trying to use memoizeF and for some reason it's not working; the cache keeps getting "missed" according to the logs. My code looks like this:
    private case object UserCache {

      import scalacache.ZioEffect.modes._

      private implicit val userCache: Cache[Option[User]] = CaffeineCache[Option[User]]

      private[LiveRecipeDAO] def get(userId: Int): Task[Option[User]] = memoizeF[Task, Option[User]](Option(1 hour)) {
        val zio: Task[Option[User]] = fromDBIO(for {
          userOpt <- UserQuery.filter(u => u.id === userId && !u.deleted).result.headOption
          accountOpt <- DBIO
            .sequence(
              userOpt.toSeq.map(
                user =>
                  AccountQuery.filter(account => account.id === user.accountId).result.headOption
              )
            )
            .map(_.flatten.headOption)
        } yield for {
          account <- accountOpt
          user    <- userOpt
        } yield user.toUser(account))

        val runtime = new DefaultRuntime {}
        for {
          _   <- console.putStrLn(s"Retrieving user ${userId}").provide(runtime.environment)
          zio <- zio.provide(self): Task[Option[User]]
        } yield zio
      }
    }
Roberto Leibman
@rleibman
The log does show that the value is inserted into the cache, but the cache consistently misses:
*** (s.caffeine.CaffeineCache) Cache miss for key dao.LiveRecipeDAO.$anon.UserCache.get(1)
*** (s.caffeine.CaffeineCache) Inserted value into cache with key dao.LiveRecipeDAO.$anon.UserCache.get(1) with TTL 3600000 ms
Roberto Leibman
@rleibman
... answering my own self...
Turns out that private inner objects are not "static", so each of my outer objects was getting its own cache... I moved all the caches out to a top-level object and that worked!
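For anyone hitting the same thing, a minimal library-free sketch of why this happens (Outer, Inner, and SharedCache are made-up names): an object nested inside a class is created per enclosing instance, while a top-level object is a process-wide singleton.

```scala
import scala.collection.mutable

class Outer {
  // Created once *per Outer instance*: every Outer gets its own cache,
  // which is exactly the "each outer object got its own cache" bug above.
  object Inner {
    val cache = mutable.Map.empty[String, Int]
  }
}

// A top-level object is a true singleton: one cache for the whole app.
object SharedCache {
  val cache = mutable.Map.empty[String, Int]
}

val a = new Outer
val b = new Outer
a.Inner.cache("k") = 1
// a.Inner.cache contains "k", but b.Inner.cache does not
```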
Roberto Leibman
@rleibman
How do I remove an entry from a cache? In the example above, I've tried removing by the user and by the userId, but neither of those worked. Only removeAll worked.
eltherion
@eltherion

Hi, I'm evaluating ScalaCache usage, but I've got one problem. In order to avoid accidental flushing of a Redis prod database we have this configuration enabled (example aliases, ofc):

rename-command FLUSHALL FLUSHALLNEW
rename-command FLUSHDB FLUSHDBNEW

That means the standard:

removeAll[T]()

will not work, because it uses the hardcoded FLUSHDB command, resulting in this exception:

redis.clients.jedis.exceptions.JedisDataException: ERR unknown command `FLUSHDB`, with args beginning with: 
  redis.clients.jedis.Protocol.processError(Protocol.java:130)
  redis.clients.jedis.Protocol.process(Protocol.java:164)
  redis.clients.jedis.Protocol.read(Protocol.java:218)
  redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:341)
  redis.clients.jedis.Connection.getStatusCodeReply(Connection.java:240)
  redis.clients.jedis.BinaryJedis.flushDB(BinaryJedis.java:361)
  scalacache.redis.RedisCache.$anonfun$doRemoveAll$1(RedisCache.scala:20)
  scalacache.AsyncForId$.delay(Async.scala:48)
  scalacache.redis.RedisCache.doRemoveAll(RedisCache.scala:17)
  scalacache.AbstractCache.removeAll(AbstractCache.scala:76)
  scalacache.AbstractCache.removeAll$(AbstractCache.scala:75)
  scalacache.redis.RedisCache.removeAll(RedisCache.scala:12)
  scalacache.package$RemoveAll.apply(package.scala:63)

Is there any workaround for that? I don't want to mix libraries and I would like to avoid low level calls, but I might accept it if inevitable.
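Not an official workaround, but since only FLUSHDB/FLUSHALL were renamed, one option is to bypass removeAll and clear keys with SCAN + DEL directly against the Jedis connection. A sketch, assuming Jedis 3.x (where ScanResult#getCursor returns the cursor string); removeAllViaScan is a made-up helper name:

```scala
import redis.clients.jedis.{Jedis, ScanParams}
import scala.jdk.CollectionConverters._

def removeAllViaScan(jedis: Jedis): Unit = {
  // `match` is backtick-quoted because it is a Scala keyword
  val params = new ScanParams().`match`("*").count(500)
  var cursor = ScanParams.SCAN_POINTER_START
  do {
    // iterate the keyspace in batches instead of calling FLUSHDB
    val result = jedis.scan(cursor, params)
    val keys   = result.getResult.asScala
    if (keys.nonEmpty) jedis.del(keys.toSeq: _*)
    cursor = result.getCursor
  } while (cursor != ScanParams.SCAN_POINTER_START)
}
```

Note SCAN is non-blocking but not atomic, so keys written during the sweep may survive; for a prod cache flush that is usually acceptable.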

Roberto Leibman
@rleibman
I don't know if this thing is on ;) I'm working with the zio branch from @dieproht. I'm trying to memoizeF a ZIO[SomeResource, Throwable, Something]; I do include the zio modes, but I still get this error:
Error:(123, 62) Could not find a Mode for type dao.RepositoryIO.
If you want synchronous execution, try importing the sync mode:
import scalacache.modes.sync._
If you are working with Scala Futures, import the scalaFuture mode
and don't forget you will also need an ExecutionContext:
import scalacache.modes.scalaFuture._
import scala.concurrent.ExecutionContext.Implicits.global

        memoizeF[RepositoryIO, Option[User]](Option(1.hour)) {
I have no problem if it's a Task[Something], only if it's a more complicated ZIO that requires resources
Roberto Leibman
@rleibman
I guess the mode only supports Task... ok... so I put my cache closer to the edge of the app and got it to work.
Roberto Leibman
@rleibman
@dieproht?
@dieproht did your zio interop ever make it into the main repo?
Roberto Leibman
@rleibman
I hope you don't mind... I'm actually using it, so I put in a PR for it.
Roberto Leibman
@rleibman
@cb372 Can you look at my PR please?
Roberto Leibman
@rleibman
bump
bengraygh
@bengraygh

How do I remove from a cache? In the example above, I've tried removing by the user, and by the userId, but neither of those worked. Only removeAll worked

I think I have figured out why this is happening to you, but I'm not sure of the best way to fix it. Did you find a solution?

I think the problem is that the method call is part of the key stored by scalacache, and calling it from a different method causes a cache miss.
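For what it's worth, that diagnosis matches how memoize works: the key is derived from the enclosing class/method name plus the arguments, so the same lookup issued from a different method gets a different key. A sketch of sidestepping it with caching and an explicit key (assuming the 0.28 API; names and values are illustrative):

```scala
import scalacache._
import scalacache.caffeine._
import scalacache.modes.sync._

implicit val userCache: Cache[String] = CaffeineCache[String]

// Both methods use the same explicit key parts ("user", id), so they
// share one cache entry; with memoize they would each get their own key.
def fromMethodA(id: Int): String =
  caching("user", id)(ttl = None) { s"loaded-$id" }

def fromMethodB(id: Int): String =
  caching("user", id)(ttl = None) { s"loaded-$id" } // cache hit after A
```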

Miguel Vilá
@miguel-vila
hello :wave: , we have observed some strange behavior when using scalacache along with ZIO. We are using a Task mode, and we noticed that the parameters were not being included as part of the key. We factored out the code by extracting the underlying operation and it worked as expected; will link a gist.
You can see the differences between the working version and the non-working version there.
So I'm wondering: is this expected? Were we misusing scalacache?
Roberto Leibman
@rleibman
@miguel-vila I didn't know there was even an official zio mode. I put a PR back in April but haven't heard anything since.
sc6l6d3v
@sc6l6d3v
Is there any possibility of caching an fs2 stream?
Simon Redfern
@simonredfern_gitlab
Folks, newbie here. How do I make the following work with Maven?: import scalacache.serialization.binary._
PawelJ-PL
@PawelJ-PL
Hi, I'm wondering why the Cache and CacheAlg traits don't include an effect type parameter (it was moved to the method declarations). I've just created an issue for this (cb372/scalacache#417), where I've explained my doubts more precisely.
SWAPNIL SONAWANE
@iamswapnil44_twitter
Folks, is there any way to retrieve all the keys from a Caffeine cache?
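One possible route (an unverified sketch, assuming a default CacheConfig is in implicit scope): ScalaCache's CaffeineCache wraps a com.github.benmanes.caffeine cache, exposed as its underlying field, so the keys can be read from Caffeine's map view:

```scala
import scala.jdk.CollectionConverters._
import scalacache.caffeine.CaffeineCache

val cache = CaffeineCache[String]

// asMap() is Caffeine's live view of the cached entries; copy the key
// set out so later cache mutations don't affect the snapshot.
val keys: Set[String] = cache.underlying.asMap().keySet().asScala.toSet
```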
Brendan Maguire
@brendanmaguire

Hi. Hoping someone can help with a question I have regarding using scalacache with cats.effect.IO. If I run the following code:

import cats.effect.{ExitCode, IO, IOApp}
import cats.implicits._
import scalacache.guava.GuavaCache
import scalacache.{cachingF, CacheConfig, Flags}

import scala.concurrent.duration.DurationLong

object CacheTest extends IOApp {

  override def run(args: List[String]): IO[ExitCode] =
    (
      cached("abc"),
      cached("abc"),
      cached("def")
    ).parMapN(_ + _ + _) *> IO.pure(ExitCode.Success)

  private val cache = GuavaCache[String](CacheConfig.defaultCacheConfig)

  private def cached(key: String) =
    cachingF(key)(ttl = None)(loadValue(key))(cache, scalacache.CatsEffect.modes.async, Flags.defaultFlags)

  private def loadValue(key: String) =
    IO(println(s"Loading $key")) *>
      IO.sleep(1.second) *>
      IO {
        println(s"Loaded $key")
        key
      }
}

I get the output:

Loading abc
Loading abc
Loading def
Loaded abc
Loaded abc
Loaded def

cachingF invokes the loadValue function twice with "abc". Instead I would like it to only invoke it once and use the resulting IO for both calls to cached. Is this possible using scalacache or do I need to implement this manually by storing an actual IO in the cache and removing the key if the IO fails at a later stage?
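ScalaCache does not appear to deduplicate concurrent loads, as the output above shows. A manual "single-flight" layer in front of the cache is one possible approach; here is a sketch using cats-effect 2's Deferred and Ref (SingleFlight is a made-up name, not a ScalaCache API):

```scala
import cats.effect.Concurrent
import cats.effect.concurrent.{Deferred, Ref}
import cats.implicits._

class SingleFlight[F[_], K, V](
    state: Ref[F, Map[K, Deferred[F, Either[Throwable, V]]]]
)(implicit F: Concurrent[F]) {

  def load(key: K)(compute: F[V]): F[V] =
    Deferred[F, Either[Throwable, V]].flatMap { fresh =>
      state.modify { inFlight =>
        inFlight.get(key) match {
          // someone is already loading this key: wait for their result
          case Some(d) => (inFlight, d.get.rethrow)
          // we are first: run the computation and publish the outcome
          case None =>
            val run = compute.attempt
              .flatTap(fresh.complete)
              .flatTap { r =>
                // drop failed entries so a later call can retry;
                // a real version would also expire successes (TTL)
                if (r.isLeft) state.update(_ - key) else F.unit
              }
              .rethrow
            (inFlight + (key -> fresh), run)
        }
      }.flatten
    }
}

object SingleFlight {
  def create[F[_]: Concurrent, K, V]: F[SingleFlight[F, K, V]] =
    Ref.of[F, Map[K, Deferred[F, Either[Throwable, V]]]](Map.empty)
      .map(new SingleFlight(_))
}
```

With this wrapped around loadValue, the two parallel "abc" calls would share one in-flight IO, and the failure path evicts the key so retries remain possible.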

SWAPNIL SONAWANE
@iamswapnil44_twitter

Hello All,

Is there any way to get Redis cache stats like hit rate and hit count, as we can for Caffeine using caffeineCache.stats.hitCount()?

SWAPNIL SONAWANE
@iamswapnil44_twitter

Hello All ,

Is there any way to provide custom codecs for the Redis cache?

Here is my code:

def creatCache[T](implicit  codec: Codec[T]) = {
      val jedisPool = new JedisPool(new JedisPoolConfig(), "localhost", 6379, 20000)
      val customisedRedisCache: Cache[T] = RedisCache[T](jedisPool)
      customisedRedisCache
}

and calling this function as

 implicit val jobPropsCache = creatCache[Map[String,String]]

The code fails when calling the function; the error is:

 No implicits provides for codec: Codec[Map[String,String]]
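One likely cause (a guess, not a verified diagnosis): the implicit Codec[T] must be resolvable at the call site, not just at the definition of creatCache. ScalaCache ships binary, Java-serialization-based codecs; importing them where the function is called may be enough:

```scala
// Bring ScalaCache's binary codecs into implicit scope at the call site;
// Map[String, String] is Serializable, so the Java-serialization fallback
// codec should apply (assumption: the 0.x binary codec package).
import scalacache.serialization.binary._

implicit val jobPropsCache = creatCache[Map[String, String]]
```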
Yoann Guyot
@ygu

Hi there, I'm migrating from Scala 2.12 to 2.13, including code that was written using scalacache and Guava, but now the compiler rejects that code. These are the issues:

import scalacache._
import scalacache.serialization.InMemoryRepr
import guava._

class MyClass(cache: ScalaCache[InMemoryRepr]) {...}

gives me: object InMemoryRepr is not a member of package scalacache.serialization, and: not found: type ScalaCache

Also:
sync.cachingWithTTL(key)(Duration.apply(1, DAYS)) { ... }

gives me: value cachingWithTTL is not a member of object scalacache.sync

Finally:
ScalaCache(GuavaCache())

gives me:

overloaded method apply with alternatives:
[error]   [V](underlying: com.google.common.cache.Cache[String,scalacache.Entry[V]])(implicit config: scalacache.CacheConfig): scalacache.guava.GuavaCache[V] <and>
[error]   [V](implicit config: scalacache.CacheConfig): scalacache.guava.GuavaCache[V]
[error]  cannot be applied to ()

Can anyone help me migrate to the latest version of scalacache? (I can't find any scaladoc or changelog to help replace the old code.)
So, what should I use instead of scalacache.serialization.InMemoryRepr, which seems to have been removed from the library?
Why is ScalaCache not found?
What should I use instead of sync.cachingWithTTL?
How do I use the new constructors of GuavaCache to get the same as the old GuavaCache()? Isn't there a default one?

@cosmir17 it seems you've experienced the same kind of issue; have you found out how to fix it?
Yoann Guyot
@ygu

I may have found answers:

  • I use GuavaCache[MyClass] instead of ScalaCache[InMemoryRepr]
  • I use sync.caching(key)(Some(Duration.apply(1, DAYS))) { ... }(cache = myImplicitCache, mode = mode, flags = new Flags()) instead of sync.cachingWithTTL(key)(Duration.apply(1, DAYS)) { ... }
  • I use GuavaCache(config) with implicit val config = CacheConfig() instead of ScalaCache(GuavaCache())

I hope this is correct.
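Those answers look consistent with the newer API. Pulled together as one hedged sketch (MyClass and compute() are placeholders; assumes the 0.28-style Guava integration and sync mode):

```scala
import scala.concurrent.duration._
import scalacache._
import scalacache.guava._
import scalacache.modes.sync._

case class MyClass(value: String)      // placeholder type
def compute(): MyClass = MyClass("x")  // placeholder loader

implicit val config: CacheConfig = CacheConfig()
implicit val cache: Cache[MyClass] = GuavaCache[MyClass]

// sync mode: F = Id, so caching returns the value directly
val value: MyClass = sync.caching("my-key")(Some(1.day)) { compute() }
```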

Roberto Leibman
@rleibman
Are there any facilities for multi-level layering of cache systems, in either scalacache or any of the cache systems that scalacache uses? Basically I'd like a short TTL for an in-memory cache and a longer TTL for a cloud-level cache before I hit the actual process that gets the data.
Roberto Leibman
@rleibman
It may not be too difficult to create a CompoundCache that extends AbstractCache and has a list of caches to try
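To make the idea concrete, here is a rough sketch of such a layered read-through (illustrative names and TTLs; it assumes the 0.28-style Cache#get/put signatures and that Mode exposes a monad-like M with pure/map/flatMap, so treat it as pseudocode-adjacent rather than a tested implementation):

```scala
import scala.concurrent.duration._
import scalacache.{Cache, Flags, Mode}

def layeredGet[F[_], V](key: String)(
    local: Cache[V],   // short-TTL in-memory layer
    remote: Cache[V],  // longer-TTL cloud layer
    loader: => F[V]    // the actual process that gets the data
)(implicit mode: Mode[F], flags: Flags): F[V] = {
  val M = mode.M // ScalaCache's monad-like typeclass for F
  M.flatMap(local.get(key)) {
    case Some(v) => M.pure(v)
    case None =>
      M.flatMap(remote.get(key)) {
        case Some(v) =>
          // refresh the short-lived local layer on a remote hit
          M.map(local.put(key)(v, ttl = Some(1.minute)))(_ => v)
        case None =>
          // full miss: load, then populate both layers
          M.flatMap(loader) { v =>
            M.flatMap(remote.put(key)(v, ttl = Some(1.hour))) { _ =>
              M.map(local.put(key)(v, ttl = Some(1.minute)))(_ => v)
            }
          }
      }
  }
}
```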
Jeff Lewis
@lewisjkl
Hey @rleibman I don’t believe that there is a current “out-of-the-box” way to do this. But I agree that you could pretty easily create a compound cache of some kind that did this. We have an issue open to take a look at the API Scalacache exposes before we go live with a final 1.0.0 release and will keep this in mind while going through that exercise.
Roberto Leibman
@rleibman
Cool!
Jeff Lewis
@lewisjkl
My initial instinct is to wait until after the 1.0.0 release is through to look at introducing this functionality, but very happy to hear your thoughts on that. Feel free to open an issue on GH for this if you feel so inclined :)
Roberto Leibman
@rleibman
I think the only weirdness with it would be that each cache would have its own TTL vs the TTL of the compound cache.
Jeff Lewis
@lewisjkl
Yeah that is a good callout, thanks for putting the issue in!
Vimit Dhawan
@vimitdhawan
Hello everyone, I am using CaffeineCache in my application, but it looks like we can only use a String as a key. Is there any way to use a case class?
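Not an official answer, but since (as far as I can tell) ScalaCache keys end up as strings built from the key parts, one simple approach is to derive a stable string key from the case class yourself. CacheKey and asKey below are made-up names for illustration:

```scala
// Hypothetical example: flatten the case class into a stable string key,
// then pass that String to caching/put/get as usual.
case class CacheKey(userId: Int, region: String) {
  def asKey: String = s"user:$userId:region:$region"
}

val key = CacheKey(42, "eu").asKey
// key == "user:42:region:eu"
```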
Brian P. Holt
@bpholt

When using ScalaCache with cats-effect and Caffeine, is it a good idea to set the Caffeine executor field to the cats-effect Blocker being used?

e.g.

Caffeine.newBuilder().maximumSize(10000L).executor(blocker.blockingContext.execute(_)).build[String, Entry[String]]
Leo De Souza
@cleliofs
Hi all, I am new to the Scalacache lib. I am looking for a good introduction to it using the Caffeine underlying cache. Any suggestion for an introductory post? I checked the documentation (https://cb372.github.io/scalacache/docs/) but it is quite sparse tbh :( For example, which sbt libs do I need to include to support the different Modes, as per "scalacache.modes.try."?