Gavin Bisesi
@Daenyth
Filing that upstream: 47degrees/sbt-microsites#503
Narek Asadorian
@nasadorian

Hi folks, I have a question regarding atomicity. Actually, first off, shoutout to the maintainers: this is an awesome and well-designed library.

I am trying to implement a "unique URL counter" in Redis for a school project, with multiple distributed actors updating the state. I am able to make it work easily by blindly using sAdd and finally sCard when querying for unique counts. But I wanted to know how I could accomplish this without the O(n) work of pulling the cardinality upon query.

I initially had the idea of bumping a counter any time a URL does not exist in the set, using incr, but I ran into race conditions using applicative sequencing or even watch + transactions. Wondering if anyone has a tip for making things atomic when both a set and a counter are involved, so this becomes an O(1) operation on write and query.
Christopher Davenport
@ChristopherDavenport
@nasadorian Did you look at scripting? I believe that allows that atomicity.
Narek Asadorian
@nasadorian
Thanks @ChristopherDavenport, I didn't know about that
I suspect that's some kind of eval-based API?
Christopher Davenport
@ChristopherDavenport
It's a Lua scripting component that runs inside the Redis node
Narek Asadorian
@nasadorian
Interesting, I might give it a try
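A minimal sketch of the eval-based approach, assuming hypothetical key names urls:seen and urls:count (note that SCARD itself is O(1) in Redis; the point of the script is keeping the set and the counter in sync atomically):

    import cats.effect.IO
    import dev.profunktor.redis4cats.RedisCommands
    import dev.profunktor.redis4cats.effects.ScriptOutputType

    // SADD returns 1 only when the member is new, so INCR runs at most once per URL.
    // Redis executes the whole script atomically on the node.
    val uniqueCount =
      """if redis.call('SADD', KEYS[1], ARGV[1]) == 1 then
        |  return redis.call('INCR', KEYS[2])
        |else
        |  return tonumber(redis.call('GET', KEYS[2]) or '0')
        |end""".stripMargin

    def recordUrl(redis: RedisCommands[IO, String, String], url: String): IO[Long] =
      redis.eval(uniqueCount, ScriptOutputType.Integer, List("urls:seen", "urls:count"), List(url))

Loading the script once and invoking it via evalSha avoids resending the script body on every call.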
Gavin Bisesi
@Daenyth
Lua is the way to go for this
in the past I've had a request counter that worked approx like this:
  • key by "requests:" + url
  • each key is a sortedset of ints
  • values are epoch timestamps
  • zrange lets you count how many requests happened in a time period
  • you can use a method (I forget) to mass remove elements smaller than a value (clear stale values)
  • add to the set
  • return number of elements
The details are fuzzy to me now, but it was something like that (roughly the sketch below)
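A rough reconstruction of that pattern, assuming the mass-removal command is ZREMRANGEBYSCORE and a made-up requests:<url> key layout (the window cutoff and current timestamp are passed in as arguments):

    import cats.effect.IO
    import dev.profunktor.redis4cats.RedisCommands
    import dev.profunktor.redis4cats.effects.ScriptOutputType

    // Atomically: drop entries older than the window, record this request,
    // then return how many requests remain inside the window.
    val slidingWindow =
      """redis.call('ZREMRANGEBYSCORE', KEYS[1], '-inf', ARGV[1])
        |redis.call('ZADD', KEYS[1], ARGV[2], ARGV[2])
        |return redis.call('ZCARD', KEYS[1])""".stripMargin

    def countRequests(redis: RedisCommands[IO, String, String], url: String, cutoff: Long, now: Long): IO[Long] =
      redis.eval(slidingWindow, ScriptOutputType.Integer, List(s"requests:$url"), List(cutoff.toString, now.toString))

One caveat: using the timestamp as both score and member means two requests with the same timestamp collapse into one entry; a unique member (e.g. timestamp plus a request id) avoids that.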
Cosmin Ciobanu
@cosminci

Hi all. I'm a bit confused about the cluster connection APIs. When using Redis as a managed service, or deploying it via an operator or helm chart in K8s, the service is exposed by a single host-port pair e.g. redis-cluster:6379. Behind it are several nodes, usually with at least one shard each. What is the API for connecting to such a cluster? Doing something like:

      val api = for {
        uri <- Resource.liftF(RedisURI.make[IO]("redis://redis-cluster:6379"))
        cli <- RedisClusterClient[IO](uri)
        api <- Redis[IO].fromClusterClient(cli, RedisCodec.Utf8)
      } yield api

results in a single connection that hops through DNS between the multiple nodes behind the exposed service URL. Ideally there should be 1 connection per Redis node, with automatic failover to the shards if they fail (which is done by Lettuce as far as I can tell).

Cosmin Ciobanu
@cosminci
I see there is an API for creating connections per node, but that doesn't help when the nodes aren't exposed individually.
Cosmin Ciobanu
@cosminci
Looking at the underlying lettuce RedisClusterClient, it should automatically create connections to each node, and the input URIs are only used for initial bootstrapping. Quoting the Javadoc: "All uris are tried in sequence for connecting initially to the cluster. If any uri is successful for connection, the others are not tried anymore."
Kind of strange that the Redis nodes are showing that the client is jumping between them, rather than each having its own.
Gavin Bisesi
@Daenyth
I think the cluster support is fairly recent, it's possible it needs improvement
Gabriel Volpe
@gvolpe

@cosminci have a look at the docs: https://redis4cats.profunktor.dev/client.html#cluster-connection

I personally don't use the cluster support so pretty sure we could use some improvements if we get feedback from real users.

Cosmin Ciobanu
@cosminci
[Screenshot 2020-10-03 at 00.31.01.png]

From lettuce

A connection object with Redis Cluster consists of multiple transport connections. These are:
  • Default connection object (used for key-less commands and for Pub/Sub message publication)
  • Connection per node (read/write connection to communicate with individual Cluster nodes)
  • When using ReadFrom: read-only connection per read replica node (read-only connection to read data from read replicas)

^ I'm thinking one of these hops between nodes while the other 2 are stable. What I'm seeing with a single pod / cluster client is in the screenshot above, with one of the three switching to a different node every few seconds (though the interval is just an artifact of the Prometheus scraping interval and Grafana refresh interval).

In any case, the performance seems great overall. The service is based on akka-http, has 8 CPUs, and does an evalSha for each request. Without the evalSha (no redis), the service handles ~ 24k RPS peak throughput, while with Redis, ~20k RPS. So a 17% performance impact overall, which is really great.
p99 to Redis is a bit high at 20k RPS, around 9ms. I tried Jedis as well, and while the p99 was <1ms, due to the high number of threads, I couldn't get past 10k RPS before being overloaded.
Cosmin Ciobanu
@cosminci
The difference in usability and safety when using redis4cats is amazing though, really great work. If I spot anything that could be improved regarding Redis Cluster, I'll let you know / open a PR.
Gabriel Volpe
@gvolpe
Thanks for the feedback @cosminci, definitely all contributions more than welcome :)
Pierre Ricadat
@ghostdogpr
Hi!
I'm getting this error when using transactions under moderate load: io.lettuce.core.RedisCommandExecutionException: ERR MULTI calls can not be nested
looking at lettuce-io/lettuce-core#67 and lettuce-io/lettuce-core#673
"Lettuce connections require single-threaded/synchronized access when using transactions. If two or more threads call concurrently MULTI/EXEC methods this will destroy the connection state."
is redis4cats taking care of that? I fear it's not 😥
Gabriel Volpe
@gvolpe
Hey @ghostdogpr, that's unfortunate to hear. I fought a long battle to get transactions right due to that same issue you are mentioning but I thought that was solved. FWIW I'm also using transactions in production and haven't come across this issue but maybe we don't have as much load as you do.
Could you please raise an issue? The more details, the better :)
Pierre Ricadat
@ghostdogpr
interesting
let me try to reproduce
Pierre Ricadat
@ghostdogpr
yeah, I can reproduce with a very simple snippet
I will make an issue, but here it is basically:
client.use { redis =>
  val key      = "mykey"
  val tx       = RedisTransaction(redis)
  val commands = redis.incr(key) :: redis.expireAt(key, Instant.now()) :: HNil
  val io       = tx.filterExec(commands)

  ZIO.collectAllParN_(10)(List.fill(100)(io)).catchAll(error => Task(println(error)))
}
running 100 transactions with max 10 concurrently
this triggers ERR MULTI calls can not be nested
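One possible application-level mitigation (a sketch, not redis4cats' own fix): serialize transaction execution over the shared connection with a cats-effect Semaphore, trading transaction throughput for safety:

    import cats.effect.IO
    import cats.effect.concurrent.Semaphore

    // Only one fiber may run a transaction on the shared connection at a time,
    // so MULTI/EXEC pairs from different fibers can never interleave.
    def serialized[A](sem: Semaphore[IO])(tx: IO[A]): IO[A] =
      sem.withPermit(tx)

A Semaphore[IO](1) created alongside the connection and wrapped around each tx.filterExec call would rule out the nesting error, at the cost of parallelism.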
Pierre Ricadat
@ghostdogpr
also posted a version with cats IO
Cosmin Ciobanu
@cosminci

@gvolpe it seems like redis4cats isn’t enabling any sort of topology refresh for Redis Cluster, possibly making the clients unstable whenever a failover occurs. The flatTap when initializing the lettuce client only retrieves an initial set of partitions. That flatTap could instead do something along the lines of:

client.setOptions(
  ClusterClientOptions
    .builder()
    .topologyRefreshOptions(
      ClusterTopologyRefreshOptions
        .builder()
        .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT, RefreshTrigger.PERSISTENT_RECONNECTS)
        .build()
    )
    .build()
)

or even better, expose a way for users to set what kind of refresh mechanism they want according to https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster#user-content-refreshing-the-cluster-topology-view. What do you think?
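For reference, a sketch of what combining the adaptive triggers above with the wiki's periodic refresh could look like (the 30-second period is an arbitrary illustration, not a lettuce default):

    import java.time.Duration
    import io.lettuce.core.cluster.ClusterClientOptions
    import io.lettuce.core.cluster.ClusterTopologyRefreshOptions
    import io.lettuce.core.cluster.ClusterTopologyRefreshOptions.RefreshTrigger

    // Adaptive refresh reacts to MOVED redirects and reconnect storms;
    // periodic refresh re-fetches the topology on a fixed schedule as a safety net.
    val refreshOptions: ClusterClientOptions =
      ClusterClientOptions
        .builder()
        .topologyRefreshOptions(
          ClusterTopologyRefreshOptions
            .builder()
            .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT, RefreshTrigger.PERSISTENT_RECONNECTS)
            .enablePeriodicRefresh(Duration.ofSeconds(30))
            .build()
        )
        .build()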

Gabriel Volpe
@gvolpe
@ghostdogpr replied on the GH issue
@cosminci definitely open to improving the cluster API. I'm not really familiar with it and my time is scarce, but if you come up with a proposal, I'd be happy to take a look at it.
Cosmin Ciobanu
@cosminci
:+1:
I’m pretty new to the cats ecosystem so the first iteration might need a lot of feedback but I’ll give it a shot.
Gabriel Volpe
@gvolpe
No problem, and just FYI it doesn't have to be code; proposing ideas like setting the ClusterTopologyRefreshOptions above is good enough for a proposal. It could also be a feature request linking to what Lettuce provides so we can expose an API on top of it :)
Pierre Ricadat
@ghostdogpr
@gvolpe thanks, I missed that part
maybe it should be a bigger warning near the top, because it's easy to believe it will work, and the problems only show up under load :D
I naively believed the library was taking care of it :P
Gabriel Volpe
@gvolpe
yeah, most users don't read documentation (I'm also in that group :grimacing:); we should definitely have a bigger warning at the top
James Cosford
@jamescosford
Is there a way to read from all streams with keys matching a regex? I am working on a system in which I log data from many different sensor platforms. I would like to log the data for each platform individually, but then be able to process the data en-masse in some use-cases.
Gavin Bisesi
@Daenyth
there's definitely some way to express the application control flow you want
what that looks like will depend on circumstances and how the app is put together
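For the key-matching part specifically: Redis matches keys with glob-style patterns (as used by KEYS and SCAN), not regexes. A sketch of collecting the matching stream keys first, assuming a hypothetical sensor:<platform> key layout; the resulting keys could then be fed to the streaming API's read:

    import cats.effect.IO
    import dev.profunktor.redis4cats.RedisCommands

    // Collect the stream keys for all sensor platforms in one call.
    // KEYS is O(n) over the whole keyspace; SCAN is friendlier on large instances.
    def sensorStreamKeys(redis: RedisCommands[IO, String, String]): IO[List[String]] =
      redis.keys("sensor:*")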