Cosmin Ciobanu
@cosminci

From Lettuce:

A connection object with Redis Cluster consists of multiple transport connections. These are:
    Default connection object (Used for key-less commands and for Pub/Sub message publication)
    Connection per node (read/write connection to communicate with individual Cluster nodes)
    When using ReadFrom: Read-only connection per read replica node (read-only connection to read data from read replicas)

^ I'm thinking one of these connections hops between nodes while the other 2 are stable. What I'm seeing with a single pod / cluster client is in the screenshot above, with the 3 switching to a different node every few seconds (but the interval is just an artifact of the Prometheus scraping interval and Grafana refresh interval).

In any case, the performance seems great overall. The service is based on akka-http, has 8 CPUs, and does an evalSha for each request. Without the evalSha (no Redis), the service handles ~24k RPS peak throughput; with Redis, ~20k RPS. So about a 17% performance impact overall, which is really great.
p99 latency to Redis is a bit high at 20k RPS, around 9ms. I tried Jedis as well, and while its p99 was <1ms, the high number of threads meant I couldn't get past 10k RPS before being overloaded.
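As a rough illustration of the evalSha-per-request pattern described above, a minimal sketch using redis4cats' scripting commands might look like the following; the scriptLoad/evalSha usage follows the documented scripting API, but the Lua script, key name, and function names are made up for illustration.

import cats.effect.IO

import dev.profunktor.redis4cats.RedisCommands
import dev.profunktor.redis4cats.effects.ScriptOutputType

// Load the Lua script once at startup; afterwards each request only sends the digest.
def loadScript(redis: RedisCommands[IO, String, String]): IO[String] =
  redis.scriptLoad("return redis.call('INCR', KEYS[1])")

// Called once per incoming request; the key name is illustrative only.
def perRequest(redis: RedisCommands[IO, String, String], sha: String) =
  redis.evalSha(sha, ScriptOutputType.Integer, List("requests"))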
Cosmin Ciobanu
@cosminci
The difference in usability and safety when using redis4cats is amazing though, really great work. If I spot anything that could be improved regarding Redis Cluster, I'll let you know / open a PR.
Gabriel Volpe
@gvolpe
Thanks for the feedback @cosminci, definitely all contributions more than welcome :)
Pierre Ricadat
@ghostdogpr
Hi!
I'm getting this error when using transactions under moderate load: io.lettuce.core.RedisCommandExecutionException: ERR MULTI calls can not be nested
looking at lettuce-io/lettuce-core#67 and lettuce-io/lettuce-core#673
Lettuce connections require single-threaded/synchronized access when using transactions. If two or more threads call MULTI/EXEC methods concurrently, this will destroy the connection state.
is redis4cats taking care of that? I fear it's not 😥
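For context, Lettuce's constraint means at most one MULTI/EXEC block may be in flight per connection at a time. One way an application can enforce that, independent of whatever redis4cats does internally, is to serialize every transaction through a one-permit semaphore. A minimal sketch, assuming cats-effect 2 and a hypothetical runTx action that runs one transaction:

import cats.effect.{ContextShift, IO}
import cats.effect.concurrent.Semaphore
import cats.syntax.parallel._

import scala.concurrent.ExecutionContext

object SerializedTx {
  // Required for Concurrent[IO] and Parallel[IO] in cats-effect 2.
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

  // Run `times` copies of a transaction, never more than one at a time,
  // so MULTI/EXEC blocks cannot interleave on the shared connection.
  def runAll(runTx: IO[Unit], times: Int): IO[Unit] =
    Semaphore[IO](1).flatMap { guard =>
      List.fill(times)(guard.withPermit(runTx)).parSequence.void
    }
}

Serializing on one permit trades throughput for safety; it only demonstrates the access pattern Lettuce asks for.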
Gabriel Volpe
@gvolpe
Hey @ghostdogpr, that's unfortunate to hear. I fought a long battle to get transactions right due to that same issue you are mentioning but I thought that was solved. FWIW I'm also using transactions in production and haven't come across this issue but maybe we don't have as much load as you do.
Could you please raise an issue? The more details, the better :)
Pierre Ricadat
@ghostdogpr
interesting
let me try to reproduce
Pierre Ricadat
@ghostdogpr
yeah, I can reproduce it with a very simple snippet
I will open an issue, but here it is basically
client.use { redis =>
  val key      = "mykey"
  val tx       = RedisTransaction(redis)
  val commands = redis.incr(key) :: redis.expireAt(key, Instant.now()) :: HNil
  val io       = tx.filterExec(commands)

  ZIO.collectAllParN_(10)(List.fill(100)(io)).catchAll(error => Task(println(error)))
}
running 100 transactions with max 10 concurrently
this triggers ERR MULTI calls can not be nested
Pierre Ricadat
@ghostdogpr
also posted a version with cats IO
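Roughly, a cats-effect 2 equivalent of the ZIO reproduction above might look like the sketch below; it assumes the same client, RedisTransaction, and filterExec API as the snippet above, plus an implicit ContextShift[IO] in scope (e.g. from IOApp). The actual snippet was posted on the GitHub issue.

import java.time.Instant

import cats.effect.IO
import cats.effect.concurrent.Semaphore
import cats.syntax.parallel._
import shapeless.HNil

client.use { redis =>
  val key      = "mykey"
  val tx       = RedisTransaction(redis)
  val commands = redis.incr(key) :: redis.expireAt(key, Instant.now()) :: HNil
  val io       = tx.filterExec(commands)

  // 100 transactions, at most 10 running concurrently; print any failure.
  val printErrors = io.void.handleErrorWith(e => IO(println(e)))
  Semaphore[IO](10).flatMap { limit =>
    List.fill(100)(limit.withPermit(printErrors)).parSequence.void
  }
}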
Cosmin Ciobanu
@cosminci

@gvolpe it seems like redis4cats isn’t enabling any sort of topology refresh for Redis Cluster, possibly making clients unstable whenever a failover occurs. The flatTap when initializing the Lettuce client only retrieves an initial set of partitions. That flatTap could instead do something along the lines of

client.setOptions(
  ClusterClientOptions
    .builder()
    .topologyRefreshOptions(
      ClusterTopologyRefreshOptions
        .builder()
        .enableAdaptiveRefreshTrigger(RefreshTrigger.MOVED_REDIRECT, RefreshTrigger.PERSISTENT_RECONNECTS)
        .build()
    )
    .build()
)

or even better, expose a way for users to set what kind of refresh mechanism they want according to https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster#user-content-refreshing-the-cluster-topology-view. What do you think?
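One possible shape for such an API, purely as a hypothetical sketch and not something redis4cats exposes today: accept a caller-supplied ClusterClientOptions when building the cluster client, so any refresh strategy Lettuce supports can be plugged in. The trait and method name below are invented for illustration.

import cats.effect.{IO, Resource}

import dev.profunktor.redis4cats.RedisCommands
import io.lettuce.core.cluster.ClusterClientOptions

// Hypothetical builder: the options would be applied to the underlying Lettuce
// RedisClusterClient via setOptions, exactly as in the snippet above.
trait ClusterApi {
  def clusterUtf8(
      uri: String,
      options: ClusterClientOptions
  ): Resource[IO, RedisCommands[IO, String, String]]
}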

Gabriel Volpe
@gvolpe
@ghostdogpr replied on the GH issue
@cosminci definitely open to improving the cluster API. Though I'm not really familiar with it and my time is scarce, if you come up with a proposal I'd be happy to take a look at it.
Cosmin Ciobanu
@cosminci
:+1:
I’m pretty new to the cats ecosystem so the first iteration might need a lot of feedback but I’ll give it a shot.
Gabriel Volpe
@gvolpe
No problem. Just FYI, it doesn't have to be code; proposing ideas like the ClusterTopologyRefreshOptions setting above is good enough for a proposal. It could also be a feature request linking to what Lettuce provides so we can expose an API on top of it :)
Pierre Ricadat
@ghostdogpr
@gvolpe thanks, I missed that part
maybe it should be a bigger warning near the top, because it's easy to believe it will work, and the problems only show up under load :D
I naively believed the library was taking care of it :P
Gabriel Volpe
@gvolpe
yeah, most users don't read the documentation (I'm also in that group :grimacing:), so we should definitely have a bigger warning at the top
James Cosford
@jamescosford
Is there a way to read from all streams with keys matching a regex? I am working on a system in which I log data from many different sensor platforms. I would like to log the data for each platform individually, but then be able to process the data en masse in some use cases.
Gavin Bisesi
@Daenyth
there's definitely some way to express the application control flow you want
what that looks like will depend on circumstances and how the app is put together
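One concrete note on the regex part: Redis matches key names with glob-style patterns (KEYS, or SCAN with MATCH), not full regexes. A minimal sketch of the discovery step, assuming redis4cats exposes KEYS on its key commands and a hypothetical sensor:* naming scheme; each discovered stream would then be consumed with whatever streaming API the app already uses.

import cats.effect.IO

import dev.profunktor.redis4cats.RedisCommands

// Discover all sensor stream keys. KEYS scans the whole keyspace and blocks the
// server, so prefer SCAN with a MATCH pattern in production.
def sensorStreamKeys(redis: RedisCommands[IO, String, String]): IO[List[String]] =
  redis.keys("sensor:*")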
James Cosford
@jamescosford

Hi again! I'm reading from streams using StreamingOffset.Latest, expecting that I would receive all messages added after I started the reading process. Instead, my read process doesn't get any updates after I add a new item to the stream.

If I use StreamingOffset.All I get all of the historical items, plus the newly added one.

Any ideas why that might be?
James Cosford
@jamescosford
Reading the Redis docs, I get the idea that the "$" offset is expected to be used with the blocking XREAD... I'm not sure if that's the issue I'm facing. For now I've covered it up by writing my last offset back into a hash, like a poor man's consumer group implementation.
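A sketch of that poor man's consumer group idea, assuming redis4cats' hash commands (hGet/hSet); the hash key, field naming, and class name are illustrative. On startup the stored id is read back and reading resumes from it instead of from Latest or All.

import cats.effect.IO

import dev.profunktor.redis4cats.RedisCommands

final class OffsetStore(redis: RedisCommands[IO, String, String]) {
  private val hashKey = "stream-offsets" // hypothetical key

  // Last message id that was fully processed for a given stream, if any.
  def lastOffset(streamKey: String): IO[Option[String]] =
    redis.hGet(hashKey, streamKey)

  // Record the last processed id so a restart can resume from it.
  def commit(streamKey: String, messageId: String): IO[Unit] =
    redis.hSet(hashKey, streamKey, messageId).void
}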
Serhii Shobotov
@sshobotov
hey @gvolpe, sorry for a bit of a dumb question, but how can I access the latest published snapshot version? Searching via Sonatype doesn't help
Gabriel Volpe
@gvolpe
Click on the latest CI job and look for the Publish refs/heads/master workflow. You will find the latest SNAPSHOT version in the logs.
Serhii Shobotov
@sshobotov
oh, right, thanks a lot!
Gabriel Volpe
@gvolpe

@/all a new release is on its way to Maven Central :tada: Thanks a lot for your contributions: https://github.com/profunktor/redis4cats/releases/tag/v0.11.0

I would like to highlight that the long-standing issue regarding transactions has now been fixed, though there are still some caveats, which have been properly documented. This was an extremely hard issue to solve; I encourage you to give it a try and report any issues you may find.

With this issue out of the way, I will now focus my spare time on getting it integrated with CE3 and Scala 3.

Jakub Kozłowski
@kubukoz
@gvolpe do we have the sbt-github-actions/sbt-ci-release flow set up here?
Gabriel Volpe
@gvolpe
We have Nix at home :wink: @kubukoz
Jakub Kozłowski
@kubukoz
fair
matrixbot
@matrixbot
@gvolpe:matrix.org Test
Gabriel Volpe
@gvolpe
Not there yet...
gvolpe
@gvolpe:matrix.org [m]
Test
Gabriel Volpe
@gvolpe
:tada:
Jakub Kozłowski
@kubukoz
Nice!
Vasiliy Muzychenko
@chimmi
Hey everyone! Are there any plans to expand streams support? xreadgroup and xack in particular