
Using RESP2 isn't really an issue unless you want to make heavy use of Redis-assisted client-side caching invalidation. In fact, the majority of Redis clients support only RESP2, as RESP3 isn't widely adopted yet.

Ok. So what could be the reason for Microsoft Azure Redis not sending the expected tuples like you mentioned above? And if I need to manually force RESP2 using RedisClient, how can I use the Micronaut cache feature? If we define the value with redis in application.yml, Micronaut will create the RedisClient bean automatically, and eventually it will negotiate the RESP3 protocol, which is failing with Azure Redis. Tested in a local Docker container for Redis, and it works without any manual plumbing.

 public static ProtocolVersion newestSupported() {
        return RESP3;
    }

This is the class for Redis cache configuration; it reads the cache names defined under redis.caches.xxx.

Is it possible to add the protocol version as a RedisURI parameter, something like below, in a future Lettuce release?
Kindly let me know if there is any good way I can solve this issue. Thanks for your help.
The reason being: Microsoft adopting RESP3 in the near future is highly unlikely since, as you mentioned, many clients have not switched to the RESP3 protocol.
Mark Paluch
No, the protocol cannot be set via RedisURI. You might want to ask Micronaut to provide a way to configure ClientOptions. I think that the Azure Redis implementation needs to be fixed with regards to the RESP3 protocol.
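For reference, a minimal sketch of what pinning the protocol through ClientOptions might look like (assuming Lettuce 6.x; the URI is a placeholder — in the Micronaut case this would have to happen wherever the RedisClient bean is customized):

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;
import io.lettuce.core.protocol.ProtocolVersion;

public class Resp2Example {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        // Pin the protocol to RESP2 instead of letting Lettuce negotiate RESP3.
        // With an explicit version, the HELLO 3 handshake that Azure Redis 6.0.3
        // answered incorrectly is not relied upon.
        client.setOptions(ClientOptions.builder()
                .protocolVersion(ProtocolVersion.RESP2)
                .build());
    }
}
```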
Thank you Mark. Yes, I agree the Azure Redis implementation should be changed/verified instead of allowing users to change the protocol, which may result in unexpected errors. Also, what I read is that RESP2 will be deprecated in the future, so parameterizing the protocol version is not a good idea, my bad...

Hello @mp911de

Got the response from Microsoft. Details are as below...

Hi Binoy,

Would you be able to confirm if below is the error which you are talking about?

•    Redis client lib that uses the HELLO command for protocol negotiation. The HELLO command has a problem: it says it will answer with 7 entries, but only answers with 6. azure-redis 6.0.3 responding incorrectly to HELLO 3? map length is incorrect - Microsoft Q&A
•    https://github.com/antirez/RESP3/blob/master/spec.md

There is a known issue on Redis 6.0.3 in the above. Below is an update on that from PG: "We are able to repro an issue with the hello command in 6.0.3 as well. The new version that will be included in the next patch fixes the issue, so it will have a broken response until we can patch that cache."
This patch would be applied by mid or end of July, post which the issue should be mitigated.

But please note Redis 6 is currently under Preview.

Seems this is the actual issue, right? Can you please confirm, Mark?

Mark Paluch
That sounds about right.
Thank you Mark for confirming.
Kyrylo Nozdrin
Hello. We are using Vert.x 3.6.0. Migration to Vert.x 4.0 is not possible for now. The old vertx-redis-client doesn't support clustering. Is it possible to use any version of Lettuce in this case?
Kyrylo Nozdrin
I found this random example on GitHub https://github.com/UniverseProjects/EventServer/blob/master/src/main/java/com/universeprojects/eventserver/RedisHistoryService.java and it seems easy to integrate Lettuce async commands because of java.util.concurrent.CompletionStage.
Any known pitfalls?
Hi, how can I verify whether Redis pooling is working or not?
Mark Paluch
@ig0revich Lettuce uses Netty internally and exposes asynchronous and reactive APIs. RedisFuture implements CompletionStage.
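Because RedisFuture implements CompletionStage, it composes like any other future. A sketch of the pattern — a plain CompletableFuture stands in for a RedisFuture here so it runs without a Redis server:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class ComposeExample {
    // A RedisFuture<String> returned by commands.get(...) composes the same
    // way, since RedisFuture implements CompletionStage.
    static String decorate(CompletionStage<String> reply) {
        return reply.thenApply(v -> "user:" + v)   // transform the reply
                    .toCompletableFuture()
                    .join();                       // block only at the edge of the app
    }

    public static void main(String[] args) {
        // Stand-in for a Redis reply:
        System.out.println(decorate(CompletableFuture.completedFuture("42"))); // prints user:42
    }
}
```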
Patrick Shim

Hello, I'm trying to add a tracer to the lettuce as this doc(https://github.com/lettuce-io/lettuce-core/wiki/Tracing) says, but I have a problem.
The first command doesn't have any parent span context but the second command does.

Here's the sample code.

    @RequestMapping(method = [RequestMethod.GET], path = ["/users/{id}"])
    suspend fun getById(@PathVariable id: Long): UserEntity? {

I'm using Spring Webflux with coroutine, Spring Data Redis Reactive, Spring Cloud Sleuth

Mark Paluch
I'm not fully sure whether Kotlin's coroutine context is propagated correctly. Can you share an example and file an issue with Spring Cloud Sleuth for the beginning?
Is reactive BLPOP truly blocking? Can multiple subscriptions to multiple BLPOP queues be established without blocking those threads?
Mark Paluch
I don't know; that is probably a Redis question. Redis says that the client connection will be blocked until the timeout expires or until an element is returned to the command.
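One way to set this up, sketched under the assumption that each blocking command gets its own dedicated connection (BLPOP blocks the whole connection server-side, not a client thread; queue names are made up):

```java
import io.lettuce.core.KeyValue;
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import reactor.core.publisher.Mono;

public class BlpopExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        // One dedicated connection per BLPOP so the blocked connections
        // cannot delay each other or any other commands.
        for (String queue : new String[] {"jobs:high", "jobs:low"}) {
            StatefulRedisConnection<String, String> conn = client.connect();
            // timeout 0 = block indefinitely until an element arrives
            Mono<KeyValue<String, String>> popped = conn.reactive().blpop(0, queue);
            popped.subscribe(kv -> System.out.println(kv.getKey() + " -> " + kv.getValue()));
        }
    }
}
```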
Dileep Kumar M
Do we have Lettuce benchmark results? I have a use case that requires 100K/sec sorted set add operations.
Sven Ludwig

I have a question regarding the reactive API. I have 1 RedisClient on which I create 1 instance of RedisReactiveCommands. On that RedisReactiveCommands instance I first subscribe to a Redis Stream using a Publisher created with xread, successfully.

That said, for each element passing through this stream, I also need to perform an hget.

I naively used to perform the hget simply using the very same instance of RedisReactiveCommands without doing anything explicit with respect to Project Reactor stuff etc. and also without chaining or nesting Observables etc. This used to work, but currently I have the problem that all attempts to perform the hget fail in the sense that no elements are published by the hget-Publisher (even though in my Redis Server everything is still there).

Is it possible that my Publishers interfere with each other, perhaps prevent or somehow indirectly block each other from publishing elements? Do I have to consider something technical in order to successfully consume the Redis Stream via my xread-Publisher AND, for each so consumed element, to also successfully create and consume a stream based on an hget-Publisher, more or less in parallel?

Mark Paluch
When using XREAD make sure to not use it in blocking mode when using a single connection. Otherwise, commands sent after XREAD will be delayed until XREAD unblocks.
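A sketch of that distinction — non-blocking XREAD on a shared connection versus the blocking form on its own connection (the stream name and timeouts are placeholders):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.XReadArgs;
import io.lettuce.core.api.StatefulRedisConnection;

public class XReadExample {
    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Non-blocking XREAD: safe to interleave with other commands (e.g. HGET)
        // on the same connection, since it returns immediately.
        StatefulRedisConnection<String, String> shared = client.connect();
        shared.sync().xread(XReadArgs.StreamOffset.latest("events"));
        shared.sync().hget("event-details", "some-field");

        // Blocking XREAD (BLOCK 5000): give it a dedicated connection, otherwise
        // commands queued behind it are delayed until XREAD unblocks.
        StatefulRedisConnection<String, String> dedicated = client.connect();
        dedicated.sync().xread(XReadArgs.Builder.block(5000),
                XReadArgs.StreamOffset.latest("events"));
    }
}
```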

Hi, I have an error that occurs while doing a graceful shutdown in WebFlux.

java.util.concurrent.CancellationException: Disconnected
    at reactor.core.publisher.FluxPublish$PublishSubscriber.disconnectAction(FluxPublish.java:314) ~[reactor-core-3.4.6.jar:3.4.6]
    at reactor.core.publisher.FluxPublish$PublishSubscriber.dispose(FluxPublish.java:305) ~[reactor-core-3.4.6.jar:3.4.6]
    at org.springframework.data.redis.connection.lettuce.LettuceReactiveSubscription$State.terminate(LettuceReactiveSubscription.java:288) ~[spring-data-redis-2.5.1.jar:2.5.1]
    at org.springframework.data.redis.connection.lettuce.LettuceReactiveSubscription.lambda$cancel$6(LettuceReactiveSubscription.java:177) ~[spring-data-redis-2.5.1.jar:2.5.1]
    at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:44) ~[reactor-core-3.4.6.jar:3.4.6]
    at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.subscribeNext(MonoIgnoreThen.java:236) ~[reactor-core-3.4.6.jar:3.4.6]
    at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.onComplete(MonoIgnoreThen.java:203) ~[reactor-core-3.4.6.jar:3.4.6]
    at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onComplete(Operators.java:2057) ~[reactor-core-3.4.6.jar:3.4.6]
    at reactor.core.publisher.MonoPeekTerminal$MonoTerminalPeekSubscriber.onComplete(MonoPeekTerminal.java:299) ~[reactor-core-3.4.6.jar:3.4.6]
    at reactor.core.publisher.MonoIgnoreElements$IgnoreElementsSubscriber.onComplete(MonoIgnoreElements.java:88) ~[reactor-core-3.4.6.jar:3.4.6]
    at io.lettuce.core.RedisPublisher$ImmediateSubscriber.onComplete(RedisPublisher.java:896) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.RedisPublisher$State.onAllDataRead(RedisPublisher.java:698) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.RedisPublisher$State$3.read(RedisPublisher.java:608) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.RedisPublisher$State$3.onDataAvailable(RedisPublisher.java:565) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.RedisPublisher$RedisSubscription.onDataAvailable(RedisPublisher.java:326) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.RedisPublisher$RedisSubscription.onAllDataRead(RedisPublisher.java:341) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.RedisPublisher$SubscriptionCommand.doOnComplete(RedisPublisher.java:778) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:65) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.protocol.CommandWrapper.complete(CommandWrapper.java:63) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.cluster.ClusterCommand.complete(ClusterCommand.java:65) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.pubsub.PubSubCommandHandler.completeCommand(PubSubCommandHandler.java:260) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.pubsub.PubSubCommandHandler.notifyPushListeners(PubSubCommandHandler.java:220) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.protocol.CommandHandler.decode(CommandHandler.java:646) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.pubsub.PubSubCommandHandler.decode(PubSubCommandHandler.java:112) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.lettuce.core.protocol.CommandHandler.channelRead(CommandHandler.java:598) ~[lettuce-core-6.1.2.RELEASE.jar:6.1.2.RELEASE]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.65.Final.jar:4.1.65.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-tra

I have also asked on Stack Overflow.


We found a very critical issue where the reactive Lettuce client returns mismatched data once in a while in one of the pods. Once it starts, it only gets back to a normal state after a restart of the pod. Just before we see the data mismatch, we see two errors. 1) ""stack": "j.l.IndexOutOfBoundsException: Index 2 out of bounds for length 2\n\tat j.i.util.Preconditions.outOfBounds(Unknown Source)\n\tat j.i.util.Preconditions.outOfBoundsCheckIndex(Unknown Source)\n\tat j.i.util.Preconditions.checkIndex(Unknown Source)\n\t... 2 frames excluded\n\tat i.l.c.c.RedisAdvancedClusterReactiveCommandsImpl.lambda$mget$11(RedisAdvancedClusterReactiveCommandsImpl.java:307)
at r.c.p.FluxMapFuseable$MapFuseableSubscriber.poll(FluxMapFuseable.java:184)\n\t
at r.c.p.FluxFlattenIterable$FlattenIterableSubscriber.drainAsync(FluxFlattenIterable.java:330)\n\t"

"stack": "j.u.NoSuchElementException: null
at java.util.ArrayList$Itr.next(Unknown Source)\n\t
at i.l.c.o.KeyValueListOutput.set(KeyValueListOutput.java:58)\n\t
at i.l.c.p.RedisStateMachine.safeSet(RedisStateMachine.java:457)\n\t
at i.l.c.p.RedisStateMachine.handleBytes(RedisStateMachine.java:274)\n\t

Here is the tech stack:

  1. Spring Webflux: Version 2.4.1
  2. lettuce-core: version 5.3.7.RELEASE and also happening with 6.0.1.RELEASE
  3. Using Reactive commands for GET and MGET use cases

Note: This application handles really high QPS

Hi. I accidentally found from the JFR 'old object sample' event list that RedisClusterNode takes 500MB of heap, and it's growing over time.
Could you guys help me figure out why this happens? (I'm using lettuce-core 5.3.7.)
jfr print --events OldObjectSample JFR_FILE_PATH <- is what I ran
Hello folks,
getting the exception below:
2021-08-13T10:25:41.439768864Z stderr F at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:44)
2021-08-13T10:25:41.439761955Z stderr F at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41)
2021-08-13T10:25:41.439730914Z stderr F at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:74)
2021-08-13T10:25:41.439345893Z stderr F org.springframework.data.redis.RedisSystemException: Redis exception; nested exception is io.lettuce.core.RedisException: java.lang.OutOfMemoryError: Java heap space
2021-08-13T10:25:41.439140011Z stderr F ERROR ErrorPageFilter Forwarding to error page from request [/webviewer/rest/docviewer/loadPdf/1/1302] due to exception [Redis exception; nested exception is io.lettuce.core.RedisException: java.lang.OutOfMemoryError: Java heap space]
2021-08-13T10:25:41.331319993Z stderr F Caused by: java.lang.OutOfMemoryError: Java heap space
2021-08-13T10:25:41.331299199Z stderr F ... 18 more
2021-08-13T10:25:41.331293248Z stderr F at org.springframework.data.redis.connection.lettuce.LettuceSetCommands.sMembers(LettuceSetCommands.java:244)
2021-08-13T10:25:41.331285353Z stderr F at com.sun.proxy.$Proxy269.smembers(Unknown Source)
2021-08-13T10:25:41.331269706Z stderr F at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
2021-08-13T10:25:41.331262525Z stderr F at io.lettuce.core.cluster.ClusterFutureSyncInvocationHandler.handleInvocation(ClusterFutureSyncInvocationHandler.java:130)
2021-08-13T10:25:41.331256176Z stderr F at io.lettuce.core.internal.Futures.awaitOrCancel(Futures.java:250)
2021-08-13T10:25:41.331246996Z stderr F at io.lettuce.core.internal.Exceptions.bubble(Exceptions.java:83)
2021-08-13T10:25:41.331218242Z stderr F Caused by: io.lettuce.core.RedisException: java.lang.OutOfMemoryError: Java heap space
2021-08-13T10:25:41.331070849Z stderr F at java.lang.Thread.run(Thread.java:748)
2021-08-13T10:25:41.331065311Z stderr F at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2021-08-13T10:25:41.331059868Z stderr F at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2021-08-13T10:25:41.331053362Z stderr F at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
2021-08-13T10:25:41.331047951Z stderr F at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
2021-08-13T10:25:41.331042036Z stderr F at java.util.concurrent.FutureTask.run(FutureTask.java:266)
2021-08-13T10:25:41.33103566Z stderr F at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
Any idea?

We are having some latency issues with our app that uses Lettuce to connect to Redis ElastiCache in AWS.
In production, the application throughput is about 600K RPS, running on ~200 nodes and an ElastiCache Redis cluster of ~60 machines (5 shards + replicas).

I've been able to reproduce the high latency in our TEST environment:

  • 1 application node running under 500 RPS total (~2500 RPS to redis)
  • 1 box in the redis cluster though it runs in a cluster mode (tried with 3 nodes but we see the same issue)

The infrastructure runs in the same AWS account / region and Availability zone.

Here is the RedisClusterClient setup we use:

  • DefaultEventExecutorGroup numThreads = 20 (tried from processor count to 50)
  • DefaultEventLoopGroupProvider numThreads = 20 (tried from processor count to 50)
  • default timeout = 20ms
  • cluster options:
  • Codec: StringByteArray codec
  • We use AsyncCommands and LettuceFutures.awaitOrCancel
  • ReadFrom: NEAREST (tried ANY, REPLICA_PREFERRED etc...)

The Lettuce internal metrics below are aligned with what we measure via our app metrics and Grafana dashboards.
I'm surprised to see that the p99 is often around 10ms, where I'd expect to see something around 1 or 2 ms (trace below). We executed a few runs of SLOWLOG against our Redis nodes, and those slow commands do NOT surface on the Redis side.

Is there an advanced way for us to identify where those commands get stuck? Maybe in the event loop before being actually sent to Redis? (FYI, we use gRPC to expose the service, and that relies on Netty too.)

Thanks for reading

LETTUCE CommandLatency metrics

MGET : /10.X.Y.57:6379 => [count=11524, timeUnit=MICROSECONDS, firstResponse=[min=786, max=33816, percentiles={50.0=1794, 90.0=5079, 95.0=6291, 99.0=10747, 99.9=14614}], completion=[min=790, max=33816, percentiles={50.0=1794, 90.0=5079, 95.0=6291, 99.0=10747, 99.9=14614}]]
HMGET : /10.X.Y.57:6379 => [count=20661, timeUnit=MICROSECONDS, firstResponse=[min=724, max=33816, percentiles={50.0=1753, 90.0=4816, 95.0=6127, 99.0=9502, 99.9=16252}], completion=[min=724, max=33816, percentiles={50.0=1753, 90.0=4816, 95.0=6160, 99.0=9502, 99.9=16252}]]
MGET : /10.X.Y.194:6379 => [count=11494, timeUnit=MICROSECONDS, firstResponse=[min=72, max=29491, percentiles={50.0=638, 90.0=2064, 95.0=3538, 99.0=6586, 99.9=15335}], completion=[min=73, max=29491, percentiles={50.0=638, 90.0=2064, 95.0=3538, 99.0=6586, 99.9=15335}]]
HMGET : /10.X.Y.194:6379 => [count=20901, timeUnit=MICROSECONDS, firstResponse=[min=71, max=17170, percentiles={50.0=593, 90.0=2228, 95.0=3719, 99.0=7634, 99.9=14155}], completion=[min=73, max=17170, percentiles={50.0=598, 90.0=2228, 95.0=3719, 99.0=7634, 99.9=14155}]]
MGET : /10.X.Y.57:6379 => [count=8063, timeUnit=MICROSECONDS, firstResponse=[min=716, max=36175, percentiles={50.0=1695, 90.0=4521, 95.0=6586, 99.0=14286, 99.9=36175}], completion=[min=716, max=36175, percentiles={50.0=1695, 90.0=4521, 95.0=6586, 99.0=14286, 99.9=36175}]]
HMGET : /10.X.Y.57:6379 => [count=14712, timeUnit=MICROSECONDS, firstResponse=[min=179, max=36175, percentiles={50.0=1679, 90.0=4358, 95.0=6586, 99.0=14221, 99.9=35913}], completion=[min=179, max=36175, percentiles={50.0=1679, 90.0=4358, 95.0=6586, 99.0=14221, 99.9=35913}]]
MGET : /10.X.Y.194:6379 => [count=8029, timeUnit=MICROSECONDS, firstResponse=[min=84, max=19005, percentiles={50.0=511, 90.0=1572, 95.0=2572, 99.0=5406, 99.9=8060}], completion=[min=86, max=19005, percentiles={50.0=514, 90.0=1572, 95.0=2588, 99.0=5406, 99.9=8060}]]
Mark Paluch
You should run your app through a profiler to detect what else happens on the event loop. Using the asynchronous API, all callbacks happen on the Netty event loop thread. Any CPU-bound work affects I/O performance. You might want to switch schedulers occasionally (thenApplyAsync and friends).
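That scheduler switch can be shown with a plain CompletableFuture, which composes the same way as a RedisFuture via CompletionStage (the worker-pool size and the transform are illustrative stand-ins):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadExample {
    // Stand-in for CPU-bound work that should NOT run on the netty event loop.
    static int expensiveTransform(String payload) {
        return payload.length();
    }

    public static void main(String[] args) {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        // Stand-in for a RedisFuture reply:
        CompletableFuture<String> reply = CompletableFuture.completedFuture("payload-from-redis");

        // thenApplyAsync(fn, executor) hops off the completing thread (the event
        // loop, for a real RedisFuture) onto the worker pool before transforming.
        int len = reply.thenApplyAsync(OffloadExample::expensiveTransform, workers).join();
        System.out.println(len); // prints 18
        workers.shutdown();
    }
}
```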
I have a question about TCP retransmits on Linux using the default transport implementation (no native extensions). If the command timeout is set to less than the OS socket timeout (TCP_RTO_MIN), then there won't be any attempts at retransmission if the request times out, correct? Also, if it is set to >= TCP_RTO_MIN, am I correct in assuming it will retry?
Mark Paluch
If the command timeout is less than the retransmit threshold, then the client-side of the command will terminate with a timeout exception. Since data was written to the send buffers already, there's no way to retract the data to send unless terminating the connection.

Hi, I am new to Redis. I have Redis Enterprise running in Docker on my Mac. From a Java program, I was able to connect to Redis using "RedisClient redisClient = RedisClient.create("redis://localhost:12000");"
The public endpoint of the database reads as "redis-12000.cluster.local:12000 /". I can connect through redis-cli using the endpoint with the command "/opt/redislabs/bin/redis-cli -h redis-12000.cluster.local -p 12000".

But I have NOT been able to connect using the public endpoint of the database from my Java code by doing "RedisClient.create("redis://redis-12000.cluster.local:12000")". I have exhausted many combinations, like removing "redis://", using the IP address, or creating a RedisURI with hostname and port, but no luck.

Am I doing something wrong with the endpoint in my Java code? Any help is much appreciated. Thanks!

EpollEventLoop spends most of its CPU time in UniCompletion.claim (70%), which is reached via AsyncCommand.completeResult.
Is this normal behavior? Any opinion would be appreciated.
Mark Paluch
What is your code doing in terms of future callbacks/future composition?
Henrique Campos
Sup. Is there any way to use a cluster connection to run SCANs that are not cluster-wide scans? I appreciate the feature but apparently there's no flag to turn it off if I want more fine-grained control over the node selection that the SCANs run on, other than creating separate, cluster-unaware connections myself.
Henrique Campos
Actually, in retrospect, after manually creating a separate cluster-unaware connection, the problem still occurs. I found some issues opened on Redis itself regarding this and will have to work around them. Basically, SCAN is returning keys that have been rebalanced to other nodes.
Mark Paluch
You can obtain a connection to a specific node (via StatefulRedisClusterConnection.getConnection(…)) and issue a SCAN there
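A minimal sketch of that per-node SCAN, assuming Lettuce 6.x (the cluster URI, node host/port, and page size are placeholders):

```java
import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class NodeScanExample {
    public static void main(String[] args) {
        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> cluster = client.connect();

        // Connection to one specific node: SCAN here iterates only that
        // node's keyspace instead of the cluster-wide scan.
        RedisCommands<String, String> node = cluster.getConnection("localhost", 7000).sync();

        KeyScanCursor<String> cursor = node.scan(ScanArgs.Builder.limit(100));
        cursor.getKeys().forEach(System.out::println);
        while (!cursor.isFinished()) {
            cursor = node.scan(cursor, ScanArgs.Builder.limit(100));
            cursor.getKeys().forEach(System.out::println);
        }
    }
}
```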
Henrique Campos

Yeah, I was doing that previously, and thought the MOVED keys were a sign that it was still multiplexing the SCANs. It's this long-standing issue; good to know, if someone else comes along with the same issue also thinking it's Lettuce: redis/redis#4810

My workaround for now, which has relatively high overhead but works, was to catch MOVED exceptions and ignore them.

Hello everyone. Could you please tell me, does this lib have a method to decode the response from "connection.sync().dump("HashList")"?
Or is there some other way to deserialize the response?
Hey, I'm pretty new to Lettuce. I've been able to connect to an AWS ElastiCache cluster with no encryption in transit/Redis AUTH enabled with no issues, but I can't connect to one with encryption in transit enabled. This is a QA cluster with a hard-coded auth token. I've used the RedisURI builder with SSL set to true and the password set to the hard-coded value, and the error log states it is unable to connect to master.XXXXXXXXXX:port no matter what I've tried. I was wondering if anyone has had similar experiences and could help assist me.
val upstreamUri: RedisURI = RedisURI.Builder.redis(redisHost, 6379).build()
val client: RedisClient = RedisClient.create()
val connection: StatefulRedisMasterReplicaConnection[String, String] = MasterReplica.connect(client, StringCodec.UTF8, upstreamUri)
val commands: RedisCommands[String, String] = connection.sync()
Forgot to mention this is all in Scala, but that shouldn't make a huge difference. One thing to note is that the host for the non-encryption-in-transit cluster doesn't have "master" in the URL, but the one with encryption does (not sure if this makes a difference, but figured I'd point it out).
lazy val redisAuth = ******
val upstreamUri: RedisURI = RedisURI.Builder.redis(redisHost, 6379).withSsl(true).withPassword(redisAuth.toCharArray()).build()
This is the only change I've made when I switched over to the encryption-in-transit host: adding SSL enabled + the auth token. But I'm really not sure what's causing the connection failure, since everything else works with the non-encrypted ElastiCache cluster.
Siva Ram Nyapathi

Hi @mp911de ,
I am facing this issue sporadically: org.springframework.dao.QueryTimeoutException. This is happening in production :( Is there a way to avoid this? It takes 1 min before it throws an error. We are running Redis on a pod and our app is running on a pod, using Spring WebFlux. What are the parameters that we need to check on the server side as well as the client side?
I tried setting the timeout, but unfortunately Redis is not taking the command timeout option; it is taking the default value of 1 min.

Can you help ?


Hi @mp911de, I am getting this in my Redis Cluster

Redis exception; nested exception is io.lettuce.core.cluster.PartitionSelectorException: Cannot determine a partition for slot 5161

What can be the reason for this?
