Rok Carl
@rokcarl
thanks
Rok Carl
@rokcarl
it works!
voiceofvega
@voiceofvega
Hello, I am using Lettuce's reactive API. Besides "normal" Redis access by a web application, I need to run some background processes that periodically scan parts of the database, obtain object values, and perform actions (key renames). I am struggling to understand how to use scan() in a reactive context. Would someone have pointers to a description of its operation and/or sample code?
Mark Paluch
@mp911de
Proper scan(…) usage requires calling it in a loop with the cursor returned by the previous invocation. This can be a bit tricky in async/reactive arrangements, so we decided to provide io.lettuce.core.ScanStream, which performs incremental scans behind the scenes while you consume the scan result as Flux<K>.
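For illustration, a minimal sketch of how that can look with the reactive API (the Redis URI, key pattern, and per-key action are placeholders, not part of the question above):

import io.lettuce.core.RedisClient;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.ScanStream;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.reactive.RedisReactiveCommands;

class ReactiveScanSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URI
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisReactiveCommands<String, String> reactive = connection.reactive();

        // ScanStream drives the SCAN cursor internally and emits keys as a Flux<K>.
        ScanStream.scan(reactive, ScanArgs.Builder.matches("user:*").limit(200))
                .flatMap(key -> reactive.rename(key, "archived:" + key)) // placeholder per-key action
                .blockLast();

        connection.close();
        client.shutdown();
    }
}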
voiceofvega
@voiceofvega
Thanks! I had overlooked this package, and using scan in a reactive context is indeed tricky...
Adarsh Ramamurthy
@radarsh
Hello, I have a setup involving a Spring Boot microservice using Spring Data Redis, Atomikos for transactions, and Lettuce for Redis. I use a standalone Redis and make use of transactions in my application. I'm trying to identify the root cause of a memory leak in Lettuce when Redis goes down. The queue of io.lettuce.core.protocol.TransactionalCommand instances held within io.lettuce.core.StatefulRedisConnectionImpl.multi keeps growing until the JVM crashes. I'm sure this is not a bug but rather a misconfiguration on my part, yet I've been unable to resolve it.
3 replies
Mark Paluch
@mp911de
We would need to see a bit of the code showing how this is done. Issuing multi requires an exec to submit the commands and to clear the output aggregator at io.lettuce.core.StatefulRedisConnectionImpl.multi.
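To illustrate the multi/exec pairing with plain Lettuce (this is a sketch outside the Spring @Transactional path; URI and key are placeholders):

import io.lettuce.core.RedisClient;
import io.lettuce.core.TransactionResult;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;

class MultiExecSketch {

    public static void main(String[] args) {

        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URI
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> commands = connection.sync();

        commands.multi();                           // start queueing commands
        commands.set("some-key", "some-value");     // queued, not yet executed
        TransactionResult result = commands.exec(); // EXEC submits the queue and clears the buffered commands

        System.out.println("discarded: " + result.wasDiscarded());

        connection.close();
        client.shutdown();
    }
}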
Adarsh Ramamurthy
@radarsh
I will create a reproducible example. I am using @Transactional and not calling multi/exec manually anywhere in the code.
Mark Paluch
@mp911de
Okay. It's likely that a bit in the template setup or transaction synchronization is missing.
Joshua Cohen
@jcohen

I'm running into the following error when attempting to run multiple SCANs in parallel against Redis running in cluster mode:

A scan in Redis Cluster mode requires to reuse the resulting cursor from the previous scan invocation

According to the Redis docs:

It is possible for an infinite number of clients to iterate the same collection at the same time, as the full state of the iterator is in the cursor, that is obtained and returned to the client at every call. Server side no state is taken at all.

So I'm guessing that because I'm issuing multiple concurrent scans from the same client, there is some state spanning these calls that prevents this from working?
I'm wondering if I have any alternatives other than either using multiple clients or rewriting the logic to serialize the parallel scan invocations?
(To be clear, each parallel invocation of scan does reuse the cursor returned from its previous invocation; this issue only arises when multiple parallel scans are kicked off effectively simultaneously.)
Joshua Cohen
@jcohen
This turned out to be a bug in my scan implementation (I was passing along the string value of the cursor rather than the cursor itself; it worked fine in non-cluster mode because I checked whether the string value of the cursor was "0" rather than using isFinished). Apologies for the noise!
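For reference, the corrected pattern is roughly this sketch: reuse the returned cursor object and rely on isFinished() (page size and key handling below are placeholders):

import java.util.List;

import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.cluster.api.sync.RedisAdvancedClusterCommands;

class ClusterScanLoopSketch {

    // Iterate all keys by passing the previous cursor object back into scan();
    // in cluster mode the cursor carries more state than its numeric string value.
    static void scanAll(RedisAdvancedClusterCommands<String, String> commands) {

        KeyScanCursor<String> cursor = commands.scan(ScanArgs.Builder.limit(500));
        handle(cursor.getKeys());

        while (!cursor.isFinished()) {
            cursor = commands.scan(cursor, ScanArgs.Builder.limit(500));
            handle(cursor.getKeys());
        }
    }

    static void handle(List<String> keys) {
        keys.forEach(System.out::println); // placeholder per-batch processing
    }
}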
Adarsh Ramamurthy
@radarsh

Hello, I have a setup involving a Spring Boot microservice using Spring Data Redis, Atomikos for transactions, and Lettuce for Redis. I use a standalone Redis and make use of transactions in my application. I'm trying to identify the root cause of a memory leak in Lettuce when Redis goes down. The queue of io.lettuce.core.protocol.TransactionalCommand instances held within io.lettuce.core.StatefulRedisConnectionImpl.multi keeps growing until the JVM crashes. I'm sure this is not a bug but rather a misconfiguration on my part, yet I've been unable to resolve it.

@mp911de I have a simple application that can reproduce this now - https://github.com/radarsh/lettuce-oom.

Mark Paluch
@mp911de
Thanks a lot, I'll have a look.
Mark Paluch
@mp911de
I'm not able to reproduce the issue or the memory consumption profile, @radarsh
[Attachment: Screenshot 2020-05-20 15.45.44.png]
Also, stepping through the code confirms that a transaction is properly committed.
Adarsh Ramamurthy
@radarsh
@mp911de It only starts showing the leak after stopping Redis. We should stop Redis and continue hitting the API. It will not immediately show the heap increase, but if you look at the number of instances of TransactionalCommand, you will see that it only keeps increasing from that point.
Mark Paluch
@mp911de
Okay, I missed this aspect.
In that regard, it's expected behavior: Lettuce auto-reconnects and queues commands until Redis is back up.
You can control this behavior through ClientOptions (specifically disconnectedBehavior and requestQueueSize); see https://lettuce.io/core/release/reference/index.html#client-options
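For reference, a minimal sketch of those two options on a plain RedisClient (the URI and queue size are placeholder values):

import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

class DisconnectedBehaviorSketch {

    static RedisClient createClient() {

        ClientOptions options = ClientOptions.builder()
                // Fail fast instead of buffering commands while Redis is down.
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                // Bound how many commands may queue up per connection.
                .requestQueueSize(1_000)
                .build();

        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder URI
        client.setOptions(options);
        return client;
    }
}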
Adarsh Ramamurthy
@radarsh
I have already configured these options in my example - https://github.com/radarsh/lettuce-oom/blob/master/src/main/java/com/adarshr/lettuce/oom/Application.java#L46-L52 - but despite the queue size being small, it still causes the leak and crash.
Mark Paluch
@mp911de
Okay, let me retest with these assumptions again.
Andrew Winterman
@AWinterman
Hey folks, is there documentation on the concurrency characteristics of the async and reactive connections?
I assume there's a thread pool in there somewhere, but I'm not finding it easy to locate.
I'm specifically looking at Redis pub/sub and trying to figure out the context in which the listeners get run.
BharahthyKannan
@BharahthyKannan
Hi, I am using Spring Data Redis with Lettuce. I have set up a connection pool and enabled connection validation. The app runs in Docker, and for some reason the connection between the app and Redis gets closed, possibly due to no traffic for some time, so I set validateConnection to true to avoid this. But I landed in another exception: org.springframework.data.redis.connection.PoolException: Returned connection io.lettuce.core.StatefulRedisConnectionImpl@29cbe14a was either previously returned or does not belong to this connection provider. Any pointers?
BharahthyKannan
@BharahthyKannan
Just for reference, this is the configuration I am using:
@Bean
public LettuceConnectionFactory redisConnectionFactory() {
    RedisStandaloneConfiguration redisConfiguration = new RedisStandaloneConfiguration();
    redisConfiguration.setHostName(hostName);
    redisConfiguration.setPort(port);
    redisConfiguration.setPassword(password);
    redisConfiguration.setDatabase(databaseIndex);

    // Connection pool configuration
    GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
    poolConfig.setMaxIdle(MAX_IDLE);
    poolConfig.setMaxTotal(MAX_TOTAL);

    LettuceClientConfiguration lettuceClientConfiguration = LettucePoolingClientConfiguration.builder()
            .poolConfig(poolConfig)
            .clientOptions(clientOptions())
            .clientResources(clientResources())
            .build();

    LettuceConnectionFactory lettuceConnectionFactory = new LettuceConnectionFactory(redisConfiguration, lettuceClientConfiguration);
    lettuceConnectionFactory.setValidateConnection(true);
    lettuceConnectionFactory.afterPropertiesSet();
    return lettuceConnectionFactory;
}

@Bean
public ClientOptions clientOptions() {
    return ClientOptions.builder()
            .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
            .pingBeforeActivateConnection(true)
            .autoReconnect(true)
            .build();
}

@Bean(destroyMethod = "shutdown")
ClientResources clientResources() {
    return DefaultClientResources.create();
}
Mark Paluch
@mp911de
Do you have a full stack trace, @BharahthyKannan? This exception indicates a potential bug in Spring Data Redis.
BharahthyKannan
@BharahthyKannan
@mp911de I don't have the full trace, but this link explains the problem (please see the summary part in particular): https://ddcode.net/2019/06/21/case37-talk-about-lettuces-sharenativeconnection-parameter/
黄大仙
@huangluyu
How can I use redisTemplate.opsForList().leftPop(redisQueueName, 0, TimeUnit.SECONDS) to block the pop without a time limit? I always get the exception: Caused by: io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
Mark Paluch
@mp911de
Please either increase the global command timeout or configure TimeoutOptions and a TimeoutSource through ClientOptions.
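A sketch of the TimeoutOptions route, assuming the TimeoutSource return value is interpreted in milliseconds (the default unit); the chosen durations are placeholders:

import java.util.concurrent.TimeUnit;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.protocol.CommandType;
import io.lettuce.core.protocol.RedisCommand;

class BlockingPopTimeoutSketch {

    static ClientOptions clientOptions() {

        TimeoutOptions timeoutOptions = TimeoutOptions.builder()
                .timeoutCommands(true) // enable per-command timeouts
                .timeoutSource(new TimeoutOptions.TimeoutSource() {
                    @Override
                    public long getTimeout(RedisCommand<?, ?, ?> command) {
                        // Give blocking pops a far larger budget than everything else
                        // (values assumed to be milliseconds, the default time unit).
                        if (command.getType() == CommandType.BLPOP) {
                            return TimeUnit.HOURS.toMillis(1);
                        }
                        return TimeUnit.MINUTES.toMillis(1);
                    }
                })
                .build();

        return ClientOptions.builder()
                .timeoutOptions(timeoutOptions)
                .build();
    }
}

The resulting ClientOptions can then be plugged into the LettuceClientConfiguration, as in the clientOptions() bean shown earlier in this conversation.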
BharahthyKannan
@BharahthyKannan
@mp911de I have currently removed the pooling and am using only the connection-validation parameter set to true, as I am not really doing any blocking transactions, so in theory this issue should not occur. But can you give me some pointers on how to tackle this issue?
黄大仙
@huangluyu

I see BLPOP command is written with 0

This is correct. TimeoutSource considers only the timeout for command synchronization (how long to wait for a command to complete)

I tried to set TimeoutOptions but it also didn't work, and its timeoutCommands is disabled by default. But I found this reply from you. Maybe I should learn the best practices for RedisTemplate and Lettuce first. Thanks.

vikash1111
@vikash1111
Could someone point me to a link which advises on how to roll out AUTH or change the AUTH password for a production Redis cluster? If we have to handle the failure scenarios arising from the rollout (such as (error) NOAUTH Authentication required. or a wrong password) at the client side, does Lettuce provide support for that?
rsathyadav
@rsathyadav

Hi, I am observing a strange issue with "RedisClusterClient" and connection events.

My Redis server has an idle timeout of 2 minutes. As I mentioned, I am using "RedisClusterClient". I have also added an eventBus listener. In my sample code, I am doing a keepalive every 30 seconds (by writing to a dummy entry). But despite this, I receive a "ConnectionDeactivated" event every 2 minutes (followed by a connection-activated event). It doesn't matter whether I use the async() or sync() APIs; I receive the disconnect events every 2 minutes.

But if I use "RedisClient", i.e. the regular, non-cluster client, I do not see this behaviour - I see no disconnect events, since I am doing a keepalive every 30 seconds.
Sample code:

        RedisClusterClient redisClient = RedisClusterClient.create("redis://localhost:6379");
        StatefulRedisClusterConnection<String, String> connection = redisClient.connect();
        RedisAdvancedClusterAsyncCommands<String, String> handle = connection.async();

        TimerTask batchWriterTask = new TimerTask() {
            @Override
            public void run() {
                handle.set("keepalive", "keepalive");
                System.out.println("keepalive");
            }
        };
        new Timer("keepalive").scheduleAtFixedRate(batchWriterTask, 0, 30000);
        redisClient.getResources().eventBus().get().subscribe((event) -> {
            System.out.println(event);
        });
        while (true); // keep the JVM alive for the experiment

It would be great if I get some clarification. Thank you!

Mark Paluch
@mp911de
For some reason, your TCP connection gets disconnected every two minutes. You can enable debug logging (category io.lettuce.core) to log out all I/O events
rsathyadav
@rsathyadav
Logs around the time of disconnection:
2020-05-29 05:59:29.572 PDT [keepalive] INFO c.a.a.a.LettuceSampleCluster - keepalive
.
.
.
.
2020-05-29 05:59:59.613 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.RedisStateMachine - Decoded ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=OK, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5], empty stack: true
2020-05-29 06:00:29.570 PDT [keepalive] DEBUG i.l.core.protocol.DefaultEndpoint - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, epid=0x3] write() writeAndFlush command ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5]
2020-05-29 06:00:29.571 PDT [keepalive] DEBUG i.l.core.protocol.DefaultEndpoint - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, epid=0x3] write() done
2020-05-29 06:00:29.571 PDT [keepalive] INFO c.a.a.a.LettuceSampleCluster - keepalive
2020-05-29 06:00:29.571 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.CommandHandler - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, chid=0x3] write(ctx, ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5], promise)
2020-05-29 06:00:29.572 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.CommandEncoder - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379] writing command ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5]
2020-05-29 06:00:29.612 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.CommandHandler - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, chid=0x3] Received: 5 bytes, 1 commands in the stack
2020-05-29 06:00:29.612 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.CommandHandler - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, chid=0x3] Stack contains: 1 commands
2020-05-29 06:00:29.612 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.RedisStateMachine - Decode ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5]
2020-05-29 06:00:29.612 PDT [lettuce-nioEventLoop-4-3] DEBUG i.l.core.protocol.RedisStateMachine - Decoded ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=OK, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5], empty stack: true
DisconnectedEvent [/10.1.4.105:55396 -> /10.164.0.51:6379]
2020-05-29 06:00:30.042 PDT [lettuce-nioEventLoop-4-2] DEBUG i.l.core.protocol.ConnectionWatchdog - [channel=0xafd7dac9, /10.1.4.105:55396 -> /10.164.0.51:6379, last known addr=/10.164.0.51:6379] channelInactive()
2020-05-29 06:00:30.043 PDT [lettuce-nioEventLoop-4-2] DEBUG i.l.core.protocol.ConnectionWatchdog - [channel=0xafd7dac9, /10.1.4.105:55396 -> /10.164.0.51:6379, last known addr=/10.164.0.51:6379] scheduleReconnect()
2020-05-29 06:00:30.043 PDT [lettuce-nioEventLoop-4-2] DEBUG i.l.core.protocol.ConnectionWatchdog - [channel=0xafd7dac9, /10.1.4.105:55396 -> /10.164.0.51:6379, last known addr=/10.164.0.51:6379] Reconnect attempt 1, delay 1ms
ConnectionDeactivatedEvent [/10.1.4.105:55396 -> /10.164.0.51:6379]
2020-05-29 06:00:30.152 PDT [lettuce-eventExecutorLoop-1-2] INFO i.l.core.protocol.ConnectionWatchdog - Reconnecting, last destination was /10.164.0.51:6379
2020-05-29 06:00:30.155 PDT [lettuce-eventExecutorLoop-1-2] DEBUG i.l.c.c.RoundRobinSocketAddressSupplier - Resolved SocketAddress 10.164.0.51:6379 using for Cluster node f89444b6339b454aaaa99f2fb6405ddb01889ac4
2020-05-29 06:00:30.155 PDT [lettuce-eventExecutorLoop-1-2] DEBUG i.l.c.protocol.ReconnectionHandler - Reconnecting to Redis at 10.164.0.51:6379

2020-05-29 06:00:30.174 PDT [lettuce-nioEventLoop-4-4] DEBUG i.l.core.protocol.CommandHandler - [channel=0x49aa6237, [id: 0x49aa6237] (inactive), chid=0x4] channelRegistered()
ConnectedEvent [/10.1.4.105:55426 -> /10.164.0.51:6379]
2020-05-29 06:00:30.216 PDT [lettuce-nioEventLoop-4-4] DEBUG i.l.core.protocol.CommandHandler - [channel=0x49aa6237, /10.1.4.105:55426 -> /10.164.0.51:6379, chid=0x4] channelActive()
2020-05-29 06:00:30.216 PDT [lettuce-nioEventLoop-4-4] DEBUG i.l.core.protocol.ConnectionWatchdog - [channel=0x49aa6237, /10.1.4.105:55426 -> /10.164.0.51:6379, last known addr=/10.164.0.51:6379] channelActive()
2020-05-29 06:00:30.216 PDT [lettuce-nioEventLoop-4-4] DEBUG i.l.core.protocol.CommandHandler - [channel=0x49aa6237, /10.1.4.105:55426 -> /10.164.0.51:6379, chid=0x4] channelActive() done
2020-05-29 06:00:30.217 PDT [lettuce-nioEventLoop-4-4] INFO i.l.c.protocol.ReconnectionHandler - Reconnected to 10.164.0.51:6379, Channel channel=0x49aa6237, /10.1.4.105:55426 -> /10.164.0.51:6379
ConnectionActivatedEvent [/10.1.4.105:55426 -> /10.164.0.51:6379]
2020-05-29 06:00:30.217 PDT [lettuce-nioEventLoop-4-4] DEBUG i.l.core.protocol.ConnectionWatchdog - [channel=0x49aa6237, /10.1.4.105:55426 -> /10.164.0.51:6379, last known addr=/10.164.0.51:6379] userEventTriggered(ctx, io.lettuce.core.ConnectionEvents$Activated@20491923)
2020-05-29 06:00:59.570 PDT [keepalive] DEBUG i.l.core.protocol.DefaultEndpoint - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, epid=0x3] write() writeAndFlush command ClusterCommand [command=AsyncCommand [type=SET, output=StatusOutput [output=null, error='null'], commandType=io.lettuce.core.protocol.Command], redirections=0, maxRedirections=5]
2020-05-29 06:00:59.571 PDT [keepalive] DEBUG i.l.core.protocol.DefaultEndpoint - [channel=0xe23f8cee, /10.1.4.105:55398 -> /10.164.0.51:6379, epid=0x3] write() done
2020-05-29 06:00:59.571 PDT [keepalive] INFO c.a.a.a.LettuceSampleCluster - keepalive

.
.
Just for an experiment, I configured the Redis server timeout to be 0 and observed no disconnections (as expected, I believe).
And, just to reiterate, I don't see the 2-minute disconnects with "RedisClient" - this is seen only with "RedisClusterClient".

Mark Paluch
@mp911de
Thanks a lot. RedisClusterClient creates multiple connections (a default connection for key-less commands and a connection for each shard for command routing). So even when sending a keep-alive on one connection, the other connection(s) in RedisClusterClient do not get kept alive.
Therefore Redis will disconnect you after the inactivity timeout, so I'd suggest disabling the idle disconnect on Redis.
rsathyadav
@rsathyadav
I am using the Redis event listener to identify issues with my connection. Could you please suggest how I can detect that? Say the server goes down temporarily: if the background connections are getting disconnected and I receive events for those too, I am not sure how to differentiate between these scenarios.
1 reply
Wei Song
@TalkWIthKeyboard
Hi, I was looking at the debug messages for lettuce-core and noticed a problem. Many of my command outputs are null, e.g. ZADD, ZREMRANGEBYSCORE, ... Message: Decode LatencyMeteredCommand [type=ZREMRANGEBYSCORE, output=IntegerOutput [output=null, error='null'], commandType=io.lettuce.core.RedisPublisher$SubscriptionCommand]
vikash1111
@vikash1111

Could someone point me to a link which advises on how to roll out AUTH or change the AUTH password for a production Redis cluster? If we have to handle the failure scenarios arising from the rollout (such as (error) NOAUTH Authentication required. or a wrong password) at the client side, does Lettuce provide support for that?

Could someone give their guidance on this?