I am facing an issue with my Redis connectivity.
I am using Lettuce 5.1.2 in my project. The issue is: when I am connected to Redis and it gets disconnected because of some connectivity issue, I can see that my Redis client is able to make the connection again, i.e. it reconnects.
But when I then send a `PING` command it times out, while read commands work fine.

```java
ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder()
        .pingBeforeActivateConnection(true)
        .autoReconnect(true)
        .build();
```

These are the client options I am using. Can someone please help?
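For reference, a slightly fuller configuration sketch in the same spirit, assuming Lettuce 5.1+ APIs. The timeout and keep-alive values are placeholders to tune for your environment; `TimeoutOptions` makes individual commands fail with a timeout instead of hanging, which can help diagnose whether the reconnected link is actually usable:

```java
import java.time.Duration;

import io.lettuce.core.SocketOptions;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.cluster.ClusterClientOptions;

public class ClusterOptionsExample {

    public static void main(String[] args) {
        // TCP-level connect timeout and keep-alive; values are placeholders.
        SocketOptions socketOptions = SocketOptions.builder()
                .connectTimeout(Duration.ofSeconds(5))
                .keepAlive(true)
                .build();

        ClusterClientOptions options = ClusterClientOptions.builder()
                .pingBeforeActivateConnection(true)
                .autoReconnect(true)
                .socketOptions(socketOptions)
                // Per-command timeout so stuck commands fail instead of hanging.
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofSeconds(10)))
                .build();

        System.out.println(options.isAutoReconnect());
    }
}
```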
```
java.lang.ClassCastException: io.netty.channel.epoll.EpollEventLoopGroup cannot be cast to io.netty.channel.EventLoopGroup
	at io.lettuce.core.resource.DefaultEventLoopGroupProvider.getOrCreate(DefaultEventLoopGroupProvider.java:137)
	at io.lettuce.core.resource.DefaultEventLoopGroupProvider.allocate(DefaultEventLoopGroupProvider.java:83)
	at io.lettuce.core.AbstractRedisClient.getEventLoopGroup(AbstractRedisClient.java:250)
	at io.lettuce.core.AbstractRedisClient.channelType(AbstractRedisClient.java:236)
	at io.lettuce.core.cluster.RedisClusterClient.createConnectionBuilder(RedisClusterClient.java:795)
	at io.lettuce.core.cluster.RedisClusterClient.connectStatefulAsync(RedisClusterClient.java:764)
	at io.lettuce.core.cluster.RedisClusterClient.connectToNodeAsync(RedisClusterClient.java:535)
	at io.lettuce.core.cluster.RedisClusterClient$NodeConnectionFactoryImpl.connectToNodeAsync(RedisClusterClient.java:1183)
	at io.lettuce.core.cluster.topology.DefaultClusterTopologyRefresh.openConnections(DefaultClusterTopologyRefresh.java:302)
	at io.lettuce.core.cluster.topology.DefaultClusterTopologyRefresh.loadViews(DefaultClusterTopologyRefresh.java:81)
	at io.lettuce.core.cluster.RedisClusterClient.fetchPartitions(RedisClusterClient.java:936)
	at io.lettuce.core.cluster.RedisClusterClient.loadPartitionsAsync(RedisClusterClient.java:905)
	at io.lettuce.core.cluster.RedisClusterClient.initializePartitions(RedisClusterClient.java:860)
	at io.lettuce.core.cluster.RedisClusterClient.assertInitialPartitions(RedisClusterClient.java:865)
	at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:387)
	at io.lettuce.core.cluster.RedisClusterClient.connect(RedisClusterClient.java:364)
```
```java
RedisClusterClient redisClient = RedisClusterClient.create(uris);
StatefulRedisClusterConnection<String, String> connection = redisClient.connect();
```

The error happens when I get a connection.
`mget`? I'm getting ~700 ms with very low call volume, and it shoots up to 25,000 ms with high volume. I feel that there must be something wrong. I'm using ElastiCache with cluster mode and 8 nodes. I got the metrics from `CommandLatencyEvent`. Is there a way to monitor Netty with Lettuce, e.g. the thread pool? Any idea how to troubleshoot is appreciated!
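A sketch of how both pieces can fit together, assuming Lettuce 5.x: thread pool sizes are configured explicitly via `ClientResources` (the sizes and the URI below are placeholders), and `CommandLatencyEvent`s are consumed from the client's event bus:

```java
import io.lettuce.core.RedisURI;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.event.metrics.CommandLatencyEvent;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class LatencyMonitoringExample {

    public static void main(String[] args) {
        // Explicit pool sizes so they can be inspected and tuned; the default
        // for both is the number of available processors.
        ClientResources resources = DefaultClientResources.builder()
                .ioThreadPoolSize(8)
                .computationThreadPoolSize(8)
                .build();

        RedisClusterClient client = RedisClusterClient.create(resources,
                RedisURI.create("redis://localhost:6379")); // placeholder URI

        // CommandLatencyEvents are published periodically on the event bus.
        resources.eventBus().get()
                .filter(event -> event instanceof CommandLatencyEvent)
                .subscribe(event -> System.out.println(
                        ((CommandLatencyEvent) event).getLatencies()));
    }
}
```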
`thenApplyAsync` helps a lot. I'll also check out the codec. Our payload size is 1 KB. Do you think an average latency of around 1,000 ms is expected for an `mget` of 100 items × 1 KB, or can it be reduced?
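For illustration, a minimal sketch of the `thenApplyAsync` pattern discussed above, assuming Lettuce 5.x. The idea is that `mget` returns a future of `KeyValue`s, and hopping off the completing thread with `thenApplyAsync` keeps post-processing work off the netty I/O threads (`fetch` is a hypothetical helper name):

```java
import java.util.Map;
import java.util.concurrent.CompletionStage;
import java.util.stream.Collectors;

import io.lettuce.core.KeyValue;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class AsyncMgetExample {

    // mget resolves to a List<KeyValue<K, V>>; doing the collection work in
    // thenApplyAsync moves it to the default async executor instead of the
    // I/O thread that completed the future.
    static CompletionStage<Map<String, String>> fetch(
            StatefulRedisClusterConnection<String, String> connection, String... keys) {

        return connection.async().mget(keys)
                .thenApplyAsync(values -> values.stream()
                        .filter(KeyValue::hasValue)
                        .collect(Collectors.toMap(KeyValue::getKey, KeyValue::getValue)));
    }
}
```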
a `PING` command to be sent over a Pub/Sub channel. I believe you remember the details, so I won't dive deep here and will just put a reference to your answer saying that Redis 6 and the new RESP3 protocol will solve this issue. The question now is whether you have plans to implement it? We can't rely on the `keepalive` option for now and need to go with a `PING` command sent in the channel.
`io.lettuce.core.RedisException: Command PING not allowed while subscribed. Allowed commands are: [PSUBSCRIBE, QUIT, PUNSUBSCRIBE, SUBSCRIBE, UNSUBSCRIBE]` as Lettuce prevents us from sending `PING`. Your commit that checks this is still in the master codebase.
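Until RESP3 support lands, one common workaround is to health-check over a second, non-subscribed connection, since the `PING` restriction only applies to a connection that is in subscribed state. A sketch, assuming Lettuce 5.x (the URI, channel name, and 30-second interval are placeholders):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisException;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PubSubHealthCheck {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379"); // placeholder

        StatefulRedisPubSubConnection<String, String> pubSub = client.connectPubSub();
        pubSub.sync().subscribe("my-channel");

        // PING is rejected on a subscribed RESP2 connection, so run the
        // health check on a separate, plain connection instead.
        StatefulRedisConnection<String, String> health = client.connect();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                health.sync().ping();
            } catch (RedisException e) {
                // Link looks dead: alert, or close and re-establish connections here.
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}
```

This doesn't detect a half-dead Pub/Sub socket directly, but it does tell you whether the server is reachable without violating the subscribed-connection command restriction.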
`XREAD` has two operating modes: immediate return (the default) and blocking (using the `BLOCK` option). See https://redis.io/commands/xread. There's no such thing as pushing stream messages, because that's not how Redis Streams work; it's pull-oriented.
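Both modes side by side, as a sketch against the Lettuce 5.x sync API (the URI, stream name, and 5-second block time are placeholders):

```java
import java.util.List;

import io.lettuce.core.RedisClient;
import io.lettuce.core.StreamMessage;
import io.lettuce.core.XReadArgs;
import io.lettuce.core.api.sync.RedisCommands;

public class XreadModes {

    public static void main(String[] args) {
        RedisCommands<String, String> commands =
                RedisClient.create("redis://localhost:6379").connect().sync(); // placeholder

        // Immediate mode: returns what is available past the offset, possibly nothing.
        List<StreamMessage<String, String>> immediate =
                commands.xread(XReadArgs.StreamOffset.latest("my-stream"));

        // Blocking mode: waits up to 5000 ms for a new entry to arrive.
        List<StreamMessage<String, String>> blocking =
                commands.xread(XReadArgs.Builder.block(5000),
                        XReadArgs.StreamOffset.latest("my-stream"));
    }
}
```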
`Channel`, respectively its pipeline. All threading (including binding the I/O thread to the channel) is netty business that we don't interfere with.