Zifeng Yang
@fallingyang
I mean renaming del to mydel.
I use @Command, but it doesn't work.
Zifeng Yang
@fallingyang
io.lettuce.core.dynamic.CommandMethodSyntaxException: Command MIITDEL does not exist Offending method: public abstract java.lang.Integer com.dky.base.redis.MixedCommands.del(java.lang.String)
    at io.lettuce.core.dynamic.DefaultCommandMethodVerifier.syntaxException(DefaultCommandMethodVerifier.java:165)
    at io.lettuce.core.dynamic.DefaultCommandMethodVerifier.lambda$validate$0(DefaultCommandMethodVerifier.java:70)
    at java.util.Optional.orElseThrow(Optional.java:290)
    at io.lettuce.core.dynamic.DefaultCommandMethodVerifier.validate(DefaultCommandMethodVerifier.java:69)
    at io.lettuce.core.dynamic.ExecutableCommandLookupStrategySupport$DefaultCommandFactoryResolver.resolveRedisCommandFactory(ExecutableCommandLookupStrategySupport.java:74)
    at io.lettuce.core.dynamic.ExecutableCommandLookupStrategySupport.resolveCommandFactory(ExecutableCommandLookupStrategySupport.java:48)
    at io.lettuce.core.dynamic.AsyncExecutableCommandLookupStrategy.resolveCommandMethod(AsyncExecutableCommandLookupStrategy.java:47)
    at io.lettuce.core.dynamic.RedisCommandFactory$CompositeCommandLookupStrategy.resolveCommandMethod(RedisCommandFactory.java:273)
    at io.lettuce.core.dynamic.RedisCommandFactory$BatchAwareCommandLookupStrategy.resolveCommandMethod(RedisCommandFactory.java:321)
    at io.lettuce.core.dynamic.RedisCommandFactory$CommandFactoryExecutorMethodInterceptor.<init>(RedisCommandFactory.java:212)
    at io.lettuce.core.dynamic.RedisCommandFactory.getCommands(RedisCommandFactory.java:192)
    at com.dky.base.redis.CustomRedisClient.customCommands(CustomRedisClient.java:15)
    at com.dky.base.redis.CustomRedisClient.main(CustomRedisClient.java:19)
Mark Paluch
@mp911de
Care to post the interface definition?
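For reference, the dynamic commands API derives the Redis command from the method name unless the @Command annotation overrides it. A minimal sketch of such an interface (the interface name and the DEL mapping are illustrative, not the user's actual code):

```java
import io.lettuce.core.dynamic.Commands;
import io.lettuce.core.dynamic.annotation.Command;

public interface MixedCommands extends Commands {

    // Without the annotation, the command would be derived from the
    // method name ("MYDEL"), which the server does not know.
    @Command("DEL")
    Long mydel(String key);
}
```

The verifier rejects annotation values that do not match a command known to the server, which is what the CommandMethodSyntaxException above reports.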
Shivam Sharma
@svmsharma20
Hi all, I have some queries; it would be helpful if someone could answer them for me:
1) What extra topology changes does the enablePeriodicRefresh option cover that are not covered by adaptive refresh?
2) For the REPLICA_PREFERRED setting, on what basis does Lettuce decide which replica to read from if the master has more than one replica? Does it take latency into account?
3) Regarding validateClusterNodeMembership in the client options, what does Lettuce validate when this setting is enabled?
Mark Paluch
@mp911de
Periodic refresh periodically checks whether the cluster topology has changed. This isn't ideal, as the topology may change between individual refresh runs. Adaptive refresh reacts to disconnects and MOVED redirects, which can give a more reasonable performance and response profile.

REPLICA_PREFERRED: See Javadoc:

Setting to read preferred from a replica and fall back to master if no replica is available

In general, Lettuce orders the list of candidate nodes by latency to use the fastest node.
validateClusterNodeMembership checks whether the requested target node (from ASK and MOVED redirections) is known in the topology. This setting is to avoid connections to nodes that are not part of the cluster.
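For question 2, the read preference is set on the cluster connection. A minimal sketch, assuming a reachable cluster node at a placeholder endpoint:

```java
import io.lettuce.core.ReadFrom;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class ReadFromExample {

    public static void main(String[] args) {
        // Placeholder endpoint; replace with your cluster node.
        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> connection = client.connect();

        // Prefer reads from replicas; fall back to the master
        // when no replica is available.
        connection.setReadFrom(ReadFrom.REPLICA_PREFERRED);

        System.out.println(connection.sync().get("some-key"));

        connection.close();
        client.shutdown();
    }
}
```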
ankitjhil
@ankitjhil
Hi all, we have recently started seeing many RedisCommandTimeoutExceptions, but only on a few machines; the other machines are not throwing any such errors. We also observed that commands are stacking up waiting to be processed by the EventLoop threads. Is there any way to increase the number of EventLoop threads and open more connections to Redis? We suspect our load is not being handled fairly with the current number of EventLoop threads.
Mark Paluch
@mp911de
netty assigns a single thread per channel. Please check, using a profiler, why the load is so high that commands pile up.
ankitjhil
@ankitjhil
@mp911de Is there any possibility to increase the threads per channel? Is there any configuration which handles it?
Mark Paluch
@mp911de
While netty makes it possible to run each ChannelHandler on its own thread, that isn't possible with Lettuce. If you see such a need, then your channel thread load is too high. In that case, offload the work yourself (the work that originates from future callbacks or serialization) onto your own threading infrastructure.
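A sketch of that offloading, assuming a hypothetical expensive `process` step: RedisFuture implements CompletionStage, so the `…Async` variants accept your own executor and move the callback off the netty I/O thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.lettuce.core.RedisFuture;
import io.lettuce.core.api.async.RedisAsyncCommands;

public class OffloadExample {

    // Worker pool owned by the application, not by Lettuce/netty.
    static final ExecutorService WORKERS = Executors.newFixedThreadPool(8);

    static void fetch(RedisAsyncCommands<String, String> async) {
        RedisFuture<String> future = async.get("some-key");

        // thenAcceptAsync runs the callback on WORKERS instead of
        // the channel's event loop thread.
        future.thenAcceptAsync(OffloadExample::process, WORKERS);
    }

    static void process(String value) {
        // expensive deserialization / business logic here
    }
}
```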
ankitjhil
@ankitjhil
ClientResources res = DefaultClientResources.builder()
        .ioThreadPoolSize(4)
        .computationThreadPoolSize(4)
        .build();

@mp911de I am planning to try the load run with ioThreadPoolSize and computationThreadPoolSize increased to 12 each. Currently the number of processors reported is 6, so Lettuce creates only 6 threads each. I'm not sure whether it will have any positive impact on the system.
Mark Paluch
@mp911de
The I/O threadpool size should roughly match your CPU core count. Computation pools are used for cluster topology refresh, command timeout schedulers, and reactive signal dispatch, if enabled.
阿拉斯加大闸蟹
@singgel

Hello all, I need some help. Our service uses Lettuce 5.1.6, with a total of 22 docker nodes deployed.
Whenever the service is deployed, several docker nodes report ERROR: READONLY You can't write against a read only slave.
After restarting the problematic docker nodes, the ERROR no longer appears.

  • redis server configuration:

    8 master 8 slave
    stop-writes-on-bgsave-error no
    slave-serve-stale-data yes
    slave-read-only yes
    cluster-enabled yes
    cluster-config-file "/data/server/redis-cluster/{port}/conf/node.conf"

  • lettuce configuration:

    ClientResources res = DefaultClientResources.builder()
          .commandLatencyPublisherOptions(
                  DefaultEventPublisherOptions.builder()
                          .eventEmitInterval(Duration.ofSeconds(5))
                          .build()
          )
          .build();
    redisClusterClient = RedisClusterClient.create(res, REDIS_CLUSTER_URI);
    redisClusterClient.setOptions(
          ClusterClientOptions.builder()
                  .maxRedirects(99)
                  .socketOptions(SocketOptions.builder().keepAlive(true).build())
                  .topologyRefreshOptions(
                          ClusterTopologyRefreshOptions.builder()
                                  .enableAllAdaptiveRefreshTriggers()
                                  .build())
                  .build());
    RedisAdvancedClusterCommands<String, String> command = redisClusterClient.connect().sync();
    command.setex("some key", 18000, "some value");
  • The Exception that appears:

    io.lettuce.core.RedisCommandExecutionException: READONLY You can't write against a read only slave.
      at io.lettuce.core.ExceptionFactory.createExecutionException(ExceptionFactory.java:135)
      at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:122)
      at io.lettuce.core.cluster.ClusterFutureSyncInvocationHandler.handleInvocation(ClusterFutureSyncInvocationHandler.java:123)
      at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:80)
      at com.sun.proxy.$Proxy135.setex(Unknown Source)
      at com.xueqiu.infra.redis4.RedisClusterImpl.lambda$setex$164(RedisClusterImpl.java:1489)
      at com.xueqiu.infra.redis4.RedisClusterImpl$$Lambda$1422/1017847781.apply(Unknown Source)
      at com.xueqiu.infra.redis4.RedisClusterImpl.execute(RedisClusterImpl.java:526)
      at com.xueqiu.infra.redis4.RedisClusterImpl.executeTotal(RedisClusterImpl.java:491)
      at com.xueqiu.infra.redis4.RedisClusterImpl.setex(RedisClusterImpl.java:1489)
The Redis cluster version is 4.0.10, and the Redis server performed no failover during this period.
Mark Paluch
@mp911de
It would make sense to enable periodic refresh. READONLY errors aren't considered yet in the adaptive refresh triggers; however, they would make a lot of sense given the current scenario.
I created lettuce-io/lettuce-core#1365 to improve adaptive refresh.
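A sketch of adding periodic refresh to the configuration shown above (the 30-second interval is an assumption to tune, not a recommendation):

```java
import java.time.Duration;

import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import io.lettuce.core.cluster.RedisClusterClient;

public class RefreshExample {

    public static void main(String[] args) {
        // Placeholder endpoint; replace with your cluster node.
        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7000");

        client.setOptions(ClusterClientOptions.builder()
                .topologyRefreshOptions(ClusterTopologyRefreshOptions.builder()
                        .enablePeriodicRefresh(Duration.ofSeconds(30)) // poll the topology regularly
                        .enableAllAdaptiveRefreshTriggers()            // still react to MOVED/disconnects
                        .build())
                .build());
    }
}
```

With both enabled, stale views that adaptive triggers miss (such as the READONLY case above) are picked up by the next periodic run.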
Martin Misiarz
@little-fish
Hello.
I have a question about custom commands in Lettuce. RedisCommandFactory provides an abstraction over custom commands, and I guess this factory should be reused. This works well with a single connection. But what about pooled connections? Is it OK, in terms of usability and performance, to create a factory every time I obtain a connection from a pool?
Thank you.
Mark Paluch
@mp911de
Lettuce connections are thread-safe for general usage. However, if you share a connection across multiple threads, you should not use transactions or blocking commands (BLPOP and such), as other threads can interfere with transactional state or see delays caused by blocking commands.
In these cases we advise to use pooling.
Martin Misiarz
@little-fish
I do understand pooling in Lettuce in general. In our project we need a pool because in one thread we want to send commands in batches (we are processing a huge number of keys, so we need to disable auto flush), and in another thread we want to process keys the regular way (with auto flush enabled). My question is about the creation of RedisCommandFactory: whether it is OK to create a new factory every time a connection is acquired from the pool.
The reason I am thinking about a custom Commands interface is exactly the same as in lettuce-io/lettuce-core#1080 - I need a different RedisCodec than the one bound to an acquired connection.
Mark Paluch
@mp911de
That's an interesting arrangement. For now, RedisCommandFactory operates on top of a single connection. However, RedisCommandFactory is pretty flexible regarding codecs: it allows registering your own codec types, so you can use multiple codecs within a single interface.
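A sketch of registering additional codecs with the factory (the commented-out `MixedCommands` interface is a hypothetical Commands interface, and the endpoint is a placeholder):

```java
import java.util.Arrays;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.codec.ByteArrayCodec;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.dynamic.RedisCommandFactory;

public class FactoryExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();

        // Register extra codecs; command methods resolve the codec
        // matching their parameter and return types.
        RedisCommandFactory factory = new RedisCommandFactory(connection,
                Arrays.asList(new StringCodec(), new ByteArrayCodec()));

        // MixedCommands commands = factory.getCommands(MixedCommands.class);
    }
}
```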
Martin Misiarz
@little-fish
Thank you for your reply. To sum it up: it's fine to create a new factory every time I acquire a connection from a pool.
Shivam Sharma
@svmsharma20

validateClusterNodeMembership checks whether the requested target node (from ASK and MOVED redirections) is known in the topology. This setting is to avoid connections to nodes that are not part of the cluster.

In which scenario will Lettuce try to connect to a master that is not part of the cluster? Were there any such cases in the past? I mean, what problem does it solve at a very low level? And what is the recommended value?

Mark Paluch
@mp911de
When a Redis command gets a response MOVED <host>:<port>, Lettuce uses the hostname and port to determine the shard. If the shard is not listed in Partitions, the connection attempt gets rejected.
Dmitriy Neretin
@dimarzio

Hello! I have a question regarding the SCAN operation on a Redis cluster. I implemented a small lambda function which puts
some String entries into the Redis cluster (AWS ElastiCache with cluster mode enabled; the cluster has two shards, each shard
has one replica, so there are 4 machines in total). For debugging purposes (to see which entries are currently in the cache) I
also implemented a method which scans all keys in a particular namespace and then calls GET on each key to see the cached
value. The problem: SCAN returns only keys from one node, not the keys from the whole cluster. For example: there are 1000
entries in total. From the AWS metrics I know that each shard contains 500 entries, so my expectation is that SCAN also
returns 1000 keys. But every time I get only 500 results (entries from only one node). Below is the code that executes the
SCAN operation:

RedisURI redisURI = RedisURI.Builder.redis("my-aws-elasticache-cluster-config-endpoint", 6379).build();
RedisClusterClient clusterClient = RedisClusterClient.create(redisURI);
StatefulRedisClusterConnection<String, String> connection = clusterClient.connect();
RedisAdvancedClusterAsyncCommands<String, String> asynchCommands = connection.async();

RedisFuture<KeyScanCursor<String>> future =
        asynchCommands.scan(ScanArgs.Builder.matches("my_namespace:*").limit(10000L));

KeyScanCursor<String> keyScan = future.get();

The keyScan always contains only 500 entries, not all keys from the cluster.

Does anybody know what the problem is? Am I using the API in the wrong way? Maybe it is also important to know: I'm using
RedisAdvancedClusterCommands (the sync commands) to execute the SET operations.

Mark Paluch
@mp911de
SCAN works cursor-based, which means you need to keep issuing SCAN commands until the cursor is finished.
You might want to look into ScanIterator, which encapsulates cursor iteration for you.
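A minimal sketch of ScanIterator over the sync cluster API, assuming a placeholder endpoint and the namespace pattern from above:

```java
import io.lettuce.core.ScanArgs;
import io.lettuce.core.ScanIterator;
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class ScanExample {

    public static void main(String[] args) {
        RedisClusterClient client = RedisClusterClient.create("redis://localhost:7000");
        StatefulRedisClusterConnection<String, String> connection = client.connect();

        // ScanIterator drives the cursor to completion for you,
        // issuing as many SCAN calls as needed.
        ScanIterator<String> iterator = ScanIterator.scan(connection.sync(),
                ScanArgs.Builder.matches("my_namespace:*").limit(1000));

        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }

        connection.close();
        client.shutdown();
    }
}
```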
Dmitriy Neretin
@dimarzio
@mp911de Thank you. I will take a look at it. The thing I still don't really understand: to do a cluster SCAN I have to use the async API, but ScanIterator is based on the sync API.
Dmitriy Neretin
@dimarzio
@mp911de ScanIterator worked for me. Thank you again. What would actually be really great (for people who are not so experienced with Redis, like me :) ) is a description of how the same result can be achieved with the async API, as in my previous example. The Javadoc says the scan will be executed on the cluster, but straightforward usage of the API leads to the wrong result...
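With the async API, the cursor continuation has to be done by hand. A sketch, assuming a hypothetical `scanAll` helper that blocks on each future for simplicity:

```java
import java.util.ArrayList;
import java.util.List;

import io.lettuce.core.KeyScanCursor;
import io.lettuce.core.ScanArgs;
import io.lettuce.core.cluster.api.async.RedisAdvancedClusterAsyncCommands;

public class AsyncScanExample {

    // Collects all matching keys by re-issuing SCAN with the
    // returned cursor until it reports finished.
    static List<String> scanAll(RedisAdvancedClusterAsyncCommands<String, String> async)
            throws Exception {

        ScanArgs args = ScanArgs.Builder.matches("my_namespace:*").limit(1000);
        List<String> keys = new ArrayList<>();

        // A single SCAN returns only one page of results.
        KeyScanCursor<String> cursor = async.scan(args).get();
        keys.addAll(cursor.getKeys());

        while (!cursor.isFinished()) {
            cursor = async.scan(cursor, args).get();
            keys.addAll(cursor.getKeys());
        }
        return keys;
    }
}
```

Blocking with get() here defeats much of the async model; chaining the continuations non-blockingly is possible but considerably more involved, which is what ScanIterator spares you on the sync side.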
drykod
@drykod_twitter
Any suggestion/idea on how to diagnose "io.lettuce.core.RedisException: java.io.IOException: Connection reset by peer"? (Application running on Kubernetes and connecting to an external Redis Cluster)
drykod
@drykod_twitter
(prior to that, we see a lot of watchdog "reconnecting, last destination was..." messages [using lettuce core 5.3.2])
Mark Paluch
@mp911de
Connection reset by peer indicates that your remote side has closed the connection (the remote server has sent you a RST packet, which indicates an immediate drop of the connection rather than the usual handshake). Likely the server process has been killed.
drykod
@drykod_twitter

Thank you for your reply. It's quite strange to be honest. It works fine on a different environment even while doing a stress test.

The watchdog keeps reconnecting to the cluster nodes and commands get executed, but for some unknown reason, as you mentioned, there must be a RST packet sent from the remote server.

Redis cluster nodes are up and running. Not noticing any particular error.

Will try to diagnose, but currently have no real clue.

drykod
@drykod_twitter
We tried to debug today and we are seeing some RST packets coming from some Redis cluster master nodes. I wonder what the possible causes could be, as it is intermittent. (The idle timeout disconnection is set to 300s on the server side.) We will keep debugging, but I personally have no experience with this kind of issue and am still looking for clues if anyone can help.
Mark Paluch
@mp911de
You should ask this question in the Redis gitter channel or in the Redis issue tracker. Another reason could be a client connection limit in Redis or a file handle limit on the server side.
drykod
@drykod_twitter
Thank you again. I will try to debug a little more this week and maybe ask on the Redis issue tracker / Gitter. I came here first because our Redis servers have been up and running for over a year, with no changes there and working fine. Client connection and file limits are high enough, I believe, and we had no issues until now. The only thing we changed this time is on the client side (now using Lettuce and a different host), so my first thought was that the problem might be in the way we configure Lettuce / the client host.
Weichen Liu
@weicliu
Hello, a quick question regarding ScriptOutputType (https://lettuce.io/lettuce-4/release/api/com/lambdaworks/redis/ScriptOutputType.html): what does the VALUE option mean? I want to return a string from Lua to indicate which branch in the script was actually taken; should I use VALUE? STATUS also seems to work, but it seems intended for things like "OK"?
sushma
@SushmaReddyLoka
Hi, I am using the Lettuce client to connect to a Redis cluster, with a single connection. I want to understand how I should check, using the RedisClusterClient or connection objects, whether I am able to connect to Redis or not. I have also seen there is a ConnectionWatchdog which checks for connection inactivity and tries to reconnect; what is the trigger point for this watchdog?
Mark Paluch
@mp911de
VALUE applies the value decoding from your RedisCodec @weicliu
Lettuce's connect() fails if it cannot connect. So if you get back a connection, you can be sure that it is connected.
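A sketch of VALUE in use: the Lua return value is decoded through the connection's RedisCodec, so with a String codec a plain Lua string comes back as a String (the endpoint and script are placeholders):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.ScriptOutputType;
import io.lettuce.core.api.sync.RedisCommands;

public class EvalExample {

    public static void main(String[] args) {
        RedisCommands<String, String> commands =
                RedisClient.create("redis://localhost:6379").connect().sync();

        // VALUE decodes the reply with the connection's value codec,
        // so the branch marker arrives as a String.
        String branch = commands.eval("return 'branch-a'", ScriptOutputType.VALUE);
        System.out.println(branch);
    }
}
```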
sushma
@SushmaReddyLoka
Thanks @mp911de. Just in case I want to be extra safe, what API should I use to check the reachability of Redis?
Mark Paluch
@mp911de
What's wrong with creating a connection and handling exceptions?
sushma
@SushmaReddyLoka
Yes, I would create a connection and use it for executing commands. Which exceptions are you referring to: exceptions while creating the connection, or while executing commands?
Mark Paluch
@mp911de
The ones thrown when creating a connection.
Lettuce auto-reconnects disconnected connections and, by default, buffers commands that were issued during that period.
You can also configure Lettuce to reject commands when disconnected through ClientOptions
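A sketch of that ClientOptions setting, which makes commands fail fast instead of being buffered while disconnected:

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

public class RejectExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");

        // Commands issued while the connection is down are rejected
        // immediately rather than queued for replay after reconnect.
        client.setOptions(ClientOptions.builder()
                .disconnectedBehavior(ClientOptions.DisconnectedBehavior.REJECT_COMMANDS)
                .build());
    }
}
```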