Marco Cullen
@marcocullen
[hello!]
Sharath Kumar V
@vsharathis
Question: we are using the Lettuce Java client 6.1.5 with thenAsync, but we have seen that out of 500 requests we only get responses for 492; 8 responses are always missing, and no error is seen. In the Redis DB, however, we can see all 500 entries. Kindly share your comments on this: are we using the right method for the callback? Thank you.
future.getOperationsresults().getFutureResultString().thenAccept(value -> {
    if (value != null) {
        if (dataCompressionEnabled) {
            try {
                value = getUnCompressedValue(value);
            } catch (IOException e) {
                MSG_Exception.log("Error in uncompressing the data during GET: " + e);
            }
        }
        StringValue resultValue = new StringValue(value);
        try {
            mResultPI.putValue(workItem, resultValue);
        } catch (DispositionException e) {
            MSG_Exception.log(e);
        }
    }
    MSG_debug.log("resuming workitem after getting response from db: " + Thread.currentThread().getName());
    workItem.scheduleResume(future);
});
Sharath Kumar V
@vsharathis
Any comments, please?
Mark Paluch
@mp911de
It sounds a bit like congestion of threads. If you reduce your code to the bits that interact with Lettuce, then we might help. The code above isn't related to Lettuce usage at all.
Sharath Kumar V
@vsharathis
We are using the Lettuce-core Java client 6.1.5 and using various APIs with thenAsync to work asynchronously with the DB. We are seeing some inconsistency with these APIs: thenAccept is not invoked for some requests in load-testing scenarios.
For example, if we send 100,000 requests (DB writes/reads), only around 50-70% of the requests succeed end to end. The rest of the requests do not succeed because thenAccept is not getting invoked for them. When we tried .exceptionally to see what the reason was, we got the connection-close error as mentioned:
future.getOperationsresults().getFutureResultSelectedMapEntries().exceptionally(new Function<Throwable, List<KeyValue<String, String>>>() {
Daniel Wilkins
@tekktonic
Hey y'all, I'm going through some lettuce 4.5.0 code and I've got a question about List<V> mget(K... keys): what's the behavior if you request a key which doesn't exist?
That's not mentioned in the docs and I can't find the actual implementation in the source tarball.
Daniel Wilkins
@tekktonic
^ nvm this, found the answer in the assumed behaviour behind some tests.
sudhansagar
@sudhansagar:matrix.org
[m]
Hi, I am trying to use AWS ElastiCache Redis version 6.2 from my existing code using LettuceConnectionFactory, and I am getting the error below. Transit encryption is "Enabled" for this Redis instance; please help with what changes might be required w.r.t. this version:
2022-03-08 06:15:09,521 WARN com.lambdaworks.redis.cluster.ClusterTopologyRefresh Cannot connect to RedisURI [host='demo-redis-2-0002-001.demo-redis-2.jdjdkq.use2.cache.amazonaws.com', port=6379]
com.lambdaworks.redis.RedisCommandTimeoutException: Command timed out
at com.lambdaworks.redis.LettuceFutures.await(LettuceFutures.java:100)
1 reply
adreso
@adreso

Hello, I have a question: is there a way to connect two different clusters with one connection?
For example:
private static final RedisURI redis1 = RedisURI.create("redis://redis1.*.cache.amazonaws.com");
private static final RedisURI redis2 = RedisURI.create("redis://redis2.*.cache.amazonaws.com");
private static final RedisClient redisClient = RedisClient.create();

public static StatefulRedisMasterReplicaConnection<String, String> connection() {
    List<RedisURI> nodes = Arrays.asList(redis1, redis2);
    return MasterReplica.connect(redisClient, StringCodec.UTF8, nodes);
}

Every cluster has its own nodes, but I only want to write. I need to know if this is possible, or what the best practice is in these cases.
Thanks a lot in advance

Matthias Erche
@matterche
Hi, I'm trying to find out which exceptions used by Lettuce are safe to retry. I could not find anything in the docs so far.
Matthias Erche
@matterche
Can anyone help?
1 reply
anrajamani13
@anrajamani13
Hi,
I am trying to connect to a Redis cluster (comprising 5 nodes) that is TLS enabled.
May I know how to establish the connection factory with TLS enabled, providing the cert, key, and CA cert details from the Java client (using spring-data-redis and its starter project)?
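Not from the maintainers, but a common shape for this with spring-data-redis looks roughly like the sketch below. Lettuce's `SslOptions` can point at a keystore (client cert + key) and truststore (CA cert); the file paths, passwords, node addresses, and class name here are all placeholders, and builder methods may differ between versions, so treat this as a configuration sketch rather than a definitive recipe:

```java
import java.io.File;
import java.util.List;

import io.lettuce.core.ClientOptions;
import io.lettuce.core.SslOptions;
import org.springframework.data.redis.connection.RedisClusterConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public class TlsRedisConfig {

    public LettuceConnectionFactory redisConnectionFactory() {
        // Package the client cert + key into a keystore and the CA cert into a
        // truststore beforehand (e.g. with keytool/openssl). Paths and passwords
        // below are placeholders.
        SslOptions sslOptions = SslOptions.builder()
                .jdkSslProvider()
                .keystore(new File("/path/to/client-keystore.p12"), "changeit".toCharArray())
                .truststore(new File("/path/to/truststore.p12"), "changeit")
                .build();

        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .useSsl().and()
                .clientOptions(ClientOptions.builder().sslOptions(sslOptions).build())
                .build();

        // List your cluster nodes here (placeholder addresses).
        RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration(
                List.of("node1:6379", "node2:6379"));

        LettuceConnectionFactory factory = new LettuceConnectionFactory(clusterConfig, clientConfig);
        factory.afterPropertiesSet();
        return factory;
    }
}
```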
Derek Li
@druglee0113_twitter
Hey guys, our ops team reports that our redis cluster receives a lot of "client" command. Any idea in what case this command will be sent by Lettuce?
Joël | NoPermission
@MassiveLag

Hi, I am looking for a way to filter a value out of a key.

For example, I saved a key with values in it, but in another application I have to filter out the value and then grab the UUID instead.

Per Lundberg
@perlun

@mp911de One thing we noted here some days ago: when executing a large number of Redis requests and the network latency is significant (2 ms in our case; servers in multiple data centers in the same country), the synchronous mode of operation is way slower than the async mode (because it waits on OK for each request, presumably). With sync, roughly 500 queries/s (1000 ms / 2 ms, so it makes total sense), but with async something like 20,000 per second.

Would it make sense to more heavily emphasize this in the docs somewhere, that using the sync API can cause significant performance degradation in high-throughput scenarios? Is it perhaps already mentioned somewhere?

Mark Paluch
@mp911de
I'm not sure I follow. Synchronous API implies that you can only proceed once the command has been completed whereas async means that you can proceed without awaiting the command result.
Per Lundberg
@perlun
Yeah, I know. And because of this, for some use cases the synchronous API is mandatory. The thing is just that you (=me) might not always realize the performance implications it can have => you might be using the sync API when you "should" really try to write your application to take advantage of the (potentially huge!) advantage in throughput you can get with the async API... if your use case doesn't "need" the result/"need" to ensure that errors can be propagated to the caller directly.
(We have always used the sync API until now, in production since perhaps 2 years. But now that some network latency is being introduced & some high-volume events which we are handling in our system, we realized that async can be much more performant for us. Our use case is SET in this case, i.e. writing data to Redis so we don't "need" the result in that sense. However, it does have error-handling implications that we need to take into consideration.)
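Per's numbers follow from paying one network round trip per command in sync mode. The async pattern he describes (dispatch everything, await at the end) can be sketched with plain CompletableFutures; note that `fakeAsyncSet` below is a stand-in, not a Lettuce API. With Lettuce you would collect the futures returned by `async.set(...)` and await them, e.g. with `LettuceFutures.awaitAll`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class AsyncPipelineSketch {

    // Stand-in for async.set(key, value): returns immediately with a future
    // instead of blocking for the server's OK, so many commands can be in flight.
    static CompletableFuture<String> fakeAsyncSet(String key, String value) {
        return CompletableFuture.supplyAsync(() -> "OK");
    }

    public static void main(String[] args) {
        List<CompletableFuture<String>> futures = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            futures.add(fakeAsyncSet("key:" + i, "value"));  // fire without waiting
        }
        // One wait for the whole batch instead of one wait per command;
        // with Lettuce: LettuceFutures.awaitAll(timeout, unit, futures...).
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        System.out.println("completed " + futures.size() + " commands");
    }
}
```

Error handling is the trade-off Per mentions: with this pattern, failures surface when the futures complete, not at the call site.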
dileep
@dileepkmandapam_twitter

@mp911de I have implemented a Redis write-intensive application using spring Redis backed by lettuce. The write pipeline starts by reading a batch of events (10K) from the Kafka topic, applies filters, transforms, and finally performs the Redis (sorted sets) batch ingestion using the pipeline.

Here is the ingestion code. Is there any better way to ingest data from a performance perspective?

stringRedisTemplate.execute(connection -> {
    map.forEach((key, value) ->
            connection.zSetCommands().zAdd(key.getBytes(StandardCharsets.UTF_8), value));
    return null;
}, true, true);
Mark Paluch
@mp911de
ZADD accepts tuples, so if you're able to group multiple zset entries into a single ZADD invocation (https://redis.io/commands/zadd/), then you can improve the throughput quite a bit. Like 5 or 10 items, don't overdo it.
1 reply
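Mark's suggestion, grouping several zset entries into one multi-member ZADD, can be sketched with a plain batching helper. The `partition` method below is ordinary Java; the Lettuce/Spring calls named in the comments are illustrative, not taken from dileep's code:

```java
import java.util.ArrayList;
import java.util.List;

public class ZAddBatcher {

    // Split a list of entries into batches of at most `size` elements,
    // so each batch can become a single multi-member ZADD call.
    static <T> List<List<T>> partition(List<T> entries, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < entries.size(); i += size) {
            batches.add(entries.subList(i, Math.min(i + size, entries.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> members = List.of("a", "b", "c", "d", "e", "f", "g");
        for (List<String> batch : partition(members, 5)) {
            // With the Lettuce sync API, one batch becomes one call, e.g.:
            //   commands.zadd(key, ScoredValue.just(s1, m1), ScoredValue.just(s2, m2), ...);
            // With Spring Data Redis, build a Set<Tuple> per batch and call:
            //   connection.zSetCommands().zAdd(keyBytes, tuplesForBatch);
            System.out.println(batch.size());
        }
    }
}
```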
December
@Decemberrrr
hey, so I'm currently trying to run two Redis clients on the same server via 2 separate plugins. Both use the same shaded connection builder, yet I get this error:
Caused by: java.lang.IllegalArgumentException: 'RedisURI' is already in use
This is what I'm using to create the clients:
RedisClient.create(RedisURI.builder()
        .withHost(hostname)
        .withPassword(password.toCharArray())
        .withPort(port)
        .withClientName(clientName)
        .build());
Mark Paluch
@mp911de
What plugins? Please provide more context such as a stack trace.
Subham
@GrowlyX

hey, so I'm currently trying to run two Redis clients on the same server via 2 separate plugins. Both use the same shaded connection builder, yet I get this error:
Caused by: java.lang.IllegalArgumentException: 'RedisURI' is already in use
This is what I'm using to create the clients:
RedisClient.create(RedisURI.builder()
        .withHost(hostname)
        .withPassword(password.toCharArray())
        .withPort(port)
        .withClientName(clientName)
        .build());

I'm having the same issue

Miguel González
@magg
hi guys, what's the use case of the connection pooling backed by commons-pool2? If I only need one open connection to the Redis server, do I still need to configure it?
2 replies
Santhosh
@santhoshv_twitter
Hi guys, I am getting an error like the following when using 6.0.0.RELEASE against a 6.0.5 server. Any idea why an out-of-memory error happens with this version?

org.springframework.data.redis.RedisSystemException: Redis exception; nested exception is io.lettuce.core.RedisException: java.lang.OutOfMemoryError: Direct buffer memory
	at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:74)
	at org.springframework.data.redis.connection.lettuce.LettuceExceptionConverter.convert(LettuceExceptionConverter.java:41)
	at org.springframework.data.redis.PassThroughExceptionTranslationStrategy.translate(PassThroughExceptionTranslationStrategy.java:44)
	at org.springframework.data.redis.FallbackExceptionTranslationStrategy.translate(FallbackExceptionTranslationStrategy.java:42)
	at org.springframework.data.redis.connection.lettuce.LettuceConnection.convertLettuceAccessException(LettuceConnection.java:272)
	at org.springframework.data.redis.connection.lettuce.LettuceConnection.await(LettuceConnection.java:1063)
	at org.springframework.data.redis.connection.lettuce.LettuceConnection.lambda$doInvoke$4(LettuceConnection.java:920)
	at org.springframework.data.redis.connection.lettuce.LettuceInvoker$Synchronizer.invoke(LettuceInvoker.java:673)
	at org.springframework.data.redis.connection.lettuce.LettuceInvoker$DefaultSingleInvocationSpec.get(LettuceInvoker.java:589)
	at org.springframework.data.redis.connection.lettuce.LettuceKeyCommands.exists(LettuceKeyCommands.java:79)
	at org.springframework.data.redis.connection.DefaultedRedisConnection.exists(DefaultedRedisConnection.java:80)
	at org.springframework.data.redis.core.RedisTemplate.lambda$hasKey$7(RedisTemplate.java:781)
	at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:223)
	at org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:190)
	at org.springframework.data.redis.core.RedisTemplate.hasKey(RedisTemplate.java:781)
Mark Paluch
@mp911de
You can switch to heap buffers or tweak the direct memory size. See https://stackoverflow.com/questions/62383057/outofdirectmemoryerror-using-spring-data-redis-with-lettuce-in-a-multi-threading for further advice
1 reply
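For reference, the linked answer comes down to either raising the JVM's direct-memory ceiling or steering Netty toward heap buffers. A sketch, with placeholder values to tune rather than recommendations:

```shell
# Raise the direct-memory limit (standard JVM flag):
java -XX:MaxDirectMemorySize=512m -jar app.jar

# Or ask Netty to prefer heap buffers (Netty system property; trades some throughput):
java -Dio.netty.noPreferDirect=true -jar app.jar
```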
nrghong
@nrghong

Hi @mp911de and those who develop Spring MVC applications. I have one question. I'm doing a load test on my application and found some weird behavior of the IO event loop. The thing is,

  • In the test API there is some CPU-bound work and Redis IO together.
  • Under moderate load (50 RPS), I noticed Redis I/O spent 50% of total time.
  • When profiled, "EpollEventLoop.run" was consuming 20% of CPU.
  • As RPS grew to 200, Redis I/O spent over 80% of total time, which was apparently a problem.
  • When profiled, "EpollEventLoop.run" was consuming less than 10% of CPU.

I assume that as RPS grew, a lot more http-nio threads became active and consumed CPU, stealing CPU cycles from the IO event loop threads. But it would be odd if that's just the way Lettuce works.

Could you give me any tips on this problem? Thanks.

2 replies
Miguel González
@magg

I've been having some strange issues with ElastiCache on AWS. Lettuce reports these kinds of messages in a loop:

Reconnected to master , x
Reconnecting, last destination was x

Is it related to some kind of network problem? How can I avoid it with Lettuce?

Mark Paluch
@mp911de
Lettuce tries to auto-reconnect connections that have been disconnected unintentionally. Causes might be connection limits on Redis, disconnect-on-idle if configured within Redis, or the Redis server going down.
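If the reconnect loop itself is the concern (rather than whatever is dropping the connections), reconnection can be switched off via `ClientOptions`. A configuration sketch, assuming a plain `RedisClient` with a placeholder URI; usually you want to keep the default (auto-reconnect on) and fix the disconnect cause instead:

```java
import io.lettuce.core.ClientOptions;
import io.lettuce.core.RedisClient;

public class NoReconnectExample {

    public static RedisClient createClient() {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        // With autoReconnect(false), dropped connections surface as command
        // failures instead of a "Reconnecting..." loop in the logs.
        client.setOptions(ClientOptions.builder().autoReconnect(false).build());
        return client;
    }
}
```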
Rafael Gude
@rafael-gude-picpay
Hi guys! Is it normal for RedisTemplate + Lettuce to make my application consume more memory over time, even if I'm not performing any operations on Redis?
The usage is increasing slowly, but it's a constant increase over time.
The application in question is just running; it isn't receiving any requests or reading messages.
darrenwjlau
@darrenwjlau:matrix.org
[m]
Can anyone help me fix this code
package ga.darren.darrenmcblocklogger;

import org.bukkit.block.Block;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.block.BlockPlaceEvent;
import org.bukkit.plugin.java.JavaPlugin;

import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;
import io.lettuce.core.pubsub.api.sync.RedisPubSubCommands;

public final class darrenmcblocklogger extends JavaPlugin implements Listener {

  // Fields, so the event handler and onDisable can reach the connections.
  private RedisClient redisClient;
  private StatefulRedisConnection<String, String> connection;
  private StatefulRedisPubSubConnection<String, String> pubconnection;
  private RedisCommands<String, String> syncCommands;
  private RedisPubSubCommands<String, String> sync;

  @Override
  public void onEnable() {
    this.getLogger().info("Darrenmc block logger started!");
    redisClient = RedisClient.create("");
    connection = redisClient.connect();
    // Pub/Sub connections are created from the client, not from an existing connection.
    pubconnection = redisClient.connectPubSub();
    sync = pubconnection.sync();
    syncCommands = connection.sync();
    this.getLogger().info(" [Redis Handler] Redis connected!");
    getServer().getPluginManager().registerEvents(this, this);
  }

  @EventHandler
  public void onBlockPlace(BlockPlaceEvent event) {
    Block b = event.getBlock();
    String name = b.getType().toString();
    this.getLogger().info(" [Update Handler] " + name + " has been placed!");
    syncCommands.set("blockplaced.last", name);
    this.getLogger().info(" [Redis Handler] Key has been set!");
    sync.publish("blockplaced.update." + name, "message");
    this.getLogger().info(" [Redis Handler] Websocket message sent!");
  }

  @Override
  public void onDisable() {
    pubconnection.close();
    connection.close();
    this.getLogger().info(" [Redis Handler] Redis disconnected!");
    redisClient.shutdown();
  }
}
Daniel Wilkins
@tekktonic

Hey, what's the current way to clean up the resources on a RedisCommands instance? I have a plugin for druid which used lettuce 4.5 and in a cleanup method I had

        clusterCommands.close();
        clusterConnection.close();
        cluster.shutdown();

After upgrading to lettuce 5 (I have some classpath issues with 6) the Commands objects no longer have a close method, but if I don't call it then my jvm very quickly runs out of FDs, so I'm obviously supposed to clean them up somehow.

I tried the quit method but I still see the out of FD errors there
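As far as I know, in Lettuce 5 the command objects are just a view over the connection, so cleanup happens on the connection and the client rather than on the commands. A sketch, with field names assumed to match Daniel's (not taken from his code):

```java
import io.lettuce.core.cluster.RedisClusterClient;
import io.lettuce.core.cluster.api.StatefulRedisClusterConnection;

public class ClusterCleanup {

    // Lettuce 5: sync/async command objects no longer have close(); the
    // connection releases the channel (and its file descriptor), and
    // shutdown() releases client resources such as event loops and timers.
    static void cleanup(StatefulRedisClusterConnection<String, String> clusterConnection,
                        RedisClusterClient cluster) {
        clusterConnection.close();
        cluster.shutdown();
    }
}
```

If file descriptors still leak after this, a likely culprit is a connect() somewhere without a matching close(), e.g. opening a new connection per request.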
randompersonhello
@randompersonhello:matrix.org
[m]

Hey, wanted to run a reactive processing flow by folx here to see whether this makes sense:

We are reading data from a kafka queue in one main thread. The main thread is processing the kafka message and then writing to redis by calling :

redisReactiveTemplate.opsForValue().set(key, value).subscribe()

where redisReactiveTemplate is of type: ReactiveRedisTemplate<String, RandomObject>

  1. Is this the correct method for mass writing to redis in a reactive way?
  2. Are there alternatives/preferred ways of handling this specific scenario that anyone could recommend?
  3. When calling this function - is the main thread both the publisher and the subscriber?

Let me know if you need more information - thanks!

1zg12
@1zg12
hi, I am seeing a lot of reconnection messages during a Redis put for a large data set. This doesn't happen for a smaller data set.
image.png
Is this a known issue with large data?
Tushar Paudel
@t-paudel

@mp911de I am using spring-boot-starter-data-redis + Lettuce. I am creating connections in a @PostConstruct method and closing them in a @PreDestroy method, but I am getting a memory-leak error. Could you let me know how to fix this?

@PostConstruct
public void getConnection() {
    RedisURI redisURI = RedisURI.builder()
            .withHost(host)
            .withPort(port)
            .build();

    redisClient = RedisClient.create(redisURI);
    redisConnection = redisClient.connect();

    redisCommand = redisConnection.sync(); 

}

@PreDestroy
public void closeConnections() {
    redisConnection.close();
    redisClient.shutdown();
}

Error: 2022 07:30:42.331 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [ROOT] appears to have started a thread named [lettuce-nioEventLoop-4-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base@11.0.15/sun.nio.ch.EPoll.wait(Native Method)
java.base@11.0.15/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
java.base@11.0.15/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
java.base@11.0.15/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:810)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@11.0.15/java.lang.Thread.run(Thread.java:829)

15-May-2022 07:30:42.331 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [ROOT] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@37a1152a]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@73fa5f13]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.

Bogdan Flueras
@bflueras:matrix.org
[m]
Hello, does the latest Lettuce driver (6.1.8) work with Redis 7.x, or only with 6.x? Thanks!
Mark Paluch
@mp911de
It works with Redis 2.6+ up to the latest version. It may be that some commands aren't directly available as Java methods, but you can always run custom commands.
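A minimal sketch of what "run custom commands" looks like via the `dispatch()` API, assuming a local standalone server (the key and value here are made up). `dispatch()` takes the command name, an output type, and the arguments; for a command that isn't in `CommandType` yet, you can supply your own `ProtocolKeyword`:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.api.sync.RedisCommands;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.output.StatusOutput;
import io.lettuce.core.protocol.CommandArgs;
import io.lettuce.core.protocol.CommandType;

public class CustomCommandExample {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://localhost:6379");
        StatefulRedisConnection<String, String> connection = client.connect();
        RedisCommands<String, String> sync = connection.sync();

        // Dispatch a command by name; shown here with plain SET for illustration.
        String reply = sync.dispatch(
                CommandType.SET,
                new StatusOutput<>(StringCodec.UTF8),
                new CommandArgs<>(StringCodec.UTF8).addKey("greeting").addValue("hello"));
        System.out.println(reply);

        connection.close();
        client.shutdown();
    }
}
```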
Daniel Wilkins
@tekktonic

Hey, what's the current way to clean up the resources on a RedisCommands instance? I have a plugin for druid which used lettuce 4.5 and in a cleanup method I had

        clusterCommands.close();
        clusterConnection.close();
        cluster.shutdown();

After upgrading to lettuce 5 (I have some classpath issues with 6) the Commands objects no longer have a close method, but if I don't call it then my jvm very quickly runs out of FDs, so I'm obviously supposed to clean them up somehow.

Still stuck on this one.

Mark H
@markhu
Hi guys, I'm trying to add Lettuce for Redis support to my existing project in IntelliJ, but I can't get the dependency to resolve. Unsure if this is an IntelliJ issue or a Lettuce issue.
Mark H
@markhu
Ok, it was IntelliJ. Solved after consulting the answer machine at StackOverflow.
saifmasood
@saifmasood
Hello, we recently encountered a problem with Redis transactions using lettuce. We use a sentinel setup that monitors a single cluster with 3 servers (1 leader, and 2 followers). Some transaction queries failed with:
"io.lettuce.core.RedisCommandExecutionException: ERR EXEC without MULTI"
Code:
We have a few dedicated connections for transactions in a blocking queue. Whenever a thread wants to run a transaction, it polls the queue for a connection, completes the transaction steps (multi, <commands>, exec), and adds the connection back to the queue.
We've also disabled auto-flushing the commands to the server and schedule the flush on a different executor (after every x ms).
We checked our Redis Sentinel logs and found that there was a master switch around the time we started seeing the exception. So we do think the master switch caused this, but we are unsure of the reason, since Lettuce handles the master switch. Is it possible that the commands were queued for the previous master but were actually sent to the new master (post failover), causing this issue?
Thanks!