Hello, I have a question: is there a way to connect two different clusters with one connection?
For example:
private static final RedisURI redis1 = RedisURI.create("redis://redis1.*.cache.amazonaws.com");
private static final RedisURI redis2 = RedisURI.create("redis://redis2.*.cache.amazonaws.com");
private static final RedisClient redisClient = RedisClient.create();

public static StatefulRedisMasterReplicaConnection<String, String> connection() {
    List<RedisURI> nodes = Arrays.asList(redis1, redis2);
    StatefulRedisMasterReplicaConnection<String, String> connection = MasterReplica.connect(
            redisClient,
            StringCodec.UTF8,
            nodes);
    return connection;
}
Each cluster has its own nodes, but I only want to write. I need to know if this is possible, or what the best practice is in these cases.
Thanks a lot in advance
@mp911de One thing we noted here a few days ago: when executing a large number of Redis requests and the network latency is significant (2 ms in our case, servers in multiple data centers in the same country), the synchronous mode of operation is way slower than the async mode (because it waits on the OK for each request, presumably). With sync, roughly 500 queries/s (1000/2, so it makes total sense), but with async something like 20,000 per second.
Would it make sense to more heavily emphasize this in the docs somewhere, that using the sync API can cause significant performance degradation in high-throughput scenarios? Is it perhaps already mentioned somewhere?
(In our case it's mostly SET, i.e. writing data to Redis, so we don't "need" the result in that sense. However, it does have error-handling implications that we need to take into consideration.)
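For reference, a rough sketch of the kind of async batching that produces that difference (the URI, keys, and batch size are made up; the point is that the commands are pipelined and we only synchronize once at the end):

RedisClient client = RedisClient.create("redis://localhost");        // placeholder URI
StatefulRedisConnection<String, String> connection = client.connect();
RedisAsyncCommands<String, String> async = connection.async();

// Queue many SETs without waiting for each OK; they are written to the wire back to back.
List<RedisFuture<String>> futures = new ArrayList<>();
for (int i = 0; i < 10_000; i++) {
    futures.add(async.set("key-" + i, "value-" + i));
}

// Synchronize once for the whole batch instead of once per command.
LettuceFutures.awaitAll(10, TimeUnit.SECONDS, futures.toArray(new RedisFuture[0]));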
@mp911de I have implemented a write-intensive Redis application using Spring Data Redis backed by Lettuce. The write pipeline starts by reading a batch of events (10K) from the Kafka topic, applies filters and transforms, and finally performs the Redis (sorted sets) batch ingestion using a pipeline.
Here is the ingestion code. Is there any better way to ingest data from a performance perspective?
stringRedisTemplate.execute(connection -> {
    map.forEach((key, value) ->
            connection.zSetCommands().zAdd(key.getBytes(StandardCharsets.UTF_8), value));
    return null;
}, true, true);
ZADD accepts tuples, so if you're able to group multiple zset entries into a single ZADD invocation (https://redis.io/commands/zadd/), then you can improve the throughput quite a bit. Like 5 or 10 items, don't overdo it.
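A quick sketch of what that grouping looks like with the plain Lettuce API (the key, scores, and members are invented; Spring Data Redis has a corresponding zAdd(byte[], Set<Tuple>) overload):

// One ZADD carrying several members at once instead of one round trip per member.
RedisCommands<String, String> commands = connection.sync();
commands.zadd("scores",
        ScoredValue.just(1.0, "member-a"),
        ScoredValue.just(2.0, "member-b"),
        ScoredValue.just(3.0, "member-c"));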
Hey, so I'm currently trying to run two Redis clients on the same server via two separate plugins, but both use the same shaded connection builder, and I get this error:
Caused by: java.lang.IllegalArgumentException: 'RedisURI' is already in use

RedisClient.create(RedisURI.builder()
        .withHost(hostname)
        .withPassword(password.toCharArray())
        .withPort(port)
        .withClientName(clientName)
        .build());

That's what I'm using to create the clients.
I'm having the same issue
Hi @mp911de and those who develop Spring MVC applications. I have one question. I'm doing a load test on my application and found some weird behavior of the I/O event loop. The thing is,
I assume that as RPS grew, a lot more http-nio threads became active and consumed CPU, and they stole CPU cycles from the I/O event loop threads. But it's so weird if that's just the way Lettuce works.
Could you give me any tips on this problem? Thanks.
package ga.darren.darrenmcblocklogger;

import org.bukkit.block.Block;
import org.bukkit.event.EventHandler;
import org.bukkit.event.Listener;
import org.bukkit.event.block.BlockPlaceEvent;
import io.lettuce.core.*;
import io.lettuce.core.pubsub.*;
import io.lettuce.core.pubsub.api.sync.*;
import io.lettuce.core.api.*;
import io.lettuce.core.api.sync.*;
import org.bukkit.plugin.java.JavaPlugin;

public final class darrenmcblocklogger extends JavaPlugin implements Listener {

    // Fields instead of locals so the event handler and onDisable can reach them.
    private RedisClient redisClient;
    private StatefulRedisConnection<String, String> connection;
    private StatefulRedisPubSubConnection<String, String> pubConnection;
    private RedisPubSubCommands<String, String> pubSync;
    private RedisCommands<String, String> syncCommands;

    @Override
    public void onEnable() {
        this.getLogger().info("Darrenmc block logger started!");
        redisClient = RedisClient.create("");
        connection = redisClient.connect();
        // Pub/Sub connections are created from the client, not from an existing connection.
        pubConnection = redisClient.connectPubSub();
        pubSync = pubConnection.sync();
        syncCommands = connection.sync();
        this.getLogger().info(" [Redis Handler] Redis connected!");
        getServer().getPluginManager().registerEvents(this, this);
    }

    @EventHandler
    public void onBlockPlace(BlockPlaceEvent event) {
        Block b = event.getBlock();
        String name = b.getType().toString();
        this.getLogger().info(" [Update Handler] " + name + " has been placed!");
        syncCommands.set("blockplaced.last", name);
        this.getLogger().info(" [Redis Handler] Key has been set!");
        pubSync.publish("blockplaced.update." + name, "message");
        this.getLogger().info(" [Redis Handler] Websocket message sent!");
    }

    @Override
    public void onDisable() {
        connection.close();
        pubConnection.close();
        this.getLogger().info(" [Redis Handler] Redis disconnected!");
        redisClient.shutdown();
    }
}
Hey, what's the current way to clean up the resources on a RedisCommands instance? I have a plugin for druid which used lettuce 4.5 and in a cleanup method I had
clusterCommands.close();
clusterConnection.close();
cluster.shutdown();
After upgrading to lettuce 5 (I have some classpath issues with 6) the Commands objects no longer have a close method, but if I don't call it then my jvm very quickly runs out of FDs, so I'm obviously supposed to clean them up somehow.
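In case it helps anyone reading along: in Lettuce 5/6 the sync/async command interfaces are just views over the connection, so there is nothing to close on them. A sketch using the names from the snippet above (clusterConnection being the stateful cluster connection and cluster the RedisClusterClient):

// clusterCommands no longer needs a close(); closing the connection releases its channel.
clusterConnection.close();   // closes the stateful connection
cluster.shutdown();          // releases the client's event loops and file descriptors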
Hey, wanted to run a reactive processing flow by folx here to see whether this makes sense:
We are reading data from a Kafka queue in one main thread. The main thread processes the Kafka message and then writes to Redis by calling:
redisReactiveTemplate.opsForValue().set(key, value).subscribe()
where redisReactiveTemplate is of type: ReactiveRedisTemplate<String, RandomObject>
Let me know if you need more information - thanks!
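Roughly, per message, that is something like the sketch below (handle, transform, and log are made-up names; redisReactiveTemplate is the template mentioned above). The main point is that subscribe() makes the write fire-and-forget from the consuming thread, so failures only surface in the error callback:

void handle(String key, RandomObject payload) {
    // Fire-and-forget: the Kafka thread does not wait for the Redis write to complete.
    redisReactiveTemplate.opsForValue()
            .set(key, payload)
            .subscribe(
                    ok -> { /* write acknowledged, nothing to do */ },
                    err -> log.warn("Redis write failed for key {}", key, err));
}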
@mp911de I am using spring-boot-starter-data-redis + Lettuce. I am creating connections in a @PostConstruct method and closing the connections in a @PreDestroy method, but I am getting a memory leak error. Could you let me know how to fix this?
@PostConstruct
public void getConnection() {
    RedisURI redisURI = RedisURI.builder()
            .withHost(host)
            .withPort(port)
            .build();
    redisClient = RedisClient.create(redisURI);
    redisConnection = redisClient.connect();
    redisCommand = redisConnection.sync();
}

@PreDestroy
public void closeConnections() {
    redisConnection.close();
    redisClient.shutdown();
}
Error: 15-May-2022 07:30:42.331 WARNING [main] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [ROOT] appears to have started a thread named [lettuce-nioEventLoop-4-1] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
java.base@11.0.15/sun.nio.ch.EPoll.wait(Native Method)
java.base@11.0.15/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:120)
java.base@11.0.15/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:124)
java.base@11.0.15/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:141)
io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68)
io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:810)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base@11.0.15/java.lang.Thread.run(Thread.java:829)
15-May-2022 07:30:42.331 SEVERE [main] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [ROOT] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@37a1152a]) and a value of type [io.netty.util.internal.InternalThreadLocalMap] (value [io.netty.util.internal.InternalThreadLocalMap@73fa5f13]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
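Not a definitive fix, but one variant worth trying is the shutdown overload with explicit quiet-period/timeout values, which blocks until the netty event loop threads have terminated before the webapp is torn down (the Duration values below are arbitrary):

@PreDestroy
public void closeConnections() {
    redisConnection.close();
    // Waits for the lettuce-nioEventLoop-* threads to shut down instead of returning early.
    redisClient.shutdown(Duration.ZERO, Duration.ofSeconds(5));
}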
Hey, what's the current way to clean up the resources on a RedisCommands instance? I have a plugin for druid which used lettuce 4.5 and in a cleanup method I had
clusterCommands.close(); clusterConnection.close(); cluster.shutdown();
After upgrading to lettuce 5 (I have some classpath issues with 6) the Commands objects no longer have a close method, but if I don't call it then my jvm very quickly runs out of FDs, so I'm obviously supposed to clean them up somehow.
Still stuck on this one.