DeepShiv
@deepshiv126

(image attached)

Even though TimeoutSource provides a timeout of "0", it doesn't honor it and falls back to defaultTimeoutSupplier (60 seconds).
We'd really appreciate it if you could help us understand this Lettuce blocking-command problem!

DeepShiv
@deepshiv126
FYI: by shareNativeConnection I mean, according to the Spring Data Redis JavaDoc: if shareNativeConnection is true, the pool will be used to select a connection for blocking and tx operations only, which should not share a connection.
Mark Paluch
@mp911de

Shouldn't LettuceFuture.awaitOrCancel exclude blocking commands from being cancelled?

No. There are various things that impact a timeout, and you want to protect your application first and foremost. One thing that could be done is to consider the timeout of BLPOP, XREAD, and so on and increase the duration accordingly. But then you still have the aspect that the timeout in each command is evaluated on the Redis server itself. If we used the same value, all commands would get timed out, as network latency plays into command completion.

So while the command would have a timeout of e.g. 5 seconds, a few milliseconds on top are spent sending the command to Redis and receiving the response, so a sane command timeout needs to account for the network latency.

That isn't something the driver can provide out of the box.
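The latency headroom Mark describes can be sketched in plain Java. This is not Lettuce API: the names BLOCKING_COMMANDS and clientTimeoutMillis are illustrative, but the same arithmetic could back a per-command timeout source (Lettuce 5.1+ exposes a hook of this kind via TimeoutOptions.TimeoutSource, the class DeepShiv mentions above):

```java
import java.util.Set;

public class BlockingTimeoutSketch {

    // Commands that block on the Redis server until data arrives or their
    // own server-side timeout fires. Illustrative, not exhaustive.
    static final Set<String> BLOCKING_COMMANDS =
            Set.of("BLPOP", "BRPOP", "BRPOPLPUSH", "XREAD");

    /**
     * Returns a client-side timeout in milliseconds for the given command.
     * Non-blocking commands use the default timeout; blocking commands get
     * their server-side blocking duration plus a network latency budget.
     */
    static long clientTimeoutMillis(String command, long serverTimeoutMillis,
                                    long defaultTimeoutMillis, long latencyBudgetMillis) {
        if (BLOCKING_COMMANDS.contains(command)) {
            return serverTimeoutMillis + latencyBudgetMillis;
        }
        return defaultTimeoutMillis;
    }

    public static void main(String[] args) {
        // BLPOP blocking 5s on the server + 200ms network budget -> 5200ms.
        System.out.println(clientTimeoutMillis("BLPOP", 5000, 60000, 200));
        // GET is not blocking -> the 60s default applies.
        System.out.println(clientTimeoutMillis("GET", 0, 60000, 200));
    }
}
```

The latency budget is an application-level guess; as Mark says, the driver cannot know your network characteristics out of the box.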
Understood the relation to Spring Data Redis.
If you invoke blocking commands via Spring Data Redis (e.g. RedisConnection.bLPop(…)), then Spring Data Redis will allocate a dedicated connection (see https://github.com/spring-projects/spring-data-redis/blob/2eb7067e8c7e859168a281145cc46ccddb42049f/src/main/java/org/springframework/data/redis/connection/lettuce/LettuceListCommands.java#L340-L382) for the command instead of using the shared one.
Mark Paluch
@mp911de

What can still happen with pooling is that a dedicated connection receives a command, the command times out from the caller's perspective, and the connection gets released back to the pool while the command is still active on Redis and blocking the connection.

A subsequent allocation that returns that connection still has the command in progress, which blocks the connection, and if you run another command you will likely run into another timeout.

Tamil Selvan
@tamilselvan-chinnaswamy

I configured Redis to use a connection pool: spring.redis.lettuce.pool.max-active=10.

I set shareNativeConnection to false and noticed the performance of the APIs degraded by 20%; I also noticed a SELECT dbIndex command was executed every time before any Redis command.

https://github.com/spring-projects/spring-data-redis/blob/master/src/main/java/org/springframework/data/redis/connection/lettuce/LettuceConnection.java#L980

if (asyncDedicatedConn instanceof StatefulRedisConnection) {
    ((StatefulRedisConnection<byte[], byte[]>) asyncDedicatedConn).sync().select(dbIndex);
}

Why is this SELECT db required?

I want connections to be taken from the pool without any performance degradation. How should I configure this?
I don't want to use a single native connection because we have a service that is heavily dependent on Redis, and its Tomcat thread count is 200.

we are using a single redis server only.

Mark Paluch
@mp911de
That's the wrong forum for Spring Data Redis; the right one is https://gitter.im/spring-projects/spring-data
With pooling, a connection might be returned to the pool where the connection points to a different database than you want to point it to
Spring Data Redis therefore resets the database after obtaining the connection to make sure it's in the right state.
In general, if you don't use transactions or blocking Redis commands, then you should be good with a single shared connection.
Tamil Selvan
@tamilselvan-chinnaswamy
But our app runs with 200 Tomcat threads. Instead of all of them waiting on a single connection in the application layer, we can use a pool and push the waiting into the Redis layer.
Our app is configured to use a single Redis db. How can I avoid the SELECT db command? Executing each command takes two network calls: first the SELECT db, then the actual command.
Mark Paluch
@mp911de
Redis is single-threaded, so more than one or two connections typically add overhead
There is currently no possibility to disable the select call in Spring Data Redis.
You might want to file an issue at jira.spring.io to disable SELECT when the database index is zero. That is an optimization we can make if we can assume that application code does not change the database index when the initial database is used
Tamil Selvan
@tamilselvan-chinnaswamy

@mp911de from this lettuce-io/lettuce-core#835

Commands are written immediately to the transport without awaiting completion of a previous command

Does that mean Lettuce puts all the commands onto the transport channel asynchronously? When a request is sent to the transport layer, is there a callback registered so that the completion data can be delivered back to the original thread? Is that why you advise that a single native connection is enough?

How was this achieved? If Lettuce executes a Redis GET command and, before awaiting its completion, executes the next command requested from a different thread, how is the data returned to the first thread when that thread's GET command completes?

Mark Paluch
@mp911de

Does that mean Lettuce puts all the commands onto the transport channel asynchronously?

Yes. That mode of operation is called pipelining.

How was this achieved?

Commands are represented as objects. They get written to the I/O layer and added to a protocol stack in the order they are written to the transport. As soon as Redis responds, the channel handler takes the response, assigns it to the first element in the protocol stack, and removes that command. After that, we move on to the next response, and so on.

Each command object is either a Future or offers some other means of synchronization. For synchronous invocation, the call is intercepted by a proxy that invokes the await(…) method on the command, so the calling thread blocks until the command completes (or times out).
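The FIFO matching described above can be modeled in a few lines of plain Java. This is a toy illustration, not Lettuce's actual channel handler:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;

public class PipelineSketch {

    // The "protocol stack": futures of in-flight commands, oldest first.
    private final Queue<CompletableFuture<String>> protocolStack = new ArrayDeque<>();

    /** "Write" a command to the transport and remember its future, in order. */
    CompletableFuture<String> dispatch(String command) {
        CompletableFuture<String> result = new CompletableFuture<>();
        protocolStack.add(result); // no waiting: the next command can be dispatched immediately
        return result;
    }

    /** A response from Redis completes the oldest in-flight command. */
    void onResponse(String payload) {
        protocolStack.poll().complete(payload);
    }

    public static void main(String[] args) {
        PipelineSketch conn = new PipelineSketch();
        CompletableFuture<String> get = conn.dispatch("GET key1");
        CompletableFuture<String> incr = conn.dispatch("INCR counter");
        // Redis answers in write order, so responses are matched FIFO.
        conn.onResponse("value1");
        conn.onResponse("42");
        System.out.println(get.join());  // value1
        System.out.println(incr.join()); // 42
    }
}
```

The synchronous API then amounts to calling join()/await on the future right after dispatching, which is why one shared connection can serve many threads.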

Tamil Selvan
@tamilselvan-chinnaswamy
got it, thanks @mp911de.
DamirCiganovic-Jankovic
@DamirCiganovic-Jankovic

This could be a stupid question, but I didn't manage to find an answer: is the Lettuce version somehow coupled to Redis versions? Can I run Lettuce 5+ with Redis 3.2.8, for example? We have Java 11, so I would like to upgrade Lettuce, but I would rather not upgrade Redis at the moment if I don't have to.

Thank you in advance

Mark Paluch
@mp911de
Lettuce is compatible with a wide range of Redis versions. The only requirement is the RESP2 protocol, which is satisfied by Redis 2.x up to 5.x.
DamirCiganovic-Jankovic
@DamirCiganovic-Jankovic
Thanks!
Tamil Selvan
@tamilselvan-chinnaswamy

In this doc https://lettuce.io/core/release/reference/#advanced-usage it is mentioned that EventLoopGroups are started lazily to allocate threads on demand. Which class can I check to see how this works?

In our application we see API latency only for the first 5 minutes. I just want to see whether the on-demand EventLoop thread boot-up is the cause. If so, is there a way to configure a minimum number of EpollEventLoop threads that should be running?

Tamil Selvan
@tamilselvan-chinnaswamy
@mp911de In an EventLoopGroup, 6 EpollEventLoop objects are created and only 2 of them are in state ST_STARTED; the other 4 are ST_NOT_STARTED. In the thread dump there were also only 2 lettuce-epollEventLoop threads. I want to know how and when the remaining 4 EpollEventLoops will be started as threads.
Mark Paluch
@mp911de
That's netty behavior. Thread objects are pre-allocated in a not-started state. Only when there is an incoming piece of work does netty start the thread, typically upon connect.
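netty's EventLoopGroup is not a ThreadPoolExecutor, but the JDK executor shows the same lazy-start pattern, including an explicit pre-start knob (which netty's event loops do not expose; with Lettuce, opening connections at application startup is what effectively warms the event loop threads):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LazyThreadsDemo {
    public static void main(String[] args) {
        // Six worker "slots" are configured, but no threads exist yet.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                6, 6, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        System.out.println(pool.getPoolSize()); // 0 - threads are lazy

        // Force all worker threads to start up front (the "warm-up").
        pool.prestartAllCoreThreads();
        System.out.println(pool.getPoolSize()); // 6

        pool.shutdown();
    }
}
```

In netty's case, submitting work (a connect) to an event loop is what transitions it from ST_NOT_STARTED to ST_STARTED, which matches the two started loops you observed for two connections.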
Tamil Selvan
@tamilselvan-chinnaswamy
thanks @mp911de
Pradeep Venkataraman
@pradeepvenkataraman
I'm performing a ZSCAN of a sorted set whose values are keys in a hash. If I invoke HMGET in the ScoredValueStreamingChannel of the ZSCAN, I get a timeout exception. Is this expected?
(image attached)
If I comment out the syncCommands.hmget call, it works without any issues.
Mark Paluch
@mp911de
Streaming emits results as they are received. If you call nested commands within the stream processing, you block the event loop, and you then experience timeouts as a consequence of the blocked event loop.
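One way around this, sketched here with plain JDK types rather than Lettuce API, is to hand each streamed element off to a separate executor so the streaming callback returns immediately and never blocks the event loop. processOffEventLoop and the "hmget(...)" strings are illustrative stand-ins, not real driver calls:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StreamingOffloadSketch {

    /**
     * Imagine the loop body is the ScoredValueStreamingChannel callback,
     * invoked on the event loop once per streamed element. Instead of running
     * a nested synchronous HMGET there (which would block the event loop),
     * each element is handed to a worker thread.
     */
    static List<String> processOffEventLoop(List<String> members) {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        List<String> processed = Collections.synchronizedList(new ArrayList<>());
        for (String member : members) {
            // RIGHT: submit and return immediately; the event loop stays free.
            worker.submit(() -> processed.add("hmget(" + member + ")"));
        }
        worker.shutdown();
        try {
            worker.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(processOffEventLoop(List.of("key:1", "key:2", "key:3")));
    }
}
```

Alternatively, collect the ZSCAN results first and issue the HMGETs afterwards, or use the async API for the nested lookups; either way the callback itself must not block.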
tianyi
@flyingsheep123
Hi @mp911de, while reading the source code of Lettuce pipelining I found that when pipelining is triggered, commands are just sent using the async API and their RedisFutures are maintained in a list; when we call closePipeline, Lettuce simply waits for all the RedisFutures to complete. I'm a little confused because this article (https://developpaper.com/redis-throughput-enhancement-through-pipeline/) says that "after pipelining, all requests are merged into one IO", but it seems to me that the Lettuce implementation actually triggers multiple IO operations, one per command in the pipeline, even though we only fetch all the results at the end. Is that right? Thanks!
Mark Paluch
@mp911de
@flyingsheep123 your question touches multiple issues. closePipeline is Spring Data Redis functionality.
Pipelining itself means that a response is not awaited before sending subsequent commands. Pipelining can be optimized with various approaches; using Futures is the most basic form.
You can optimize even further by collecting the data to send in a packet buffer and sending multiple commands in a single TCP packet. Put differently: reduce the number of flush() syscalls.
When using Futures, multiple threads can use the same connection and commands are sent as self-contained messages.
Optimizing for packets requires a buffer that is close to the transport. Lettuce can use this mode, too, but it requires exclusive usage: the connection can no longer be shared by multiple threads, as buffering changes the connection state. A possible consequence (bug) is that your application locks up entirely (see setAutoFlush(…), flushCommands()).
Let me know if that helps or you need more details
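The flush-batching idea can be modeled with a toy buffer. This mimics the effect of setAutoFlush(false)/flushCommands() (it is not Lettuce internals): with auto-flush on, every command costs one flush; with it off, everything buffered goes out in a single write.

```java
import java.util.ArrayList;
import java.util.List;

public class ManualFlushSketch {

    private final StringBuilder buffer = new StringBuilder();
    final List<String> writes = new ArrayList<>(); // stands in for flush() syscalls
    private boolean autoFlush = true;

    void setAutoFlush(boolean autoFlush) {
        this.autoFlush = autoFlush;
    }

    void write(String command) {
        buffer.append(command).append("\r\n");
        if (autoFlush) {
            flushCommands(); // one flush per command
        }
    }

    void flushCommands() {
        if (buffer.length() == 0) {
            return;
        }
        writes.add(buffer.toString()); // one "syscall" for everything buffered
        buffer.setLength(0);
    }

    public static void main(String[] args) {
        ManualFlushSketch conn = new ManualFlushSketch();
        conn.setAutoFlush(false);
        conn.write("SET a 1");
        conn.write("SET b 2");
        conn.write("SET c 3");
        conn.flushCommands();
        System.out.println(conn.writes.size()); // 1 - three commands, one flush
    }
}
```

This also makes the hazard visible: the buffer is shared mutable state, which is why this mode requires exclusive use of the connection.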
tianyi
@flyingsheep123
@mp911de thanks a lot for the very detailed explanation! I hadn't fully realized the impact of concurrent access to the shared connection when considering the design of pipelining... Very happy to learn so much each time I communicate with you. I will think deeply about what you have said, thanks again.
Dark_Han
@darkhan97cool
Hello guys! I am working with LettuSearch, which is a client for a module on top of Redis (RediSearch). I cannot pass Cyrillic (Russian) letters in a query, while via the CLI it works. The Latin alphabet works well, and if response texts contain mixed languages (English + Russian), I can see them both.
It is all about the command FT.SEARCH("INDEX_NAME", QUERY, SEARCHOPTIONS).
I tried passing the language in SEARCHOPTIONS; it did not work.
Do you know how to handle it?
Mark Paluch
@mp911de
Paging @gkorland and @jruaux. Lettusearch is maintained by RedisLabs.
tianyi
@flyingsheep123
Hi @mp911de, I would like to know if my understanding is correct: although Lettuce may issue multiple flush calls in a pipeline, the syscalls on the Redis server side are reduced (because we don't need to wait for the response of a previous command before sending another), as stated in the Redis docs (https://redis.io/topics/pipelining): "When pipelining is used, many commands are usually read with a single read() system call, and multiple replies are delivered with a single write() system call...and eventually reaches 10 times the baseline obtained not using pipelining".
@mp911de In fact we recently ran a benchmark comparing the performance of a single-connection Lettuce client and a Jedis connection pool of size 100, using 100 threads to make get/set calls concurrently. We found (to our surprise) that even with only a single connection, the QPS of Lettuce is higher than that of the Jedis pool.
tianyi
@flyingsheep123
I think one reason is that in both pipelining and non-pipelining mode, the non-blocking commands dispatched by Lettuce are ultimately multiplexed over one connection, so the syscalls (write/read) on the Redis server are reduced, and as a result we get higher QPS. Is that right? Thanks!
Mark Paluch
@mp911de
In my measurements, Lettuce was slower than Jedis, but maybe that's down to my measurement setup. With pooling you get the additional overhead of connections plus the overhead of the pooling functionality itself.
I'm not sure what happens on Redis itself, i.e. whether Redis reads additional commands from the same connection before moving on to another connection.
tianyi
@flyingsheep123
@mp911de OK, thanks all the same!