harishvashistha
@harishvashistha
@mp911de can you please guide me on how to get the io.r2dbc classes when specifying the org.postgresql group ID dependency in Maven? When I specify org.postgresql:r2dbc-postgresql:1.0.0.RC1 in my Maven build, the io.r2dbc classes are not reachable in my project. Please help.
1 reply
kitkars
@kitkars

Hi Mark @mp911de,
Is there any way to customize the column selection in named queries?

    @Query("""
            select a, b
            from author a, book b
            where b.author_id = a.id and a.id in (:authorIds)
            """)
    Flux<DummyProjection> findByAuthorIdIn(List<Integer> authorIds);

Projection classes help when I know the column names at design time, but take GraphQL, for example, where the client specifies the fields at run time.

So it would be nice if something like this were supported:

    @Query("""
            select a, b
            from author a, book b
            where b.author_id = a.id and a.id in (:authorIds)
            """)
    Flux<DummyProjection> findByAuthorIdIn(List<Integer> authorIds, Collection<String> columns);
7 replies
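One workaround for the run-time column selection asked about above (e.g. for GraphQL) is to build the SQL from a server-side whitelist rather than a @Query annotation, and execute the resulting string with DatabaseClient. The helper below is a hypothetical sketch of just the whitelisting step in plain Java; the table and column names are illustrative:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ColumnWhitelist {

    // Only columns that actually exist may appear in the generated SQL;
    // this guards against SQL injection via the client-supplied field list.
    private static final Set<String> ALLOWED = Set.of("id", "name", "title", "author_id");

    /** Builds "select <cols> from ..." from a client-supplied column list. */
    public static String buildSelect(List<String> requested) {
        String cols = requested.stream()
                .filter(ALLOWED::contains)
                .collect(Collectors.joining(", "));
        if (cols.isEmpty()) {
            cols = "*"; // fall back to all columns if nothing valid was requested
        }
        return "select " + cols + " from author a join book b on b.author_id = a.id"
                + " where a.id in (:authorIds)";
    }

    public static void main(String[] args) {
        // the injection attempt "drop table" is silently filtered out
        System.out.println(buildSelect(List.of("name", "title", "drop table")));
    }
}
```

Binding `:authorIds` would then happen exactly as with a static query; only the column list is dynamic, and it can never contain anything outside the whitelist.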
Zarko Stankovic
@ManInTheBox
Hi everyone, I have a question regarding r2dbc-pool, specifically about the "validationQuery" option. When this option is set, the query runs before any other query against the database; I found this in my DB logs. My understanding (from the docs) is that it was supposed to run only before acquiring a connection from the pool, not always, before every other query. My pool's "initialSize" is 10 (the default) and the DB instance is running locally. No matter how many queries are executed against the DB, it always runs the "validationQuery" before anything else. Here's the code in r2dbc-pool: https://github.com/r2dbc/r2dbc-pool/blob/2f21aacf877c9983ed1393d2d88f6c3d100d6e78/src/main/java/io/r2dbc/pool/ConnectionPool.java#L189-L192
Thanks in advance!
1 reply
Ling Hengqian
@linghengqian
Just curious: while adopting the XA model seems impossible, have folks tried putting the LOCAL or BASE transaction model on top of R2DBC transactions? What I'm referring to is something like transactional coordination of multiple data sources, i.e. distributed transactions.
Mark Paluch
@mp911de
There is no XA support for R2DBC drivers. For SQL Server at least, that would require additional messages and synchronization within the driver.
Ling Hengqian
@linghengqian
Wow, thanks for the answer!
kitkars
@kitkars
Hi All,
Just curious. Is there any way to use a projection class/interface to convert the response of saving an entity?
<T> Mono<T>  save(S entity);
vivek.singh02
@vivek.singh02:matrix.org
[m]
Hi All,
I am new to R2DBC; I am aware that it's used for asynchronous, non-blocking access.
My task is to insert/update data in the DB using R2DBC, but in a synchronous, blocking way.
Can someone guide me on how to do it? Thanks :)
Mark Paluch
@mp911de

My task is to insert/update the data into the DB using r2dbc but with synchronous and blocking way.

Use JDBC.

1 reply
vivek.singh02
@vivek.singh02:matrix.org
[m]

Hi All,
Trying to insert a string into a column whose type is an enum in a PostgreSQL table: .bind("address_type", Parameter.fromOrEmpty(address.getHomeAddress().toString(), String.class))
Getting io.r2dbc.postgresql.ExceptionFactory$PostgresqlBadGrammarException: column "address_type" is of type addresstype but expression is of type character varying

HomeAddress is an enum here: {HOME, OFFICE, OTHERS}

1 reply
Alexander Lindholm
@alexanderlindholm

Hey! I am trying to save an entity that has an OffsetDateTime timestamp in a Postgres DB with Spring Data. It works fine in my dev environment, but when I run it inside a Docker container I get this exception:

Caused by: java.lang.ClassCastException: class java.time.OffsetDateTime cannot be cast to class java.time.LocalTime (java.time.OffsetDateTime and java.time.LocalTime are in module java.base of loader 'bootstrap')
    at io.r2dbc.postgresql.codec.BuiltinCodecSupport.encodeToText(BuiltinCodecSupport.java:83)
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.MonoSupplier] :
    reactor.core.publisher.Mono.fromSupplier
    io.r2dbc.postgresql.codec.AbstractCodec.create(AbstractCodec.java:150)
Error has been observed at the following site(s):

I am not sure why, as the R2DBC Postgres driver should support this data type. Does anyone have a suggestion for solving this issue?

2 replies
vivek.singh02
@vivek.singh02:matrix.org
[m]
If anyone receives an error while inserting an enum value into PostgreSQL, use the format below:
INSERT INTO TABLE (enum_column_name) VALUES (CAST(:value as ENUM_type)); :)
taha-alyaseen
@taha-alyaseen
Hello all, I've been trying to use PostgreSQL R2DBC in a project, but I found two drivers, io.r2dbc:r2dbc-postgresql and org.postgresql:r2dbc-postgresql. Which one is correct, and do they differ in performance?
sushovan24
@sushovan24
Can anyone help me with the r2dbc connection pool? How will it handle 100 RPS with a minimal pool size?
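A rough back-of-envelope for this kind of question: by Little's law, the number of connections kept busy is roughly the arrival rate times the average query latency, so 100 RPS often needs far fewer connections than people expect. The numbers below are purely illustrative:

```java
public class PoolSizing {

    /** Little's law: concurrent connections needed ≈ requests/sec × seconds/query. */
    public static int connectionsNeeded(double rps, double avgQueryMillis) {
        return (int) Math.ceil(rps * avgQueryMillis / 1000.0);
    }

    public static void main(String[] args) {
        // 100 RPS at 20 ms per query keeps only ~2 connections busy on average
        System.out.println(connectionsNeeded(100, 20));
        // the same rate at 200 ms per query needs ~20
        System.out.println(connectionsNeeded(100, 200));
    }
}
```

In practice you would add headroom for latency spikes, but this estimate gives a sane starting point for initialSize/maxSize.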
Jack Peterson
@jackdpeterson
What's an appropriate way to persist to r2dbc from .doOnSuccess?
    .doOnSuccess(o -> {
        // fire-and-forget: subscribing inside the callback
        this.logEventToDatabase(Event.builder()
                        .message("Updated contact information.")
                        .build())
                .subscribe();
    });

    private Mono<Event> logEventToDatabase(final Event event) {
        return this.eventRepository.save(event).publishOn(Schedulers.boundedElastic());
    }
2 replies
I'd like to remove the publishOn and avoid calling .subscribe(). Both .subscribe() and .block() cause errors about calling a blocking operation.
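The usual fix for the pattern above is to compose the audit write into the main chain instead of fire-and-forget subscribing inside doOnSuccess; in Reactor terms that means flatMap (or delayUntil) the save and pass the original value through. Reactor isn't required to show the shape of that composition, so this sketch uses CompletableFuture with hypothetical stand-in methods:

```java
import java.util.concurrent.CompletableFuture;

public class ComposeSideEffect {

    // Stand-ins for the repository calls; in Reactor these would return Mono<...>.
    static CompletableFuture<String> updateContact() {
        return CompletableFuture.completedFuture("contact-updated");
    }

    static CompletableFuture<String> logEvent(String message) {
        return CompletableFuture.completedFuture("logged:" + message);
    }

    /**
     * Compose the side-effecting log write into the chain and pass the
     * original result through — the analogue of
     * mono.delayUntil(o -> logEventToDatabase(event)) in Reactor.
     */
    static CompletableFuture<String> updateThenLog() {
        return updateContact()
                .thenCompose(result -> logEvent("Updated contact information.")
                        .thenApply(ignored -> result)); // keep the original value
    }

    public static void main(String[] args) {
        System.out.println(updateThenLog().join());
    }
}
```

With the write composed into the chain, the framework subscribes once at the end, so neither .subscribe() nor .block() inside a callback is needed.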
shivangmittal01
@shivangmittal01

Hi everyone. I recently upgraded r2dbc-postgresql from 0.8.x to 0.9.1.RELEASE and have started getting "Bound parameter count does not match parameters in SQL statement", which was not the case with the previous version. Below is how I am passing the parameters; requesting the team's help on this.

    @Transactional
    public Flux<Integer> unblockInventory(InventoryRequest request, boolean orderFulfilled) {
        return this.databaseClient.inConnectionMany(connection -> {
            var statement = connection.createStatement(UNBLOCK_INVENTORY_QUERY);
            for (var item : request.getOrderItems()) {
                statement
                        .bind(0, request.getSource() == UpdateSource.AMAZON ? item.getQuantity() : 0)
                        .bind(1, request.getSource() == UpdateSource.APOLLO247 ? item.getQuantity() : 0)
                        .bind(2, orderFulfilled ? 1 : 0)
                        .bind(3, Utility.getKey(item.getSku(), request.getStoreId()))
                        .bind(4, request.getAxdcCode())
                        .add();
            }
            return Flux
                    .from(statement.execute())
                    .flatMap(Result::getRowsUpdated)
                    .map(e -> {
                        if (e == 0) {
                            throw new InventoryNotUnblockedException(request.toString());
                        }
                        return e;
                    })
                    .doOnError(ex -> LOGGER.error(() -> MessageUtils.errorMessage(Event.UNBLOCK_INVENTORY_FAILED,
                            ex.getMessage(), ex, false)));
        });
    }

Below is the query:

    private static final String UNBLOCK_INVENTORY_QUERY = """
            UPDATE item_inventory AS iv
            SET
                amazon_reserved = CASE
                                    WHEN (iv.amazon_reserved - $1) < 0 THEN 0 ELSE iv.amazon_reserved - $1
                                  END,
                apollo_reserved = CASE
                                    WHEN (iv.apollo_reserved - $2) < 0 THEN 0 ELSE iv.apollo_reserved - $2
                                  END,
                quantity = CASE
                            WHEN $3 = 1 THEN iv.quantity - $1 - $2 ELSE iv.quantity
                           END,
                version = iv.version + 1,
                updated_at = NOW()      
            WHERE id = $4 AND iv.axdc_code = $5      
            """;

This is after I have updated to:

    <dependency>
        <groupId>org.postgresql</groupId>
        <artifactId>r2dbc-postgresql</artifactId>
        <version>0.9.1.RELEASE</version>
    </dependency>
sushovan24
@sushovan24
Caused by: dev.miku.r2dbc.mysql.client.MySqlConnectionClosedException: Connection closed at dev.miku.r2dbc.mysql.client.ClientExceptions.expectedClosed(ClientExceptions.java:36) ~[r2dbc-mysql-0.8.2.RELEASE.jar!/:0.8.2.RELEASE]
The connection was closed after some time.
My pool configuration:

    ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(ConnectionFactories.get(optionsAcc))
            .maxIdleTime(Duration.ofMinutes(30))
            .initialSize(5)
            .maxSize(10)
            // .maxCreateConnectionTime(Duration.ofSeconds(10))
            // .maxAcquireTime(Duration.ofSeconds(10))
            .maxLifeTime(Duration.ofMinutes(60))
            .acquireRetry(2)
            .validationQuery("SELECT 1")
            .registerJmx(false)
            .name(poolName)
            .build();
peace
@inpeace_gitlab
@sushovan24 Turns out that you must not call .add() on the last element in the iteration.
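This matches the 0.9 SPI clarification: Statement.add() ends the current binding and starts a new one, so calling it after the final bind leaves an empty trailing binding, which surfaces as the bound-parameter-count error. A minimal stdlib mock of the loop shape (the FakeStatement here is a stand-in for illustration, not the real io.r2dbc.spi.Statement):

```java
import java.util.List;

public class BatchBinding {

    /** Tiny stand-in recording how many bindings a statement would carry. */
    static class FakeStatement {
        int completedBindings = 0;
        boolean currentBindingOpen = false;

        FakeStatement bind(int index, Object value) {
            currentBindingOpen = true;
            return this;
        }

        FakeStatement add() { // finishes the current binding, opens a fresh one
            completedBindings++;
            currentBindingOpen = false;
            return this;
        }

        int bindingCount() { // an open (bound but not add()-ed) binding still counts
            return completedBindings + (currentBindingOpen ? 1 : 0);
        }
    }

    /** Correct pattern: add() between items, but NOT after the last one. */
    static int bindAll(List<Integer> items) {
        FakeStatement stmt = new FakeStatement();
        for (int i = 0; i < items.size(); i++) {
            stmt.bind(0, items.get(i));
            if (i < items.size() - 1) {
                stmt.add(); // a trailing add() would create an empty extra binding
            }
        }
        return stmt.bindingCount();
    }

    public static void main(String[] args) {
        System.out.println(bindAll(List.of(10, 20, 30)));
    }
}
```

Applied to the unblockInventory loop above, the fix is to guard the .add() call so it is skipped for the last item of request.getOrderItems().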
rahimkhanabdul
@rahimkhanabdul:matrix.org
[m]
How do we use useUnicode=true&characterEncoding=UTF-8 in an r2dbc-spi URL?
jbaddam17
@jbaddam17

Database calls are getting stuck. I'm not sure whether they are making it to the DB host or not, but I see the log statement below when I enable TRACE logs on the io.r2dbc package; after that, nothing.

491156 --- [or-http-epoll-5] io.r2dbc.mssql.QUERY [ 250] : Executing query: select column_1, column_2 from table where column_3 = 123

Using Spring Boot 2.7.2 with the following:

r2dbc-mssql:0.8.8.RELEASE
r2dbc-pool:0.8.8.RELEASE
r2dbc-spi:0.8.6.RELEASE

    return databaseClient.sql("select column_1, column_2 from table where column_3 = :value")
            .bind("value", "123")
            .fetch()
            .first()
            .flatMap(res -> Mono.just(res));

Did anyone face/see a similar issue? What could be the problem?

1 reply
AdarshSRM
@AdarshSRM
Hello, I am trying to find out whether there is sequence-generation support in R2DBC (like Hibernate's @GeneratedValue with JDBC)? Thank you.
sushovan24
@sushovan24

@inpeace_gitlab This is solved; there was a conflict between connect_timeout and idleTimeout (idleTimeout must be <= connect_timeout).

But now, whenever an error occurs, MySQL error 1129 comes up:

org.springframework.dao.DataAccessResourceFailureException: Failed to obtain R2DBC Connection; nested exception is io.r2dbc.spi.R2dbcNonTransientResourceException: [1129] Host '<myip>' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts')

Andreas Arifin
@andreas-trvlk
Hi All. I want to ask about minIdle behavior in r2dbc-pool. I set this option in my project and expected it to behave like HikariCP, where the idle connection count is maintained at the configured size no matter how long the app runs. But that is not the case here. On the first try, the connection count dropped to zero after 30 minutes; we found that this was because maxIdleTime defaults to 30 minutes. We set it to -1 to disable eviction. That looked good on the surface, because the connection count stayed at the configured size, but when we then tried to access the DB from our app, the connections seemed to hang: the count stayed at the configured size, yet the connections to the DB appeared lost, because the requests timed out. Is this a known issue, or was my expectation about the minIdle behavior wrong?
2 replies
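One combination that tends to avoid both the 30-minute eviction and the stale-connection hang described above is to keep maxIdleTime finite (below any server or firewall idle timeout) rather than disabling it, and let a validation query weed out dead connections on acquire. A hypothetical r2dbc-pool builder sketch; the option names are from ConnectionPoolConfiguration.Builder, but every value here is illustrative and should be tuned to your environment:

```java
ConnectionPoolConfiguration config = ConnectionPoolConfiguration.builder(connectionFactory)
        .initialSize(10)
        .maxSize(20)
        .minIdle(10)                           // floor the pool tries to keep refilled
        .maxIdleTime(Duration.ofMinutes(10))   // shorter than any network idle timeout
        .maxLifeTime(Duration.ofMinutes(55))   // recycle before server-side limits hit
        .validationQuery("SELECT 1")           // weeds out dead connections on acquire
        .build();
```

The key design point: with maxIdleTime(-1) the pool never discards a connection the network silently killed, so validation or a finite idle time is what keeps the pool honest.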
vivek.singh02
@vivek.singh02:matrix.org
[m]

Hi All. I'm getting the error below when trying to use reactor-test:

13:36:18.316 [Test worker] DEBUG reactor.core.publisher.Operators - Duplicate Subscription has been detected
java.lang.IllegalStateException: Spec. Rule 2.12 - Subscriber.onSubscribe MUST NOT be called more than once (based on object equality)
StepVerifier.create(promotionMono.log())
            .expectSubscription()
            .expectNextMatches(validatePromotion(promotion))
            .verifyComplete(); // <- the error occurs on this line

I am using the same thread pool in different classes via @Qualifier.

Panusitt Khuenkham
@panusitt.khu_gitlab

Sorry, I have a problem, please help.

I get the error below:

[1;31m[ERROR][2022-09-07T19:20:19,078][reactor-tcp-epoll-2][][BaseExceptionHandler] causeMsg: Request queue is full

AWS RDS MySQL 8
Spring Boot: 2.6.10
POM.xml

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-r2dbc</artifactId>
    </dependency>
    <dependency>
        <groupId>dev.miku</groupId>
        <artifactId>r2dbc-mysql</artifactId>
        <version>0.8.2.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>io.r2dbc</groupId>
        <artifactId>r2dbc-spi</artifactId>
        <version>1.0.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-all</artifactId>
        <version>4.1.79.Final</version>
    </dependency>

Thank you so much.
I tried adding -Dreactor.bufferSize.small=4096, but I don't know whether that fixes it in the long term.
Can you suggest a solution for me?
Panusitt Khuenkham
@panusitt.khu_gitlab
What should I calculate bufferSize.small from?
muratyildiz1976
@muratyildiz1976
Hello everybody, is it possible to stream a file (a byte array, or from FilePart) into a traditional relational database without loading the whole file into memory?
Mark Paluch
@mp911de
Only with some databases. It depends whether the entire protocol frame must be assembled first (e.g. to calculate its size). SQL Server should work with streaming.
vgupta88
@vgupta88
Guys, new to the room. I wanted to ask about something I am struggling with in r2dbc-pool. I want to pass the acquire-retry parameter to the pool config from my YAML, but I was not able to. I tried adding it programmatically following the documentation, and everything looks good, but my initial size isn't reflected when I run a Postgres query to check the number of connections.
    public ConnectionFactory connectionFactory() {
        ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(DRIVER, "pool")
                .option(PROTOCOL, "postgresql")
                .option(HOST, host)
                .option(PORT, port)
                .option(USER, userName)
                .option(PASSWORD, password)
                .option(DATABASE, dbName)
                .build());

        ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
                .initialSize(initialSize)
                .maxSize(maxSize)
                .minIdle(minIdle)
                .metricsRecorder(new R2DBCPoolMetrics())
                .maxValidationTime(Duration.ofMillis(maxValidationTime))
                .maxAcquireTime(Duration.ofMillis(maxAcquireTime))
                .acquireRetry(acquireRetry)
                .build();
        ConnectionPool connectionPool= new ConnectionPool(configuration);
        return connectionPool;
    }
This looks good, but when I check the number of connections in Postgres, it doesn't reflect the initialSize I pass here.
Mark Paluch
@mp911de
You need to call the warmup() method to pre-initialize the pool.
¯\_(ツ)_/¯
@OhadShai_twitter
Question: is there a version dependency between the R2DBC SPI and core?
I am asking because of jasync-sql/jasync-sql#310.
Also, if someone could help me set up a basic integration test for the r2dbc module, that would be great.
5 replies
Mahender Gusain
@mahendergusain:matrix.org
[m]
I need to understand maxValidationTime (in the r2dbc-pool ConnectionPoolConfiguration). Is it good to override maxValidationTime=200, or is it better to keep the default?
1 reply
peace
@inpeace_gitlab
@mp911de Would you be able to update the reactor core version on r2dbc-postgresql/1.0.0.RC1?
The currently specified version, 3.5.0-M2, is missing on Maven Central. I guess the Reactor people removed it for some reason.
1 reply
desember
@desember

Hi, did someone stumble upon this error in r2dbc?

executeMany; SQL [SELECT <skipped...>]; Connection unexpectedly closed; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Connection unexpectedly closed

It happens several times a day. After a retry, the query returns valid data. I use a pooled connection created as:

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        val factory = PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host(host)
                .port(port)
                .database(database)
                .username(username)
                .password(password)
                .sslMode(SSLMode.VERIFY_FULL)
                .codecRegistrar(
                    EnumCodec.builder()
                        .withEnum("allowed_status", Status::class.java)
                        .build()
                )
                .build()
        )
        return connectionPool(factory)
    }

    fun connectionPool(
        connectionFactory: ConnectionFactory
    ): ConnectionPool {
        val builder = ConnectionPoolConfiguration.builder(connectionFactory)
        builder.maxSize(poolMaxSize)
        builder.initialSize(poolInitialSize)
        builder.maxLifeTime(Duration.ofMillis(-1))
        return ConnectionPool(builder.build())
    }

I'm a bit suspicious that this behavior could be caused by some sort of TTL timeout. I can't prove it, though.

1 reply
vivek.singh02
@vivek.singh02:matrix.org
[m]
Hi, instead of using @Transactional I am using .as(TransactionalOperator::transactional).
How can I use the equivalent of @Transactional(readOnly = true) with a TransactionalOperator?
Hantsy Bai
@hantsy

Hi, I am using Spring Data R2DBC in my project. I want to add a transient field whose value is calculated from other fields, so I followed the Spring Data R2DBC reference doc, defined the field, and added a @Value annotation.

@Table("workers")
data class Worker(
    @Id
    val id: UUID? = null,

    @Column(value = "photo")
    var photo: String? = null,

    // see: https://github.com/spring-projects/spring-data-r2dbc/issues/449
    @Transient
    @Value("#{root.photo!=null}")
    val hasPhoto: Boolean = false
)

But hasPhoto is always false, even when I have set photo to a non-null string.

2 replies
mr-nothing
@mr-nothing

Hi there!
I'm using Spring R2DBC in my project and trying to make it work with a multiple-host/failover Postgres topology (I need to specify the DB URL like this: r2dbc:postgresql:failover://host1,host2,host3:port/).

I'm using Spring Boot 2.7.5, which includes:
r2dbc-pool-0.9.2.RELEASE
r2dbc-spi-0.9.1.RELEASE
r2dbc-postgresql-0.9.2.RELEASE
as part of Spring Boot.
As far as I understand, this set of R2DBC libraries doesn't support failover yet.

So, as a next step, I tried upgrading r2dbc-postgresql from 0.9.2.RELEASE to 1.0.0.RC1, but I'm getting the following error:

class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')
java.lang.ClassCastException: class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')

as a result of executing a simple delete query: DELETE FROM my_table WHERE boolean_flag = $1

This indirectly suggests a compatibility issue between the R2DBC libraries. Can anyone tell me whether there is a working combination of these libraries for my case, or is waiting for the new Spring release my only option?

Any help is very appreciated, thank you!

Mark Paluch
@mp911de
R2DBC 1.0 is supported by Spring Framework 6 / Spring Data R2DBC 3.0, as there is a binary compatibility change between R2DBC 0.9 and 1.0.
1 reply
aeropagz
@aeropagz

Database calls are getting stuck. I'm not sure whether they are making it to the DB host or not, but I see the log statement below when I enable TRACE logs on the io.r2dbc package; after that, nothing.

491156 --- [or-http-epoll-5] io.r2dbc.mssql.QUERY [ 250] : Executing query: select column_1, column_2 from table where column_3 = 123

Using Spring Boot 2.7.2 with the following:

r2dbc-mssql:0.8.8.RELEASE
r2dbc-pool:0.8.8.RELEASE
r2dbc-spi:0.8.6.RELEASE

    return databaseClient.sql("select column_1, column_2 from table where column_3 = :value")
            .bind("value", "123")
            .fetch()
            .first()
            .flatMap(res -> Mono.just(res));

Did anyone face/see a similar issue? What could be the problem?

Hello everyone,
I have the same issue here.
I can fix it if I use a Pageable in the repository; it works up to a page size of 80. Above 81, I get the following error message: "Could not read property @org.springframework.data.annotation.Id()private java.lang.Long de.fhkiel.ndbk.amazonapi.model.Order.id from column id!" and the request returns a 500.

If I increase the page size to 149, it gets stuck, no error appears, and the HTTP request times out.

It is very weird that the result depends on the page size...

This is my service code:

@Transactional(readOnly = true)
    public Flux<Order> getAll(Pageable pageable) {
        return orderRepository.findBy(pageable)
                .concatMap(order -> Mono.just(order)
                        .zipWith(orderPositionRepository.findByOrderId(order.getId()).collectList())
                        .map(tuple -> tuple.getT1().withPositions(tuple.getT2()))
                );
    }

I would be very thankful for an explanation :) Probably I am messing something up....

Jens Geiregat
@jgrgt

Hi, we've recently been seeing this error in our logs when we put our application under load:

LEAK: DataRow.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
    io.r2dbc.postgresql.message.backend.DataRow.<init>(DataRow.java:37)
    io.r2dbc.postgresql.message.backend.DataRow.decode(DataRow.java:141)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decodeBody(BackendMessageDecoder.java:65)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decode(BackendMessageDecoder.java:39)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:208)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
    reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:279)
    reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:388)
    reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:404)
    reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
    io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
    io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.base/java.lang.Thread.run(Thread.java:833)

Does anybody have any hints to help us figure out where the problem is?

Mark Paluch
@mp911de
Actually, we've been trying to get hold of this buffer release issue for over a year now. Any help is appreciated. Cancellations could play a role in it.