Jack Peterson
@jackdpeterson
What's an appropriate way to persist to r2dbc from .doOnSuccess?
.doOnSuccess(o -> {
    // log an audit event as a side effect of the update
    this.logEventToDatabase(Event.builder()
                    .message("Updated contact information.")
                    .build())
            .subscribe();
});

private Mono<Event> logEventToDatabase(final Event event) {
    return this.eventRepository.save(event).publishOn(Schedulers.boundedElastic());
}
2 replies
I'd like to remove the publishOn and avoid calling .subscribe(). Both .subscribe() and .block() cause errors about calling a blocking operation.
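A common alternative (a minimal sketch; updateContact(...), UpdateRequest and Contact are hypothetical stand-ins for the caller's own pipeline) is to chain the event write into the main Mono with delayUntil or flatMap instead of subscribing inside doOnSuccess, so the save runs under the same subscription:

// Sketch: chain the audit write into the pipeline instead of a
// fire-and-forget subscribe() inside doOnSuccess.
public Mono<Contact> updateAndLog(UpdateRequest request) {
    return updateContact(request)                       // hypothetical upstream Mono<Contact>
            .delayUntil(contact -> logEventToDatabase(
                    Event.builder()
                            .message("Updated contact information.")
                            .build()));
}

// No publishOn and no subscribe(): the reactive repository save is non-blocking.
private Mono<Event> logEventToDatabase(Event event) {
    return this.eventRepository.save(event);
}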
shivangmittal01
@shivangmittal01

Hi everyone. I have recently upgraded r2dbc-postgresql from 0.8.x to 0.9.1.RELEASE. I have started getting "Bound parameter count does not match parameters in SQL statement", which was not the case with the previous version. Below is how I am passing the parameters. Requesting the team to help with this.

  @Transactional
    public Flux<Integer> unblockInventory(InventoryRequest request, boolean orderFulfilled) {
        return this.databaseClient.inConnectionMany(connection -> {
            var statement = connection.createStatement(UNBLOCK_INVENTORY_QUERY);
            for (var item : request.getOrderItems()) {
                statement
                        .bind(0, request.getSource() == UpdateSource.AMAZON ? item.getQuantity() : 0)
                        .bind(1, request.getSource() == UpdateSource.APOLLO247 ? item.getQuantity() : 0)
                        .bind(2, orderFulfilled ? 1 : 0)
                        .bind(3, Utility.getKey(item.getSku(), request.getStoreId()))
                        .bind(4, request.getAxdcCode())
                        .add();
            }
            return Flux
                    .from(statement.execute())
                    .flatMap(Result::getRowsUpdated)
                    .map(e -> {
                        if (e == 0) {
                            throw new InventoryNotUnblockedException(request.toString());
                        }
                        return e;
                    })
                    .doOnError(ex -> LOGGER.error(() -> MessageUtils.errorMessage(Event.UNBLOCK_INVENTORY_FAILED,
                            ex.getMessage(), ex, false)));
        });
    }

Below is the query:

    private static final String UNBLOCK_INVENTORY_QUERY = """
            UPDATE item_inventory AS iv
            SET
                amazon_reserved = CASE
                                    WHEN (iv.amazon_reserved - $1) < 0 THEN 0 ELSE iv.amazon_reserved - $1
                                  END,
                apollo_reserved = CASE
                                    WHEN (iv.apollo_reserved - $2) < 0 THEN 0 ELSE iv.apollo_reserved - $2
                                  END,
                quantity = CASE
                            WHEN $3 = 1 THEN iv.quantity - $1 - $2 ELSE iv.quantity
                           END,
                version = iv.version + 1,
                updated_at = NOW()      
            WHERE id = $4 AND iv.axdc_code = $5      
            """;

This is after I have updated to

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>r2dbc-postgresql</artifactId>
    <version>0.9.1.RELEASE</version>
</dependency>
sushovan24
@sushovan24
Caused by: dev.miku.r2dbc.mysql.client.MySqlConnectionClosedException: Connection closed at dev.miku.r2dbc.mysql.client.ClientExceptions.expectedClosed(ClientExceptions.java:36) ~[r2dbc-mysql-0.8.2.RELEASE.jar!/:0.8.2.RELEASE]
Connection was closed after some time.
My pool configuration:

ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(ConnectionFactories.get(optionsAcc))
        .maxIdleTime(Duration.ofMinutes(30))
        .initialSize(5)
        .maxSize(10)
        // .maxCreateConnectionTime(Duration.ofSeconds(10))
        // .maxAcquireTime(Duration.ofSeconds(10))
        .maxLifeTime(Duration.ofMinutes(60))
        .acquireRetry(2)
        .validationQuery("SELECT 1")
        .registerJmx(false)
        .name(poolName)
        .build();
peace
@inpeace_gitlab
@sushovan24 Turns out that you must not call .add() on the last element in the iteration.
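For illustration, the binding loop from the message above with the trailing .add() dropped (the index-based loop is my own addition and assumes getOrderItems() returns a List; the binds themselves are unchanged):

// Sketch: call add() only between bindings, never after the last one,
// so no empty trailing binding is created.
var items = request.getOrderItems();
for (int i = 0; i < items.size(); i++) {
    var item = items.get(i);
    statement
            .bind(0, request.getSource() == UpdateSource.AMAZON ? item.getQuantity() : 0)
            .bind(1, request.getSource() == UpdateSource.APOLLO247 ? item.getQuantity() : 0)
            .bind(2, orderFulfilled ? 1 : 0)
            .bind(3, Utility.getKey(item.getSku(), request.getStoreId()))
            .bind(4, request.getAxdcCode());
    if (i < items.size() - 1) {
        statement.add(); // separate this binding from the next one
    }
}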
rahimkhanabdul
@rahimkhanabdul:matrix.org
[m]
How do we use useUnicode=true&characterEncoding=UTF-8 in an r2dbc-spi URL?
jbaddam17
@jbaddam17

Database calls are getting stuck. I'm not sure whether they are making it to the DB host or not, but I see the log statement below when I enable TRACE logs on the io.r2dbc package; after that, nothing.

491156 --- [or-http-epoll-5] io.r2dbc.mssql.QUERY [ 250] : Executing query: select column_1, column_2 from table where column_3 = 123

Using Spring Boot 2.7.2 with the following:

r2dbc-mssql:0.8.8.RELEASE
r2dbc-pool:0.8.8.RELEASE
r2dbc-spi:0.8.6.RELEASE

return databaseClient.sql("select column_1, column_2 from table where column_3 = :value")
        .bind("value", "123")
        .fetch()
        .first()
        .flatMap(res -> Mono.just(res));

Did anyone face/see a similar issue? What could be the problem?

1 reply
AdarshSRM
@AdarshSRM
Hello, I am trying to find out whether there is sequence generation support with R2DBC (like Hibernate's @GeneratedValue with JDBC)? Thank you.
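As far as I know, Spring Data R2DBC has no @GeneratedValue equivalent; ids are usually database-generated (identity/serial columns), or you fetch the next sequence value yourself. A minimal sketch (the sequence name, entity, and constructor are hypothetical; Postgres syntax assumed):

// Sketch: fetch the next value from an assumed database sequence and
// assign it as the id before inserting with R2dbcEntityTemplate.
Mono<Long> nextId = databaseClient
        .sql("SELECT nextval('my_entity_seq')")
        .map((row, metadata) -> row.get(0, Long.class))
        .one();

Mono<MyEntity> saved = nextId
        .map(id -> new MyEntity(id, "some value")) // hypothetical entity
        .flatMap(template::insert);                // R2dbcEntityTemplate#insert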
sushovan24
@sushovan24

@inpeace_gitlab This is solved: there was a conflict between connect_timeout and idleTimeOut (idleTimeOut must be <= connect_timeout).

But now, whenever any error occurs, MySQL error 1129 comes up:

org.springframework.dao.DataAccessResourceFailureException: Failed to obtain R2DBC Connection; nested exception is io.r2dbc.spi.R2dbcNonTransientResourceException: [1129] Host '<myip>' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts')

Andreas Arifin
@andreas-trvlk
Hi all. I want to ask about the minIdle behavior of r2dbc-pool. I set this option in my project and expected it to behave like HikariCP, where the idle connection count is maintained at the configured size no matter how long the app runs. But that is not the case here. On the first try, the connection count dropped to zero after 30 minutes; we found this was because maxIdleTime defaults to 30 minutes. We set it to -1 to remove the eviction behavior. That looked good on the surface because the connection count stayed at the configured size, but when we tried to access the DB from our app the connections appeared to hang: the count stays at the configured size, but the connections seem to have lost their link to the DB because the requests time out. Is this a known issue, or was my expectation about the minIdle behavior wrong?
2 replies
vivek.singh02
@vivek.singh02:matrix.org
[m]

Hi all. Getting the error below when trying to use reactor-test:

13:36:18.316 [Test worker] DEBUG reactor.core.publisher.Operators - Duplicate Subscription has been detected
java.lang.IllegalStateException: Spec. Rule 2.12 - Subscriber.onSubscribe MUST NOT be called more than once (based on object equality)
StepVerifier.create(promotionMono.log())
        .expectSubscription()
        .expectNextMatches(validatePromotion(promotion))
        .verifyComplete(); // <-- the error is thrown on this line

I am using the same thread pool in different classes via @Qualifier.

Panusitt Khuenkham
@panusitt.khu_gitlab

Sorry, I have a problem, please help.

The error is below:

[1;31m[ERROR][2022-09-07T19:20:19,078][reactor-tcp-epoll-2][][BaseExceptionHandler] causeMsg: Request queue is full

AWS RDS MySQL 8
Spring Boot: 2.6.10
pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-r2dbc</artifactId>
</dependency>
<dependency>
    <groupId>dev.miku</groupId>
    <artifactId>r2dbc-mysql</artifactId>
    <version>0.8.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.r2dbc</groupId>
    <artifactId>r2dbc-spi</artifactId>
    <version>1.0.0.RELEASE</version>
</dependency>
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.79.Final</version>
</dependency>

Thank you so much
I tried adding -Dreactor.bufferSize.small=4096, but I don't know whether that fixes it in the long term.
Can you suggest a solution for me?
Panusitt Khuenkham
@panusitt.khu_gitlab
What should I calculate bufferSize.small from?
muratyildiz1976
@muratyildiz1976
Hello everybody, is it possible to stream a file (a byte array, or a FilePart) into a traditional relational database without loading the whole file into memory?
Mark Paluch
@mp911de
Only with some databases. It depends on whether the entire protocol frame must be assembled first (e.g. to calculate its size). SQL Server should work with streaming.
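Where the driver can stream, the SPI-level approach is to bind an io.r2dbc.spi.Blob built from a Publisher of ByteBuffers rather than a materialized byte array. A minimal sketch (the table, column names, and the SQL Server-style @name parameters are assumptions; filePart is a WebFlux FilePart):

// Sketch: feed the multipart upload's buffers into a streaming Blob binding,
// so the file is not assembled in memory first (driver support permitting).
Flux<ByteBuffer> bytes = filePart.content()
        .map(dataBuffer -> {
            byte[] copy = new byte[dataBuffer.readableByteCount()];
            dataBuffer.read(copy);
            DataBufferUtils.release(dataBuffer); // release the pooled buffer
            return ByteBuffer.wrap(copy);
        });

Mono<Void> insert = Mono.from(connection
                .createStatement("INSERT INTO documents (name, content) VALUES (@name, @content)")
                .bind("name", filePart.filename())
                .bind("content", Blob.from(bytes)) // io.r2dbc.spi.Blob
                .execute())
        .flatMapMany(Result::getRowsUpdated)
        .then();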
vgupta88
@vgupta88
Guys, new to the room. Wanted to ask something I am struggling with on r2dbc-pool. I want to pass the acquire-retry param to the r2dbc pool config from my YAML, but I was not able to. I tried adding it programmatically following the documentation and all looks good, but my initial size doesn't show up when I query Postgres for the number of connections.
    public ConnectionFactory connectionFactory() {
        ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(DRIVER, "pool")
                .option(PROTOCOL, "postgresql")
                .option(HOST, host)
                .option(PORT, port)
                .option(USER, userName)
                .option(PASSWORD, password)
                .option(DATABASE, dbName)
                .build());

        ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
                .initialSize(initialSize)
                .maxSize(maxSize)
                .minIdle(minIdle)
                .metricsRecorder(new R2DBCPoolMetrics())
                .maxValidationTime(Duration.ofMillis(maxValidationTime))
                .maxAcquireTime(Duration.ofMillis(maxAcquireTime))
                .acquireRetry(acquireRetry)
                .build();
        ConnectionPool connectionPool = new ConnectionPool(configuration);
        return connectionPool;
    }
This looks good, but if I check the number of connections in Postgres it doesn't reflect the initialSize I pass here.
Mark Paluch
@mp911de
You need to call the warmup() method to pre-initialize the pool.
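For illustration, a minimal sketch (warmup() returns a Mono with the number of connections created, so it must be subscribed; blocking once during bean creation is a pragmatic option):

// Sketch: open initialSize connections eagerly before traffic arrives.
ConnectionPool connectionPool = new ConnectionPool(configuration);
connectionPool.warmup().block(); // or .subscribe(), or chain it into startup logic
return connectionPool;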
¯\_(ツ)_/¯
@OhadShai_twitter
Question: is there a version dependency between r2dbc-spi and core?
I am asking because of jasync-sql/jasync-sql#310
Also, if someone could help me set up a basic integration test for the r2dbc module, that would be great.
5 replies
Mahender Gusain
@mahendergusain:matrix.org
[m]
I need to understand maxValidationTime (r2dbc-pool's ConnectionPoolConfiguration). Is it good to override maxValidationTime=200, or is it better to keep the default?
1 reply
peace
@inpeace_gitlab
@mp911de Would you be able to update the reactor core version on r2dbc-postgresql/1.0.0.RC1?
The currently specified version, 3.5.0-M2, is missing on Maven. I guess the Reactor people removed it for some reason.
1 reply
desember
@desember

Hi, did someone stumble upon this error in r2dbc?

executeMany; SQL [SELECT <skipped...>]; Connection unexpectedly closed; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Connection unexpectedly closed

It happens several times a day. After a retry the query returns valid data. I use a pooled connection created as:

     @Bean
    override fun connectionFactory(): ConnectionFactory {
        val factory = PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host(host)
                .port(port)
                .database(database)
                .username(username)
                .password(password)
                .sslMode(SSLMode.VERIFY_FULL)
                .codecRegistrar(
                    EnumCodec.builder()
                        .withEnum("allowed_status", Status::class.java)
                        .build()
                )
                .build()
        )
        return connectionPool(factory)
    }

    fun connectionPool(
        connectionFactory: ConnectionFactory
    ): ConnectionPool {
        val builder = ConnectionPoolConfiguration.builder(connectionFactory)
        builder.maxSize(poolMaxSize)
        builder.initialSize(poolInitialSize)
        builder.maxLifeTime(Duration.ofMillis(-1))
        return ConnectionPool(builder.build())
    }

I'm a bit suspicious that this behavior can be caused by some sort of ttl timeout. Can't prove it though.

1 reply
vivek.singh02
@vivek.singh02:matrix.org
[m]
Hi, instead of using @Transactional I am using .as(TransactionalOperator::transactional).
How can I use the equivalent of @Transactional(readOnly = true) with a TransactionalOperator?
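One option (a sketch, assuming a ReactiveTransactionManager bean is available; the repository call is hypothetical) is to build the TransactionalOperator from a read-only TransactionDefinition instead of the single-argument factory:

// Sketch: a read-only TransactionalOperator applied with .as(...).
DefaultTransactionDefinition definition = new DefaultTransactionDefinition();
definition.setReadOnly(true);

TransactionalOperator readOnlyTx =
        TransactionalOperator.create(reactiveTransactionManager, definition);

Flux<Promotion> promotions = promotionRepository.findAll() // hypothetical repository call
        .as(readOnlyTx::transactional);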
Hantsy Bai
@hantsy

Hi, I am using Spring Data R2DBC in my project. I want to add a transient field whose value is calculated from other fields, so I followed the Spring Data R2DBC reference doc, defined the field, and added @Value.

@Table("workers")
data class Worker(
    @Id
    val id: UUID? = null,

    @Column(value = "photo")
    var photo: String? = null,

    // see: https://github.com/spring-projects/spring-data-r2dbc/issues/449
    @Transient
    @Value("#{root.photo!=null}")
    val hasPhoto: Boolean = false
)

But hasPhoto is always false, even when I have set photo to a non-null string.

2 replies
mr-nothing
@mr-nothing

Hi there!
I'm using Spring R2DBC in my project and trying to make it work with a multiple-host/failover Postgres topology (I need to specify the DB URL like this: r2dbc:postgresql:failover://host1,host2,host3:port/).

I'm using Spring Boot 2.7.5, which brings in:
r2dbc-pool-0.9.2.RELEASE
r2dbc-spi-0.9.1.RELEASE
r2dbc-postgresql-0.9.2.RELEASE
As far as I understand, this set of R2DBC libs doesn't support failover yet.

So for the next step I tried to upgrade r2dbc-postgresql-0.9.2.RELEASE to 1.0.0.RC1, but I'm getting the following error:

class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')
java.lang.ClassCastException: class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')

as a result of executing simple delete query DELETE FROM my_table WHERE boolean_flag = $1

This indirectly suggests there is a compatibility issue between the R2DBC libs. Can anyone tell me whether there is a working set of these libraries for my case, or is waiting for the new Spring release my only option?

Any help is very appreciated, thank you!

Mark Paluch
@mp911de
R2DBC 1.0 is supported by Spring Framework 6/Spring Data R2DBC 3.0 as there's a binary compatibility change between R2DBC 0.9 and 1.0
1 reply
aeropagz
@aeropagz

(quoting @jbaddam17's earlier message about database calls getting stuck)

Hello together,
I have the same issue here.
I can work around it if I use a Pageable in the repository, and it works up to a page size of 80. Above 81 I get the following error message: "Could not read property @org.springframework.data.annotation.Id()private java.lang.Long de.fhkiel.ndbk.amazonapi.model.Order.id from column id!" and the endpoint returns a 500.

If I increase the page size to 149 it gets stuck, no error appears, and the HTTP request times out.

It is very weird that the result depends on the page size...

This is my service code:

@Transactional(readOnly = true)
    public Flux<Order> getAll(Pageable pageable) {
        return orderRepository.findBy(pageable)
                .concatMap(order -> Mono.just(order)
                        .zipWith(orderPositionRepository.findByOrderId(order.getId()).collectList())
                        .map(tuple -> tuple.getT1().withPositions(tuple.getT2()))
                );
    }

I would be very thankful for an explanation :) Probably I am messing something up....

Jens Geiregat
@jgrgt

Hi, we've recently been seeing this error in our logs when we put our application under load:

LEAK: DataRow.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
    io.r2dbc.postgresql.message.backend.DataRow.<init>(DataRow.java:37)
    io.r2dbc.postgresql.message.backend.DataRow.decode(DataRow.java:141)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decodeBody(BackendMessageDecoder.java:65)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decode(BackendMessageDecoder.java:39)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:208)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
    reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:279)
    reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:388)
    reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:404)
    reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
    io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
    io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.base/java.lang.Thread.run(Thread.java:833)

Does anybody have any hints to help us figure out where the problem is?

Mark Paluch
@mp911de
Actually, we've been trying to get hold of the buffer release issue for over a year now. Any help is appreciated. Cancellations could play a role in this.
Jens Geiregat
@jgrgt

@mp911de Googling told me that typically this happens with ByteBuffer. We can reproduce the above error in 1/3 of our load-test runs now.

So far our only lead is that we use some 'low-level' mapping of io.r2dbc.spi.Row to our own DTOs, and that is somewhat related to the DataRows. (We don't use Spring Data or anything like that, just plain R2DBC.)

Ibrahim Ates
@atesibrahim

Hey everybody, I have an issue with R2DBC's rowsUpdated(). I'm updating just one row in the database and expect it to return a count of 1 for updated rows, but rowsUpdated() returns 4 even though only one row is updated. The code sample is as follows:
public class RunQueryRepository implements RunQueryApiRepository {

    private final R2dbcEntityTemplate r2dbcEntityTemplate;

    public Mono<Integer> runCustomSqlQuery(String sqlQuery) {

        Mono<Integer> result = r2dbcEntityTemplate.getDatabaseClient().sql(sqlQuery)
                .fetch()
                .rowsUpdated();

        System.out.println(result.toFuture().get()); // prints 4

        return result;
    }
}

Does anyone know the reason and how to solve it? Many thanks.
Achille Nana Chimi
@nanachimi

Hey guys, I'm using R2DBC to persist data published by a Kafka stream. TransactionDetailData implements the Persistable<String> interface to indicate whether the record is new or not. The only way I have to confirm whether a record is new is to fetch that record by ID (topicKey) from the DB.

But my code below returns a DuplicateViolationException. The only explanation I can see is that at the time findById() is evaluated no record is found, but by the time the record reaches the save() call, a record with the same ID has already been saved.

I cannot just ignore that record since it is an update event with new values (apart from the ID). Is there any way to solve this with R2DBC? With JPA/JDBC this issue does not seem to occur.

    public Mono<TransactionDetailData> persist(TransactionDetailData transactionDetailData) {

        return transactionDetailRepository.findById(transactionDetailData.getTopicKey())
                .map(dataFound ->  {
                    transactionDetailData.setCreatedDate(dataFound.getCreatedDate());
                    transactionDetailData.setLastModifiedDate(LocalDateTime.now());
                    return transactionDetailData.asOld();
                })
                .switchIfEmpty(Mono.defer(() -> Mono.just(transactionDetailData.asNew())))
                .flatMap(transactionDetailRepository::save)
                .doOnError(throwable -> log.error("transaction detail error has occurred when persisting: isNew: " +
                        transactionDetailData.isNew() + " - " + transactionDetailData, throwable));
    }
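One way to tolerate the race (a sketch, not a confirmed fix; it assumes the failure surfaces as Spring's DuplicateKeyException) is to re-run the lookup-and-save path once when the insert collides, so the second attempt sees the concurrently inserted row and performs an update:

// Sketch: if the insert races with a concurrent writer and fails with a
// duplicate-key error, re-run the findById/save path once so the record
// is treated as an update on the second attempt.
public Mono<TransactionDetailData> persist(TransactionDetailData data) {
    return saveNewOrUpdate(data)
            .onErrorResume(DuplicateKeyException.class, e -> saveNewOrUpdate(data));
}

private Mono<TransactionDetailData> saveNewOrUpdate(TransactionDetailData data) {
    return transactionDetailRepository.findById(data.getTopicKey())
            .map(found -> {
                data.setCreatedDate(found.getCreatedDate());
                data.setLastModifiedDate(LocalDateTime.now());
                return data.asOld();
            })
            .switchIfEmpty(Mono.defer(() -> Mono.just(data.asNew())))
            .flatMap(transactionDetailRepository::save);
}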
Dhilli Nellepalli
@ndhilliprasad

Hey everyone, we are using r2dbc-postgresql (0.8.8.RELEASE) and r2dbc-pool (0.8.7.RELEASE) with Spring Boot 2.5.8. On a given day our app actively takes requests for close to 10-12 hours, and after that there are no requests (literally zero) for the rest of the time. On the subsequent day, when requests start hitting the app again, we see high latency for the first few requests (on average our latency is around 10 ms, whereas for these initial requests it is around 1 s). Our suspicion is that the DB connections have become stale and the pool is trying to clean up/re-establish the connections, which takes time.

Our pool configuration is below:

max-size: 10
max-idle-time: 30s
max-create-connection-time: 10s
max-acquire-time: 5s
max-life-time: 30m
initial-size: 2
validation-query: SELECT 1
validationDepth: REMOTE

Is there a way to tweak these configurations to keep the connections alive, or any other way to reduce the latency of the initial requests after a dark period? Thanks.

alxxyz
@alxxyz
Hi, is there a way to retrieve ConnectionPoolMXBean-style information about the pool, like getAcquiredSize?
alxxyz
@alxxyz
How do I retrieve an instance of the PoolMetrics interface from the R2DBC pool?
Mark Paluch
@mp911de
Via ConnectionPool.getPoolMetrics()
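For illustration, a minimal sketch (assuming the accessor is ConnectionPool#getMetrics() returning Optional<PoolMetrics>, which is what I'd expect in recent r2dbc-pool versions; adjust to your version):

// Sketch: read pool metrics such as the acquired connection count.
connectionPool.getMetrics().ifPresent(metrics -> {
    int acquired = metrics.acquiredSize();   // connections currently handed out
    int idle = metrics.idleSize();           // idle connections available in the pool
    int allocated = metrics.allocatedSize(); // total connections the pool holds
    System.out.printf("acquired=%d, idle=%d, allocated=%d%n", acquired, idle, allocated);
});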
nguyentthai96
@nguyentthai96
Kotlin, r2dbc-mssql, on update: java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')
nguyentthai96
@nguyentthai96
private static Mono<Integer> sumRowsUpdated(
        Function<Connection, Flux<Result>> resultFunction, Connection it) {
    return resultFunction.apply(it)
            .flatMap(Result::getRowsUpdated)
            .collect(Collectors.summingInt(Integer::intValue));
}
spring-r2dbc-5.3.23.jar
r2dbc-spi-0.9.1.RELEASE.jar; downgraded io.r2dbc:r2dbc-mssql to io.r2dbc:r2dbc-mssql:0.9.0.RELEASE
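For context, this helper appears to come from spring-r2dbc 5.3.x, which targets the 0.9 SPI where Result.getRowsUpdated() emits Integer; SPI 1.0 drivers emit Long instead, hence the ClassCastException and why downgrading the driver helps. On the 1.0 SPI (handled by Spring Framework 6's spring-r2dbc), the summation would look roughly like this sketch:

// Sketch: the SPI 1.0 flavor, where getRowsUpdated() emits Long values.
private static Mono<Long> sumRowsUpdated(
        Function<Connection, Flux<Result>> resultFunction, Connection it) {
    return resultFunction.apply(it)
            .flatMap(Result::getRowsUpdated)
            .collect(Collectors.summingLong(Long::longValue));
}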
Aviram Birenbaum
@abiren
How can I set a query timeout in r2dbc-mysql?
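One portable approach is the SPI-level statement timeout added in R2DBC SPI 0.9, Connection.setStatementTimeout(Duration); whether it is actually enforced depends on the driver (dev.miku's 0.8.x line predates it), so treat driver support as an assumption to verify. A minimal sketch against an SPI 0.9+ connection:

// Sketch: set a per-connection statement timeout via the R2DBC SPI (0.9+).
// Connection cleanup is omitted for brevity; verify your driver honors the timeout.
Mono.from(connectionFactory.create())
        .flatMap(connection -> Mono.from(connection.setStatementTimeout(Duration.ofSeconds(5)))
                .thenReturn(connection))
        .flatMapMany(connection -> connection.createStatement("SELECT 1").execute())
        .subscribe();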