Panusitt Khuenkham
@panusitt.khu_gitlab
Can you suggest a solution for me?
Panusitt Khuenkham
@panusitt.khu_gitlab
What should I calculate from?
bufferSize.small
muratyildiz1976
@muratyildiz1976
Hello everybody, is it possible to stream a file (byte array, or from FilePart.class) into a traditional relational database without loading the whole file into memory?
Mark Paluch
@mp911de
Only with some databases. It depends on whether the entire protocol frame must be assembled first (e.g. to calculate its size). SQL Server should work with streaming.
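For drivers that do support it, the R2DBC SPI's Blob type lets you bind a stream of buffers rather than a materialized byte array. A minimal sketch, assuming a streaming-capable driver; the table name, column name, and parameter syntax are hypothetical:

```java
// Sketch: stream file content into a BLOB column via io.r2dbc.spi.Blob.
// Assumes the driver streams large objects instead of assembling the
// whole frame; "documents"/"content" and the @content parameter syntax
// are placeholders.
import io.r2dbc.spi.Blob;
import io.r2dbc.spi.Connection;
import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;

import java.nio.ByteBuffer;

public class BlobStreamingExample {

    public static Mono<Void> saveDocument(Connection connection,
                                          Publisher<ByteBuffer> fileChunks) {
        // Blob.from wraps the chunk publisher; chunks are pulled on demand
        // rather than buffered into a single array in memory.
        Blob blob = Blob.from(fileChunks);

        return Mono.from(connection
                        .createStatement("INSERT INTO documents (content) VALUES (@content)")
                        .bind("content", blob)
                        .execute())
                .flatMap(result -> Mono.from(result.getRowsUpdated()))
                .then();
    }
}
```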
vgupta88
@vgupta88
Guys, new to the room. Wanted to ask something I am struggling with on r2dbc-pool. I want to pass the acquire-retry param to the r2dbc pool config from my YAML but was not able to. I tried adding it programmatically following the documentation and all looks good, but my initial size isn't reflected in my Postgres query that checks the number of connections.
    public ConnectionFactory connectionFactory() {
        ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
                .option(DRIVER, "pool")
                .option(PROTOCOL, "postgresql")
                .option(HOST, host)
                .option(PORT, port)
                .option(USER, userName)
                .option(PASSWORD, password)
                .option(DATABASE, dbName)
                .build());

        ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
                .initialSize(initialSize)
                .maxSize(maxSize)
                .minIdle(minIdle)
                .metricsRecorder(new R2DBCPoolMetrics())
                .maxValidationTime(Duration.ofMillis(maxValidationTime))
                .maxAcquireTime(Duration.ofMillis(maxAcquireTime))
                .acquireRetry(acquireRetry)
                .build();
        return new ConnectionPool(configuration);
    }
This looks good, but if I check the number of connections in Postgres it doesn't reflect the initialSize I pass here.
Mark Paluch
@mp911de
You need to call the warmup() method to pre-initialize the pool
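A minimal sketch of that suggestion (the connection URL and sizes are placeholders): warmup() eagerly opens initialSize connections, whereas without it the pool fills lazily on first acquisition:

```java
// Sketch: pre-initialize an r2dbc-pool so initialSize connections are
// opened eagerly instead of on first use. URL and sizes are placeholders.
import io.r2dbc.pool.ConnectionPool;
import io.r2dbc.pool.ConnectionPoolConfiguration;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;

public class PoolWarmup {

    public static ConnectionPool createAndWarmUp(String url) {
        ConnectionFactory factory = ConnectionFactories.get(url);
        ConnectionPool pool = new ConnectionPool(
                ConnectionPoolConfiguration.builder(factory)
                        .initialSize(5)
                        .maxSize(10)
                        .build());

        // warmup() emits the number of connections created; block here only
        // in startup code, never on a reactive request path.
        pool.warmup().block();
        return pool;
    }
}
```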
¯\_(ツ)_/¯
@OhadShai_twitter
Question: is there a version dependency between r2dbc spi & core?
I am asking because of jasync-sql/jasync-sql#310
Also, if someone can help me set up a basic integration test for the r2dbc module, that would be great.
5 replies
Mahender Gusain
@mahendergusain:matrix.org
[m]
I need to understand maxValidationTime (r2dbc connection pool, ConnectionPoolConfiguration). Is it good to override maxValidationTime=200, or is it better to keep the default?
1 reply
peace
@inpeace_gitlab
@mp911de Would you be able to update the reactor core version on r2dbc-postgresql/1.0.0.RC1?
The currently specified version 3.5.0-M2 is missing on Maven. I guess the Reactor people removed it for some reason.
1 reply
desember
@desember

Hi, did someone stumble upon this error in r2dbc?

executeMany; SQL [SELECT <skipped...>]; Connection unexpectedly closed; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Connection unexpectedly closed

It happens several times a day. After a retry, the query returns valid data. I use a pooled connection created as

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        val factory = PostgresqlConnectionFactory(
            PostgresqlConnectionConfiguration.builder()
                .host(host)
                .port(port)
                .database(database)
                .username(username)
                .password(password)
                .sslMode(SSLMode.VERIFY_FULL)
                .codecRegistrar(
                    EnumCodec.builder()
                        .withEnum("allowed_status", Status::class.java)
                        .build()
                )
                .build()
        )
        return connectionPool(factory)
    }

    fun connectionPool(
        connectionFactory: ConnectionFactory
    ): ConnectionPool {
        val builder = ConnectionPoolConfiguration.builder(connectionFactory)
        builder.maxSize(poolMaxSize)
        builder.initialSize(poolInitialSize)
        builder.maxLifeTime(Duration.ofMillis(-1))
        return ConnectionPool(builder.build())
    }

I'm a bit suspicious that this behavior can be caused by some sort of ttl timeout. Can't prove it though.

1 reply
vivek.singh02
@vivek.singh02:matrix.org
[m]
Hi, instead of using @Transactional I am using .as(TransactionalOperator::transactional).
How can I use @Transactional(readOnly = true) with a TransactionalOperator?
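One way, sketched below under the assumption that a ReactiveTransactionManager bean is available: TransactionalOperator.create also accepts a TransactionDefinition, which carries the read-only flag.

```java
// Sketch: a read-only TransactionalOperator, the programmatic analogue
// of @Transactional(readOnly = true).
import org.springframework.transaction.ReactiveTransactionManager;
import org.springframework.transaction.reactive.TransactionalOperator;
import org.springframework.transaction.support.DefaultTransactionDefinition;

public class ReadOnlyTxExample {

    public static TransactionalOperator readOnlyOperator(ReactiveTransactionManager tm) {
        DefaultTransactionDefinition definition = new DefaultTransactionDefinition();
        definition.setReadOnly(true);
        // The readOnly flag from the definition applies to every
        // publisher wrapped by this operator.
        return TransactionalOperator.create(tm, definition);
    }
}
```

You would then apply it as before, e.g. query.as(readOnlyOperator::transactional) (call site hypothetical).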
Hantsy Bai
@hantsy

Hi, I am using Spring Data R2DBC in my project. I want to add a temp field whose value is calculated from other fields, so, following the Spring Data R2DBC reference doc, I define the field and add a @Value annotation.

@Table("workers")
data class Worker(
    @Id
    val id: UUID? = null,

    @Column(value = "photo")
    var photo: String? = null,

    // see: https://github.com/spring-projects/spring-data-r2dbc/issues/449
    @Transient
    @Value("#{root.photo!=null}")
    val hasPhoto: Boolean = false
)

But hasPhoto is always false, even though I have set photo to a non-null string.

2 replies
mr-nothing
@mr-nothing

Hi there!
I'm using spring r2dbc in my project and trying to make it work with multiple host/failover postgres topology (need to specify db url like this: r2dbc:postgresql:failover://host1,host2,host3:port/)

I'm using 2.7.5 version of spring boot, and get:
r2dbc-pool-0.9.2.RELEASE
r2dbc-spi-0.9.1.RELEASE
r2dbc-postgresql-0.9.2.RELEASE
as a part of spring boot.
As far as I understand this set of r2dbc libs doesn't support failover yet.

So for the next step I was trying to upgrade r2dbc-postgresql-0.9.2.RELEASE to 1.0.0.RC1 but I'm getting the following error:

class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')
java.lang.ClassCastException: class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')

as a result of executing simple delete query DELETE FROM my_table WHERE boolean_flag = $1

Which indirectly says there is some compatibility issue between the r2dbc libs. Can anyone tell me whether there is a working set of these libs for my case, or is waiting for the new Spring release my only option?

Any help is very appreciated, thank you!

Mark Paluch
@mp911de
R2DBC 1.0 is supported by Spring Framework 6 / Spring Data R2DBC 3.0, as there's a binary compatibility change between R2DBC 0.9 and 1.0.
1 reply
aeropagz
@aeropagz

Database calls are getting stuck. I'm not sure whether they are making it to the DB host or not, but I see the log statement below when I enable TRACE logs on the io.r2dbc package; after that, nothing.

491156 --- [or-http-epoll-5] io.r2dbc.mssql.QUERY [ 250] : Executing query: select column_1, column_2 from table where column_3 = 123

Using Springboot 2.7.2 with following

r2dbc-mssql:0.8.8.RELEASE
r2dbc-pool:0.8.8.RELEASE
r2dbc-spi:0.8.6.RELEASE

return databaseClient.sql("select column_1, column_2 from table where column_3 = :value")
        .bind("value", "123")
        .fetch()
        .first()
        .flatMap(res -> Mono.just(res));

Did anyone face/see a similar issue? What could be the problem?

Hello everyone,
I have the same issue here.
I can fix it if I use a pageable in the repository, and it works up to a page size of 80. Above 81 I get the following error message: "Could not read property @org.springframework.data.annotation.Id()private java.lang.Long de.fhkiel.ndbk.amazonapi.model.Order.id from column id!" and it returns a 500.

If I increase the page size to 149 it gets stuck, no error appears, and the HTTP request times out.

It is very weird that the result depends on the page size...

This is my service code:

@Transactional(readOnly = true)
    public Flux<Order> getAll(Pageable pageable) {
        return orderRepository.findBy(pageable)
                .concatMap(order -> Mono.just(order)
                        .zipWith(orderPositionRepository.findByOrderId(order.getId()).collectList())
                        .map(tuple -> tuple.getT1().withPositions(tuple.getT2()))
                );
    }

I would be very thankful for an explanation :) Probably I am messing something up....

Jens Geiregat
@jgrgt

Hi, we've recently been seeing this error in our logs when we put our application under load:

LEAK: DataRow.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 
Created at:
    io.r2dbc.postgresql.message.backend.DataRow.<init>(DataRow.java:37)
    io.r2dbc.postgresql.message.backend.DataRow.decode(DataRow.java:141)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decodeBody(BackendMessageDecoder.java:65)
    io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decode(BackendMessageDecoder.java:39)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:208)
    reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
    reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:279)
    reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:388)
    reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:404)
    reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
    io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
    io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
    io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
    io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
    io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    java.base/java.lang.Thread.run(Thread.java:833)

Does anybody have any hints to help us figure out where the problem is?

Mark Paluch
@mp911de
Actually, we've been trying to get hold of the buffer-release issue for over a year now. Any help is appreciated. Cancellations could play a role in this.
Jens Geiregat
@jgrgt

@mp911de Googling told me that this typically happens with ByteBuffer. We can reproduce the above error in 1/3 of our load-test runs now.

So far our only lead is that we use some 'low-level' mapping of io.r2dbc.spi.Row to our own DTOs, and that is somewhat related to the DataRows. (We don't use Spring Data or anything like that. Just plain R2DBC.)

Ibrahim Ates
@atesibrahim

Hey everybody, I have an issue with R2DBC rowsUpdated(). I'm trying to update just one row in the database and expect it to return a count of 1 for updated rows. But rowsUpdated() returns 4 even though only one row is updated. The code sample is as follows:
public class RunQueryRepository implements RunQueryApiRepository {

    private final R2dbcEntityTemplate r2dbcEntityTemplate;

    public Mono<Integer> runCustomSqlQuery(String sqlQuery) {

        Mono<Integer> result = r2dbcEntityTemplate.getDatabaseClient().sql(sqlQuery)
                .fetch()
                .rowsUpdated();

        System.out.println(result.toFuture().get()); // prints 4

        return result;
    }
}

Does anyone know the reason and how to solve it? Many thanks
Achille Nana Chimi
@nanachimi

Hey guys, I'm using R2DBC to persist data published by a Kafka stream. TransactionDetailData implements the interface Persistable<String> to indicate whether the record is new or not. The only way I have to confirm whether a record is new is to fetch that record by ID (topicKey) from the DB.

But my code below returns a DuplicateViolationException. The only explanation I can see is that, by the time findById() is evaluated, no record is found, but by the time execution reaches the save() call, a record with the same ID has already been saved.

I cannot just ignore that record since it's an update event with new values (apart from the ID). Is there any way to solve this with R2DBC? With JPA/JDBC this issue does not seem to occur.

    public Mono<TransactionDetailData> persist(TransactionDetailData transactionDetailData) {

        return transactionDetailRepository.findById(transactionDetailData.getTopicKey())
                .map(dataFound ->  {
                    transactionDetailData.setCreatedDate(dataFound.getCreatedDate());
                    transactionDetailData.setLastModifiedDate(LocalDateTime.now());
                    return transactionDetailData.asOld();
                })
                .switchIfEmpty(Mono.defer(() -> Mono.just(transactionDetailData.asNew())))
                .flatMap(transactionDetailRepository::save)
                .doOnError(throwable -> log.error("transaction detail error has occurred when persisting: isNew: " +
                        transactionDetailData.isNew() + " - " + transactionDetailData, throwable));
    }
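One way around that find-then-save race is to let the database arbitrate with a single atomic upsert instead of two round trips. A sketch using Spring's DatabaseClient with a Postgres ON CONFLICT clause; the table and column names are hypothetical, and the Mono<Long> return type assumes the Spring Framework 6 signature of rowsUpdated():

```java
// Sketch: atomic upsert to avoid the findById()/save() race under
// concurrent Kafka events. Table/column names are hypothetical; adapt
// the SQL to the real schema.
import org.springframework.r2dbc.core.DatabaseClient;
import reactor.core.publisher.Mono;

import java.time.LocalDateTime;

public class TransactionDetailUpsert {

    private final DatabaseClient databaseClient;

    public TransactionDetailUpsert(DatabaseClient databaseClient) {
        this.databaseClient = databaseClient;
    }

    public Mono<Long> upsert(String topicKey, String payload) {
        // ON CONFLICT makes insert-or-update one atomic statement, so a
        // concurrent writer can no longer sneak in between the existence
        // check and the save.
        return databaseClient.sql("""
                INSERT INTO transaction_detail (topic_key, payload, created_date, last_modified_date)
                VALUES (:key, :payload, :now, :now)
                ON CONFLICT (topic_key)
                DO UPDATE SET payload = EXCLUDED.payload,
                              last_modified_date = EXCLUDED.last_modified_date
                """)
                .bind("key", topicKey)
                .bind("payload", payload)
                .bind("now", LocalDateTime.now())
                .fetch()
                .rowsUpdated();
    }
}
```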
Dhilli Nellepalli
@ndhilliprasad

Hey everyone, we are using r2dbc-postgres (0.8.8.RELEASE) and r2dbc-pool (0.8.7.RELEASE) with Spring Boot 2.5.8. On a given day our app actively takes requests for close to 10-12 hours, and afterwards there are no requests (literally zero) for the rest of the time. On the subsequent day, when requests start hitting the app again, we see high latency for the first few requests (on average our latency is around 10 ms, whereas for these initial requests it is around 1 s). Our suspicion is that the DB connections have become stale and the pool is trying to clean up/re-establish the connections, which takes time.

Our pool configuration is as below:

max-size: 10
max-idle-time: 30s
max-create-connection-time: 10s
max-acquire-time: 5s
max-life-time: 30m
initial-size: 2
validation-query: SELECT 1
validationDepth: REMOTE

Is there a way to tweak these configurations to keep the connections alive, or any other way to reduce the latency of the initial request after a dark period? Thanks
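One hedged option, using r2dbc-pool's min-idle setting (values below are illustrative, not a recommendation): keep a floor of warm, background-validated connections so the first morning request does not pay the reconnect cost. Whether this fully removes the spike also depends on any proxy/NAT idle timeouts between the app and the database.

```yaml
max-size: 10
initial-size: 2
min-idle: 2             # keep at least two warm connections at all times
max-idle-time: 30m      # ideally longer than intermediate idle timeouts
max-life-time: 30m
validation-query: SELECT 1
```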

alxxyz
@alxxyz
Hi, is there a way to retrieve ConnectionPoolMXBean-like information about the pool, such as getAcquiredSize?
alxxyz
@alxxyz
How to retrieve an instance of PoolMetrics interface from R2DBC pool?
Mark Paluch
@mp911de
Via ConnectionPool.getPoolMetrics()
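A small sketch of that call: getPoolMetrics() returns an Optional<PoolMetrics> (empty if metrics are unavailable), whose gauges cover the acquired/allocated/idle/pending sizes.

```java
// Sketch: reading pool gauges via ConnectionPool.getPoolMetrics().
import io.r2dbc.pool.ConnectionPool;

public class PoolMetricsExample {

    public static void logMetrics(ConnectionPool pool) {
        pool.getPoolMetrics().ifPresent(metrics -> {
            System.out.println("acquired:  " + metrics.acquiredSize());
            System.out.println("allocated: " + metrics.allocatedSize());
            System.out.println("idle:      " + metrics.idleSize());
            System.out.println("pending:   " + metrics.pendingAcquireSize());
        });
    }
}
```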
nguyentthai96
@nguyentthai96
kotlin r2dbc-mssql update java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')
nguyentthai96
@nguyentthai96
private static Mono<Integer> sumRowsUpdated(
        Function<Connection, Flux<Result>> resultFunction, Connection it) {
    return resultFunction.apply(it)
            .flatMap(Result::getRowsUpdated)
            .collect(Collectors.summingInt(Integer::intValue));
}
spring-r2dbc-5.3.23.jar
r2dbc-spi-0.9.1.RELEASE.jar, io.r2dbc:r2dbc-mssql downgrade to io.r2dbc:r2dbc-mssql:0.9.0.RELEASE
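The ClassCastException above matches a known version mismatch: in R2DBC SPI 1.0, Result.getRowsUpdated() emits Long, so code compiled against 0.9 that sums Integer values breaks at runtime. Besides downgrading, a version-tolerant variant can sum through Number; a sketch:

```java
// Sketch: sum rows-updated counts without assuming the element type.
// In R2DBC SPI 0.9 getRowsUpdated() emits Integer, in 1.0 it emits Long;
// going through Number avoids the ClassCastException either way.
import io.r2dbc.spi.Connection;
import io.r2dbc.spi.Result;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import java.util.function.Function;
import java.util.stream.Collectors;

public class RowsUpdatedSum {

    static Mono<Long> sumRowsUpdated(
            Function<Connection, Flux<Result>> resultFunction, Connection connection) {
        return resultFunction.apply(connection)
                .flatMap(Result::getRowsUpdated)
                .cast(Number.class)
                .collect(Collectors.summingLong(Number::longValue));
    }
}
```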
Aviram Birenbaum
@abiren
How can I set a query timeout in r2dbc-mysql?
Santwana Verma
@santwanav

Hello
I am using AWS IAM DB authentication to authenticate with the DB, and the token for that lasts 15 minutes. Since there is no way in r2dbc-pool to update only the password field dynamically, what I do is close the connection pool every 14 minutes and start a new one. The max pool size has been set to 20. What I suspect is that when I close the connection pool, the connections aren't really being closed: from monitoring the number of DB connections on the AWS side, I can see a steady increase in connections, which reaches a point and then drops suddenly.

This is my code:

this.connectionPool.close().doOnSuccess(unused -> {
          .......
          this.connectionPool = new ConnectionPool(configuration);
        }).doOnError(err -> {
          .......
          this.connectionPool = new ConnectionPool(configuration);
        }).subscribe();

Does anyone have any idea what might be happening here?

2 replies
mafei
@mafei-dev
Hantsy Bai
@hantsy
I have R2dbc Postgres question, and I created a discussion on the R2dbc postgres repository, check https://github.com/pgjdbc/r2dbc-postgresql/discussions/574
Any help is appreciated.
David Ankin
@alexanderankin
Hi all, I'm trying to implement R2DBC support in Liquibase. One thing I'm noticing is that in JDBC you can specify the driver class name. I don't have experience writing JDBC libraries, so I'm not sure what the specifics are; I've never dealt with it at a low level. Are there any specific differences I should know about when selecting a driver by class name in R2DBC as opposed to JDBC?
David Ankin
@alexanderankin

Please also note that JDBC and R2DBC aren't compatible to each other so driver-class-name is not applicable in the r2dbc context.

found this, got it

Aditya Tolety
@AdityaTolety
I am doing performance testing for my Spring Boot application, which is configured with an r2dbc connection pool. When I start my Spring Boot app, the first request I fire takes around 3 seconds, which is very high, while subsequent requests take less time, around 230 milliseconds.
harinathb
@harinathb
Hi All,
I'm trying to implement a POC with Spring Boot 2.3.6.RELEASE, R2DBC, reactor-core, and Azure SQL (mssql). I want to fetch data as a reactive Flux from 4-5 tables using normal SQL joins, but I'm not able to fetch the entire dataset; I only get 20-30 records out of 100K. I don't see any errors at the application level; it looks like the thread went into a waiting state. Can somebody help/suggest on this? Also, please suggest stable versions of r2dbc and the Azure SQL (mssql) driver.
andrewGitHub
@andrewGitHub

Hi All,
Just wondering if anyone has any ideas on how to answer this question, I'm facing the same problem:

Trying to use AWS IAM with r2dbc-postgres (no obvious way to periodically swap passwords or to swap passwords on authentication failure):

https://stackoverflow.com/questions/74364988/iam-authentication-with-r2dbc-postgresql

Would be much appreciated. Thank you!