Hi All,
Trying to insert a string into a column whose type is an enum inside a PostgreSQL table: .bind("address_type", Parameter.fromOrEmpty(address.getHomeAddress().toString(), String.class))
Getting io.r2dbc.postgresql.ExceptionFactory$PostgresqlBadGrammarException: column "address_type" is of type addresstype but expression is of type character varying
HomeAddress is an enum here: {HOME, OFFICE, OTHERS}
Hey! I am trying to save an entity that has an OffsetDateTime timestamp in a Postgres DB with Spring Data. It works fine in my dev environment, but when I run it inside a Docker container I get this exception:
Caused by: java.lang.ClassCastException: class java.time.OffsetDateTime cannot be cast to class java.time.LocalTime (java.time.OffsetDateTime and java.time.LocalTime are in module java.base of loader 'bootstrap')
at io.r2dbc.postgresql.codec.BuiltinCodecSupport.encodeToText(BuiltinCodecSupport.java:83)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.MonoSupplier] :
reactor.core.publisher.Mono.fromSupplier
io.r2dbc.postgresql.codec.AbstractCodec.create(AbstractCodec.java:150)
Error has been observed at the following site(s):
I am not sure why, as the r2dbc-postgresql driver should support this data type. Does anyone have a suggestion to solve this issue?
INSERT INTO table_name (enum_column_name) VALUES (CAST(:value AS enum_type));
:)
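For the enum question above, a minimal sketch of applying that cast with DatabaseClient; the table name "addresses" and the enum type "addresstype" are assumptions taken from the error message:
databaseClient.sql("INSERT INTO addresses (address_type) VALUES (CAST(:address_type AS addresstype))")
        .bind("address_type", address.getHomeAddress().toString()) // bound as varchar, cast server-side
        .then()
        .subscribe();
Alternatively, r2dbc-postgresql ships an EnumCodec (it appears further down in this thread) that lets you bind the enum value directly without a cast.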
.doOnSuccess(
o -> {
// fire-and-forget: subscribe to the logging Mono returned by the method below
this.logEventToDatabase(Event.builder()
.message("Updated contact information.")
.build())
.subscribe();
}
);
private Mono<Event> logEventToDatabase(final Event event) {
return this.eventRepository.save(event).publishOn(Schedulers.boundedElastic());
}
Hi everyone. I have recently upgraded r2dbc-postgresql from 0.8.x to 0.9.1.RELEASE and have started getting "Bound parameter count does not match parameters in SQL statement",
which was not the case with the previous version. Below is how I am passing the parameters; requesting the team's help on this.
@Transactional
public Flux<Integer> unblockInventory(InventoryRequest request, boolean orderFulfilled) {
return this.databaseClient.inConnectionMany(connection -> {
var statement = connection.createStatement(UNBLOCK_INVENTORY_QUERY);
for (var item : request.getOrderItems()) {
statement
.bind(0, request.getSource() == UpdateSource.AMAZON ? item.getQuantity() : 0)
.bind(1, request.getSource() == UpdateSource.APOLLO247 ? item.getQuantity() : 0)
.bind(2, orderFulfilled ? 1 : 0)
.bind(3, Utility.getKey(item.getSku(), request.getStoreId()))
.bind(4, request.getAxdcCode())
.add();
}
return Flux
.from(statement.execute())
.flatMap(Result::getRowsUpdated)
.map(e -> {
if (e == 0) {
throw new InventoryNotUnblockedException(request.toString());
}
return e;
})
.doOnError(ex -> LOGGER.error(() -> MessageUtils.errorMessage(Event.UNBLOCK_INVENTORY_FAILED,
ex.getMessage(), ex, false)));
});
}
Below is the query:
private static final String UNBLOCK_INVENTORY_QUERY = """
UPDATE item_inventory AS iv
SET
amazon_reserved = CASE
WHEN (iv.amazon_reserved - $1) < 0 THEN 0 ELSE iv.amazon_reserved - $1
END,
apollo_reserved = CASE
WHEN (iv.apollo_reserved - $2) < 0 THEN 0 ELSE iv.apollo_reserved - $2
END,
quantity = CASE
WHEN $3 = 1 THEN iv.quantity - $1 - $2 ELSE iv.quantity
END,
version = iv.version + 1,
updated_at = NOW()
WHERE id = $4 AND iv.axdc_code = $5
""";
This is after I updated to:
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>r2dbc-postgresql</artifactId>
<version>0.9.1.RELEASE</version>
</dependency>
Database calls are getting stuck. I am not sure whether they are making it to the DB host or not, but I see the log statement below when I enable TRACE logs on the io.r2dbc package; after that, nothing.
491156 --- [or-http-epoll-5] io.r2dbc.mssql.QUERY [ 250] : Executing query: select column_1, column_2 from table where column_3 = 123
Using Springboot 2.7.2 with following
r2dbc-mssql:0.8.8.RELEASE
r2dbc-pool:0.8.8.RELEASE
r2dbc-spi:0.8.6.RELEASE
return databaseClient.sql("select column_1, column_2 from table where column_3 = :value")
    .bind("value", "123")
    .fetch()
    .first()
    .flatMap(res -> Mono.just(res));
Did anyone face a similar issue? What could be the problem?
@inpeace_gitlab This is solved: there is a conflict between connect_timeout and idleTimeOut; make sure idleTimeOut <= connect_timeout.
org.springframework.dao.DataAccessResourceFailureException: Failed to obtain R2DBC Connection; nested exception is io.r2dbc.spi.R2dbcNonTransientResourceException: [1129] Host '<myip>' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts')
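A sketch of that rule as pool configuration, assuming maxIdleTime is the pool-side idle timeout meant above and a server-side connect_timeout of 30 s (both assumptions; adjust to your own values):
ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
        // Evict idle connections before the server-side timeout can close them underneath us.
        .maxIdleTime(Duration.ofSeconds(20))
        .build();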
I want to ask about the minIdle behavior on r2dbc-pool. I have set this option in my project and my expectation was that it would behave the same as HikariCP, where the idle size is maintained at the configured value no matter how long the app runs. But that is not the case here. On the first try, the connection count dropped to zero after 30 minutes. We found that this is because maxIdleTime defaults to 30 minutes. We set it to -1 to remove the eviction behavior. That looked good on the surface because the connection count stayed at the configured size, but when we tried to access the DB from our app the connections seemed to hang: the pool stays at the configured size but appears to have lost its connections to the DB, because the process timed out. Is this a known issue, or was my expectation about the minIdle behavior wrong?
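For reference, a sketch of the combination discussed above; the builder methods are real r2dbc-pool options, while the values and the validation choice are assumptions:
ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
        .minIdle(5)                          // idle floor, analogous to HikariCP's minimumIdle
        .maxIdleTime(Duration.ofMillis(-1))  // disables idle eviction, as described above
        .validationQuery("SELECT 1")         // re-check connections at acquire time so a dead
                                             // idle connection is replaced instead of handed out
        .build();
Validating on acquire (or keeping a finite maxIdleTime) may handle connections that die while sitting idle, which sounds like the hang described above.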
Hi All. Getting the below error when trying to use reactor-test:
13:36:18.316 [Test worker] DEBUG reactor.core.publisher.Operators - Duplicate Subscription has been detected
java.lang.IllegalStateException: Spec. Rule 2.12 - Subscriber.onSubscribe MUST NOT be called more than once (based on object equality)
StepVerifier.create(promotionMono.log())
    .expectSubscription()
    .expectNextMatches(validatePromotion(promotion))
    .verifyComplete(); // the error comes from this line
I am using the same thread pool in different classes via @Qualifier.
Sorry, I have a problem, please help.
I have the problem below:
[ERROR][2022-09-07T19:20:19,078][reactor-tcp-epoll-2][][BaseExceptionHandler] causeMsg: Request queue is full
AWS RDS MySQL 8
Spring Boot: 2.6.10
pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-r2dbc</artifactId>
</dependency>
<dependency>
<groupId>dev.miku</groupId>
<artifactId>r2dbc-mysql</artifactId>
<version>0.8.2.RELEASE</version>
</dependency>
<dependency>
<groupId>io.r2dbc</groupId>
<artifactId>r2dbc-spi</artifactId>
<version>1.0.0.RELEASE</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.1.79.Final</version>
</dependency>
I tried to add the acquire-retry param to the r2dbc pool config from my YAML, but I was not able to. I tried adding it programmatically following the documentation and it all looks good, but my initial size doesn't reflect in the Postgres query I use to check the number of connections.
public ConnectionFactory connectionFactory() {
ConnectionFactory connectionFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
.option(DRIVER, "pool")
.option(PROTOCOL, "postgresql")
.option(HOST, host)
.option(PORT, port)
.option(USER, userName)
.option(PASSWORD, password)
.option(DATABASE, dbName)
.build());
ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
.initialSize(initialSize)
.maxSize(maxSize)
.minIdle(minIdle)
.metricsRecorder(new R2DBCPoolMetrics())
.maxValidationTime(Duration.ofMillis(maxValidationTime))
.maxAcquireTime(Duration.ofMillis(maxAcquireTime))
.acquireRetry(acquireRetry)
.build();
ConnectionPool connectionPool = new ConnectionPool(configuration);
return connectionPool;
}
This looks good, but if I check the number of connections in Postgres it doesn't reflect the initialSize I pass here.
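Two things worth checking in the snippet above (observations, not a confirmed diagnosis): first, ConnectionFactories.get with DRIVER "pool" already returns a pooled ConnectionFactory, so wrapping it in another ConnectionPool stacks two pools, and the initialSize of the outer one is not necessarily what Postgres sees; second, r2dbc-pool creates connections lazily, so Postgres shows nothing until the first acquire or an explicit warmup(). A sketch with a single pooling layer:
public ConnectionFactory connectionFactory() {
    // Raw, non-pooled Postgres factory: DRIVER is "postgresql", no PROTOCOL option needed.
    ConnectionFactory postgresFactory = ConnectionFactories.get(ConnectionFactoryOptions.builder()
            .option(DRIVER, "postgresql")
            .option(HOST, host)
            .option(PORT, port)
            .option(USER, userName)
            .option(PASSWORD, password)
            .option(DATABASE, dbName)
            .build());
    // Pool it exactly once.
    ConnectionPool pool = new ConnectionPool(ConnectionPoolConfiguration.builder(postgresFactory)
            .initialSize(initialSize)
            .maxSize(maxSize)
            .minIdle(minIdle)
            .build());
    pool.warmup().subscribe(); // eagerly open initialSize connections so they show up in pg_stat_activity
    return pool;
}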
Hi, did someone stumble upon this error in r2dbc?
executeMany; SQL [SELECT <skipped...>]; Connection unexpectedly closed; nested exception is io.r2dbc.postgresql.client.ReactorNettyClient$PostgresConnectionClosedException: Connection unexpectedly closed
It happens several times a day. After a retry the query returns valid data. I use a pooled connection created as:
@Bean
override fun connectionFactory(): ConnectionFactory {
val factory = PostgresqlConnectionFactory(
PostgresqlConnectionConfiguration.builder()
.host(host)
.port(port)
.database(database)
.username(username)
.password(password)
.sslMode(SSLMode.VERIFY_FULL)
.codecRegistrar(
EnumCodec.builder()
.withEnum("allowed_status", Status::class.java)
.build()
)
.build()
)
return connectionPool(factory)
}
fun connectionPool(
connectionFactory: ConnectionFactory
): ConnectionPool {
val builder = ConnectionPoolConfiguration.builder(connectionFactory)
builder.maxSize(poolMaxSize)
builder.initialSize(poolInitialSize)
builder.maxLifeTime(Duration.ofMillis(-1))
return ConnectionPool(builder.build())
}
I'm a bit suspicious that this behavior could be caused by some sort of TTL timeout. Can't prove it, though.
@Transactional
I am using .as(TransactionalOperator::transactional)
Hi, I am using Spring Data R2DBC in my project. I want to add a temp field whose value is calculated from other fields, so following the Spring Data R2DBC reference doc I defined the field and added a @Value annotation:
@Table("workers")
data class Worker(
@Id
val id: UUID? = null,
@Column(value = "photo")
var photo: String? = null,
// see: https://github.com/spring-projects/spring-data-r2dbc/issues/449
@Transient
@Value("#{root.photo!=null}")
val hasPhoto: Boolean = false
)
But hasPhoto is always false, even when I have set photo to a non-null string.
Hi there!
I'm using Spring R2DBC in my project and trying to make it work with a multiple-host/failover Postgres topology (I need to specify the DB URL like this: r2dbc:postgresql:failover://host1,host2,host3:port/).
I'm using Spring Boot 2.7.5, and get:
r2dbc-pool-0.9.2.RELEASE
r2dbc-spi-0.9.1.RELEASE
r2dbc-postgresql-0.9.2.RELEASE
as a part of Spring Boot.
As far as I understand, this set of r2dbc libs doesn't support failover yet.
So as a next step I tried to upgrade r2dbc-postgresql-0.9.2.RELEASE to 1.0.0.RC1, but I'm getting the following error:
java.lang.ClassCastException: class java.lang.Long cannot be cast to class java.lang.Integer (java.lang.Long and java.lang.Integer are in module java.base of loader 'bootstrap')
as a result of executing a simple delete query: DELETE FROM my_table WHERE boolean_flag = $1.
This indirectly suggests there is some compatibility issue between the r2dbc libs. Can anyone tell me whether there is a working set of these libs for my case, or is waiting for a new Spring release my only option?
Any help is very much appreciated, thank you!
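One detail that may explain the ClassCastException above (the SPI change itself is a fact; that it is the cause here is an inference): R2DBC SPI 1.0 changed Result.getRowsUpdated() to return a publisher of Long instead of Integer, so libraries compiled against the 0.9 SPI, such as the Spring versions listed above, can fail with exactly this cast. Code written directly against the 1.0 SPI consumes it as Long:
Mono<Long> deleted = Mono.from(connection
        .createStatement("DELETE FROM my_table WHERE boolean_flag = $1")
        .bind("$1", false)
        .execute())
    .flatMap(result -> Mono.from(result.getRowsUpdated())); // Long in SPI 1.0, Integer in 0.9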
Hello everyone,
I have the same issue here.
I can work around it if I use a Pageable in the repository, and it works up to a page size of 80. From 81 upwards I get the following error message: "Could not read property @org.springframework.data.annotation.Id()private java.lang.Long de.fhkiel.ndbk.amazonapi.model.Order.id from column id!"
and it returns a 500.
If I increase the page size to 149, it gets stuck, no error appears, and the HTTP request times out.
It is very weird that the result depends on the page size...
This is my service code:
@Transactional(readOnly = true)
public Flux<Order> getAll(Pageable pageable) {
return orderRepository.findBy(pageable)
.concatMap(order -> Mono.just(order)
.zipWith(orderPositionRepository.findByOrderId(order.getId()).collectList())
.map(tuple -> tuple.getT1().withPositions(tuple.getT2()))
);
}
I would be very thankful for an explanation :) Probably I am messing something up....
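One pattern that can produce exactly this page-size-dependent behavior (a hypothesis, not a confirmed diagnosis): the outer query's connection is still streaming rows while concatMap issues a nested query per order, so past a certain page size the connections are exhausted and the request hangs. A sketch that materializes the page before running the per-order lookups:
@Transactional(readOnly = true)
public Flux<Order> getAll(Pageable pageable) {
    return orderRepository.findBy(pageable)
            .collectList()                    // finish (and release) the outer query first
            .flatMapMany(Flux::fromIterable)
            .concatMap(order -> orderPositionRepository.findByOrderId(order.getId())
                    .collectList()
                    .map(order::withPositions));
}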
Hi, we've recently been seeing this error in our logs when we put our application under load:
LEAK: DataRow.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.r2dbc.postgresql.message.backend.DataRow.<init>(DataRow.java:37)
io.r2dbc.postgresql.message.backend.DataRow.decode(DataRow.java:141)
io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decodeBody(BackendMessageDecoder.java:65)
io.r2dbc.postgresql.message.backend.BackendMessageDecoder.decode(BackendMessageDecoder.java:39)
reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:208)
reactor.core.publisher.FluxMap$MapConditionalSubscriber.onNext(FluxMap.java:224)
reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:279)
reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:388)
reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:404)
reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:113)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:444)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:800)
io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:499)
io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Thread.java:833)
Does anybody have any hints to help us figure out where the problem is?
@mp911de Googling told me that this typically happens with ByteBuffer. We can now reproduce the above error in about 1/3 of our load-test runs.
So far our only lead is that we use some 'low-level' mapping of io.r2dbc.spi.Row to our own DTOs, and that is somewhat related to the DataRows. (We don't use Spring Data or anything like that, just plain R2DBC.)
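For reference, a minimal sketch of the consumption pattern the SPI expects (the query and DTO here are assumptions): row data must be fully extracted inside the map callback, because the Row and the DataRow buffer behind it are only valid during that call, and retaining the Row or reading it lazily later can produce exactly this kind of leak report.
// Extract everything inside map(); do not let 'row' escape the callback.
Flux<MyDto> dtos = Flux.from(connection
        .createStatement("SELECT id, name FROM my_table")
        .execute())
    .flatMap(result -> result.map((row, meta) ->
        new MyDto(row.get("id", Long.class), row.get("name", String.class))));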
Hey everybody, I have an issue with R2DBC rowsUpdated(). I'm trying to update just one row in the database and expect it to return a count of 1 for updated rows, but rowsUpdated() returns 4 even though only one row is updated. The code sample is as follows:
public class RunQueryRepository implements RunQueryApiRepository {
    private final R2dbcEntityTemplate r2dbcEntityTemplate;

    public Mono<Integer> runCustomSqlQuery(String sqlQuery) {
        Mono<Integer> result = r2dbcEntityTemplate.getDatabaseClient().sql(sqlQuery)
                .fetch()
                .rowsUpdated();
        // Note: toFuture() subscribes to the Mono here, and the caller subscribes again
        // to the returned Mono, so the statement is actually executed twice.
        System.out.println(result.toFuture().get()); // prints 4
        return result;
    }
}
Hey guys, I'm using R2DBC to persist data published by a Kafka stream. TransactionDetailData implements the interface Persistable<String> to indicate whether the record is new or not. The only way I have to confirm whether a record is new is to fetch that record by ID (topicKey) from the DB.
But my code below returns a DuplicateViolationException. The only explanation I can see is that, at the time findById() is evaluated, no record is found, but by the time that record arrives at the save() call, a record with the same ID has already been saved.
I cannot just ignore that record, since it is an update event with new values (apart from the ID). Is there any way to solve this with R2DBC? With JPA/JDBC this issue does not seem to occur.
public Mono<TransactionDetailData> persist(TransactionDetailData transactionDetailData) {
return transactionDetailRepository.findById(transactionDetailData.getTopicKey())
.map(dataFound -> {
transactionDetailData.setCreatedDate(dataFound.getCreatedDate());
transactionDetailData.setLastModifiedDate(LocalDateTime.now());
return transactionDetailData.asOld();
})
.switchIfEmpty(Mono.defer(() -> Mono.just(transactionDetailData.asNew())))
.flatMap(transactionDetailRepository::save)
.doOnError(throwable -> log.error("transaction detail error has occurred when persisting: isNew: " +
transactionDetailData.isNew() + " - " + transactionDetailData, throwable));
}
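A hedged sketch of one way around the check-then-act race, assuming the failure surfaces as Spring's org.springframework.dao.DuplicateKeyException (the exception name quoted above may be a paraphrase): let the insert fail and fall back to saving the entity as an update, reusing the asNew()/asOld() helpers from the code above.
return transactionDetailRepository.save(transactionDetailData.asNew())
    .onErrorResume(DuplicateKeyException.class, e ->
        // Another writer inserted the row between findById() and save();
        // retry as an update of the existing record.
        transactionDetailRepository.save(transactionDetailData.asOld()));
A database-side upsert (e.g. Postgres INSERT ... ON CONFLICT DO UPDATE via a custom query) would avoid the race entirely, at the cost of bypassing the Persistable mechanism.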
Hey everyone, we are using r2dbc-postgres (0.8.8.RELEASE) and r2dbc-pool (0.8.7.RELEASE) with Spring Boot 2.5.8. On a given day our app actively takes requests for close to 10-12 hours, and afterwards there are no requests (literally zero) for the rest of the day. On the subsequent day, when requests start again, we see high latency on the first few requests (on average our latency is around 10 ms, whereas for these initial requests it is around 1 s). Our suspicion is that the DB connections have become stale and the pool is trying to clean up/re-establish connections, which takes time.
Our pool configurations are as below:
max-size: 10
max-idle-time: 30s
max-create-connection-time: 10s
max-acquire-time: 5s
max-life-time: 30m
initial-size: 2
validation-query: SELECT 1
validationDepth: REMOTE
Is there a way to tweak these configurations to keep the connections alive indefinitely? Or any other way to reduce the latency of the initial requests after a dark period? Thanks.
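A hedged sketch of the tuning direction implied above (the values are assumptions, not recommendations): with max-idle-time: 30s every connection is evicted after a 30-second quiet period, so after an idle night the pool has to re-establish connections, including TLS and auth, on the first requests. Keeping a small idle floor or a long idle timeout, plus validation at acquire time, can hide most of that cost:
ConnectionPoolConfiguration configuration = ConnectionPoolConfiguration.builder(connectionFactory)
        .maxSize(10)
        .initialSize(2)
        .minIdle(2)                           // idle floor; needs a newer r2dbc-pool than 0.8.x
        .maxIdleTime(Duration.ofHours(14))    // longer than the longest expected quiet period
        .maxLifeTime(Duration.ofHours(24))
        .validationQuery("SELECT 1")          // drop dead connections at acquire time
        .build();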