io.r2dbc.spi.R2dbcTimeoutException: Connection Acquisition timed out after 2000ms
j.l.IllegalArgumentException: Too many permits returned: returned=1, would bring to 11/10
    at r.p.AllocationStrategies$SizeBasedAllocationStrategy.returnPermits(AllocationStrategies.java:141)
    at r.pool.AbstractPool.destroyPoolable(AbstractPool.java:158)
    at r.p.SimpleDequePool.evictInBackground(SimpleDequePool.java:152)
    at o.s.c.s.i.r.ReactorSleuth.lambda$null$6(ReactorSleuth.java:309)
    at r.c.s.SchedulerTask.call(SchedulerTask.java:68)
    at r.c.s.SchedulerTask.call(SchedulerTask.java:28)
    at j.u.c.FutureTask.run(FutureTask.java:264)
    at j.u.c.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at s.a.w.async.WrTask.run(WrTask.java:35)
    at j.u.c.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at j.u.c.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.lang.Thread.run(Thread.java:834)
The message below is from our failed health check request, caused by the error that occurred above.
{
"timestamp": "2022-01-16T13:50:48.376+09:00",
"level": "ERROR",
"thread_name": "reactor-tcp-epoll-3",
"logger_name": "reactor.core.publisher.Operators",
"throwable_class": "java.lang.IllegalStateException",
"throwable_root_cause_class": "java.lang.IllegalStateException",
"message": "Operator called default onErrorDropped",
"caller_class_name": "reactor.util.Loggers$Slf4JLogger",
"caller_method_name": "error",
"caller_file_name": "Loggers.java",
"caller_line_number": 314,
"stack_trace": "<#8a783798> j.l.IllegalStateException: Request queue was disposed\n\tat d.m.r.m.c.RequestQueue.requireDisposed(RequestQueue.java:150)\n\tat d.m.r.m.c.RequestQueue.dispose(RequestQueue.java:139)\n\tat d.m.r.m.c.ReactorNettyClient.drainError(ReactorNettyClient.java:253)\n\tat d.m.r.m.c.ReactorNettyClient.resumeError(ReactorNettyClient.java:214)\n\tat r.c.p.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94)\n\tat o.s.c.s.i.r.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat r.c.p.FluxConcatMap$ConcatMapImmediate.innerError(FluxConcatMap.java:308)\n\tat r.c.p.FluxConcatMap$ConcatMapInner.onError(FluxConcatMap.java:872)\n\tat o.s.c.s.i.r.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:95)\n\tat r.c.p.Operators.error(Operators.java:197)\n\tat r.c.p.MonoError.subscribe(MonoError.java:52)\n\tat r.c.p.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)\n\tat o.s.c.s.i.r.SleuthMonoLift.subscribe(ReactorHooksHelper.java:225)\n\tat r.c.publisher.Mono.subscribe(Mono.java:4...\n",
"type": "service",
"pid": 31117,
"hostname": "DSV-NUGUP-CALENDAR-STG01",
"requested_uri": "/health",
"traceId": "8cc488d24f89a01e",
"spanId": "8cc488d24f89a01e",
"method": "GET"
}
Hello, I am using r2dbc-mssql and I am struggling with an error I get after a SELECT:
2022-01-24 14:38:08.907 ERROR 1 --- [tor-tcp-epoll-3] i.r2dbc.mssql.client.ReactorNettyClient : Error: java.lang.IllegalArgumentException: Invalid header type: 0x0
java.lang.RuntimeException: java.lang.IllegalArgumentException: Invalid header type: 0x0
at io.r2dbc.mssql.client.StreamDecoder$ListSink.error(StreamDecoder.java:350) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Assembly trace from producer [reactor.core.publisher.FluxFlattenIterable] :
reactor.core.publisher.Flux.concatMapIterable
io.r2dbc.mssql.client.ReactorNettyClient.<init>(ReactorNettyClient.java:250)
Error has been observed at the following site(s):
*__Flux.concatMapIterable ⇢ at io.r2dbc.mssql.client.ReactorNettyClient.<init>(ReactorNettyClient.java:250)
Original Stack Trace:
at io.r2dbc.mssql.client.StreamDecoder$ListSink.error(StreamDecoder.java:350) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder.withState(StreamDecoder.java:135) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder.decode(StreamDecoder.java:88) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder.decode(StreamDecoder.java:64) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.ReactorNettyClient.lambda$new$6(ReactorNettyClient.java:255) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
[...]
Caused by: java.lang.IllegalArgumentException: Invalid header type: 0x0
at io.r2dbc.mssql.message.header.Type.valueOf(Type.java:68) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.message.header.Header.decode(Header.java:215) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder$DecoderState.readChunk(StreamDecoder.java:289) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder.withState(StreamDecoder.java:112) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder.decode(StreamDecoder.java:88) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.StreamDecoder.decode(StreamDecoder.java:64) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at io.r2dbc.mssql.client.ReactorNettyClient.lambda$new$6(ReactorNettyClient.java:255) ~[r2dbc-mssql-0.8.7.RELEASE.jar:0.8.7.RELEASE]
at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.drainAsync(FluxFlattenIterable.java:351) ~[reactor-core-3.4.13.jar:3.4.13]
at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.drain(FluxFlattenIterable.java:686) ~[reactor-core-3.4.13.jar:3.4.13]
at reactor.core.publisher.FluxFlattenIterable$FlattenIterableSubscriber.request(FluxFlattenIterable.java:274) ~[reactor-core-3.4.13.jar:3.4.13]
at reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber.request(FluxOnAssembly.java:649) ~[reactor-core-3.4.13.jar:3.4.13]
How should I interpret this IllegalArgumentException: Invalid header type: 0x0/0x32/0x61 ...?
Any pointers would be much appreciated! Thanks!
Hello, the README of oracle-r2dbc says the following.
Oracle R2DBC's ConnectionFactory and ConnectionFactoryProvider are thread safe. All other SPI implementations are not thread safe.
Is this true?
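For context, "not thread safe" here usually means that a single Connection (or Statement/Result) must not be used concurrently from multiple threads, while the ConnectionFactory can be shared application-wide. A minimal sketch of that usage pattern, with a hypothetical URL, table, and query:

import io.r2dbc.spi.Connection
import io.r2dbc.spi.ConnectionFactories
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

// Safe to share: one ConnectionFactory for the whole application.
val connectionFactory = ConnectionFactories.get("r2dbc:oracle://localhost:1521/XEPDB1") // hypothetical URL

// Not safe to share: each pipeline obtains its own Connection and closes it when done.
fun findNames() = Flux.usingWhen(
    Mono.from(connectionFactory.create()),
    { conn: Connection ->
        Flux.from(conn.createStatement("SELECT name FROM people").execute())
            .flatMap { result -> result.map { row, _ -> row.get("name", String::class.java) } }
    },
    { conn: Connection -> conn.close() }
)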
Hey everyone, I'm using Spring Webflux and Spring Data R2DBC with PostgreSQL. I'm facing a weird issue where the subscription does not complete when an error is thrown by R2DBC:
Error: SEVERITY_LOCALIZED=ERROR, SEVERITY_NON_LOCALIZED=ERROR, CODE=23503, MESSAGE=insert or update on table "dummy" violates foreign key constraint "eq_fkey", DETAIL=Key (iso_code)=(111) is not present in table ""., SCHEMA_NAME=, TABLE_NAME=, CONSTRAINT_NAME=, FILE=ri_triggers.c, LINE=3255, ROUTINE=ri_ReportViolation
Any idea how this can be caught?
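For reference, a constraint violation like this is typically caught in the reactive pipeline with onErrorResume. A minimal sketch with a hypothetical entity and repository (whether this also explains the non-completing subscription is a separate question):

import org.springframework.dao.DataIntegrityViolationException
import org.springframework.data.annotation.Id
import org.springframework.data.repository.reactive.ReactiveCrudRepository
import reactor.core.publisher.Mono

// Hypothetical entity and repository, only to make the sketch self-contained.
data class Dummy(@Id val id: Long? = null, val isoCode: String)
interface DummyRepository : ReactiveCrudRepository<Dummy, Long>

fun saveSafely(repo: DummyRepository, entity: Dummy): Mono<Dummy> =
    repo.save(entity)
        .onErrorResume(DataIntegrityViolationException::class.java) { ex ->
            // Postgres SQLSTATE 23503 (foreign key violation) usually arrives here
            // after Spring's R2DBC exception translation.
            Mono.error<Dummy>(IllegalStateException("referenced iso_code does not exist", ex))
        }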
Failed to obtain R2DBC Connection; nested exception is io.r2dbc.spi.R2dbcNonTransientResourceException: Connection is close. Cannot send anything
Does anyone have an idea what's going on here?
Hi, I get a null id after calling repository.save. Does anyone know what is going on?
@Table
@Entity
data class AuthDomainModel(
    @Id @GeneratedValue var id: Int? = null,
    var modelName: String
)
And a repository like this:
interface AuthDomainModelRepository : ReactiveCrudRepository<AuthDomainModel, Int>
I get id = null after saving; I tried two approaches:
var entity = AuthDomainModel(modelName = name)
return transactionalOperator
    .transactional(domainModelRepository.save(entity))
    .map { item ->
        // CHECK HERE item.id is null WTF??
        mapper.fromEntity(item)
    }

return domainModelRepository
    .save(entity)
    .map { item ->
        // item.id is NULL too
        mapper.fromEntity(item)
    }
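One possible cause, assuming the @Entity and @GeneratedValue above are JPA annotations: Spring Data R2DBC ignores JPA mapping annotations, and the generated key is only written back into the entity when the id property is marked with org.springframework.data.annotation.Id. A sketch of the mapping Spring Data R2DBC expects (table name hypothetical), not a confirmed fix for the case above:

import org.springframework.data.annotation.Id
import org.springframework.data.relational.core.mapping.Table

// Spring Data R2DBC annotations only; no JPA @Entity/@GeneratedValue.
@Table("auth_domain_model")
data class AuthDomainModel(
    @Id var id: Int? = null,   // populated from the database-generated key after save()
    var modelName: String
)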
Hi, I am using r2dbc-postgresql:0.9.0 from Kotlin and getting the following error while mapping an integer column value of a Postgres table row. Does anyone have any info on why this error could happen?
Error:
java.lang.IllegalArgumentException: Cannot decode value of type int with OID 23
at io.r2dbc.postgresql.codec.DefaultCodecs.decode(DefaultCodecs.java:222)
at io.r2dbc.postgresql.PostgresqlRow.decode(PostgresqlRow.java:104)
at io.r2dbc.postgresql.PostgresqlRow.get(PostgresqlRow.java:85)
at com.doordash.runtime.web.repositories.experiments.AnalysisMetricsRepository.MAPPING_FUNCTION$lambda-4(AnalysisMetricsRepository.kt:111)
at org.springframework.r2dbc.core.DatabaseClient$GenericExecuteSpec.lambda$map$1(DatabaseClient.java:222)
at io.r2dbc.postgresql.PostgresqlResult.lambda$map$2(PostgresqlResult.java:123)
Code:
suspend fun getEntities(id: UUID): List<Entity> {
    return template.databaseClient.sql("Select entity.*, child.name from table entity inner join child_table child on entity.c_id = child.id where entity.id = :id")
        .bind("id", id)
        .map(MAPPING_FUNCTION)
        .all()
        .asFlow()
        .toList()
}

private val MAPPING_FUNCTION: Function<Row, Entity> = Function { row ->
    Entity(
        ...
        intColumn = row.get("integer_column_name", Int::class.java)
    )
}
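A likely explanation (an assumption, not verified against this setup): in Kotlin, Int::class.java is the primitive int class, while the Postgres codecs look up the boxed java.lang.Integer type, hence "Cannot decode value of type int". Requesting the boxed type usually avoids it:

import io.r2dbc.spi.Row

// Ask for the boxed Integer type; Int::class.javaObjectType is java.lang.Integer,
// whereas Int::class.java is the primitive int class.
fun readIntColumn(row: Row): Int? =
    row.get("integer_column_name", Int::class.javaObjectType)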
Hello, I hope this is the right place to ask a question about R2DBC drivers in general.
r2dbc-postgresql and r2dbc-mysql use reactor-netty. As far as I understand, it uses the Reactor Netty TCP client and relies on TCP flow control as the backpressure mechanism (I tried to work this out from the source code). This is how I understood it:
If I run a SELECT query, the database fetches a result set and sends all rows to the client (the Reactor Netty TCP client). Even though the client requests only n items from the publisher (the database), it cannot actually control how much data the database sends. The packets are simply buffered in the socket receive buffer first; when Reactor Netty reads them from the Channel, it stores the data in a queue. Subscribers (consumers) then receive as many items as they request via the Reactive Streams request(n) call from that queue, not directly from the database.
In the case of PostgreSQL, the database can send a specific number of rows to the client via the fetch size option, but that does not mean the database understands how much data the consumer wants through the Reactive Streams mechanism.
I wonder if I have misunderstood this limitation. If I understood it correctly, I am also curious whether anything is under development to address it, i.e. fetching data from the database through R2DBC driven purely by the consumer's demand.
Thank you
It uses the Reactor Netty TCP client and relies on TCP flow control as the backpressure mechanism
Mostly, but not only. When using cursored execution and configuring a fetch size, chunks of the cursor are read only if the previous chunk has been emitted. So cursors help with propagating backpressure to the server in some sense.
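For illustration, a minimal sketch of enabling that cursored, chunked reading via the Statement.fetchSize hint from the R2DBC SPI (0.9+); table and query are hypothetical:

import io.r2dbc.spi.Connection
import reactor.core.publisher.Flux

fun streamNames(connection: Connection): Flux<String> =
    Flux.from(
        connection.createStatement("SELECT name FROM big_table")
            .fetchSize(100)   // read the cursor in chunks of 100 rows as demand arrives
            .execute()
    ).flatMap { result -> result.map { row, _ -> row.get("name", String::class.java) } }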
com.microsoft.sqlserver.jdbc.SQLServerException: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target".
since I upgraded the com.microsoft.sqlserver:mssql-jdbc version from "9.4.1.jre8" to "10.2.0.jre8".
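For context, an assumption based on the mssql-jdbc 10.x release notes rather than anything stated above: 10.x changed the default of the encrypt connection property to true, so the PKIX error usually means the SQL Server certificate is not trusted by the JVM. Typical options, shown as illustrative connection strings (host and database names are placeholders):

// Dev/test only: trust the server certificate without validation.
val urlTrusting = "jdbc:sqlserver://db-host:1433;databaseName=mydb;encrypt=true;trustServerCertificate=true"

// Alternatively, import the server certificate into the JVM truststore and keep full validation,
// or opt out of encryption where that is acceptable:
val urlUnencrypted = "jdbc:sqlserver://db-host:1433;databaseName=mydb;encrypt=false"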
@Configuration
class MySqlR2dbcConfiguration : AbstractR2dbcConfiguration() {

    @Bean
    override fun connectionFactory(): ConnectionFactory {
        val connOpt = ConnectionFactoryOptions.parse(url).mutate()
            .option(USER, "vault-username")
            .option(PASSWORD, "vault-password")
            .build()
        return ConnectionFactories.get(connOpt)
    }
}
ConnectionPool has access to a logger, so it would be pretty simple to add such a feature. Care to file a feature request at https://github.com/r2dbc/r2dbc-pool?
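In the meantime, a rough sketch of logging pool state periodically yourself, assuming the ConnectionPool.getMetrics()/PoolMetrics API of recent r2dbc-pool versions (logger name and interval are arbitrary):

import io.r2dbc.pool.ConnectionPool
import org.slf4j.LoggerFactory
import reactor.core.publisher.Flux
import java.time.Duration

private val log = LoggerFactory.getLogger("r2dbc-pool-metrics")

// Dump pool metrics every 30 seconds; getMetrics() is empty when metrics are unavailable.
fun logPoolMetrics(pool: ConnectionPool) {
    Flux.interval(Duration.ofSeconds(30))
        .doOnNext {
            pool.metrics.ifPresent { m ->
                log.info(
                    "acquired={} allocated={} idle={} pendingAcquire={}",
                    m.acquiredSize(), m.allocatedSize(), m.idleSize(), m.pendingAcquireSize()
                )
            }
        }
        .subscribe()
}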