Ghost
@ghost~6047c92a6da03739846691e0
Is it possible to set a query/statement timeout on a HikariDataSource? I'm using doobie with the tranzactio library, and I want to put in some fail-safes against long-running queries eating up connections
balthz
@balthz
I'm reading https://tpolecat.github.io/doobie/docs/13-Unit-Testing.html about testing: it's great how easy it is to type-check queries against the DB. Is there a way I can get the actual, interpolated SQL query back (meaning, the SQL query with ? replaced by the actual values sent to the DB) to run assertions on the SQL strings as well? Or is there a better practice for this?
jatcwang
@jatcwang:matrix.org
[m]
@balthz: The underlying DB (and also the JDBC API) will need to support it. I'm not aware of any DB that does but if you find out let me know :)
Jeff
@eceppda

Hi all, I think I'm missing a key detail about the use of Meta. I have a simple program consisting of a few case classes, each with a few parameters. The parameters are all simple types: Int, String, some Options. I have defined implicit Read and Write instances in the companion object of each. Did I need to define the implicit Read and Write for each, or can they be derived using Meta?

I read in the docs, under the inductive cases: "We can Read or Write a product type (case class or tuple) if its shapeless Generic representation (i.e., its fields as an HList) can be read or written, respectively." This suggests to me that it can, but I haven't found a convincing example that this is the intended use.

Can I do something like this?

case class Thing(a: Option[Int], b: String, c: Long, d: Option[String])

object Thing {
    implicit val thingMeta: Meta[Thing] = ???
}
Jeff
@eceppda
My ultimate use case is a legacy system with a large, 130 column table. I want to minimize the number of times I have to write the column list.
Jeff
@eceppda
It looks like FirstExample.scala covers this.
ivan-klass
@ivan-klass
Hi guys! I'm sorry if I'm asking a question which has probably been asked multiple times already. I've seen a lot of pain around timezone-aware datatypes, but I still can't clearly understand how to deal with this problem; it seems like a bug to me. My Postgres DB runs in UTC, while the app is local (UTC+2). Doobie version is 1.0.0-RC2, Scala 3.1.1.
Using Transactor.fromDriverManager in test scope.
The following assertion fails; there's a 2-hour time shift in the round trip:
val instant: Instant = Instant.now().truncatedTo(ChronoUnit.SECONDS)
val dateTime: ZonedDateTime = instant.atZone(ZoneOffset.UTC)
assert(
  sql"""
    SELECT $dateTime
  """
    .query[ZonedDateTime]
    .unique
    .transact(xa)
    .unsafeRunSync()
    .toInstant == instant,
  "Expected identical instant, got time-shifted!"
)
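The kind of shift this assertion catches can be reproduced with java.time alone. A minimal sketch (stdlib only, no doobie; the exact driver mechanism is an assumption, but drivers commonly map ZonedDateTime through a zoneless wall-clock timestamp): if the zone is dropped in transit and the wall clock is reattached in a different zone, the instant moves by the offset difference.

```scala
import java.time._

val utc   = ZoneOffset.UTC
val plus2 = ZoneOffset.ofHours(2) // stand-in for the app's UTC+2 JVM zone

val written: ZonedDateTime   = Instant.parse("2022-03-01T10:00:00Z").atZone(utc)
val wallClock: LocalDateTime = written.toLocalDateTime // zone information dropped
val readBack: ZonedDateTime  = wallClock.atZone(plus2) // reattached in the app's zone

// The round-tripped instant is shifted by exactly the offset difference:
val shift = Duration.between(written.toInstant, readBack.toInstant) // PT-2H
```

This is why the round trip appears to work only when the JVM default zone happens to match the database's zone.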
ivan-klass
@ivan-klass
Currently I end up with the following:
transactorResource
  .preAllocate(
    Sync[F].delay(TimeZone.setDefault(TimeZone.getTimeZone("UTC")))
  )
jatcwang
@jatcwang:matrix.org
[m]

@ivan-klass: Most popular databases do not support a time type that stores the timezone, so you don't really want to be using ZonedDateTime.

You want to use either Instant (if this is an instant in time), or LocalDateTime (if you care about the wall clock time, e.g. maybe for a calendar event)

I don't recommend changing the timezone for the whole JVM like you're doing with TimeZone.setDefault. It's global and can have a nasty impact on other parts of your app
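A small stdlib-only illustration of that advice: comparisons between Instants are independent of any zone, so they are immune to the JVM default timezone in a way that ZonedDateTime equality is not.

```scala
import java.time._

val instant = Instant.parse("2022-03-01T10:00:00Z")
val inUtc   = instant.atZone(ZoneOffset.UTC)
val inPlus2 = instant.atZone(ZoneOffset.ofHours(2))

val a = inUtc == inPlus2                     // false: zones (and wall clocks) differ
val b = inUtc.toInstant == inPlus2.toInstant // true: same point on the timeline
val c = inUtc.isEqual(inPlus2)               // true: instant-based comparison
```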
ivan-klass
@ivan-klass
@jatcwang:matrix.org thank you for the answer. I'm afraid most of the codebase in the project is already bound to ZonedDateTime; the intent was to know both the user's wall clock and the exact moment. I understand the implications of a JVM-global timezone, but the app I'm working on is basically a standalone web service whose production is hosted in UTC, so setting the default doesn't seem dangerous for running the app/tests locally. As for doobie support: I wonder why we have dedicated typeclass instances for ZonedDateTime then, even in the postgres-specific scope? https://github.com/tpolecat/doobie/blob/409a51cd95406f0ea332a9649253c6df12f2ec15/modules/postgres/src/main/scala/doobie/postgres/JavaTimeInstances.scala#L45 - is there any chance to fix it there, or is it a JDBC design problem?
jatcwang
@jatcwang:matrix.org
[m]
@ivan-klass: I think it's a mistake to have that typeclass instance in core because, as you've discovered, it has quite surprising behaviour for 90% of the DBs out there 😓
Certain DBs do have proper ZonedDateTime support (storing the instant + timezone) but most don't
Diego Casella
@diego.casella_gitlab
Hi all,
I'm quite puzzled by the behavior of nested SELECT operations. I have a query defined like so
val query = fr"""SELECT I.id, I.tx_id FROM ($innerQuery) AS I""" // ++ whereAnd(fr"""I.tx_id = '$id'""")
which correctly returns a list of (Long, String) tuples.
However, if I uncomment the whereAnd(...) part, at runtime I get this exception:
org.postgresql.util.PSQLException: The column index is out of range: 3, number of columns: 2
what is going on here?
jatcwang
@jatcwang:matrix.org
[m]
@diego.casella_gitlab: Yeah that seems off. Once you've turned it into a Query/Query0 you can call .sql to see the actual SQL. Perhaps that'll help you figure out what's wrong?
Also, I don't think you need/want the single quotes around $id. So this: whereAnd(fr"""I.tx_id = $id""")
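A side note on why those quotes break things, as a stdlib-only sketch (countJdbcPlaceholders is a made-up illustrative helper, not doobie or JDBC API): a ? inside single quotes is a string literal, not a bind parameter, so the driver sees one fewer parameter slot than the interpolator tries to set, which is consistent with a "column index is out of range" error.

```scala
// Illustrative only: count the '?' placeholders a driver would recognise,
// treating anything inside single quotes as part of a string literal.
def countJdbcPlaceholders(sql: String): Int = {
  var inQuote = false
  sql.count {
    case '\'' => inQuote = !inQuote; false // toggle literal mode, quote itself isn't a placeholder
    case '?'  => !inQuote                  // only count '?' outside string literals
    case _    => false
  }
}

countJdbcPlaceholders("I.tx_id = ?")   // 1 slot: the bind parameter works
countJdbcPlaceholders("I.tx_id = '?'") // 0 slots: the '?' became a literal
```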
degtev.o
@degtevO

Hi all!

I found an issue (or maybe not an issue, but a "special case") working with doobie 0.9.4 + the latest clickhouse-jdbc versions:
I am trying to get metadata as mentioned earlier in this thread:

val sql = "select * from any_table limit 0"
HC.prepareStatement(sql)(HPS.getMetaData).transact(xa)

Older versions of clickhouse-jdbc (before 0.3.2) work OK, but newer versions return null.

I guess it happens because old versions force query execution if it hasn't happened yet: https://github.com/ClickHouse/clickhouse-jdbc/blob/v0.3.1-patch/clickhouse-jdbc/src/main/java/ru/yandex/clickhouse/ClickHousePreparedStatementImpl.java#L464
But new versions simply return null: https://github.com/ClickHouse/clickhouse-jdbc/blob/v0.3.2/clickhouse-jdbc/src/main/java/com/clickhouse/jdbc/ClickHousePreparedStatement.java#L122

Is there any way to force doobie to execute the query before taking the metadata, using HC.prepareStatement?

degtev.o
@degtevO

I found the following approach:

val sql = "select * from any_table limit 0"
val test = for {
  ps <- FC.prepareStatement(sql)
  _  <- FC.embed(ps, FPS.executeQuery)
  md <- FC.embed(ps, FPS.getMetaData)
} yield md
test.transact(xa)

Hope this helps somebody

vaIgarashi
@vaIgarashi
How can I log the prepared statement?
Fernando Mora
@fernandomora
What is the right way to transform an IO into a ConnectIO so I can combine them?
It seems that, in the past, something like fa.to[ConnectionIO] was enough, but I think the migration to CE3 changed it, since there is no LiftIO[ConnectionIO] anymore. What am I missing?
Fernando Mora
@fernandomora
@jatcwang:matrix.org the problem with that approach is that WeakAsync.liftK[F, ConnectionIO].use(???) is always going to return an F. So there is no way of transforming a given IO into a ConnectionIO to compose with other ConnectionIO values and return the ConnectionIO, as was possible before with fa.to[ConnectionIO].
So I can't find any way to refactor a method like this:
def query(maxDateF: F[LocalDate]): ConnectionIO[List[Row]] = {
  for {
    maxDate <- maxDateF.to[ConnectionIO]
    rows <- sql"SELECT * FROM table WHERE date = $maxDate".query[Row].to[List]
  } yield rows
}
Nick Fellows
@halfhp
hello and thanks for the great library! just a couple quick questions regarding scala3 compatibility: i've been using doobie with scala 2.x for a little while now and it's been working well. recently i've tried to migrate over to scala3 and while everything compiles, doobie throws an NPE in doobie.hi.resultset$.get from what seems like a pretty textbook insert. i am wondering 1) if doobie is generally considered ready for primetime when it comes to scala3 and 2) is there any documentation around what kinds of things to look out for when migrating? thanks!
jatcwang
@jatcwang:matrix.org
[m]
@halfhp: Hmm that seems a bit odd. Can you raise a ticket with maybe a repo to reproduce the issue?
Hoang Ong
@hoangong
Hi, how do I get LiftIO[ConnectionIO] implicit in doobie 1.0.0?
ldeck
@ldeck
Hi all, is there a link to older versions of the microsite, so as to see the docs that are relevant to a particular version?
Ilya
@turingHollow

Hi all, I was wondering if it is possible to achieve transactional behavior by setting up a Transactor (strategy)

val connection: Connection = ???
val tx = Transactor.fromConnection(connection)
val someId = 1
for {
  _ <- sql"""UPDATE tableOne SET key = 'newVal' WHERE id = $someId""".update.run.transact(tx)
  _ <- sql"""UPDATE tableTwo SET key = 'newVal' WHERE id = $someId""".update.run.transact(tx)
} yield ()

I know about being able to merge ConnectionIO values before using the .transact(tx) method. But what I'm interested in is the ability to do multiple operations on the Connection and then commit them

jatcwang
@jatcwang:matrix.org
[m]
@turingHollow:
If you merge the ConnectionIO values before calling .transact(tx), you will get the transactional behaviour you're looking for. That's the default behaviour when you call transact(tx) on a ConnectionIO
Lucky
@lucky_ignou_twitter

Hi all,
So I created this table in Postgres: CREATE TABLE test (test_uuid varchar(36) NOT NULL);
and inserted a row: insert into lmn.test(test_uuid) values('f90658ce-bb77-4115-9680-cf60fd7b8289')

Now, in doobie:
case class TestUuid(test_uuid: UUID) // its java UUID
sql"""select * from lmn.test""".query[TestUuid].to[List]

I am getting: SQL object of class java.lang.String cannot be cast to mapped class java.util.UUID.

I imported doobie.postgres.implicits._ but the same error persists.
I also tried:
implicit val uuidMeta: Meta[UUID] =
  Meta[String].timap[UUID](UUID.fromString)(_.toString)

but it gave me a bunch of other errors. What am I missing here?

jatcwang
@jatcwang:matrix.org
[m]

@lucky_ignou_twitter: I'd recommend using the UUID type in Postgres, instead of varchar(36). It's far more performant.

Although defining your own uuidMeta should work..
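For what it's worth, the two functions handed to timap can be sanity-checked in isolation with just java.util.UUID. If they round-trip, the Meta definition itself is sound; in that case (an assumption, not confirmed in the thread) the "bunch of other errors" may be an ambiguous-implicit clash between the custom instance and the Meta[UUID] brought in by doobie.postgres.implicits, so use one or the other.

```scala
import java.util.UUID

// The two functions given to timap, checked on their own (stdlib only):
val decode: String => UUID = UUID.fromString
val encode: UUID => String = _.toString

val raw = "f90658ce-bb77-4115-9680-cf60fd7b8289"
val roundTrips = encode(decode(raw)) == raw // lossless for lowercase input
```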

Lucky
@lucky_ignou_twitter
thanks for the tip @jatcwang:matrix.org
Zoltan Magyar
@zmagy
Hey folks. It's probably been asked already, but I was wondering how the 1.0.0 release is coming along. We would like to upgrade one of our services to cats-effect 3, but the doobie version that supports it is still an RC. How stable is the RC2 version?
jatcwang
@jatcwang:matrix.org
[m]
@zmagy: Hey, I'm slowly working towards 1.0. RC2 is as stable as any other release (I use it in production, as do many others I know), so there should be minimal risk in upgrading now.
RC3 will contain some breaking changes, so I do recommend moving to RC2 first
Zoltan Magyar
@zmagy
thanks for the response
Gaël Renoux
@gaelrenoux
Hey guys! We're migrating to Doobie 1.0.0-RC2 at work (from 0.13), and we can't find how to lift a basic Cats IO into a ConnectionIO. We're missing the implicit for LiftIO[ConnectionIO], and we can't find it in Doobie. Has it moved somewhere else, or are we doing this wrong?
Gaël Renoux
@gaelrenoux
Aha, thank you! I kept arriving at the other StackOverflow page (https://stackoverflow.com/questions/59657203/doobie-lifting-arbitrary-effect-into-connectionio), which refers to the older version of Doobie.
jatcwang
@jatcwang:matrix.org
[m]
@gaelrenoux: I've left an answer in your SO link to point ppl to the other one :)
Yevhen
@Yevhensh
Hi there, I'm struggling to find a way to get all the column names given only a tableName, using either the doobie FC/HC API,
so that I have a method
def getColumnNames(tableName: String): IO[Seq[String]]
and I'm not receiving a ResultSet, but iterating over all the column names with some fold or something
Yevhen
@Yevhensh
and I'm also wondering if there are some live examples using HC/FC, so it gets easier to dive in
jatcwang
@jatcwang:matrix.org
[m]
@Yevhensh: Are you trying to find out what columns a table has? It can just be a regular query, like https://dba.stackexchange.com/questions/22362/list-all-columns-for-a-specified-table
Yevhen
@Yevhensh
ah thanks. btw, are there some live examples of using HC/FC?
kyri-petrou
@kyri-petrou:matrix.org
[m]

Hi everyone 👋 I have a couple of questions related to PG persisted queries that I can't seem to figure out. My understanding is that queries will automatically be registered as persisted queries if they are executed 5 times. If my understanding is correct, then:

  1. With the HikariCP transactor, do they need to be executed 5 times on each connection before they're persisted? If yes, is it by any chance possible for them to be persisted across the entire connection pool just once?
  2. Is it possible to disable creating persisted queries for just some queries, or to specify plan_cache_mode=force_custom_plan (available in PG12+) for them?

The use case for 2 is primarily recursive queries, where the initial condition has a big effect on the query plan
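This one goes unanswered in the thread, so here is a hedged sketch of two non-doobie knobs that are commonly pointed at for it (the URL and settings below are illustrative; verify the exact behaviour against the pgJDBC and HikariCP docs):

```scala
// Server-side prepared statements in pgJDBC are per physical connection, so
// with a pool each connection crosses the execution threshold independently.
// Setting prepareThreshold=0 on the JDBC URL opts out of them entirely:
val jdbcUrl = "jdbc:postgresql://localhost:5432/mydb?prepareThreshold=0"

// PG 12+: force custom plans per session instead. One place to apply it to
// every pooled connection is HikariCP's connectionInitSql:
val initSql = "SET plan_cache_mode = force_custom_plan"
```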

Yevhen
@Yevhensh
@jatcwang:matrix.org thanks for help, it was very useful
Watson Dinh
@linux-root

Hi folks,
I'm reading the Doobie docs, about threading:

An ExecutionContext for awaiting connection to the database. Because there can be an unbounded number of connections awaiting database access this should be a bounded pool.
A cats.effect.Blocker for executing JDBC operations. Because your connection pool limits the number of active connections this should be an unbounded pool.

Could someone tell me in more detail about bounded/unbounded pools, and how they are used in each situation?

Ian Agius
@ianagius

Hi everyone, I was reading about BATCH INSERTing using Doobie and I was curious and tried it out. I had a list of objects and I wanted to do a batch INSERT on them using the code as follows:

val sql: String = s"INSERT INTO $tableName ($columnStringWithoutId) VALUES (?, ?, ?)"
Update[MyObject](sql).updateMany(myObjectList)

In this particular test run I had 3 objects and this resulted in 3 INSERT statements being created as follows:

INSERT INTO my_object (name, surname, address) VALUES ("ex1", "ex1", "ex1")
INSERT INTO my_object (name, surname, address) VALUES ("ex2", "ex2", "ex2")
INSERT INTO my_object (name, surname, address) VALUES ("ex3", "ex3", "ex3")

To be honest I was expecting that these insert statements would be grouped as follows:

INSERT INTO my_object (name, surname, address) VALUES 
("ex1", "ex1", "ex1"),
("ex2", "ex2", "ex2"),
("ex3", "ex3", "ex3");

Did I misunderstand the batch part or am I writing it wrongly somehow?

jatcwang
@jatcwang:matrix.org
[m]

@ianagius: It uses JDBC addBatch under the hood e.g. https://www.baeldung.com/jdbc-batch-processing

doobie doesn't magically change your SQL, because it doesn't try to parse it (which it would need to do to achieve what you want).
But since it uses PreparedStatement batching under the hood, it should be as efficient as (if not marginally more efficient than) your version
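One hedged addition, not mentioned in the thread: on Postgres specifically, the pgJDBC driver can produce the grouped multi-row VALUES form for you. Its reWriteBatchedInserts connection parameter rewrites addBatch()-ed INSERTs at the driver level, so updateMany gets that shape without doobie parsing any SQL (the URL below is illustrative).

```scala
// pgJDBC-specific: the driver itself rewrites batched INSERTs into a single
// multi-row "INSERT ... VALUES (...), (...), (...)" statement when enabled.
val jdbcUrl =
  "jdbc:postgresql://localhost:5432/mydb?reWriteBatchedInserts=true"
```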