Hi all,
I have the following SQL query:
SELECT o.* FROM resource_changes o INNER JOIN (
select min(id) as id from resource_changes i where sent = FALSE AND updated_at > '2020-04-18' group by resource_uuid
) t ON o.id = t.id;
I'd like to reproduce it in Slick. I've tried:
val q = for {
  idResult <- eventsTable
    .sortBy(_.id.asc)
    .sortBy(_.updatedAt.asc)
    .filter(_.sent === sent)
    .filter(_.updatedAt > Instant.now().minus(30, ChronoUnit.DAYS))
    .groupBy(_.resourceUuid)
    .map(_._2.map(_.id).min)
  event <- eventsTable.filter(_.id === idResult)
} yield event
q.result.statements.foreach(println)
q.result
but that produces
SELECT *
FROM
(SELECT
MIN(x3.x4) AS x5
FROM
(SELECT
`id` AS x4,
`updated_at` AS x6,
`sent` AS x7,
`resource_uuid` AS x8
FROM
`resource_changes`
ORDER BY `updated_at` , `id`) x3
WHERE
(x3.x7 = FALSE)
AND (x3.x6 > { ts '2021-04-19 12:43:53.17752' })
GROUP BY x3.x8) x9,
`resource_changes` x2
WHERE
x2.`id` = x9.x5;
https://stackoverflow.com/questions/394853/sql-inner-join-syntax
I think
WHERE
    x2.`id` = x9.x5;
is effectively an inner join (a comma join restricted in the WHERE clause)
edit: ok, I looked around a bit more and I'm not so sure anymore
eventsTable.join([your sub query]).on(_.id === _)
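Spelling that suggestion out in full — a sketch only, assuming eventsTable has the id, sent, updatedAt and resourceUuid columns from the query above, and using a literal `false` for sent as in the original SQL:

```scala
// Sketch: assumes eventsTable maps the resource_changes schema shown above
val cutoff = Instant.now().minus(30, ChronoUnit.DAYS)

// Sub-query: minimum id per resource_uuid, mirroring the inner SELECT
val minIds = eventsTable
  .filter(e => e.sent === false && e.updatedAt > cutoff)
  .groupBy(_.resourceUuid)
  .map { case (_, rows) => rows.map(_.id).min }

// Explicit inner join against the min(id) values; min yields an Option
// column, which Slick can compare against the non-optional id
val q = eventsTable
  .join(minIds)
  .on(_.id === _)
  .map { case (event, _) => event }
```

Note there's no need for the sortBy calls here: they are discarded by the groupBy anyway.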
you could also turn on these loggers
<logger name="slick.jdbc.JdbcBackend.statement" level="DEBUG" />
<logger name="slick.jdbc.JdbcBackend.parameter" level="DEBUG" />
<logger name="slick.jdbc.JdbcBackend.statementAndParameter" level="DEBUG" />
to get cleaner SQL
root 2021-05-19T20:40:37.148 [http4s-demo.database-5] DEBUG s.j.J.statement - Preparing insert statement (returning: book_id,book_title,book_isbn,author_id,created_at): insert into "books" ("book_title","book_isbn","author_id") values (?,?,?)
@sherpal's suggestion produced
SELECT
*
FROM
`resource_changes` x2,
(SELECT
MIN(x3.x4) AS x5
FROM
(SELECT
`id` AS x4,
`updated_at` AS x6,
`sent` AS x7,
`resource_uuid` AS x8
FROM
`resource_changes`
ORDER BY `updated_at` , `id`) x3
WHERE
(x3.x7 = FALSE)
AND (x3.x6 > { ts '2021-04-19 16:51:57.441539' })
GROUP BY x3.x8) x9
WHERE
x2.`id` = x9.x5
which comes back quickly enough.
I've tried many variations on this:
// Customise the types...
class JdbcTypes extends super.JdbcTypes {
  override val zonedDateType = new ZonedDateTimeJdbcType {
    override def sqlType = java.sql.Types.TIMESTAMP_WITH_TIMEZONE
    override def setValue(v: ZonedDateTime, p: PreparedStatement, idx: Int) = {
      p.setObject(idx, v.toInstant(), sqlType)
    }
    override def getValue(r: ResultSet, idx: Int): ZonedDateTime = {
      ZonedDateTime.ofInstant(r.getObject(idx, classOf[java.sql.Timestamp]).toInstant, Utc)
    }
    override def updateValue(v: ZonedDateTime, r: ResultSet, idx: Int) = {
      r.updateObject(idx, v.toInstant())
    }
    override def valueToSQLLiteral(value: ZonedDateTime) = "{ts '" + value.format(sqlDateTimeFormatter) + "'}"
  }
}
but I get errors no matter what I do. I can't get the combination of TIMESTAMP_WITH_TIMEZONE, Timestamp, Instant and ZonedDateTime right
ZonedDateTime as a column type? I'd also add some logging (temporarily) to these methods so you can get some clarity on when they are in scope, and worry about them having the right behavior after that.
column[ZonedDateTime] with a SQL type of timestamp with time zone in postgres. H2 doesn't matter so much as long as it works, as we have no persisted data in H2.
ZonedDateTime -> timestamptz mapping, so you won't have to define your own one.
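If the full JdbcTypes override keeps fighting you, a simpler alternative — a sketch, not tested against your schema — is a MappedColumnType (from your profile's api) that stores the ZonedDateTime as a plain Timestamp and pins the zone to UTC on the way back out:

```scala
import java.sql.Timestamp
import java.time.{ZonedDateTime, ZoneOffset}

// Sketch: persist as TIMESTAMP, reattach UTC when reading back.
// Loses the original zone, but sidesteps the TIMESTAMP_WITH_TIMEZONE dance.
implicit val zonedDateTimeColumnType: BaseColumnType[ZonedDateTime] =
  MappedColumnType.base[ZonedDateTime, Timestamp](
    zdt => Timestamp.from(zdt.toInstant),
    ts  => ZonedDateTime.ofInstant(ts.toInstant, ZoneOffset.UTC)
  )
```

This only works if you're happy normalising everything to UTC, which seems to be what your setValue was doing anyway.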
For example, I have a Table and an associated mapped case class:
case class MyModel(
  myPk: Int,
  someText: String
)
class MyTable(tag: Tag) extends Table[MyModel](tag, "my_table") {
  def myPk: Rep[Int] = column("my_pk")
  def someText: Rep[String] = column("some_text")
  def * = (myPk, someText) <> ((MyModel.apply _).tupled, MyModel.unapply)
  def pk = primaryKey("my_pk_pk", myPk)
}
I have a migration that changes the myPk column from an integer to a varchar, so I now need to somehow represent that change within my table and case class, with the option to roll back. The actual DB rollback is handled (I rolled my own migration framework based off of Forklift), but it's handling the code rollback that is eluding me.
I could duplicate MyTable with myPk as an integer, and also duplicate MyModel with myPk still as an int, but then I'm having to potentially weave these two competing tables and models through the code depending on whether we've migrated or rolled back...
MyModel with myPk as a String as well as MyTable with myPk as a String/varchar
@since might work if anybody has experience implementing that with Slick
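One hedged idea, with hypothetical names and untested against a real database: parameterise the model and table over the pk type, so the pre- and post-migration shapes share a single definition and you choose Int or String only at the wiring point:

```scala
// Hypothetical sketch: one definition, pk type picked at the call site
case class MyModel[Pk](myPk: Pk, someText: String)

class MyTable[Pk: BaseColumnType](tag: Tag)
    extends Table[MyModel[Pk]](tag, "my_table") {
  def myPk: Rep[Pk]         = column[Pk]("my_pk")
  def someText: Rep[String] = column[String]("some_text")
  def * = (myPk, someText) <> ((MyModel.apply[Pk] _).tupled, MyModel.unapply[Pk])
  def pk = primaryKey("my_pk_pk", myPk)
}

// After the migration: TableQuery(tag => new MyTable[String](tag))
// After a rollback:    TableQuery(tag => new MyTable[Int](tag))
```

You'd still need a switch somewhere (config flag, separate wiring module) to pick the type, but at least the duplication collapses to one line instead of two parallel table/model pairs.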