One workaround I was thinking of was:
val subQuery = savedQueryCronTabTable
  .groupBy(e => (e.queryId, e.isEnabled))
  .map { case ((queryId, isEnabled), group) =>
    (queryId, isEnabled, group.length)
  }
but this produces 2 rows per queryId (e.g. (q1, true, 3) and (q1, false, 2)); I wish to have the enabled and disabled counts in 1 row, like (q1, 3, 2)
class SavedQueryCronDefinitionTable(tag: Tag) extends Table[SavedQueryCronDefinition](tag, "saved_query_cron_tab") {
  def queryId: Rep[String] = column[String]("query_id")
  def cronExpression: Rep[String] = column[String]("cron_expression")
  def tabIdentifier: Rep[UUID] = column[UUID]("tab_identifier")
  def creation: Rep[LocalDateTime] = column[LocalDateTime]("creation")
  def startDate: Rep[Option[LocalDateTime]] = column[Option[LocalDateTime]]("start_date")
  def endDate: Rep[Option[LocalDateTime]] = column[Option[LocalDateTime]]("end_date")
  def niceLevel: Rep[Int] = column[Int]("niceness")
  def executionJSON: Rep[ExecuteSavedQueryRequest] = column[ExecuteSavedQueryRequest]("execution_json")
  def isEnabled: Rep[Boolean] = column[Boolean]("is_enabled")

  def pk: PrimaryKey = primaryKey("saved_query_current_schedule_pkey", (queryId, cronExpression))

  def * =
    (
      queryId,
      cronExpression,
      tabIdentifier,
      creation,
      startDate,
      endDate,
      niceLevel,
      executionJSON,
      isEnabled
    ) <> (SavedQueryCronDefinition.tupled, SavedQueryCronDefinition.unapply)
}
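For reference, the case class this table maps to would look roughly like this (a sketch inferred from the * projection; ExecuteSavedQueryRequest is the application's own type):

import java.time.LocalDateTime
import java.util.UUID

// field names and order assumed to mirror the projection above
case class SavedQueryCronDefinition(
  queryId: String,
  cronExpression: String,
  tabIdentifier: UUID,
  creation: LocalDateTime,
  startDate: Option[LocalDateTime],
  endDate: Option[LocalDateTime],
  niceLevel: Int,
  executionJSON: ExecuteSavedQueryRequest,
  isEnabled: Boolean
)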
The column in question is the Rep[Boolean] type, which is isEnabled.
val joinedResult: ResultType = for {
  (((savedQuery, resultInfo), uddsInfo), cronTabInfo) <- joinQuery
    .joinLeft(queryResultMetadataTable)
    .on(_.queryId === _.queryId)
    .joinLeft(userDefinedDataSourceTable)
    .on(_._1.queryId === _.savedQueryId)
    .joinLeft(subQuery)
    .on(_._1._1.queryId === _._1)
} yield (savedQuery, resultInfo, uddsInfo, cronTabInfo)
I haven't actually tested running the query, but doing
val my_great_table = TableQuery[SavedQueryCronDefinitionTable]

my_great_table
  .map(elem => (elem.queryId, Case If elem.isEnabled Then 1 Else 0))
  .groupBy(_._1)
  .map { case (id, elems) =>
    (id, elems.map(_._2).sum, elems.length - elems.map(_._2).sum)
  }
  .result
  .statements
  .foreach(println)
gives the statement
select "query_id", sum((case when "is_enabled" then 1 else 0 end)), count(1) - sum((case when "is_enabled" then 1 else 0 end)) from "saved_query_cron_tab" group by "query_id"
which, I believe, is what you want.
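For completeness, actually running it would look roughly like this (a sketch: db is an assumed, already-configured Database instance, and both aggregate columns come back as Option[Int] because SQL SUM is nullable):

import scala.concurrent.Future

// same query as above, without the trailing .result.statements
val countsQuery = my_great_table
  .map(elem => (elem.queryId, Case If elem.isEnabled Then 1 Else 0))
  .groupBy(_._1)
  .map { case (id, elems) =>
    (id, elems.map(_._2).sum, elems.length - elems.map(_._2).sum)
  }

// db: an assumed slick.jdbc.JdbcBackend#Database
val counts: Future[Seq[(String, Option[Int], Option[Int])]] =
  db.run(countsQuery.result)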
Hi all,
I have the following SQL query
SELECT o.* FROM resource_changes o INNER JOIN (
  SELECT MIN(id) AS id FROM resource_changes i
  WHERE sent = FALSE AND updated_at > '2020-04-18'
  GROUP BY resource_uuid
) t ON o.id = t.id;
that I'd like to reproduce in Slick. I've tried
val q = for {
  idResult <- eventsTable
    .sortBy(_.id.asc)
    .sortBy(_.updatedAt.asc)
    .filter(_.sent === sent)
    .filter(_.updatedAt > Instant.now().minus(30, ChronoUnit.DAYS))
    .groupBy(_.resourceUuid)
    .map(_._2.map(_.id).min)
  event <- eventsTable.filter(_.id === idResult)
} yield event
q.result.statements.foreach(println)
q.result
but that produces
SELECT *
FROM
(SELECT
MIN(x3.x4) AS x5
FROM
(SELECT
`id` AS x4,
`updated_at` AS x6,
`sent` AS x7,
`resource_uuid` AS x8
FROM
`resource_changes`
ORDER BY `updated_at` , `id`) x3
WHERE
(x3.x7 = FALSE)
AND (x3.x6 > { ts '2021-04-19 12:43:53.17752' })
GROUP BY x3.x8) x9,
`resource_changes` x2
WHERE
x2.`id` = x9.x5;
https://stackoverflow.com/questions/394853/sql-inner-join-syntax
I think

WHERE
    x2.`id` = x9.x5

is an inner join: comma-separated tables in FROM plus a WHERE equality is the implicit inner join syntax, which is equivalent to INNER JOIN ... ON.
Edit: OK, I looked around a bit more and I'm not so sure anymore.
eventsTable.join([your sub query]).on(_.id === _)
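Spelled out, that suggestion might look like this (a sketch reusing the filters from the attempt above; the MIN over each group is an Option and, assuming id is numeric, Slick lifts the === comparison accordingly):

// the grouped sub-query: MIN(id) per resource_uuid, same filters as before
val minIds = eventsTable
  .filter(_.sent === false)
  .filter(_.updatedAt > Instant.now().minus(30, ChronoUnit.DAYS))
  .groupBy(_.resourceUuid)
  .map(_._2.map(_.id).min)

// explicit inner join against the sub-query, keeping only the full rows
val q = eventsTable
  .join(minIds)
  .on(_.id === _) // Rep[Long] === Rep[Option[Long]] is lifted by Slick
  .map(_._1)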
you could also turn on these loggers
<logger name="slick.jdbc.JdbcBackend.statement" level="DEBUG" />
<logger name="slick.jdbc.JdbcBackend.parameter" level="DEBUG" />
<logger name="slick.jdbc.JdbcBackend.statementAndParameter" level="DEBUG" />
to get cleaner SQL output in the logs, e.g.
root 2021-05-19T20:40:37.148 [http4s-demo.database-5] DEBUG s.j.J.statement - Preparing insert statement (returning: book_id,book_title,book_isbn,author_id,created_at): insert into "books" ("book_title","book_isbn","author_id") values (?,?,?)
@sherpal's suggestion produced
SELECT
*
FROM
`resource_changes` x2,
(SELECT
MIN(x3.x4) AS x5
FROM
(SELECT
`id` AS x4,
`updated_at` AS x6,
`sent` AS x7,
`resource_uuid` AS x8
FROM
`resource_changes`
ORDER BY `updated_at` , `id`) x3
WHERE
(x3.x7 = FALSE)
AND (x3.x6 > { ts '2021-04-19 16:51:57.441539' })
GROUP BY x3.x8) x9
WHERE
x2.`id` = x9.x5
which comes back quickly enough.
We use ZonedDateTime Scala types, mapped to java.sql.Timestamp, and Slick takes it from there. Our database type is timestamp with time zone.
I've tried many variations on this:
// Customise the types...
class JdbcTypes extends super.JdbcTypes {
  override val zonedDateType = new ZonedDateTimeJdbcType {
    override def sqlType = java.sql.Types.TIMESTAMP_WITH_TIMEZONE

    override def setValue(v: ZonedDateTime, p: PreparedStatement, idx: Int) = {
      p.setObject(idx, v.toInstant(), sqlType)
    }

    override def getValue(r: ResultSet, idx: Int): ZonedDateTime = {
      ZonedDateTime.ofInstant(r.getObject(idx, classOf[java.sql.Timestamp]).toInstant, Utc)
    }

    override def updateValue(v: ZonedDateTime, r: ResultSet, idx: Int) = {
      r.updateObject(idx, v.toInstant())
    }

    override def valueToSQLLiteral(value: ZonedDateTime) =
      "{ts '" + value.format(sqlDateTimeFormatter) + "'}"
  }
}
but I get errors no matter what I do. I can't get the combination of TIMESTAMP_WITH_TIMEZONE, Timestamp, Instant, and ZonedDateTime right.
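One simpler route worth noting, which sidesteps the profile internals entirely, is a per-column MappedColumnType; a minimal sketch, assuming values are written and read back as UTC:

import java.sql.Timestamp
import java.time.{ZoneOffset, ZonedDateTime}
import slick.jdbc.PostgresProfile.api._

// implicit mapping ZonedDateTime <-> java.sql.Timestamp; the zone is normalised to UTC
implicit val zonedDateTimeColumnType: BaseColumnType[ZonedDateTime] =
  MappedColumnType.base[ZonedDateTime, Timestamp](
    zdt => Timestamp.from(zdt.toInstant),                        // write: collapse to an instant
    ts => ZonedDateTime.ofInstant(ts.toInstant, ZoneOffset.UTC)  // read: re-attach the UTC zone
  )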
Do you want ZonedDateTime as a column type? I'd also add some logging (temporarily) to these methods so you can get some clarity on when they are in scope, and worry about them having the right behavior after that.
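For example (a sketch over the overrides above; println stands in for whatever logger is at hand):

override def setValue(v: ZonedDateTime, p: PreparedStatement, idx: Int) = {
  println(s"setValue: v=$v, idx=$idx, sqlType=$sqlType") // temporary trace
  p.setObject(idx, v.toInstant(), sqlType)
}

override def getValue(r: ResultSet, idx: Int): ZonedDateTime = {
  println(s"getValue: idx=$idx") // temporary trace
  ZonedDateTime.ofInstant(r.getObject(idx, classOf[java.sql.Timestamp]).toInstant, Utc)
}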
Yes, column[ZonedDateTime], with a SQL type of timestamp with time zone in Postgres. H2 doesn't matter so much as long as it works, since we have no persisted data in H2.
There's an out-of-the-box ZonedDateTime -> timestamptz mapping, so you won't have to define your own one.
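With such a mapping in scope, the column declaration stays simple; a sketch (column name assumed):

// ZonedDateTime column pinned to timestamptz in the generated DDL
def createdAt: Rep[ZonedDateTime] =
  column[ZonedDateTime]("created_at", O.SqlType("timestamptz"))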