    Anirudh Vyas
    @AnirudhVyas
    ignore details of these tables for clarity - part of the thing i dislike about this is joinLefts etc. but those need to exist ...
    Antoine Doeraene
    @sherpal

    I haven't tested to actually run the query, but doing

        val my_great_table = TableQuery[SavedQueryCronDefinitionTable]
        my_great_table
          .map(elem => (elem.queryId, Case If elem.isEnabled Then 1 Else 0))
          .groupBy(_._1)
          .map {
            case (id, elems) => (id, elems.map(_._2).sum, elems.length - elems.map(_._2).sum)
          }
          .result
          .statements
          .foreach(println)

    gives the statement

    select "query_id", sum((case when "is_enabled" then 1 else 0 end)), count(1) - sum((case when "is_enabled" then 1 else 0 end)) from "saved_query_cron_tab" group by "query_id"

    which, I believe, is what you want.

    Anirudh Vyas
    @AnirudhVyas
    let me give it a try, thank you!
    Anirudh Vyas
    @AnirudhVyas
    thanks @sherpal that definitely works
    Antoine Doeraene
    @sherpal
    Cool, glad I could help. In the first map you could also add a third element with the condition the other way around, and sum that column instead in the groupBy. It "might" improve performance...
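
    Spelled out, Antoine's suggestion might look roughly like this (an untested sketch; the table and column names are taken from the earlier snippet):

    ```scala
    // Sketch: compute both the "enabled" and "disabled" flags up front,
    // then sum each column after the groupBy instead of subtracting
    // from the count.
    val my_great_table = TableQuery[SavedQueryCronDefinitionTable]
    my_great_table
      .map(elem => (elem.queryId,
                    Case If elem.isEnabled Then 1 Else 0,   // 1 when enabled
                    Case If elem.isEnabled Then 0 Else 1))  // 1 when disabled
      .groupBy(_._1)
      .map { case (id, elems) =>
        (id, elems.map(_._2).sum, elems.map(_._3).sum)
      }
    ```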
    Anirudh Vyas
    @AnirudhVyas
    hmm I'll give that a try thanks!
    Min Joo Kang
    @mandoolala
    hello, I found that when upserting the column entries of BigDecimal, somehow the number is automatically rounded up.. how can I keep the original value with the original decimal places?
    Antoine Doeraene
    @sherpal
    @mandoolala Are you sure the problem is with Slick? Could it be in the database itself?
    Min Joo Kang
    @mandoolala
    yeah just realised it is the database itself! thanks!!
    Ghost
    @ghost~5cc594f1d73408ce4fbee018

    Hi all,
    I have following SQL Query

    SELECT o.* FROM resource_changes o INNER JOIN (
            select min(id) as id from resource_changes i where sent = FALSE AND updated_at > '2020-04-18'  group by resource_uuid
        ) t ON o.id = t.id;

    I'd like to reproduce in Slick, I've tried

    val q = for {
          idResult <- eventsTable
                        .sortBy(_.id.asc)
                        .sortBy(_.updatedAt.asc)
                        .filter(_.sent === sent)
                        .filter(_.updatedAt > Instant.now().minus(30, ChronoUnit.DAYS))
                        .groupBy(_.resourceUuid)
                        .map(_._2.map(_.id).min)
          event <- eventsTable.filter(_.id === idResult)
        } yield event
        q.result.statements.foreach(println)
        q.result

    but that produces

    SELECT * 
    FROM
        (SELECT 
            MIN(x3.x4) AS x5
        FROM
            (SELECT 
            `id` AS x4,
                `updated_at` AS x6,
                `sent` AS x7,
                `resource_uuid` AS x8
        FROM
            `resource_changes`
        ORDER BY `updated_at` , `id`) x3
        WHERE
            (x3.x7 = FALSE)
                AND (x3.x6 > { ts '2021-04-19 12:43:53.17752' })
        GROUP BY x3.x8) x9,
        `resource_changes` x2
    WHERE
        x2.`id` = x9.x5;
    Aim is to get the earliest unsent entries grouped by resource_uuid
    Antoine Doeraene
    @sherpal

    when you do your for comprehension like that, you are making a cross join. What you want is an inner join, so instead of

    event <- eventsTable.filter(_.id === idResult)

    you should do

    event <- eventsTable
    if event.id === idResult.id
    Ghost
    @ghost~5cc594f1d73408ce4fbee018
    that actually produced the same SQL
    Rohan Sircar
    @rohan-sircar
    slick generated query looks ok to me, can't be sure due to all the aliases but the format is similar. Are you getting different results from the two of them?
    Rohan Sircar
    @rohan-sircar

    https://stackoverflow.com/questions/394853/sql-inner-join-syntax

    I think

    WHERE
        x2.`id` = x9.x5;

    is inner join

    edit: ok, I looked around a bit more and I'm not so sure anymore

    Antoine Doeraene
    @sherpal
    ah yes indeed it's the same, as Rohan says. I think if you want the inner join syntax, you have to do
    eventsTable.join([your sub query]).on(_.id === _)
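
    Against the earlier query, that join form might look something like this (an untested sketch; names follow the snippet above):

    ```scala
    // Sketch: build the sub-query of minimum ids per resource_uuid,
    // then inner-join it back onto the events table.
    val minIds = eventsTable
      .filter(_.sent === sent)
      .filter(_.updatedAt > Instant.now().minus(30, ChronoUnit.DAYS))
      .groupBy(_.resourceUuid)
      .map { case (_, rows) => rows.map(_.id).min }

    val q = eventsTable
      .join(minIds)
      .on(_.id === _)                  // inner join on the minimum id
      .map { case (event, _) => event }
    ```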
    Rohan Sircar
    @rohan-sircar

    Actually these examples are equivalent and neither is a cartesian product. A cartesian product is returned when you join two tables without specifying a join condition, such as in
    select *
    from t1,t2

    hmm

    Ghost
    @ghost~5cc594f1d73408ce4fbee018
    I'll need the inner join as the table is quite large and the sub-select times out
    Rohan Sircar
    @rohan-sircar
    the point is, that is an inner join
    ok, I think I read quite a while back that bugs in the db server might cause a cross join so explicit inner join is a best practice
    so you can do
    eventsTable.join([your sub query]).on(_.id === _)
    as Antoine suggested

    you could also turn on these loggers

    <logger name="slick.jdbc.JdbcBackend.statement" level="DEBUG" />
      <logger name="slick.jdbc.JdbcBackend.parameter" level="DEBUG" />
      <logger name="slick.jdbc.JdbcBackend.statementAndParameter" level="DEBUG" />

    to get cleaner sql

    root 2021-05-19T20:40:37.148 [http4s-demo.database-5] DEBUG s.j.J.statement - Preparing insert statement (returning: book_id,book_title,book_isbn,author_id,created_at): insert into "books" ("book_title","book_isbn","author_id")  values (?,?,?)
    Ghost
    @ghost~5cc594f1d73408ce4fbee018

    @sherpal suggestion produced

    SELECT 
        *
    FROM
        `resource_changes` x2,
        (SELECT 
            MIN(x3.x4) AS x5
        FROM
            (SELECT 
            `id` AS x4,
                `updated_at` AS x6,
                `sent` AS x7,
                `resource_uuid` AS x8
        FROM
            `resource_changes`
        ORDER BY `updated_at` , `id`) x3
        WHERE
            (x3.x7 = FALSE)
                AND (x3.x6 > { ts '2021-04-19 16:51:57.441539' })
        GROUP BY x3.x8) x9
    WHERE
        x2.`id` = x9.x5

    which comes back quick enough,

    thx
    Rohan Sircar
    @rohan-sircar
    huh, it's the same query but
    select * from B,A
    instead of
    select * from A,B
    Antoine Doeraene
    @sherpal
    Note that depending on your database system, using the "exists" syntax might be much better in terms of performance
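
    An "exists"-based version could be sketched roughly like this (untested; this rephrases "earliest unsent per resource_uuid" as "no earlier unsent row exists", with column names assumed from the thread):

    ```scala
    // Sketch: keep each unsent event only if no earlier unsent event
    // exists for the same resource_uuid -- i.e. it is the earliest one.
    val cutoff = Instant.now().minus(30, ChronoUnit.DAYS)

    val q = eventsTable
      .filter(e => e.sent === false && e.updatedAt > cutoff)
      .filter { e =>
        !eventsTable
          .filter(i => i.sent === false &&
                       i.updatedAt > cutoff &&
                       i.resourceUuid === e.resourceUuid &&
                       i.id < e.id)
          .exists
      }
    ```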
    James Phillips
    @jdrphillips
    could anybody help me with how to upgrade our ZonedDateTime columns to slick 3.3.0?
    I'm sure you get these sorts of questions a lot... Migrating the data isn't an option unfortunately
    We currently have ZonedDateTime scala types, mapped to java.sql.Timestamp. Slick takes it from there. Our database type is timestamp with time zone
    With the update, this mapping is ignored, and everything is broken
    Production environment is postgres, test environment is H2. H2 works out of the box with the upgrade (but I suspect this is a misdirection and it's broken in an insidious way)

    I've tried many variations on this:

      // Customise the types...
      class JdbcTypes extends super.JdbcTypes {
    
        override val zonedDateType = new ZonedDateTimeJdbcType {
    
          override def sqlType = java.sql.Types.TIMESTAMP_WITH_TIMEZONE
          override def setValue(v: ZonedDateTime, p: PreparedStatement, idx: Int) = {
            p.setObject(idx, v.toInstant(), sqlType)
          }
          override def getValue(r: ResultSet, idx: Int): ZonedDateTime = {
            ZonedDateTime.ofInstant(r.getObject(idx, classOf[java.sql.Timestamp]).toInstant, Utc)
          }
          override def updateValue(v: ZonedDateTime, r: ResultSet, idx: Int) = {
            r.updateObject(idx, v.toInstant())
          }
          override def valueToSQLLiteral(value: ZonedDateTime) = "{ts '"+value.format(sqlDateTimeFormatter)+"'}"
        }
      }

    but I get errors no matter what I do. I can't get the combination of TIMESTAMP_WITH_TIMEZONE, Timestamp, Instant and ZonedDateTime right

    Any help greatly appreciated
    (there might be inconsistencies in the above code due to me just trying anything to get it work - first task is to read data, then focus on writing later)
    Richard Dallaway
    @d6y
    @jdrphillips let me check the following: you have imported your own custom profile (one for H2 and one for PG), you're using those in place of the standard H2 and PG profile imports, and the custom profiles have the above JdbcTypes in them? And on your table you have used ZonedDateTime as a column type? I'd also add some logging (temporarily) to these methods so you can get some clarity on when they are in scope, and worry about them having the right behavior after that.
    James Phillips
    @jdrphillips
    They're definitely in scope correctly as I get different errors when I change the above implementations
    Sometimes null error, sometimes "type 93" errors
    But you're correct. It is a column[ZonedDateTime] with a SQL type of timestamp with time zone in postgres. H2 doesn't matter so much as long as it works as we have no persisted data in h2
    (but I think the new "out of the box" behaviour won't work as the default type is varchar, so our comparisons will be broken due to inconsistently timezoned strings being inserted)
    MichaƂ Pawlicki
    @MichalPawlicki
    @jdrphillips Have you tried https://github.com/tminglei/slick-pg ? It supports the ZonedDateTime -> timestamptz mapping, so you won't have to define your own one.
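
    For reference, the slick-pg setup for that mapping follows roughly this pattern from its README (the trait names are the library's; treat this as a sketch, not a drop-in):

    ```scala
    import com.github.tminglei.slickpg._

    // Custom profile mixing in java.time support; PgDate2Support
    // provides the ZonedDateTime <-> timestamptz column mapping.
    trait MyPostgresProfile extends ExPostgresProfile with PgDate2Support {
      override val api = MyAPI
      object MyAPI extends API with DateTimeImplicits
    }
    object MyPostgresProfile extends MyPostgresProfile

    // Then use MyPostgresProfile.api._ in place of the standard
    // PostgresProfile.api._ import.
    ```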
    Roberto Leibman
    @rleibman
    So.... I'm trying to get my stack of projects into scala 3.... the migration tools are complaining because slick uses macros. Is there any ETA for a scala 3 version?
    nafg
    @nafg
    @rleibman no, follow slick/slick#2187
    Or get involved with it
    Meanwhile you can of course have some modules on 2.13 and some on 3 in your application
    James Phillips
    @jdrphillips
    @MichalPawlicki I'll give it a look - thanks
    nafg
    @nafg
    If anyone wants to video chat about Slick, or possibly watch me do some slick-related coding, I'm on https://meet.google.com/umj-wvqv-xxs now
    Matt Oliver
    @halfmatthalfcat
    Hey everybody, question about migrations. So what is the best practice around changing column types which map to case class types and rolling those back?
    I'm basically looking for a way to fallback to the previous type in some ergonomic way without having to either (1) duplicate a lot of code or (2) somehow leverage the current model implicitly through the app depending on what "version" of the model is currently applied
    Matt Oliver
    @halfmatthalfcat

    For example I have a Table and associated mapped class

    case class MyModel(
      myPk: Int,
      someText: String
    )
    
    class MyTable(tag: Tag) extends Table[MyModel](tag, "my_table") {
      def myPk: Rep[Int] = column("my_pk")
      def someText: Rep[String] = column("some_text")
    
      def * = (myPk, someText) <> ((MyModel.apply _).tupled, MyModel.unapply)
    
      def pk = primaryKey("my_pk_pk", myPk)
    }

    I have a migration that changes the myPk column from an integer to a varchar, so I now need to somehow represent that change within my table and case class with the option to roll back. The actual DB rollback is handled (rolled my own migration framework based on Forklift) but it's handling the code rollback that is eluding me.

    I could duplicate MyTable with myPk as an integer and then also duplicate MyModel using myPk as an int still but then I'm having to potentially weave these two competing tables and models through the code depending on whether we've migrated or rolled back...

    I almost need some way to version MyModel with myPk as a String, as well as MyTable with myPk as a String/varchar
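
    One way to sketch that versioning (not from the thread, purely an illustration) is to parameterize the model and table over the key type, so the "int" and "varchar" versions share one definition:

    ```scala
    // Hypothetical sketch: the primary-key type becomes a parameter.
    case class MyModel[K](myPk: K, someText: String)

    class MyTable[K: BaseColumnType](tag: Tag, pkSqlType: String)
        extends Table[MyModel[K]](tag, "my_table") {
      def myPk: Rep[K] = column[K]("my_pk", O.SqlType(pkSqlType))
      def someText: Rep[String] = column[String]("some_text")

      def * = (myPk, someText) <> ((MyModel.apply[K] _).tupled, MyModel.unapply[K])

      def pk = primaryKey("my_pk_pk", myPk)
    }

    // Before the migration:
    //   val tables = TableQuery(new MyTable[Int](_, "INTEGER"))
    // After:
    //   val tables = TableQuery(new MyTable[String](_, "VARCHAR"))
    ```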