    Sagar Gupta
    @sgrG24
    I am using Slick with PostgreSQL. My question is performance related. I am running a simple query to get a row by name (select * from table_name where name = 'some-name'). The table contains only 2 rows. When I ran the query once, it took roughly 50 ms to return the output. When I ran the same query 5000 times in a loop, it took around 3000 ms for each query's result to come back, although preparing the query took only ~1 ms and executing it took roughly 30 ms (the same as when I ran the query only once). So I want to understand where the extra time in the 3000 ms is coming from. I am using a connection pool and running this query against a local Postgres instance.
    Naftoli Gugenheim
    @nafg
    @sgrG24 it might be from compiling it
    Sagar Gupta
    @sgrG24
    @nafg
    ```
    DEBUG s.compiler.QueryCompilerBenchmark - ------------------- Phase: Time ---------
    DEBUG s.compiler.QueryCompilerBenchmark - assignUniqueSymbols: 0.023818 ms
    DEBUG s.compiler.QueryCompilerBenchmark - unrollTailBinds: 0.004980 ms
    DEBUG s.compiler.QueryCompilerBenchmark - inferTypes: 0.010956 ms
    DEBUG s.compiler.QueryCompilerBenchmark - expandTables: 0.045446 ms
    DEBUG s.compiler.QueryCompilerBenchmark - forceOuterBinds: 0.011063 ms
    DEBUG s.compiler.QueryCompilerBenchmark - removeMappedTypes: 0.010334 ms
    DEBUG s.compiler.QueryCompilerBenchmark - expandSums: 0.000574 ms
    DEBUG s.compiler.QueryCompilerBenchmark - expandRecords: 0.007662 ms
    DEBUG s.compiler.QueryCompilerBenchmark - flattenProjections: 0.017521 ms
    DEBUG s.compiler.QueryCompilerBenchmark - rewriteJoins: 0.003304 ms
    DEBUG s.compiler.QueryCompilerBenchmark - verifySymbols: 0.005054 ms
    DEBUG s.compiler.QueryCompilerBenchmark - relabelUnions: 0.002576 ms
    DEBUG s.compiler.QueryCompilerBenchmark - createAggregates: 0.000468 ms
    DEBUG s.compiler.QueryCompilerBenchmark - resolveZipJoins: 0.002218 ms
    DEBUG s.compiler.QueryCompilerBenchmark - pruneProjections: 0.012427 ms
    DEBUG s.compiler.QueryCompilerBenchmark - createResultSetMapping: 0.005653 ms
    DEBUG s.compiler.QueryCompilerBenchmark - hoistClientOps: 0.021788 ms
    DEBUG s.compiler.QueryCompilerBenchmark - reorderOperations: 0.004274 ms
    DEBUG s.compiler.QueryCompilerBenchmark - mergeToComprehensions: 0.058560 ms
    DEBUG s.compiler.QueryCompilerBenchmark - optimizeScalar: 0.003857 ms
    DEBUG s.compiler.QueryCompilerBenchmark - fixRowNumberOrdering: 0.000493 ms
    DEBUG s.compiler.QueryCompilerBenchmark - removeFieldNames: 0.028186 ms
    DEBUG s.compiler.QueryCompilerBenchmark - codeGen: 0.032082 ms
    DEBUG s.compiler.QueryCompilerBenchmark - TOTAL: 0.313294 ms
    [postgres-18] DEBUG slick.jdbc.JdbcBackend.benchmark - Execution of prepared statement took 18ms
    ```
    The future returned by the db.run() method took 5000 ms to complete, while the preparation time and execution time are very low. Any idea why it is taking 5000 ms?
    Andy Czerwonka
    @andyczerwonka
    What is the idiomatic way to handle failures on insert where the new record fails a constraint? E.g. duplicate name on a unique constraint, check constraint, etc?
    Naftoli Gugenheim
    @nafg
    @andyczerwonka is that a slick question or a database programming question?
    @sgrG24 no idea, can you create a self-contained reproducer?
    akozumpl
    @akozumpl:matrix.org [m]
    Andy: personally I haven't found anything better than catching the driver-specific exception (they have error codes) and responding accordingly. Perhaps there is a slick extension out there that unifies this for various drivers.
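
    A minimal sketch of that approach for PostgreSQL, assuming hypothetical db, users and user values; SQLSTATE 23505 is unique_violation and 23514 is check_violation, and other drivers throw their own exception classes with their own codes:

    ```
    import scala.concurrent.{ExecutionContext, Future}
    import org.postgresql.util.PSQLException

    // Sketch only: `db`, `users` and `user` are assumed to exist elsewhere.
    def insertUser()(implicit ec: ExecutionContext): Future[Either[String, Int]] =
      db.run(users += user)
        .map(Right(_))
        .recover {
          // 23505 = unique_violation, 23514 = check_violation in PostgreSQL
          case e: PSQLException if e.getSQLState == "23505" => Left("duplicate key")
          case e: PSQLException if e.getSQLState == "23514" => Left("check constraint violated")
        }
    ```
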
    Oliver Schrenk
    @oschrenk

    I'm switching the test database from h2 to an embedded postgres. It is harder than I thought as it is uncovering some (circular?) table dependency issues:

    When I run

    db.run((Tables.Address.schema ++ Tables.Profile.schema).createIfNotExists).futureValue

    I get

    [info]   The future returned an exception of type: org.postgresql.util.PSQLException, with message: ERROR: relation "profile" does not exist. (PersistenceTest.scala:56)

    and when I switch the order

    db.run((Tables.Profile.schema ++ Tables.Address.schema).createIfNotExists).futureValue

    I get

    [info]   The future returned an exception of type: org.postgresql.util.PSQLException, with message: ERROR: relation "address" does not exist. (PersistenceTest.scala:56)

    Any advice?

    (I'm running 3.4.0-M1 since I ran into similar issues with dropIfExists which were solved by upgrading from 3.3.3 to 3.4.0)

    Arnaud Tanguy
    @ArnaudT

    Hello,

    I have trouble using Slick with distinct and sorting by a distance (PostGIS in PostgreSQL).
    When I run the query below,
    it generates a "SELECT DISTINCT ON expressions must match initial ORDER BY expressions" error,
    but I cannot see how to create a subquery to work around this:

    ```
    val queryCenterLimit = (for {
      (((((((center, _), _), _), _), _), _), _) <- joinQueries
    } yield (center.id, center.location))
      .distinctOn(_._1)
      .sortBy(r => location.bind.distance(r._2))
      .drop(pagination.after.map(_.toInt).getOrElse(0))
      .take(pagination.limit) 
    ```

    Do you see how I can solve this?
    Thanks for your help

    Carlo Romo
    @caromo

    What is the idiomatic way to handle failures on insert where the new record fails a constraint? E.g. duplicate name on a unique constraint, check constraint, etc?

    Hopping on this question as well, is there an example of error handling with Slick? I want to be able to distinguish general db errors from more specific errors (e.g. failed constraints).

    James Phillips
    @jdrphillips
    Hello, I have a query of a tuple. I would like to map this and add a fake None column to it,
    i.e. query.map { case (a, b) => (a, b, None) }.
    However, None is not a Rep of course, so I can't do this.
    How do I get a Rep of None into this? I have tried many things and none work.
    I am trying to unify this query's type with a separate one that does an extra joinLeft. Note that the None is not a simple type, it is an Option[EntireTableClass], rather than an Int or something.
    Thus Rep.None does not seem to work, because it is a table rather than a TypedType.
    Naftoli Gugenheim
    @nafg
    @oschrenk can you provide a reproducer please?
    @ArnaudT not sure I understand the question
    @jdrphillips can you wrap it in LiteralColumn
    水山清风
    @djx314
    @jdrphillips Try LiteralColumn(Option.empty[YourBaseType])
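
    A minimal sketch of that suggestion, assuming a hypothetical query of (Rep[Int], Rep[String]) pairs; this works for base element types, whereas an Option of an entire table row is a different problem:

    ```
    import slick.jdbc.PostgresProfile.api._
    import slick.lifted.LiteralColumn

    // Appends a constant NULL column typed as Option[Int]; rows come back as
    // (Int, String, Option[Int]) with the last element always None.
    def addNoneColumn(q: Query[(Rep[Int], Rep[String]), (Int, String), Seq]) =
      q.map { case (a, b) => (a, b, LiteralColumn(Option.empty[Int])) }
    ```
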
    coding4cc
    @coding4cc

    @ArnaudT I have the same question in PostgreSQL:

    tables.articleMaterials
        .join(tables.articles).on(_.materialId === _.materialId)
        .filter(_._1.ownerDomain === cmd.domainId)
        .filterIf(cmd.beginTime > 0)(_._1.createTime > cmd.beginTime)
        .filterIf(cmd.endTime < Timestamps.MaxValue)(_._1.createTime <= cmd.endTime)
        .filterIf(cmd.ownerIds.nonEmpty)(_._1.ownerId.inSet(cmd.ownerIds))
        .filterOpt(cmd.kw.filter(Strings.nonEmpty).map(it => s"%$it%"))((it, q) => it._1.remarks.like(q) || it._2.title.like(q))
        .map(it => it._1)
        .distinctOn(_.materialId)
        .sortBy(_.modifyTime.desc)
        .drop(cmd.skip)
        .take(cmd.limit)
        .result

    This query works fine in MySQL; in PostgreSQL it causes an exception [SELECT DISTINCT ON expressions must match initial ORDER BY expressions] @nafg

    Naftoli Gugenheim
    @nafg
    @coding4cc a PR with a test case or even a fix would be appreciated ;)
    anqit
    @anqit

    I have a data model that consists of 2 entities, Users and Events, and a mapping between them, EventInvitations. Here's a simplified look:

    case class User(id: String, ...)
    case class Event(id: String, ...)
    case class EventInvitation(userId: String, eventId: String, ...)

    I have created slick table definitions for the 3 entities as you might expect, with foreign keys from the invitation fields to the user and event ids in those respective tables. I would like to create a function that takes a user id and returns all of the events to which that user is invited, as well as all of the invitations for those events, so a signature similar to the following:

    def getEventsForUser(userId: String): Seq[(Event, Seq[EventInvitation])]

    I have come up with the following to create the DBIOAction, and it seems to work from my limited testing so far:

    type PopulatedEvent = (Event, Seq[EventInvitation])

    def getEventsByUserAction(userId: String): DBIOAction[Seq[PopulatedEvent], NoStream, Effect.Read] =
        val eventInvitationsQuery: Query[schema.EventInvitations, EventInvitation, Seq] = eventInvitationDao.getEventInvitationsByUserQuery(userId) // queries event invitations by user id

        val eventsQuery = tableQuery filter { _.id in eventInvitationsQuery.map { _.eventId } } // here, `tableQuery` is the table query for the Events table, so this gets all events that the user is invited to
        for {
            events <- eventsQuery.result
            allInvites <- eventInvitationsQuery.result map { _.map { invite => (invite.eventId, invite) }} map { MultiDict.from[EventId, EventInvitation](_) } // using MultiDict from scala-collection-contrib (https://www.javadoc.io/doc/org.scala-lang.modules/scala-collection-contrib_2.13/latest/scala/collection/MultiDict.html), this essentially groups invitations by event id
        } yield events map { event => (event, allInvites.get(event.id).toSeq) }

    This seems to work from the limited testing I've done, but for some reason it feels a bit hacky to me. I would have to think this is a somewhat common use case (aggregating some entity and a collection of related entities together), so my guess is that someone has probably come up with a more efficient way to accomplish this than I just have. Does anyone here have ideas or examples of a better way to do this? I have left out the table definitions for brevity and because they are fairly standard as far as I can tell, but I can show those (or any other missing code) as well. Happy to clarify anything. Thanks!

    Simão Mata
    @simao
    Hi. What can we expect from an -M1 release? Is it ready to be used in production? We've been waiting for a 3.4 release but are not sure if this is ready or if we should wait.
    Michael Stokley
    @mastokley_gitlab

    I'm struggling with a RejectedExecutionException:

    Caused by: java.util.concurrent.RejectedExecutionException: Task slick.basic.BasicBackend$DatabaseDef$$anon$3@369ff44b rejected from slick.util.AsyncExecutor$$anon$1$$anon$2@1db1c8bd[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 13]
        at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2055)
        at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
        at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
        at slick.util.AsyncExecutor$$anon$1$$anon$4.execute(AsyncExecutor.scala:161)
        at slick.basic.BasicBackend$DatabaseDef.runSynchronousDatabaseAction(BasicBackend.scala:264)
        at slick.basic.BasicBackend$DatabaseDef.runSynchronousDatabaseAction$(BasicBackend.scala:262)
        at slick.jdbc.JdbcBackend$DatabaseDef.runSynchronousDatabaseAction(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:241)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:147)
        at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:141)
        at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:140)
        at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.runInternal(BasicBackend.scala:76)
        at slick.basic.BasicBackend$DatabaseDef.runInternal$(BasicBackend.scala:75)
        at slick.jdbc.JdbcBackend$DatabaseDef.runInternal(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.run(BasicBackend.scala:73)
        at slick.basic.BasicBackend$DatabaseDef.run$(BasicBackend.scala:73)
        at slick.jdbc.JdbcBackend$DatabaseDef.run(JdbcBackend.scala:37)

    I've tried increasing the queue size and the number of threads but it doesn't seem to help.

    SeenivasanSeeni
    @Seenivasanseeni
    Hi there. I want to learn how Slick converts a ResultSet into Scala objects and try to tweak it for my personal use case. Are there any docs that can help me with this?
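
    For what it's worth: in the lifted embedding the mapping comes from each table's * projection (the <> call), while the Plain SQL API uses GetResult instances that you can define yourself. A minimal sketch of the latter, with a hypothetical users table:

    ```
    import slick.jdbc.GetResult
    import slick.jdbc.PostgresProfile.api._

    object PlainSqlExample {
      // Hypothetical row type for illustration.
      case class User(id: Long, name: String, email: Option[String])

      // GetResult reads columns positionally from the underlying ResultSet.
      implicit val getUserResult: GetResult[User] =
        GetResult(r => User(r.nextLong(), r.nextString(), r.nextStringOption()))

      // The plain SQL interpolator uses the implicit GetResult[User] for .as[User].
      val usersAction: DBIO[Vector[User]] = sql"select id, name, email from users".as[User]
    }
    ```
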
    Andy Czerwonka
    @andyczerwonka
    Hi, I'm trying to write an insert statement that increments a version, where the version is the max version + 1, or coalesces to version 1 if the table is empty. Can anyone help me express that as a Slick query?
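
    One way to express that, as a sketch only with a hypothetical items table; note that reading max(version) and inserting in one transaction is not race-free without a unique constraint or stricter isolation:

    ```
    import scala.concurrent.ExecutionContext.Implicits.global
    import slick.jdbc.PostgresProfile.api._

    object VersionedInsert {
      case class Item(name: String, version: Int)

      class Items(tag: Tag) extends Table[Item](tag, "items") {
        def name    = column[String]("name")
        def version = column[Int]("version")
        def *       = (name, version) <> (Item.tupled, Item.unapply)
      }
      val items = TableQuery[Items]

      // max(version) coalesced to 0 for an empty table, then insert max + 1.
      def insertNextVersion(name: String): DBIO[Int] =
        (for {
          maxVersion <- items.map(_.version).max.getOrElse(0).result
          inserted   <- items += Item(name, maxVersion + 1)
        } yield inserted).transactionally
    }
    ```
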
    Naftoli Gugenheim
    @nafg
    @simao yes it should be ready. Mostly small PRs since 3.3.3. There have been some binary changes, hence the bump. I released RC1 last week. I did a small fix since then so I have to release RC2 soon
    davidvirgilnaranjo
    @davidvirgilnaranjo
    Hello guys, do you know when the new 3.4.0 release will be out? I've seen that the latest stable version was released 2 years ago. Is there any reason for the lack of activity in the Slick open source project?
    bino013
    @bino013

    Hi All,

    I'm trying to translate this query to Scala Slick, but I'm having a hard time.

    select x2."retrieval_reference_no",
           x2."transaction_timestamp",
           x3."account_id",
           x3."entry_code",
           x3."amount",
           x3."current",
           x3."hold",
           x3."pending",
           x3."account_type",
           (select coalesce(x4."description", x3."entry_code")
            from "transaction_description" x4
            where x4."entry_code" = x3."entry_code"),
           x2."response_code",
           x2."response_detail",
           x2."transaction_id",
           x2."original_transaction_id",
           x2."transaction_type"
    from "transaction" x2,
         "transaction_history" x3
    where ((x2."transaction_id" = x3."transaction_id") and (x2."tenant_id" = x3."tenant_id"))
      and (((x3."tenant_id" = 'SampleTenantId') and (x3."account_id" = 'SrcAcctId')))
    order by x2."transaction_timestamp" desc

    Here is my attempt to translate this to a Slick query:

    
     def getEntryDescription(
          entryCode: Rep[String]
      ): Query[Rep[String], String, Seq] = {
        transactionDescriptionQuery
          .filter(_.entryCode === entryCode)
          .map(row => row.description.ifNull(entryCode))
      }
    
        val whereCondition = for {
          (txn, history) <- joinHistoryTables
          if history.tenantId === params.tenantId && history.accountId === params.accountId
          description <- getEntryDescription(history.entryCode)
        } yield {
          (txn, history, description)
        }
    
        whereCondition
      .sortBy { case (txn, _, _) => txn.transactionTimestamp.desc }
          .map { case (txn, history, description) =>
            (
              txn.retrievalReferenceNo,
              txn.transactionTimestamp,
              history.accountId,
              history.entryCode,
              history.amount,
              history.current,
              history.hold,
              history.pending,
              history.accountType,
              description,
              txn.responseCode,
              txn.responseDetail,
              txn.transactionId,
              txn.originalTransactionId,
              txn.transactionType
            ) <> (TransactionHistoryDetails.tupled, TransactionHistoryDetails.unapply)
          }
          .drop(params.offset)
          .take(params.limit)

    But this gives me:

    select x2."retrieval_reference_no",
           x2."transaction_timestamp",
           x3."account_id",
           x3."entry_code",
           x3."amount",
           x3."current",
           x3."hold",
           x3."pending",
           x3."account_type",
           coalesce(x4."description", x3."entry_code")),
           x2."response_code",
           x2."response_detail",
           x2."transaction_id",
           x2."original_transaction_id",
           x2."transaction_type"
    from "transaction" x2,
         "transaction_history" x3
    where ((x2."transaction_id" = x3."transaction_id") and (x2."tenant_id" = x3."tenant_id"))
      and (((x3."tenant_id" = 'SampleTenantId') and (x3."account_id" = 'SrcAcctId'))) and x4."entry_code" = x3."entry_code"
    order by x2."transaction_timestamp" desc
    Naftoli Gugenheim
    @nafg
    @bino013 can you put a self-contained reproducer on github or scastie?
    Ryan Stradling
    @rstradling
    I'm looking at how to make Slick parameterize queries, and so far, for SQL Server, I have come up with the following:
    compiled queries
    and interpolated strings.
    Are these the only ways? There is no way to do this with just a normal Table.filter(_.id === parameterId)? I am just trying to confirm whether I am correct or not.
    Naftoli Gugenheim
    @nafg
    @rstradling I don't understand the question
    Aakash Sabharwal
    @aakashsa-lasso
    Hi friends, quick question: is anyone aware of any examples of compiled queries / prepared statements with collection types such as Rep[List[String]] or options (Option[String])?
    Ryan Stradling
    @rstradling
    @nafg Parameterized queries/prepared statements with parameters/bind parameters. I am seeing the SQL come out as SELECT ... FROM TABLE where id = 3 instead of SELECT ... FROM TABLE where id = ? unless I compile the query or use an interpolated string.
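
    For reference, a minimal sketch of the Compiled-query pattern being discussed, with a hypothetical users table; the id stays a bind parameter and the query is compiled to SQL only once:

    ```
    import slick.jdbc.SQLServerProfile.api._

    object CompiledQueries {
      class Users(tag: Tag) extends Table[(Int, String)](tag, "users") {
        def id   = column[Int]("id", O.PrimaryKey)
        def name = column[String]("name")
        def *    = (id, name)
      }
      val users = TableQuery[Users]

      // Compiled once; each call binds the id as a parameter (... where "id" = ?).
      val byId = Compiled { (id: Rep[Int]) =>
        users.filter(_.id === id)
      }

      // Usage: db.run(byId(3).result) runs the prepared statement with 3 bound.
    }
    ```
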
    Olav Grøndal
    @ogroendal
    I am running into what I suspect is a bug in Slick. I have a groupBy and then a map where I use avg, which results in a runtime error with the message "slick.SlickTreeException: Cannot convert node to SQL Comprehension". If I just change that to using the .length method, then the code works at runtime.
    I am not sure if it could be related to this issue: slick/slick#1355
    I am using Slick 3.3.3
    Takafumi Iju
    @ijufumi

    Hi,

    I'm developing a Scala web app using Slick.
    However, I'm facing a problem that I can't resolve.
    I have no idea what the cause is.

    16:21:11 % sbt clean run
    [info] welcome to sbt 1.6.2 (Eclipse Adoptium Java 17.0.4)
    [info] loading settings for project openreports_scala-build from plugins.sbt ...
    [info] loading project definition from /Users/ijufumi/workspaces/ijufumi/production/openreports_scala/project
    [info] loading settings for project root from build.sbt ...
    [info] set current project to Open Report API (in build file:/Users/ijufumi/workspaces/ijufumi/production/openreports_scala/)
    [success] Total time: 0 s, completed Aug 14, 2022, 4:30:47 PM
    [info] compiling 34 Scala sources to /Users/ijufumi/workspaces/ijufumi/production/openreports_scala/target/scala-2.13/classes ...
    [error] /Users/ijufumi/workspaces/ijufumi/production/openreports_scala/src/main/scala/jp/ijufumi/openreports/entities/Member.scala:47:7: value <> is not a member of (slick.lifted.Rep[Option[Int]], slick.jdbc.PostgresProfile.api.Rep[String], slick.jdbc.PostgresProfile.api.Rep[String], slick.jdbc.PostgresProfile.api.Rep[String], slick.jdbc.PostgresProfile.api.Rep[String], slick.jdbc.PostgresProfile.api.Rep[String], slick.jdbc.PostgresProfile.api.Rep[java.sql.Timestamp], slick.jdbc.PostgresProfile.api.Rep[java.sql.Timestamp], slick.jdbc.PostgresProfile.api.Rep[Long])
    [error] possible cause: maybe a semicolon is missing before `value <>`?
    [error]     ) <> (Member.tupled, Member.unapply)
    [error]       ^
    [error] /Users/ijufumi/workspaces/ijufumi/production/openreports_scala/src/main/scala/jp/ijufumi/openreports/entities/Member.scala:47:33: missing argument list for method unapply in object Member
    [error] Unapplied methods are only converted to functions when a function type is expected.
    [error] You can make this conversion explicit by writing `unapply _` or `unapply(_)` instead of `unapply`.
    [error]     ) <> (Member.tupled, Member.unapply)
    [error]                                 ^
    [error] two errors found
    [error] (Compile / compileIncremental) Compilation failed
    [error] Total time: 3 s, completed Aug 14, 2022, 4:30:49 PM

    My code is here.

    Can someone please help me?

    Thanks

    Naftoli Gugenheim
    @nafg
    @ijufumi the problem is that for <> to work it needs an implicit for Timestamp column type, but it's hitting an ambiguous implicit issue. (Once you solve this, you don't need the explicit (timestampType) on the columns).
    One way to solve it is to exclude the driver's implicit when you import it:
    import slick.jdbc.PostgresProfile.api.{timestampColumnType => _, _}
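
    For example, a minimal sketch of option 1, with a hypothetical custom mapping standing in for your own conflicting implicit:

    ```
    import java.sql.Timestamp
    // Hide the profile's timestampColumnType so your own implicit is the only candidate.
    import slick.jdbc.PostgresProfile.api.{timestampColumnType => _, _}

    object MyColumnTypes {
      // Hypothetical replacement mapping (here storing the Timestamp as epoch millis);
      // substitute whatever implicit was causing the ambiguity in the first place.
      implicit val myTimestampColumnType: BaseColumnType[Timestamp] =
        MappedColumnType.base[Timestamp, Long](_.getTime, new Timestamp(_))
    }
    ```
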
    Another option is to extend the profile and its API and override timestampColumnType with your implementation
    Also, if you rename your implicit to timestampColumnType and arrange things so you import it or mix it in in an inner scope, it might shadow the profile's implicit.
    Option 2 is the best if you want this in a few places
    @rstradling I don't know the answer
    Takafumi Iju
    @ijufumi
    @nafg
    I am grateful for your support.
    I was able to resolve this problem.
    Naftoli Gugenheim
    @nafg
    @/all you can join the new #slick channel on the main Scala Discord: https://discord.gg/yQheBhUtAa