    Jonas Chapuis
    @jchapuis
    hello all, I hit a StackOverflowError when building a large DBIO, see excerpts of the stack trace below, any idea what could be the issue? I'm running Slick 3.3.2
    thanks!!
    java.lang.StackOverflowError: null
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:166)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
    ...
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:172)
        at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:154)
        at slick.dbio.DBIOAction$sameThreadExecutionContext$.runTrampoline(DBIOAction.scala:284)
        at slick.dbio.DBIOAction$sameThreadExecutionContext$.execute(DBIOAction.scala:297)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:160)
    ...    
    at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:172)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
        at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:142)
        at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:141)
        at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.$anonfun$runInContextInline$1(BasicBackend.scala:172)
        at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:433)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:56)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:93)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:93)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
        at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:662)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
    strangely, I can see some trampolining happening but it still overflows
    Praveen Sampath
    @prvnsmpth_twitter
    @jchapuis I suppose it would help to also share the query you are attempting to run
    Jonas Chapuis
    @jchapuis
    it's not a single query actually, it's a big DBIO composed of many subqueries and futures, all running within a transaction, built using a tagless approach. I suspect the hardcoded if (stackLevel < 100) is not sensitive enough in my case, not sure yet what I'm doing wrong
    Jonas Chapuis
    @jchapuis
    ok we were using a stack size of 228K and I suppose this wasn't enough, it's now ok with a larger stack. Was there some tuning for this trampolining threshold of 100?
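    For reference, a minimal build.sbt sketch of the fix described above: raising the JVM's default thread stack size for a forked run (the 2m value is illustrative, not a recommendation):
    // build.sbt sketch: fork the JVM and raise the default thread stack size.
    // -Xss applies to any thread that does not request an explicit size.
    fork := true
    javaOptions += "-Xss2m"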
    Antoine Doeraene
    @sherpal
    hello, I ran into some weird issue. I'm not even sure it's Slick's fault. I have a SQL Server database, to which I connect using Microsoft's own driver. I have a datetime column that I represent as a LocalDateTime. When I run the code in sbt on my Mac, everything works. When I package my app with sbt-assembly, everything still works. But when I run the packaged fat jar in a Docker container with Alpine Linux, it fails to decode a "string" as a datetime. My first suspect is the driver, as Slick's query generator should work the same (?) on Linux and Mac, but I wanted to check if anyone else has hit this. (Note: I found a workaround by writing queries by hand, reading datetime columns as strings and parsing with a DateTimeFormatter)
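    A minimal sketch of that workaround for plain-SQL queries, assuming the column arrives in a fixed, parseable text form (the pattern below is an assumption):
    import java.time.LocalDateTime
    import java.time.format.DateTimeFormatter
    import slick.jdbc.GetResult

    // Read the datetime column as a string and parse it ourselves,
    // sidestepping the driver's own datetime decoding.
    val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")
    implicit val parseDateTime: GetResult[LocalDateTime] =
      GetResult(r => LocalDateTime.parse(r.nextString(), fmt))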
    Ben Fradet
    @BenFradet
    hello, are there instances of GetResult for java.time types?
    Antoine Doeraene
    @sherpal
    I don't know off the top of my head, but I would say:
    if asking for it implicitly does not compile then probably not; however, since they are built in as column types, it should be trivial to make your own
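    A sketch of such a hand-rolled instance, assuming the driver can hand back a java.sql.Timestamp for the column:
    import java.time.LocalDateTime
    import slick.jdbc.GetResult

    // PositionedResult.nextTimestamp() reads the next column as a
    // java.sql.Timestamp, which converts directly to LocalDateTime.
    implicit val getLocalDateTime: GetResult[LocalDateTime] =
      GetResult(r => r.nextTimestamp().toLocalDateTime)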
    Roman Gorodischer
    @rgorodischer
    Hey folks, does anyone have context on index hint support in Slick? I found this issue from 2013: slick/slick#563. Is there any fundamental technical obstacle to implementing it?
    Matthias Berndt
    @mberndt123
    Hey there, I found a bug in Slick's code generator and fixed it.
    Can somebody take a look at the Pull Request?
    slick/slick#2149
    Matthias Berndt
    @mberndt123
    Oh wait, my fix is broken too, lol
    Give me 10 minutes
    Matthias Berndt
    @mberndt123
    OK, should be correct now. Please review :-)
    Ali Ustek
    @austek

    Hi, I have the following tables

    CREATE TABLE `test`.`cds` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `uuid` char(36) NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `uuid` (`uuid`)
    );
    CREATE TABLE `test`.`sortable_table` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `uuid` char(36)  NOT NULL,
      `cds_uuid` char(36)  NOT NULL,
      `order_index` int(11) NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `sortable_cds_order_index_uk` (`order_index`,`cds_uuid`),
      UNIQUE KEY `uuid` (`uuid`),
      KEY `fk_cds_uuid` (`cds_uuid`),
      CONSTRAINT `fk_cds_uuid` FOREIGN KEY (`cds_uuid`) REFERENCES `cds` (`uuid`)
    );
    
    INSERT INTO `cds` VALUES (1,'45cb10a1-18e0-4632-9d6e-3bdad87b8145');
    
    INSERT INTO `sortable_table` VALUES 
    (1,'61d46c77-72eb-4262-8d1b-2481b131c316','45cb10a1-18e0-4632-9d6e-3bdad87b8145',5),
    (2,'6236bafa-0de1-4496-9314-8bad7de043ae','45cb10a1-18e0-4632-9d6e-3bdad87b8145',4),
    (3,'7796bec5-0f71-497b-a1e8-d39fca37d634','45cb10a1-18e0-4632-9d6e-3bdad87b8145',3),
    (4,'b1ab4ffa-8a97-43cd-9ded-7049f32633e4','45cb10a1-18e0-4632-9d6e-3bdad87b8145',2),
    (5,'108dee09-7c4d-4488-a390-ce6d6a635375','45cb10a1-18e0-4632-9d6e-3bdad87b8145',1);

    I need to shuffle sortable_table.order_index

    so far I have some queries that don't really do the job and are too complex
    Ali Ustek
    @austek
    I have the following in mind
    update `sortable_table` set order_index=-1 where uuid='61d46c77-72eb-4262-8d1b-2481b131c316';
    update `sortable_table` set order_index=-2 where uuid='6236bafa-0de1-4496-9314-8bad7de043ae';
    update `sortable_table` set order_index=-3 where uuid='7796bec5-0f71-497b-a1e8-d39fca37d634';
    update `sortable_table` set order_index=-4 where uuid='b1ab4ffa-8a97-43cd-9ded-7049f32633e4';
    update `sortable_table` set order_index=-5 where uuid='108dee09-7c4d-4488-a390-ce6d6a635375';
    update `sortable_table` set order_index=1 where uuid='61d46c77-72eb-4262-8d1b-2481b131c316'; 
    update `sortable_table` set order_index=2 where uuid='6236bafa-0de1-4496-9314-8bad7de043ae'; 
    update `sortable_table` set order_index=3 where uuid='7796bec5-0f71-497b-a1e8-d39fca37d634'; 
    update `sortable_table` set order_index=4 where uuid='b1ab4ffa-8a97-43cd-9ded-7049f32633e4'; 
    update `sortable_table` set order_index=5 where uuid='108dee09-7c4d-4488-a390-ce6d6a635375';
    but it needs to be done with all the negative updates first and then the positives
    and here is the code
    def updateOrderIndices(
      cdsVisitActivityQuestionOrders: Seq[VisitActivityQuestionOrderClientDivisionScheme]
    ): DBIOAction[Seq[Either[OperationDecision, VisitActivityQuestionOrderClientDivisionScheme]], NoStream, Read with Write] = {
      val negativeAndPositiveUpdates = DBIO.sequence {
        cdsVisitActivityQuestionOrders.map { vaqOrder =>
          val newOrderIndex: OrderIndex = vaqOrder.orderIndex
          for {
            activityQuestions <- clientDivisionSchemeActivityQuestions
              .filter(q => q.uuid === vaqOrder.uuid && q.orderIndex =!= newOrderIndex)
              .result
            resultN <- DBIO.sequence(activityQuestions.map(cdsAq => auditedUpdate(cdsAq, vaqOrder, Negative)))
            resultP <- DBIO.sequence(activityQuestions.map(cdsAq => auditedUpdate(cdsAq, vaqOrder, Positive)))
          } yield resultN -> resultP
        }
      }
      negativeAndPositiveUpdates.map(_.flatMap(_._1)) andThen negativeAndPositiveUpdates.map(_.flatMap(_._2))
    }
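    One caution: a DBIO is only a description, so referencing negativeAndPositiveUpdates twice in that last andThen will execute all of the updates twice when run. A minimal sketch of the two-phase idea with hypothetical names (sortableTable, uuid, orderIndex), flipping every row to a negative placeholder before writing the final values so the unique key never collides:
    import slick.jdbc.MySQLProfile.api._

    def shuffle(newOrder: Seq[(String, Int)]): DBIO[Unit] = {
      def setIndex(uuid: String, idx: Int) =
        sortableTable.filter(_.uuid === uuid).map(_.orderIndex).update(idx)

      val negatives = DBIO.sequence(newOrder.map { case (u, i) => setIndex(u, -i) })
      val positives = DBIO.sequence(newOrder.map { case (u, i) => setIndex(u, i) })

      // Negatives run strictly before positives, all in one transaction.
      DBIO.seq(negatives, positives).transactionally
    }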
    Kemal Durmus
    @mkemaldurmus
    hi friends
    def upsertCpcValues(merchantId: Long, cpcRangeValue: CpcRangeValue): Future[Int] =
      upsertCpcValuesTimer.timeFuture {
        val query = cpcRangeValues
          .filter(_.merchantId === merchantId)
          .take(1)
          .forUpdate
          .result
          .headOption
          .flatMap {
            case Some(cpcFromDb) if cpcFromDb.merchantId != null =>
              cpcRangeValues.update(cpcRangeValue)
            case Some(_) => DBIO.successful(0)
            case None    => cpcRangeValues += cpcRangeValue
          }
        db.run(query)
      }
    PUT merchants/:merchantId/cpcValues/
    Kemal Durmus
    @mkemaldurmus
    I want to do an upsert operation with this endpoint. If merchantId is null it will insert, otherwise cpcValue will update.
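    For what it's worth, Slick has a built-in upsert; a minimal sketch, assuming cpcRangeValues has a primary key the row can be matched on:
    import scala.concurrent.Future

    // insertOrUpdate inserts when no row matches the primary key,
    // and updates the existing row otherwise.
    def upsertCpcValues(cpcRangeValue: CpcRangeValue): Future[Int] =
      db.run(cpcRangeValues.insertOrUpdate(cpcRangeValue))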
    David-hod
    @David-hod
    using Slick with a Postgres DB. I have an Instant field in Scala and 'TIMESTAMP WITH TIME ZONE' in Postgres. It seems like the time is not exactly the same before persisting and after loading from the DB. Any suggestions?
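    One common cause, offered as an assumption to check: Postgres stores timestamps with microsecond precision, while java.time.Instant carries nanoseconds, so a value can come back slightly different after a round trip. A sketch of normalizing before persisting:
    import java.time.Instant
    import java.time.temporal.ChronoUnit

    // Truncate to the precision the column can actually store, so the
    // value survives a write/read round trip unchanged.
    val toPersist: Instant = Instant.now().truncatedTo(ChronoUnit.MICROS)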
    Felipe Bonezi
    @felipebonezi
    hey guys, how can I filter an Option[LocalDate] if it's greater than another LocalDate?
    Ali Ustek
    @austek
    @felipebonezi have you tried
    def filter(date: Option[Instant] = None) =
      tableQuery.filter { item =>
        List(date.map(item.updatedAt > _))
          .collect { case Some(criteria) => criteria }
          .reduceLeftOption(_ && _)
          .getOrElse(true: LiteralColumn[Boolean])
      }.result
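    Slick 3.3+ also ships filterOpt for exactly this optional-criterion pattern, which would reduce the above to something like:
    // filterOpt applies the predicate only when the Option is defined;
    // with None the query is left unfiltered.
    def filter(date: Option[Instant] = None) =
      tableQuery.filterOpt(date)((item, d) => item.updatedAt > d).result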
    SynappsGiteau
    @SynappsGiteau
    Hi everyone, is there a way with Slick to add automatic filtering to all queries?
    Use case: in a multi-tenant application, with an implicit value, it would be nice not to have to worry about this filtering every time we write a query.
    Anyone have an idea?
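    Not built into Slick, but one common workaround is to tag tenant-scoped tables with a trait and route queries through a scoping helper; a sketch with hypothetical names (HasTenant, TenantId), assuming the profile's api._ is in scope:
    trait HasTenant { def tenantId: Rep[Long] }

    final case class TenantId(value: Long)

    // Every read goes through this helper, so the tenant filter is
    // applied once, driven by the implicit in scope.
    def scoped[T <: Table[_] with HasTenant, E](q: Query[T, E, Seq])(
        implicit tenant: TenantId): Query[T, E, Seq] =
      q.filter(_.tenantId === tenant.value)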
    Arsene Tochemey Gandote
    @Tochemey
    Hello folks, how can I get a JdbcBackend database instance from a JdbcProfile?
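    A minimal sketch: the profile's api re-exports the backend's Database factory, so a database handle can be created from config (the "mydb" config path is an assumption):
    import slick.jdbc.JdbcProfile

    class Repo(val profile: JdbcProfile) {
      import profile.api._
      // Database here is the backend's factory, re-exported by the profile.
      val db = Database.forConfig("mydb")
    }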
    AIIIN
    @AIIIN

    Hey guys,
    I am currently using like to filter my rows, but somehow I cannot manage to do it when the column contains Long values. Here is some code:

    val term: String = "some string"
    val searchTerm: String = s"%${term.toLowerCase}%"
    ...
    (...).filter(r =>
      (r._1._1._1.text.toLowerCase like searchTerm) ||  // works
      (s"%${r._1._1._1.Id}%" like searchTerm)           // does not work
    )

    In the second like query the compiler says Cannot resolve symbol like.
    I also tried (r._1._1._1.Id.toString like searchTerm) but that gives me the same error...

    AIIIN
    @AIIIN
    Found it! You need to cast the column to a String:
    (r._1._1._1.Id.asColumnOf[String] like searchTerm)
    amahonee
    @amahonee
    Hi guys, is there a nice way (is it even possible?) to batch a Set[FixedSqlAction[Int, NoStream, Effect.Write]] into a single db.run { ... } call? Thanks in advance
    Heikki Vesalainen
    @hvesalai
    @amahonee sequence them (see DBIOAction.sequence)
    (there is also seq and fold)
    Daniel Robert
    @drobert
    fold has some stack-size limitations though. e.g. if you fold 10,000 || operations together you can end up with a StackOverflowError
    hmm, I suppose I'm thinking more of reduce, per slick/slick#1606
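    A minimal sketch of the DBIO.sequence suggestion, assuming actions: Set[FixedSqlAction[Int, NoStream, Effect.Write]] as in the question:
    // One call to db.run; wrap in .transactionally if the writes
    // must succeed or fail together.
    val batched: DBIO[Seq[Int]] = DBIO.sequence(actions.toSeq)
    db.run(batched.transactionally)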
    amahonee
    @amahonee
    Hey guys, thanks for the help on the last issue I had! I have a question about updating a jsonb column. I have a Set[(Instant, Option[Instant])] describing an ID's active periods. Is it possible to access this set to update a None value when it is deactivated, modifying the column in one db action?
    I want to read the value from the DB, perform actions on it, and reinsert it into the DB with a table.update(). Thanks again in advance
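    A sketch of that read-modify-write as a single action, with hypothetical names (rows, id, periods) and the pure update logic (closePeriod) left to you:
    import java.time.Instant
    import scala.concurrent.ExecutionContext

    def closeActivePeriod(rowId: Long, closedAt: Instant)(
        implicit ec: ExecutionContext): DBIO[Int] =
      (for {
        periods <- rows.filter(_.id === rowId).map(_.periods).result.head
        updated  = closePeriod(periods, closedAt) // pure Set -> Set change
        n       <- rows.filter(_.id === rowId).map(_.periods).update(updated)
      } yield n).transactionally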
    Nikhil Arora
    @nikhilaroratgo_gitlab
    Hello guys, I am using Lagom 1.6.4 and Java with PostgreSQL. I want to read data from the DB in an Akka-streamed way, i.e. create a Source for some select query. Is there a way to get an Akka Streams Source? I found that alpakka-slick is one way and I am currently experimenting with it. It works, but it creates its own Hikari pool and I want to avoid that. I used SlickSession.forConfig("slick-postgres") and then Slick.source(session, query, mapper). Any suggestions on this please? Thanks
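    Since a Slick Database can stream results itself, one way to avoid a second pool is to wrap Slick's publisher directly; a sketch with a hypothetical users table:
    import akka.NotUsed
    import akka.stream.scaladsl.Source

    // db.stream returns a Reactive Streams DatabasePublisher backed by the
    // existing Database (and its pool); Akka Streams wraps it directly.
    val source: Source[User, NotUsed] =
      Source.fromPublisher(db.stream(users.result))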
    oybek
    @oybek

    Hello Guys!
    I couldn't find an answer in the Slick documentation https://books.underscore.io/essential-slick/essential-slick-3.html (perhaps due to my poor search skills)
    My question is, I have the following query construction:

    messages.filter(_.sender === "Dave")

    But in practice folks do

    messages.filter(_.sender === "Dave".bind)

    What is the difference, and what motivates doing so?
    I tinkered a little with Slick in the REPL, and this is all I've got:

    scala> messages.filter(_.sender === "Dave").result.statements.toString
    res7: String = List(select "sender", "content", "id" from "message" where "sender" = 'Dave')
    
    scala> messages.filter(_.sender === "Dave".bind).result.statements.toString
    res8: String = List(select "sender", "content", "id" from "message" where "sender" = ?)

    In the first case we get a query with the value statically injected into it; it is quoted in '' and I think there is no danger of SQL injection.
    In the second scenario we get just a placeholder; in this case, when will the values actually be inserted into the query?
    Why is the second case with .bind preferable?

    Sorry for my terrible English with a lot of syntax errors

    Richard Dallaway
    @d6y
    I think the bind version is, in principle, reusable (for different values) by the database/JDBC layer. That would save the database from making a new query plan for each call. In practice, I don't know if that actually happens. I'm hoping others can confirm/refute that.
    oybek
    @oybek
    How can I reuse it and substitute different values in such a case?
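    On the reuse question, Slick's Compiled queries are the usual mechanism: the query is compiled to SQL once and each call just binds a new parameter value. A sketch:
    // Compiled caches the generated statement; "Dave" and "HAL" reuse it.
    val bySender = Compiled { (sender: Rep[String]) =>
      messages.filter(_.sender === sender)
    }

    db.run(bySender("Dave").result)
    db.run(bySender("HAL").result)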
    micky44
    @Micky44Scoll_twitter
    Does Slick support Scala 3?
    nafg
    @nafg
    @oybek a while ago, @szeiger made a Scala 3 version but I don't think it got merged or published
    Sorry that was for @Micky44Scoll_twitter
    Seth Tisue
    @SethTisue
    @nafg @Micky44Scoll_twitter the Scala 3 PR has been revived at slick/slick#2187 — I'm not sure how close to complete it is
    if you think you might have any spare time to help out, please subscribe to that PR (and/or slick/slick#2177) and keep an eye out for opportunities to help keep it moving
    Seth Tisue
    @SethTisue
    oh btw, anyone interested in the future of Slick might also be interested in slick/slick#2169, about updating the versions of the database connectors
    Seth Tisue
    @SethTisue
    looks like we're on track for a 3.4.0 release, maybe in April?