    nafg
    @nafg
    For instance, if you write var x = 1, its type will be inferred as Int, and then you can't assign a String to it. But if you wrote var x: Any = 1 then you could; its type, however, is too broad to be useful: you can't + together Anys.
    (In general avoid Any, this is just to illustrate the principle)
    Separate point: usually code is easier to reason about with vals than with vars, since you don't have to check what code may have potentially mutated a value; you only have to look at where it's defined
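    (A quick REPL-style sketch of the point, with hypothetical values:)

    var x = 1            // type inferred as Int
    // x = "hello"       // does not compile: type mismatch

    var y: Any = 1
    y = "hello"          // compiles, but Any is too broad to be useful:
    // y + y             // does not compile: Any has no +

    val z = 1            // a val: never reassigned, so easier to reason about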
    AIIIN
    @AIIIN
    okay thank you very much :)!
    Rakesh
    @rakeshkr00

    Hi, I am unsuccessfully trying to stream records from a PostgreSQL DB using Slick 3.3.0 and Akka 2.5.22 with the code below. The table has some 100 million records.

    def getRecords(fromDt: Timestamp, toDt: Timestamp) = {
      sql""" SELECT FID, NAME, MODIFIED_DT
             FROM test.dummy_table
             WHERE MODIFIED_DT > $fromDt AND MODIFIED_DT < $toDt
          """.as[(Long, String, Timestamp)]
    }
    
    Source.fromPublisher(db.stream(getRecords(Timestamp.valueOf("1970-01-01 00:00:00"), Timestamp.valueOf("2020-12-31 23:13:41"))))
      .runWith(Sink.foreach(println))

    The app crashes, throwing org.postgresql.util.PSQLException: Ran out of memory retrieving query results. In my opinion, since I am streaming and don't store the result in a variable, I shouldn't get an OOM issue. Any thoughts?
    Below one works fine:
    def getRecords(fromDt: Timestamp, toDt: Timestamp) = {
      sql""" SELECT FID, NAME, MODIFIED_DT
             FROM test.dummy_table
             WHERE MODIFIED_DT > $fromDt AND MODIFIED_DT < $toDt
             LIMIT 100
          """.as[(Long, String, Timestamp)]
    }

    Rakesh
    @rakeshkr00
    Even after applying the below, it doesn't help:
    Source
      .fromPublisher(db.stream(
        getRecords(Timestamp.valueOf("1970-01-01 00:00:00"), Timestamp.valueOf("2020-12-31 23:13:41"))
          .withStatementParameters(
            fetchSize = 100,
            rsType = ResultSetType.ForwardOnly,
            rsConcurrency = ResultSetConcurrency.ReadOnly
          )))
      .runWith(Sink.foreach(println))
    Rakesh
    @rakeshkr00
    It worked after I added transactionally:
    .withStatementParameters(
            fetchSize = 100,
            rsType = ResultSetType.ForwardOnly,
            rsConcurrency = ResultSetConcurrency.ReadOnly
          ).transactionally
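    (For reference, the whole working combination in one place, as a minimal sketch; the config key, actor system, and Akka 2.5-style materializer are assumed:)

    import java.sql.Timestamp
    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.{Sink, Source}
    import slick.jdbc.PostgresProfile.api._
    import slick.jdbc.{ResultSetConcurrency, ResultSetType}

    implicit val system: ActorSystem = ActorSystem("streaming")
    implicit val mat: ActorMaterializer = ActorMaterializer()
    val db = Database.forConfig("mydb")   // hypothetical config key

    def getRecords(fromDt: Timestamp, toDt: Timestamp) =
      sql"""SELECT FID, NAME, MODIFIED_DT
            FROM test.dummy_table
            WHERE MODIFIED_DT > $fromDt AND MODIFIED_DT < $toDt
         """.as[(Long, String, Timestamp)]
        .withStatementParameters(
          rsType = ResultSetType.ForwardOnly,
          rsConcurrency = ResultSetConcurrency.ReadOnly,
          fetchSize = 100                // rows per round trip instead of the whole result set
        )
        .transactionally                 // PostgreSQL only streams with a cursor inside a transaction

    Source
      .fromPublisher(db.stream(getRecords(
        Timestamp.valueOf("1970-01-01 00:00:00"),
        Timestamp.valueOf("2020-12-31 23:13:41"))))
      .runWith(Sink.foreach(println))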
    yamini-ban
    @yamini-ban
    Hi, I have been trying to work with a computed field (also known as a generated column) using Slick (with Scala), but all in vain. When inserting a record, the generated column need not be specified (i.e. the insert has one column fewer than the table), but when fetching a record I want to get the generated column value as well. Any workaround for this case?
    nafg
    @nafg
    @yamini-ban yes, use different projections
    A projection is the <> thing; you might be used to only defining the * projection, but you can define your own and use those for inserts or for other queries
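    (A sketch of the two-projections approach for the generated-column case; the table, User, and full_name here are hypothetical, with full_name computed by the database:)

    import slick.jdbc.PostgresProfile.api._

    case class User(id: Int, first: String, last: String, fullName: String)

    class Users(tag: Tag) extends Table[User](tag, "users") {
      def id       = column[Int]("id", O.PrimaryKey, O.AutoInc)
      def first    = column[String]("first")
      def last     = column[String]("last")
      def fullName = column[String]("full_name")    // generated by the DB, never written by the app

      // Default * projection, used for reads: includes the generated column.
      def * = (id, first, last, fullName) <> (User.tupled, User.unapply)

      // Insert projection: omits the generated column (and the auto-inc id).
      def insertProjection = (first, last)
    }

    val users = TableQuery[Users]

    val insert = users.map(_.insertProjection) += (("Ada", "Lovelace"))
    val select = users.result   // Seq[User], with fullName filled in by the DB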
    Praveen Sampath
    @prvnsmpth_twitter

    Hey everyone! I've been trying to understand joins in slick. In the following scenario, assuming a one-to-many relation between persons and emails:

        val action =
          for {
            (p, e) <- persons join emails on (_.id === _.personId)
          } yield (p, e)
        val future = db.run(action.result)

    The shape of the query result is Seq[(Person, Email)]. Is it possible in slick to get a Seq[(Person, Seq[Email])] instead? Basically, I want to avoid creating duplicate Person objects for each matching Email object.
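    (A common approach, sketched: Slick returns flat join rows, so the grouping is usually done in memory after running the query. This reuses db, action, Person, and Email from above, and assumes an implicit ExecutionContext and scala.concurrent.Future in scope:)

    val grouped: Future[Seq[(Person, Seq[Email])]] =
      db.run(action.result).map { rows =>
        rows
          .groupBy { case (person, _) => person }                    // Map[Person, Seq[(Person, Email)]]
          .map { case (person, pairs) => person -> pairs.map(_._2) } // keep only the emails
          .toSeq
      }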

    Jonas Chapuis
    @jchapuis
    hello all, I have a SO when building a large DBIO; see excerpts of the stack trace below. Any idea what could be the issue? I'm running Slick 3.3.2
    thanks!!
    java.lang.StackOverflowError: null
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:166)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
    ...
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:172)
        at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:154)
        at slick.dbio.DBIOAction$sameThreadExecutionContext$.runTrampoline(DBIOAction.scala:284)
        at slick.dbio.DBIOAction$sameThreadExecutionContext$.execute(DBIOAction.scala:297)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:160)
    ...
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:172)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
        at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:142)
        at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:141)
        at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.$anonfun$runInContextInline$1(BasicBackend.scala:172)
        at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:433)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:56)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:93)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:93)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
        at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:662)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
    strangely, I can see some trampolining happening but it still overflows
    Praveen Sampath
    @prvnsmpth_twitter
    @jchapuis I suppose it would help to also share the query you are attempting to run
    Jonas Chapuis
    @jchapuis
    it's not a single query actually, it's a big composed DBIO with many subqueries and futures, all running within a transaction, composed using a tagless approach. I suspect the hardcoded if (stackLevel < 100) is not sensitive enough in my case; not sure yet what I'm doing wrong
    Jonas Chapuis
    @jchapuis
    ok, we were using a stack size of 228K and I suppose this wasn't enough; it's now fine with a larger stack. Was there some tuning behind this trampolining threshold of 100?
    Antoine Doeraene
    @sherpal
    hello, I ran into a weird issue; I'm not even sure it's Slick's fault. I have a SQL Server database, to which I connect using Microsoft's own driver, and a datetime column that I represent as a LocalDateTime. When I run the code in sbt on my Mac, everything works. When I package my app with sbt-assembly, everything still works. But when I run the packaged fat jar in a Docker container with Alpine Linux, it fails to decode a "string" as a datetime. My first suspect is the driver, as Slick's query generator should work the same (?) on Linux and Mac, but I wanted to check whether anyone else has hit this already. (Note: I found a workaround by writing queries by hand, reading datetime columns as strings and parsing them with a DateTimeFormatter)
    Ben Fradet
    @BenFradet
    hello, are there instances of GetResult for java.time types?
    Antoine Doeraene
    @sherpal
    I don't know off the top of my head, but I would say:
    If asking for it implicitly does not compile, then probably not; however, since they are built in as column types, it should be trivial to make your own
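    (A sketch of rolling your own, going through java.sql.Timestamp, which PositionedResult already knows how to read:)

    import java.time.{Instant, LocalDateTime}
    import slick.jdbc.GetResult

    implicit val getLocalDateTime: GetResult[LocalDateTime] =
      GetResult(r => r.nextTimestamp().toLocalDateTime)

    implicit val getInstant: GetResult[Instant] =
      GetResult(r => r.nextTimestamp().toInstant)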
    Roman Gorodischer
    @rgorodischer
    Hey folks, does anyone have any context regarding index-hint support in Slick? I've found this issue from 2013: slick/slick#563. Is there any fundamental technical problem with implementing it?
    Matthias Berndt
    @mberndt123
    Hey there, I found a bug in Slick's code generator and fixed it.
    Can somebody take a look at the Pull Request?
    slick/slick#2149
    Matthias Berndt
    @mberndt123
    Oh wait, my fix is broken too, lol
    Give me 10 minutes
    Matthias Berndt
    @mberndt123
    OK, should be correct now. Please review :-)
    Ghost
    @ghost~5cc594f1d73408ce4fbee018

    Hi, I have the following tables

    CREATE TABLE `test`.`cds` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `uuid` char(36) NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `uuid` (`uuid`)
    );
    CREATE TABLE `test`.`sortable_table` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `uuid` char(36)  NOT NULL,
      `cds_uuid` char(36)  NOT NULL,
      `order_index` int(11) NOT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `sortable_cds_order_index_uk` (`order_index`,`cds_uuid`),
      UNIQUE KEY `uuid` (`uuid`),
      KEY `fk_cds_uuid` (`cds_uuid`),
      CONSTRAINT `fk_cds_uuid` FOREIGN KEY (`cds_uuid`) REFERENCES `cds` (`uuid`)
    );
    
    INSERT INTO `cds` VALUES (1,'45cb10a1-18e0-4632-9d6e-3bdad87b8145');
    
    INSERT INTO `sortable_table` VALUES 
    (1,'61d46c77-72eb-4262-8d1b-2481b131c316','45cb10a1-18e0-4632-9d6e-3bdad87b8145',5),
    (2,'6236bafa-0de1-4496-9314-8bad7de043ae','45cb10a1-18e0-4632-9d6e-3bdad87b8145',4),
    (3,'7796bec5-0f71-497b-a1e8-d39fca37d634','45cb10a1-18e0-4632-9d6e-3bdad87b8145',3),
    (4,'b1ab4ffa-8a97-43cd-9ded-7049f32633e4','45cb10a1-18e0-4632-9d6e-3bdad87b8145',2),
    (5,'108dee09-7c4d-4488-a390-ce6d6a635375','45cb10a1-18e0-4632-9d6e-3bdad87b8145',1);

    I need to shuffle sortable_table.order_index

    so far I have some complex queries that don't really do the job and are too complex
    Ghost
    @ghost~5cc594f1d73408ce4fbee018
    I have the following in mind
    update `sortable_table` set order_index=-1 where uuid='61d46c77-72eb-4262-8d1b-2481b131c316';
    update `sortable_table` set order_index=-2 where uuid='6236bafa-0de1-4496-9314-8bad7de043ae';
    update `sortable_table` set order_index=-3 where uuid='7796bec5-0f71-497b-a1e8-d39fca37d634';
    update `sortable_table` set order_index=-4 where uuid='b1ab4ffa-8a97-43cd-9ded-7049f32633e4';
    update `sortable_table` set order_index=-5 where uuid='108dee09-7c4d-4488-a390-ce6d6a635375';
    update `sortable_table` set order_index=1 where uuid='61d46c77-72eb-4262-8d1b-2481b131c316'; 
    update `sortable_table` set order_index=2 where uuid='6236bafa-0de1-4496-9314-8bad7de043ae'; 
    update `sortable_table` set order_index=3 where uuid='7796bec5-0f71-497b-a1e8-d39fca37d634'; 
    update `sortable_table` set order_index=4 where uuid='b1ab4ffa-8a97-43cd-9ded-7049f32633e4'; 
    update `sortable_table` set order_index=5 where uuid='108dee09-7c4d-4488-a390-ce6d6a635375';
    but the negative updates need to run first, and then the positive ones
    and here is the code
    def updateOrderIndices(
        cdsVisitActivityQuestionOrders: Seq[VisitActivityQuestionOrderClientDivisionScheme]
    ): DBIOAction[Seq[Either[OperationDecision, VisitActivityQuestionOrderClientDivisionScheme]], NoStream, Read with Write] = {
      // One pass of audited updates in the given direction (Direction is
      // assumed to be the common type of Negative and Positive).
      def updates(direction: Direction) = DBIO.sequence {
        cdsVisitActivityQuestionOrders.map { vaqOrder =>
          for {
            activityQuestions <- clientDivisionSchemeActivityQuestions
              .filter(q => q.uuid === vaqOrder.uuid && q.orderIndex =!= vaqOrder.orderIndex)
              .result
            results <- DBIO.sequence(activityQuestions.map(cdsAq => auditedUpdate(cdsAq, vaqOrder, direction)))
          } yield results
        }
      }
      // All negative updates run first (freeing the unique (order_index, cds_uuid)
      // slots), then all positive ones; each update runs exactly once.
      for {
        _         <- updates(Negative)
        positives <- updates(Positive)
      } yield positives.flatten
    }
    Kemal Durmus
    @mkemaldurmus
    hi friends
    def upsertCpcValues(merchantId: Long, cpcRangeValue: CpcRangeValue): Future[Int] =
      upsertCpcValuesTimer.timeFuture {
        val query = cpcRangeValues
          .filter(_.merchantId === merchantId)
          .take(1)
          .forUpdate
          .result
          .headOption
          .flatMap {
            case Some(cpcFromDb) if cpcFromDb.merchantId != null =>
              cpcRangeValues.update(cpcRangeValue)
            case Some(_) => DBIO.successful(0)
            case None    => cpcRangeValues += cpcRangeValue
          }

        db.run(query)
      }
    PUT merchants/:merchantId/cpcValues/
    Kemal Durmus
    @mkemaldurmus
    I want to do an upsert operation with the endpoint. If merchantId is null it will insert, otherwise cpcValue will update.
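    (For reference, Slick also has a built-in primary-key upsert that may fit here; a minimal sketch, assuming merchantId is the primary key of cpcRangeValues:)

    // insertOrUpdate inserts when no row with this primary key exists,
    // and updates the existing row otherwise.
    def upsertCpcValues(cpcRangeValue: CpcRangeValue): Future[Int] =
      db.run(cpcRangeValues.insertOrUpdate(cpcRangeValue))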
    David-hod
    @David-hod
    using Slick with a Postgres DB. I have an Instant field in Scala and 'TIMESTAMP WITH TIME ZONE' in Postgres. It seems like it is not the exact same time before persisting and after loading from the DB. Any suggestions?
    Felipe Bonezi
    @felipebonezi
    hey guys, how can I filter an Option[LocalDate] if it's greater than another LocalDate?
    Ghost
    @ghost~5cc594f1d73408ce4fbee018
    @felipebonezi have you tried
    def filter(date: Option[Instant] = None) =
      tableQuery.filter { item =>
        List(date.map(item.updatedAt > _))
          .collect { case Some(criteria) => criteria }
          .reduceLeftOption(_ && _)
          .getOrElse(true: LiteralColumn[Boolean])
      }.result
    SynappsGiteau
    @SynappsGiteau
    Hi everyone, is there a way with Slick to add automatic filtering to all queries?
    Use case: in a multi-tenant application, with an implicit value, it would be nice not to have to worry about this filtering every time we write a query.
    Anyone have an idea?
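    (One common approach, sketched; this is not a built-in Slick feature, and all names here are hypothetical. Tenant-scoped tables mix in a trait, and every query is built through a helper that applies the filter from an implicit tenant value:)

    import slick.jdbc.PostgresProfile.api._

    case class TenantId(id: Long)

    // Mixed into every table that carries a tenant column.
    trait TenantScoped { self: Table[_] =>
      def tenantId = column[Long]("tenant_id")
    }

    class Orders(tag: Tag) extends Table[(Long, Long, String)](tag, "orders") with TenantScoped {
      def id   = column[Long]("id", O.PrimaryKey)
      def item = column[String]("item")
      def *    = (id, tenantId, item)
    }

    val orders = TableQuery[Orders]

    // Build queries through this helper so the tenant filter cannot be forgotten.
    def scoped[T <: Table[_] with TenantScoped](q: TableQuery[T])(implicit tenant: TenantId) =
      q.filter(_.tenantId === tenant.id)

    implicit val tenant: TenantId = TenantId(42L)
    val myOrders = scoped(orders).result   // SELECT ... WHERE tenant_id = 42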
    Arsene Tochemey Gandote
    @Tochemey
    Hello folks, how can I get a JdbcBackend database instance from a JdbcProfile?
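    (A sketch of the usual route, going through the profile's api; the config key "mydb" is hypothetical:)

    import slick.jdbc.JdbcProfile

    def makeDb(profile: JdbcProfile) = {
      import profile.api._
      // Database comes from the profile's backend; forConfig reads the
      // connection settings and returns a JdbcBackend#DatabaseDef.
      Database.forConfig("mydb")
    }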
    AIIIN
    @AIIIN

    Hey guys,
    I am currently using like to filter my rows, but I can't manage to do it when the column contains Long values. Here is some code:

    val term: String = "some string"
    val searchTerm: String = s"%${term.toLowerCase}%"
    ...
    (...).filter(r =>
                (r._1._1._1.text.toLowerCase like searchTerm) ||        // works
                (s"%${r._1._1._1.Id}%" like searchTerm)                 // does not work
           )

    In the second like query the compiler says Cannot resolve symbol like.
    I also tried (r._1._1._1.Id.toString like searchTerm) but that gives me the same error...

    AIIIN
    @AIIIN
    Found it! You need to cast the column to a String:
    (r._1._1._1.Id.asColumnOf[String] like searchTerm)
    amahonee
    @amahonee
    Hi guys, is there a nice way (is it even possible?) to batch a Set[FixedSqlAction[Int, NoStream, Effect.Write]] into a single db.run {...} call? Thanks in advance
    Heikki Vesalainen
    @hvesalai
    @amahonee sequence them (see DBIOAction.sequence)
    (there is also seq and fold)
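    (A minimal sketch of the sequencing; db and the actions themselves are assumed:)

    import slick.jdbc.PostgresProfile.api._
    import slick.sql.FixedSqlAction

    val actions: Set[FixedSqlAction[Int, NoStream, Effect.Write]] = ???  // the writes to batch
    val batched: DBIO[Seq[Int]] = DBIO.sequence(actions.toSeq)

    db.run(batched.transactionally)   // one db.run call; drop .transactionally if atomicity isn't needed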
    Daniel Robert
    @drobert
    fold has some stack-size limitations though, e.g. if you fold 10,000 || operations together you can end up with a stack overflow
    hmm, I suppose I'm thinking more of reduce, per slick/slick#1606
    amahonee
    @amahonee
    Hey guys, thanks for the help on the last issue I had! I have a question regarding updating a jsonb column. I have a Set[(Instant, Option[Instant])] describing an ID's active periods. Is it possible to access this set to update a None value when it is deactivated, and modify the column in one DB action?
    I want to read the value from the DB, perform actions on it, and reinsert it into the DB with a table.update(). Thanks again in advance
    Nikhil Arora
    @nikhilaroratgo_gitlab
    Hello guys, I am using Lagom 1.6.4 and Java with PostgreSQL. I want to read data from the DB in an Akka Streams way, i.e. create a Source for some SELECT query. Is there a way to get an Akka Streams Source? I found that alpakka-slick is one way and I am currently experimenting with it. It works, but it creates its own Hikari pool and I want to avoid that. I used SlickSession.forConfig("slick-postgres") and then Slick.source(session, query, mapper). Any suggestions on this please? Thanks
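    (If the concern is the second Hikari pool: Alpakka's SlickSession can also wrap an existing Slick Database rather than creating one from config. A sketch of the Scala API, assuming an Alpakka version that provides SlickSession.forDbAndProfile:)

    import akka.stream.alpakka.slick.scaladsl.{Slick, SlickSession}
    import slick.jdbc.PostgresProfile
    import slick.jdbc.PostgresProfile.api._

    // Reuse the Database (and pool) your application already owns.
    val db = Database.forConfig("slick-postgres")
    implicit val session: SlickSession = SlickSession.forDbAndProfile(db, PostgresProfile)

    // A streaming Source straight from a plain SQL query:
    val source = Slick.source(sql"SELECT ID, NAME FROM users".as[(Long, String)])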
    oybek
    @oybek

    Hello Guys!
    I couldn't find an answer in the Slick documentation https://books.underscore.io/essential-slick/essential-slick-3.html (perhaps due to my poor search skills).
    My question is: I have the following query construction:

    messages.filter(_.sender === "Dave")

    But in practice folks do

    messages.filter(_.sender === "Dave".bind)

    What is the difference, and what is the motivation for doing so?
    I tinkered a little with Slick in the REPL, and this is all I've got:

    scala> messages.filter(_.sender === "Dave").result.statements.toString
    res7: String = List(select "sender", "content", "id" from "message" where "sender" = 'Dave')
    
    scala> messages.filter(_.sender === "Dave".bind).result.statements.toString
    res8: String = List(select "sender", "content", "id" from "message" where "sender" = ?)

    In the first case we get a query with the value statically injected into it; it is quoted, so I think there is no danger of SQL injection.
    In the second scenario we get just a placeholder; in this case, when will the values actually be inserted into the query?
    Why is the second case, with .bind, preferable?

    Sorry for my terrible English with a lot of syntax errors