    Luis Martinez
    @luis3m
    Thanks
    q10
    @q10
    Hello, is there a Slick way to do a conditional upsert? I would like to upsert, but only if a certain condition holds true: insert into "table" (...) values (?...) on conflict (primary_keys) do update set ... where .... Slick's insertOrUpdate can do everything except add that condition.
    Pascal Danek
    @pascaldanek
    Hello, I am curious about Scala 3 support. Does anyone know what we should expect?
    nafg
    @nafg
    In @djspiewak's talk he said it was already ported. I know @szeiger was working on it a long time ago but I didn't know he finished it. If anyone knows more, I'm interested too
    Daniel Spiewak
    @djspiewak
    if I misspoke I apologize; I was under the impression it was fully ported
    nafg
    @nafg
    @djspiewak I have no idea, I was wondering where you got your info from
    Maybe it is fully ported in that fork+branch; do you know if it's published?
    Daniel Spiewak
    @djspiewak
    rewinding several months and eavesdropping on @szeiger in the dotty channel
    I don't know, but I'd assume it hasn't been published
    nafg
    @nafg
    I remember him saying he was working on it
    Daniel Spiewak
    @djspiewak
    I remember him having success at one point in getting the major stuff reencoded in terms of Dotty's metaprogramming
    nafg
    @nafg
    cool
    great talk btw
    Daniel Spiewak
    @djspiewak
    ty!
    Antoine Doeraene
    @sherpal
    I also remember seeing a tweet from @szeiger saying that he successfully ported Slick to Scala 3. It was several months ago already.
    Pascal Danek
    @pascaldanek
    Thanks guys, I guess at some point we will have some sort of official statement
    AIIIN
    @AIIIN

    Hey guys,
    I am new to Scala and Slick. I am using it within the Play framework. I have a question about the following query:

        val query = table.filter(tab => (tab.id === id && tab.otherId === otherId)).map(column => (column.desiredCell))
        val dataBaseIOAction = query.result
        val desiredCell: Future[String] = db.run(dataBaseIOAction) // db.run(dataBaseIOAction) is of type Future[Seq[Option[String]]]

    The last line of code has a type mismatch. Scala tells me that it will result in a Future[Seq[Option[String]]]; however, I know that only one row will satisfy the filter, and map should return the value from the column I want, so since it is only one column I should get the cell value.
    I am clearly missing something, could anyone help?

    Thanks in advance!

    nafg
    @nafg
    @AIIIN query.result.head
    AIIIN
    @AIIIN
    @nafg I see. If I use query.result.head, Scala knows that it is not a Seq, right?
    Roger Rojas
    @roger-rojas
    Hey, new to Scala and Slick too. I'm having an issue where data that is too long to fit into a MySQL TEXT column is throwing an exception. Rather than throw an exception I would like to truncate it (with some additional processing). My question: Is there a way of discovering the max size of the column in Slick? I know MySQL's default TEXT field size is 65535, but I would like to be able to discover that when the app starts up instead of hardcoding the value, hopefully keeping the truncation + processing step independent of the db.
    nafg
    @nafg
    @AIIIN yes, look at its return type
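To see what the return types look like, here is a plain-Scala stand-in for the result of db.run(query.result) (no Slick needed; the value "cell" and the object name are hypothetical). Without .head on the action, you get a Future[Seq[Option[String]]] and can collapse the Seq on the client side:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object HeadDemo {
  // Stand-in for db.run(query.result): a Future of rows, each row an optional column value
  val rows: Future[Seq[Option[String]]] = Future.successful(Seq(Some("cell")))

  // Without .head on the DBIO action, collapse the Seq on the client side:
  def firstCell: Option[String] =
    Await.result(rows.map(_.headOption.flatten), 1.second)
}
```

With query.result.head the flattening happens inside the action instead, and db.run returns a Future of a single value (it fails if there are no rows; headOption is the safe variant).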
    Richard Dallaway
    @d6y
    @roger-rojas I think there are API calls to find information about columns (it's in the JDBC API that Slick uses). Take a look at https://stackoverflow.com/questions/35413956/trying-to-get-the-column-size-of-a-column-using-jdbc-metadata for an example of that. To access the JDBC API from Slick, there's https://scala-slick.org/doc/3.3.3/dbio.html#jdbc-interoperability
    AIIIN
    @AIIIN

    I have another question. It seems I have problems retrieving an optional value called bID inside a join statement:

    val bQuery = tableQuery.map(column => (column.bID, column.info))
    aQuery = aQuery joinLeft bQuery on (_._1._1.aID === _._1.bID)

    I also tried _.bID and _._1.map(_.bID).
    bID is of type Rep[Option[Long]].
    I would be very grateful for any help.

    Richard Dallaway
    @d6y
    @AIIIN what might work for you is _1.aID.? --- the ? method lifts your aID into an Option, the same as bID. In SQL, it'll give you = semantics on the two columns.
    AIIIN
    @AIIIN
    @d6y Thanks for the quick answer. Unfortunately it does not work. I still get a type mismatch error saying that a non-Optional value is expected.
    In your answer you wrote _1.aID.? but you mean _._1._1.aID.?, right?
    AIIIN
    @AIIIN
    I created a new query instead of reusing aQuery as a var, and now the error is gone. I have no clue why. I thought in Scala you are allowed to reassign as long as something is declared as a var. Or is the type fixed after the first declaration?
    nafg
    @nafg
    @AIIIN yes you can't change the type of a var
    If you want its type to be more general then you can write it explicitly instead of letting type inference give a more specific type
    But at the same time, if the type is more general then you can do less with it
    For instance, if you write var x = 1 its type will be inferred to Int and then you can't assign a String to it. But if you did var x: Any = 1 then you could, but then its type is too broad to be useful. You can't + together Anys.
    (In general avoid Any, this is just to illustrate the principle)
    Separate point, usually code is easier to reason about with vals than with vars, since you don't have to check for "what code may have potentially mutated it," you only have to look at where it's defined
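nafg's point can be sketched as compilable Scala (the names here are illustrative; the commented-out lines are the ones that would not compile):

```scala
object VarTypes {
  var x = 1           // type inferred as Int, fixed at declaration
  // x = "hello"      // does not compile: type mismatch (String vs Int)

  var y: Any = 1      // explicitly broader type
  def demo: Any = {
    y = "hello"       // compiles, because Any admits a String...
    // y + 1          // ...but does not compile: Any is too broad to add to
    y
  }
}
```

The same reasoning applies to a reassigned Slick query: the var's type is fixed by its first (inferred) type, so reassigning a query with a different row shape fails unless the var was declared with an explicitly broader type.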
    AIIIN
    @AIIIN
    okay thank you very much :)!
    Rakesh
    @rakeshkr00

    Hi, I am unsuccessfully trying to stream records from a PostgreSQL DB using Slick 3.3.0 and Akka 2.5.22 with the code below. The table has some 100 million records.

    def getRecords(fromDt: Timestamp, toDt: Timestamp) = {
      sql""" SELECT FID, NAME, MODIFIED_DT
             FROM test.dummy_table
             WHERE MODIFIED_DT > '1970-01-01 00:00:00' and MODIFIED_DT < '2020-12-31 23:13:41'
          """.as[(Long,String,Timestamp)]
    }
    
    Source.fromPublisher(getRecords(Timestamp.valueOf("1970-01-01 00:00:00"), Timestamp.valueOf("2020-12-31 23:13:41")))
    .runWith(Sink.foreach(println))

    The app crashes by throwing org.postgresql.util.PSQLException: Ran out of memory retrieving query results. In my opinion, since I am streaming and don't store the result in a variable, I shouldn't get an OOM issue. Any thoughts?
    Below one works fine:
    def getRecords(fromDt: Timestamp, toDt: Timestamp) = {
      sql""" SELECT FID, NAME, MODIFIED_DT
             FROM test.dummy_table
             WHERE MODIFIED_DT > '1970-01-01 00:00:00' and MODIFIED_DT < '2020-12-31 23:13:41'
             limit 100
          """.as[(Long,String,Timestamp)]
    }

    Rakesh
    @rakeshkr00
    Even after applying the below, it doesn't help:
    Source.fromPublisher{getRecords(Timestamp.valueOf("1970-01-01 00:00:00"), Timestamp.valueOf("2020-12-31 23:13:41"))
      .withStatementParameters(
        fetchSize = 100,
        rsType = ResultSetType.ForwardOnly,
        rsConcurrency = ResultSetConcurrency.ReadOnly
      )}
    .runWith(Sink.foreach(println))
    Rakesh
    @rakeshkr00
    It worked after I added transactionally:
    .withStatementParameters(
            fetchSize = 100,
            rsType = ResultSetType.ForwardOnly,
            rsConcurrency = ResultSetConcurrency.ReadOnly
          ).transactionally
    yamini-ban
    @yamini-ban
    Hi, I have been trying to work with a computed field (also known as a generated column) using Slick (in Scala), but all went in vain. When I insert a record in the DB, the generated column need not be specified (which means one column less than the actual number of columns during the insert operation), but when I fetch the record, I want to get the generated column value as well. Any workaround for this case?
    nafg
    @nafg
    @yamini-ban yes use different projections
    A projection is the <> thing; you might be used to only defining the * projection, but you can make your own and use those for inserting or other queries
    Praveen Sampath
    @prvnsmpth_twitter

    Hey everyone! I've been trying to understand joins in Slick. In the following scenario, assuming a one-to-many relation between persons and emails:

        val action =
          for {
            (p, e) <- persons join emails on (_.id === _.personId)
          } yield (p, e)
        val future = db.run(action.result)

    The shape of the query result is Seq[(Person, Email)]. Is it possible in Slick to get a Seq[(Person, Seq[Email])] instead? Basically, I want to avoid creating duplicate Person objects for each matching Email object.
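A common approach is to run the flat join and group on the client side, since the SQL result set is inherently flat. A plain-Scala sketch of that grouping step (Person, Email, and the sample rows are hypothetical stand-ins for the mapped case classes the join returns):

```scala
object GroupJoin {
  // Hypothetical stand-ins for the rows a (persons join emails) query returns
  case class Person(id: Int, name: String)
  case class Email(personId: Int, address: String)

  // What db.run(action.result) might yield: one row per matching email,
  // with the Person value duplicated
  val rows: Seq[(Person, Email)] = Seq(
    (Person(1, "Ann"), Email(1, "ann@a.com")),
    (Person(1, "Ann"), Email(1, "ann@b.com")),
    (Person(2, "Bob"), Email(2, "bob@a.com"))
  )

  // Collapse the duplicated Person values client-side
  val grouped: Map[Person, Seq[Email]] =
    rows.groupBy(_._1).map { case (p, prs) => (p, prs.map(_._2)) }

  // As a Seq[(Person, Seq[Email])], ordered by person id for determinism
  val asSeq: Seq[(Person, Seq[Email])] = grouped.toSeq.sortBy(_._1.id)
}
```

This relies on the case-class equality of Person, and it only deduplicates in memory; the database still sends one row per email.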

    Jonas Chapuis
    @jchapuis
    hello all, I get a StackOverflowError when building a large DBIO, see excerpts of the stack trace below. Any idea what could be the issue? I'm running Slick 3.3.2
    thanks!!
    java.lang.StackOverflowError: null
        at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:166)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
    ...
     slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:172)
        at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:154)
        at slick.dbio.DBIOAction$sameThreadExecutionContext$.runTrampoline(DBIOAction.scala:284)
        at slick.dbio.DBIOAction$sameThreadExecutionContext$.execute(DBIOAction.scala:297)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:160)
    ...    
    at slick.basic.BasicBackend$DatabaseDef.slick$basic$BasicBackend$DatabaseDef$$runInContextInline(BasicBackend.scala:172)
        at slick.basic.BasicBackend$DatabaseDef.runInContextSafe(BasicBackend.scala:148)
        at slick.basic.BasicBackend$DatabaseDef.runInContext(BasicBackend.scala:142)
        at slick.basic.BasicBackend$DatabaseDef.runInContext$(BasicBackend.scala:141)
        at slick.jdbc.JdbcBackend$DatabaseDef.runInContext(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.$anonfun$runInContextInline$1(BasicBackend.scala:172)
        at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:433)
        at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:56)
        at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:93)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.scala:18)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:94)
        at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:93)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
        at kamon.instrumentation.executor.ExecutorInstrumentation$InstrumentedForkJoinPool$TimingRunnable.run(ExecutorInstrumentation.scala:662)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)
        at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)
        at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1016)
        at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1665)
        at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1598)
        at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)
    strangely, I can see some trampolining happening but it still overflows
    Praveen Sampath
    @prvnsmpth_twitter
    @jchapuis I suppose it would help to also share the query you are attempting to run
    Jonas Chapuis
    @jchapuis
    it's not a single query, actually; it's a big composed DBIO with many subqueries and futures, all running within a transaction, composed using a tagless approach. I suspect the hardcoded if (stackLevel < 100) is not sensitive enough in my case; not sure yet what I'm doing wrong
    Jonas Chapuis
    @jchapuis
    ok, we were using a stack size of 228K and I suppose this wasn't enough; it's now ok with a larger stack. Was there some tuning behind this trampolining threshold of 100?
    Antoine Doeraene
    @sherpal
    hello, I ran into a weird issue. I'm not even sure it's Slick's fault. I have a SQL Server database, to which I connect using Microsoft's own driver. I have a datetime column that I represent as a LocalDateTime. When I run the code in sbt on my Mac, everything works. When I package my app with sbt-assembly, everything still works. But when I run the packaged fat jar in a Docker container with Alpine Linux, it fails to decode a "string" as a datetime. My first suspect here is the driver, as Slick's query generator should work the same (?) on Linux and Mac, but I wanted to check if anyone else has had this issue already. (Note: I found a workaround by writing queries by hand, reading datetime columns as strings and parsing with a DateTimeFormatter)
    Ben Fradet
    @BenFradet
    hello, are there instances of GetResult for java.time types?
    Antoine Doeraene
    @sherpal
    I don't know off the top of my head, but I would say:
    If asking for it implicitly does not compile then probably not; however, since they are built in as column types it should be trivial to make your own