    Naveen
    @dexter2305
    Found: I missed import slick.jdbc.PostgresProfile.api._. Is there a way to simplify the inclusion? class Tasks needs it, and so does the class that has the insertQuery. That factor is forcing Tasks & Repository to be in the same file.
    nafg
    @nafg
    Nah you just need that import in every file you do query stuff
    Scala 3 export might be able to solve that, not sure
    but slick doesn't support scala 3 yet
    Antoine Doeraene
    @sherpal
    However you can still abstract away which profile you use (it can be helpful if you want to use the H2Profile in your tests, especially since compatibility between PostgresProfile and the postgres mode for h2 broke)
    nafg
    @naftoligug:matrix.org
    In general that's not a good idea because of inherent differences between databases, especially nowadays that it's so easy to spin up a postgres instance with docker or via testcontainers.org
    Antoine Doeraene
    @sherpal
    Yes, for advanced uses it's usually bad... But spinning up a Docker container is still quite expensive
    nafg
    @naftoligug:matrix.org
    It shouldn't be expensive
    Expensive how?
    Antoine Doeraene
    @sherpal
    I'm only speaking about computational power. You easily get 10-20s of overhead
    Naveen
    @dexter2305
    I am looking at how to design the classes with the domain object Task (as quoted in the snippet above) along with the DAO/Repository. Now that we spoke of H2 (only for testing), the question I have is: is it a good idea to test with H2 when production is going to be something else? Kind of a beginner with Slick - not sure if this is advisable. If so, any directions? Thanks
    Antoine Doeraene
    @sherpal
    As nafg said above, it's easy (in terms of code) these days to spin up a database inside a Docker container with your database of choice (in this case, postgres). The only thing that you have to do is to apply the evolutions/migrations to it (because obviously, you get a fresh one) and put some data in it. How you apply these evolutions will depend on your evolutions framework (on the JVM in scala the two most probable are Flyway or Play, let us know if you need help ;) ). For managing the docker container, as suggested also, you can use https://github.com/testcontainers/testcontainers-scala
    Naveen
    @dexter2305
    Yes, I am spinning up a Docker container for postgres. Since it will be a fresh one, a SchemaGenerator class is needed. The SchemaGenerator requires a Database and a collection of Table[Model]. There is also this sticky import statement import slick.jdbc.PostgresProfile.api._ which I want to keep in as few files as possible. Given these constraints, I am looking at how to design these classes.
    Antoine Doeraene
    @sherpal

    The question is: why do you want to keep that import statement in as few files as possible? If it's because your project is the barebones for building other projects, then you can always store the profile you want to use somewhere in a global object. Something like

    object DatabaseGlobals {
      def profile = slick.jdbc.PostgresProfile
    }

    and then you can import that instead, via my.package.DatabaseGlobals.profile.api._.
    If the reason is for mocks in tests, then I think you can put your table definitions inside a trait which requires an abstract jdbc profile, and feed that profile by extending the trait (I think that works)
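
    The trait-with-abstract-profile idea can be sketched like this (a sketch only: it assumes Slick on the classpath, and the TaskTables trait with its Tasks table and columns are hypothetical names):

    ```scala
    import slick.jdbc.JdbcProfile

    // Table definitions depend only on an abstract profile, not on Postgres directly.
    trait TaskTables {
      val profile: JdbcProfile
      import profile.api._

      class Tasks(tag: Tag) extends Table[(Long, String)](tag, "tasks") {
        def id    = column[Long]("id", O.PrimaryKey, O.AutoInc)
        def title = column[String]("title")
        def * = (id, title)
      }
      lazy val tasks = TableQuery[Tasks]
    }

    // Production picks Postgres; a test variant only has to supply a different profile.
    object PostgresTaskTables extends TaskTables {
      val profile: JdbcProfile = slick.jdbc.PostgresProfile
    }
    ```

    Anything that needs the api._ import can then mix in TaskTables (or import from PostgresTaskTables), and a test variant would just override profile with, say, slick.jdbc.H2Profile.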

    Margus Sipria
    @margussipria
    Is there an easy way to make an extension to Query that will allow adding NOWAIT to select queries for Postgres?
    nafg
    @nafg
    Yeah there's some method to modify the SQL, I don't remember offhand though
    Margus Sipria
    @margussipria
    yeah, I have tried to find out, but it's probably not well documented - I haven't found anything
    Margus Sipria
    @margussipria
    this seems to work
        val action = query.result.headOption
        action.overrideStatements(action.statements.map(_ + " NOWAIT"))
    I'm just worried whether binding will happen correctly in every case. I was wondering if there could be some way to make this work with WrappingQuery (like forUpdate)
      implicit protected class AddNoWait[E, U, C[_]](val q: Query[E, U, C]) {
        def nowait: Query[E, U, C] = {
          ???
          new WrappingQuery[E, U, C](q.toNode, q.shaped)
        }
      }
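    The statement rewrite above can at least be factored into a pure helper, so the string manipulation is testable in isolation; the wiring back into overrideStatements (shown in a comment) is exactly the snippet above. Note that Postgres only accepts NOWAIT on locking reads, so this assumes the query carries FOR UPDATE:

    ```scala
    // Pure helper: append NOWAIT to every generated statement.
    def withNoWait(statements: Seq[String]): Seq[String] =
      statements.map(_ + " NOWAIT")

    // Hypothetical wiring, mirroring the snippet above:
    //   val action = query.forUpdate.result.headOption
    //   action.overrideStatements(withNoWait(action.statements))
    ```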
    nafg
    @naftoligug:matrix.org
    No idea sorry
    Anil Singh
    @asjadoun
    Is there a way to connect to Sybase database ? Any help or example is appreciated.
    Antoine Doeraene
    @sherpal
    There is a jdbc driver, so in principle that means you can connect. The difficulty is that you need to teach Slick the syntax via a profile, unless the syntax is the same as another existing db's
    Naveen
    @dexter2305

    (quoting @sherpal) The question is why you want to keep that import statement in as few files as possible? [...]

    @sherpal - The intention is to use an in-memory database for testing.

    nafg
    @nafg
    I plan to do some maintenance tasks in say half an hour live on https://discord.gg/afDsPzmU, if anyone wants to join
    nafg
    @nafg
    On now
    Debasish Ghosh
    @debasishg

    Here is something that I am trying to do with Slick.

    • I have a column in a table which is NOT a primary key, but I want to have a global ordering on it. Initial thinking was to make it an AUTO_INC column
    • On insert things are fine with the AUTO_INC doing its part in assigning an incremented value automatically
    • Now my use case demands that when a row gets updated I want to change the value of that column to the (max-value of the column + 1), something like update table set global_offset = select max(global_offset)+1 from table

    Besides the fact that this might have performance implications, the underlying sequence of the AUTO_INC column does not get updated. Hence this same value can be inserted in another row as well through an insert statement.

    Any help on how I can do this with Slick? Thanks.

    nafg
    @nafg
    @debasishg what RDBMS?
    I think you have a few different questions. I would suggest first figuring out how to do it in SQL in your RDBMS, then breaking down the different Slick questions that remain
    In general, Slick does not have built in support for update expressions, AFAIK (only literal values)
    Of course you can use the sql string interpolator, or construct DBIOs with raw JDBC interaction (I forgot the exact way to do that offhand)
    Debasish Ghosh
    @debasishg
    @nafg Can I do the following in Slick? I know AutoInc is a better option, but I want a handle on the sequence, as I would like to use it during updates of this table.
    CREATE TABLE state (
      offset integer NOT NULL DEFAULT nextval('state_ordering_seq'),
      ..
    );
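    If the column's default draws from a named sequence like that (Postgres assumed), the update in question could take nextval from the same sequence instead of computing max()+1; the sequence then stays ahead of every value it has handed out, so a later insert can never collide. A sketch, following the names in the snippet, with a hypothetical id key:

    ```sql
    -- Re-number the row from the same sequence the column default uses
    -- ("offset" quoted because it is a reserved word).
    UPDATE state
    SET "offset" = nextval('state_ordering_seq')
    WHERE id = 42;  -- hypothetical primary key
    ```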
    nafg
    @nafg
    @debasishg I'm not sure if O.Default takes an expression or raw SQL. But in general I don't recommend using Slick to generate your DDL
    It's nice that you can write slick table definitions and get some CREATE TABLE statements but then how do you do migrations? IMO it's convenient for getting started but not practical for production code
    Especially if you codegen your table definitions from the database
    If you draw a dependency diagram between the database and your code the arrows shouldn't reveal a cycle
    I use Flyway. Write your DDL in Flyway migrations, and then worry about how your code talks to the database
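    As a sketch of what the Flyway side looks like (the file path follows Flyway's default convention; the DDL is an assumption, reusing the state example from above):

    ```sql
    -- src/main/resources/db/migration/V1__create_state.sql
    CREATE SEQUENCE state_ordering_seq;

    CREATE TABLE state (
      id       serial PRIMARY KEY,
      "offset" integer NOT NULL DEFAULT nextval('state_ordering_seq')
    );
    ```

    Flyway.configure().dataSource(url, user, password).load().migrate() then applies pending migrations in version order, and the Slick table definitions just have to match what the migrations produce.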
    Jeremiah Malina
    @jjmalina

    Can someone tell me if this config looks correct for mysql using Slick 3.3.1 on Scala 2.12?

    nc {
      mysql {
        profile = "slick.jdbc.MySQLProfile$"
        dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
        properties = {
          driver = "com.mysql.cj.jdbc.Driver"
          databaseName = "analytics_cache_schema"
          serverName = "localhost"
          portNumber = 3306
          user = "analytics_cache"
          password = "qwe90qwe"
          characterEncoding = "utf8"
          useUnicode = true
        }
        numThreads = 10
        keepAliveConnection = true
      }
    }

    I initialize with Database.forConfig("nc.mysql") but getting this error: java.lang.ClassNotFoundException: slick.jdbc.DatabaseUrlDataSource
    It only happens in EMR 6.0.0 in Spark 2.4.4, and not in my tests run by sbt
    If I inspect my jar I can see slick/jdbc/DatabaseUrlDataSource.class is included

    nafg
    @naftoligug:matrix.org
    [m]
    Can you post the full output
    Naveen
    @dexter2305
    @jjmalina I used a similar configuration with mysql. I have build.sbt with "mysql" % "mysql-connector-java" % "6.0.6". Can you check if your mysql driver library dependency is defined in your build.sbt?
    nafg
    @naftoligug:matrix.org
    [m]
    He said it works in sbt just not in spark
    Akinmolayan Olushola
    @osleonard
    @jjmalina slick and spark won't work together. If what you are interested in is reading and writing to mysql, you should use the structured streaming APIs, which support any type of data storage
    Andy Czerwonka
    @andyczerwonka

    I have a simple insert that looks like:

    records += Record(c1, c2, c3)

    I'd like to change it to make that insert conditional. I don't want to proceed with the insert if a record exists where all the columns match but c3 is null.

    I'd like to have Slick generate the following:

    insert into mytable (c1, c2, c3)
    select 'one', 'two', 'three'
    where not exists (select 1 from mytable where c1 = 'one' and c2 = 'two' and c3 is null);
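    Slick can generate an INSERT ... SELECT of this shape via forceInsertQuery, which takes a Query instead of literal values. A hedged sketch, with a hypothetical MyTable definition matching the columns above:

    ```scala
    import slick.jdbc.PostgresProfile.api._

    // Hypothetical table for the insert above.
    class MyTable(tag: Tag) extends Table[(String, String, Option[String])](tag, "mytable") {
      def c1 = column[String]("c1")
      def c2 = column[String]("c2")
      def c3 = column[Option[String]]("c3")
      def * = (c1, c2, c3)
    }
    val records = TableQuery[MyTable]

    // Should generate something like:
    //   insert into mytable (c1, c2, c3) select ... where not exists (...)
    val conditionalInsert = records.forceInsertQuery {
      val clash = records
        .filter(r => r.c1 === "one" && r.c2 === "two" && r.c3.isEmpty)
        .exists
      Query(("one", "two", Option("three"))).filterNot(_ => clash)
    }
    ```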
    amahonee
    @amahonee

    Hi everyone, does anyone have experience using Slick with the Squants library? I am trying to add two columns of Rep[Length] together. The thing I'm running into seems to be an implicit string conversion being applied, messing things up. The first value (bow) is lifted and read as Rep[Length]; the second (stern), however, has the anyOptionLift(primitiveShape(stringColumnType)) implicit applied to it. Both bow and stern in the for comprehension are read as Rep[Length], but the yield uses the any2stringadd implicit, causing a String expected error.

    val length: Rep[Option[Length]] = for {
          bow   <- nearbyVessel.distanceToBow
          stern <- nearbyVessel.distanceToStern
        } yield bow + stern

    Any suggestions on how I can solve this by using explicit extension methods or writing my own are greatly appreciated, cheers.
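    One way to sidestep the any2stringadd fallback (an assumption rather than a tested fix, and certainly not the only option) is to keep the columns as plain Double (e.g. meters) inside the query, where Slick's numeric operators apply unambiguously, and only wrap into a squants Length at the boundary. Table and column names here are hypothetical:

    ```scala
    import slick.jdbc.PostgresProfile.api._

    // Hypothetical table: distances stored in meters as Doubles.
    class Vessels(tag: Tag) extends Table[(Double, Double)](tag, "vessels") {
      def distanceToBowM   = column[Double]("distance_to_bow_m")
      def distanceToSternM = column[Double]("distance_to_stern_m")
      def * = (distanceToBowM, distanceToSternM)
    }
    val vessels = TableQuery[Vessels]

    // Numeric + on Rep[Double]: no string conversion can kick in.
    val lengthsM = vessels.map(v => v.distanceToBowM + v.distanceToSternM)
    // After running, wrap back into squants:
    //   db.run(lengthsM.result).map(_.map(squants.space.Meters(_)))
    ```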

    MrFincher
    @MrFincher
    Hi,
    I have a situation where I have a complex query, for parts of which I already have lifted slick queries, the rest is (still) plain sql at the moment.
    In the end I want to use the query's results for an update which, as far as I can tell, is not currently possible with slick without loading it from the db into the memory of my service and sending it back to the db again, correct?
    But I would like to use lifted slick as much as possible.
    Based on this, the two solutions I see are:
    • running the complex query using slick and storing the results in a temporary table to then do the update in plain sql
    • get the sql statement generated from my lifted slick query as a string and use it in a plain sql update statement (via string concatenation)
      The first issue I encountered while trying out the second option is that the generated sql contains generated column names like x1, x2, etc. I managed to work around that by wrapping the query in some plain sql that renames the columns based on their position (e.g. using a with-clause/CTE), but all this gives me the impression that it is bad practice. I would be grateful for some thoughts on this.
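    On the second option: Slick's plain-sql interpolators support literal splicing with #$, which at least removes the manual concatenation; the generated aliases (x1, x2, ...) still leak through, so the renaming concern remains. A sketch with hypothetical table and column names:

    ```scala
    import slick.jdbc.PostgresProfile.api._

    // `ids` stands for some lifted query whose generated SQL we reuse verbatim.
    def markDone(ids: Query[Rep[Long], Long, Seq]): DBIO[Int] = {
      val innerSql = ids.result.statements.head // generated SELECT, aliases included
      // #$ splices the string literally (no bind parameter), so it must be trusted SQL
      sqlu"update tasks set done = true where id in (#$innerSql)"
    }
    ```

    One caveat: this only works when the lifted query has no bind parameters of its own, since those placeholders would not be carried across into the new statement.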
    Jeremiah Malina
    @jjmalina
    So I'm now in spark-shell trying to debug java.lang.ClassNotFoundException: slick.jdbc.DatabaseUrlDataSource
    when I run Database.forConfig("nc.mysql") (see earlier message)
    What's odd is that I can initialize DatabaseUrlDataSource directly in the shell
    scala> import slick.jdbc.DatabaseUrlDataSource
    import slick.jdbc.DatabaseUrlDataSource
    scala> new DatabaseUrlDataSource()
    res7: slick.jdbc.DatabaseUrlDataSource = slick.jdbc.DatabaseUrlDataSource@15ff0f79
    Jeremiah Malina
    @jjmalina
    So what I think is happening is that I have slick-hikaricp 3.3.1 which depends on HikariCP 3.2.0 but in Spark on EMR HikariCP 2.4.12 is installed
    Jeremiah Malina
    @jjmalina
    Disabling connection pooling solved the ClassNotFoundException during database initialization, but now I'm facing a different error
    java.lang.NullPointerException
        at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
        at slick.jdbc.DataSourceJdbcDataSource.createConnection(JdbcDataSource.scala:68)
        at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:494)
        at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
        at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:250)
        at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:249)
        at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
        at slick.basic.BasicBackend$DatabaseDef$$anon$3.run(BasicBackend.scala:275)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Jeremiah Malina
    @jjmalina
    Decided to try using 3.2.3 instead of 3.3.3, because that uses an older version of HikariCP that hopefully won't conflict, but the codegen in my project is now not working. It runs without error but doesn't generate classes for tables. Switching back to 3.3.3 solves that
    Jeremiah Malina
    @jjmalina
    Managed to fix the ClassNotFoundException by using shading in sbt. Now my issue is that when I initialize my database with Database.forURL and try to write to the db, the mysql driver connects using a different database URL and I get an error saying the table doesn't exist. Has anyone experienced something like this?