    Gavin Bisesi
    @Daenyth
    And the generic pattern of that repeats for maybe 4-5 data types from various tables
    Dmitriy
    @dpogretskiy
    @Daenyth then it's something like what I do; it's not too boilerplate-heavy though, check out the code I posted above. That'd be the filter I take from the UI, essentially just a JSON post body. Sorting is a bit more complicated, as I wrote: you pretty much have to pattern match on what you are trying to sort by. I also have a utility like QueryRange[T](from: Option[T], to: Option[T]) that does something like from <= n && n <= to, and some inSet checks for Option[Seq[T]]. Pagination is a combination of limit and offset, obviously, plus count(*) over() to count the total number of rows that satisfy the filter condition.
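The QueryRange[T] idea described above can be sketched in plain Scala. The name and shape come from the message; the in-memory predicate below stands in for the SQL `from <= n && n <= to` expression that real Slick code would build:

```scala
// Sketch of the QueryRange[T] utility mentioned above. In lifted Slick the
// bounds would become SQL predicates; this in-memory check illustrates the
// intended semantics: an absent bound matches everything.
case class QueryRange[T](from: Option[T], to: Option[T]) {
  def contains(n: T)(implicit ord: Ordering[T]): Boolean =
    from.forall(ord.lteq(_, n)) && to.forall(ord.lteq(n, _))
}

val ageRange = QueryRange(from = Some(18), to = Some(65))
println(ageRange.contains(30)) // true
println(ageRange.contains(70)) // false
println(QueryRange[Int](None, None).contains(999)) // open range matches all
```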
    Dmitriy
    @dpogretskiy
    def count(): WindowFunc[Long] =
      com.github.tminglei.slickpg.window
        .WindowFunc[Long](new SqlFunction("count"), Seq(1.bind.toNode))
    
    def withTotalCount[I](x: I): (I, WindowFuncRep[Long]) =
      x -> (count() :: Over)
    
    def extractCount[T](xs: Seq[(T, Long)]): CountedData[T] =
      CountedData(xs.map(_._1), xs.headOption.map(_._2).getOrElse(0L))
    those are for counting, so the Slick query looks something like tableQueryVal.filter(filterObject.filterOn).map(withTotalCount).drop(offset).take(limit).result.map(extractCount). Put some sorting in there and you're done
    case class CountedData[T](data: Seq[T], totalCount: Long) is the CountedData
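The counting trick above relies on every row of the page carrying the same window-function total, so the count can be read off the first row. A self-contained run of extractCount over sample rows shows the behaviour (sample data invented):

```scala
case class CountedData[T](data: Seq[T], totalCount: Long)

// Same extractCount as above: every row carries the window-function total,
// so the count comes from the head row (and 0 for an empty page).
def extractCount[T](xs: Seq[(T, Long)]): CountedData[T] =
  CountedData(xs.map(_._1), xs.headOption.map(_._2).getOrElse(0L))

// A page of 3 rows out of a filtered total of 42:
val page = Seq(("alice", 42L), ("bob", 42L), ("carol", 42L))
println(extractCount(page))                      // CountedData(List(alice, bob, carol),42)
println(extractCount(Seq.empty[(String, Long)])) // CountedData(List(),0)
```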
    eltherion
    @eltherion

    Hi, I'm experiencing strange behaviour while using Slick 3.3 with the latest H2 in tests. I'm performing a simple insert, and for some reason unknown to me LocalDateTime is not converted to a String like 2019-11-26 ..., but to something like aced0005740017323031392d31312d32362031323a33323a32372e373038. Then an attempt to put it into a column of H2 type TIMESTAMP obviously fails.

    The error:

    org.h2.jdbc.JdbcSQLException, with message: Cannot parse "TIMESTAMP" constant "aced0005740017323031392d31312d32362031323a33323a32372e373038"; SQL statement:
    [info]     insert into "table_name" ("code","date_time","subjects")  values (?,?,?) -- (?1, ?2, ?3) [22007-196].

    Have you ever had such an issue?

    Richard Dallaway
    @d6y
    @eltherion that's an odd one - not seen anything like that. I can insert LocalDateTime into H2 OK in a simple main method example. The aced... looks familiar. Maybe a JVM object serialization? Or something internal to JDBC. Not sure where that would be coming from (if that's right).
    What's changed for you? Just H2 upgrade?
    eltherion
    @eltherion
    Actually nothing has changed. I'm writing tests for code that already works with PostgreSQL at runtime. I'm just adding tests, nothing more.
    Richard Dallaway
    @d6y
    @eltherion ok - what I think is happening is you have the Postgres profile imported in your h2 tests. You'll need to use import slick.jdbc.H2Profile.api._ rather than import slick.jdbc.PostgresProfile.api._ for the test cases.
    ShankarShastri
    @shankarshastri
    Hi guys,
    I wanted to achieve this (https://stackoverflow.com/questions/22225324/add-temp-column-for-use-in-single-mysql-query-as-label) in Slick; is it feasible using TableQuery?
    Please help me out.
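The linked StackOverflow trick adds a computed label column via CASE WHEN. Lifted Slick has a Case construct for this; a hedged sketch, with the table, columns, and threshold all invented for illustration:

```scala
import slick.jdbc.MySQLProfile.api._ // assumption: any profile should work

class Items(tag: Tag) extends Table[(Long, String)](tag, "items") {
  def id   = column[Long]("id", O.PrimaryKey)
  def name = column[String]("name")
  def * = (id, name)
}
val items = TableQuery[Items]

// Roughly: SELECT id, name, CASE WHEN id < 100 THEN 'early' ELSE 'late' END
val labelled = items.map { i =>
  (i.id, i.name, Case.If(i.id < 100L).Then("early").Else("late"))
}
```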
    eltherion
    @eltherion

    @eltherion ok - what I think is happening is you have the Postgres profile imported in your h2 tests. You'll need to use import slick.jdbc.H2Profile.api._ rather than import slick.jdbc.PostgresProfile.api._ for the test cases.

    @d6y I've got specifically import slick.jdbc.H2Profile.api._

    Konstantin Simeonov
    @KonstantinSimeonov

    Hi, I'm having the following error with autoincremented primary keys with postgres 12.1 and slick 3.3.2

    2019-11-29 17:25:51.314 EET [45010] DETAIL:  Failing row contains (null, john).
    2019-11-29 17:25:51.314 EET [45010] STATEMENT:  insert into "shoppers" ("john")  values ($1)

    I'm using the following table definitions:

    case class Shopper(name: String, id: Option[Long] = None)
    
    class ShopperDB(tag: Tag) extends Table[Shopper](tag, "shoppers") {
      def id = column[Long]("shopper_id", O.PrimaryKey, O.AutoInc)
      def name = column[String]("shopper_name", O.Unique)
    
      def * = (name, id.?).mapTo[Shopper]
    }
    
    // this fails
    db.run(
        shoppers += Shopper(name)
     )

    I've also tried various other insertions, but all of them yielded the same result:

    shoppers.map(_.name) += name
    // or
    shoppers += (0, name)

    Can you point me in some direction?

    Konstantin Simeonov
    @KonstantinSimeonov
    The generated sql for the table creation is the following:
    create table if not exists "shoppers" ("shopper_name" VARCHAR NOT NULL UNIQUE,"shopper_id" BIGINT NOT NULL PRIMARY KEY)
    Dmitriy
    @dpogretskiy
    shoppers.map(_.name) += name will do
    cause you're trying to insert null as the id. What I usually do in that case is split the id from the actual class, so that the table looks like Table[(Long, Shopper)]
    and you have def shopper = (x,y,z).mapTo[Shopper] and def * = (id, shopper)
    works like a charm
    Konstantin Simeonov
    @KonstantinSimeonov
    thanks for the reply! I've tried your solution and unfortunately I got the same problem (still inserting null)
    updated code:
    case class Shopper(name: String)
    
    class ShopperDB(tag: Tag) extends Table[(Long, Shopper)](tag, "shoppers") {
      def id = column[Long]("shopper_id", O.AutoInc, O.PrimaryKey)
      def name = column[String]("shopper_name", O.Unique)
    
      def shopper = name.mapTo[Shopper]
      def * = (id, shopper)
    }
    
    // insertion
    shoppers.map(_.name) += name
    I feel like I'll just go timestamp + uuid as string or something like that
    Dmitriy
    @dpogretskiy
    oooh, I see, you don't have a default on your id column, that's it
    that's why I'm not a fan of creating the schema from Slick definitions: you want the id column to be of type BIGSERIAL, or to set default nextval('yourtableseq') on it
    Konstantin Simeonov
    @KonstantinSimeonov
    yup, but I went with UUID as string for now
    Richard Dallaway
    @d6y
    @eltherion - thanks. So if you've abstracted over the profile (i.e., not imported postgres profile into your core queries/schema), then yes, it's not an import problem. Is there any code (or simplification) you can post in a repo so we can take a look at it?
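Abstracting over the profile, as mentioned here, usually looks something like the following sketch (trait and table names invented); the shared schema code never imports a concrete profile:

```scala
import slick.jdbc.JdbcProfile

// Schema and query code depend only on the abstract profile...
trait UserSchema {
  val profile: JdbcProfile
  import profile.api._

  class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
    def id   = column[Long]("id", O.PrimaryKey)
    def name = column[String]("name")
    def * = (id, name)
  }
  lazy val users = TableQuery[Users]
}

// ...and each wiring picks the concrete profile: Postgres in production,
// H2 in tests, with no PostgresProfile import in the shared code.
object ProdSchema extends UserSchema { val profile = slick.jdbc.PostgresProfile }
object TestSchema extends UserSchema { val profile = slick.jdbc.H2Profile }
```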
    Richard Dallaway
    @d6y
    @KonstantinSimeonov I think your code should work. Are you sure that the schema you have (for the shopper_id column in the database) is as you think?
    Konstantin Simeonov
    @KonstantinSimeonov
    I think that's exactly the problem - when I look at the generated sql for creating the table with O.AutoInc, it's BIGINT NOT NULL PRIMARY KEY
    and it should be BIGSERIAL
    it's working with MySqlProfile though
    but not with PostgresProfile
    Richard Dallaway
    @d6y
    Something's odd. Here's an example I just ran against PG 11: https://github.com/d6y/pg-insert
    Konstantin Simeonov
    @KonstantinSimeonov
    I'm using PG 12.1
    No idea why it's not working, I'll try the example later :)
    Richard Dallaway
    @d6y

    With PG 12 the code generates:

    Preparing statement: create table "shoppers" ("shopper_name" VARCHAR NOT NULL UNIQUE,"shopper_id" BIGSERIAL NOT NULL PRIMARY KEY)

    (and works)

    Konstantin Simeonov
    @KonstantinSimeonov
    That's great, I probably have something else wrong
    thanks a lot guys!
    Richard Dallaway
    @d6y
    :+1: The only thing to be aware of is that with AutoIncrement columns Slick knows to skip the primary key when generating insert statements (unless you use forceInsert, but I've never had to do that). Good luck... what you were doing looks just fine.
    Cate Nestor
    @cnestor
    Hello! I want to make it less of a pain in the ass to do really basic join queries (SELECT * FROM table_1 JOIN table_2 ON table_1.table_2_id = table_2.id) with generated case classes... does anyone have any suggestions? The two things I'm thinking of trying are: A) try to modify the DSL to create a query that returns a tuple of the generated case classes for each table or, if that fails, B) create views for some common join queries and generate case classes from those views.
    The issue being that I want to return all the fields from each table, not just a subset, and don't want to write out and then deal with a tuple of like 20 fields.
    Cate Nestor
    @cnestor
    Oh, nvm, the tuple thing already works by default! :) Nice.
    Although maybe someone should add that to the query documentation: https://scala-slick.org/doc/3.0.0/queries.html
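The default behaviour described here (a join over mapped tables yielding tuples of their case classes) can be sketched as follows; the tables and case classes are invented for illustration:

```scala
import slick.jdbc.PostgresProfile.api._ // assumption: any profile works

case class Customer(id: Long, name: String)
case class Order(id: Long, customerId: Long)

class Customers(tag: Tag) extends Table[Customer](tag, "customers") {
  def id   = column[Long]("id", O.PrimaryKey)
  def name = column[String]("name")
  def * = (id, name).mapTo[Customer]
}
class Orders(tag: Tag) extends Table[Order](tag, "orders") {
  def id         = column[Long]("id", O.PrimaryKey)
  def customerId = column[Long]("customer_id")
  def * = (id, customerId).mapTo[Order]
}
val customers = TableQuery[Customers]
val orders    = TableQuery[Orders]

// The join's result rows are already tuples of the mapped case classes:
// db.run(q.result) returns all fields of both tables as Seq[(Order, Customer)],
// with no hand-written 20-field tuple projection needed.
val q = orders.join(customers).on(_.customerId === _.id)
```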
    Richard Dallaway
    @d6y

    @cnestor the source for that page is https://github.com/slick/slick/blob/master/doc/src/queries.md and if you could have a go at a PR for what you'd find helpful, I can try to review it or help you in some way.

    If there's something helpful missing from the docs that you've figured out, I'm pretty sure others would appreciate an update. But I understand that people are busy, so no problem if not.

    Michael Steinkogler
    @michih57
    Hi everybody! I have a question about slick code generation. There is an option for the SourceCodeGenerator that generates separate source files for each table (outputToMultipleFiles). Unfortunately this results in code where the entity classes are missing (the <Table>Row case classes). Am I missing something here? Has anyone else used this option successfully?
    Andy Bailey
    @apps4uco

    Hi, I have slick with h2 and the jts on the classpath.

    The table is defined like this

    create table telemetry (
        id int8 primary key,
        pos GEOMETRY(POINT, 4326)
    );

    The jdbc driver returns a jts Point when getObject is called on a result set.

    I would like to use slick like this

    class Telemetry(tag:Tag) extends Table[ (Long,Point) ](tag,"TELEMETRY"){
    
        def id = column[Long]("ID",O.PrimaryKey)
        def pos = column[Point]("POS") 
        def * = (id,pos)
      }
      val telemetry = TableQuery[Telemetry]

    and get

    Error:(39, 28) could not find implicit value for parameter tt: slick.ast.TypedType[org.locationtech.jts.geom.Point]
        def pos = column[Point]("POS") //,O.DBType("Geometry")
    Error:(39, 28) not enough arguments for method column: (implicit tt: slick.ast.TypedType[org.locationtech.jts.geom.Point])slick.lifted.Rep[org.locationtech.jts.geom.Point].
    Unspecified value parameter tt.
        def pos = column[Point]("POS") //,O.DBType("Geometry")

    Then I tried

     implicit def pointMapper = MappedJdbcType.base[Point,Object](_, _.asInstanceOf[Point])

    and get

    Error:(33, 63) could not find implicit value for evidence parameter of type slick.jdbc.H2Profile.BaseColumnType[Object]
      implicit def pointMapper = MappedJdbcType.base[Point,Object](
    Error:(33, 63) not enough arguments for method base: (implicit evidence$1: scala.reflect.ClassTag[org.locationtech.jts.geom.Point], implicit evidence$2: slick.jdbc.H2Profile.BaseColumnType[Object])slick.jdbc.H2Profile.BaseColumnType[org.locationtech.jts.geom.Point].
    Unspecified value parameter evidence$2.
      implicit def pointMapper = MappedJdbcType.base[Point,Object](

    What is the most elegant way to tell slick that the jdbc driver is really returning a Point for this column?

    Richard Dallaway
    @d6y
    @apps4uco that's an interesting problem. I think the pointMapper won't work out, because you need to map to something Slick understands, and I don't think it knows what to do with Object. I'm not sure what the easiest way is to handle this. Possibly have a look at https://github.com/tminglei/slick-pg and see how they have handled extending Postgres for geo support.
    For my own education, I've hacked up something that appears to work (it's not a domain I'm familiar with): https://github.com/d6y/h2-point
    ...but it's not very elegant (e.g., having to be explicit with an O.SqlType), not having any point-based operators (if there are any). Slick-pg is very likely to be the place to look for inspiration.
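Along the same lines, one hedged way to express such a mapping (not necessarily what the linked repo does, and whether H2 accepts raw bytes for a GEOMETRY column may depend on the H2 version) is to round-trip the geometry through well-known binary with the JTS reader/writer:

```scala
import slick.jdbc.H2Profile.api._
import org.locationtech.jts.geom.Point
import org.locationtech.jts.io.{WKBReader, WKBWriter}

// Map Point <-> WKB bytes so Slick has a base column type it understands.
// Assumption: the column binds and reads as Array[Byte]; adjust if the
// driver hands back a Point directly from getObject (as described above).
implicit val pointColumnType: BaseColumnType[Point] =
  MappedColumnType.base[Point, Array[Byte]](
    point => new WKBWriter(2, true).write(point), // 2D output, include SRID
    bytes => new WKBReader().read(bytes).asInstanceOf[Point]
  )

// With this implicit in scope, `column[Point]("POS")` compiles as usual.
```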
    Andy Bailey
    @apps4uco
    @d6y thanks ever so much. That's exactly what I wanted. Would it work to override def sqlTypeName(sym: Option[FieldSymbol]): String to avoid the O.SqlType that you mentioned? For me your solution is fine, as I need to specify the SRID
    Richard Dallaway
    @d6y
    Probably! There must be flexibility in that part of Slick. Let us know how you get on :+1: