    فارس
    @serafss2
    Oh I see it, a nested list. column_id expects an id not a List
    Sindhuja
    @Sindu__twitter

    Here is the case class:

        case class GroupByFields(
          columnid: Long,
          seq: Long
        )

        case class ReportsFields(
          selectedColumns: List[SelectedColumns],
          queryBuilderString: String,
          groupByFields: List[GroupByFields],
          orderByFileds: List[OrderByFileds]
        )

    Here is the complete function:

        def insertReportFields(cubeId: Long, DashboardName: String, reports: List[ReportsFields]) = {
          NamedDB('default) localTx { implicit session =>
            val group_by_details = reports.map(rpt => rpt.groupByFields.map(s => Seq(s.columnid, s.seq)))
            println(group_by_details) // this prints -> List(List(List(1, 1), List(1, 2)))
            SQL("insert into group_by_details (column_id,sequence) values (?,?)")
              .batch(group_by_details: _*).apply()
          }
        }
    فارس
    @serafss2
    try batch(group_by_details.head: _*)
    it looks like you have one unnecessary level of nesting
    Also can you edit the last message to wrap the code into a code snippet if you don't mind?
    Sindhuja
    @Sindu__twitter
    thanks, that does seem to work
    فارس
    @serafss2
    Great, what you can do is use a flatMap (I think) in val group_by_details = reports.map(...) instead of using the group_by_details.head
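
    A minimal sketch of that flatMap version (meant for the body of the localTx block in the function above; it produces one Seq per row, so batch no longer needs .head):

        // builds List(List(1, 1), List(1, 2)) -- one inner Seq per row to insert
        val group_by_details = reports.flatMap(rpt => rpt.groupByFields.map(s => Seq(s.columnid, s.seq)))
        SQL("insert into group_by_details (column_id,sequence) values (?,?)")
          .batch(group_by_details: _*).apply()
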
    Sindhuja
    @Sindu__twitter
    ok, will try that out
    Sindhuja
    @Sindu__twitter

    Hi, is it possible to add the results of a select query into nested case classes?
    MainReportFields is the main case class, I am trying to do something like this:

        val dashboardDetails: List[MainReportFields] =
          SQL("select " +
            "d.dashboard_id, " +
            "d.dashboard_name, " +
            "rpt.cube_id, " +
            "rpt.report_id, " +
            "rpt.query_builder_json, " +
            "rpt.action, " +
            "g.column_id, g.sequence, " +
            "sc.selected_column_id, " +
            "sc.column_name, " +
            "sc.sequence, " +
            "fd.formatting_id, " +
            "fd.selected_column_id, " +
            "fd.format_id, " +
            "fd.type " +
            "from ReportingMetabase.dashboard_details d " +
            "inner join ReportingMetabase.report_details rpt " +
            "on d.dashboard_id = rpt.dashboard_id " +
            "inner join ReportingMetabase.group_by_details g " +
            "on rpt.report_id = g.report_id " +
            "inner join ReportingMetabase.selected_columns sc " +
            "on rpt.report_id = sc.report_id " +
            "inner join ReportingMetabase.formatting_details fd " +
            "on sc.selected_column_id = fd.selected_column_id " +
            "inner join ReportingMetabase.format_type_details ftd " +
            "on fd.format_id = ftd.format_id")
            .map(_.get[MainReportFields](1)).list().apply()

    case class looks like this:

    case class ReportsFields(
                      reportid: Option[Long],
                      reportName: String,
                      selectedColumns: List[SelectedColumns],
                      queryBuilderString: String,
                      havingjson: String,
                      groupByFields: List[GroupByFields],
                      orderByFileds: List[OrderByFileds]
                    )
    
    case class MainReportFields(
                         cubeid: Long,
                         dashboardId: Long,
                         dashboardName: String,
                         reports: List[ReportsFields]
                       )


    Paul Cleary
    @pauljamescleary
    Hello, is it possible to execute a bulk insert using INSERT INTO table (a, b, c) VALUES (?, ?, ?) for MySQL with scalikejdbc?
    I turned on logging, and they were all reported as individual / separate inserts
    I am doing sql"INSERT INTO table (a, b, c) VALUES (?, ?, ?)".batch(group: _*).apply()
    Sindhuja
    @Sindu__twitter

    @pauljamescleary , yes it's possible:

        val selectedKey = SQL("insert into selected_columns (report_id,sequence,cube_column_id,function_id) values (?,?,?,?)")
          .batchAndReturnGeneratedKey(insertSelectColumns: _*).apply()

    you can try this and see if selectedKey returns the primary keys of the inserted values

    Paul Cleary
    @pauljamescleary
    @Sindu__twitter I meant the bulk insert INSERT INTO table (a, b, c) VALUES (?, ?, ?), (?, ?, ?), (?, ?, ?), (?, ?, ?)...
    when I tried doing this with batch, I saw individual inserts as opposed to one big insert statement that I was expecting
    فارس
    @serafss2
    Hi, do you guys have best practices for database connections?
    What I do now is pass the (implicit session: DBSession) from the session at the main server level all the way down to the queries (so every function takes the implicit session as a parameter). I don't know how to fix this mess.
    Paul Cleary
    @pauljamescleary
    @Sindu__twitter Found out you need the rewriteBatchedStatements parameter to be set. Also, I was attempting to use .setBlob(new MariaDbBlob), and that was causing issues. Instead, I used .setBytes(bArray) and everything worked well. Thanks!
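
    For reference, a sketch of wiring that flag in (the URL and credentials are placeholders; with the MySQL/MariaDB JDBC driver, rewriteBatchedStatements=true lets the driver rewrite a JDBC batch into multi-row INSERT statements):

        import scalikejdbc._

        // connection pool initialised once at startup; the flag goes on the JDBC URL
        ConnectionPool.singleton(
          "jdbc:mysql://localhost:3306/sampledb?rewriteBatchedStatements=true",
          "user",
          "pass")
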
    فارس
    @serafss2
    The problem with this is that it will start a new database session for every object; I only want to use one session and multiple connections in that session.
    The reason is that when I write integration tests I need to keep multiple queries from multiple data accessors in one transaction, which is not possible if we create a new session for each object.
    Or am I wrong?
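
    A sketch of one common way to keep that manageable (PaymentDao, OrderDao, and the table names are hypothetical): each data accessor takes an implicit DBSession with AutoSession as the default, so production code can call it directly, while a test or service wraps several accessors in a single DB.localTx and every query joins that one transaction:

        import scalikejdbc._

        object PaymentDao {
          def countAll()(implicit session: DBSession = AutoSession): Long =
            sql"select count(1) from payments".map(_.long(1)).single().apply().get
        }

        object OrderDao {
          def countAll()(implicit session: DBSession = AutoSession): Long =
            sql"select count(1) from orders".map(_.long(1)).single().apply().get
        }

        // one transaction shared by both accessors (e.g. in an integration test)
        DB.localTx { implicit session =>
          PaymentDao.countAll()
          OrderDao.countAll()
        }
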
    Kazuhiro Sera
    @seratch
    @pauljamescleary Sorry for my late reply. I haven't checked here for a long time. If you need one big insert statement, you need to build such a query dynamically on your own. For instance:
    val valuesPart: SQLSyntax = sqls.csv(group.map { params => sqls"(${sqls.csv(params:_*)})" }: _*)
    sql"INSERT INTO table (a, b, c) VALUES ${valuesPart}".execute.apply()
    @Sindu__twitter It's possible to bind values from a ResultSet to a nested data structure. See also: http://scalikejdbc.org/documentation/one-to-x.html However, mapping to a one-to-many-to-many structure by issuing a single SQL query can be inefficient in most cases. You can do such mapping manually, but the one-to-x API doesn't support that case.
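
    A minimal one-to-many sketch along the lines of that documentation (the Dashboard/Report classes and the two-table join here are simplified stand-ins for the MainReportFields structure above):

        import scalikejdbc._

        case class Report(id: Long, name: String)
        case class Dashboard(id: Long, name: String, reports: Seq[Report] = Nil)

        val dashboards: List[Dashboard] = DB.readOnly { implicit session =>
          sql"""select d.dashboard_id, d.dashboard_name, r.report_id, r.report_name
                from dashboard_details d
                left join report_details r on d.dashboard_id = r.dashboard_id"""
            .one(rs => Dashboard(rs.long("dashboard_id"), rs.string("dashboard_name")))
            .toMany(rs => rs.longOpt("report_id").map(id => Report(id, rs.string("report_name"))))
            .map((dashboard, reports) => dashboard.copy(reports = reports))
            .list()
            .apply()
        }
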
    Paul Cleary
    @pauljamescleary
    @seratch it actually does work with rewriteBatchedStatements set to true :) Verified with the general log
    Andrii Lytvynenko
    @satyriasizz

    Hi, people. I'm trying to connect ScalikeJDBC with SQLite and getting
    java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and SQLiteConfig.createConnection().
    all the time.

    Is there any way to connect the lib with SQLite?

    Mark Harris
    @SiliconMeeple
    Apologies if this is a FAQ and I've just missed it somewhere. I've got a data set on S3 I'm looking to query from a Lambda. The data's stored in Parquet, so Athena seems like a good choice. Last time I looked, it wasn't possible to use the Athena connector with scalike, because scalike is based entirely on prepared statements, which the connector doesn't support.
    Is that still the case?
    If so, has anyone successfully used https://github.com/zaneli/scalikejdbc-athena or some other way to query Athena at a higher level than hand-rolling a lot of AWS SDK cruft?
    Jem Mawson
    @Synesso
    Hi. I'm trying to do an UPDATE ... RETURNING statement with an AutoSession, but it fails because SQLToResult enforces a readonly txn. Can I circumvent this? https://stackoverflow.com/questions/55213167/update-returning-queries-in-scalikejdbc
    Jem Mawson
    @Synesso

    Another question… I have a method that works in prod:

    DB.localTx { implicit s =>
     …
    }

    But when run in test it does not see the data set up in the fixture, as part of scalikejdbc.specs2.mutable.AutoRollback. That’s because the spec has its own transaction.
    Instead of DB.localTx, what should I use that would join the existing transaction if necessary (spec) or start a new transaction (prod)? I have access to session.tx: Option[Tx], which is None in prod and Some in test.

    Jem Mawson
    @Synesso
    I solved it by matching on the session type. If it’s AutoSession, I know I’m inside Play. Otherwise I’m probably in a spec and need to stay in the same session. This minimises the difference between prod and test code in the prod class. Not perfect, but good enough for me. Open to other suggestions:
    def statement(implicit session: DBSession): Iterator[Payment] = {
      sql"""
        update payments
        set status='submitted', submitted=$now
        where status='pending'
        and scheduled <= $now
        returning $paymentFields
      """.map(from).iterable().apply().toIterator
    }
    
    session match {
      case AutoSession => DB.localTx(statement(_))
      case _           => statement(session)
    }
    Kenneth Lu
    @krlu
    Hi all, I want to create a Scala function that lets you create a table with some input name
    I have something like this: def createTable(name: String): Unit = sql"""create table $name (id serial not null primary key, name varchar(64), created_at timestamp not null)""".execute().apply()
    but if I'm understanding this right, scalike treats $name as a binding for a SQL query, hence this doesn't work
    any suggestions?
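
    Since an interpolated $name is bound as a query parameter, the table name has to be spliced in as SQLSyntax instead. A sketch of that (createUnsafely skips escaping, so the caller must validate name itself):

        import scalikejdbc._

        def createTable(name: String): Unit = {
          // splice the identifier in as raw SQL rather than as a bound parameter
          val table = SQLSyntax.createUnsafely(name)
          DB.autoCommit { implicit session =>
            sql"""create table $table (
                    id serial not null primary key,
                    name varchar(64),
                    created_at timestamp not null)""".execute.apply()
          }
        }
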
    Vladimir Morozov
    @greenhost87
    Hi all, how do I correctly read and write a Seq[T] stored as JSONB in PostgreSQL?
    Jonas Natten
    @jnatten
    This isn't really a scalikejdbc issue, but I'm using json4s to serialize and deserialize data to jsonb. And after upgrading json4s from 3.5.5 to 3.6.x something goes wrong.
    I just use read[LearningPath](jsonString) in my constructor to deserialize and the LearningPath type is defined like this: http://scalikejdbc.org/documentation/one-to-x.html
    stacktrace -> https://hastebin.com/naxadomano.scala
    Any suggestions?
    larstoc
    @larstoc

    Hey, I was wondering how I would call a stored procedure with scalike.

    CREATE OR REPLACE FUNCTION truncate_tables() RETURNS void AS $$
    DECLARE
        statements CURSOR FOR
            SELECT tablename FROM pg_tables
            WHERE schemaname = 'public';
    BEGIN
        FOR stmt IN statements LOOP
            EXECUTE 'TRUNCATE TABLE ' || quote_ident(stmt.tablename) || ' CASCADE;';
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;

    This is the stored procedure I'm using. I was hoping I could do something like this:

    val sql = sqls"select truncate_tables()"
    sql.single.apply()

    Is there a way to do this?

    larstoc
    @larstoc
    No one watching this channel anymore?
    Kenneth Lu
    @krlu
    @larstoc so, I'm not a contributor to scalike, but I think the best way to get your question answered might be to post a github issue:
    amirhac
    @amirhac
    Hello, does anyone use scalikejdbcGen with Presto? When I tried it, I got a java.sql.SQLFeatureNotSupportedException: primary keys not supported exception, since primary keys are not supported in the Presto JDBC library.
    uneewk
    @uneewk
    Hello, also interested in knowing if anyone here has used ScalikeJDBC with AWS Snowflake. I pretty much have it working for Inserts/Deletes but Select statements fail because it appears ScalikeJDBC is using readOnly DB session and attempting to set the readOnly flag on the underlying JDBC driver which is failing in snowflake-jdbc because setting the flag is not supported. Has anyone run into this before or know a workaround?
    uneewk
    @uneewk
    Perhaps let me ask the question in a different way: can I run a select query via ScalikeJDBC query interpolation in a non-readonly way?
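
    One possible (untested) workaround sketch: run the query inside DB.autoCommit or DB.localTx instead of DB.readOnly, so ScalikeJDBC should not try to flip the JDBC read-only flag (sample_table is a placeholder):

        import scalikejdbc._

        val names: List[String] = DB.autoCommit { implicit session =>
          sql"select name from sample_table".map(_.string("name")).list().apply()
        }
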
    Kazuhiro Sera
    @seratch
    @larstoc Thanks for submitting an issue - as I already answered there, it's not supported yet scalikejdbc/scalikejdbc#1062
    @amirhac Hmm, I've never tried to run the command with Presto. If fixing the incompatibility needs only a little effort, it may be possible to address it.
    @uneewk will respond to your GH issue later
    hanchi321
    @hanchi321
    Hello, has anyone read Typesafe Config from an HDFS path, using a different config path for each environment?
    Han Ju
    @darkjh
    Hi. Just found out that when I'm using sql interpolation with java.time.LocalDateTime, scalikejdbc actually changes the time according to the default time zone. Is there a way to deactivate this behavior? Thanks!
    What we want is just to write the LocalDateTime as is into DB, regardless of the timezone on the machine.
    Ilya Kachalsky
    @IlyaHalsky
    Hi everyone, does anyone know where to find the changelog for new versions?