    Dick Wall
    Got the connection.createArrayFor working; I had some boneheaded code making the connection null. I'd still love to know why a connection is actually needed, though, versus just a static method on a JDBC driver (but that's a JDBC question, not a scalikejdbc one)
    Kazuhiro Sera
    You can work with any type by using ParameterBinder, although it's a bit verbose. If we could deal with Array values more smoothly, that would be much better. http://scalikejdbc.org/documentation/operations.html#using-parameterbinder
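    The concept behind ParameterBinder, pairing a value with the closure that binds it to the statement, can be sketched in plain Scala (all names below are illustrative stand-ins, not scalikejdbc's API; see the linked page for the real one):

    ```scala
    // Concept sketch of ParameterBinder: a value plus the code that knows how
    // to set it on a statement. StatementLike stands in for
    // java.sql.PreparedStatement so the sketch is self-contained.
    trait StatementLike { def set(idx: Int, v: Any): Unit }

    final case class Binder[A](value: A, bindFn: (StatementLike, Int, A) => Unit) {
      def bind(st: StatementLike, idx: Int): Unit = bindFn(st, idx, value)
    }

    // A recording statement so we can observe what gets bound.
    final class RecordingStatement extends StatementLike {
      var recorded: Map[Int, Any] = Map.empty
      def set(idx: Int, v: Any): Unit = recorded += (idx -> v)
    }

    val arrayBinder = Binder[Seq[Int]](Seq(1, 2, 3), (st, i, v) => st.set(i, v))
    val st = new RecordingStatement
    arrayBinder.bind(st, 1)
    // st.recorded(1) is now Seq(1, 2, 3)
    ```

    The real ParameterBinder plays the same role: the library calls your binding closure with the live PreparedStatement and the parameter index.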
    Sergey Konyshev
    Hi there!
    I'm working with two database schemas. Is there a way to pass the session explicitly? For now my code looks like:
    sql"""select * from $etlTableName where id_package=$packageId""".foreach { rs => defaultDb.localTx { implicit session => ... }}
    As I understand it, this starts a new transaction for each row; I would of course prefer to have it all in a single transaction.
    I'm new to all this, so I'm sorry if this is mostly Scala-related.
    Kazuhiro Sera
    Explicitly passing DBSession will work as you expected.
    defaultDb.localTx { defaultSession =>
      etlDB.readOnly { etlSession =>
        sql"""select * from $etlTableName where id_package=$packageId""".foreach { etlRs =>
          sql"insert something".update.apply()(defaultSession)
        }(etlSession)
      }
    }
    Sergey Konyshev
    Thank you! It works!
    But one thing... while trying to apply the same approach to single.apply(), I got "Unspecified value parameters ...". It looks like I must explicitly define all of them.
    Vladimir Morozov
    Is it possible to mix the DSL with a custom string? For example, I want to add ON CONFLICT DO NOTHING to the end of my batch insert query:
    withSQL {
            c.id -> sqls.?,
            c.sendDate -> sqls.?,
            c.msgType -> sqls.?,
            c.state -> sqls.?,
            c.body -> sqls.?
        }.batch(params: _*).apply()
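    One candidate answer (a sketch, worth verifying against your scalikejdbc version) is the query builders' append escape hatch, e.g. adding .append(sqls"ON CONFLICT DO NOTHING") to the insert builder before generating the statement. Its effect on the final statement is just a raw suffix, illustrated here in plain Scala with placeholder names:

    ```scala
    // Pure-Scala illustration of what appending a raw fragment does to the
    // generated statement text (table/column names are placeholders).
    def appendRawSuffix(generatedSql: String, rawFragment: String): String =
      s"$generatedSql $rawFragment"

    val generated = "insert into messages (id, send_date) values (?, ?)"
    val finalSql  = appendRawSuffix(generated, "ON CONFLICT DO NOTHING")
    // finalSql == "insert into messages (id, send_date) values (?, ?) ON CONFLICT DO NOTHING"
    ```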
    Vasili Novikov
    Hi! Are there any caveats in running the scalikejdbc examples from the readme.md on GitHub? I tried to run them but I constantly get an exception on the sql"create table ***".execute.apply() step.
    I tried to run this in an SBT console, in a main method, and in tests; the result is the same.
    What's worse, I do not see the reason for the error. The stack trace is cut off in both directions, and the reason for the exception is not printed.
    java -version -> "openjdk version 1.8.0_92", scalikejdbc version -> 2.4.2.
    Vasili Novikov
    Never mind, it seems scalikejdbc shows the stack trace in debug mode as a normal operation.
    I still don't know why sql"create table *".execute.apply() returned false, but it's probably just some internals that I don't understand.
    Excuse me, is anyone here?
    Kazuhiro Sera
    Hi, ask questions at https://gitter.im/scalikejdbc/en
    In scalikejdbc, how do I write the SQL select * from table where rownum=1?
    Is anyone online?
    Kazuhiro Sera
    sql"select * from table where rownum=1" is not enough?
    I want to use IN inside sql"..."
    Kazuhiro Sera
    I don't understand what your question means yet.
    How do I do this in Scala?
    select * from table where a in (1,2,3)
    like this
    select ... from
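    For what it's worth, scalikejdbc's sql interpolation accepts a collection directly, e.g. val ids = Seq(1, 2, 3); sql"select * from table where a in ($ids)" (hedged: check the interpolation docs for your version). Under the hood that expands to one placeholder per element, which is easy to illustrate:

    ```scala
    // What the interpolation does with a collection parameter: one ? per
    // element, with the elements passed along as the bound values.
    def expandInClause(values: Seq[Any]): (String, Seq[Any]) =
      (values.map(_ => "?").mkString(", "), values)

    val (placeholders, bound) = expandInClause(Seq(1, 2, 3))
    // placeholders == "?, ?, ?", bound == Seq(1, 2, 3)
    ```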
    is anyone here?
    is there anyone here?
    How do I use SUBSTRING in scalikejdbc?
    select * from exchange_connection_cond_mst
    where SUBSTRING(service_start_date,1,10) <= '2016-12-01' 
    and SUBSTRING(service_end_date,1,10) >= '2016-12-01'
    like this
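    The query above can be written with plain interpolation, with only the date as a bound parameter; the SUBSTRING comparison itself is a lexicographic check on the 10-character date prefix. A self-contained sketch (table and column names taken from the question, everything else illustrative):

    ```scala
    // In scalikejdbc the query could look like (sketch, not compiled here):
    //   sql"""select * from exchange_connection_cond_mst
    //         where SUBSTRING(service_start_date, 1, 10) <= $onDate
    //           and SUBSTRING(service_end_date, 1, 10) >= $onDate"""
    // What SUBSTRING(col, 1, 10) <= '2016-12-01' computes, in plain Scala:
    def datePart(timestamp: String): String = timestamp.take(10)

    def activeOn(start: String, end: String, onDate: String): Boolean =
      datePart(start) <= onDate && datePart(end) >= onDate

    val isActive = activeOn("2016-11-30 09:00:00", "2016-12-02 18:00:00", "2016-12-01")
    ```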
    Yuriy Badalyantc

    Hi everybody. I want to understand why the sql interpolation syntax allows mapping unknown types as Objects. Recently I faced a problem when I wrote SQL like:

    case class MyType(value: Long) extends AnyVal
    val list: List[MyType] = ???
    sql"select * from some_table where id in (${list})"

    The most surprising thing for me was that this code compiled without any warning. Maybe it would be better to use a type class for this binding? Like TypeBinder, but for binding in a prepared statement.
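    The type-class idea can be sketched in plain Scala (illustrative names, not scalikejdbc's API): only types with an instance can be bound, so the wrapper above would fail to compile instead of being silently sent as an Object. AnyVal is dropped here so the sketch runs as a plain script:

    ```scala
    // A toy binding type class: only types with a Bind instance can be used
    // as parameters. MyType gets one by unwrapping to Long; a bare Object
    // has none, so using it would be a compile error, not a silent Object bind.
    case class MyType(value: Long)

    trait Bind[A] { def toSqlValue(a: A): Any }

    object Bind {
      implicit val longBind: Bind[Long] = new Bind[Long] {
        def toSqlValue(a: Long): Any = a
      }
      implicit val myTypeBind: Bind[MyType] = new Bind[MyType] {
        def toSqlValue(a: MyType): Any = a.value
      }
    }

    def bindParam[A](a: A)(implicit b: Bind[A]): Any = b.toSqlValue(a)
    // bindParam(MyType(42L)) == 42L; bindParam(new Object) would not compile.
    ```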

    Hey, can anyone help me with dynamically assigning a column to a table?
    Kazuhiro Sera
    I don't understand your question yet.
    Hey, I'm a bit confused about using scalikejdbc with Play. I have a Play app with compile-time DI. I enabled the plugins for scalikejdbc, but I'm not sure how to actually get the DB in my controllers. I would expect to add something to my controller arguments and then pass it down to the service that will use it. I basically have repositories that expose methods with an implicit session argument; one level up I have a service that uses the repositories but also calls their methods within a transaction boundary. How do I pass a reference to the DB through to those services?
    Kazuhiro Sera
    @alexeygolev answered at scalikejdbc/en
    Paul Cleary
    Is there any difference between bindByName style sql templates and string interpolation? In other words, do we gain any performance improvements of having a PreparedStatement when using bindByName?
    Kazuhiro Sera
    answered at scalikejdbc/en
    Serge Klochkov
    Pardon my question; I think it is more scalikejdbc-async related, but that project doesn't seem to have a separate Gitter channel.
    The problem is: given col BIGINT with a UNIQUE INDEX on it, the query SELECT * FROM table WHERE col = ? performs a sequential scan instead of an index scan using scalikejdbc-async 0.7.1. With scalikejdbc everything works as intended: the appropriate index scan is applied.
    Explain says that with the sync driver the following sample parameter is bound: Index Cond: (col = '-952809911961998644'::bigint)
    However, with the async driver I see a really fishy conversion: Filter: ((col)::numeric = '-7104469523658443411'::numeric)
    What's the appropriate way to work around this situation (I really want to keep using the async driver)?
    Serge Klochkov
    And another question related to scalikejdbc: is it possible to use Binders / PBF while performing a batch insert?

    Hi, is it possible to map the results of a select query into nested case classes?
    MainReportFields is the main case class; I am trying to do something like this:

     val dashboardDetails: List[MainReportFields] =
        SQL("select " +
          "d.dashboard_id, " +
          "d.dashboard_name, " +
          "rpt.cube_id, " +
          "rpt.report_id," +
          "rpt.query_builder_json," +
          "rpt.action," +
          "g.column_id,g.sequence," +
          "sc.selected_column_id," +
          "sc.column_name," +
          "sc.sequence," +
          "fd.formatting_id," +
          "fd.selected_column_id," +
          "fd.format_id," +
          "fd.type " +
          "from ReportingMetabase.dashboard_details d " +
          "inner join ReportingMetabase.report_details rpt " +
          "on d.dashboard_id = rpt.dashboard_id " +
          "inner join ReportingMetabase.group_by_details g " +
          "on rpt.report_id = g.report_id " +
          "inner join ReportingMetabase.selected_columns sc " +
          "on rpt.report_id = sc.report_id " +
          "inner join ReportingMetabase.formatting_details fd " +
          "on sc.selected_column_id = fd.selected_column_id " +
          "inner join ReportingMetabase.format_type_details ftd " +
          "on fd.format_id = ftd.format_id")
          .map(_.get[MainReportFields](1)).list.apply()

    The case classes look like this:

        case class ReportsFields(
                          reportid: Option[Long],
                          reportName: String,
                          selectedColumns: List[SelectedColumns],
                          queryBuilderString: String,
                          havingjson: String,
                          groupByFields: List[GroupByFields],
                          orderByFileds: List[OrderByFileds]
        )
        case class MainReportFields(
                             cubeid: Long,
                             dashboardId: Long,
                             dashboardName: String,
                             reports: List[ReportsFields]
        )
    Paul Cleary
    is it possible to do INSERT INTO VALUES (?, ?, ?), (?, ?, ?) (bulk insert) with scalikejdbc?
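    scalikejdbc's .batch(...) covers the common case: it reuses a single INSERT INTO t VALUES (?, ?, ?) statement, executed once per parameter group. A literal multi-row VALUES list is different SQL text; generating that text is simple string work (a sketch, with placeholder names):

    ```scala
    // Build the text of a multi-row insert: one (?, ?, ...) tuple per row.
    // Flattening the parameter values to match is left to the caller.
    def multiRowInsert(table: String, columns: Int, rows: Int): String = {
      require(columns > 0 && rows > 0, "need at least one column and one row")
      val tuple = Seq.fill(columns)("?").mkString("(", ", ", ")")
      s"INSERT INTO $table VALUES " + Seq.fill(rows)(tuple).mkString(", ")
    }

    val sqlText = multiRowInsert("members", 3, 2)
    // sqlText == "INSERT INTO members VALUES (?, ?, ?), (?, ?, ?)"
    ```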
    Saverio Veltri

    Hi, in my application users are able to provide a generic query, without placeholders.
    This query is executed this way:

    SQL(queryString).map(rs => … )

    Is there a way to parse a generic query constructed as above in order to get a SelectSQLBuilder? My objective is to be able to add WHERE conditions that are provided separately from the query itself.

    Thanks for the support

    Kazuhiro Sera
    At this moment, scalikejdbc doesn't offer such functionality. Also, I don't have a good idea for implementing it with a small effort.
    Saverio Veltri
    Thanks @seratch. We ended up using this library https://github.com/JSQLParser/JSqlParser to parse the input query and add WHERE conditions. Hope it helps.
    Hi all, I am trying to do a delete and a create in a transaction block:
    DB.localTx { implicit session =>
      delete(id)
      create(.., .., ..)
    }
    But I am passing the session from outside. Since localTx creates its own session in the block, it fails during unit testing, because it is not aware of the values inserted from the test case. It fails with the following error:
    delete from price where id = 2
    Lock wait timeout exceeded; try restarting transaction
    com.mysql.cj.jdbc.exceptions.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction
    Kindly share solutions if you are familiar with the issue.
    Kelvin Trambadiya
    Hello guys, I am new to scalikejdbc. Can somebody please help me write the QueryDSL for this query?
    select a.*, b.*, c.*, d.* from cbs_payment_req_line as a
        JOIN cbs_pre_payment_line as b on b.ID=(select max(id) from cbs_pre_payment_line WHERE credit_transaction_id=a.transaction_id) 
            AND b.credit_transaction_type = "P" 
            AND b.status != "R"
        JOIN cbs_payappdet_line as d on d.credit_id = b.credit_id
        JOIN cbs_tranopelog_line as c on c.transaction_id=(select max(transaction_id) from cbs_tranopelog_line where source_transaction_id=a.transaction_id)
        WHERE a.status != "R" 
            AND a.reversal_transaction_id IS NULL 
            AND a.payment_mode = 1
    How can I use enum types for inserting data with scalikejdbc?
    Can anyone share a reference?
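    One common pattern (a sketch, not a scalikejdbc-specific API) is to convert at the boundary: store the enum as its name and parse it back when reading. The same pair of functions is what you would plug into the library's binder/extractor hooks:

    ```scala
    // Store an enumeration as its string name and convert at the read/write
    // boundary. Status and its members are illustrative names.
    object Status extends Enumeration {
      val Active, Closed = Value
    }

    def toDb(s: Status.Value): String = s.toString
    def fromDb(raw: String): Status.Value = Status.withName(raw)

    val roundTripped = fromDb(toDb(Status.Active))
    // roundTripped == Status.Active
    ```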
    Han Ju

    Hi, I have a problem with a long-running DBSession:

    conn.autoCommit { implicit session =>
      dbOps1() // stand-in name for the first operation; runs for several hours
      dbOps2() // fails: the connection has exceeded MySQL's wait_timeout by now
    }

    The long-running job can take several hours, longer than MySQL's wait_timeout, so the connection becomes invalid for the dbOps2() call.

    I also need the two DB ops to be in the same session to be able to test them in integration tests.
    I've set validationQuery in my connection pool settings, but it does not help in this case.
    Any idea what to do here? Is using autoReconnect=true in MySQL's connection string a good idea?