The model class and table class are as follows, respectively. I am not able to understand the compilation failure with the insert query. Can someone help?
import slick.jdbc.PostgresProfile.api._

case class Task(id: Int = -1, text: String, project: Option[String], context: Option[String], completed: Boolean)

class Tasks(tag: Tag) extends Table[Task](tag, "tasks") {
  def id: Rep[Int] = column[Int]("id")
  def text: Rep[String] = column[String]("text")
  def project: Rep[Option[String]] = column[Option[String]]("project")
  def context: Rep[Option[String]] = column[Option[String]]("context")
  def isCompleted: Rep[Boolean] = column[Boolean]("isCompleted")
  override def * = (id, text, project, context, isCompleted) <> (Task.tupled, Task.unapply)
}

val tasks = TableQuery[Tasks]
The insert query looks like this now:
val insertQuery = tasks += newTask
A lot of examples are written in a similar way, but I could not figure out the difference here after a couple of hours.
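Not a definitive diagnosis, but if the compiler error points at the * projection, one variant worth trying is an explicitly eta-expanded apply, which avoids relying on the compiler treating the companion object as a function:

// Hypothetical fix: identical to the projection above except for the tupled call.
override def * = (id, text, project, context, isCompleted) <> ((Task.apply _).tupled, Task.unapply)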
export might be able to solve that, not sure
I have the model Task (as quoted in the snippet above) along with a DAO/Repository. Now that we spoke of H2 (only for testing), the question I have is: is it a good idea to test with H2 when production is going to be something else? I'm kind of a beginner with Slick, so I'm not sure if this is advisable. If so, any directions? Thanks
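Testing on H2 while production runs something else does risk dialect differences. If production is Postgres, one common mitigation (a sketch below; the names are illustrative) is to run the in-memory H2 database in its PostgreSQL compatibility mode, and still keep a few tests against a real Postgres instance:

import slick.jdbc.H2Profile.api._

// Sketch: an in-memory H2 database in PostgreSQL compatibility mode.
// DB_CLOSE_DELAY=-1 keeps the database alive between connections.
val testDb = Database.forURL(
  url = "jdbc:h2:mem:test;MODE=PostgreSQL;DB_CLOSE_DELAY=-1",
  driver = "org.h2.Driver")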
A SchemaGenerator is needed. The SchemaGenerator requires a Database and a collection of Table[Model] definitions. There is also this sticky import statement, import slick.jdbc.PostgresProfile.api._, which I want to keep in as few files as possible. Given these constraints, I am looking at how to design these classes.
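For what it's worth, here is a minimal sketch of such a generator, assuming the Tasks table from above (the SchemaGenerator name and its methods are made up for illustration; DDL for several tables combines with ++):

import slick.jdbc.PostgresProfile
import slick.jdbc.PostgresProfile.api._
import scala.concurrent.Future

// Hypothetical SchemaGenerator: takes a Database plus the combined DDL
// of the tables it should manage, and creates or drops the schema.
class SchemaGenerator(db: Database, schema: PostgresProfile.SchemaDescription) {
  def createAll(): Future[Unit] = db.run(schema.create)
  def dropAll(): Future[Unit] = db.run(schema.drop)
}

// Usage: new SchemaGenerator(db, tasks.schema ++ otherTable.schema)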
The question is why you want to keep that import statement in as few files as possible. If it's because your project is the bare bones for building other projects, then you can always store the profile you want to use somewhere in a global object. Something like
object DatabaseGlobals {
  def profile = slick.jdbc.PostgresProfile
}
and then you can import that instead, via my.package.DatabaseGlobals.profile.api._ .
If the reason is for mocks in tests, then I think you can put your table definitions inside a trait that requires an abstract JDBC profile, and feed that profile in by extending the trait (I think that works).
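A sketch of that idea, reusing the Task case class from the first snippet (H2 for tests is an assumption):

import slick.jdbc.JdbcProfile

// Table definitions depend on an abstract profile instead of a hard-coded one.
trait TaskTables {
  protected val profile: JdbcProfile
  import profile.api._

  class Tasks(tag: Tag) extends Table[Task](tag, "tasks") {
    def id: Rep[Int] = column[Int]("id")
    def text: Rep[String] = column[String]("text")
    def project: Rep[Option[String]] = column[Option[String]]("project")
    def context: Rep[Option[String]] = column[Option[String]]("context")
    def isCompleted: Rep[Boolean] = column[Boolean]("isCompleted")
    override def * = (id, text, project, context, isCompleted) <> ((Task.apply _).tupled, Task.unapply)
  }
  lazy val tasks = TableQuery[Tasks]
}

// Production wiring picks Postgres...
object PostgresTables extends TaskTables {
  override protected val profile: JdbcProfile = slick.jdbc.PostgresProfile
}

// ...while tests can plug in H2 instead.
object H2Tables extends TaskTables {
  override protected val profile: JdbcProfile = slick.jdbc.H2Profile
}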
Is there a way to add a custom statement modifier to a query using WrappingQuery (like forUpdate)? Something like:
implicit protected class AddNoWait[E, U, C[_]](val q: Query[E, U, C]) {
  def nowait: Query[E, U, C] = {
    ??? // what goes here to attach the NOWAIT modifier to the node?
    new WrappingQuery[E, U, C](q.toNode, q.shaped)
  }
}
@sherpal - The intention is to use an in-memory database for testing.
Here is something that I am trying to do with Slick.
I have an AUTO_INC column. Instead of the AUTO_INC column doing its part in assigning an incremented value automatically, I want to run:
update table set global_offset = (select max(global_offset) + 1 from table)
Besides the fact that this might have performance implications, the underlying sequence of the AUTO_INC column does not get updated, hence this same value can be inserted in another row as well through an insert statement.
Any help on how I can do this with Slick? Thanks.
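For reference, one way to issue that raw statement through Slick is the plain-SQL API (a sketch: my_table is a placeholder name, the Postgres profile is an assumption, and some databases reject a subquery on the table being updated):

import slick.jdbc.PostgresProfile.api._

// Sketch: the update from the question via the plain-SQL interpolator.
// The caveats above still apply: the AUTO_INC sequence does not advance,
// and a concurrent insert can still reuse the same value.
val bumpOffset: DBIO[Int] =
  sqlu"update my_table set global_offset = (select max(global_offset) + 1 from my_table)"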
Can someone tell me if this config looks correct for MySQL using Slick 3.3.1 on Scala 2.12?
nc {
  mysql {
    profile = "slick.jdbc.MySQLProfile$"
    dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
    properties = {
      driver = "com.mysql.cj.jdbc.Driver"
      databaseName = "analytics_cache_schema"
      serverName = "localhost"
      portNumber = 3306
      user = "analytics_cache"
      password = "qwe90qwe"
      characterEncoding = "utf8"
      useUnicode = true
    }
    numThreads = 10
    keepAliveConnection = true
  }
}
I initialize with Database.forConfig("nc.mysql") but I am getting this error: java.lang.ClassNotFoundException: slick.jdbc.DatabaseUrlDataSource
It only happens on EMR 6.0.0 with Spark 2.4.4, and not in my tests run by sbt.
If I inspect my jar, I can see that slick/jdbc/DatabaseUrlDataSource.class is included.
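Since the class is demonstrably in the jar, this looks like it could be a classloader mismatch between Spark's runtime and the application jar. One thing that may be worth trying (a sketch under that assumption, not a confirmed fix) is the forConfig overload that accepts an explicit classloader:

import com.typesafe.config.ConfigFactory
import slick.jdbc.MySQLProfile.api._

// Sketch: have Slick resolve dataSourceClass with the classloader that
// actually sees the application jar.
val db = Database.forConfig(
  path = "nc.mysql",
  config = ConfigFactory.load(),
  driver = null,
  classLoader = getClass.getClassLoader)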