The question is: why do you want to keep that import statement in as few files as possible? If it's because your project is the barebones for building other projects, then you can always store the profile you want to use somewhere in a global object. Something like
object DatabaseGlobals {
def profile = slick.jdbc.PostgresProfile
}
and then you can import that instead, via my.package.DatabaseGlobals.profile.api._.
If the reason is for mocks in tests, then I think you can put your table definitions inside a trait which requires an abstract jdbc profile, and feed that profile by extending the trait (I think that works)
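A rough sketch of that trait idea, assuming a hypothetical users table (the trait, object, and column names are made up for illustration):

import slick.jdbc.JdbcProfile

trait UserTables {
  // the concrete profile is supplied by whoever mixes the trait in
  protected val profile: JdbcProfile
  import profile.api._

  class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def name = column[String]("name")
    def * = (id, name)
  }
  lazy val users = TableQuery[Users]
}

// production code binds the real profile ...
object PostgresTables extends UserTables {
  override protected val profile: JdbcProfile = slick.jdbc.PostgresProfile
}

// ... while tests bind an in-memory one
object H2Tables extends UserTables {
  override protected val profile: JdbcProfile = slick.jdbc.H2Profile
}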
You could look at how WrappingQuery is used (like forUpdate) and add an extension method along these lines:
implicit protected class AddNoWait[E, U, C[_]](val q: Query[E, U, C]) {
  def nowait: Query[E, U, C] = {
    ??? // the node that actually appends NOWAIT still needs to be built here
    new WrappingQuery[E, U, C](q.toNode, q.shaped)
  }
}
@sherpal - Intention is to use an in memory database for testing.
Here is something that I am trying to do with Slick.
Instead of the AUTO_INC column doing its part in assigning an incremented value automatically, I want to run:
update table set global_offset = (select max(global_offset) + 1 from table)
Besides the fact that this might have performance implications, the underlying sequence of the AUTO_INC column does not get updated. Hence this same value can be inserted in another row as well through an insert statement.
Any help on how I can do this with Slick? Thanks.
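For reference, the statement above can be issued as-is through Slick's plain-SQL API; a minimal sketch (my_table is a placeholder name, and the profile import is an assumption, substitute your own):

import slick.jdbc.PostgresProfile.api._ // assumption: swap in your actual profile

val bumpGlobalOffset: DBIO[Int] =
  sqlu"update my_table set global_offset = (select max(global_offset) + 1 from my_table)"

// db.run(bumpGlobalOffset) yields a Future[Int] with the number of rows updated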
Can someone tell me if this config looks correct for mysql using Slick 3.3.1 on Scala 2.12?
nc {
  mysql {
    profile = "slick.jdbc.MySQLProfile$"
    dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
    properties = {
      driver = "com.mysql.cj.jdbc.Driver"
      databaseName = "analytics_cache_schema"
      serverName = "localhost"
      portNumber = 3306
      user = "analytics_cache"
      password = "qwe90qwe"
      characterEncoding = "utf8"
      useUnicode = true
    }
    numThreads = 10
    keepAliveConnection = true
  }
}
I initialize with Database.forConfig("nc.mysql") but I'm getting this error: java.lang.ClassNotFoundException: slick.jdbc.DatabaseUrlDataSource
It only happens on EMR 6.0.0 with Spark 2.4.4, and not in my tests run by sbt.
If I inspect my jar, I can see slick/jdbc/DatabaseUrlDataSource.class is included.
I have a simple insert that looks like:
records += Record(c1, c2, c3)
I'd like to change it to make that insert conditional. I don't want to proceed with the insert if a record already exists where c1 and c2 match but c3 is null.
I'd like to have Slick generate the following:
insert into mytable (c1, c2, c3) values ('one', 'two', 'three')
where not exists (select 1 from mytable where c1 = 'one' and c2 = 'two' and c3 is null);
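For what it's worth, here is a rough, untested sketch of one way to get Slick to emit an insert ... select ... where not exists, using forceInsertQuery; it assumes a records TableQuery with columns c1, c2: Rep[String] and c3: Rep[Option[String]]:

def insertIfMissing(v1: String, v2: String, v3: String) =
  records.map(r => (r.c1, r.c2, r.c3)).forceInsertQuery {
    // skip the insert when a row with matching c1/c2 and a null c3 already exists
    val alreadyThere =
      records.filter(r => r.c1 === v1 && r.c2 === v2 && r.c3.isEmpty).exists
    // the row to insert, as bound literals
    Query((v1.bind, v2.bind, Option(v3).bind)).filterNot(_ => alreadyThere)
  }

// insertIfMissing("one", "two", "three") should produce roughly:
//   insert into mytable (c1, c2, c3)
//   select ?, ?, ? where not exists (select ... where c1 = ? and c2 = ? and c3 is null)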
Hi everyone, does anyone have experience using Slick with the Squants library? I am trying to add two columns of Rep[Length] together, but what I'm running into looks like an implicit string conversion being applied and messing things up. The first value (bow) is lifted and read as Rep[Length]; the second (stern), however, gets the anyOptionLift(primitiveShape(stringColumnType)) implicit added to it. Both bow and stern in the for comprehension are read as Rep[Length], but the yield uses the any2stringadd implicit, causing a "String expected" error.
val length: Rep[Option[Length]] = for {
bow <- nearbyVessel.distanceToBow
stern <- nearbyVessel.distanceToStern
} yield bow + stern
Any suggestions on how I can solve this by using explicit extension methods or writing my own are greatly appreciated, cheers.
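One possible workaround, sketched here without testing against Squants: define an explicit + for Rep[Length] that maps directly to the SQL + operator, so any2stringadd never applies. It assumes an implicit column type for Length is already in scope (you must have one, since you have Rep[Length] columns):

import slick.lifted.SimpleBinaryOperator

// maps to the database's own '+' on the underlying numeric column
val plusLength = SimpleBinaryOperator[Length]("+")

implicit class LengthColumnOps(val self: Rep[Length]) extends AnyVal {
  def +(other: Rep[Length]): Rep[Length] = plusLength(self, other)
}

// bow + stern should now resolve to this extension instead of any2stringadd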
I'm still getting java.lang.ClassNotFoundException: slick.jdbc.DatabaseUrlDataSource from Database.forConfig("nc.mysql") (see earlier message), even though I can use DatabaseUrlDataSource directly in the shell:
scala> import slick.jdbc.DatabaseUrlDataSource
import slick.jdbc.DatabaseUrlDataSource

scala> new DatabaseUrlDataSource()
res7: slick.jdbc.DatabaseUrlDataSource = slick.jdbc.DatabaseUrlDataSource@15ff0f79
I no longer get the ClassNotFoundException during database initialization, but now I'm facing a different error: java.lang.NullPointerException
at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
at slick.jdbc.DataSourceJdbcDataSource.createConnection(JdbcDataSource.scala:68)
at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:494)
at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:250)
at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:249)
at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
at slick.basic.BasicBackend$DatabaseDef$$anon$3.run(BasicBackend.scala:275)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
When I use Database.forURL and try to write to the db, the mysql driver connects using a different database URL and I get an error saying the table doesn't exist. Anyone experience something like this?
Hi, I have a Main program that queries data from the DB using Slick. It constructs JSON from the records it gets from the DB and then publishes them to some Kafka topics. Almost at the end of the program, I am trying to call a stored procedure that updates the records.
My code looks like this:
implicit val system: ActorSystem = ActorSystem()
implicit val mat: ActorMaterializer = ActorMaterializer()
implicit val ec = system.dispatcher
implicit val session: SlickSession = SlickSession.forConfig("my-mysql")
then
for {
record1 <- selectQuery1
record2 <- selectQuery2
// create JSON and publish to kafka, this is done using akka streams to get list of Future[Done]
topicsExecuted: List[Done] <- topicsExecutedF
procedureExecuted: Int <- session.db.run(myProcedure)
}
{
session.close()
system.terminate()
}
procedureExecuted prints some value and confirms that the procedure is executed; however, it does not end the main program.
If I remove the procedure call, the program ends fine. I also converted the procedure call to a JDBC statement, which is synchronous, and that also works fine. But when I try to call the procedure using async code (something that returns a Future), the program does not end somehow.
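Not a diagnosis, but one way to make the shutdown order explicit is to keep the whole pipeline in a single Future, block on it at the end of main, and only close the session and actor system afterwards; a sketch reusing the values from the snippet above:

import scala.concurrent.Await
import scala.concurrent.duration.Duration

val pipeline = for {
  record1 <- selectQuery1
  record2 <- selectQuery2
  topicsExecuted <- topicsExecutedF
  procedureExecuted <- session.db.run(myProcedure)
} yield procedureExecuted

try Await.result(pipeline, Duration.Inf)
finally {
  session.close()    // closes the Slick database and its thread pool
  system.terminate()
}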
These are my library specifications:
Manifest-Version: 1.0
Implementation-Title: akka-stream-alpakka-slick
Automatic-Module-Name: akka.stream.alpakka.slick
Implementation-Version: 1.0-M1
Specification-Vendor: Lightbend Inc.
Specification-Title: akka-stream-alpakka-slick
Implementation-Vendor-Id: com.lightbend.akka
Specification-Version: 1.0-M1
Implementation-URL: https://github.com/akka/alpakka
Implementation-Vendor: Lightbend Inc.
Thanks in advance for any help
[E] Adaptation of argument list by inserting () is deprecated: this is unlikely to be what you want.
[E] signature: SQLInterpolation.sql[P](param: P)(implicit pconv: scala.slick.jdbc.SetParameter[P]): scala.slick.jdbc.SQLInterpolationResult[P]
[E] given arguments: <none>
[E] after adaptation: SQLInterpolation.sql((): Unit)
[E] L1006: sql"""select now()""".as[DateTime].first
Hello Folks,
I am facing a problem while adding timestamps. I am using Slick's SQL interpolation.
Original SQL statement, which works:
(date_trunc('seconds', ('2021-06-24T12:30Z' - timestamp '2021-06-24T12:30Z') / 300) * 300 + timestamp '2021-06-24T12:30Z')
Slick is failing at
((date_trunc('seconds', (bucket - ${start.floor(FiveMinutes)}) / ${interval.duration.toSeconds}) * ${interval.duration.toSeconds} + ${start.floor(FiveMinutes)}))
I am getting: invalid input syntax for type interval: "2021-06-24 12:30:00+00".
Any help?
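If it helps, a common workaround when PostgreSQL guesses the wrong type for a bound parameter is to cast it explicitly inside the interpolation; an untested sketch of the failing fragment with casts added (start, interval, and FiveMinutes are the values from the message above):

((date_trunc('seconds', (bucket - CAST(${start.floor(FiveMinutes)} AS timestamp)) / ${interval.duration.toSeconds}) * ${interval.duration.toSeconds} + CAST(${start.floor(FiveMinutes)} AS timestamp)))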
Hi everyone, could someone please tell me how I can produce where 'str literal' = 'str literal' with Slick's filter method? I've tried using
.filter(_ => SimpleLiteral[String]("str literal") === SimpleLiteral[String]("str literal"))
but it errors with Unknown column 'str literal' in 'where clause'.
In case I'm trying to solve my underlying problem the wrong way, the context is that I'd like to "label" my queries so that it's easier to trace them from AWS Performance Insights back to their original source code. I don't think Slick lets me add comments directly, and I can't rewrite all my code using plain SQL.
I think I solved my immediate issue: instead of "str literal" I needed to use "'str literal'", with the single quotes inside.
That said, if anyone has an opinion on how I could accomplish my real goal in a better, less hacky way, I would appreciate it. Thank you!
You could use result.statements and query.overrideStatements to append a comment to the end of the SQL.
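A rough sketch of that combination, assuming Slick 3 and some existing query (users here is a hypothetical TableQuery):

val action = users.filter(_.id === 1L).result
// take the SQL Slick generated and append a trailing comment to each statement
val labelled = action.overrideStatements(action.statements.map(_ + " -- label: find-user-by-id"))
db.run(labelled)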