druhinsgoel
@druhinsgoel
@deusaquilus What is the correct way to create an infix for a dynamic query? For example, if I want to do a FOR UPDATE on a dynamic query:
def queryPerson(idOpt: Option[Long] = None) = {
  ctx.run(
    dynamicQuery[Person]
      .filterOpt(lift(idOpt))((person, id) => quote(person.id == id))
      .forUpdate
  )
}
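For context: the usual way to add FOR UPDATE to a static (quoted) query is an infix extension, as in Quill's own docs. Whether the same extension composes with `dynamicQuery` may be version-dependent, so treat this as a sketch:

```scala
// Infix extension per the documented pattern (Quill 3.x);
// applying it inside a dynamic query is an assumption, not verified.
implicit class ForUpdate[T](q: Query[T]) {
  def forUpdate = quote(infix"$q FOR UPDATE".as[Query[T]])
}

// Static usage:
// ctx.run(query[Person].filter(p => p.id == lift(id)).forUpdate)
```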
David Bouyssié
@david-bouyssie
Hi there. I'm searching for documentation about RunQueryResult.
I'm wondering why it is expressed as a List instead of an Iterable.
That makes me think all the records are gathered eagerly by ctx.run, whereas I would prefer lazy loading/iteration.
Philippe Derome
@phderome
Can I use dynamic queries over JDBC (preferably in stream mode, but that's an orthogonal concern) where the number of columns and their types are well defined at runtime for each row, but not at compile time? I'd know which binding to do for each column on a Sql.ResultSet, and for each row I would get a List[Any] representing that row. Is there support for that?
Philippe Derome
@phderome
Maybe the solution would entail casting to ::text within the SQL, so we'd have a uniform list of strings to return (as I am not interested in the actual type-safe values, just their string representations). So something like .as[Query[List[String]]]
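A `::text` cast can indeed be expressed with Quill's `infix` interpolator; a hedged sketch (the `Row` type and its fields are made up for illustration):

```scala
// Each column is cast to text in SQL, so every row comes back as strings.
case class Row(id: Long, label: String)

val asStrings = quote {
  query[Row].map(r =>
    (infix"${r.id}::text".as[String], infix"${r.label}::text".as[String])
  )
}
// ctx.run(asStrings) yields List[(String, String)]
```

Note this produces a tuple of fixed arity, not a List[String]: a column set known only at runtime cannot be typed by a quoted query, so the List[Any]-per-row idea would likely need to drop down to raw JDBC.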
e-Evolution
@e-Evolution
Hi everybody

I am testing the new implementation with ZIO, but I have the problem that the implicit val organizationSchemaMeta, with which I have mapped the database columns, does not work.

Unlike the other implementations I am using context.stream:

context.stream(query[Organization].filter(_.organizationId == lift(organizationId)))
Any idea how to solve it? Here is my code:

implicit val organizationSchemaMeta: context.SchemaMeta[Organization] =
  schemaMeta[Organization](
    "AD_Org",
    _.tenantId              -> "AD_Client_ID",
    _.organizationId        -> "AD_Org_ID",
    _.isActive              -> "IsActive",
    _.created               -> "Created",
    _.createdBy             -> "CreatedBy",
    _.updated               -> "Updated",
    _.updatedBy             -> "UpdatedBy",
    _.value                 -> "Value",
    _.name                  -> "Name",
    _.description           -> "Description",
    _.replicationStrategyId -> "AD_ReplicationStrategy_ID",
    _.parentOrganizationId  -> "Parent_Org_ID",
    _.uuid                  -> "UUID"
  )
def getById(organizationId: Id): ZStream[OrganizationZioRepository with QConnection, Throwable, Organization] =
  context.stream(query[Organization].filter(_.organizationId == lift(organizationId)))
the error is:
╠─A checked error was not handled.
║ org.postgresql.util.PSQLException: ERROR: column x14.clientid does not exist
║ Position: 8
so it is not mapping the database columns
Oleg Pyzhcov
@oleg-py
that could happen if, e.g., clientid is defined as a field of the Organization case class but you don't have it remapped in this schema meta
e-Evolution
@e-Evolution
ok let me check
e-Evolution
@e-Evolution
@oleg-py thanks a lot, my mapping had an error, I've solved it now
Adriani Furtado
@Adriani277
Hey all, I have come across the Codegen and have a few questions on it.
  1. I am using new SimpleJdbcCodegen(datasource, "my.package"), and my datasource points to a specific schema. However, the generated code pulls in every schema on my server. Am I doing something wrong here?
  2. My next question is, assuming I have solved issue 1: is it possible to filter which tables in a schema get generated? We have a schema of about 20 tables and I would like to generate case classes for only 4 of them.
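For question 2, quill-codegen exposes a filter hook on its generator components; the method name and signature below are an assumption and may differ between versions, so treat this strictly as a sketch:

```scala
// Sketch only, not verified against a specific quill-codegen release.
val wanted = Set("person", "address", "orders", "invoices") // hypothetical table names

val gen = new SimpleJdbcCodegen(datasource, "my.package") {
  override def filter(tc: RawSchema[JdbcTableMeta, JdbcColumnMeta]): Boolean =
    tc.table.tableSchem.contains("my_schema") &&        // keep only the target schema
      wanted.contains(tc.table.tableName.toLowerCase)   // keep only the chosen tables
}
```

Restricting the schema in the same hook would also address question 1, since the JDBC metadata the codegen reads spans the whole server by default.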
Mathieu Prevel
@mprevel

Hi everyone,

I'm trying to use Quill with Monix to get a stream of DB objects where rows are filtered on an enum column.

When not using a stream I am able to get the expected results from the database.

query[MyTable]
  .filter(_.myField == lift(value))

Stream 1:
When filtering the query before using it as a stream, it compiles but crashes at runtime.

stream(
  query[MyTable]
    .filter(_.myField == lift(value))
)
'ERROR: operator does not exist: t_my_enum = character varying
Indice : No operator matches the given name and argument types. You might need to add explicit type casts.
Position : 198'. Aborting connection.
...
Caused by: [CIRCULAR REFERENCE: org.postgresql.util.PSQLException: ERROR: operator does not exist: t_my_enum = character varying
...

Stream 2:
When filtering the stream (not at the query level) it works as expected.

stream(query[MyTable])
  .filter(_.myField == value)

Is it expected for stream 1 to fail?
Should I use the stream 2 version? Does that mean the whole table will be streamed to the application and filtered by the application?
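The "operator does not exist: t_my_enum = character varying" error is the classic Postgres enum-binding issue: the lifted parameter is sent as varchar. Quill's documented fix is a custom encoder that binds the value with java.sql.Types.OTHER so Postgres infers the enum type; a sketch (MyEnum is the question's placeholder type):

```scala
// Sketch following Quill's custom-encoding pattern for Postgres enums,
// for a JDBC-based context (import ctx._ assumed in scope).
implicit val myEnumEncoder: Encoder[MyEnum] =
  encoder(
    java.sql.Types.OTHER,
    (index, value, row) => row.setObject(index, value.toString, java.sql.Types.OTHER)
  )
```

As for stream 2: the `.filter` there is Monix's Observable.filter, not SQL, so yes, the whole table is streamed to the application and filtered client-side.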

dassum
@dassum
How can I use usingTtl for my Cassandra inserts when I am using dynamicQuery?
Akhil Kodali
@akhil

Hi, I am using MysqlMonixJdbcContext and trying to do a batch insert with

run(
  liftQuery(images).map(i => query[Image].insert(i).onConflictIgnore)
)

But I get the following error

exception during macro expansion: 
scala.reflect.macros.TypecheckException: Case class type io.getquill.Insert[Image] has no values
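That error typically comes from using map where Quill's batch API expects foreach on the liftQuery; a sketch of the documented batch-insert shape:

```scala
// Batch insert via liftQuery + foreach (per Quill's batch-query docs):
ctx.run(
  liftQuery(images).foreach(i => query[Image].insert(i).onConflictIgnore)
)
```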
Artūras Šlajus
@arturaz
Is it possible to write extension methods for Quill contexts?
implicit class MyDBContextExts(val ctx: MyDBContext) extends AnyVal {
   def doSomething2(): ctx.IO[Unit, ctx.Effect.Read] = ctx.doSomething()
}
if I try to use doSomething2() from somewhere where I have the ctx, the types don't match; I can't pass the returned IO to ctx.performIO:
[error]  found   : qual$1.ctx.IO[Either[String,app.db.CreateNewUserIOResult[app.verticles.http.Routes.Login.Response.Success]],qual$1.ctx.Effect.Read with qual$1.ctx.Effect.Write with qual$1.ctx.Effect.Transaction]
[error]  required: dbCtx.IO[?, _]
[error] Error occurred in an application involving default arguments.
[error]           )(
[error]            ^
[info] qual$1.ctx.IO[Either[String,app.db.CreateNewUserIOResult[app.verticles.http.Routes.Login.Response.Success]],qual$1.ctx.Effect.Read with qual$1.ctx.Effect.Write with qual$1.ctx.Effect.Transaction] <: dbCtx.IO[?, _]?
[info]   qual$1.ctx.type = dbCtx.type?
[info]   false
[info] false
Oleg Pyzhcov
@oleg-py
what if you try an empty refinement, i.e. val ctx: MyDbContext { }?
or even implicit class MyDBContextExts[T <: MyDbContext](t: T { }) extends AnyVal?
Artūras Šlajus
@arturaz
let me try
What do empty braces here do? t: T { }
Oleg Pyzhcov
@oleg-py
it's a refinement type, like T { type Res = Int ; val foo: Long }, except it's empty
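A self-contained illustration of refinement types (plain Scala, no Quill; all names hypothetical):

```scala
object RefinementDemo {
  trait Ctx { type Res; def zero: Res }
  class IntCtx extends Ctx { type Res = Int; def zero: Res = 0 }

  // A refinement type restates or adds members of the base type:
  val a: Ctx { type Res = Int } = new IntCtx

  // An empty refinement `IntCtx { }` has exactly the members of IntCtx;
  // the braces add nothing, but the ascription is still a refinement type.
  val b: IntCtx { } = new IntCtx

  def sum: Int = a.zero + b.zero
}

println(RefinementDemo.sum) // 0
```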
Artūras Šlajus
@arturaz
  implicit class MyDBContextExts[T <: MyDBContext](val ctx: T { }) extends AnyVal {
    import ctx._

    def createNewUserLoginResponseIO(args: CreateNewUserArgs)(implicit runtimeCfg: RuntimeAppConfig): IO[
      Either[String, CreateNewUserIOResult[Login.Response.Success]],
      Effect.Read with Effect.Write with Effect.Transaction
    ] = {
          val io_ = dbCtx.createNewUserLoginResponseIO(args)
          val result = performIO(io_).getOrThrowLeft
[error] W:\work\unity\sob-review\server\app\src\main\scala\app\commands\DevGenerateUsers.scala:33:34: type mismatch;
[error]  found   : _1.ctx.IO[scala.util.Either[String,app.db.CreateNewUserIOResult[app.verticles.http.Routes.Login.Response.Success]],_1.ctx.Effect.Read with _1.ctx.Effect.Write with _1.ctx.Effect.Transaction] where val _1: app.db.MyDBContextExts.MyDBContextExts[app.db.MyDBContext]
[error]  required: dbCtx.IO[?, _]
[error] Error occurred in an application involving default arguments.
[error]           val result = performIO(io_).getOrThrowLeft
[error]                                  ^
[info] _1.ctx.IO[scala.util.Either[String,app.db.CreateNewUserIOResult[app.verticles.http.Routes.Login.Response.Success]],_1.ctx.Effect.Read with _1.ctx.Effect.Write with _1.ctx.Effect.Transaction] <: dbCtx.IO[?, _]?
[info]   _1.ctx.type = dbCtx.type?
[info]   false
[info] false
Oleg Pyzhcov
@oleg-py
well, I'm out of dark magics that could help :c
Artūras Šlajus
@arturaz
Heh, thanks for trying though :)
I can always design it the other way around, but I'm just very confused why the type system thinks the type suddenly changes. I thought returning a path-dependent type should keep it the same type.
Zett98
@Zett98
Any examples on how to integrate quill-ndbc and cats effect?
Alexis Durieux
@alexisdurieux
Hello! I have a List[(Float, Float)] representing an array of numrange in Postgres. I am having an issue with the decoder: the insertion works fine but the returning is not working. Any examples I could take inspiration from? Thank you and happy Easter!
implicit def arrayNumrangeEncoder[Col <: Seq[(Float, Float)]]: Encoder[Col] =
  arrayRawEncoder[(Float, Float), Col]

implicit def arrayNumrangeDecoder[Col <: Seq[(Float, Float)]](
    implicit bf: CBF[(Float, Float), Col]
): Decoder[Col] =
  arrayRawEncoder[(Float, Float), Col]
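Worth noting: the decoder in the snippet is built with arrayRawEncoder. If that is not just a transcription slip, it would explain why inserts work while decoding the returning value fails. Presumable fix, as a sketch:

```scala
// Same shape as above, but using the decoder builder:
implicit def arrayNumrangeDecoder[Col <: Seq[(Float, Float)]](
    implicit bf: CBF[(Float, Float), Col]
): Decoder[Col] =
  arrayRawDecoder[(Float, Float), Col] // decoder, not encoder
```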
discobaba
@uncleweirdo_twitter
Just getting started here; where do I put the sortBy(d.processDt), filter, etc.?
val q = quote {
  for {
    z <- query[ZipIndex]
    f <- query[GeneFile] if f.geneFileId == z.geneFileId
    d <- query[GeneDir]  if d.geneDirId == f.geneDirId
  } yield z
}
Oleg Pyzhcov
@oleg-py
you can't sort in a for comprehension; the if clauses translate to filters
discobaba
@uncleweirdo_twitter
so what's the correct approach with a bunch of joins?
Oleg Pyzhcov
@oleg-py
you can still do e.g. q.sortBy(_.something) on the quoted bit, you can use explicit .join(...).on(...) syntax instead of a for-comprehension, or you can wrap it all in giant parentheses.
discobaba
@uncleweirdo_twitter
How do you do table aliases where that is required, e.g. sortBy(t.something)?
Oleg Pyzhcov
@oleg-py
quill automatically generates table aliases
you refer to columns via the variables in your for comprehensions/lambdas
discobaba
@uncleweirdo_twitter
I'm not getting how I would distinguish between fields with the same name in ZipIndex and GeneFile in the 3rd line, for instance, or how I would tell it which table processDt comes from in the sortBy.
run(query[ZipIndex]
  .join(query[GeneFile]).on((z, f) => z.geneFileId == f.geneFileId)
  .join(query[GeneDir]).on((f, d) => f.geneDirId == d.geneDirId)
  .filter(uid == lift(uid))
  .sortBy(_.processDt)(Ord.desc)
  .take(1))
Oleg Pyzhcov
@oleg-py
that won't compile because join gives you a Tuple2, and joining twice gives you a tuple inside a tuple. You would be writing something like .filter { case ((z, f), d) => d.uid == lift(uid) }.sortBy { case ((z, f), d) => d.processDt }(Ord.desc)
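Putting that correction together, the whole query would look roughly like this (field names taken from the snippet above; that uid and processDt live on GeneDir is an assumption):

```scala
run(
  query[ZipIndex]
    .join(query[GeneFile]).on((z, f) => z.geneFileId == f.geneFileId)
    .join(query[GeneDir]).on { case ((z, f), d) => f.geneDirId == d.geneDirId }
    .filter { case ((z, f), d) => d.uid == lift(uid) }    // assumed: uid on GeneDir
    .sortBy { case ((z, f), d) => d.processDt }(Ord.desc) // assumed: processDt on GeneDir
    .take(1)
)
```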
discobaba
@uncleweirdo_twitter
ah, that gives me a little traction. Thanks
N.S. Cutler
@godenji
Been quite awhile, but I believe you can just write:
val q = quote {
  (for {
    (zi, gf) <- ZipIndex join GeneFile on(_.geneFileId == _.geneFileId)
         gd  <- GeneDir join(_.geneDirId == gf.geneDirId) if (uid == lift(uid))
  } yield (zi, gf, gd)).sortBy({ case (_,_,d) => d.processDt })(Ord.desc)
}
run(q)
b-gyula
@b-gyula
Is there a plan to implement common table expressions (CTEs)? getquill/quill#580
corentin
@corenti13711539_twitter
Does Quill support filtering on parameters of type Option[A], such that if the value is Some[A] a filter condition is added, but the condition is omitted when it's None? Similar to e.g. filterOpt in Slick.
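Quill's dynamic query API has this under the same name, filterOpt; a sketch mirroring the usage in the first question of this log (Person and name are hypothetical):

```scala
def findPeople(nameOpt: Option[String]) =
  ctx.run(
    dynamicQuery[Person]
      .filterOpt(lift(nameOpt))((p, name) => quote(p.name == name))
  )
// None    => no WHERE clause is generated
// Some(n) => WHERE p.name = ?
```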