Alexander Ioffe
@deusaquilus
let me double-check if it's needed
Li Haoyi
@lihaoyi-databricks
seems to work without it
Alexander Ioffe
@deusaquilus
Yup
all good
Li Haoyi
@lihaoyi-databricks
so this is the state of the art
@ ctx.run {
      query[db.ResultName]
        .rightJoin(infix"SELECT UNNEST(${lift(Seq("foo","test-shard-local-database"))})".as[io.getquill.Query[Unnest]])
        .on(_.text == _.unnest)
        .map{case (rnOpt, n) => rnOpt.map(_.id)}
    }
cmd12.sc:1: SELECT x1.id FROM result_name x1 RIGHT JOIN (SELECT UNNEST(?)) AS x2 ON x1.text = x2.unnest
val res12 = ctx.run {
                    ^
res12: List[Option[Long]] = List(None, Some(2674443566L))
Alexander Ioffe
@deusaquilus
lol
Li Haoyi
@lihaoyi-databricks
basically the only thing I was missing is the UNNEST thing to turn the array into a table
Alexander Ioffe
@deusaquilus
yeah
I don't think I could adapt liftQuery to this kind of functionality
The problem is that postgres expects a query coming out of unnest, not a scalar
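For context, `unnest` in plain SQL is a set-returning function: Postgres accepts it as a row source in `FROM`/`JOIN`, not as a scalar expression. A minimal illustration (array values are just examples):

```sql
-- unnest as a row source: this is what Postgres expects
SELECT t.val
FROM unnest(ARRAY['foo', 'test-shard-local-database']) AS t(val);
-- produces two rows: 'foo' and 'test-shard-local-database'
```

A plain lifted scalar (what `liftQuery` binds) can't be joined against as if it were a table, which is why the `infix` wrapping into a `Query` is needed above.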
Li Haoyi
@lihaoyi-databricks
a bit of a weird postgres quirk but I guess not the worst one I've hit
Alexander Ioffe
@deusaquilus
yeah, lots of that in postgres
Li Haoyi
@lihaoyi-databricks
doesn't compare to the time where deleting old records made the query planner go haywire and start doing table scans
Alexander Ioffe
@deusaquilus
heh, maybe liftUnnestQuery(list)
SQL query planners are the bane of my existence
half the time, if they'd just cache a complex sub-view the entire problem would be solved, but there's no directive in SQL to do that. You'd think that's what CTEs do, but it's not
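For what it's worth, Postgres 12 did add a directive along these lines: marking a CTE `MATERIALIZED` forces the sub-view to be evaluated once and reused instead of being inlined into the outer query (table and column names below are illustrative):

```sql
WITH s AS MATERIALIZED (      -- force one-time evaluation of the sub-view
  SELECT bar, baz FROM someplace
)
SELECT foo, s.bar
FROM something
JOIN s ON something.foo = s.bar;
```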
Li Haoyi
@lihaoyi-databricks
basically SQL is the wrong level of abstraction. It tries to hide the implementation, but whether a query runs in 40 milliseconds or 40 minutes actually matters for a lot of use cases...
Most of the time I would be happier writing query plans directly
Alexander Ioffe
@deusaquilus
Lol, welcome to my life
Li Haoyi
@lihaoyi-databricks
Like I want to specify what index the query will use, and if I want a table scan I'll ask for it thank you very much
Alexander Ioffe
@deusaquilus
The problem is, if we start doing that we're basically back to writing stored-procs... that's essentially what they do
Li Haoyi
@lihaoyi-databricks
sounds good to me
I think this query plan funkiness is a large reason why "dumber" databases like Mongo took off
Alexander Ioffe
@deusaquilus
Nah, stored-procs are a nightmare to maintain. They're too low level.
Li Haoyi
@lihaoyi-databricks
sure talking to mongo may involve over-fetching tons of data, and lots of round-trips, but at least it's a predictable amount of over-fetching and round trips
whereas with postgres, things hum along nicely until suddenly your query plan crosses some heuristic and all hell breaks loose
and naturally it only happens in production
hooray
Alexander Ioffe
@deusaquilus
... and that's databases in a nutshell!
that's why databases are a sub-speciality
that's why an entire class of pseudo-engineer was created to manage them
i.e. the DBA
Problem is, now we're in the Post-SQL, Post-DBMS era, and by the time a DBA could effectively model the data, the requirements would have totally changed
I remember 10 years ago Gartner saying "With the rapid decrease of DBA staffing, data governance will increase to become a challenge in the modern corporation"
Li Haoyi
@lihaoyi-databricks
I don't really see how a DBA would deal with this stuff any better than I would though
Alexander Ioffe
@deusaquilus
He wouldn't. That's the problem
Li Haoyi
@lihaoyi-databricks
in the end it's still "postgres decided to go haywire in production because N rows became N+1"
I complain a lot about postgres, but I don't imagine MySQL is any better; it's not exactly known for a lack of footguns
Alexander Ioffe
@deusaquilus
I think MySQL is worse
nafg
@nafg
Postgres is way better than MySQL
Also, MySQL is way better than it used to be...
At least that's my understanding
Alexander Ioffe
@deusaquilus
SQL Server and Oracle are a bit more consistent in how they handle workloads, but they'll cost you an arm and a leg (and your immortal soul for the latter as well)
I think a sophisticated system of Query-hinting would solve 95% of problems. I.e. something like:
select foo, bar from #cache(select bar baz from someplace) as s join something sn on #index(sn.foo = bar)
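Something close to this exists today as a Postgres extension: pg_hint_plan reads hints from a leading comment and can pin scan methods and indexes per table. A sketch of the idea (table and index names are illustrative, and assume the extension is installed):

```sql
/*+ IndexScan(sn sn_foo_idx) */   -- pin sn to an index scan on sn_foo_idx
SELECT foo, bar
FROM someplace s
JOIN something sn ON sn.foo = s.bar;
```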
Alexander Ioffe
@deusaquilus
Anyhow, I really don't like liftQuery actually. I think it should be replaced with liftDataset in Spark, and the inLifted operator
(e.g. people.filter(p => p.name inLifted (set)) )
Maybe inLiftedSet
Then there should be liftUnnest which does something like we did above
it would be really nice to have just people.filter(p => p.name inSet (lift(set))) though
Maybe for Dotty I could do that
I already have multiple interpretations of lift in Dotty