Paul Snively
@paul-snively
The sooner the industry gets over this cancerous multi-decade set of delusions, the better. It's a shame I'll be dead first.
But being immortal would be worse.
Gabriel Claramunt
@gclaramunt
I just use an embedded database and run the tests against it; when mocking I feel really unproductive (though sometimes it's better to avoid instantiating the whole thing)
Ryan Zeigler
@rzeigler
^^
highly recommend
Rob Norris
@tpolecat
It can be useful to write simulators for external systems but I think "the database" is usually too big to simulate in a useful way (unless you're writing a database library).
Gavin Bisesi
@Daenyth
@gclaramunt that often fails though because embedded options often don't behave the same way. Like you can't use H2 to test postgres queries, for example
Rob Norris
@tpolecat
As with all tests it's important to understand what you're testing. Sometimes it's fine, but not always.
Gabriel Claramunt
@gclaramunt
yeah... I've used testcontainers before... it's amazing and horrifying at the same time :D (I mean... starting a docker image with a whole db/queue/etc for a unit test?)
Gavin Bisesi
@Daenyth
"unit" test is often a poor framing I find
Paul Snively
@paul-snively
I strongly disbelieve in “unit” tests. They’re a profoundly misleading waste of time.
Gabriel Claramunt
@gclaramunt
as long as you don't get religious about it, I don't have a big problem with them.... I mean, I want to test my code :D
Paul Snively
@paul-snively
I want to meaningfully test my code. 😉
Gabriel Claramunt
@gclaramunt
yes!
Paul Snively
@paul-snively
Unsurprisingly, my view hasn’t changed noticeably from https://www.infoq.com/presentations/Types-Tests/
Andy Hamon
@andrewhamon

Hey all :wave:

I'm currently evaluating doobie and slick for database access. I'm wondering, with doobie, if anyone has ever implemented any sort of linting or static analysis to ensure that all queries are checked for type safety? Having to rely purely on discipline to manually make a unit test for each query seems doomed to fail as soon as more than a small handful of folks become involved in a project

Jacob Wang
@jatcwang

@andrewhamon https://tpolecat.github.io/doobie/docs/06-Checking.html

But it's still essentially an integration test against the DB you're using. I also recommend writing integration tests against the actual DB: behaviour like upserts or distinct is crucial for correct business logic, so it should be tested!

Hopefully you have a code review / coverage process that can help catch missing tests.
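
For reference, a minimal sketch of what such a check test can look like with doobie's scalatest module (doobie-scalatest); the driver, connection details, and query here are placeholders, and the exact Transactor construction varies with your doobie/cats-effect versions:

import cats.effect.IO
import doobie._
import doobie.implicits._
import doobie.scalatest.IOChecker
import org.scalatest.funsuite.AnyFunSuite

// Runs doobie's query analysis against a live test database and fails
// if the SQL, parameter types, or column mappings don't line up.
class QueryCheckSpec extends AnyFunSuite with IOChecker {

  // Placeholder connection details -- point this at your test database.
  val transactor: Transactor[IO] =
    Transactor.fromDriverManager[IO](
      "org.postgresql.Driver", "jdbc:postgresql:testdb", "user", "password"
    )

  test("findUserByName typechecks against the schema") {
    check(sql"select id, name from users where name = 'x'".query[(Long, String)])
  }
}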

Andy Hamon
@andrewhamon
@jatcwang maybe I wasn't clear... I'm aware of the checking facilities in doobie, but am interested in development/test tooling to ensure that no query in my application goes unchecked
Jacob Wang
@jatcwang
There's no compile time type checking against the DB that I know of. It's possible I think, using macros that connect to a DB when compiling, but that's exactly as terrible as it sounds
Andy Hamon
@andrewhamon
Gotcha. I think Quill has compile-time checking (by connecting to and introspecting a DB, similar to doobie checks), but I agree, that sounds kind of terrible in a lot of ways. I was more thinking of a slightly-smarter-than-dumb linter that would nag you if it found a query that didn't have a check somewhere in specs
I'm still very new to Scala so I'm not sure what the linting landscape looks like, but in Ruby I don't think it would be too difficult to do something similar with RuboCop

Somewhat relatedly, I read in the documentation that only Query0/Update0 types can be checked, and not ConnectionIOs... this also seems a bit problematic. Imagine the very simple case of a method to find a single user by name... That might look like

def findUserByName(name: String): ConnectionIO[Option[Person]] = sql"select id, name from users where name = ${name} limit 1".query[Person].option

The user with the specified name might not exist, so I want to represent that fact with an Option, always. Calling .option converts the Query0 into a ConnectionIO, though, which means it can't be checked.

I must instead write

def findUserByName(name: String): Query0[Person] = sql"select id, name from users where name = ${name} limit 1".query[Person]

Which doesn't contain the correct semantics in the type signature anymore (no Option[Person]), so each caller must remember to add .option, and not, say, .unique, which would be incorrect (we expect 0 or 1 records, not exactly 1). I can't encapsulate the fact that this query returns zero or one rows in the type system without forgoing the ability to run check on the underlying Query0. I guess I could have two methods, one that returns a Query0 and is private other than for testing, and then another that wraps it with .option and returns a ConnectionIO, but that seems needlessly verbose. It would be great if there were instead a way to extract the component Query0s and Update0s from a ConnectionIO and check those.
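
A minimal sketch of that two-method workaround (the Person type, table name, and myapp package qualifier are illustrative): the raw Query0 stays reachable for check in tests, while callers only see the ConnectionIO with the Option in its type.

package myapp

import doobie._
import doobie.implicits._

final case class Person(id: Long, name: String)

object PersonQueries {
  // Exposed (package-private) so a check(...) test can analyze the raw query.
  private[myapp] def findUserByNameQuery(name: String): Query0[Person] =
    sql"select id, name from users where name = $name limit 1".query[Person]

  // What the rest of the application calls: the Option is in the type,
  // so callers can't accidentally pick .unique semantics.
  def findUserByName(name: String): ConnectionIO[Option[Person]] =
    findUserByNameQuery(name).option
}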

Big caveat to the above is that I am new to Scala, doobie, and functional programming, so I might just be missing some really obvious patterns.

Jacob Wang
@jatcwang

About linting, there's scalafix / scalameta. So if you're really really motivated, you could compile the whole project and ensure that all methods returning Query0 etc. have a corresponding query check in a file in the test scope.
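
As a rough illustration of that idea (only a sketch; the file path is made up and the purely syntactic matching is simplistic), scalameta can parse a source file and list the defs whose declared return type mentions Query0 or Update0, which a custom rule could then cross-reference against the checks in the test sources:

import scala.meta._

object FindCheckableQueries {
  def main(args: Array[String]): Unit = {
    // Parse one source file and collect the names of defs whose declared
    // return type mentions Query0 or Update0.
    val path = java.nio.file.Paths.get("src/main/scala/PersonQueries.scala")
    val code = new String(java.nio.file.Files.readAllBytes(path), "UTF-8")

    val queryDefs: List[String] = code.parse[Source].get.collect {
      case d: Defn.Def if d.decltpe.exists(t => t.syntax.contains("Query0") || t.syntax.contains("Update0")) =>
        d.name.value
    }

    // A lint rule would then require a corresponding check(...) for each of these.
    println(queryDefs)
  }
}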

You're right about the query type checking. Unfortunately I don't think you can "pull apart" ConnectionIO, because each step is chained together using flatMap. What this means is that the later queries you run can depend on the value of a previous step, so without executing the ConnectionIO there's no way to find out what queries you're running.
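
A tiny example of why that is (the users/orders tables and queries are made up): in the ConnectionIO below, the second query is only constructed once the first one has produced an id at runtime, so there is nothing static left to extract and check.

import doobie._
import doobie.implicits._

object OrderQueries {
  // The second query depends on the value returned by the first,
  // so the full set of statements can't be recovered without running it.
  def ordersForUser(name: String): ConnectionIO[List[Long]] =
    for {
      id     <- sql"select id from users where name = $name".query[Long].unique
      orders <- sql"select id from orders where user_id = $id".query[Long].to[List]
    } yield orders
}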

Taking a step back, I personally prefer integration tests because they're less brittle. I can refactor my ConnectionIO to use one query instead of two without needing to touch the integration tests (while they ensure that my refactor retains the original behaviour).

Andy Hamon
@andrewhamon
I'm with you, but I'm in a team setting with varied levels of experience and motivation to Do The Right Thing -- in that light, I am very interested in fully automated query type safety as a baseline safety net
Not being able to pull apart a ConnectionIO makes sense, I totally forgot that queries can depend on the result of previous queries!
Jacob Wang
@jatcwang

I can sympathize. I think a lot of the hard stuff is already there if you want to go down that path. You can inspect the Scala code with https://scalameta.org/.

Just to put it on your horizon, there are concepts out there that allow you to inspect/optimize chained operations, such as free applicatives or arrows. For example https://github.com/47degrees/fetch

Billzabob
@Billzabob
I have a query with a lot of data that I'm trying to .stream. It seems to take a long time before it even starts to stream, while it's coming up with an execution plan I'm guessing, and it looks like it's timing out eventually. Does anyone know about timeouts like this and how I could increase the timeout? It seems to happen after ~3 minutes.
Rob Norris
@tpolecat
You can usually set the timeout with a connection parameter.
What back end are you using?
Billzabob
@Billzabob
MySQL
I tried hikariDataSource.setConnectionTimeout(bigNumber) and it doesn’t seem to fix it.
Rob Norris
@tpolecat
I never was able to get streaming to work with MySQL, I think it just loads everything into memory.
The driver is terrible.
The connection timeout isn't what you want. There is probably something you can add to your connect URL.
Billzabob
@Billzabob
Ok thanks, I’ll try that out. I thought streaming worked with MySQL, does it really not? I feel like I’ve done things in the past that definitely would’ve crashed if it was actually loading the whole thing into memory.
Rob Norris
@tpolecat
It's possible that it got fixed; it's been a while.
Billzabob
@Billzabob

From here: https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-implementation-notes.html

By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate and, due to the design of the MySQL network protocol, is easier to implement. If you are working with ResultSets that have a large number of rows or large values and cannot allocate heap space in your JVM for the memory required, you can tell the driver to stream the results back one row at a time.
To enable this functionality, create a Statement instance in the following manner:

stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
              java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);

This is how doobie streams, right? So it should work?

Rob Norris
@tpolecat
Yes, but iirc it actually doesn't.
In any case the timeout is probably orthogonal.
You can use VisualVM or something to see if it's spending all that time allocating memory.
Alessandro Buggin
@abuggin
I'm trying to test how my code handles an edge case of a flaky datasource.
Sometimes I get an exception from the datasource, and I'm wrapping the .transact method to retry in case of a particular SQL exception. It looks like it's working, but I'm trying to figure out how to write a test.
Is there a way to build a Transactor.fromDataSource(ds) where the ds is flaky in the tests?
Do you happen to have a better approach in mind?
Rob Norris
@tpolecat
DataSource is a very small interface, you could write an implementation that wraps a real one and occasionally flakes out.
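
Something along these lines might work (a rough sketch, not a tested implementation): delegate everything to the real DataSource, but throw a SQLException on some fraction of getConnection calls, and build the test Transactor from the wrapper.

import java.io.PrintWriter
import java.sql.{Connection, SQLException}
import java.util.logging.Logger
import javax.sql.DataSource

// Wraps a real DataSource and fails a configurable fraction of
// getConnection calls, to exercise retry logic in tests.
final class FlakyDataSource(underlying: DataSource, failureRate: Double) extends DataSource {
  private val rnd = new scala.util.Random(42) // fixed seed for reproducible tests

  private def maybeFail(): Unit =
    if (rnd.nextDouble() < failureRate)
      throw new SQLException("Simulated flaky data source", "08006")

  override def getConnection(): Connection = { maybeFail(); underlying.getConnection() }
  override def getConnection(user: String, pass: String): Connection = {
    maybeFail(); underlying.getConnection(user, pass)
  }

  // Everything else just delegates to the real DataSource.
  override def getLogWriter(): PrintWriter = underlying.getLogWriter()
  override def setLogWriter(out: PrintWriter): Unit = underlying.setLogWriter(out)
  override def setLoginTimeout(seconds: Int): Unit = underlying.setLoginTimeout(seconds)
  override def getLoginTimeout(): Int = underlying.getLoginTimeout()
  override def getParentLogger(): Logger = underlying.getParentLogger()
  override def unwrap[T](iface: Class[T]): T = underlying.unwrap(iface)
  override def isWrapperFor(iface: Class[_]): Boolean = underlying.isWrapperFor(iface)
}

Then Transactor.fromDataSource with a FlakyDataSource wrapping the real one (the extra execution-context arguments depend on your doobie/cats-effect versions) should give the retry logic something to fail against.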
Eric K Richardson
@ekrich
@tpolecat Does your team use PostGIS, and if so, any thoughts?
Rob Norris
@tpolecat
No, we don’t use it.
Eric K Richardson
@ekrich
Ok thank you. Do you use any coordinate systems associated with timestamps and fancy indexes or anything?
Rob Norris
@tpolecat
Thus far we don’t need to do proximity search for targets in our system so we don’t need a special index.
We do have a star catalog that supports proximity searches but that’s in another department and I don’t know how they do it.
Eric K Richardson
@ekrich
Ok, just thought I'd ask to see.
Øyvind Raddum Berg
@oyvindberg
If anyone is using IntelliJ and doobie, you may want to show some interest in (or help dream up a better solution for) this issue I created, so we can get better editor support for sql"select * from $tablename" and similar constructions: https://youtrack.jetbrains.com/issue/SCL-18170