Kai
@neko-kai

@Adriani277 Assuming you’re inheriting a new class BIOCatsParallel from BIOCatsSync, where:

class BIOCatsSync[F[+_, +_]](override val F: BIO[F]) extends BIOCatsBracket[F](F) with cats.effect.Sync[F[Throwable, ?]]

Then the Monad[M] is fulfilled by this:

override val monad: Monad[F[Throwable, ?]] = this
The Parallel.applicative field is the parallel applicative - aka an applicative where zip is implemented by zipPar. So you’d override type F[A] = F[Throwable, A] and implement the applicative inline:

override lazy val applicative: Applicative[F] = new Applicative[F] {
  // the "parallel" applicative: ap/product are backed by zipPar instead of flatMap,
  // so derived combinators like traverse become parTraverse
  def pure[A](a: A): F[A] = ...
  def ap[A, B](ff: F[A => B])(fa: F[A]): F[B] = ... // zipPar(ff, fa), then apply the function
}
The Parallel typeclass witnesses a two-way conversion between the monad M and some parallel variant of it, F. In this case the parallel variant is the same type; we just use different functions to implement the parallel applicative.
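Putting the pieces together, the overall shape might look roughly like this; a minimal sketch, where T stands for the F[Throwable, ?] alias and bioParallel/monadT/parApplicativeT are made-up names for illustration:

import cats.arrow.FunctionK
import cats.{~>, Applicative, Monad, Parallel}

// Both sides of the conversion are the same type T, so `sequential` and
// `parallel` are identity natural transformations.
def bioParallel[T[_]](monadT: Monad[T], parApplicativeT: Applicative[T]): Parallel.Aux[T, T] =
  new Parallel[T] {
    type F[A] = T[A]                          // the "parallel variant" is the same type
    val applicative: Applicative[T] = parApplicativeT
    val monad: Monad[T] = monadT
    val sequential: T ~> T = FunctionK.id[T]  // converting back and forth is a no-op
    val parallel: T ~> T = FunctionK.id[T]
  }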
Adriani Furtado
@Adriani277
Ahh I see, thanks for the help
Kai
@neko-kai
:+1:
Adriani Furtado
@Adriani277
@neko-kai I believe I have most of the code for cats.Parallel, but I’ve come across an issue and I’m not sure what the best way to resolve it is
Kai
@neko-kai
@Adriani277 Answered you in the review
Didac
@umbreak
Hi everyone!
Is there a way to use something like ScalaTest’s BeforeAndAfterAll with the distage testing framework, where I can have access to the injected resources?
I want to run some health checks against an endpoint of the running Docker container before starting my tests
Paul S.
@pshirshov
You may create shared ("memoized") resources to get behaviour identical to Before/After hooks. Though you don't have to do that manually: distage-testkit can perform integration checks (including checks on Docker containers) for you
  override protected def config: TestConfig = {
    super.config.copy(
      memoizationRoots = Set(DIKey.get[PgSvcExample]),
      ...
    )
  }
That's how you declare a dependency to be shared across all the tests
PgSvcExample may be a resource; in that case it would be created before all the tests and finalized after all of them
Kai
@neko-kai
@umbreak The above suggestion is correct: the way to do this currently is to create a DIResource with acquire/release actions that are your before/afterAll actions. Then you can pin the resource to be global via memoizationRoots AND pin it to be a dependency of all tests, such that it doesn’t need to be summoned manually as a parameter for its acquire/release actions to execute.
final class MyTransactor[F[_]] extends DIResourceBase[F, MyTransactor[F]] {
  def acquire = waitForPostgres  // your beforeAll action
  def release = deleteAllTables  // your afterAll action
}

trait MyTest extends DistageAbstractScalatestSpec[IO] {
  override def config = super.config.copy(
    memoizationRoots = super.config.memoizationRoots ++ Set(
      DIKey.get[MyTransactor[IO]], // force MyTransactor to be acquired/released only once per test run
    ),
    forcedRoots = super.config.forcedRoots ++ Set(
      DIKey.get[MyTransactor[IO]], // force MyTransactor to be acquired/released always, no matter whether the running tests declare it as a parameter
    )
  )
}
Paul S.
@pshirshov
Technically we have basic support for 'await until ready' semantics. I think we may improve the docker layer to support your scenario too (by the way, PRs are welcome)
Kai
@neko-kai
@umbreak Also, would it be correct to assume that the reason IntegrationCheck doesn’t fit your use-case is that it’s synchronous and doesn’t have access to the effect type? I think in that case we can add an F[_] parameter to it in the next release
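For illustration, a guess at what such an effectful check interface could look like (this is not the current izumi API; the ResourceCheck ADT below is a simplified stand-in for the real one):

// Hypothetical sketch: an IntegrationCheck parameterized by the effect type,
// so the readiness probe itself can run inside F.
sealed trait ResourceCheck
object ResourceCheck {
  case object Success extends ResourceCheck
  final case class Failure(reason: String) extends ResourceCheck
}

trait IntegrationCheckF[F[_]] {
  def resourcesAvailable(): F[ResourceCheck]
}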
Paul S.
@pshirshov
There is some asynchronous awaiting logic in the docker integration layer
Didac
@umbreak
That would help, yes. Also, as you can see in the example, I’m building a Resource for a Transactor (make[Transactor[IO]].fromEffect(...)), and for the waiting logic I need the transactor itself
private def waitForPostgresReady(xa: Transactor[IO], maxDelay: FiniteDuration = 30.seconds): IO[Unit] = ???
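One way the body of such a check might look; a sketch only, assuming doobie on cats-effect 2 (the one-second retry cadence is an assumption):

import cats.effect.{IO, Timer}
import cats.implicits._
import doobie.implicits._
import doobie.util.transactor.Transactor
import scala.concurrent.duration._

// Retry `select 1` until it succeeds or the deadline passes.
def waitForPostgresReady(xa: Transactor[IO], maxDelay: FiniteDuration = 30.seconds)(implicit timer: Timer[IO]): IO[Unit] = {
  def attempt(remaining: FiniteDuration): IO[Unit] =
    sql"select 1".query[Int].unique.transact(xa).void.handleErrorWith { err =>
      if (remaining <= Duration.Zero) IO.raiseError(err) // give up, surface the last error
      else IO.sleep(1.second) *> attempt(remaining - 1.second)
    }
  attempt(maxDelay)
}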
Yep, that may be kind of a problem
Kai
@neko-kai
@umbreak I think in that case just adding your existing Transactor[IO] to memoizationRoots will make it happen only once
Paul S.
@pshirshov
Integration checks run early, and integration checks with complex dependencies may lead to unexpected outcomes. One possible option is to memoize the transactor while keeping the integration checks at the per-test level
@umbreak: could you describe your use-case as precisely as possible, please?
I mean, what behaviour do you wish to have?
Didac
@umbreak
Cool, thanks. One other thing: allowReuse = false on DockerContainerModule will create a new Docker container for every single test block, right?
I’ll detail it soon, I have to answer a call first. Thanks again
Kai
@neko-kai
It will create a new container every time the resource is acquired - if it’s memoized, that will only happen once per run / per memoization scope anyway
Paul S.
@pshirshov
Nope, for every single test env
Kai
@neko-kai
For every test if not memoized, for every test env if memoized
reuse=true allows reusing the same container between multiple test runs on top of that
Paul S.
@pshirshov

By the way, I had a quick look at your project and was surprised by the fact that you use LocatorRef: make[Cli[F]].tagged(Repo.Dummy).from { locatorRef: LocatorRef => new Cli[F](Some(locatorRef)) }

I'm very curious: why did you decide to use that feature?

Paul S.
@pshirshov
That looks kinda strange. I need to dig deeper, but I'm sure LocatorRef can be avoided for your use-case
We implemented it as a last-resort feature; it is not expected to be needed under normal circumstances
Didac
@umbreak
Ok, probably you are right. We have just recently started using distage… maybe there are simpler ways
Paul S.
@pshirshov
Also I noted that you don't use distage to wire the implicit arguments of your components. I would suggest wiring them with distage; it works well
Kai
@neko-kai
This will (optionally) rebuild the entire application, right? distage has a mechanism, ‘roles’, for the use-case of building parts of the application for sub-commands - https://izumi.7mind.io/latest/release/doc/distage/distage-framework.html#roles
They could probably be used to avoid the recursive design here, e.g. you could pass parameters to the influx CLI subcommand like ./app :influx param1 --param2
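Conceptually, a role is a named entrypoint wired by distage; a rough sketch of the idea only, not the exact izumi API (see the linked docs for the real RoleService/RoleDescriptor signatures):

// Each role is a named entrypoint; the launcher builds only the subgraph
// of the application that the selected roles actually need.
trait Role[F[_]] {
  def id: String                               // matched against `:influx` on the command line
  def start(freeArgs: Vector[String]): F[Unit]
}

trait InfluxClient[F[_]] // stand-in for a component from this discussion

final class InfluxRole[F[_]](client: InfluxClient[F]) extends Role[F] {
  val id = "influx"
  def start(freeArgs: Vector[String]): F[Unit] = ??? // run the influx subcommand with freeArgs
}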
Didac
@umbreak
I’ll look into roles
Didac
@umbreak

@neko-kai If I had something like what you posted:

trait AbstractTest extends DistageAbstractScalatestSpec[IO] {
  override def config = super.config.copy(
    memoizationRoots = super.config.memoizationRoots ++ Set(
      DIKey.get[Transactor[IO]], DIKey.get[InfluxClient[IO]]
    )
  )
}

And then I have 2 tests, PostgresSpec extends AbstractTest and InfluxSpec extends AbstractTest. When I run testOnly *InfluxSpec, I can see that the Postgres container is also started and waitForPostgresReady also runs, even though I’m just running InfluxSpec.

Kai
@neko-kai
@umbreak Actually the problem here is that sbt testOnly doesn’t work correctly with distage-testkit: in reality all tests are run, not just the ones reported. Try running the InfluxSpec class from IntelliJ IDEA - it will work correctly. You can also run selected test cases with testOnly -- -z "<test-name-pattern>", but that selects specific test cases, not specific test suites
Kai
@neko-kai
The problem here is that ScalaTest’s sbt runner does not pass information down to suites about which suites it’s trying to launch, while distage-testkit relies on being launched just once and doing the rest of test discovery on its own. For IntelliJ’s / ScalaTest’s own ScalaTestRunner, distage does not do discovery, because they follow a specific protocol: all test classes are constructed first, and tests are launched only after everything is constructed. This allows distage-testkit to register tests during construction and, once its run method is called, collect the registered tests; that single entrypoint is what enables global sharing and all the features stemming from it. Under sbt this doesn’t work, because ScalaTest’s sbt runner launches tests one by one and passes no parameters that would identify the enabled suites. The only way to figure out which suites were selected would be to wait for a long enough pause between run calls, which would be pretty horrifying, so instead it just launches everything
Didac
@umbreak
Right, running the tests through IntelliJ works
Kai
@neko-kai
There’s currently an effort in ZIO to provide enough features to run distage-testkit on top of it, so we may at some point provide an implementation on top of ZIO Test’s runner that wouldn’t suffer from this issue. ScalaTest is nearly abandoned and extremely unresponsive to issues and PRs, so we didn’t try to patch the issue upstream; also, since we’re using IntelliJ for development, fixing this hasn’t been a very high priority yet
Didac
@umbreak
We are also using IntelliJ, so it isn’t a problem for us to run the tests through the IDE. Thanks
Paul S.
@pshirshov
We have some plans to implement our own testkit (or to integrate with ZIO Test). Unfortunately ScalaTest is too limited to properly support our workflow with global planning. At the moment we have no ETA though; it's not an easy task
Bogdan Roman
@bogdanromanx
thanks @neko-kai and @pshirshov for the explanations on how to properly use the container support in distage; it’s really useful!
to comment on the need for accessing the locator directly:
  • we need to be able to configure the injection plan based on arguments that are passed to the CLI
  • unfortunately, due to the way decline works, we’re only able to produce a configuration object after evaluating all the subcommands and arguments/flags; a simple example: imagine some part of the code relies on having a Transactor[F] instance, but if you have something like --dbhost 127.0.0.1 as a possible argument, you can only assemble the configuration and bindings after the evaluation of the command (sketched below)
  • this makes things a bit awkward, since there would be a lot of code repetition for shared arguments if you’d like to benefit from distage’s instance construction; having access to the locator as a resource is quite cool, since you can decide what to execute and still benefit from distage’s wiring
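A minimal decline sketch of the point above (DbConfig and the subcommand names are made up): the configuration value only exists after the full command line, subcommands included, has been evaluated, and shared arguments have to be repeated under each subcommand:

import com.monovore.decline.{Command, Opts}

final case class DbConfig(host: String) // hypothetical config assembled from CLI arguments

val dbHost: Opts[DbConfig] =
  Opts.option[String]("dbhost", help = "Database host").map(DbConfig)

// the shared --dbhost argument is repeated under each subcommand
val influx: Opts[DbConfig] = Opts.subcommand("influx", "Run the influx subcommand")(dbHost)
val server: Opts[DbConfig] = Opts.subcommand("server", "Run the http server")(dbHost)

val cmd: Command[DbConfig] = Command("app", "Example app")(influx orElse server)
// only after parsing can bindings that need DbConfig be assembled:
// cmd.parse(Seq("influx", "--dbhost", "127.0.0.1")): Either[Help, DbConfig]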
on a separate note: what is your experience using bifunctors over monofunctors in real applications? we’ve mostly used EitherT, but it’s not really ergonomic
Bogdan Roman
@bogdanromanx
we do prefer deferring the decision on choosing an effect type, so most of our codebase uses tagless final; since typeclasses for F[+_, +_] are not defined in something like cats-effect, it’s either monofunctor with cats typeclasses or ZIO without abstracting over the effect
Adriani Furtado
@Adriani277
@bogdanromanx have you looked into BIO as a means to use tagless final with a bifunctor?
You can convert your codebase to bifunctor tagless with BIO and use the interop whenever you need cats instances
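For a flavor of what that looks like; a small sketch assuming izumi’s BIO hierarchy and its F summoner (the error type and function here are made up):

import izumi.functional.bio.{BIO, F}

// typed errors live in the bifunctor's left channel
final case class UserNotFound(id: String)

// tagless final over a bifunctor: works for any F[+_, +_] with a BIO instance, e.g. ZIO
def findUser[F[+_, +_]: BIO](id: String, db: Map[String, String]): F[UserNotFound, String] =
  db.get(id) match {
    case Some(user) => F.pure(user)
    case None       => F.fail(UserNotFound(id))
  }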