Kai
@neko-kai
@voonchav_gitlab You need to create your own StringRenderingPolicy in code; see izumi.logstage.sink.ConsoleSink for an example. There are two bundled policies: ColoredConsoleSink (the default) and SimpleConsoleSink for terminals without color
vonchav
@voonchav_gitlab
Got it. I'll look into it. I'm more interested in the format, rather than the colors, which is fine by me :)
Kai
@neko-kai
Yeah, the policy specifies both (colours are just special characters added to the text, interpreted by the terminal)
vonchav
@voonchav_gitlab
cool cool. will dig into it. thank you.
vonchav
@voonchav_gitlab
Hi @neko-kai. I suppose that if I declare a val for my message outside of the logger call, the LogStage macro won't do the intended interpolation in that case. Can you confirm?
val message = s"A is $a; B is $b"
logger.info(message)
Kai
@neko-kai
@voonchav_gitlab Yes. If you want to assemble a message beforehand, use Message:
import logstage.Log.Message
import logstage.Info

val message = Message(s"A is $a; B is $b")
logger.log(Info)(message)
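The underlying distinction can be illustrated with plain Scala, no LogStage required: the names `parts`, `args` and `flat` below are illustrative, not LogStage API. A macro like logger.info sees the literal fragments and the interpolated arguments separately (as a StringContext), so it can record structured name/value pairs; a pre-built s-string has already erased that structure.

```scala
// Plain-Scala sketch: why interpolating eagerly loses structure.

val a = 1
val b = 2

// What an interpolation macro conceptually receives: literal fragments and
// arguments, still separate, so structured logging is possible.
val parts = StringContext("A is ", "; B is ", "")
val args  = Seq(a, b)

// What a pre-built s-string gives you: a flat String, structure already erased.
val flat: String = s"A is $a; B is $b"

// Gluing the fragments and arguments back together yields the same flat text.
assert(parts.s(a, b) == flat)
```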
vonchav
@voonchav_gitlab
Nice. @neko-kai It'd be nice if Message were added to the docs... well, if I get a chance, I'll submit a PR.
Kai
@neko-kai
@voonchav_gitlab That would be much appreciated! Thanks :+1:
andres-pipicello
@andres-pipicello
Hi! I want to use the plugins from the Izumi SBT toolkit. I see that it has moved to the sbtgen repo, but it seems that the style described in the documentation is deprecated. Is that right?
Kai
@neko-kai
@andres-pipicello Yeah, basically, they’re not used much except for the BOM part and IzumiResolverPlugin. Instead we’ve moved to describing builds with sbtgen’s DSL. You can still add & use them with:
addSbtPlugin("io.7mind.izumi.sbt" % "sbt-izumi" % "0.0.53")
Adriani Furtado
@Adriani277

Hi all, I am currently trying out izumi-fundamentals; I have something like this

import izumi.functional.bio.{BIOMonadError, F}
case class MyClass[F[+_, +_]: BIOMonadError](){
    F.fail("Some Error")
}

From looking at the izumi BIO page, I am led to believe the code above should give me back an F; however, I am getting back a ZIO.
Is there something I may be missing here? There are no ZIO imports in scope

Adriani Furtado
@Adriani277
On a separate note, is there something similar to cats.Parallel in BIO? I have a BIOMonadError in scope and would like to perform 2 operations in parallel, but the ap2/map2 instances are monadic
Kai
@neko-kai
@Adriani277
Enable -Ypartial-unification scalac option or update to Scala 2.13 - this will fix the issue
@Adriani277 BIOAsync provides race & parTraverse
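On Scala 2.11/2.12 that flag goes into the build definition; a minimal build.sbt sketch (on Scala 2.13+ partial unification is on by default and the flag no longer exists, so it is gated on the version here):

```scala
// build.sbt (sketch): enable partial unification only on Scala <= 2.12.
scalacOptions ++= {
  CrossVersion.partialVersion(scalaVersion.value) match {
    case Some((2, v)) if v <= 12 => Seq("-Ypartial-unification")
    case _                       => Seq.empty // 2.13+: default behaviour, flag removed
  }
}
```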
Adriani Furtado
@Adriani277

@neko-kai thanks for the info. I am trying to basically achieve a zipPar
I currently have the following simple/naive implementation

  def zip2Par[F[+_, +_]: BIOFork: BIOMonadError, E, A0, A1](
      a: F[E, A0],
      b: F[E, A1]
  ): F[E, (A0, A1)] =
    for {
      f0 <- a.fork
      f1 <- b.fork
      j0 <- f0.join.catchAll(e => f1.interrupt *> BIOMonadError[F].fail(e))
      j1 <- f1.join
    } yield (j0, j1)

Is this functionality something that already exists in BIO?

Kai
@neko-kai
@Adriani277 Nope, zipPar is not exposed currently; it would be great if you could submit a pull request adding it to BIOAsync. (Or you could even add a BIOParallel sub-hierarchy, if you got a grip of how sub-hierarchies such as BIOAsk, BIOBifunctor and BIOProfunctor are implemented)
Adriani Furtado
@Adriani277
@neko-kai awesome. I will have a go at it
Kai
@neko-kai
@Adriani277 Thanks! :+1:
Adriani Furtado
@Adriani277
@neko-kai, I have raised a PR 7mind/izumi#1029
Adriani Furtado
@Adriani277
@neko-kai I have been looking at adding a cats instance for Parallel, but I am not really sure how it can be done. Parallel needs a Monad and an Applicative, but I don't quite see what we can provide for both values, although we now have BIOParallel. Any ideas?
Kai
@neko-kai

@Adriani277 Assuming you’re inheriting a new class BIOCatsParallel from BIOCatsSync, where:

class BIOCatsSync[F[+_, +_]](override val F: BIO[F]) extends BIOCatsBracket[F](F) with cats.effect.Sync[F[Throwable, ?]]

Then the Monad[M] is fulfilled by this: override val monad: Monad[F[Throwable, ?]] = this
The Parallel.applicative field is the parallel applicative, i.e. an applicative where zip is implemented by zipPar. So you’d override type F[A] = F[Throwable, A] and implement the applicative inline:

override lazy val applicative: Applicative[F] = new Applicative[F] {
  def zip = zipPar
  def traverse = parTraverse
  def ap = ...
}
The Parallel typeclass witnesses a two-way conversion between this M and some parallel variant of M, called F. In this case the parallel variant is the same type; we just use different functions to implement the parallel applicative.
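The shape being described can be sketched in plain Scala with no cats or izumi dependency; `Checked`, `zipSeq` and `zipPar` below are illustrative names, not library API. The "sequential" zip is what a Monad gives you (fail-fast); the "parallel" zip is a different function over the very same type, here accumulating errors instead of short-circuiting. cats.Parallel packages exactly that: the type, its parallel variant (possibly the same type), and the alternate Applicative.

```scala
// Stand-in sketch of the Parallel idea: same type, two different zips.
type Checked[A] = Either[List[String], A]

// Monadic zip: short-circuits on the first Left.
def zipSeq[A, B](fa: Checked[A], fb: Checked[B]): Checked[(A, B)] =
  fa.flatMap(a => fb.map(b => (a, b)))

// "Parallel" applicative zip: inspects both values, accumulates all errors.
def zipPar[A, B](fa: Checked[A], fb: Checked[B]): Checked[(A, B)] =
  (fa, fb) match {
    case (Right(a), Right(b)) => Right((a, b))
    case (Left(e1), Left(e2)) => Left(e1 ++ e2)
    case (Left(e), _)         => Left(e)
    case (_, Left(e))         => Left(e)
  }
```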
Adriani Furtado
@Adriani277
Ahh I see, thanks for the help
Kai
@neko-kai
:+1:
Adriani Furtado
@Adriani277
@neko-kai I believe I have most of the code for cats.Parallel, but I have come across an issue which I am not sure how best to resolve
Kai
@neko-kai
@Adriani277 Answered you in the review
Didac
@umbreak
Hi everyone!
Is there a way to use something like ScalaTest's BeforeAndAfterAll with the distage testing framework, where I can have access to the injected resources?
I want to run some health checks against an endpoint where the docker container is running, before starting my tests
Paul S.
@pshirshov
You may create shared ("memoized") resources to get behaviour identical to Before/After hooks. Though you don't have to do that manually, distage-testkit can perform integration checks (including checks on docker containers) for you
  override protected def config: TestConfig = {
    super.config.copy(
      memoizationRoots = Set(DIKey.get[PgSvcExample]),
...
That's how you declare a dependency to be shared across all the tests
PgSvcExample may be a resource, in that case it would be created before all the tests and finalized after all the tests
Kai
@neko-kai
@umbreak The above suggestion is correct: the way to do this currently is to create a DIResource with acquire/release actions that are your beforeAll/afterAll actions. Then you can pin the resource to be global via memoizationRoots, AND pin it to be a dependency of all tests so that it doesn’t need to be summoned manually as a parameter for its acquire/release actions to execute.
final class MyTransactor[F[_]] extends DIResourceBase[F, MyTransactor[F]] {
   def acquire = waitForPostgres
   def release = deleteAllTables
}

trait MyTest extends DistageAbstractScalatestSpec[IO] {
  override def config = super.config.copy(
    memoizationRoots = super.config.memoizationRoots ++ Set(
      DIKey.get[MyTransactor[IO]], // force MyTransactor to be acquired/released only once per test-run
    ),
    forcedRoots = super.config.forcedRoots ++ Set(
      DIKey.get[MyTransactor[IO]], // force MyTransactor to be acquired/released always, no matter whether the running tests declare it as a parameter
    )
  ) 
}
Paul S.
@pshirshov
Technically we have basic support for 'await until ready' semantics. I think we may improve the docker layer to support your scenario too (by the way, PRs are welcome)
Kai
@neko-kai
@umbreak Also, would it be correct to assume that the reason IntegrationCheck doesn’t fit your use-case is because it’s synchronous and doesn’t have access to the effect type? I think in that case we can add the F[_] parameter to it in the next release
Paul S.
@pshirshov
There is some asynchronous awaiting logic in docker integration layer
Didac
@umbreak
That would help, yes. Also, as you can see in the example, I’m building a Resource for a Transactor (make[Transactor[IO]].fromEffect(...)), and for the waiting logic I need the transactor itself
private def waitForPostgresReady(xa: Transactor[IO], maxDelay: FiniteDuration = 30.seconds): IO[Unit] = ???
Yep, that may be kind of a problem
Kai
@neko-kai
@umbreak I think in that case just adding your existing Transactor[IO] to memoizationRoots will make it happen only once
Paul S.
@pshirshov
Integration checks run early, and integration checks with complex dependencies may lead to unexpected outcomes. One possible option is to memoize the transactor while keeping ICs at the per-test level
@umbreak : could you describe your use case as precisely as possible, please?
I mean, what behaviour do you wish to have?
Didac
@umbreak
cool thanks. One other thing: allowReuse = false on DockerContainerModule will create a new Docker container for every single test block, right?
I’ll detail it soon, I have to answer a call first. Thanks again
Kai
@neko-kai
It will create a new container every time the resource is acquired - if it’s memoized, it will only be once per-run/per-memoization scope anyway
Paul S.
@pshirshov
Nope, for every single test env
Kai
@neko-kai
For every test if not memoized, for every test env if memoized
reuse=true allows reusing the same container between multiple test runs on top of that