Vasily Sulatskov
@redvasily
@neko-kai I guess what I'm looking for is an equivalent of Guice's providers. My real problem is this: I need to be able to construct instances of KafkaConsumer classes whenever I need to, and these classes take a Java Map<String, Object> of configs as an argument. At the same time I have different types of consumers in the application, constructed with different properties:
  make[KafkaConsumer[String, String]].named("transactional").from {
    props: Map[String, AnyRef] @Id("transactional") => new KafkaConsumer[String, String](props.asJava)
  }
  make[KafkaConsumer[String, String]].named("nonTransactional").from {
    props: Map[String, AnyRef] @Id("nonTransactional") => new KafkaConsumer[String, String](props.asJava)
  }
But in my code I would like to be able to just have a "provider" for KafkaConsumer with these annotations. With Guice that would've been something like this:
transactionalProvider: Provider[KafkaConsumer[String, String]] @Named("transactional"), with the provider being created automatically.
Vasily Sulatskov
@redvasily
I think the equivalent of that in distage is auto-factories, though I do understand they're mostly meant for assisted injection. But I can't seem to get distage to work for this case. Is there a way to have distage create factories like this:
make[() => KafkaConsumer[String, String]].named("transactional") // from....
make[() => KafkaConsumer[String, String]].named("nonTransactional") // from...
At the moment I create these factories manually, but it feels like there should be a better way to do this.
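As an aside, for readers unfamiliar with Guice: a Provider[T] is essentially a zero-argument factory. A minimal self-contained sketch (plain Scala with toy names, not the actual Guice or distage API) of the shape being asked for:

```scala
// Toy sketch (plain Scala, hypothetical names): a Provider[T] is morally
// just a () => T, i.e. a zero-argument factory for T.
object ProviderSketch {
  trait Provider[T] { def get(): T }

  final case class Consumer(kind: String)

  // What "automatic provider creation" would give you: a provider wired
  // to one specific named binding, constructible on demand.
  val transactionalProvider: Provider[Consumer] =
    () => Consumer("transactional")
}
```

Calling `ProviderSketch.transactionalProvider.get()` builds a fresh instance each time, which is the behaviour being requested for the named KafkaConsumer bindings.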
Kai
@neko-kai

@redvasily
Aha, I get it now. Distage auto-factories unfortunately aren't equivalent to Guice providers, since they always use the default class constructor, not the constructor specified in DI. We actually ended up never using auto-factories ourselves, so we didn't think of this case.
There’s no built-in way to summon the known constructor of something, except by manually querying the structure to find it there:

class MyClass(thisGraph: LocatorRef) {
  val kafkaConstructor = thisGraph.get.plan.collect {
    // only IFF kafka is bound with a lambda as above
    case CallProvider(key, func, _) if key == DIKey.get[KafkaConsumer[String, String]].named("transactional") =>
      ProviderMagnet(func.provider)
  }.steps.head

  val newConsumer = thisGraph.get.run(kafkaConstructor)
}

You may open a GitHub issue or PR for this functionality.

In real code I'd just make a manual factory for this case:

class KafkaFactory(
  propTransactional: Map[String, AnyRef] @Id("transactional"),
  propNonTransactional: Map[String, AnyRef] @Id("nonTransactional"),
) {
  def transactional: KafkaConsumer[String, String] = mkConsumer(propTransactional)
  def nonTransactional: KafkaConsumer[String, String] = mkConsumer(propNonTransactional)

  private def mkConsumer(props: Map[String, AnyRef]): KafkaConsumer[String, String] =
    new KafkaConsumer[String, String](props.asJava)
}
And reuse it in global bindings if I need to:
make[KafkaConsumer[String, String]].named("transactional").from((_: KafkaFactory).transactional)
make[() => KafkaConsumer[String, String]].named("transactional").from((k: KafkaFactory) => () => k.transactional)
Vasily Sulatskov
@redvasily
@neko-kai Thanks for the explanation. I guess not having providers is not that big of a deal, as adding a provider manually when necessary is not that hard. However, I'm just curious: would automatic provider creation be possible from the information already provided to the plan (the CallProvider case, or maybe a constructor could work as well)?
Kai
@neko-kai
@redvasily Yeah, it is. That's kind of what PlanInterpreter does, except it just executes the constructors instead of making them available. I think it would be best to modify PlanInterpreter, but you may also do it with a hook or a separate pass. For example, you may write an ImportStrategy that, for all keys like () => T, fills them with a function that calls WiringExecutor#execute on the operation of T, then construct a new Injector with a BootstrapModule containing your ImportStrategy.
Miguel Silvestre
@msilvestre
Hi! I'm importing logstage to my project. How can I add a FileSink with rotation? Is there any example on that? Thank you
Miguel Silvestre
@msilvestre
FileSink is abstract so it can't be instantiated. Do I need to implement my own?
Kai
@neko-kai
@msilvestre It can be instantiated with new FileSink { … }; the only method that requires an implementation is def recoverOnFail(e: String): Unit. However, the FileSink code currently has no maintainer and needs to be rewritten at some point, so if there's an option to use ConsoleSink I'd do that.
Miguel Silvestre
@msilvestre
Thank you @neko-kai. How can I use ConsoleSink to write to file with file rotation?
Kai
@neko-kai
@msilvestre You can't, unless there's an option to piggyback on your environment to do it for you (e.g. if you're running under k8s/docker). If there's not, then FileSink should work.
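To make the "piggyback on your environment" option concrete: if the app logs to stdout via ConsoleSink, Docker's own logging driver can handle file rotation. A sketch using standard docker flags (the image name is a placeholder):

```shell
# Let Docker rotate logs for an app that writes to stdout/stderr.
# "myapp:latest" is a placeholder image name.
docker run \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  myapp:latest

# Inspect the (rotated) logs later:
docker logs <container-id>
```

With this setup the application never touches the filesystem itself; rotation and retention are the runtime's responsibility.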
Miguel Silvestre
@msilvestre
I'll use docker (which is also new to me). Ok, so I'm just going to log to the console. For local it will be text, and JSON for staging/dev/production.
Miguel Silvestre
@msilvestre
Is there a way to disable logging when running scalatest?
Kai
@neko-kai
@msilvestre You mean in distage-testkit-scalatest? You can override the logger in your test class and bump the log level for startup messages:
override protected def logger: IzLogger = IzLogger.NullLogger
override protected def bootstrapLogLevel: Log.Level = Log.Level.Crit
Miguel Silvestre
@msilvestre
Thank you @neko-kai
When I start my app inside the scala console I can't see any log output in the console
Why?
Is there some missing configuration?
Kai
@neko-kai
@msilvestre Does this happen with sbt runMain or only in scala REPL?
Miguel Silvestre
@msilvestre
@neko-kai inside sbt console (using runMain) it works
Kai
@neko-kai
@msilvestre This is very odd. Are you using slf4j and the logstage-to-slf4j adapter? That may explain it; you may have differences between the app and sbt classpaths. Otherwise, can you look with a debugger into what's happening, or possibly even make a reproduction?
Miguel Silvestre
@msilvestre
@neko-kai I'm not getting it, what should I debug? When I start a scala console and then start my application by doing package.App.main(Array()), I don't see the logs, but when I start the app using IntelliJ run or debug, I can see them. Regarding slf4j, I'm not importing it into the project directly. However, hadoop-common is, could that be it?
Actually it's hadoop, kafka-streams and scalatra that are importing slf4j
Kai
@neko-kai

@msilvestre I would put breakpoints on IzLogger methods, like log, info, .acceptable, etc., or just put a breakpoint near where logging happens and step forward to see where exactly your logs are going. Also, slf4j prints the backend it chose at startup, so you may try to carefully examine what's printed out.

but I start app using intellij run or debug I can see it.

IntelliJ prints the shell command & classpath it uses to run the app; you may check the first line in the Run window. If it's not there or doesn't contain the classpath, check Edit Configurations -> Shorten command line options.

Also, you may be missing the slf4j setup action (https://izumi.7mind.io/latest/release/doc/logstage/index.html#slf4j-router):
StaticLogRouter.instance.setup(myLogger.router)
Miguel Silvestre
@msilvestre
Just found the culprit. I'm setting the log level based on what I have in the application.conf file. I have one in test/resources and another in src/resources. For some reason, when I run the scala console and start the app from there, it's test/resources/application.conf that gets loaded, and in that configuration I have crit level. That's why I wasn't seeing any logs. :-/
Thank you for your support.
Kai
@neko-kai
That explains it. No bother!
vonchav
@voonchav_gitlab
Is there going to be an update soon to bump ZIO to RC18?
vonchav
@voonchav_gitlab
Also, I don't think logstage-config has 0.10.2-M8 on Maven. I only see 0.9.17. sbt fails to download 0.10.2-M8 too.
vonchav
@voonchav_gitlab
Getting this exception when trying to initialize with withFiberId. I'm using ZIO RC18. I think it's because of the binary incompatibility.
Exception in thread "main" java.lang.NoSuchMethodError: zio.ZIO$.succeed(Ljava/lang/Object;)Lzio/ZIO;
  at izumi.functional.bio.impl.BIOZio.pure(BIOZio.scala:20)
  at izumi.functional.bio.impl.BIOZio.pure(BIOZio.scala:17)
  at izumi.functional.bio.package$BIOApplicative.$init$(package.scala:83)
  at izumi.functional.bio.impl.BIOZio.<init>(BIOZio.scala:17)
  at izumi.functional.bio.impl.BIOZio$.<init>(BIOZio.scala:15)
  at izumi.functional.bio.impl.BIOZio$.<clinit>(BIOZio.scala:15)
  at logstage.LogstageZIO$$anon$1.<init>(LogstageZIO.scala:13)
  at logstage.LogstageZIO$.withFiberId(LogstageZIO.scala:13)
Kai
@neko-kai
@voonchav_gitlab There’s now no logstage-config artifact, so that’s correct. There will be a release for ZIO RC18-2 once it’s out, which should be about today-tomorrow (https://github.com/zio/zio/releases/tag/untagged-9c897cd6b002dd43fa73)
There are some issues in 18-1 that can cause deadlocks if you’re using cats, so I’d wait for 18-2 anyway
vonchav
@voonchav_gitlab
Got it. I'm waiting on RC18-2 too :) Thanks @neko-kai
Kai
@neko-kai
@voonchav_gitlab If you're looking for an alternative to logstage-config, there's configuration from HOCON in distage-framework. It's also a good first issue to extract it out of there into a separate module: 7mind/izumi#868
vonchav
@voonchav_gitlab
Thanks, I will check it out.
vonchav
@voonchav_gitlab
@neko-kai Is the update (using zio rc18-2) coming soon? :)
Kai
@neko-kai
@voonchav_gitlab I think tomorrow. I’m going to finish a macro to support zio.Has today, add the docker bugfixes on milestone https://github.com/7mind/izumi/milestone/11 and then ship
vonchav
@voonchav_gitlab
no rush. was just wondering. really want to switch to logstage :)
thanks!!!
Kai
@neko-kai
@voonchav_gitlab got delayed, but it’s happening soon: 7mind/izumi#960 ...
vonchav
@voonchav_gitlab
thanks for the update. much appreciated.
vonchav
@voonchav_gitlab
Thanks Kai. I just upgraded to 0.10.2. Now zioLogger works!
Kai
@neko-kai
:+1:
vonchav
@voonchav_gitlab
Hi @neko-kai, is there a doc/md that describes how to customize the log format, similar to a logback pattern like %date{ISO8601} [%thread] %-5level %logger{72} - %msg%n%ex?
Kai
@neko-kai
@voonchav_gitlab You need to create your own StringRenderingPolicy in code; see izumi.logstage.sink.ConsoleSink for an example. There are two bundled variants: ColoredConsoleSink (the default) and SimpleConsoleSink for terminals without color.
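To illustrate the idea only (this is plain Scala with toy names, not the actual StringRenderingPolicy API), a rendering policy boils down to a function from a log event's fields to one formatted line, much like a logback pattern:

```scala
// Toy sketch (not the logstage API): a rendering policy is conceptually a
// function from log-event fields to a formatted line, analogous to
// logback's "%date [%thread] %-5level %logger - %msg".
object ToyRenderingPolicy {
  def render(date: String, thread: String, level: String, logger: String, msg: String): String =
    f"$date [$thread] $level%-5s $logger - $msg"
}
```

The real StringRenderingPolicy works on logstage's structured log entries rather than raw strings, but the shape is the same: you control the layout entirely in code instead of via a pattern string.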
vonchav
@voonchav_gitlab
Got it, I'll look into it. I'm more interested in the format than the colors; the colors are fine by me :)
Kai
@neko-kai
Yeah, the policy specifies both (colours are just special characters added to the text, interpreted by the terminal)