@neko-kai Lexical control definitely becomes more complicated with a stack trace solution, but it isn’t impossible by any means. Both solutions suffer from instrumenting call sites which evaluate prior to the run loop, though the costs of such instrumentation are fairly low because they’re bounded by the size of the static program. Dynamic loops are the only place where performance has to be really, really tight in the disabled case, and those are the easiest to disable from the run loop.
Really, the problem with bytecode instrumentation is that it doesn’t work well at all with polymorphism. ZIO’s ecosystem is pretty monomorphic, so that limitation is felt less frequently, but Cats Effect tends to be used in polymorphic contexts essentially by default, where instrumentation-based tracing doesn’t provide much useful information at all. Single-frame (fast) traces are also limited, but at least you then have the option of multi-frame (slow) tracing when you need it.
Not saying either design is wrong, really. Just different trade-offs and challenges, sparked by very different ecosystems and standard usage patterns.
Also agreed that tracing should always be on by default for the same reason we include debug symbols in production builds, and the runtime costs need to be negligible.
@djspiewak
> Both solutions suffer from instrumenting call sites which evaluate prior to the run loop
Well, all ZIO tracing happens directly in the run loop – it can’t happen prior, since the data just isn’t created yet.
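For intuition, here’s a hedged sketch of that idea – not ZIO’s actual internals, just the shape of it: the interpreter records the source location of each continuation it evaluates into a bounded ring buffer, so traces are assembled inside the run loop from data that only exists once the fiber is running.

```scala
// Hedged sketch only (not ZIO's real FiberContext): the run loop pushes the
// source location of each instruction it evaluates into a fixed-size ring
// buffer, which can later be snapshotted into a trace.
final class TraceBuffer(capacity: Int) {
  private val buf = new Array[String](capacity)
  private var n   = 0

  def record(location: String): Unit = {
    buf(n % capacity) = location
    n += 1
  }

  // Oldest-to-newest view of what has been recorded so far.
  def snapshot: List[String] =
    if (n <= capacity) buf.take(n).toList
    else (buf.drop(n % capacity) ++ buf.take(n % capacity)).toList
}
```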
> Really, the problem with bytecode instrumentation is it doesn’t work well at all with polymorphism. ZIO’s ecosystem is pretty monomorphic

> Not saying either design is wrong, really. Just different trade-offs and challenges, sparked by very different ecosystems and standard usage patterns.
I’ve made ZIO tracing specifically so that it would work well with tagless final – rejecting, e.g., static instrumentation with macros, because that would never work with TF. Now, it doesn’t work well with monad transformers, that’s unfortunately correct, but I disagree that the default even in the CE ecosystem is to use monad transformers. Total IME ahead: I’ve seen most people stick to F=IO in business logic or use monix Task directly, and the only person I’ve seen use cats-mtl used it with Ref instances – which tracing is fine with – not transformers.
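To make the tagless-final point concrete, a minimal sketch (the `KvStore` service and its methods are hypothetical, not from this thread): the logic is polymorphic in F, but at the edge F is fixed to a plain effect like Task rather than a transformer stack, so the run loop still sees ordinary ZIO values it can trace.

```scala
import zio.Task

object TfSketch {
  // Hypothetical tagless-final service: polymorphic in F, no transformers.
  trait KvStore[F[_]] {
    def get(key: String): F[Option[String]]
  }

  // Business logic stays abstract in F...
  def lookup[F[_]](kv: KvStore[F], key: String): F[Option[String]] =
    kv.get(key)

  // ...and at the end of the world F is just Task, so the values flowing
  // through the interpreter are plain ZIO values that tracing can see.
  def wire(kv: KvStore[Task]): Task[Option[String]] =
    lookup(kv, "answer")
}
```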
I sketched the kind of traces you’d get from an exception in place of a constructor…
```scala
import cats.data.OptionT
import zio._
import zio.interop.catz._
import zio.syntax._

object TestTraceOfOptionT extends zio.App {
  def x(z: Any => Task[Int]): OptionT[Task, Int] =
    OptionT.liftF[Task, Int](1.succeed).flatMap(_ => y(z))

  def y(z: Any => Task[Int]): OptionT[Task, Int] =
    OptionT[Task, Int](ZIO.some(()).flatMap {
      case Some(value) => z(value).map(Some(_))
      case None        => ZIO.none
    })

  def z: Any => Task[Int] = _ => throw new RuntimeException

  override def run(args: List[String]): UIO[Int] =
    x(z).getOrElse(0).orDie
}
```
```
java.lang.RuntimeException
    at TestTraceOfOptionT$.$anonfun$z$1(TestTraceOfOptionT.scala:13)
    at TestTraceOfOptionT$.$anonfun$y$1(TestTraceOfOptionT.scala:10)
    at zio.internal.FiberContext.evaluateNow(FiberContext.scala:272)
    at zio.internal.FiberContext.$anonfun$fork$1(FiberContext.scala:596)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
```
For OptionT this does look pretty relevant, but I don’t know what’ll happen with suspending transformers like Free or Stream. I do have a gut feeling that supporting every transformer will require a lot of manual work, though.
Certainly hard-coding transformer or third-party library call sites is intractable. Stack frame instrumentation, though, can ameliorate this issue: since you have the thunk class and the full runtime stack, you have the opportunity to trace up. Also, as I mention in the gist, providing extra frame information is really helpful when that kind of thing fails and you’re trying to track down an error.
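Roughly what “tracing up” could look like, as a hedged sketch rather than either library’s actual implementation (the helper name and the matching heuristic are mine): the run loop knows the class of the thunk it’s executing, and the JVM stack is available, so you can filter the stack down to frames belonging to the code that defined the thunk.

```scala
// Hedged sketch only: given a thunk (e.g. the function passed to flatMap),
// recover candidate user frames by matching the thunk's enclosing class name
// against the class names on the current JVM stack.
def traceUp(thunk: AnyRef): List[StackTraceElement] = {
  // "com.example.Foo$$Lambda$42/0x..." -> "com.example.Foo"
  val owner = thunk.getClass.getName.takeWhile(_ != '$')
  Thread.currentThread().getStackTrace.toList
    .filter(_.getClassName.startsWith(owner)) // heuristic: frames from the defining class
}
```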
To be clear, I don’t think there’s anything wrong with ZIO’s tracing. It gives information that is very useful in a monomorphic or bijectively polymorphic context (like your tagless final example). The problem is that it’s defeated immediately by something like Stream, or even by some library usage (tracing through an http4s app is of varying usefulness, for example), and it doesn’t work at all with transformers. Again, that’s a fair tradeoff, particularly given the ZIO ecosystem’s focus on a single master effect without induction. I just think it’s possible to improve on that, albeit by accepting a totally different tradeoff (greater complexity in implementing lexical configuration), and given how cats IO is often used, it seems worthwhile to at least explore that direction.
Either way, all of these approaches have some unfortunate caveats. It’s possible to build stellar examples of the strengths of both, and also of the weaknesses of both. I’m not sure there’s ever going to be a silver bullet.
I'm trying to understand a bit more about cats-effect, so I did some experiments by modifying the example here: https://typelevel.org/cats-effect/concurrency/basics.html#thread-scheduling
What I did: make it have only one `ExecutionContext` and `ContextShift`, and a simple producer/consumer using an `MVar`. However, here I stumbled upon a (to me) puzzling behaviour. In my first example, using an explicit `ExecutionContext` and `ContextShift`, it works as I expect: it keeps running till I terminate it: https://pastebin.com/EdB2k1Cq

In my second example, though, I tried to trim down my application to rely on the defaults in `IOApp`, but here my program terminates almost immediately: https://pastebin.com/QuC0Putm

Why does this happen? What should I read/see/etc. to understand this behavior better?
`start` means that you return control immediately, and one fiber keeps going while another is spawned asynchronously:

```scala
producer.start >> consumer.start >> IO(println("when does this happen"))
```

`start` means "spawn something and keep going without waiting for it", so the end of `run` will happen without waiting for the two started things to finish. And daemon threads do not prevent the JVM from shutting down, so the whole app exits. If you want it to keep running, don't fire-and-forget the fibers: run the producer and consumer concurrently and wait for them.
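A minimal sketch of the difference under those assumptions (the producer/consumer bodies below are stand-ins, not the pastebin code): joining the started fibers makes `run` wait, so the app no longer exits before they finish.

```scala
import cats.effect.{ExitCode, IO, IOApp}
import cats.implicits._
import scala.concurrent.duration._

object StartVsJoin extends IOApp {
  // Stand-in workloads; the real ones would loop over an MVar.
  val producer: IO[Unit] = IO.sleep(1.second) >> IO(println("produced"))
  val consumer: IO[Unit] = IO.sleep(1.second) >> IO(println("consumed"))

  def run(args: List[String]): IO[ExitCode] =
    for {
      pf <- producer.start                       // returns control immediately
      cf <- consumer.start
      _  <- IO(println("when does this happen")) // prints before the fibers finish
      _  <- pf.join                              // without these joins, run completes
      _  <- cf.join                              //   and IOApp shuts everything down
    } yield ExitCode.Success
}
```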