davidnadeau
@davidnadeau
also ioa.start.void will still get the timeout exception.
Gabriel Volpe
@gvolpe
You would only get the timeout if the timeout happens before calling ioa.start.void
If you don't want that, then you'd need to do things differently
davidnadeau
@davidnadeau
it will time out even after calling ioa.start.void
Gabriel Volpe
@gvolpe
that shouldn't happen, are you sure?
davidnadeau
@davidnadeau
  import scala.concurrent.duration._   // for the .millis syntax
  import cats.implicits._              // for >>

  // assumes this lives in an IOApp, which provides the implicit Timer and ContextShift
  def firenforgetExample: IO[Unit] =
    (for {
      _ <- IO(println("start"))
      _ <- IO.shift >> longTask.start.void   // fork longTask and forget about it
      _ <- IO(println("end"))
      _ <- IO.never
    } yield ()).timeout(500.millis)

  def longTask: IO[Unit] =
    for {
      _ <- IO(println("first part"))
      _ <- IO.sleep(600.millis)
      _ <- IO(println("second part"))
    } yield ()
never prints second part
start
first part
end
Gabriel Volpe
@gvolpe
are you running this program in intellij idea by any chance?
davidnadeau
@davidnadeau
ya i am
Gabriel Volpe
@gvolpe
run it from the terminal using sbt and see if the behavior changes, normally that's just IDEA being wrong
davidnadeau
@davidnadeau
oh, it works from sbt.
ok great, this is the behaviour i wanted. Thanks for the help.
Gabriel Volpe
@gvolpe
No problems :)
Ivan Aristov
@Ssstlis
I reran this example a few times and it doesn't always do println("second part"). Also I see that @davidnadeau is using shifting, but you shift to the default run-loop ContextShift, so that does nothing, am I right?
Gabriel Volpe
@gvolpe
That's due to the main thread being killed when the main program finishes. You need to keep the main thread alive in order to keep other fibers running in the background, otherwise they get killed too.
And yes, that IO.shift doesn't do anything. start already introduces an async boundary IIUC.
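For example, a minimal sketch (names made up, assuming cats-effect 2.x) where joining the started fiber keeps the program alive long enough for the background work to finish:

  import scala.concurrent.duration._
  import cats.effect.{ExitCode, IO, IOApp}

  object KeepAliveExample extends IOApp {

    def longTask: IO[Unit] =
      for {
        _ <- IO(println("first part"))
        _ <- IO.sleep(600.millis)
        _ <- IO(println("second part"))
      } yield ()

    def run(args: List[String]): IO[ExitCode] =
      for {
        fiber <- longTask.start       // forked, runs concurrently
        _     <- IO(println("end"))   // prints right away, we didn't wait
        _     <- fiber.join           // keep the main fiber (and the app) alive until longTask finishes
      } yield ExitCode.Success        // "second part" is now always printed
  }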
@Ssstlis
Ivan Aristov
@Ssstlis
Thanks :)
Kai
@neko-kai

@djspiewak

it's interesting that it's actually slower than the new Throwable().getStackTrace() approach and less flexible

The only way to get new Throwable in the correct place is to modify every IO constructor – if you do that, it’s impossible to turn tracing on/off non-globally. ZIO supports _.traced and _.untraced regions for lexically-scoped control of tracing, which lets you omit tracing overhead for the parts that are heavy on flatMaps and low on information (fs2, Gen, etc.) or in hot spots, and gain back the 2x performance from before the tracing patch

Kai
@neko-kai
That was the main reasoning for choosing bytecode parsing – users should never be faced with a dilemma of “do I run with tracing to debug my problem or run at high performance?”, because IMHO the worst problems are the ones you don’t expect and can’t reproduce – tracing should always be on to provide the most info in a bad situation, and the odd monadic hotspot can be wrapped ad hoc.
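For instance, a rough sketch (assuming ZIO 1.x; the hot loop itself is made up) of wrapping such a hotspot:

  import zio._

  // tracing stays on (the default) for the rest of the program, but is
  // switched off lexically for this flatMap-heavy recursion
  def sumTo(n: Int): UIO[Int] = {
    def loop(i: Int, acc: Int): UIO[Int] =
      if (i > n) UIO.succeed(acc)
      else UIO.succeed(acc + i).flatMap(loop(i + 1, _))

    loop(1, 0).untraced
  }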
Daniel Spiewak
@djspiewak

@neko-kai Lexical control definitely becomes more complicated with a stack trace solution, but it isn’t impossible by any means. Both solutions suffer from instrumenting call sites which evaluate prior to the run loop, though the cost of such instrumentation is fairly low because it’s bounded by the size of the static program. Dynamic loops are the only place where performance has to be really, really tight in the disabled case, and those are the easiest to disable from the run loop.
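(For illustration only – a rough sketch of the general new Throwable().getStackTrace() idea; the names are hypothetical, not from any actual implementation:)

  // capture a single call-site frame eagerly via an exception
  final case class TraceFrame(clazz: String, method: String, file: String, line: Int)

  def captureFrame(): TraceFrame = {
    val stack = new Throwable().getStackTrace
    // skip the frames belonging to the capture machinery itself; index 2 is a
    // rough guess at the user call site, a real implementation would filter by package
    val e = if (stack.length > 2) stack(2) else stack.last
    TraceFrame(e.getClassName, e.getMethodName, e.getFileName, e.getLineNumber)
  }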

Really, the problem with bytecode instrumentation is it doesn’t work well at all with polymorphism. ZIO’s ecosystem is pretty monomorphic, so that limitation is felt less frequently, but Cats Effect tends to be used in polymorphic contexts essentially by default, where instrumentation based tracing doesn’t provide much useful information at all. Single frame (fast) traces are also limited, but at least you have the option then of multi frame (slow) tracing when you need it.

Not saying either design is wrong, really. Just different trade offs and challenges, sparked by very different ecosystems and standard usage patterns.

Also agreed that tracing should always be on by default for the same reason we include debug symbols in production builds, and the runtime costs need to be negligible.

Kai
@neko-kai

@djspiewak

Both solutions suffer from instrumenting call sites which evaluate prior to the run loop

Well, all ZIO tracing happens directly in the run loop – it can’t happen prior, since the data is just not created yet.

Really, the problem with bytecode instrumentation is it doesn’t work well at all with polymorphism. ZIO’s ecosystem is pretty monomorphic
Not saying either design is wrong, really. Just different trade offs and challenges, sparked by very different ecosystems and standard usage patterns.

I’ve made ZIO tracing specifically so that it would work well with tagless final – rejecting e.g. static instrumentation with macros because that would never work with TF. Now, it doesn’t work well with monad transformers, that’s unfortunately correct, but I disagree that the default even in the CE ecosystem is to use monad transformers – totally IME ahead: I’ve seen most people stick to F=IO in business logic or use Monix Task directly, and the only person I’ve seen use cats-mtl used it with Ref-based instances, which tracing is fine with, unlike transformers.

I sketched the kind of trace you’d get from an exception thrown in place of a constructor…

import cats.data.OptionT
import zio._
import zio.interop.catz._
import zio.syntax._

object TestTraceOfOptionT extends zio.App {

  def x(z: Any => Task[Int]): OptionT[Task, Int] = OptionT.liftF[Task, Int](1.succeed).flatMap(_ => y(z))
  def y(z: Any => Task[Int]): OptionT[Task, Int] = OptionT[Task, Int](ZIO.some(()).flatMap {
    case Some(value) => z(value).map(Some(_))
    case None => ZIO.none
  })
  def z: Any => Task[Int] = _ => throw new RuntimeException

  override def run(args: List[String]): UIO[Int] = {
    x(z).getOrElse(0).orDie
  }
}
java.lang.RuntimeException
    at TestTraceOfOptionT$.$anonfun$z$1(TestTraceOfOptionT.scala:13)
    at TestTraceOfOptionT$.$anonfun$y$1(TestTraceOfOptionT.scala:10)
    at zio.internal.FiberContext.evaluateNow(FiberContext.scala:272)
    at zio.internal.FiberContext.$anonfun$fork$1(FiberContext.scala:596)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

For OptionT this does look pretty relevant, but I don’t know what’ll happen with suspending transformers like Free or Stream. I do have a gut feeling that supporting every transformer will require a lot of manual work, though.

Daniel Spiewak
@djspiewak

Certainly hard coding transformer or third party library call sites is intractable. Stack frame instrumentation though can ameliorate this issue since you have the thunk class and the full runtime stack, so you have the opportunity to trace up. Also, as I mention in the gist, providing extra frame information is really helpful for when that kind of thing fails and you’re trying to track down an error.

To be clear, I don’t think there’s anything wrong with ZIO’s tracing. It gives information that is very useful in a monomorphic or bijective polymorphic context (like your tagless final example). The problem is it is defeated immediately by something like Stream or even some library usage (tracing through an http4s app is varyingly useful, for example), and it doesn’t work at all with transformers. Again, that’s a fair tradeoff, particularly given the ZIO ecosystem’s focus on a single master effect without induction. I just think it’s possible to improve on that, albeit by accepting a totally different tradeoff (greater complexity in implementing lexical configuration), and given how cats IO is often used, it seems worthwhile to at least explore that direction.

Either way, all of these approaches have some unfortunate caveats. It’s possible to build stellar examples of the strengths of both, and also of the weaknesses of both. I’m not sure there’s ever going to be a silver bullet.

Daniel Spiewak
@djspiewak
Oh I should clarify that the OptionT example trace does look quite reasonable. Lazy transformers definitely make things worse, as do functions inherited via abstractions (which happens a lot with cats IO). I guess I should be careful when I say “doesn’t work at all with transformers”, because it’s not really that cut and dried (as your example shows).
Rohde Fischer
@rfftrifork

I'm trying to understand a bit more about cats-effect, and thus did some experiments by modifying the example here: https://typelevel.org/cats-effect/concurrency/basics.html#thread-scheduling

what I did was make it have only one ExecutionContext and ContextShift, plus a simple producer/consumer using an MVar. However, here I stumbled upon a (to me) puzzling behaviour. In my first example, using an explicit ExecutionContext and ContextShift, it works as I expect: it keeps running till I terminate it: https://pastebin.com/EdB2k1Cq

in my second example, though, I trimmed my application down to rely on the defaults in IOApp, but there my program terminates almost immediately: https://pastebin.com/QuC0Putm

why does this happen? What should I read/see/etc to understand this behavior better?

Fabio Labella
@SystemFw
do you know about daemon threads?
(a JVM concept, not a cats-effect one)
Rohde Fischer
@rfftrifork
@SystemFw not really :/ I'm halfway guessing that's the cause from the way you ask
Fabio Labella
@SystemFw
yeah
depending on your point of view, it's debatable which one of the two examples is puzzling
think about it: start means that you return control immediately, and one fiber keeps going while another is spawned asynchronously
so you have producer.start >> consumer.start >> IO(println("when does this happen"))
the third statement should execute pretty much immediately
which means that the whole application should shut down immediately
does that viewpoint make sense? (as in, do you see how that's coherent, if not correct)
Rohde Fischer
@rfftrifork
because Fibers aren't threads?
Fabio Labella
@SystemFw
no, you can have the same concept with real threads
Rohde Fischer
@rfftrifork
where you start two threads and it still terminates?
Fabio Labella
@SystemFw
Thread1.start(); Thread2.start(); println("when does this happen")
wait, we're getting to what actually happens
but do you see why that semantics is coherent?
start means "spawn something and keep going without waiting for it"
that's the whole point of it actually
so if you start, start, end
end will happen without waiting for the two started things to finish
Rohde Fischer
@rfftrifork
ah I see the coherency now, yes
Fabio Labella
@SystemFw
right, another viewpoint is
if some things have started, I don't want to shut down until they are done
(this is the behaviour you were expecting)
that also makes sense right?
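as a tiny sketch of the two viewpoints (the producer/consumer bodies here are made up, assuming cats-effect 2's IOApp): without the joins the last println runs and the app exits right away, with them the program waits for both fibers:

  import cats.effect.{ExitCode, IO, IOApp}

  object StartAndWait extends IOApp {
    // stand-ins for the producer/consumer from the pastebin examples
    val producer: IO[Unit] = IO(println("producing"))
    val consumer: IO[Unit] = IO(println("consuming"))

    def run(args: List[String]): IO[ExitCode] =
      for {
        p <- producer.start
        c <- consumer.start
        _ <- IO(println("when does this happen")) // immediately: start doesn't wait
        _ <- p.join                               // "don't shut down until they are done"
        _ <- c.join
      } yield ExitCode.Success
  }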