Fabio Labella
@julien-truffaut the async node contains a function that takes a callback
the only thing you can do with a function is to call it
so what you do is pass a callback to it
if it's the top one, the callback is passed to you, which is why runAsync takes one
like all forms of CPS, it's a bit mind-bendy
does that make any sense? (feel free to say no, maybe we can work through an example)
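A minimal sketch of the idea in plain Scala (the names `Callback`, `AsyncNode`, and this `runAsync` are illustrative, not cats-effect's actual internals): an async node is just a function that takes a callback, so the only way to run it is to call it with one.

```scala
// An "async node" is a function that takes a callback (sketch only).
type Callback[A]  = Either[Throwable, A] => Unit
type AsyncNode[A] = Callback[A] => Unit

// A node that completes immediately with 42.
val node: AsyncNode[Int] = cb => cb(Right(42))

// The run loop can only call the node, passing a callback in; with
// runAsync at the top, that callback is the one the caller supplied.
def runAsync[A](n: AsyncNode[A])(cb: Callback[A]): Unit = n(cb)

var result: Either[Throwable, Int] = null
runAsync(node)(r => result = r)
// result is now Right(42)
```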
Julien Truffaut
yes the runAsync case makes sense
I am less sure what happens for the runSync
Fabio Labella
you still pass a callback
"you" meaning the runloop
actually it's quicker to just show you
 val latch = new OneShotLatch
 var ref: Either[Throwable, A] = null

 ioa unsafeRunAsync { a =>
   // Reading from `ref` happens after the block on `latch` is
   // over, there's a happens-before relationship, so no extra
   // synchronization is needed for visibility
   ref = a
   latch.releaseShared(1)
 }
so in this case you have to rely on the native platform "waiting" mechanism
for the JVM, this is thread blocking
for JS, it's unsupported and you just throw UnsupportedOperationException
below those lines, you try and acquire that latch, which blocks the thread until it gets released
at which point you return the contents of ref
Fabio Labella
so basically unsafeRunSync is "create shared state, create waiting condition, run asynchronously (sets state and releases waiting condition), wait on condition, return shared state"
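That recipe can be sketched with a plain `CountDownLatch` standing in for cats-effect's internal `OneShotLatch` (the helper name here is hypothetical):

```scala
import java.util.concurrent.CountDownLatch

// Sketch of "create shared state, create waiting condition, run
// asynchronously, wait on condition, return shared state".
def unsafeRunSyncSketch[A](runAsync: (Either[Throwable, A] => Unit) => Unit): A = {
  var ref: Either[Throwable, A] = null   // shared state
  val latch = new CountDownLatch(1)      // waiting condition

  runAsync { r =>       // run asynchronously:
    ref = r             // ...set the shared state
    latch.countDown()   // ...release the waiting condition
  }

  latch.await()         // wait on the condition (blocks the thread)
  ref match {           // return the shared state
    case Right(a) => a
    case Left(e)  => throw e
  }
}

// e.g. with the result arriving from another thread:
val out = unsafeRunSyncSketch[Int] { cb =>
  new Thread(() => cb(Right(1))).start()
}
```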
makes any sense?
Julien Truffaut
yes I think it does, thanks for the explanation. I need to read more about latches and OneShotLatch. I have seen it but I have never used it
Fabio Labella
it's basically Deferred
but with thread blocking instead of fiber blocking
so if you look at toIO on ConcurrentEffect, it's the same mechanism but "one level up"
wait, actually I think that was reimplemented with async directly now. It used to be Deferred though
Julien Truffaut
ah I was looking for OneShotLatch, didn't notice it was defined inline
so this is the code that waits for the latch to be released, either indefinitely or for a certain amount of time

 limit match {
   case Duration.Inf =>
     blocking(latch.acquireSharedInterruptibly(1))
   case f: FiniteDuration if f > Duration.Zero =>
     blocking(latch.tryAcquireSharedNanos(1, f.toNanos))
 }
Fabio Labella
I skipped a few layers of indirection
unsafeRunSync is unsafeRunTimed(Duration.Inf) and there are some other details
but the key idea is not affected by those
Julien Truffaut
awesome I get it
Fabio Labella
nice :)
Julien Truffaut
Fabio Labella
no worries
Julien Truffaut
by any chance, do you have any resources to recommend for reading about scala.concurrent.blocking?
Fabio Labella
not really, but explaining that takes a couple of sentences. Basically it's a hint to the thread pool that the code enclosed in it will block. Thread pools that are designed for a mixture of blocking and non-blocking code (basically, global) will react to the hint by creating a new thread, since they know the current one is going to be blocked
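A small self-contained example of that hint in action, using the standard library's global pool (the sleep just stands in for some blocking call):

```scala
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Wrapping the sleep in blocking { ... } tells the pool this task will
// block, so it can spin up an extra thread instead of losing a worker.
val task = Future {
  blocking {
    Thread.sleep(100)  // stands in for a real blocking call
    "done"
  }
}

val res = Await.result(task, 2.seconds)
```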
Julien Truffaut
I see, thanks. Does it apply to cached thread pool? What is used in Blocker?
Fabio Labella
it applies to cached thread pool philosophically. I think in practice only global does something with it
Gavin Bisesi
The ExecutionContext implementation needs to be explicitly aware of it
Fabio Labella
for cats-effect itself, the philosophy is slightly different though
and it's about separating blocking and non-blocking code
Blocker uses a CachedThreadPool
ContextShift uses (as of recently) a FixedThreadPool
blocking is mostly about trying to make sure blocking code stays out of the way of non-blocking code, under the assumption that they share the same thread pool. Cats-effect tries to avoid them sharing the same thread pool to begin with (hence Blocker vs ContextShift)
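The separation can be sketched with plain executors (this is the pattern Blocker and ContextShift wrap in cats-effect 2, not their actual implementation):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// ContextShift-style: a small fixed pool for non-blocking, CPU-bound work.
val compute = ExecutionContext.fromExecutorService(
  Executors.newFixedThreadPool(2))

// Blocker-style: an unbounded cached pool reserved for blocking calls,
// so a blocked thread never starves the compute pool.
val blockingPool = ExecutionContext.fromExecutorService(
  Executors.newCachedThreadPool())

// Non-blocking work runs on compute; blocking calls would go to blockingPool.
val r = Await.result(Future(1 + 1)(compute), 2.seconds)

compute.shutdown()
blockingPool.shutdown()
```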
Julien Truffaut
ok so if you set aside the global execution context, scala.concurrent.blocking is generally not useful
yes it makes more sense
to separate thread pools
Fabio Labella
it makes more sense once you have the tools and an ecosystem that's based on non-blocking code
basically everything kinda builds on top of each other
Julien Truffaut
I guess the standard library pushes you to have a single implicit thread pool