Fabio Labella
@SystemFw
actually it's quicker to just show you
val latch = new OneShotLatch
var ref: Either[Throwable, A] = null

ioa unsafeRunAsync { a =>
  // Reading from `ref` happens after the block on `latch` is
  // over, there's a happens-before relationship, so no extra
  // synchronization is needed for visibility
  ref = a
  latch.releaseShared(1)
  ()
}
so in this case you have to rely on the native platform "waiting" mechanism
for the JVM, this is thread blocking
for JS, it's unsupported and you just throw UnsupportedOperationException
below those lines, you try and acquire that latch, which blocks the thread until it gets released
at which point you return the contents of ref
Fabio Labella
@SystemFw
so basically unsafeRunSync is "create shared state, create waiting condition, run asynchronously (sets state and releases waiting condition), wait on condition, return shared state"
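That recipe can be sketched with plain JVM primitives. Below is a minimal, hypothetical version using java.util.concurrent.CountDownLatch in place of cats-effect's internal OneShotLatch, and an AtomicReference for the shared state; runAsync here is only an illustrative stand-in for unsafeRunAsync, not the real API.

```scala
import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicReference

object LatchSketch {
  // Stand-in for IO#unsafeRunAsync: runs the computation on another
  // thread and invokes the callback with its result.
  def runAsync[A](compute: => A)(cb: Either[Throwable, A] => Unit): Unit = {
    val t = new Thread(() =>
      cb(try Right(compute) catch { case e: Throwable => Left(e) }))
    t.start()
  }

  // "create shared state, create waiting condition, run asynchronously
  // (sets state and releases waiting condition), wait on condition,
  // return shared state"
  def runSync[A](compute: => A): A = {
    val ref   = new AtomicReference[Either[Throwable, A]]() // shared state
    val latch = new CountDownLatch(1)                       // waiting condition

    runAsync(compute) { result =>
      ref.set(result)   // set the shared state...
      latch.countDown() // ...then release the waiting condition
    }

    latch.await() // block this thread until the latch is released
    // the latch gives a happens-before edge, so the read of `ref` is safe
    ref.get().fold(e => throw e, identity)
  }
}
```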
does that make sense?
Julien Truffaut
@julien-truffaut
yes I think it does, thanks for the explanation. I need to read more about latches and OneShotLatch. I have seen it but I have never used it
Fabio Labella
@SystemFw
it's basically Deferred
but with thread blocking instead of fiber blocking
so if you look at toIO on ConcurrentEffect, it's the same mechanism but "one level up"
wait, actually I think that was reimplemented with async directly now. It used to be Deferred though
Julien Truffaut
@julien-truffaut
ah I was looking for OneShotLatch, didn't notice it was defined inline
so this is the code that waits for the latch to be released either indefinitely or a certain amount of time
case Duration.Inf =>
  blocking(latch.acquireSharedInterruptibly(1))
case f: FiniteDuration if f > Duration.Zero =>
  blocking(latch.tryAcquireSharedNanos(1, f.toNanos))
Fabio Labella
@SystemFw
yeah
I skipped a few layers of indirection
unsafeRunSync is unsafeRunTimed(Duration.Inf) and there are some other details
but the key idea is not affected by those
Julien Truffaut
@julien-truffaut
awesome I get it
Fabio Labella
@SystemFw
nice :)
Julien Truffaut
@julien-truffaut
thanks
Fabio Labella
@SystemFw
no worries
Julien Truffaut
@julien-truffaut
by any chance, do you have any resources you'd recommend for reading about scala.concurrent.blocking?
Fabio Labella
@SystemFw
not really, but explaining that takes a couple of sentences. Basically it's a hint to the thread pool that the code enclosed in it will block. Thread pools that are designed for a mixture of blocking and non-blocking code (basically, global) will react to the hint by creating a new thread, since they know that one is going to be blocked
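A concrete sketch of the hint in use (the `fetch` name and the sleep standing in for blocking I/O are illustrative):

```scala
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object BlockingHint {
  // Wrapping the sleep in `blocking` tells pools that understand the hint
  // (notably ExecutionContext.global) that this task will park its thread,
  // so the pool can spawn a replacement instead of starving other tasks.
  def fetch(): Int = {
    val f = Future {
      blocking {
        Thread.sleep(100) // stand-in for a blocking I/O call
        42
      }
    }
    Await.result(f, 5.seconds)
  }
}
```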
Julien Truffaut
@julien-truffaut
I see, thanks. Does it apply to a cached thread pool? What is used in Blocker?
Fabio Labella
@SystemFw
it applies to cached thread pool philosophically. I think in practice only global does something with it
Gavin Bisesi
@Daenyth
The ExecutionContext implementation needs to be explicitly aware of it
Fabio Labella
@SystemFw
for cats-effect itself, the philosophy is slightly different though
and it's about separating blocking and non-blocking code
Blocker uses a CachedThreadPool
ContextShift uses (as of recently) a FixedThreadPool
blocking is mostly about trying to make sure blocking code stays out of the way of non-blocking code, under the assumption that they share the same thread pool. Cats-effect tries to avoid them sharing the same thread pool to begin with (hence Blocker vs ContextShift)
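The separation described above can be mimicked with plain java.util.concurrent executors. A rough sketch with illustrative names, not cats-effect API: an unbounded cached pool for blocking calls (the role Blocker plays) and a fixed pool sized to the CPU count for non-blocking work (the role of ContextShift).

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object TwoPools {
  // Unbounded cached pool for blocking calls (like Blocker's pool)
  val blockingEC = ExecutionContext.fromExecutorService(
    Executors.newCachedThreadPool())
  // Fixed pool sized to the CPU count for compute (like ContextShift's pool)
  val computeEC = ExecutionContext.fromExecutorService(
    Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))

  def run(): Int =
    try {
      // the blocking call stays on the cached pool...
      val loaded = Future { Thread.sleep(50); 21 }(blockingEC)
      // ...and CPU-bound work continues on the fixed pool
      val result = loaded.map(_ * 2)(computeEC)
      Await.result(result, 5.seconds)
    } finally {
      blockingEC.shutdown()
      computeEC.shutdown()
    }
}
```

This way a thread parked on I/O never occupies one of the fixed compute threads, which is the point of keeping Blocker and ContextShift distinct.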
Julien Truffaut
@julien-truffaut
ok so if you're not using the global execution context, scala.concurrent.blocking is generally not useful
yes it makes more sense
to separate thread pools
Fabio Labella
@SystemFw
it makes more sense once you have the tools and an ecosystem that's based on non-blocking code
basically everything kinda builds on top of each other
Julien Truffaut
@julien-truffaut
I guess the standard library pushes you to have a single implicit thread pool
with implicit ec: ExecutionContext
Fabio Labella
@SystemFw
yeah
global is a good default choice for that philosophy
Julien Truffaut
@julien-truffaut
thanks a lot Fabio
Fabio Labella
@SystemFw
no problem :)
Julien Truffaut
@julien-truffaut
I heard several people recommend having 3 thread pools, one for blocking, one for work and one for dispatch: https://impurepics.com/posts/2018-04-21-thread-pools-basics.html
Blocker provides the blocking pool
What's the recommended pattern to use the dispatch / work pools with cats-effect?
e.g. IOApp provides a ContextShift[IO], does it shift from dispatch to work pool?
Fabio Labella
@SystemFw
@julien-truffaut the dispatch is Timer, and it already shifts to ContextShift for work