TomasStanek
@TomasStanek
I also have an example of a deadlock with racing fs2.async.mutable.Queue.peek1, but the example is more convoluted...
Fabio Labella
@SystemFw
because you'd basically want to avoid recurring until the losing sleeper has been cancelled
TomasStanek
@TomasStanek
exactly
Alexandru Nedelcu
@alexandru
    import scala.concurrent.duration._
    // assumes an implicit Timer[IO] in scope (IO.race also needs a ContextShift[IO] in 1.0 final)
    val p = IO.pure(())
    val to = IO.sleep(1.second)
    def loop(): IO[Unit] = {
      IO.race(to, p)
        .flatMap(_ => loop())
    }

@SystemFw @TomasStanek no, this sample shouldn't leak in the current version; however, note that race is pretty resource-intensive, so in loops like this the garbage collector might not cope well.

until the losing sleeper has been cancelled

N.B. cancelling a scheduled tick is instant, but it depends on the thread-pool's implementation whether the resources are cleared immediately or the entry is just marked for deletion at a later point in time. Obviously, scheduling a lot of sleeps in a hot loop isn't such a good idea.

The current implementation is sound with regard to cancellation; it's never a good idea to block until resources have been cleared 😉 (you only need to ensure that the cancellation signal has been sent). I'm actually reluctant to push for #305, but we need it for bracket.
In other words, #305 basically pushes that responsibility to users. If users want to use heavy finalizers that take a long time and have them sequenced, then it's now their problem, and there are indeed cases where you want just that.
Alexandru Nedelcu
@alexandru

We've got our milestone plan 🙂
https://github.com/typelevel/cats-effect/milestone/6

Let's close these and release 1.0.0-RC3.

Fabio Labella
@SystemFw
Yeah, I've seen the discussion in #305
Is there any chance you could answer these when you have time? (just questions, no feature requests or anything :P)
:point_up: August 12, 2018 6:16 PM
Luka Jacobowitz
@LukaJCB
🎉
Alexandru Nedelcu
@alexandru

1) where are the yield points in cats-effect implementation of concurrency

Only in race conditions, e.g. start, race. In general these operations are very rare; in my experience, most of the time people just use chains of flatMaps, which don't require concurrency support. This is one reason I like IO's current implementation.

2) how does Monix Task implement Concurrent without requiring Timer (or something to shift)

Task requires a Scheduler in its runAsync to be provided by the environment, which makes Timer pure. This is basically the equivalent of the RTS that you want. However, I want our types to support both Monix Task and the current IO, and even the former (pre 0.10) IO; otherwise the types are not generic enough. The problem with an RTS is that it is implementation-specific, whereas Timer is polymorphic.

Is there any scenario in which you'd want a different Timer but you are not unsafeRunning something?

Ideally no such scenario should happen; however, in your typical JVM application, due to the libraries being used and to JVM processes being built to be fat (with lots of features), what happens is that users have no choice but to juggle multiple thread-pools.

Just to give an example, last time I used MongoDB, I used the ReactiveMongo client for Scala, which insists on starting its own Akka system with its own thread-pool. Which is stupid of course, but once you have that in your process, you have to work with it and thus keep Mongo-related operations on the Mongo-related thread-pool. And this doesn't happen due to sloppiness. I'm a very careful developer, yet on one of the components I was in charge of at E.On I counted no less than 6 thread-pools in the same process, and I can only guess what goes on in the JVM projects built by beginners.

On top of Haskell maybe I'd agree, because Haskell doesn't do 1:1 kernel level multi-threading, but on platforms such as the JVM, you cannot have "coherency" for Timer or the newly added ContextShift.
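Not from the chat, but a minimal sketch of that juggling in cats-effect 1.0 terms, with hypothetical pools, using the IO.timer and IO.contextShift constructors:

    import java.util.concurrent.Executors
    import scala.concurrent.ExecutionContext
    import cats.effect.{ContextShift, IO, Timer}

    // Two pools means two Timer / ContextShift values; there is no single
    // "coherent" instance to summon, which is why they get passed around explicitly.
    val mainEc: ExecutionContext  = ExecutionContext.global
    val mongoEc: ExecutionContext =
      ExecutionContext.fromExecutor(Executors.newFixedThreadPool(4))

    val mainTimer: Timer[IO]         = IO.timer(mainEc)
    val mongoTimer: Timer[IO]        = IO.timer(mongoEc)
    val mongoShift: ContextShift[IO] = IO.contextShift(mongoEc)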

Fabio Labella
@SystemFw
:+1: on ContextShift for sure
2) is as I expected then, but how do you write Effect[Task]? Effect[Scheduler => Task]?
as for 1) then I'm a bit confused by how start works, in particular the interleaving of different Fs
ah, sorry
Only in race conditions, e.g. start, race.
I guess I need to delve into the code a bit more
Alexandru Nedelcu
@alexandru
Btw, I'm a little less available these days on Gitter; it's not that I don't find the discussion interesting, I do 🙂 but I've cut down on my online time
On start and race, they wouldn't need ContextShift / Timer if we hadn't made them auto-fork before executing the involved tasks
Fabio Labella
@SystemFw
right, I'm referring to things that are non-blocking and execute on a single thread, but you can still start them
and the only way I'm familiar with is some form of cooperative multithreading, or the interpreter interleaving them
or through the trampoline as another yielding mechanism
but I still haven't had the time to delve into that part of the code in as much depth as I'd like
Alexandru Nedelcu
@alexandru
The initial implementation of start was giving you an already completed fiber if the task was synchronous. Now it does a shift first.
Well, it is more user-friendly right now. Not forking in those ops caught users by surprise.
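For illustration, a sketch (not from the chat): in cats-effect 1.0, start takes an implicit ContextShift precisely because of this auto-fork, so even a synchronous computation is shifted before it runs:

    import scala.concurrent.ExecutionContext
    import cats.effect.{ContextShift, Fiber, IO}

    implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

    // The computation is forked onto the pool behind `cs` before running,
    // so the returned Fiber is not an already-completed one.
    val forked: IO[Fiber[IO, Int]] = IO(1 + 1).start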
Fabio Labella
@SystemFw
right, but let's say you have two simple flatMap chains that execute on a single-threaded executor
do we just shift and let the native threading do the scheduling?
TomasStanek
@TomasStanek
@alexandru It's not about being GC-heavy, it's that the loop gradually brings the JVM to a halt. (I'm observing that with 1.0.0-RC2, and I'll try it with 1.0.0-RC3 when it's out.) What you're saying is therefore that the cancelation is not synchronized (and never will be - it is by design), and the user must therefore employ other means of synchronization if there are resources to be freed before it's safe to continue, right?
(I mean resources related to the race loser)
TomasStanek
@TomasStanek
btw, https://typelevel.org/cats-effect/datatypes/io.html shows an example that implements timeoutTo, which does pretty much exactly what I'm doing
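For reference, that documented pattern is roughly along these lines (a sketch assuming cats-effect 1.0, where IO.race needs an implicit ContextShift):

    import scala.concurrent.duration.FiniteDuration
    import cats.effect.{ContextShift, IO, Timer}

    def timeoutTo[A](fa: IO[A], after: FiniteDuration, fallback: IO[A])
      (implicit timer: Timer[IO], cs: ContextShift[IO]): IO[A] =
      IO.race(fa, timer.sleep(after)).flatMap {
        case Left(a)  => IO.pure(a) // fa finished first
        case Right(_) => fallback   // the sleep won: time out
      }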
Fabio Labella
@SystemFw
are you 100% sure that code is the cause of your problems? it's surprising
Alexandru Nedelcu
@alexandru

@TomasStanek

@alexandru It's not about being GC-heavy, it's that the loop gradually brings the JVM to a halt. (I'm observing that with 1.0.0-RC2, and I'll try it with 1.0.0-RC3 when it's out.) What you're saying is therefore that the cancelation is not synchronized (and never will be - it is by design), and the user must therefore employ other means of synchronization if there are resources to be freed before it's safe to continue, right?

What do you mean by cancelation not being synchronized?

I will test, but no, race shouldn't leak if the underlying scheduler doesn't, and if the underlying scheduler leaks, then there's nothing we can do.

I'm not discounting the possibility of a bug, but if you're having problems, it has nothing to do with #305

So if you get more hints, open an issue.

TomasStanek
@TomasStanek
Well, it happens slowly (I guess), so I spawn four of those loops in parallel (then it happens quickly). See: https://gist.github.com/TomasStanek/f06b89d888e0aed2350375470367cbcd
Alexandru Nedelcu
@alexandru
Did you do any profiling on it? Open an issue, because it's hard to track Gitter conversations.
I'm not saying we don't have a bug, I am saying that the currently released version shouldn't leak.
TomasStanek
@TomasStanek
by "not being synchronized" I mean that the action which actually reacts to cancelation (like removing the dequer/peeker in fs2.async.mutable.Queue, or removing timeout from some timeoutScheduler) may happen later than println is called in queue.peek1.start.flatMap(fib => fib.cancel).flatMap(_ => println("not yet canceled"); ...)
I will open an issue with more detailed description later today. I'm new to cats/cats-effect so I wanted to ask first if I'm not doing something silly.
TomasStanek
@TomasStanek
And I think that is the case, because racing two peeks of two queues in a loop (where the winning queue is dequeued) may cause a deadlock.
Most likely, the losing peeker is removed after another one is registered. (There may be only one peeker in the Queue implementation.)
Alexandru Nedelcu
@alexandru

@TomasStanek the current code is like this:

      if (isActive.getAndSet(false)) {
        // First interrupts the other task
        try other.cancel() finally {
          main.pop()
          cb(Right(r))
        }
      }

What happens is that the loser gets cancelled before the winner's result is signaled. Now if that cancellation logic is synchronous, there shouldn't be any issues. But if that cancellation logic happens asynchronously, then indeed you might have issues; however, cancelling a timeout obviously shouldn't be async.

In the new code in that PR, it is now changed to this:

      if (isActive.getAndSet(false)) {
        // First interrupts the other task
        other.cancel.unsafeRunAsync { r2 =>
          main.pop()
          cb(Right(r))
          maybeReport(r2)
        }
      }

So now the cancellation can happen asynchronously, although there are some caveats here as well, because due to multi-threading, there's always the possibility that the finalizer you want might not be registered yet, but that's another discussion.

So for simple things, like operating on data structures or cancelling a timeout, the current logic should work. We might have a leak, there's always that possibility, but we'll need to do some profiling because it isn't obvious at all.
TomasStanek
@TomasStanek
@alexandru I've tried compiling against branch pr/305 (commit: bbb15854098d5eb3887a10702de5fe41fb886911) and the leak is still there (java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask instances are stacking up and the java.util.concurrent.RunnableScheduledFuture[] array keeps growing).
The issue with queue.peek1 deadlock also persisted.
I'll open a proper issue hopefully tomorrow evening
Luciano
@lJoublanc
I'm looking at Resource and struggling to understand why it's there; it appears superfluous to Bracket, as Resource.use requires an implicit Bracket. Is this just so that if your clean-up code is unwieldy, you can avoid writing F.bracket(<my huge initialization/finalization code>)?
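Not from the chat: a minimal sketch of the ergonomic difference being asked about, using hypothetical open/close actions and assuming cats-effect 1.0's Resource.make / use:

    import cats.effect.{IO, Resource}

    // Hypothetical acquire/release actions, purely for illustration.
    def open(name: String): IO[Unit]  = IO(println(s"open $name"))
    def close(name: String): IO[Unit] = IO(println(s"close $name"))

    // With raw bracket, composing two resources means nesting brackets by hand:
    val nested: IO[Unit] =
      open("a").bracket { _ =>
        open("b").bracket(_ => IO.unit)(_ => close("b"))
      }(_ => close("a"))

    // Resource turns acquire + release into a value, so the same thing
    // composes flatly and `use` runs the nested brackets for you:
    val ab: Resource[IO, Unit] =
      for {
        _ <- Resource.make(open("a"))(_ => close("a"))
        _ <- Resource.make(open("b"))(_ => close("b"))
      } yield ()

    val used: IO[Unit] = ab.use(_ => IO.unit)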
Fabio Labella
@SystemFw
@lJoublanc are you aware of the difference between a streaming bracket (like fs2.bracket) and an IO bracket, like F.bracket?
Luciano
@lJoublanc
no. I assumed that the fs2 one was just wrapping this op.
Fabio Labella
@SystemFw
nope
forget about Resource for a second
look at this code
for now I'll be talking about fs2 0.10; we changed the signature to make things clearer in 1.0, which will hopefully be out soon
Luciano
@lJoublanc
ok, been using 1.0-M3