Fabio Labella
@SystemFw
right, I'm referring to things that are non-blocking and execute on a single thread, but you can still start them
and the only way I'm familiar with is some form of cooperative multithreading, or the interpreter interleaving them
or through the trampoline as another yielding mechanism
but I still haven't had the time to delve into that part of the code in as much depth as I'd like
Alexandru Nedelcu
@alexandru
The initial implementation of start was giving you an already completed fiber if the task was synchronous. Now it does a shift first.
Well, it is more user-friendly right now. Not forking in those ops caught users by surprise.
Fabio Labella
@SystemFw
right, but let's say you have two simple flatMap chains that execute on a single-threaded executor
do we just shift and let the native threading do the scheduling?
TomasStanek
@TomasStanek
@alexandru It's not about being GC-heavy; it's that the loop gradually brings the JVM to a halt. (I'm observing that with 1.0.0-RC2, and I'll try it with 1.0.0-RC3 when it's out.) So what you are saying is that cancelation is not synchronized (and never will be, by design), and the user must therefore employ other means of synchronization if there are resources that must be freed before it's safe to continue, right?
(I mean resources related to the race loser)
TomasStanek
@TomasStanek
btw, https://typelevel.org/cats-effect/datatypes/io.html shows an example that implements timeoutTo, which does pretty much exactly what I'm doing
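For reference, a sketch in the spirit of that docs example, written here against cats-effect 1.0 final (the exact implicit requirements differ slightly across the RC versions discussed below); the name timeoutTo and its parameters are reconstructed for illustration, not copied from the page:

      import cats.effect.{ContextShift, IO, Timer}
      import scala.concurrent.duration.FiniteDuration

      def timeoutTo[A](fa: IO[A], after: FiniteDuration, fallback: IO[A])
                      (implicit timer: Timer[IO], cs: ContextShift[IO]): IO[A] =
        IO.race(fa, timer.sleep(after)).flatMap {
          case Left(a)  => IO.pure(a)   // fa finished first: keep its result
          case Right(_) => fallback     // the sleep won: the losing fa is cancelled, run the fallback
        }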
Fabio Labella
@SystemFw
are you 100% sure that code is the cause of your problems? it's surprising
Alexandru Nedelcu
@alexandru

@TomasStanek

@alexandru It's not about being GC-heavy; it's that the loop gradually brings the JVM to a halt. (I'm observing that with 1.0.0-RC2, and I'll try it with 1.0.0-RC3 when it's out.) So what you are saying is that cancelation is not synchronized (and never will be, by design), and the user must therefore employ other means of synchronization if there are resources that must be freed before it's safe to continue, right?

What do you mean by cancelation not being synchronized?

I will test, but no, race shouldn't leak if the underlying scheduler doesn't; and if the underlying scheduler leaks, then there's nothing we can do.

I'm not discounting the possibility of a bug, but if you're having problems, it has nothing to do with #305

So if you get more hints, open an issue.

TomasStanek
@TomasStanek
Well, it happens slowly (I guess), so I spawn four of those loops in parallel (then it happens quickly). See: https://gist.github.com/TomasStanek/f06b89d888e0aed2350375470367cbcd
Alexandru Nedelcu
@alexandru
Did you do any profiling on it? Open an issue, because it's hard to track Gitter conversations.
I'm not saying we don't have a bug, I am saying that the currently released version shouldn't leak.
TomasStanek
@TomasStanek
by "not being synchronized" I mean that the action which actually reacts to cancelation (like removing the dequer/peeker in fs2.async.mutable.Queue, or removing timeout from some timeoutScheduler) may happen later than println is called in queue.peek1.start.flatMap(fib => fib.cancel).flatMap(_ => println("not yet canceled"); ...)
I will open an issue with more detailed description later today. I'm new to cats/cats-effect so I wanted to ask first if I'm not doing something silly.
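For the record, a hypothetical expansion of that snippet into compiling cats-effect 1.x code (demo and peek1 are placeholder names invented here); whether the println can run before the peeker is actually removed is exactly the question being raised:

      import cats.effect.{ContextShift, IO}

      def demo[A](peek1: IO[A])(implicit cs: ContextShift[IO]): IO[Unit] =
        for {
          fib <- peek1.start                       // run the peek on its own fiber
          _   <- fib.cancel                        // ask that fiber to cancel
          _   <- IO(println("not yet canceled?"))  // the peeker's cleanup may still be pending here
        } yield ()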
TomasStanek
@TomasStanek
And I think that is the case, because racing two peeks of two queues in a loop (where the winning queue is dequeued) may cause a deadlock. A sketch of the pattern follows below.
Most likely, the losing peeker is removed after another one is registered. (There may be only one peeker in the Queue implementation.)
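An abstract sketch of that looping race, with the queue operations passed in as plain IO actions (the names peekLeft, dequeueLeft, etc. are made up for illustration; cats-effect 1.x IO.race is assumed):

      import cats.effect.{ContextShift, IO}

      def loop(peekLeft: IO[Unit], peekRight: IO[Unit],
               dequeueLeft: IO[Unit], dequeueRight: IO[Unit])
              (implicit cs: ContextShift[IO]): IO[Unit] =
        IO.race(peekLeft, peekRight).flatMap {
          case Left(_)  => dequeueLeft    // left peek won: dequeue from the left queue
          case Right(_) => dequeueRight   // right peek won: dequeue from the right queue
        }.flatMap(_ => loop(peekLeft, peekRight, dequeueLeft, dequeueRight))
        // if the losing peeker is only removed asynchronously, the next iteration's peek
        // can be registered before the old one is gone -- the suspected cause of the deadlock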
Alexandru Nedelcu
@alexandru

@TomasStanek the current code is like this:

      if (isActive.getAndSet(false)) {
        // First interrupts the other task
        try other.cancel() finally {
          main.pop()
          cb(Right(r))
        }

What happens is that the loser gets cancelled before the winner's result is signaled. Now, if that cancellation logic is synchronous, there shouldn't be any issues. But if that cancellation logic happens asynchronously, then indeed you might have issues; however, cancelling a timeout obviously shouldn't be async.

In the new code in that PR, it is now changed to this:

      if (isActive.getAndSet(false)) {
        // First interrupts the other task
        other.cancel.unsafeRunAsync { r2 =>
          main.pop()
          cb(Right(r))
          maybeReport(r2)
        }

So now the cancellation can happen asynchronously. There are some caveats here as well: due to multi-threading, there's always the possibility that the finalizer you want might not be registered yet, but that's another discussion.

So for simple things, like operating on data structures or cancelling a timeout, the current logic should work. We might have a leak, there's always that possibility, but we'll need to do some profiling because it isn't obvious at all.
TomasStanek
@TomasStanek
@alexandru I've tried compiling against branch pr/305 (commit: bbb15854098d5eb3887a10702de5fe41fb886911) and the leak is still there (java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask instances are stacking up and the java.util.concurrent.RunnableScheduledFuture[] array keeps growing).
The issue with the queue.peek1 deadlock also persisted.
I'll open a proper issue, hopefully tomorrow evening.
Luciano
@lJoublanc
I'm looking at Resource and struggling to understand why it's there; it seems superfluous given Bracket, as Resource.use requires an implicit Bracket. Is it just so that, if your clean-up code is unwieldy, you can avoid writing F.bracket(<my huge initialization/finalization code>)?
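For context, a minimal sketch of the pattern in question against cats-effect 1.x (the reader-based acquire/release is a made-up example):

      import cats.effect.{IO, Resource}
      import java.io.{BufferedReader, FileReader}

      def openFile(path: String): IO[BufferedReader] =
        IO(new BufferedReader(new FileReader(path)))         // acquire

      def fileResource(path: String): Resource[IO, BufferedReader] =
        Resource.make(openFile(path))(r => IO(r.close()))    // acquire + release packaged together

      // `use` is where the implicit Bracket comes in; IO provides one
      def firstLine(path: String): IO[String] =
        fileResource(path).use(r => IO(r.readLine()))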
Fabio Labella
@SystemFw
@lJoublanc are you aware of the difference between a streaming bracket (like fs2.bracket) and an IO bracket, like F.bracket?
Luciano
@lJoublanc
no. I assumed the fs2 one was just wrapping this op.
Fabio Labella
@SystemFw
nope
forget about Resource for a second
look at this code
for now I'll be talking about fs2 0.10; we changed the signature to make things clearer in 1.0, which will hopefully be out soon
Luciano
@lJoublanc
ok, been using 1.0-M3
I did notice that you changed e.g. io.Socket into a Resource. I guess that's where this is going ... but go on.
Fabio Labella
@SystemFw
      Stream.bracket(openFile)(file => Stream.emit(file), file => file.close).flatMap(readFromFile)

      F.bracket(openFile)(file => IO.pure(file))(file => file.close).flatMap(readFromFile)
how do you think each will behave?
Luciano
@lJoublanc
Ok ... this is the path I was going down that made me ask this question.
The second bit of code looks dodgy, because of the pure.
Fabio Labella
@SystemFw
yeah
with the Stream version, you can just emit the resource, and it will all be fine (emit is also pure)
with the F version, by the time you flatMap, the file is closed, and readFromFile fails
if you want to understand this inconsistency, this is the mental model
bracket closes a resource as soon as the F finishes emitting elements
now, IO can emit one element at most
so when you do pure, you emit, so you have finished emitting, so the resource gets closed
when you do emit on Stream, the stream has not finished emitting, because a Stream, unlike IO, can emit multiple elements, and therefore the thing is still fine
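A minimal, self-contained sketch of that inconsistency (FakeFile, openFile and readFromFile are toy names invented here; only the IO-bracket side is shown):

      import cats.effect.IO

      final class FakeFile { var closed = false }

      val openFile: IO[FakeFile] = IO(new FakeFile)
      def readFromFile(f: FakeFile): IO[String] =
        IO(if (f.closed) "file already closed!" else "read ok")

      // IO's bracket releases as soon as `use` has produced its single value,
      // so by the time the flatMap runs, the file is already closed:
      val io: IO[String] =
        openFile.bracket(f => IO.pure(f))(f => IO { f.closed = true })
          .flatMap(readFromFile)
      // io.unsafeRunSync() yields "file already closed!"

The Stream.bracket version, by contrast, keeps the resource open until the stream as a whole terminates, so a downstream flatMap still sees an open file.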
Luciano
@lJoublanc
:rage1:
This explains a problem I've been having for a few weeks...
Fabio Labella
@SystemFw
does that make sense, this first part?
Luciano
@lJoublanc
Right, so the Stream version only releases the resource when there is a 'complete', for lack of a better expression.
Whereas the F version releases as soon as the first (and only) element is emitted.