ybasket
@ybasket
@gerryfletch Exactly, pure is the way to go. Note that you also can use a less powerful type class to raise the error: ApplicativeError[F, Throwable].raiseError(new Exception("")). Doesn’t change anything per se (and hence isn’t too important), but keeps your code at the level of power it needs and not more, so consider it a good practice.
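A minimal sketch of what ybasket describes: raising an error while demanding only ApplicativeError, not the full power of Sync/Async (the function name and error message are illustrative):

```scala
import cats.ApplicativeError

// Requires only the ability to raise errors in F, nothing more.
def fail[F[_]](implicit F: ApplicativeError[F, Throwable]): F[Int] =
  F.raiseError(new Exception("boom"))
```

Any F with an Async or Sync instance also has an ApplicativeError[F, Throwable] instance, so callers lose nothing.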
Gerry Fletcher
@gerryfletch

Thank you for the feedback @ybasket , I saw ApplicativeError as you wrote in the docs but struggled to get something compiling. The end result is kind of disgusting with all the type hints...

res.status match {
  case StatusCodes.OK       => Async[F].pure(Option(res.entity))
  case StatusCodes.NotFound => Async[F].pure(Option.empty[ResponseEntity])
  case _                    => Async[F].raiseError[Option[ResponseEntity]](new Exception(""))
}

But it compiles! And that's what's important

ybasket
@ybasket
@gerryfletch There’s some syntax for that, you can use none[ResponseEntity].pure[F] and res.entity.some.pure[F]. There’s also something for raising the exception in F, but don’t have the time to look up the exact syntax (should be in cats.syntax.applicativeError)
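Putting ybasket's suggestions together, the earlier match could be rewritten with the syntax helpers roughly like this (a sketch; it assumes the same `res`, `StatusCodes`, and `ResponseEntity` from the snippet above, plus an implicit `Async[F]` in scope):

```scala
import cats.syntax.option._           // .some, none
import cats.syntax.applicative._      // .pure
import cats.syntax.applicativeError._ // .raiseError

res.status match {
  case StatusCodes.OK       => res.entity.some.pure[F]
  case StatusCodes.NotFound => none[ResponseEntity].pure[F]
  case _                    => new Exception("").raiseError[F, Option[ResponseEntity]]
}
```

The type hints migrate into `none[...]` and `raiseError[F, ...]`, which reads considerably cleaner than annotating every branch.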
Gerry Fletcher
@gerryfletch
Thank you!
That is much cleaner, thank you so much @ybasket !
Daniel Spiewak
@djspiewak

@slouc Attempting to answer your questions!

Async#shift(ec) (in companion object) will be replaced by Async#evalOn(f, ec) (in the typeclass)

Broadly, yes. shift right now just tosses the continuation of the computation onto the given thread pool, without any guarantees about anything. evalOn is a lot more constrained in that you hand it an effect wherein all actions will be moved to the given pool (similar to the continuation of today's shift), and then once that effect is finished you revert back to whatever your default is (which may in turn be set by some evalOn which is wrapped around you).

One way to think about this: evalOn(evalOn(fa, ec1), ec2) has the semantics you would expect. You can't replicate that with shift.

Also note that doing things in this fashion allows us to implement the executionContext: F[ExecutionContext] effect on Async, which gives you the ability to get access to the ExecutionContext which is governing your scheduling. This is super-important for interoperating with non-cats-effect APIs that still need a raw context (most notably Future).
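A sketch of the proposed semantics (the API was still WIP at the time of this discussion; `step`, `rest`, `ec1`, and `ec2` are hypothetical):

```scala
// Everything in `step` runs on ec1; when the inner evalOn finishes,
// execution reverts to ec2 for `rest`; when the outer evalOn finishes,
// execution reverts to whatever the enclosing default pool is.
val program: F[Unit] =
  Async[F].evalOn(
    Async[F].evalOn(step, ec1) >> rest,
    ec2
  )

// executionContext exposes the currently governing pool, e.g. to feed
// a Future-based API that needs a raw ExecutionContext.
val ec: F[ExecutionContext] = Async[F].executionContext
```

This nesting behavior is exactly what the `evalOn(evalOn(fa, ec1), ec2)` remark above refers to, and is what plain `shift` cannot express.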

ContextShift#shift will be replaced by Concurrent#cede (naming is still WIP)

cede is actually a little more general than shift. It doesn't take an executor, obviously, and is really more about yielding control back to the scheduler. It is the fundamental operation of parallelism in a cooperative multitasking environment. In a purely cooperative system (such as coop), if you never cede, then you get no parallelism and one fiber will hog all the resources.

Now, in CE2, shift is the only real way we have of declaring a yield, so it kind of serves double-duty in that sense: by shifting back to the pool you're already assigned to, you yield control and allow other tasks to run. cede will do this in many implementations, except yielding back to the pool which is already in the reader environment (which is why it doesn't need to be specified). Critically, cede is also lawfully compatible with implementations that don't use ExecutionContext at all! (such as coop)

cede is also a hint. It's not strictly required to do anything. So implementing datatypes such as Monix might ignore a cede if it comes immediately after an auto-yield boundary, for example.

Fundamentally, it's about fairness. Whenever you encode a long-running loop within F, you should probably toss a cede in there every so often in order to ensure that other fibers get their turn. How often you do this determines how much throughput (i.e. how quickly you compute the response for a single request) you want to sacrifice to achieve better fairness (i.e. how long it takes you to respond to concurrent requests).
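A fairness sketch along those lines, using the `cede` name from the discussion (still WIP at this point; in CE2 the equivalent yield would be `ContextShift[F].shift`). The yield interval of 1024 is an arbitrary illustrative choice:

```scala
import cats.effect.Concurrent
import cats.syntax.all._

// Sum 1..n in F, yielding to the scheduler every 1024 iterations so
// other fibers get a turn. A larger interval favors throughput; a
// smaller one favors fairness.
def sum[F[_]](n: Long, acc: Long = 0L)(implicit F: Concurrent[F]): F[Long] =
  if (n <= 0L) acc.pure[F]
  else if (n % 1024L == 0L) F.cede >> sum(n - 1L, acc + n)
  else sum(n - 1L, acc + n)
```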

I'm not sure about IO#shift(ec) and IO#shift(cs), but I'm assuming that they will be removed as well, and their role will be fulfilled by Async[IO]#evalOn(f, ec)

Same deal! There will probably be an IO.evalOn just for convenience, but it'll do the same thing as Async[IO].evalOn

Sinisa Louc
@slouc
Thank you @djspiewak ! :clap:
Sinisa Louc
@slouc
Speaking of fairness vs throughput, I always thought of cats-effect being on one end of the spectrum, completely in favor of throughput and "opting-in" on the fairness when desired. In the similar fashion, I thought of Future as being on the other end, completely favoring fairness over throughput (even though I read that they recently started batching bind calls instead of going to EC on each one), and I thought of Monix / ZIO as being somewhere in the middle given that they yield every n steps.
Would you agree with this viewpoint, and do you think cats-effect should retain its philosophy?
Fabio Labella
@SystemFw
I think the characterisation is fair (no pun intended)
the thing is, real-world code is a lot more fair than one would conceptually expect
since there are a lot of async boundaries being inserted as part of other operations
Sinisa Louc
@slouc
:thumbsup: thanks!
Daniel Spiewak
@djspiewak

Would you agree with this viewpoint, and do you think cats-effect should retain its philosophy?

Strongly agree with that viewpoint. This is basically how I think about it as well.

As for whether or not cats-effect IO should retain that mode of operation… I'm honestly not sure. I kind of like the fact that it fills the niche at the "throughput by default" end of the spectrum, since the rest of the spectrum is well covered by the other options. However, the lack of auto-yielding has bitten me (though very, very, very rarely, for the reasons that @SystemFw pointed out: most code is fairer than you think it is).

I think there's room for discussion on this point for sure.

Sinisa Louc
@slouc
Makes sense!
Gavin Bisesi
@Daenyth
I think throughput by default is a better model. You can always recover fairness yourself by adding a yield, but if it's made for fairness you can't recover throughput because the yields are in components you don't own
Sinisa Louc
@slouc
I share that opinion. But, I'm wondering if there are cases where you have a lot of flatmapped actions stacked up on each other in a gigantic IO program, and you know you would like to yield every once in a while, but it's hard to inject that into your code because it's not really clear where those points actually are.
Bob Glamm
@glammr1

Question: given a factory method that effectfully creates an object in, say, F, and the created object is also created from a class parameterized over an effect type, do I need to do something like:

def create[F, G](implicit F: ...)(...): F[Thing[G]]

or do you all normally just leave everything as F and be done with it assuming all effect types converge to IO?

Gavin Bisesi
@Daenyth
CE usually names that shape def in, with a convenience one-arg version that's implemented as in[F, F](..)
it's good to have because it's flexible
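For a concrete instance of the shape Gavin describes: cats-effect 2.x's Ref follows it, allocating in one effect and operating in another. A sketch (the `create`/`create1` names are illustrative):

```scala
import cats.effect.Sync
import cats.effect.concurrent.Ref

// Allocate the Ref in F, but all of its operations run in G.
def create[F[_]: Sync, G[_]: Sync]: F[Ref[G, Int]] =
  Ref.in[F, G, Int](0)

// The convenience one-arg version just delegates with F = G.
def create1[F[_]: Sync]: F[Ref[F, Int]] =
  create[F, F]
```

This is useful, for example, when constructing state during an application's initialization effect while the state itself is used by request handlers in a different (or the same) effect.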
Bob Glamm
@glammr1
So it's not abnormal for something to be created in one effect type but use an entirely different effect type, then?
Gavin Bisesi
@Daenyth
no, http4s uses that technique pretty heavily
Bob Glamm
@glammr1
perfect, thanks
Daniel Spiewak
@djspiewak

I share that opinion. But, I'm wondering if there are cases where you have a lot of flatmapped actions stacked up on each other in a gigantic IO program, and you know you would like to yield every once in a while, but it's hard to inject that into your code because it's not really clear where those points actually are.

That's the concern exactly. In practice I've found this is rare to the point of non-existent, but I imagine it could happen. The only times I've been bitten by lack of fairness, it was my own fault and relatively easy to fix. If you think about what has to happen for a long series of CPU-hogging code to avoid any async boundaries at all, it usually requires a ton of pure compute code (so, you're doing map with some f which is expensive but pure). In that case, it doesn't matter whether your semantics are auto-yielding or not: it's going to hog the thread.

For there to be code which can be more fairness-optimized by some mechanism in the effect type, but isn't already so optimized, you need a ton of delay actions bound together with flatMaps. A ton of them. Without any async in between and without any shifts to other pools. That… happens… very very very rarely.

Which is to say that auto-yielding isn't as helpful as it sounds in practice. Certainly still meaningful, but more meaningful on paper than in reality.

This is a common theme in a lot of features of effect types, btw. ;-) Many features sound amazing on paper but end up having little or no benefit in practical applications. That doesn't mean they're bad features, per se, but they may not be as compelling as they seem at first glance. Effect types are hard to reason about.
Bob Glamm
@glammr1
If I use Async.async() inside of Resource.use(), I need to ensure myself that somehow the async operation is completed before the resource's release method is called (assuming that release destroys a critical resource that the async operation depends on), right?
Daniel Spiewak
@djspiewak

If I use Async.async() inside of Resource.use(), I need to ensure myself that somehow the async operation is completed prior to the resource's release method is called

The use method will not be called until (at least!) the callback inside of async is run. Which is to say, you have control over it. Remember that async doesn't mean "parallel", it just means non-blocking.

Bob Glamm
@glammr1
Sorry, I meant something different: during resource construction, I acquire a Resource R. Within the use method I use Async.async() on R (so the callback necessarily has to be called after the use method). Assuming I am not altering ExecutionContexts or ContextShifts, is it possible for the release method to be called prior to the callback finishing if I do not explicitly synchronize this?
Fabio Labella
@SystemFw

so the callback necessarily has to be called after the use method)

that's not the case if I understand correctly

it's called within use, not after
Bob Glamm
@glammr1
Yes, sorry, I was envisioning order of execution in my head so I used the word after
Fabio Labella
@SystemFw
and async returns only when the callback is called (it's not start, there is no logical forking)
Bob Glamm
@glammr1
Ok, so if I understand that correctly async must complete before use can complete, which means that invoking async within use should be safe
Fabio Labella
@SystemFw
yes
if you use start, then you have a problem
because you escape the resource scope
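A sketch contrasting the two cases (`r.onResult` and `useIt` are hypothetical names standing in for whatever callback-based and effectful APIs the resource exposes):

```scala
// Safe: async only returns once the callback fires, so `use` cannot
// complete (and release cannot run) before the result is produced.
resource.use { r =>
  Async[F].async[Int] { cb =>
    r.onResult(i => cb(Right(i)))
  }
}

// Unsafe: start forks a fiber and returns immediately, so the fiber
// can outlive the resource scope; release may run while the fiber is
// still using r.
resource.use { r =>
  useIt(r).start
}
```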
Bob Glamm
@glammr1
That's some magic library code if voluntary yielding is in place for async?
(This code is bridging CompletableFuture, which is why I'm asking)
Fabio Labella
@SystemFw
not sure what you mean
if you use async, you will be safe
also, monix-catnap gives you the machinery to lift CompletableFuture so you don't have to worry about it
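For the mental model, a hand-rolled sketch of lifting CompletableFuture via async (monix-catnap's machinery does this for you, and also handles details like cancellation, which this sketch deliberately ignores):

```scala
import java.util.concurrent.CompletableFuture
import cats.effect.IO

// The IO completes exactly when whenComplete invokes our callback;
// no thread is blocked while waiting.
def fromCompletableFuture[A](fa: => CompletableFuture[A]): IO[A] =
  IO.async { cb =>
    fa.whenComplete { (a: A, err: Throwable) =>
      if (err == null) cb(Right(a)) else cb(Left(err))
    }
    ()
  }
```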
Bob Glamm
@glammr1
I believe you, I am just trying to fill out my mental model of how async works :)
Fabio Labella
@SystemFw
my fiber talk goes into a lot of detail
Bob Glamm
@glammr1
I'll look at that talk again, I think I missed that detail on my first go-around. Thanks!
Fabio Labella
@SystemFw
the connection between returning and async is key
so, CPS basically
but I explain better in the talk
Bob Glamm
@glammr1
Nod. I just don't have a good model in my head for where all the continuations occur yet
Fabio Labella
@SystemFw
async exposes the bare structure
generally, "the IO completes" means "a callback was called"
you just don't see it most of the time
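The bare structure Fabio mentions, made visible (a toy sketch using CE2's IO.async):

```scala
import cats.effect.IO

// "The IO completes" means "the callback was called":
val now: IO[Int]   = IO.async(cb => cb(Right(42))) // completes immediately
val never: IO[Int] = IO.async(_ => ())             // callback never invoked: never completes
```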