Mark Tomko
Here it is with some error handling. I made a design choice which I think is debatable:
import cats.effect.concurrent.{Deferred, Ref}
import cats.effect.{Concurrent, Resource, Timer}
import cats.implicits._
import fs2.Stream

import scala.concurrent.duration._

trait PeriodicMonitor[F[_], A] {
  def get: F[A]
}

object PeriodicMonitor {

  def create[F[_]: Concurrent: Timer, A](
      action: F[A],
      t: FiniteDuration
  ): Resource[F, PeriodicMonitor[F, A]] = {
    sealed trait State
    case class FirstCall(waitV: Deferred[F, Either[Throwable, A]]) extends State
    case class NextCalls(v: Either[Throwable, A]) extends State

    val initial: F[Ref[F, State]] =
      for {
        d <- Deferred[F, Either[Throwable, A]]
        r <- Ref[F].of(FirstCall(d): State)
      } yield r

    Resource
      .liftF(initial)
      .flatMap { state: Ref[F, State] =>
        val read = new PeriodicMonitor[F, A] {
          def get: F[A] = state.get.flatMap {
            case FirstCall(wait) => wait.get.rethrow
            case NextCalls(v)    => v.pure[F].rethrow
          }
        }

        val write: F[Unit] = action.attempt.flatMap { v =>
          state.modify {
            case FirstCall(wait) => NextCalls(v) -> wait.complete(v).void
            case NextCalls(_)    => NextCalls(v) -> ().pure[F]
          }.flatten
        }

        // run `write` every `t` in the background for the Resource's lifetime
        val loop: F[Unit] = Stream.repeatEval(write).metered(t).compile.drain

        Resource.make(Concurrent[F].start(loop))(_.cancel).as(read)
      }
  }
}

I didn't add an Error state to the State ADT. I think what this means is that if the monitor encounters an error, it keeps working - there's no permanent transition from FirstCall or NextCalls to Error, and the stream continues processing. If the caller of PeriodicMonitor.get notices an error (raised by the reader), then the caller can take action on that. However, if I understand this code properly, it's possible the action can fail, be written, and then recover and be overwritten with a subsequent successful value. I think this is not a bad property, although it wasn't what I originally set out to do.
Mark Tomko
I ran a very superficial test (no errors) and it seems to do what I expected.
Adam Rosien

hello everyone - any suggestions as to why I am not getting the same behaviour if I replace this

  def life(b: Board): IO[Unit] =
    for {
      _ <- cls
      _ <- showCells(b)
      _ <- life(nextgen(b))
    } yield ()

with this

  def life(b: Board): IO[Unit] =
    cls *> showCells(b) *> life(nextgen(b))

cls and showCells both return IO[Unit]
Am I losing the trampolining in the second version?

Fabio Labella
@philipschwarz it should work if you use >>
the problem with *> is that it takes a strict argument, so the recursive call is evaluated before the IO even starts executing
btw the first version also has a potential memory problem if you don't use better-monadic-for, since those yield () get transformed into a massive chain of map calls
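A library-free sketch of the desugaring being described here (the `Counted` type and names are made up for illustration): each `for { ... } yield ()` desugars so that the last generator gets a trailing `.map(_ => ())`, so a recursive chain accumulates one extra map per step.

```scala
// Toy effect that counts how many .map calls its construction performed.
final case class Counted[A](value: A, maps: Int) {
  def map[B](f: A => B): Counted[B] = Counted(f(value), maps + 1)
  def flatMap[B](f: A => Counted[B]): Counted[B] = {
    val next = f(value)
    Counted(next.value, maps + next.maps)
  }
}

// Depth-limited analogue of the recursive life function:
// `for { _ <- step; _ <- rec } yield ()` desugars to
// `step.flatMap(_ => rec.map(_ => ()))`, adding one map per level.
def life(n: Int): Counted[Unit] =
  if (n == 0) Counted((), 0)
  else for { _ <- Counted((), 0); _ <- life(n - 1) } yield ()
```

Here `life(100).maps` is 100: every recursive level wraps the result in one more `map(_ => ())`, which is the chain of map calls that better-monadic-for rewrites away.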
Hey @SystemFw - yes, >> works - thank you very much for your help and for the explanation, and the warning! much appreciated :thumbsup: :smiley:
FWIW, just for the curious, in ZIO it looks like *> would work https://youtu.be/TWdC7DhvD8M?t=1199
ah, but probably only because the recursion is finite in that example
in my case I would have the same issue because the life function has no base case
Fabio Labella
I don't think that's the main issue. *> in cats is strict, and changing it would require changing a lot of other things
in ZIO it's probably lazy
but we do have a lazy *>, and that's >>
in fact it looks like *> in ZIO is implemented with flatMap, so it's literally >>
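A library-free sketch of the strict-vs-lazy distinction (the `Prog` type and function names here are made up; in cats-effect, `*>` takes its argument strictly while `>>` takes it by name):

```scala
// A minimal suspended-program type, standing in for IO.
sealed trait Prog
case object Done extends Prog
final case class AndThen(first: Prog, rest: () => Prog) extends Prog

// Strict sequencing, like *>: `next` is evaluated before AndThen is built.
def strictThen(first: Prog, next: Prog): Prog = AndThen(first, () => next)

// Lazy sequencing, like >>: `next` is by-name, captured in a thunk and only
// evaluated when an interpreter unwraps the AndThen node.
def lazyThen(first: Prog, next: => Prog): Prog = AndThen(first, () => next)

// Safe: the recursive call is suspended in the thunk, so this returns
// immediately with a single AndThen node.
def loopLazy: Prog = lazyThen(Done, loopLazy)

// The strict version would overflow the stack while *constructing* the
// value, before any interpreter ever runs:
// def loopStrict: Prog = strictThen(Done, loopStrict)
```

`loopLazy` builds only the outermost node; forcing `rest()` one step at a time is what a trampolined run loop like IO's does, which is why `>>` keeps unbounded recursion stack-safe.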
yes, thank you, that makes sense - I thought it was maybe kind of interesting that the right shark had different behaviour (eager/lazy) in different libraries
Raas Ahsan
@SystemFw in your Fibers talk, you mentioned that concurrency and parallelism are generally independent of each other. Based on your definitions of both terms, how would you characterize the interaction of fibers or threads whose effects or steps aren't necessarily interleaved? e.g. several fibers running simultaneously and interacting with each other via Ref, but they are all bound to their own thread, so the discrete steps that comprise the fiber aren't necessarily interleaved. Would you consider that to be purely parallelism or is there something to be said about concurrency as well
Fabio Labella
@RaasAhsan it's a tricky discussion tbh, because strictly speaking Threads are also concurrency
there is interleaving there, you are not guaranteed there is parallelism
unless those get mapped to different processors
it's also worth pointing out that those definitions are only valid in the context of the talk, if we were talking about, say, distributed systems, I'd use a different definition of concurrency
Raas Ahsan
got it, I was trying to reconcile the definition of concurrency you gave with concurrent data structures, but it sounds like they are possibly different notions of concurrency
Fabio Labella
yeah it's one of those cases where the terms are used either interchangeably, or very loosely, or anyway with different meanings
which is why I gave the definitions I was working with at the start of the talk
not that I entirely came up with them though; Simon Marlow uses the same definition in Parallel and Concurrent Programming in Haskell, and "Concurrency is not parallelism" on the Go (!) blog uses a very similar one
that being said, concurrent data structures can mean different things as well
although I think in scala the two things I'm thinking of are actually correctly named as concurrent data structures (like Maps) and parallel collections
Raas Ahsan
I don't think I've ever come across parallel collections :sweat_smile:
Fabio Labella
not missing much, they never really delivered
anyway, in case you're wondering, the most general definition of concurrency I use is when working with distributed systems
where you would say that there is no total order on events
there is a partial order (we can say that some events happened before others), but not a total order: for some pairs of events we don't know if a < b or b < a (where < in this case is happened-before), and therefore a and b are concurrent
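That "no total order" definition can be made concrete with vector clocks, a standard construction (this sketch is illustrative, not something from the talk): an event happened before another iff its clock is dominated pointwise, and two events with incomparable clocks are concurrent.

```scala
// Vector clock: one logical counter per process name.
final case class VClock(entries: Map[String, Int]) {
  private def at(k: String): Int = entries.getOrElse(k, 0)

  // a happened-before b iff a <= b pointwise and a < b somewhere
  def happenedBefore(b: VClock): Boolean = {
    val keys = entries.keySet ++ b.entries.keySet
    keys.forall(k => at(k) <= b.at(k)) && keys.exists(k => at(k) < b.at(k))
  }

  // concurrent: neither happened before the other, so the pair is
  // unordered by happened-before (the partial order is not total)
  def concurrentWith(b: VClock): Boolean =
    !happenedBefore(b) && !b.happenedBefore(this)
}

val a = VClock(Map("p1" -> 2, "p2" -> 0)) // second event on process p1
val b = VClock(Map("p1" -> 1, "p2" -> 1)) // event on p2 after hearing from p1's first
// a and b are incomparable: neither clock dominates, so they are concurrent
```

Events on a single process remain totally ordered (each step bumps that process's counter), which matches the idea that a logical thread is a synchronous sequence of steps.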
btw keep up the great work on tracing! I haven't commented on the PR but I'm following it :)
Raas Ahsan
IIRC the Java memory model gives a very similar definition of concurrency for distributed shared-memory systems w.r.t the happens-before relationship
and thanks! excited to get it to a usable state
I like that definition though. I think you can interpret both the interleaving of events and the interaction of threads through that definition
Fabio Labella
well, interleaving works with threads as well though
like, at the level of the JVM threads are also being interleaved, just like fibers are (although through different mechanisms)
so conceptually threads aren't really any more "parallel" than fibers are (although there are potential implications from the memory model)
but yeah, happened-before is more general (Lamport really is a genius), but it doesn't help with the aspect of the talk that matters, i.e. the logical thread as a structuring abstraction
Raas Ahsan
BTW when you say logical thread, are you referring specifically to fibers or is it a general term that captures both fibers and OS-level threads? Because people have long used regular OS-level threads to structure their programs as well
Fabio Labella
in part 1 of my talk, I'm referring to either
just the concept that you have multiple synchronous sequences of steps that actually abstract over interleaving
and in that sense java.lang.Thread and Fiber are not any different
so in short no, when I'm giving definitions in part 1, I'm not talking about fibers specifically
and as you can see, that "sequence of steps" definition applies to both
it does not apply to akka actors, for example, so really I am talking about the "thread" abstraction
then in part 2, I start talking about processes vs Thread vs fibers