Raas Ahsan
@RaasAhsan
@SystemFw in your Fibers talk, you mentioned that concurrency and parallelism are generally independent of each other. Based on your definitions of both terms, how would you characterize the interaction of fibers or threads whose effects or steps aren't necessarily interleaved? e.g. several fibers running simultaneously and interacting with each other via Ref, but each bound to its own thread, so the discrete steps that comprise a fiber aren't necessarily interleaved. Would you consider that to be purely parallelism, or is there something to be said about concurrency as well?
Fabio Labella
@SystemFw
@RaasAhsan it's a tricky discussion tbh, because strictly speaking Threads are also concurrency
there is interleaving there, you are not guaranteed there is parallelism
unless those get mapped to different processors
it's also worth pointing out that those definitions are only valid in the context of the talk, if we were talking about, say, distributed systems, I'd use a different definition of concurrency
Raas Ahsan
@RaasAhsan
got it, I was trying to reconcile the definition of concurrency you gave with concurrent data structures, but it sounds like they are possibly different notions of concurrency
Fabio Labella
@SystemFw
yeah it's one of those cases where the terms are used either interchangeably, or very loosely, or anyway with different meanings
which is why I gave the definitions I was working with at the start of the talk
not that I entirely came up with them though; Simon Marlow in Parallel and Concurrent Programming in Haskell uses the same definition, and "Concurrency is not parallelism" on the Go (!) blog uses a very similar one
that being said, concurrent data structures can mean different things as well
although I think in scala the two things I'm thinking of are actually correctly named as concurrent data structures (like Maps) and parallel collections
Raas Ahsan
@RaasAhsan
I don't think I've ever come across parallel collections :sweat_smile:
Fabio Labella
@SystemFw
not missing much, they never really delivered
anyway, in case you're wondering, the most general definition of concurrency I use is when working with distributed systems
where you would say that there is no total order on events
there is a partial order (we can say that some events happened before others), but not a total order: for some pairs of events we don't know if a < b or b < a (where < in this case is "happened before"), and therefore a and b are concurrent
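(To make that concrete, here is a minimal self-contained Scala sketch with made-up event names: happened-before as the transitive closure of direct edges, and two events concurrent iff neither precedes the other.)

```scala
// Hypothetical sketch: happened-before as a partial order on events.
// Two events are concurrent iff neither happened before the other.
object HappensBefore {
  type Event = String

  // Direct "a happened before b" edges (e.g. a send ordered before its receive).
  val direct: Set[(Event, Event)] =
    Set("a1" -> "a2", "a2" -> "b2", "b1" -> "b2")

  // a < b: transitive closure of the direct edges
  def before(a: Event, b: Event): Boolean =
    direct(a -> b) || direct.exists { case (x, y) => x == a && before(y, b) }

  def concurrent(a: Event, b: Event): Boolean =
    !before(a, b) && !before(b, a)

  def main(args: Array[String]): Unit = {
    assert(before("a1", "b2"))     // ordered, via a2
    assert(concurrent("a1", "b1")) // no order either way
    println("ok")
  }
}
```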
btw keep up the great work on tracing! I haven't commented on the PR but I'm following it :)
Raas Ahsan
@RaasAhsan
IIRC the Java memory model gives a very similar definition of concurrency for distributed shared-memory systems w.r.t the happens-before relationship
and thanks! excited to get it to a usable state
I like that definition though. I think you can interpret both the interleaving of events and the interaction of threads through that definition
Fabio Labella
@SystemFw
well, interleaving works with threads as well though
like, at the level of the JVM threads are also being interleaved, just like fibers are (although through different mechanisms)
so conceptually threads aren't really any more "parallel" than fibers are (although there are potential implications from the memory model)
but yeah, happens-before is more general (Lamport really is a genius), but it doesn't help with the aspect of the talk that matters, i.e. the logical thread as a structuring abstraction
Raas Ahsan
@RaasAhsan
BTW when you say logical thread, are you referring specifically to fibers or is it a general term that captures both fibers and OS-level threads? Because people have long used regular OS-level threads to structure their programs as well
Fabio Labella
@SystemFw
in part 1 of my talk, I'm referring to either
just the concept that you have multiple synchronous sequences of steps that actually abstract over interleaving
and in that sense java.lang.Thread and Fiber are not any different
so in short no, when I'm giving definitions in part 1, I'm not talking about fibers specifically
and as you can see, that "sequence of steps" definition applies to both
it does not apply to akka actors, for example, so really I am talking about the "thread" abstraction
then in part 2, I start talking about processes vs Thread vs fibers
part of the problem with this topic is this layered structure: depending on where you're looking, things change, hence the "chapter" structure of the talk
Fabio Labella
@SystemFw

or is it a general term

also to clarify, it depends on who's using that term, it's overloaded and often it means something even weaker than fiber. But in my talk, I meant the general idea as it applies to OS processes, Thread, fibers, and even manually stepped coroutines (like the one in PureConc in ce3)

Raas Ahsan
@RaasAhsan
so on the topic of the layering structure, I may have misunderstood this, you mentioned that Threads are mapped M:N to processes. Is this accurate for the purposes of scheduling? My understanding is that processes serve as a "grouping" mechanism for threads, but ultimately threads are scheduled on the CPUs, and not necessarily processes
I should take a look at the work being done for CE3
Fabio Labella
@SystemFw
@RaasAhsan yeah, that was a slight simplification (i.e. lie) in the talk
OS level threads are scheduled directly on the cpu
at least in modern OSes; earlier versions had actual M:N scheduling
so I decided to go with that description, even though it's outdated, for greater conceptual clarity
Raas Ahsan
@RaasAhsan
it certainly sounded more appealing :)
pool
@hamstakilla

Why do I get a stack overflow here? Isn't flatMap supposed to be stack safe?

    def loop: F[Unit] = {
        x.state.read.flatMap {
            case CommChannel.Ready =>
                log("ready")
            case CommChannel.Disconnected =>
                log("disconnect")
            case CommChannel.Other(content) =>
                log(s"other $content")
            case a: CommChannel.ActionRequired[F] =>
                log(s"action required: ${a.content}") *>
                    read.flatMap(x => a.response.complete(x))
        } *> loop
    }
    loop.start *> Sync[F].unit

(state is MVar)

Fabio Labella
@SystemFw
@hamstakilla use >>. also you can use .void instead of >> Sync[F].unit
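(The reason, for anyone reading along: in cats, `>>` takes its right-hand side by name, while `*>` is strict, so a recursive loop written with `*>` evaluates the recursive call while the chain is still being built. A minimal sketch, assuming cats-effect IO is on the classpath:)

```scala
import cats.effect.IO
import cats.syntax.all._

object LoopDemo {
  // Strict: `*>` evaluates its argument immediately, so merely
  // *constructing* this value recurses forever and overflows the stack:
  // def loopStrict: IO[Unit] = IO.unit *> loopStrict

  // By-name: `>>` defers its argument until the previous step has run,
  // so construction is lazy and the loop is stack safe:
  def loopLazy: IO[Unit] = IO.unit >> loopLazy
}
```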
pool
@hamstakilla
oh
should i use >> by default?
Fabio Labella
@SystemFw
people disagree, but >> is certainly more thought-free
pool
@hamstakilla
got it, thanks
Gavin Bisesi
@Daenyth
@djspiewak Random question; in your ce3 test branch / proposal, specifically about Temporal - What's the plan for TestContext? I think it's become a pretty invaluable part of the ecosystem, and it's really unclear to me how fake timers would work if Temporal <: Concurrent
huh also, if Temporal <: Concurrent, does that mean any code wanting to read the current time also gains the ability to do concurrency? That would be a big drawback for parametric reasoning IMO
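(A simplified, hypothetical sketch of that worry, not the actual ce3 definitions: if `Temporal` extends `Concurrent`, then any `F[_]: Temporal` bound also hands the caller the ability to spawn, which weakens what a signature proves about a function.)

```scala
object HierarchySketch {
  // Hypothetical, simplified typeclasses; not the real ce3 API.
  trait Concurrent[F[_]] {
    def start[A](fa: F[A]): F[Unit] // ability to spawn a fiber (simplified)
  }
  trait Temporal[F[_]] extends Concurrent[F] {
    def monotonic: F[Long] // ability to read the clock
  }

  // A function that only needs the time still receives `start` through
  // the constraint, so its signature no longer rules out forking:
  def now[F[_]](implicit F: Temporal[F]): F[Long] =
    F.monotonic
}
```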
Daniel Spiewak
@djspiewak
Multiple points there. :-) I'll try to reply one at a time (also in a meeting so slightly delayed)