Jakub Kozłowski
@kubukoz
what's MVar useful for, then? (in Scala)
if there are so many drawbacks
Fabio Labella
@SystemFw
well, I'm obviously biased, since I consciously chose not to go there. I think when dealing with synchronisation it might give slightly more control than Ref + Deferred, but at the price of significantly increased complexity as a design tool
for the record, in Haskell the recommended choice is STM unless you need fairness, which MVar gives you
Derek Williams
@derekjw
it's like a better SynchronousQueue
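(A rough sketch of that handoff, assuming the cats-effect 2 MVar API — MVar was later dropped in cats-effect 3 in favour of Queue; the names here are illustrative:)

```scala
import cats.effect.Concurrent
import cats.effect.concurrent.MVar // cats-effect 2.x; MVar no longer exists in CE3
import cats.syntax.all._

// Rendezvous semantics, roughly the SynchronousQueue analogy:
// take on empty waits, put on full waits, and a put hands its value
// straight to a waiting take -- all semantically blocking, no thread parked.
def rendezvous[F[_]](implicit F: Concurrent[F]): F[Int] =
  for {
    mvar  <- MVar.empty[F, Int]
    taker <- F.start(mvar.take) // waits until someone puts
    _     <- mvar.put(42)       // hands 42 over to the waiting taker
    value <- taker.join         // completes with 42
  } yield value
```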
Jakub Kozłowski
@kubukoz
gotcha
@LukaJCB/@maintainers if you have a moment to look at my rant about Eq/diverging implicits (https://gitter.im/typelevel/cats?at=5b31861bce3b0f268d4311fa), please let me know if the idea to add a higher kinded Eq variant sounds appealing, I could come up with a proposal
Mateusz Górski
@goral09
Thanks @SystemFw and @derekjw
Ionuț G. Stan
@igstan
As far as I see, Ref + Deferred seems to be write-once, whereas MVar supports multiple writes (always alternated with reads, which makes it a single-element queue effectively).
Fabio Labella
@SystemFw
@igstan nope
Deferred is write-once
but with Ref + Deferred you can implement structures that aren't
e.g. Semaphore and Queue in fs2 are implemented with Ref + Deferred (used to be called Promise)
Mateusz Górski
@goral09
you could put new Deferred instances inside Ref right?
Fabio Labella
@SystemFw
that's exactly what you do
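For illustration, here is roughly what that pattern looks like for a simple lock (a Semaphore with a single permit), sketched in cats-effect 3 style with cancellation ignored and illustrative names — the real fs2/cats-effect implementations are more careful than this:

```scala
import cats.effect.{Deferred, IO, Ref}
import cats.syntax.all._
import scala.collection.immutable.Queue

// The Ref holds the whole state: None = unlocked, Some(waiters) = locked,
// with the Deferreds of the fibers waiting for their turn stored inside it.
final class Lock private (state: Ref[IO, Option[Queue[Deferred[IO, Unit]]]]) {

  def acquire: IO[Unit] =
    Deferred[IO, Unit].flatMap { waiter =>
      state.modify {
        case None          => (Some(Queue.empty[Deferred[IO, Unit]]), IO.unit) // free: take it
        case Some(waiters) => (Some(waiters.enqueue(waiter)), waiter.get)      // busy: wait
      }.flatten
    }

  def release: IO[Unit] =
    state.modify {
      case Some(waiters) if waiters.nonEmpty =>
        val (next, rest) = waiters.dequeue
        (Some(rest), next.complete(()).void) // hand the lock to the next waiter
      case _ =>
        (None, IO.unit)                      // nobody waiting: unlock
    }.flatten
}

object Lock {
  def create: IO[Lock] =
    Ref.of[IO, Option[Queue[Deferred[IO, Unit]]]](None).map(new Lock(_))
}
```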
Mateusz Górski
@goral09
I will look at fs2's Queue implementation then
Fabio Labella
@SystemFw
the insight behind the design is this: separating synchronisation from concurrent state
trying to achieve orthogonality
MVar allows you to do both at once
whereas I started with Ref as something that could deal with concurrent state only
and that automatically drives some design decisions, e.g. it can't be empty (or you'd have to wait when reading, which means synchronisation)
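Concretely, a minimal sketch of Ref's behaviour (written in cats-effect 3 style):

```scala
import cats.effect.{IO, Ref}

// Ref always holds a value, so reads never wait: it's pure concurrent state
// with no synchronisation -- just an atomic compare-and-swap underneath.
val program: IO[Int] =
  for {
    counter <- Ref.of[IO, Int](0)    // must start with a value: a Ref can't be empty
    _       <- counter.update(_ + 1) // atomic modification
    value   <- counter.get           // returns immediately, never waits
  } yield value
```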
Ionuț G. Stan
@igstan
@SystemFw I see, thanks.
Fabio Labella
@SystemFw
that also allowed a very, very lightweight implementation in terms of a single AtomicReference (before that, the fs2 Ref was backed by a custom Actor implementation taken from scalaz)
after dealing with that, I tried to think about the simplest possible synchronisation semantics
and Deferred is very simple: starts empty, becomes full, can't be emptied or modified again. get on empty waits (asyncly), get on full immediately returns. complete on empty unblocks the readers, complete on full fails
even though they're simple, you can build many things with them
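In code, those semantics look like this (cats-effect 3 style; in CE2 a second complete fails instead of returning false):

```scala
import cats.effect.{Deferred, IO}

// Deferred: starts empty, can be completed exactly once, never emptied again.
val program: IO[String] =
  for {
    d      <- Deferred[IO, String]
    waiter <- d.get.start          // get on empty: waits asynchronously, no thread blocked
    _      <- d.complete("done")   // complete on empty: unblocks every waiting get
    _      <- d.complete("again")  // complete on full: no effect, returns false
    result <- waiter.joinWithNever // the waiting fiber sees "done"
  } yield result
```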
Mateusz Górski
@goral09
separating those two concerns makes sense. Although for building a communication channel, as in the MVar examples, it looks easier to get off the ground with MVar than with Ref + Deferred
Ionuț G. Stan
@igstan
"Concurrent Programming in ML" has probably one of the best coverage of these concurrency concepts. Really underated book, IMO.
Fabio Labella
@SystemFw
@goral09 I disagree
MVar gives a lot more ways to end up in deadlock
it's rare that you can get away with one MVar
you usually need multiple, and each can be empty or full multiple times, and there's blocking involved in each transition
so the states you need to think about are many many more
(you can probably achieve slightly more fine-grained semantics)
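As a contrived illustration of that risk, here is a two-MVar sketch (cats-effect 2 API again, illustrative names) where, depending on the interleaving, each fiber takes one MVar and then waits forever on the other:

```scala
import cats.effect.Concurrent
import cats.effect.concurrent.MVar // cats-effect 2.x
import cats.syntax.all._

// Contrived deadlock: each fiber empties one MVar and then waits on the other,
// but the matching put never comes, so (depending on interleaving) both can wait forever.
def deadlockProne[F[_]](implicit F: Concurrent[F]): F[Unit] =
  for {
    m1 <- MVar.of[F, Int](1)
    m2 <- MVar.of[F, Int](2)
    _  <- F.start(m1.take *> m2.take) // takes m1, may then wait on m2 forever
    _  <- F.start(m2.take *> m1.take) // takes m2, may then wait on m1 forever
  } yield ()
```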
Mateusz Górski
@goral09
hmm, deadlock? I thought that since you can either put or read/take from it (waiting asyncly in both cases), the only deadlock I can think of is when all the clients are read-only or write-only
Fabio Labella
@SystemFw
@goral09 that's one MVar
Mateusz Górski
@goral09
yes
Fabio Labella
@SystemFw
you can't implement a queue with one MVar
Mateusz Górski
@goral09
single-element queue?
Fabio Labella
@SystemFw
a single-element queue is hardly a queue...
but sure, a single-element queue is easy to implement with MVar since an MVar basically is a single-element queue
Mateusz Górski
@goral09
worker#1 -> mvar.put(1)
worker#2 -> mvar.put(2)
…
worker#n -> mvar.put(n)

// they all block asyncly if mvar is non-empty
// clients reading forever
client#1 -> mvar.take
client#2 -> mvar.take
…
client#n -> mvar.take
I imagine if all workers and clients share the same mvar instance we get something that looks like a queue
Fabio Labella
@SystemFw
you mean take, not read there
but that depends on the semantics of reawakening
Mateusz Górski
@goral09
probably yes, I don't know the API
Fabio Labella
@SystemFw
but it still doesn't fit the bill imo
imagine an unbounded queue
I want workers to enqueue and keep going
Mateusz Górski
@goral09
yeah, that doesn't fit this design
Fabio Labella
@SystemFw
since one of the main usages of a queue is decoupling producer speed from consumer speed
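To make that contrast concrete, here is a rough sketch of an unbounded queue in the Ref + Deferred style (cats-effect 3 style, cancellation ignored, illustrative names): producers enqueue and keep going, consumers only wait when the queue is empty. The real fs2/cats-effect Queue handles cancellation and more, but the state-machine idea is the same.

```scala
import cats.effect.{Deferred, IO, Ref}
import cats.syntax.all._
import scala.collection.immutable.{Queue => ScalaQueue}

// State is either queued-up values (no one waiting) or queued-up consumers
// (Deferreds waiting for a value). New Deferred instances live inside the Ref.
final class UnboundedQueue[A] private (
    state: Ref[IO, Either[ScalaQueue[Deferred[IO, A]], ScalaQueue[A]]]
) {

  def enqueue(a: A): IO[Unit] =
    state.modify {
      case Left(waiters) if waiters.nonEmpty =>
        val (waiter, rest) = waiters.dequeue
        (Left(rest), waiter.complete(a).void)  // hand the value straight to a waiting consumer
      case Left(_)       => (Right(ScalaQueue(a)), IO.unit)
      case Right(values) => (Right(values.enqueue(a)), IO.unit) // producers never wait
    }.flatten

  def dequeue: IO[A] =
    Deferred[IO, A].flatMap { waiter =>
      state.modify {
        case Right(values) if values.nonEmpty =>
          val (a, rest) = values.dequeue
          (Right(rest), IO.pure(a))            // value available: return it immediately
        case Right(_)      => (Left(ScalaQueue(waiter)), waiter.get)
        case Left(waiters) => (Left(waiters.enqueue(waiter)), waiter.get) // register and wait
      }.flatten
    }
}

object UnboundedQueue {
  def create[A]: IO[UnboundedQueue[A]] =
    Ref.of[IO, Either[ScalaQueue[Deferred[IO, A]], ScalaQueue[A]]](Right(ScalaQueue.empty[A]))
      .map(new UnboundedQueue(_))
}
```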