To go from `F[List[(DataSpec, DataSetRef)]]` to `F[List[DataSetRef]]`: `map(_.map(_._2))`
For `F[List[(DataSpec, DataSetRef)]]`: `map(...).sequence.map(_.flatten)`
`map(...).sequence` is `traverse`
`traverse(...).map(_.flatten)` is `flatTraverse`
Instead of papering over the final type
exactly
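Those equivalences can be sketched with only the standard library. This is a hand-rolled version specialised to `F = Option` (in cats, `sequence`, `traverse`, and `flatTraverse` are provided as extension methods for any `Applicative`/`Monad`); `DataSpec` and `DataSetRef` here are assumed stand-ins for the types from the conversation:

```scala
// Hand-rolled sequence/traverse/flatTraverse for F = Option, to show the
// equivalences discussed above without needing cats on the classpath.

def sequence[A](lfa: List[Option[A]]): Option[List[A]] =
  lfa.foldRight(Option(List.empty[A])) { (fa, acc) =>
    for { a <- fa; as <- acc } yield a :: as
  }

// map(...).sequence IS traverse:
def traverse[A, B](la: List[A])(f: A => Option[B]): Option[List[B]] =
  sequence(la.map(f))

// traverse(...).map(_.flatten) IS flatTraverse:
def flatTraverse[A, B](la: List[A])(f: A => Option[List[B]]): Option[List[B]] =
  traverse(la)(f).map(_.flatten)

// Stand-ins for the types from the chat:
final case class DataSpec(name: String)
final case class DataSetRef(id: Int)

val specs = List(DataSpec("a"), DataSpec("bc"))
def lookup(s: DataSpec): Option[DataSetRef] = Some(DataSetRef(s.name.length))

val viaMapSequence = sequence(specs.map(lookup))
val viaTraverse    = traverse(specs)(lookup)
assert(viaMapSequence == viaTraverse)

// When each element expands to a whole list, flatTraverse flattens inside F:
val many = flatTraverse(specs)(s => Some(List.fill(s.name.length)(DataSetRef(s.name.length))))
assert(many == Some(List(DataSetRef(1), DataSetRef(2), DataSetRef(2))))

// And extracting the second half of each pair inside F:
val pairs: Option[List[(DataSpec, DataSetRef)]] =
  traverse(specs)(s => lookup(s).map(r => (s, r)))
val refs: Option[List[DataSetRef]] = pairs.map(_.map(_._2))
assert(refs == Some(List(DataSetRef(1), DataSetRef(2))))
```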
`Task.map2` and `Task.parMap2`. The documentation states that the former is sequential while the latter is parallel. But it seems that the implementations of the two are the same except for a subtle difference: `Task.parMap2` just calls `Task.mapBoth` directly, while `Task.map2` first calls the non-static (but final anyway) `Task.zipMap`, which in turn calls `Task.mapBoth`. So, in my understanding, they should both behave in the same way. What makes one of them sequential and the other parallel?
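The usual distinction between sequential and parallel pair composition is not which method gets called, but whether the second computation is even *created* before the first one finishes. This is not Monix itself, just an analogy with standard-library `Future`s illustrating that distinction:

```scala
// Analogy only: with Futures, "map2-style" composition builds the second
// computation inside flatMap, so it cannot start until the first finishes;
// "parMap2-style" composition starts both eagerly, then combines them.
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import java.util.concurrent.ConcurrentLinkedQueue

val events = new ConcurrentLinkedQueue[String]()

def step(name: String): Future[Int] = Future {
  events.add(s"$name start"); Thread.sleep(50); events.add(s"$name end"); 1
}

// Sequential: step("b") is only constructed after step("a") completes,
// so the two can never overlap.
val sequential: Future[Int] =
  step("a").flatMap(a => step("b").map(b => a + b))
val seqResult = Await.result(sequential, 5.seconds)
assert(seqResult == 2)
assert(events.toArray.toList == List("a start", "a end", "b start", "b end"))

// Parallel: both Futures are already running before they are combined,
// so (given enough threads) they execute at the same time.
events.clear()
val fc = step("c") // already running
val fd = step("d") // already running
val parallel: Future[Int] = fc.flatMap(c => fd.map(d => c + d))
assert(Await.result(parallel, 5.seconds) == 2)
```

The analogous answer for `Task` would be that the parallel variant forks/starts both tasks before combining, while the sequential one chains the second after the first, regardless of which internal helper ultimately does the combining.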
`Task`s. What's the recommended way to set this up? I'm currently trying to use monix-kafka's `KafkaConsumerObservable` and am in the process of building out an `Observer`, which I believe means I eventually need to convert the `ConsumerRecord` into my data type and kick off my processing within `onNext()`. Does that mean I need to run the `Task` within `onNext()`? Is `consumeWith` a better option? And would that mean I need to handle polling?
`Task`s that are not yet attached to execution contexts should be serializable out of the box, no?
`Observable` that waits for input? I imagine if you're streaming an `Observable` from a file and there's a delay on disk IO, it will patiently wait before getting the next bits and processing them. Is there a way to get the same behaviour waiting for input from my program?
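The underlying mechanism is just backpressure on a blocking source. A minimal standard-library sketch of that "wait for input" behaviour (in Monix you would typically push into something like a `ConcurrentSubject` instead; the queue-of-`Option` protocol here is an assumption for illustration):

```scala
// Sketch: a consumer that blocks until the producer has something, the same
// way a file-backed stream waits on disk IO. None signals end of input.
import java.util.concurrent.LinkedBlockingQueue
import scala.collection.mutable.ListBuffer

val queue = new LinkedBlockingQueue[Option[Int]]()
val received = ListBuffer.empty[Int]

// Consumer thread: take() blocks patiently until a value arrives.
val consumer = new Thread(() => {
  Iterator.continually(queue.take()).takeWhile(_.isDefined)
    .foreach(x => received += x.get)
})
consumer.start()

// Producer: the rest of the program pushes values whenever it has them.
queue.put(Some(1))
Thread.sleep(20) // a gap between inputs; the consumer just waits
queue.put(Some(2))
queue.put(None)  // complete the "stream"
consumer.join()  // join() also safely publishes `received` to this thread

assert(received.toList == List(1, 2))
```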
`Task` versions whenever possible; there are also variants for `traverse` and `parTraverse`, and for normal `sequence`, plus the unordered variants