Instead of papering over the final type
Task.parMap2. The documentation states that the former is sequential while the latter is parallel, yet the implementations of the two appear to be the same except for a subtle difference: Task.map2 calls the non-static (but final anyway) Task.zipMap first, which in turn calls Task.mapBoth. So, in my understanding, they should both behave the same way. What makes one of them sequential and the other parallel?
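The sequential/parallel distinction comes down to *when* the second computation is started. Here is a minimal sketch of that distinction using plain `scala.concurrent.Future` rather than Monix's actual implementation (the by-name parameters emulate `Task`'s laziness; the names `map2`/`parMap2` mirror the Task API but the bodies are illustrative only):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Simulated effect: sleeps ~200 ms before producing its value.
def slow(x: Int): Future[Int] = Future { Thread.sleep(200); x }

// Sequential combination: the second computation is only started inside the
// continuation of the first (flatMap), so total time is ~400 ms.
def map2[A, B, C](fa: => Future[A], fb: => Future[B])(f: (A, B) => C): Future[C] =
  fa.flatMap(a => fb.map(b => f(a, b)))

// Parallel combination: both computations are kicked off eagerly before being
// combined, so total time is ~200 ms.
def parMap2[A, B, C](fa: => Future[A], fb: => Future[B])(f: (A, B) => C): Future[C] = {
  val (ra, rb) = (fa, fb) // force both by-name arguments up front
  ra.flatMap(a => rb.map(b => f(a, b)))
}

def timed[T](body: => T): (T, Long) = {
  val t0 = System.nanoTime()
  val r = body
  (r, (System.nanoTime() - t0) / 1000000L)
}

val (seqResult, seqMs) = timed(Await.result(map2(slow(1), slow(2))(_ + _), 5.seconds))
val (parResult, parMs) = timed(Await.result(parMap2(slow(1), slow(2))(_ + _), 5.seconds))
println(s"sequential: $seqResult in ~${seqMs}ms; parallel: $parResult in ~${parMs}ms")
```

Both produce the same value; only the scheduling differs, which is easy to miss when reading the combinator bodies alone.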
Tasks. What's the recommended way to set this up? I'm currently trying to use monix-kafka's KafkaConsumerObservable and am in the process of building out an Observer, which I believe means I eventually need to convert the ConsumerRecord into my data type and kick off my processing within onNext(). Does that mean I need to run the Task there, or is consumeWith a better option? And would that mean I need to handle polling?
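On doing the processing inside onNext: a Monix observer acknowledges each element with a Future, and the upstream (here, the Kafka poll loop) waits for that acknowledgement before pushing the next record, so the polling stays on the library's side. The sketch below is a simplified model of that back-pressure contract in plain Scala, not monix-kafka's actual API; `Record` is a hypothetical stand-in for Kafka's ConsumerRecord:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Simplified model of the acknowledgement protocol (not the real Monix types).
sealed trait Ack
case object Continue extends Ack
case object Stop extends Ack

trait SimpleObserver[A] {
  def onNext(elem: A): Future[Ack] // upstream waits on this before pushing more
  def onComplete(): Unit
}

// Hypothetical record type standing in for Kafka's ConsumerRecord.
final case class Record(key: String, value: String)

// The "poll loop" lives here, not in user code: push one record, await the
// acknowledgement, then push the next.
def feed[A](records: List[A], obs: SimpleObserver[A]): Unit = {
  val it = records.iterator
  var continue = true
  while (continue && it.hasNext)
    continue = Await.result(obs.onNext(it.next()), 5.seconds) == Continue
  if (continue) obs.onComplete()
}

// User side: convert each record to the domain type and process it in onNext.
val processed = scala.collection.mutable.ListBuffer.empty[String]
feed(
  List(Record("k1", "a"), Record("k2", "b")),
  new SimpleObserver[Record] {
    def onNext(r: Record): Future[Ack] = Future {
      processed += r.value.toUpperCase // per-record processing
      Continue
    }
    def onComplete(): Unit = println("stream complete")
  }
)
```

Because each record is acknowledged before the next one is pushed, per-record processing in onNext is naturally back-pressured.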
Tasks that are not yet attached to execution contexts should be serializable out of the box, no?
Observable that waits for input? I imagine that if you're streaming an Observable from a file and there's a delay on disk IO, it will patiently wait before getting the next bits and processing them. Is there a way to get the same behaviour waiting for input from my program?
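Conceptually, an Observable that waits for input is just a consumer that blocks until the program supplies the next element. Below is a plain-JDK sketch of that behaviour (in Monix itself a ConcurrentSubject the program pushes into would play this role); the queue and `None`-as-end-of-input convention are assumptions of this sketch, not Monix API:

```scala
import java.util.concurrent.LinkedBlockingQueue

// The consuming side blocks in take() until the program supplies the next
// element, mirroring how an Observable backed by slow disk IO simply waits
// for the next bits. None signals end-of-input.
val queue = new LinkedBlockingQueue[Option[Int]]()

// Consumer: pulls elements one by one, waiting patiently for each.
def drain(acc: List[Int]): List[Int] =
  queue.take() match {
    case Some(x) => drain(acc :+ x)
    case None    => acc
  }

// Producer: the program, feeding input with arbitrary delays.
val producer = new Thread(() => {
  List(1, 2, 3).foreach { x => Thread.sleep(50); queue.put(Some(x)) }
  queue.put(None)
})
producer.start()

val received = drain(Nil) // blocks until all input has arrived
producer.join()
println(received)
```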
Task versions whenever possible; there are also sequence + unordered variants
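The difference between the ordered and unordered variants is what order results come back in. A sketch with plain Futures rather than Monix's own combinators (exact Task names vary by version, e.g. parSequence vs. gather): ordered gathering returns results in input order regardless of completion order, while unordered gathering can emit results as they complete:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import java.util.concurrent.ConcurrentLinkedQueue
import scala.jdk.CollectionConverters._

// Three computations finishing at different times.
def make(): List[Future[Int]] = List(
  Future { Thread.sleep(150); 1 }, // finishes last
  Future { Thread.sleep(50); 2 },  // finishes first
  Future { Thread.sleep(100); 3 }
)

// Ordered gathering: results come back in input order,
// no matter which future completed first.
val ordered = Await.result(Future.sequence(make()), 5.seconds)

// "Unordered" gathering: record results in completion order instead.
val completions = new ConcurrentLinkedQueue[Int]()
Await.result(Future.sequence(make().map(_.map(completions.add))), 5.seconds)
val unordered = completions.asScala.toList

println(s"ordered: $ordered, unordered: $unordered")
```

The unordered form can be cheaper when the result order does not matter, since nothing has to buffer results to restore the input order.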
java.lang.NoSuchMethodError: monix.eval.Task$Context.frameRef()Lmonix/eval/Task$FrameIndexRef;
  at monix.kafka.KafkaConsumerObservable.$anonfun$feedTask$4(KafkaConsumerObservable.scala:90)
  at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:140)
  at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
  at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
  at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
  at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
1.0.0-RC1, and cats 1.0.0. Does this mean that monix-kafka does not yet have a version compatible with monix 3.0.0-RC2?