Christopher Davenport
@ChristopherDavenport
It just has a single type as opposed to two.
Paul Cleary
@pauljamescleary
implicit CS: ContextShift[IO] perhaps
Christopher Davenport
@ChristopherDavenport
Yes.
Paul Cleary
@pauljamescleary
I really want to abstract away the effect type
I have to balance scratching that itch with making progress on other things
thanks @ChristopherDavenport for your help
Christopher Davenport
@ChristopherDavenport
Abstracting the effect gives greater clarity about what a particular function can do. But getting to IO in general is a major improvement over Future.
Fabio Labella
@SystemFw
@pauljamescleary note that you will be able to remove the EC everywhere (apart from legacy blocking code) when you move to F and IOApp
:+1: to what Chris said though
once you're in IO, you have a safe/sane refactoring path to abstract to F later
Paul Cleary
@pauljamescleary
@SystemFw thanks, yea, I may take a stab at doing that and see how far I get
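A minimal sketch of the two styles being discussed, with made-up names (UserService, fetchUser): the concrete-IO version takes the implicit ContextShift[IO] suggested above, the abstracted version constrains F[_] instead, and IOApp supplies the ContextShift and Timer so no EC has to be threaded around by hand.
import cats.effect.{ContextShift, ExitCode, IO, IOApp, Sync}
import cats.implicits._

object UserService {
  // Concrete-IO version: the caller supplies a ContextShift[IO],
  // as in the `implicit CS: ContextShift[IO]` suggestion above.
  def fetchUserIO(id: Long)(implicit cs: ContextShift[IO]): IO[String] =
    IO.shift *> IO(s"user-$id")

  // Abstracted version: the constraint (here Sync) documents what the
  // function is allowed to do; IO is only chosen at the edge.
  def fetchUser[F[_]: Sync](id: Long): F[String] =
    Sync[F].delay(s"user-$id")
}

object Main extends IOApp { // IOApp provides the implicit ContextShift[IO] and Timer[IO]
  def run(args: List[String]): IO[ExitCode] =
    (UserService.fetchUserIO(1L), UserService.fetchUser[IO](2L)).tupled
      .flatMap(users => IO(println(users)))
      .as(ExitCode.Success)
}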
13h3r2
@13h3r2

Hey! I am not sure I understand how Concurrent.start and Fiber work.

I have multiple concurrent computations that should be started by a trigger. Then I need to wait for all of the computations to complete. I am running multiple fibers and using Deferred for synchronization. I realized that after calling Deferred.complete my fibers are processed sequentially, not in parallel (as I expected).

I added an explicit ContextShift.shift after Deferred.get in my fiber and now it runs in parallel.

Here is a gist - https://gist.github.com/13h3r2/1923169269db6732170c0058d8a869c1 Pay attention to line 26.

A few questions based on this gist:

  • is this expected behavior?
  • if so, what causes the sequential processing in this case: fibers, Deferred, or something else?
  • what is the intended way to avoid this kind of problem?

Thanks
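A hedged reconstruction of the pattern described above (the gist itself is not reproduced here, and all names are illustrative): each fiber blocks on a Deferred used as a start trigger, and the explicit shift after the get is what lets the continuations run on separate threads instead of sequentially on the completer's thread.
import cats.effect.{ExitCode, IO, IOApp}
import cats.effect.concurrent.Deferred
import cats.implicits._

object BarrierExample extends IOApp {
  // Stand-in for the CPU-bound work the gist simulates with Thread.sleep.
  def work(i: Int): IO[Unit] = IO(Thread.sleep(500)) *> IO(println(s"done $i"))

  def run(args: List[String]): IO[ExitCode] =
    for {
      trigger <- Deferred[IO, Unit]
      // Without the IO.shift, every continuation resumes on the thread
      // that called trigger.complete, so the work runs sequentially.
      tasks    = List.range(0, 4).map(i => trigger.get *> IO.shift *> work(i))
      fibers  <- tasks.traverse(_.start)
      _       <- trigger.complete(())    // release all the waiting fibers
      _       <- fibers.traverse_(_.join)
    } yield ExitCode.Success
}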

Gavin Bisesi
@Daenyth
@13h3r2 if you "just" want to run computations in parallel and collect the answers, you can do
import cats.implicits._
import cats.effect.implicits._
val fa: IO[A]
val fb: IO[B]
val g: (A, B) => C
implicit val cs: ContextShift[IO] = ??? // could get via EC
val fc: IO[C] = (fa, fb).parMapN(g)
in your gist there are some awkward points
fibers.foldLeft(... _.join) means you will join them in order
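A runnable version of that suggestion, with illustrative values filled in; here the ContextShift[IO] is built from the global ExecutionContext instead of coming from IOApp.
import cats.effect.{ContextShift, IO}
import cats.implicits._
import scala.concurrent.ExecutionContext

object ParMapNExample extends App {
  // Needed for the Parallel[IO] instance that parMapN relies on.
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

  val fa: IO[Int] = IO(40)
  val fb: IO[Int] = IO(2)
  val g: (Int, Int) => Int = _ + _

  // Runs fa and fb concurrently and combines the results with g.
  val fc: IO[Int] = (fa, fb).parMapN(g)

  println(fc.unsafeRunSync()) // 42
}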
13h3r2
@13h3r2
@Daenyth thanks. I found the Parallel instance for IO, but I would like to figure out what I was doing wrong in my example. It looks like I am missing something.
Gavin Bisesi
@Daenyth
Using Thread.sleep also means you're blocking a real thread instead of async sleeping, not sure how that would interact
Have you looked at fs2?
It makes this kind of thing way easier
Fabio Labella
@SystemFw
@13h3r2 what version are you on, to begin with?
13h3r2
@13h3r2
I do not think joining in order is important here. They are already fibers and should be shifted, as far as I understand. Thread.sleep should be fine here because I have a lot of background threads. In the real example I was using a Timer.
@SystemFw 1.0.0
Fabio Labella
@SystemFw
ok, let me have a look at the code
13h3r2
@13h3r2
Thanks.
Gavin Bisesi
@Daenyth
for example:
fs2.Stream.range(start = 0, stopExclusive = 10)
  .mapAsync(parallelLimit: Int)(_ => computation)
  // or mapAsyncUnordered if the order doesn't matter
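A runnable sketch of that idea (fs2 1.0 syntax, names illustrative), bounding the parallelism explicitly instead of via the execution context.
import cats.effect.{ExitCode, IO, IOApp}
import scala.concurrent.duration._

object MapAsyncExample extends IOApp {
  // Stand-in computation; IO.sleep uses the Timer from IOApp
  // rather than blocking a thread with Thread.sleep.
  def computation(i: Int): IO[Int] = IO.sleep(100.millis).map(_ => i * 2)

  def run(args: List[String]): IO[ExitCode] =
    fs2.Stream
      .range(0, 10)
      .covary[IO]
      .mapAsync(4)(computation)      // at most 4 computations in flight
      .evalMap(n => IO(println(n)))
      .compile
      .drain
      .map(_ => ExitCode.Success)
}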
Fabio Labella
@SystemFw
as a bit of advice, it's better to post an example with the proper code (i.e. a Timer and not Thread.sleep, which you should never use)
13h3r2
@13h3r2
The gist is runnable. You can remove shift and see the difference.
@SystemFw agree. will do next time
I can fix this one if you think this is important.
Fabio Labella
@SystemFw
no
I don't think it's important enough to say "I don't want to look at it"
I'll fix it
just some advice in general :)
13h3r2
@13h3r2
Thanks.
Gavin Bisesi
@Daenyth
d.get *> this is weird
Fabio Labella
@SystemFw
(also, my biased opinion is to use fs2, but let's fix this one first)
Gavin Bisesi
@Daenyth
also maybe fibers <- tasks.traverse(_.start) might be clearer
instead of the raw fold
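For concreteness, a small sketch (names assumed, not taken from the gist) of the two spellings: both join the fibers in order, but the traverse-based one makes the start/join structure easier to read.
import cats.effect.{ContextShift, IO}
import cats.implicits._
import scala.concurrent.ExecutionContext

object StartJoinSketch {
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

  val tasks: List[IO[Unit]] = List.range(0, 4).map(i => IO(println(i)))

  // Fold-style join, roughly what the gist does.
  val folded: IO[Unit] =
    tasks.traverse(_.start).flatMap(_.foldLeft(IO.unit)((acc, f) => acc *> f.join))

  // traverse-based version: same behaviour, clearer structure.
  val traversed: IO[Unit] =
    for {
      fibers <- tasks.traverse(_.start)
      _      <- fibers.traverse_(_.join)
    } yield ()
}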
13h3r2
@13h3r2
I agree. This can be improved. But it does not explain the behavior :)
Gavin Bisesi
@Daenyth
right, just trying to make sense of the code
I think it is the fold/join thing though
oh I see what the get is doing
13h3r2
@13h3r2
It is important to note that the Thread.sleep represents a CPU-intensive operation, not just waiting for something.
Gavin Bisesi
@Daenyth
using Deferred as a barrier
in that case I really recommend fs2
because as written you can't control how many CPU threads are going at a time except via the EC
whereas with fs2 you can tell it directly how many things are allowed in parallel, no matter the inputs
13h3r2
@13h3r2
I got your point. But I am really curious what is wrong with this code.
Gavin Bisesi
@Daenyth
hmm
I did think that start auto-shifted