Rohde Fischer
@rfftrifork
as almost all things in programming :)
Fabio Labella
@SystemFw
anyway, you can join your fibers at the end to actually wait for them in the cats-effect realm, without knowing about threads
this concept (of not losing spawned concurrent processes) leads to a safer model of concurrency, and it's supported by e.g. fs2 concurrently
you can build it on top of a "lossy" model like cats-effect, which acts like a primitive
gtg now, hopefully that sheds some light
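A minimal sketch (cats-effect 2.x; the object and messages are illustrative) of what "joining your fibers at the end" looks like, without touching threads directly:

import cats.effect.{ExitCode, IO, IOApp}
import cats.implicits._
import scala.concurrent.duration._

object JoinExample extends IOApp {
  def run(args: List[String]): IO[ExitCode] =
    for {
      fiber <- (IO.sleep(1.second) *> IO(println("background done"))).start
      _     <- IO(println("main work"))
      _     <- fiber.join // wait for the spawned fiber instead of losing it
    } yield ExitCode.Success
}

IOApp supplies the implicit ContextShift and Timer that start and IO.sleep need.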
Rohde Fischer
@rfftrifork
a lot more than 10 minutes ago definitely, thanks a lot
Rohde Fischer
@rfftrifork
per the talk of daemon vs. non-daemon threads yesterday, I was actually wondering: would it be sound to always join if I expect them to be non-daemon, so I can keep the code independent of which ExecutionContext is in use?
Paul Snively
@paul-snively
I prefer to always join if that's the desired behavior.
Rohde Fischer
@rfftrifork
@paul-snively (y)
Torsten Schmits
@tek
do you guys let timers run on the global EC?
Gavin Bisesi
@Daenyth
When we were migrating from App to IOApp we used the same pool that backed our ContextShift for the Timer. With IOApp it has its own ScheduledExecutorService
We didn't observe any issues from it
You probably don't want to use global for anything, ideally
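A minimal sketch (cats-effect 2.x; names are illustrative) of giving the Timer its own small pool rather than running it on ExecutionContext.global:

import java.util.concurrent.Executors
import cats.effect.{IO, Timer}
import scala.concurrent.ExecutionContext

object TimerPool {
  // a tiny dedicated pool that only the timer's continuations run on
  private val timerEc: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1))

  implicit val timer: Timer[IO] = IO.timer(timerEc)
}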
Torsten Schmits
@tek
does a timer permanently occupy a thread?
Gavin Bisesi
@Daenyth
No, nothing in cats-effect does
You can run a whole concurrent app on one thread
Everything is nonblocking (at the thread level)
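A rough sketch (cats-effect 2.x; illustrative) of that claim: IO.sleep never blocks a thread, so two fibers interleave even on a single-threaded pool (IO.timer does keep one internal scheduler thread just to fire the wake-ups):

import java.util.concurrent.Executors
import cats.effect.{ContextShift, IO, Timer}
import cats.implicits._
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

object SingleThreadDemo extends App {
  val singleEc: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())

  implicit val cs: ContextShift[IO] = IO.contextShift(singleEc)
  implicit val timer: Timer[IO]     = IO.timer(singleEc)

  def tick(name: String): IO[Unit] =
    (IO.sleep(100.millis) *> IO(println(name))).replicateA(3).void

  // "a" and "b" print interleaved even though one thread runs all the IOs
  (tick("a"), tick("b")).parTupled.unsafeRunSync()
}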
Torsten Schmits
@tek
can you describe a scenario in which running a timer on global poses a concrete problem?
Gavin Bisesi
@Daenyth
No
global itself is not well suited for use with cats-effect but that's it
Torsten Schmits
@tek
ok
Gavin Bisesi
@Daenyth
it's designed for mixed cpu+blocking-io work in a single pool, which makes it optimized for neither
cpu-bound work will suffer performance-wise vs using a bounded thread pool
Torsten Schmits
@tek
so using a fixed pool with cats-effect is generally a bad choice?
Gavin Bisesi
@Daenyth
and blocking work can preempt cpu-bound work if global decides not to put that work on its own thread
Fixed pool at #cores for ContextShift as your main pool
CachedThreadPool for blocking work, wrapped with Blocker to make the signatures clear and safe
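A sketch (cats-effect 2.x; the names are illustrative, not a prescribed setup) of that layout: a fixed pool sized to the CPU count backing ContextShift, and a cached pool wrapped in Blocker so blocking work is explicit in the types:

import java.util.concurrent.Executors
import cats.effect.{Blocker, ContextShift, IO}
import scala.concurrent.ExecutionContext

object Pools {
  val cpuEc: ExecutionContext =
    ExecutionContext.fromExecutor(
      Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))

  implicit val cs: ContextShift[IO] = IO.contextShift(cpuEc)

  val blocker: Blocker =
    Blocker.liftExecutorService(Executors.newCachedThreadPool())

  // blocking calls run on the cached pool, then shift back to cpuEc
  def readLine: IO[String] = blocker.blockOn(IO(scala.io.StdIn.readLine()))
}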
Torsten Schmits
@tek
I see, thanks
Gavin Bisesi
@Daenyth
We've done Timer backed by the same pool as ContextShift before and didn't hit problems; it's somewhat better to give that its own 1-2 thread fixed pool that nothing else at all uses
I'll be sharing slides after my "intro to cats-effect" talk tonight
I cover this
Torsten Schmits
@tek
great
Gavin Bisesi
@Daenyth
The talk in case someone here is in Boston and doesn't watch the meetup group: https://www.meetup.com/boston-scala/events/265023178
Qi Wang
@Qi77Qi
is this expected
scala> IO(println(2))
res18: cats.effect.IO[Unit] = IO$1312380460

scala> IO(println(3))
res19: cats.effect.IO[Unit] = IO$480567155

scala> (res18, res19).parSequence_
res20: cats.effect.IO[Unit] = IO$1099161557

scala> res20.unsafeRunSync
3
I thought (res18, res19).parSequence_ is the same as
scala> List(res18, res19).parSequence_
res22: cats.effect.IO[Unit] = IO$1532827774

scala> res22.unsafeRunSync
2
3
Luka Jacobowitz
@LukaJCB
@Qi77Qi This isn’t the same because the Tuple2 instance is defined quite differently from the List instance
I think this will be clearer if instead of parSequence_ you use the non-underscore version parSequence
If you want to mimic the list behavior, you shouldn’t use its Traverse instance, but its Applicative instance like so:
(res18, res19).parTupled
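A sketch of the difference, assuming cats.effect.IO and cats.implicits._ are imported and an implicit ContextShift[IO] is in scope (e.g. inside an IOApp):

val a: IO[Unit] = IO(println(2))
val b: IO[Unit] = IO(println(3))

// Tuple2's Traverse only traverses the last element, so `a` is never run here:
val viaTraverse: IO[(IO[Unit], Unit)] = (a, b).parSequence

// the Applicative/Parallel route runs both effects, like List(a, b).parSequence:
val viaParallel: IO[(Unit, Unit)] = (a, b).parTupled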
Qi Wang
@Qi77Qi
scala> (res18, res19).parSequence
res25: cats.effect.IO[(cats.effect.IO[Unit], Unit)] = <function1>
hmmm..mmmm
if this is intended behavior for Tuple2, what is it useful for? :thinking:
Daniel Spiewak
@djspiewak
@Qi77Qi I'm guessing that you managed to get that behavior because there's a Parallel[(A, ?)] where Monoid[A], and there's a Monoid[IO[A]] where Monoid[A], and in your case, A is Unit and thus forms a trivial Monoid.
Qi Wang
@Qi77Qi
I think import cats.implicits._ is the only relevant import I did
Daniel Spiewak
@djspiewak
actually I guess your tuple is (IO[A], IO[A])
yeah implicits has a ton of stuff
including all the instances you would need here :-)
but now that I think about it, my explanation doesn't make much sense. I'm not 100% sure where the inner IO[Unit] is coming from
Luka's answer is the more useful one
I'm just trying to trace through what implicits made your example possible
oh it's just because Tuple2 has a Traverse
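A small sketch of that: Tuple2's Traverse treats the pair as a container whose only element is the second component, while the first is just carried along (which is why only res19's println ran above):

import cats.implicits._

val pair: (String, Option[Int]) = ("label", Some(1))
val sequenced: Option[(String, Int)] = pair.sequence // Some(("label", 1))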