Daniel Spiewak
@djspiewak
@Qi77Qi I'm guessing that you managed to get that behavior because there's a Parallel[(A, ?)] where Monoid[A], and there's a Monoid[IO[A]] where Monoid[A], and in your case, A is Unit and thus forms a trivial Monoid.
Qi Wang
@Qi77Qi
I think import cats.implicits._ is the only relevant import I did
Daniel Spiewak
@djspiewak
actually I guess your tuple is (IO[A], IO[A])
yeah implicits has a ton of stuff
including all the instances you would need here :-)
but now that I think about it, my explanation doesn't make much sense. I'm not 100% sure where the inner IO[Unit] is coming from
Luka's answer is the more useful one
I'm just trying to trace through what implicits made your example possible
oh it's just because Tuple2 has a Traverse
oh yeah and you used println rather than Int
so yeah, that's how it works
implicit def catsStdInstancesForTuple2[X]: Traverse[(X, ?)]
that's defined in implicits._
def parSequence[T[_], M[_], A](tma: T[M[A]])(implicit arg0: Traverse[T], P: Parallel[M]): M[T[A]]
that's where parSequence comes from
the M in this case is IO
the T is (IO[Unit], ?)
does it make more sense now why only the right one is run?
this is uniform with Functor, btw, since (X, ?) forms a Functor for all X
(1, 2).map(_ + 1) // => (1, 3)
Daniel Spiewak
@djspiewak
List(1, 2).map(_ + 1) // => List(2, 3)
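(Editorial aside: the right-bias Daniel describes can be sketched without cats at all. The Functor trait and tuple2Functor below are made-up, plain-Scala stand-ins for cats' Functor and catsStdInstancesForTuple2, just to show why map on (X, ?) can only touch the second element:)

```scala
// Hypothetical stand-in for cats' Functor, with no cats dependency.
// For a pair (X, A), map is handed a function A => B; there is nothing
// it could legally do to the left element of fixed type X, so it can
// only transform the right one.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

def tuple2Functor[X]: Functor[({ type L[A] = (X, A) })#L] =
  new Functor[({ type L[A] = (X, A) })#L] {
    def map[A, B](fa: (X, A))(f: A => B): (X, B) = (fa._1, f(fa._2))
  }

// Matches the cats behavior above: the left 1 is carried through untouched.
val mapped = tuple2Functor[Int].map((1, 2))(_ + 1) // => (1, 3)
```

The same shape explains parSequence on a tuple: Traverse[(X, ?)] only sequences the right component, so only the right IO is run.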
Qi Wang
@Qi77Qi
Ohhhh...
Daniel Spiewak
@djspiewak
:-)
as Luka said, parTupled will get you what you're looking for
Qi Wang
@Qi77Qi
yea…I tried parTupled as well...
was just curious why it’s different for tuple vs List…I think that makes sense
Daniel Spiewak
@djspiewak
:-)
Qi Wang
@Qi77Qi
cool cool..thanks!
Daniel Spiewak
@djspiewak
np!
Colt Frederickson
@coltfred
Is this the approved way to limit a parTraverse to N threads?
scala> import cats.implicits._, cats.effect._, cats._, scala.concurrent.ExecutionContext, java.util.concurrent._
import cats.implicits._
import cats.effect._
import cats._
import scala.concurrent.ExecutionContext
import java.util.concurrent._

scala> implicit val contextShift = IO.contextShift(ExecutionContext.global)
contextShift: cats.effect.ContextShift[cats.effect.IO] = cats.effect.internals.IOContextShift@22edc986

scala> val ec = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))
ec: scala.concurrent.ExecutionContextExecutor = scala.concurrent.impl.ExecutionContextImpl@132c5e29

scala> 1.to(10).toList.parTraverse(i => contextShift.evalOn(ec)(IO{println(s"Starting $i"); Thread.sleep(i.toLong * 250); println(s"Ending $i")}))                                                                                                            
res0: cats.effect.IO[List[Unit]] = IO$849775196

scala> res0.unsafeRunSync
That is to say, you evalOn the fixed-size ec, which is what's supposed to limit the concurrency.
Oleg Pyzhcov
@oleg-py
You can't limit parTraverse to N threads; that would require knowing that the IOs inside don't spawn any fibers, which you generally can't know.
If you want to limit it to N fibers/"logical threads", there's parTraverseN syntax in cats.effect.implicits in a few recent versions, or Semaphore.
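(Editorial aside: Oleg's permit idea can be illustrated with only the standard library. The traverseN helper below is a made-up sketch — java.util.concurrent.Semaphore plus Future standing in for cats-effect's Semaphore.withPermit and parTraverseN. Note one important difference: this semaphore blocks a real thread while waiting, whereas cats-effect's Semaphore blocks only semantically:)

```scala
import java.util.concurrent.{Executors, Semaphore}
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Hypothetical stdlib analogue of parTraverseN: each task must hold one
// of n permits while it runs, so at most n execute at once, regardless
// of how many threads the pool has.
def traverseN[A, B](n: Int)(as: List[A])(f: A => B)(
    implicit ec: ExecutionContext): Future[List[B]] = {
  val sem = new Semaphore(n)
  Future.traverse(as) { a =>
    Future {
      sem.acquire() // thread-blocking here; cats-effect's version blocks semantically
      try f(a) finally sem.release()
    }
  }
}

val pool = Executors.newFixedThreadPool(8)
implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)

// Track the highest number of tasks observed running at the same time.
val inFlight = new AtomicInteger(0)
val maxSeen  = new AtomicInteger(0)

val result = Await.result(
  traverseN(2)((1 to 10).toList) { i =>
    val now = inFlight.incrementAndGet()
    maxSeen.getAndUpdate(m => math.max(m, now))
    Thread.sleep(20)
    inFlight.decrementAndGet()
    i * 2
  },
  10.seconds
)
// maxSeen.get is at most 2 even though the pool has 8 threads.

pool.shutdown()
```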
Colt Frederickson
@coltfred
@oleg-py If your parallelism is higher than your thread count, will that cause problems?
Gavin Bisesi
@Daenyth
nope
Colt Frederickson
@coltfred
:confused:
Gavin Bisesi
@Daenyth
you can run thousands of concurrent fibers on one thread if you want
Colt Frederickson
@coltfred
Just when I think I know things...
Only as many fibers as there are threads can actually be running at any single instant, though.
Gavin Bisesi
@Daenyth
Yes
N threads can execute up to N Fibers in parallel
but any number of Fibers can make concurrent progress if they have an async yield point internally (shift, async, sleep, etc., or something that calls them)
Colt Frederickson
@coltfred
So, I'm writing a tool to "stress test" an api. They're going to be blocking a bunch, but not all the time. It seems like shifting onto a fixed pool might be a good idea?
Gavin Bisesi
@Daenyth
keywords to search: cooperative concurrency, green threading, M-to-N threading
When you say blocking, it's important to specify what kind
Thread-level blocking (e.g. a function that blocks the calling thread until it returns Unit), or semantic blocking, which is asynchronous and callback-oriented under the abstraction
(everything in cats-effect is the latter, which is how you can run many Fibers)
For actual thread blocking, you will want to use Blocker.blockOn with the Blocker backed by a CachedThreadPool. You can construct a Resource[F, Blocker] using Blocker[F] (apply method)
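(Editorial aside: the idea behind Blocker can be sketched with the standard library alone. The blockOn helper below is a made-up stand-in for cats-effect's Blocker.blockOn: thread-blocking calls go to an unbounded cached pool, so the fixed-size compute pool's threads are never tied up waiting:)

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// Two pools, mirroring the cats-effect setup: a small fixed pool for
// CPU-bound work, and a cached (grow-on-demand) pool for blocking calls.
val computePool  = Executors.newFixedThreadPool(2)
val blockingPool = Executors.newCachedThreadPool()

implicit val compute: ExecutionContext = ExecutionContext.fromExecutor(computePool)
val blocking: ExecutionContext        = ExecutionContext.fromExecutor(blockingPool)

// Hypothetical analogue of blocker.blockOn(io): hop to the blocking
// pool for the thread-blocking call, then continue on the compute pool.
def blockOn[A](thunk: => A): Future[A] = Future(thunk)(blocking)

val r = Await.result(
  blockOn { Thread.sleep(50); 42 } // sleeps on the cached pool
    .map(_ + 1),                   // continuation runs on compute
  5.seconds
)

computePool.shutdown()
blockingPool.shutdown()
```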
Colt Frederickson
@coltfred
I'm calling into a native library which blocks the thread. That is to say our abstraction over that is just IO(...) around that call.
So in this case it's blocking the thread that it was called from.