Activity
  • Nov 21 09:18: kentfredric opened #710
  • Nov 21 00:42: cuviper on rayon-core-v1.6.1
  • Nov 21 00:42: cuviper on v1.2.1
  • Nov 21 00:35: bors[bot] closed #709
  • Nov 21 00:35: bors[bot] on master: Avoid mem::uninitialized in par… Avoid mem::uninitialized in the… cargo fmt and 4 more
  • Nov 20 23:28: cuviper labeled #707
  • Nov 20 23:06: bors[bot] on staging.tmp
  • Nov 20 23:06: bors[bot] on staging: Avoid mem::uninitialized in par… Avoid mem::uninitialized in the… cargo fmt and 4 more
  • Nov 20 23:06: bors[bot] on staging.tmp: Avoid mem::uninitialized in par… Avoid mem::uninitialized in the… cargo fmt and 4 more
  • Nov 20 23:06: bors[bot] on staging.tmp: [ci skip][skip ci][skip netlify]
  • Nov 20 20:15: cuviper opened #709
  • Nov 20 00:38: cuviper closed #708
  • Nov 18 19:28: calebwin closed #699
  • Nov 16 01:21: bachue opened #708
  • Nov 08 19:41: silwol synchronize #707
  • Nov 08 15:45: silwol opened #707
  • Nov 08 12:23: kornelski closed #706
  • Nov 07 23:24: bors[bot] closed #705
  • Nov 07 23:24: bors[bot] on master: Regenerate compat-Cargo.lock Make sure that compat-Cargo.loc… Merge #705 705: Update compat-…
  • Nov 07 23:13: kornelski opened #706
Josh Stone
@cuviper
for other ThreadPool instances, it should be possible
I think we already do this with some internal APIs for tests
I tinkered with a public API at some point, if I can find that branch
(feels like I say that a lot -- need to get better at publishing such things)
Emilio Cobos Álvarez
@emilio
yeah, I meant a regular ThreadPool instance
I guess I can add an atomic counter to thread_shutdown or something... Not the prettiest though
exit_handler, I mean :)
Given I know the expected number of threads... still not amazing, and I think not enough to fully guarantee that all TLS destructors have run
Josh Stone
@cuviper
or std::sync::Barrier
for TLS, we'd need to actually have joined the threads
Emilio Cobos Álvarez
@emilio
Hmm, barrier doesn't work since I cannot access captured stuff on thread shutdown, and it's not const... Though I guess it could be a lazy_static of some sort
Josh Stone
@cuviper
or Arc-wrapped
Emilio Cobos Álvarez
@emilio
Right, but I cannot access any Arc-wrapped thing in an exit_handler, afaict...
Oh, nvm, it is a closure, I thought it was just a plain fn
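
A minimal sketch of the idea being discussed here, assuming rayon's ThreadPoolBuilder::exit_handler: an Arc-wrapped atomic counter that each worker bumps as it shuts down. The pool setup, thread count, and busy-wait are illustrative only, and, as noted above, this only tells you that the exit handler ran, not that every TLS destructor has finished.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

fn main() {
    const NUM_THREADS: usize = 4; // the "expected number of threads"
    let exited = Arc::new(AtomicUsize::new(0));

    let pool = {
        let exited = Arc::clone(&exited);
        rayon::ThreadPoolBuilder::new()
            .num_threads(NUM_THREADS)
            // Runs on each worker thread as it shuts down.
            .exit_handler(move |_thread_index| {
                exited.fetch_add(1, Ordering::SeqCst);
            })
            .build()
            .expect("failed to build thread pool")
    };

    pool.install(|| {
        // ... some parallel work ...
    });

    // Dropping the pool asks the workers to shut down; wait until each
    // one has reported in through the exit handler. A Condvar or an
    // Arc-wrapped Barrier could replace this crude spin-wait.
    drop(pool);
    while exited.load(Ordering::SeqCst) < NUM_THREADS {
        std::thread::yield_now();
    }
}
```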
Josh Stone
@cuviper
@emilio please file an issue so this doesn't get lost, including some description of the workflow you want
I think we can do it properly in a new API, just need to design that
Emilio Cobos Álvarez
@emilio
Josh Stone
@cuviper
thanks
Emilio Cobos Álvarez
@emilio
np, thank you :)
Josh Stone
@cuviper
@nikomatsakis sync?
Niko Matsakis
@nikomatsakis
@cuviper :wave:
Josh Stone
@cuviper
hey
spawn_future is probably the most interesting thing we have to discuss :)
Niko Matsakis
@nikomatsakis
yeah, probably
sorry, was just skimming a bit
I'm catching up on that PR now
Josh Stone
@cuviper
ISTM that accepting a Future puts us in quite a different position than just returning one
Niko Matsakis
@nikomatsakis
I don't really understand Alex's comment
I guess I would assume he didn't read the PR very closely
anyway, I agree it is rather different
which is why I was pressing for it :)
as it seems significantly more enabling
Josh Stone
@cuviper
my loose understanding is that it would be more difficult for a rayon thread in WASM to wait for an async event from the outside environment (JS)
but it's not really clear to me who would get suspended where
async is still a bit magic to me
Niko Matsakis
@nikomatsakis
I guess I don't really understand what the WASM integration looks like
if we ignore that for a second,
you wrote this on the PR
I guess if the async work leading up to our part isn't ready, we'd just return NotReady too and let some outer executor deal with it?
I believe what would happen is roughly like this:
  • when you invoke spawn_future(F: impl Future), we would schedule a job that calls F::poll, indeed
  • we would give it a Waker that is tied to rayon, I think
  • that job would return NotReady (due to some I/O event), and it would have clone'd our Waker to hold on to it
  • when the I/O event occurs, it would invoke the wake method; this would cause us to add the job back to the (Rayon) thread-pool, at which point we go back to step 1
  • when I did this before, the "unsafety" bit was exactly around this Waker step. If you spawned a future into a scope, I wanted to guarantee that until that future was complete, the scope would not terminate. But the compiler couldn't know that.
  • it worked by having the future hold a ref on the scope, basically, that wasn't discharged until the future was enqueued etc
I also set things up to have exactly one Arc; I think it served as the waker + the "task" that we enqueued + the future that we returned to the user
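
Roughly, that cycle could look like the sketch below. This is not rayon's actual spawn_future: it uses std's Wake trait, the Task / schedule / poll_once / spawn_future_sketch names are made up, and the scope-lifetime guarantee (the "unsafety" bit above) is ignored entirely. The single Arc plays the roles mentioned: it is the waker and the job that gets (re)enqueued.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// One Arc serves as both the waker and the "task" that gets (re)enqueued.
struct Task {
    future: Mutex<Pin<Box<dyn Future<Output = ()> + Send>>>,
}

impl Task {
    // Step 1: schedule a job on the rayon pool that polls the future once.
    fn schedule(self: Arc<Self>) {
        rayon::spawn(move || self.poll_once());
    }

    fn poll_once(self: Arc<Self>) {
        // The Waker is just another clone of this same Arc.
        let waker = Waker::from(Arc::clone(&self));
        let mut cx = Context::from_waker(&waker);
        let mut future = self.future.lock().unwrap();
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(()) => {} // done; a real API would resolve the handle it returned
            Poll::Pending => {}   // the I/O source keeps a Waker clone and calls wake() later
        }
    }
}

impl Wake for Task {
    fn wake(self: Arc<Self>) {
        // The I/O event fired: put the job back on the thread pool (back to step 1).
        self.schedule();
    }
}

// Stand-in that only drives the future to completion and discards its output.
fn spawn_future_sketch(fut: impl Future<Output = ()> + Send + 'static) {
    Arc::new(Task { future: Mutex::new(Box::pin(fut)) }).schedule();
}
```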
anyway, I guess the question at hand is sort of ... does it make sense to have a "create future" API that just takes a closure? it does suffice for taking some heavy CPU computation that uses Rayon
what it doesn't do is let us have DAGs of computations or anything like that; but I don't know that anybody is asking for it -- and, if they did, I don't know that the Future API would be the way to set it up, or at least only for some sorts of cases
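
For the closure-taking variant, one plausible shape (a sketch only: spawn_to_future is a made-up name, and it assumes the futures crate's oneshot channel) is to run the closure on the pool and complete a future with its result:

```rust
use futures::channel::oneshot;

// Run a (possibly rayon-parallel) CPU-heavy computation on the global pool
// and hand the result back through a oneshot channel, whose Receiver is
// itself a Future that an outer executor can await.
fn spawn_to_future<T, F>(f: F) -> oneshot::Receiver<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = oneshot::channel();
    rayon::spawn(move || {
        // Sending fails only if the receiver was dropped; ignore that here.
        let _ = tx.send(f());
    });
    rx
}
```

This covers the "heavy CPU computation" case but, as noted, gives no way to express DAGs of dependent jobs.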