Josh Stone
@cuviper
@nikomatsakis any concerns before I publish 1.2? #686
nothing major in there, but a few folks were wanting the updated crossbeam-deque
Emilio Cobos Álvarez
@emilio
Is there any way (even with some runtime overhead) to wait on a thread-pool to shut down? I basically want a sync-drop, to ensure everything is finished by some point, for leak-checking purposes
If there is none, would there be any objection to adding such a switch? @cuviper?
Josh Stone
@cuviper
the global pool doesn't shut down at all
for other ThreadPool instances, it should be possible
I think we already do this with some internal APIs for tests
I tinkered with a public API at some point, if I can find that branch
(feels like I say that a lot -- need to get better at publishing such things)
Emilio Cobos Álvarez
@emilio
yeah, I meant a regular ThreadPool instance
I guess I can and add an atomic counter to thread_shutdown or something... Not the prettiest thouh
*though
exit_handler, I mean :)
Given I know the expected number of threads... still not amazing, and I think not enough to fully guarantee that all TLS destructors have run
Josh Stone
@cuviper
or std::sync::Barrier
for TLS, we'd need to actually have joined the threads
Emilio Cobos Álvarez
@emilio
Hmm, barrier doesn't work since I cannot access captured stuff on thread shutdown, and it's not const... Though I guess it could be a lazy_static of some sort
Josh Stone
@cuviper
or Arc-wrapped
Emilio Cobos Álvarez
@emilio
Right, but I cannot access any Arc-wrapped thing in an exit_handler, afaict...
Oh, nvm, it is a closure, I thought it was just a plain fn
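For reference, a minimal sketch of the workaround being discussed here: an Arc-wrapped counter plus Condvar captured by rayon's `ThreadPoolBuilder::exit_handler` closure, which the main thread waits on after dropping the pool. This only counts `exit_handler` calls, so as noted above it does not guarantee that TLS destructors have finished running; the constant `N` and the overall structure are illustrative assumptions, not code from the chat or from rayon itself.

```rust
use std::sync::{Arc, Condvar, Mutex};
use rayon::ThreadPoolBuilder;

fn main() {
    const N: usize = 4;
    // Shared counter + condvar so the main thread can wait for workers to exit.
    let exited = Arc::new((Mutex::new(0usize), Condvar::new()));

    let pool = {
        let exited = Arc::clone(&exited);
        ThreadPoolBuilder::new()
            .num_threads(N)
            // exit_handler runs on each worker thread as it shuts down.
            .exit_handler(move |_thread_index| {
                let (lock, cvar) = &*exited;
                *lock.lock().unwrap() += 1;
                cvar.notify_all();
            })
            .build()
            .unwrap()
    };

    pool.install(|| {
        // ... parallel work ...
    });

    // Dropping the pool asks the worker threads to terminate.
    drop(pool);

    // Block until every worker has reported in via the exit handler.
    let (lock, cvar) = &*exited;
    let mut count = lock.lock().unwrap();
    while *count < N {
        count = cvar.wait(count).unwrap();
    }
}
```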
Josh Stone
@cuviper
@emilio please file an issue so this doesn't get lost, including some description of the workflow you want
I think we can do it properly in a new API, just need to design that
Emilio Cobos Álvarez
@emilio
Josh Stone
@cuviper
thanks
Emilio Cobos Álvarez
@emilio
np, thank you :)
Josh Stone
@cuviper
@nikomatsakis sync?
Niko Matsakis
@nikomatsakis
@cuviper :wave:
Josh Stone
@cuviper
hey
spawn_future is probably the most interesting thing we have to discuss :)
Niko Matsakis
@nikomatsakis
yeah, probably
sorry, was just skimming a bit
I'm catching up on that PR now
Josh Stone
@cuviper
ISTM that accepting a Future puts us in quite a different position than just returning one
Niko Matsakis
@nikomatsakis
I don't really understand Alex's comment
I guess I would assume he didn't read the PR very closely
anyway, I agree it is rather different
which is why I was pressing for it :)
as it seems significantly more enabling
Josh Stone
@cuviper
my loose understanding is that it would be more difficult for a rayon thread in WASM to wait for an async event from the outside environment (JS)
but it's not really clear to me who would get suspended where
async is still a bit magic to me
Niko Matsakis
@nikomatsakis
I guess I don't really understand what the WASM integration looks like
if we ignore that for a second,
you wrote this on the PR
I guess if the async work leading up to our part isn't ready, we'd just return NotReady too and let some outer executor deal with it?
I believe what would happen is roughly like this:
  • when you invoke spawn_future(F: impl Future), we would schedule a job that calls F::poll, indeed
  • we would give it a Waker that is tied to rayon, I think
  • that job would return NotReady (because it's waiting on some I/O event), and it would have cloned our Waker to hold on to it
  • when the I/O event occurs, it would invoke the wake method; this would cause us to add the job back to the (Rayon) thread-pool, at which point we go back to step 1
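A rough sketch of the polling loop described in the list above, using the std Future API (Poll::Pending rather than the older NotReady) and the `ArcWake` helper from the futures crate. This is not the PR's actual code: the names (`Task`, `poll_task`, the exact `spawn_future` signature) are hypothetical, and this simplified version can lose a wakeup that arrives while the future is being polled.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};

use futures::task::{waker_ref, ArcWake};

// A task owns the boxed future and knows how to reschedule itself on the pool.
struct Task {
    future: Mutex<Option<Pin<Box<dyn Future<Output = ()> + Send>>>>,
}

impl ArcWake for Task {
    fn wake_by_ref(arc_self: &Arc<Self>) {
        // Waking re-enqueues the poll job on the rayon thread-pool (step 1 again).
        let task = Arc::clone(arc_self);
        rayon::spawn(move || poll_task(task));
    }
}

fn poll_task(task: Arc<Task>) {
    let mut slot = task.future.lock().unwrap();
    if let Some(mut fut) = slot.take() {
        // The Waker handed to the future is tied back to this task / pool.
        let waker = waker_ref(&task);
        let mut cx = Context::from_waker(&waker);
        // If the future isn't ready, put it back; the cloned Waker will
        // reschedule us when the I/O event occurs.
        if fut.as_mut().poll(&mut cx) == Poll::Pending {
            *slot = Some(fut);
        }
    }
}

// Hypothetical stand-in for the spawn_future being discussed.
fn spawn_future(fut: impl Future<Output = ()> + Send + 'static) {
    let task = Arc::new(Task {
        future: Mutex::new(Some(Box::pin(fut))),
    });
    rayon::spawn(move || poll_task(task));
}
```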