Josh Stone
@cuviper
you could just pick workers[rayon::current_thread_index()] if the lengths match
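For readers following along, a minimal sketch of that suggestion, with a hypothetical `Worker` type and `process` function: `rayon::current_thread_index()` returns `Some(index)` only on a pool thread, and the `workers` slice is assumed to match the pool's thread count.

```rust
use rayon::prelude::*;
use std::sync::Mutex;

// Hypothetical per-thread resource (an HTTP client, a connection, ...).
struct Worker;

// Assumes workers.len() equals the pool's number of threads.
fn process(urls: &[String], workers: &[Mutex<Worker>]) {
    urls.par_iter().for_each(|url| {
        // Some(index) whenever this closure runs on a rayon worker thread.
        let idx = rayon::current_thread_index().expect("not on a rayon thread");
        let mut worker = workers[idx].lock().unwrap();
        // ... scrape `url` using `worker` ...
        let _ = (url, &mut worker); // placeholder for the real work
    });
}
```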
Stuart Axelbrooke
@soaxelbrooke
that's good to know!
it was definitely a hack that has stuck around
Josh Stone
@cuviper
once you get this working, you might try for_each_init where the init function is the one that locks your worker locally, returning the MutexGuard
but backing up, it's strange that you don't even get your logged attempts
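A sketch of the `for_each_init` variant, reusing the hypothetical `Worker`/`workers` setup from the snippet above: the init closure runs once per rayon job rather than once per item, so each worker's mutex is locked far less often.

```rust
use rayon::prelude::*;
use std::sync::Mutex;

struct Worker;

fn process(urls: &[String], workers: &[Mutex<Worker>]) {
    urls.par_iter().for_each_init(
        // Called per rayon job/split: lock this thread's worker once and
        // hand the MutexGuard to every item the job processes.
        || {
            let idx = rayon::current_thread_index().expect("not on a rayon thread");
            workers[idx].lock().unwrap()
        },
        |worker, url| {
            // `worker` is the MutexGuard returned by the init closure.
            // ... scrape `url` using `worker` ...
            let _ = (url, worker); // placeholder for the real work
        },
    );
}
```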
Stuart Axelbrooke
@soaxelbrooke
I added the thread index to the logging, and it looks like different threads are active, just never at the same time...
Josh Stone
@cuviper
ok, I was about to suggest that
weird
Stuart Axelbrooke
@soaxelbrooke
for context, this is a web scraper, and the work items are different URLs to scrape, though I don't think that would change anything
Josh Stone
@cuviper
that will definitely block the thread, as this isn't an async library, but other threads should still progress
if you attach a debugger, you should be able to get a backtrace of each thread, and see where they're blocked
Stuart Axelbrooke
@soaxelbrooke
oh god, I'm sorry, it was a shared num_processed variable they were all trying to lock at the same time
threading, how do you even
Josh Stone
@cuviper
whew
rustc will make sure your threading is safe, but not necessarily effective
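The chat doesn't show the offending code, but the shape of the bug is common enough to sketch: if a `Mutex` guard around a shared counter stays held across the whole item, the program compiles and is memory-safe yet effectively single-threaded; touching an `AtomicUsize` (or locking only for the increment) removes the contention. The URLs and timings below are purely illustrative.

```rust
use rayon::prelude::*;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;
use std::{thread, time::Duration};

fn main() {
    let urls: Vec<String> = (0..16).map(|i| format!("https://example.com/{i}")).collect();

    // Anti-pattern: the guard lives for the whole body, so items run one at a time.
    let num_processed = Mutex::new(0usize);
    urls.par_iter().for_each(|_url| {
        let mut n = num_processed.lock().unwrap();
        thread::sleep(Duration::from_millis(50)); // stand-in for the blocking scrape
        *n += 1;
    });

    // Safe *and* effective: no shared lock held during the work.
    let counter = AtomicUsize::new(0);
    urls.par_iter().for_each(|_url| {
        thread::sleep(Duration::from_millis(50)); // the scrape
        counter.fetch_add(1, Ordering::Relaxed);
    });
    assert_eq!(counter.load(Ordering::Relaxed), urls.len());
}
```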
Stuart Axelbrooke
@soaxelbrooke
you can only protect people from themselves so much :P
Niko Matsakis
@nikomatsakis
So @cuviper I left a comment on #679 -- basically I think that the signature of spawn_future is maybe not quite what I expected
Josh Stone
@cuviper
OK, I hadn't thought of it that way
I guess we would need to implement a Context and Waker then
which is probably doable
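As a rough illustration of what "implement a Context and Waker" involves (not rayon's actual #679 code): std's `Wake` trait lets an `Arc`-based job act as its own `Waker`, re-spawning the poll onto the pool whenever the future signals it can make progress. The `Job` type and `spawn_future` entry point below are hypothetical.

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Wake, Waker};

// Hypothetical job: a boxed future that reschedules itself onto the global pool.
struct Job {
    future: Mutex<Pin<Box<dyn Future<Output = ()> + Send>>>,
}

impl Wake for Job {
    fn wake(self: Arc<Self>) {
        // When the future is ready to make progress, poll it again on the pool.
        rayon::spawn(move || self.poll());
    }
}

impl Job {
    fn poll(self: Arc<Self>) {
        let waker = Waker::from(self.clone());
        let mut cx = Context::from_waker(&waker);
        let mut fut = self.future.lock().unwrap();
        // Poll::Pending just means wake() will be called again later.
        let _ = fut.as_mut().poll(&mut cx);
    }
}

// Hypothetical entry point, not rayon's real spawn_future signature.
fn spawn_future(fut: impl Future<Output = ()> + Send + 'static) {
    let job = Arc::new(Job { future: Mutex::new(Box::pin(fut)) });
    rayon::spawn(move || job.poll());
}
```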
Josh Stone
@cuviper
interesting, async-task does look appropriate
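A sketch of how async-task could slot in (assuming its 4.x API): the crate owns the Waker machinery, and the caller only supplies a `schedule` callback that runs each `Runnable` on the pool. This is illustrative, not the design #679 settled on; the returned `Task` can then be awaited or detached.

```rust
use async_task::Runnable;
use std::future::Future;

// Hypothetical helper: run a future's polls on the global rayon pool.
fn spawn_on_rayon<F, T>(future: F) -> async_task::Task<T>
where
    F: Future<Output = T> + Send + 'static,
    T: Send + 'static,
{
    // async-task calls `schedule` whenever the task is woken.
    let schedule = |runnable: Runnable| {
        rayon::spawn(move || {
            runnable.run();
        })
    };
    let (runnable, task) = async_task::spawn(future, schedule);
    runnable.schedule(); // queue the first poll
    task
}
```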
Niko Matsakis
@nikomatsakis
@cuviper can't make sync today; first day of school and I want to take "DD" out to ice cream :)
but after digging a bit more into async-task, it did seem like a good fit for what we need -- haven't checked if there are more comments on #679 yet though
Josh Stone
@cuviper
no worries, kids are synchronous
Josh Stone
@cuviper
@nikomatsakis any concerns before I publish 1.2? #686
nothing major in there, but a few folks were wanting the updated crossbeam-deque
Emilio Cobos Álvarez
@emilio
Is there any way (even with some runtime overhead) to wait on a thread-pool to shut down? I basically want a sync-drop, to ensure everything is finished by some point, for leak-checking purposes
If there is none, would there be any objection to adding such a switch? @cuviper?
Josh Stone
@cuviper
the global pool doesn't shut down at all
for other ThreadPool instances, it should be possible
I think we already do this with some internal APIs for tests
I tinkered with a public API at some point, if I can find that branch
(feels like I say that a lot -- need to get better at publishing such things)
Emilio Cobos Álvarez
@emilio
yeah, I meant a regular ThreadPool instance
I guess I can go and add an atomic counter to thread_shutdown or something... Not the prettiest though
exit_handler, I mean :)
Given I know the expected number of threads... still not amazing, and I think not enough to fully guarantee that all TLS destructors have run
Josh Stone
@cuviper
or std::sync::Barrier
for TLS, we'd need to actually have joined the threads
Emilio Cobos Álvarez
@emilio
Hmm, barrier doesn't work since I cannot access captured stuff on thread shutdown, and it's not const... Though I guess it could be a lazy_static of some sort
Josh Stone
@cuviper
or Arc-wrapped
Emilio Cobos Álvarez
@emilio
Right, but I cannot access any Arc-wrapped thing in an exit_handler, afaict...
Oh, nvm, it is a closure, I thought it was just a plain fn
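Putting those pieces together, a minimal sketch of the Arc-wrapped Barrier approach (assuming, as the discussion implies, that dropping a `ThreadPool` signals shutdown without itself blocking on the workers):

```rust
use std::sync::{Arc, Barrier};

fn main() {
    let num_threads = 4;
    // One extra slot for the thread that waits for the shutdown.
    let barrier = Arc::new(Barrier::new(num_threads + 1));

    let pool = {
        let barrier = Arc::clone(&barrier);
        rayon::ThreadPoolBuilder::new()
            .num_threads(num_threads)
            // Runs on each worker thread as it exits.
            .exit_handler(move |_thread_index| {
                barrier.wait();
            })
            .build()
            .unwrap()
    };

    pool.install(|| {
        // ... the work being leak-checked ...
    });

    drop(pool);     // signal the pool to shut down
    barrier.wait(); // block until every worker reached its exit handler
    // Caveat from the discussion above: this proves the exit handlers ran,
    // but TLS destructors only finish once the OS threads are actually joined.
}
```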
Josh Stone
@cuviper
@emilio please file an issue so this doesn't get lost, including some description of the workflow you want
I think we can do it properly in a new API, just need to design that
Emilio Cobos Álvarez
@emilio
Josh Stone
@cuviper
thanks
Emilio Cobos Álvarez
@emilio
np, thank you :)
Josh Stone
@cuviper
@nikomatsakis sync?