Rupansh
@rupansh

Thing is, I can skip to the next row once the element is removed.

If the 2D array a = [[1, 3], [2, 3], [3, 3]] is flattened to [1, 3, 2, 3, 3, 3], and I want to delete (or "filter out") a[0][1], a[1][0], a[2][1], I wouldn't have to go through all the elements (theoretically).
I am performing the operation on large images (4-8K+), so wouldn't it make a difference if I could skip a few chunks?

Josh Stone
@cuviper
Do you have many such images? Because it may be best to do those in parallel, but run the operation itself serially.
Rupansh
@rupansh

Do you have many such images? Because it may be best to do those in parallel, but run the operation itself serially.

Nope, it's just a single image.
For more context, I am implementing seam carving.
I did a few benchmarks though, and it seems that Vec::retain is fast enough (< 1ms to remove 25 seams)

Josh Stone
@cuviper
oh, yeah, retain is great if you don't need to do anything with the removed items.
I guess I was only thinking of drain_filter because we do have a parallel Drain, but not retain
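A minimal sketch of that retain approach for the seam-carving case above, using the example data from the earlier message (remove_seam, its arguments, and the pixel type are invented for illustration). Vec::retain visits every original element in order, so an external counter can recover each element's (row, column) and drop exactly one seam pixel per row:

    // Remove one seam pixel per row from a row-major flattened image.
    // `seam[row]` is the column to delete in that row.
    fn remove_seam(pixels: &mut Vec<u8>, width: usize, seam: &[usize]) {
        let mut i = 0; // index into the *original* (pre-removal) buffer
        pixels.retain(|_| {
            let (row, col) = (i / width, i % width);
            i += 1;
            col != seam[row]
        });
    }

    fn main() {
        // a = [[1, 3], [2, 3], [3, 3]] flattened -> [1, 3, 2, 3, 3, 3]
        let mut a = vec![1u8, 3, 2, 3, 3, 3];
        // delete a[0][1], a[1][0], a[2][1]
        remove_seam(&mut a, 2, &[1, 0, 1]);
        assert_eq!(a, vec![1, 3, 3]);
    }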
Wojciech Bogócki
@wbogocki
Hey, does anyone know: if there's exactly one item in a Vec, will par_iter() run in place? I have a situation that looks like this: bigass_loop { guys.par_iter().try_for_each(...) }
Half of the time there's only one guy in the list. I'll try to figure out the overhead soon, but I'm asking in case anybody knows ahead of time :)
Josh Stone
@cuviper
Yes, it will run in place. It won't even move to the thread pool if you're not there already.
Wojciech Bogócki
@wbogocki
Understood, thank you :)
Zhishi
@whfuyn

Hi, I'm debugging a stack overflow problem in my project. The rayon-related part is:

            use rayon::prelude::*;
            msgs.into_par_iter()
                .map(|msg| account.sign(msg).into()) // sign outputs an array (128 bytes), which is converted to Vec<u8>
                .collect()

The core dump shows a long run of rayon-related stack frames; I want to know if it's normal to have such a deep call stack?

(gdb) where
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49
#1  0x00007fbaaabff864 in __GI_abort () at abort.c:79
#2  0x0000555ba75d1307 in std::sys::unix::abort_internal () at library/std/src/sys/unix/mod.rs:259
#3  0x0000555ba75f5e5c in std::sys::unix::stack_overflow::imp::signal_handler ()
    at library/std/src/sys/unix/stack_overflow.rs:109
#4  <signal handler called>
#5  0x0000555ba73d50b1 in efficient_sm2::norop::norop_mul_pure ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6  0x0000555ba73d4e82 in efficient_sm2::sm2p256::mont_pro::h8f980b8c31f05d79 ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#7  0x0000555ba73d541b in efficient_sm2::jacobian::exchange::affine_from_jacobian ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#8  0x0000555ba73d2690 in efficient_sm2::ec::signing::KeyPair::sign_digest ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#9  0x0000555ba73d0b49 in efficient_sm2::ec::signing::KeyPair::sign ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#10 0x0000555ba74d12e8 in kms::sm::sm2_sign ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#11 0x0000555ba74d1cea in rayon::iter::plumbing::bridge_producer_consumer::helper ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#12 0x0000555ba7445bf0 in rayon_core::job::StackJob<L,F,R>::run_inline ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#13 0x0000555ba74d2305 in rayon::iter::plumbing::bridge_producer_consumer::helper ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#14 0x0000555ba7445bf0 in rayon_core::job::StackJob<L,F,R>::run_inline ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#15 0x0000555ba74d2305 in rayon::iter::plumbing::bridge_producer_consumer::helper ()
    at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317

(omitted)

#6873 0x0000555ba74d2345 in rayon::iter::plumbing::bridge_producer_consumer::helper () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6874 0x0000555ba744563b in <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6875 0x0000555ba731f6f2 in rayon_core::registry::WorkerThread::wait_until_cold ()
#6876 0x0000555ba74d2345 in rayon::iter::plumbing::bridge_producer_consumer::helper () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6877 0x0000555ba744563b in <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6878 0x0000555ba731f6f2 in rayon_core::registry::WorkerThread::wait_until_cold ()
#6879 0x0000555ba751d02b in std::sys_common::backtrace::__rust_begin_short_backtrace ()
#6880 0x0000555ba751957e in core::ops::function::FnOnce::call_once{{vtable.shim}} ()
#6881 0x0000555ba75f7735 in alloc::boxed::{impl#44}::call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/boxed.rs:1691
#6882 alloc::boxed::{impl#44}::call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/l
Zhishi
@whfuyn
I have no idea why it has such a deep call stack. And it works fine when the concurrency is low (even with a larger msgs count).
WGH
@WGH:torlan.ru
[m]
I don't really know rayon internals, but at first glance it doesn't look normal
TheIronBorn
@TheIronBorn
Is there some form of find_any which also allows me to collect the items encountered so far?
TheIronBorn
@TheIronBorn
found an approach for this: create the storage beforehand, mutate in place, then find:
data
    .par_iter_mut()
    .map(|x| {
        *x = do_something(x);
        x
    })
    .find_any(|x| some_criteria(x))
Eduard Tolosa
@Edu4rdSHL
Hello! If I have a ThreadPool (threadpool2) running inside another ThreadPool (threadpool1), will threadpool1 wait until threadpool2 finishes its tasks?
Josh Stone
@cuviper
@Edu4rdSHL the threadpool itself doesn't really "wait" for anything -- it's up to the way you called into each pool
e.g. if you make a blocking install/join call into your first pool, and that makes another blocking call into the second pool, that will all be synced as they return
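A minimal sketch of that synchronization (the pool names and sizes are made up): each install call blocks until its closure returns, so the outer pool's call only resumes after the inner pool has finished.

    use rayon::ThreadPoolBuilder;

    fn main() {
        let pool1 = ThreadPoolBuilder::new().num_threads(2).build().unwrap();
        let pool2 = ThreadPoolBuilder::new().num_threads(2).build().unwrap();

        pool1.install(|| {
            // runs on pool1; blocks here until pool2's closure returns
            pool2.install(|| {
                println!("inner work on pool2");
            });
            println!("back on pool1 after pool2 finished");
        });
    }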
TGRCDev
@TGRCdev
is there a guide for writing a consuming iterator? there are so many different traits that need to be implemented, and there doesn't seem to be a clear "start" point for writing one
Josh Stone
@cuviper
@TGRCdev if possible, try to layer it on an existing iterator. Then you can just implement ParallelIterator and have that forward, like drive as self.base.map(...).drive()
beyond that, I don't think we have a good guide, just existing implementations as an example I guess
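A rough illustration of that layering idea (MyCollection and MyParIter are invented for the example): wrap an existing parallel iterator and forward drive_unindexed to it, so ParallelIterator is the only trait that needs a real implementation.

    use rayon::iter::plumbing::UnindexedConsumer;
    use rayon::prelude::*;

    struct MyCollection {
        items: Vec<u32>,
    }

    // The wrapper just holds the base parallel iterator it delegates to.
    struct MyParIter {
        base: rayon::vec::IntoIter<u32>,
    }

    impl ParallelIterator for MyParIter {
        type Item = u32;

        fn drive_unindexed<C>(self, consumer: C) -> C::Result
        where
            C: UnindexedConsumer<Self::Item>,
        {
            // forward to the base iterator; per-item work can be
            // layered on here with the usual adaptors
            self.base.map(|x| x * 2).drive_unindexed(consumer)
        }
    }

    impl IntoParallelIterator for MyCollection {
        type Item = u32;
        type Iter = MyParIter;

        fn into_par_iter(self) -> MyParIter {
            MyParIter {
                base: self.items.into_par_iter(),
            }
        }
    }

    fn main() {
        let c = MyCollection { items: vec![1, 2, 3] };
        let doubled: Vec<u32> = c.into_par_iter().collect();
        println!("{:?}", doubled); // prints [2, 4, 6]
    }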
TGRCDev
@tgrcdev:matrix.org
[m]
alright, i'll feel it out. thanks
parazyd
@parazyd:dark.fi
[m]
Hey there, can you see my messages?
Dirkjan Ochtman
@djc
sure, just ask your question
parazyd
@parazyd:dark.fi
[m]
ah cool. I joined with weechat-matrix which is sometimes flaky so I wasn't sure it worked :)
Anyway, I'm learning rayon and wondering: how could I stop iter::repeat from spawning new jobs, for example when I catch SIGINT?
I had hoped there would be some kind of take_while(), but it turns out that doesn't work.
WGH
@WGH:torlan.ru
[m]
I suppose you could use fallible try_*, and check for a flag inside your handler function
parazyd
@parazyd:dark.fi
[m]
Wait, what would that look like?
I'm trying to use this example: https://github.com/Detegr/rust-ctrlc#example-usage
And before the blocking recv, there is my ThreadPool which I spawn, and inside it there's iter::repeat
WGH
@WGH:torlan.ru
[m]
(0..100).into_par_iter()
    .try_for_each(|x| {
        if sigint_raised() {
            return Err(());
        }
        // ... do the actual work for `x` here ...
        Ok(())
    });
sigint_raised would probably be implemented by checking some atomic variable,
and the SIGINT handler would set that variable
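Putting those pieces together, a minimal sketch using the ctrlc crate from the example linked above (SIGINT, sigint_raised, and the loop body are made up):

    use std::sync::atomic::{AtomicBool, Ordering};
    use rayon::prelude::*;

    static SIGINT: AtomicBool = AtomicBool::new(false);

    fn sigint_raised() -> bool {
        SIGINT.load(Ordering::Relaxed)
    }

    fn main() {
        // the handler only sets the flag; workers poll it
        ctrlc::set_handler(|| SIGINT.store(true, Ordering::Relaxed))
            .expect("failed to set SIGINT handler");

        let result = (0..100u64).into_par_iter().try_for_each(|x| {
            if sigint_raised() {
                return Err(());
            }
            // ... do the actual work for `x` here ...
            let _ = x;
            Ok(())
        });

        if result.is_err() {
            println!("interrupted");
        }
    }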
parazyd
@parazyd:dark.fi
[m]
Yeah like AtomicBool
hmm ok, I'll try something out in a bit
Thanks
Aleksey Kladov
@matklad
Question: I want to process a number of items in parallel, so I do xs.into_par_iter().... The problem is, the length of xs is at most 4, and the items are very heavy, so I want to make sure that rayon splits each of the vec's elements into a separate stealable job. What's the right API for that?
Aleksey Kladov
@matklad
And perhaps more generally: where can I read how rayon decides to split slices?
Aleksey Kladov
@matklad
Hm, actually my testing seems to show that rayon just distributes the work across all cores out of the box... Is there some official confirmation to that effect somewhere?
Josh Stone
@cuviper
@matklad I don't think we document any guarantees about how the default mode works, but you can force .with_max_len(1)
the details of how are found in src/iter/plumbing/ if you want to go reading
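For instance, a sketch (expensive is a hypothetical stand-in for the heavy per-item work):

    use rayon::prelude::*;

    fn expensive(x: u64) -> u64 {
        x.wrapping_mul(x) // pretend this takes a long time
    }

    fn main() {
        let xs = vec![1u64, 2, 3, 4];
        let results: Vec<u64> = xs
            .into_par_iter()
            .with_max_len(1) // at most one item per job, so each is stealable
            .map(expensive)
            .collect();
        println!("{:?}", results);
    }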
WGH
@WGH:torlan.ru
[m]
isn't the bottom line, as things stand right now, that .with_min_len is useful if every work unit is small, and otherwise there's no need to touch anything?
Josh Stone
@cuviper
hopefully yes, the default "adaptive" splitting should work pretty well in most situations
Josh Stone
@jistone:fedora.im
[m]
(ooh, I haven't tried matrix here before, neat!)
Zhengyi Yang
@zhengyi-yang
Hi guys, does anyone know if it is possible to turn off work stealing in rayon's ThreadPool? Thanks a lot
Josh Stone
@jistone:fedora.im
[m]
@zhengyi-yang: not really -- I did have some ideas about adding a "critical section" primitive where you could prevent stealing while a particular job is blocked. Can you elaborate why you want this?
oliver-giersch
@oliver-giersch
hello, I have a cargo workspace with a bunch of crates. In one of them I collect benchmarks using criterion (which depends on rayon), and in another crate I want to use rayon as a git dependency (master) in order to use std::ops::ControlFlow, which is already integrated on master but not in the latest published version (from May 21). However, rayon-core appears to explicitly prohibit this mixing of different versions. Is there a workaround for this restriction?
Josh Stone
@jistone:fedora.im
[m]
@oliver-giersch: I think maybe if you use cargo [patch...] to override git rayon, it might still use the published rayon-core, but I'm not sure. The best answer is that I should just publish a new release for you. :)
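The [patch] approach would look roughly like this in the workspace's root Cargo.toml (untested, per the message above; whether rayon-core stays on the published version is the uncertain part):

    [patch.crates-io]
    rayon = { git = "https://github.com/rayon-rs/rayon" }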