Do you have many such images? Because it may be best to do those in parallel, but each operation itself serially.
Nope, it's just a single image.
For more context, I am implementing seam carving.
I did a few benchmarks though, and it seems that Vec::retain is fast enough (< 1ms to remove 25 seams).
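For illustration, here is a minimal sketch of what removing one seam from an image row with `Vec::retain` could look like. The `remove_seam` helper and the column-index representation are hypothetical, not taken from the poster's code:

```rust
// Hypothetical sketch: remove a seam's pixels from one image row with
// Vec::retain, assuming the seam is given as known column indices.
fn remove_seam(row: &mut Vec<u8>, seam_cols: &[usize]) {
    let mut i = 0;
    row.retain(|_| {
        let keep = !seam_cols.contains(&i);
        i += 1;
        keep
    });
}

fn main() {
    let mut row: Vec<u8> = (0..10).collect();
    remove_seam(&mut row, &[2, 5, 7]);
    assert_eq!(row, vec![0, 1, 3, 4, 6, 8, 9]);
}
```

`retain` is O(n) with no extra allocation, which is consistent with the sub-millisecond timing reported above.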
drain_filter, because we do have a parallel Drain, but not retain.
Hi, I'm debugging a stack overflow problem in my project. The rayon-related part is:
use rayon::prelude::*;
msgs.into_par_iter()
    .map(|msg| account.sign(msg).into()) // sign outputs a 128-byte array, converted to Vec<u8>
    .collect()
The core dump shows a long chain of rayon stack frames. I want to know: is it normal to have such a long call stack?
(gdb) where
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:49
#1 0x00007fbaaabff864 in __GI_abort () at abort.c:79
#2 0x0000555ba75d1307 in std::sys::unix::abort_internal () at library/std/src/sys/unix/mod.rs:259
#3 0x0000555ba75f5e5c in std::sys::unix::stack_overflow::imp::signal_handler ()
at library/std/src/sys/unix/stack_overflow.rs:109
#4 <signal handler called>
#5 0x0000555ba73d50b1 in efficient_sm2::norop::norop_mul_pure ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6 0x0000555ba73d4e82 in efficient_sm2::sm2p256::mont_pro::h8f980b8c31f05d79 ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#7 0x0000555ba73d541b in efficient_sm2::jacobian::exchange::affine_from_jacobian ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#8 0x0000555ba73d2690 in efficient_sm2::ec::signing::KeyPair::sign_digest ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#9 0x0000555ba73d0b49 in efficient_sm2::ec::signing::KeyPair::sign ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#10 0x0000555ba74d12e8 in kms::sm::sm2_sign ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#11 0x0000555ba74d1cea in rayon::iter::plumbing::bridge_producer_consumer::helper ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#12 0x0000555ba7445bf0 in rayon_core::job::StackJob<L,F,R>::run_inline ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#13 0x0000555ba74d2305 in rayon::iter::plumbing::bridge_producer_consumer::helper ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#14 0x0000555ba7445bf0 in rayon_core::job::StackJob<L,F,R>::run_inline ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#15 0x0000555ba74d2305 in rayon::iter::plumbing::bridge_producer_consumer::helper ()
at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
(omitted)
#6873 0x0000555ba74d2345 in rayon::iter::plumbing::bridge_producer_consumer::helper () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6874 0x0000555ba744563b in <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6875 0x0000555ba731f6f2 in rayon_core::registry::WorkerThread::wait_until_cold ()
#6876 0x0000555ba74d2345 in rayon::iter::plumbing::bridge_producer_consumer::helper () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6877 0x0000555ba744563b in <rayon_core::job::StackJob<L,F,R> as rayon_core::job::Job>::execute () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/alloc.rs:317
#6878 0x0000555ba731f6f2 in rayon_core::registry::WorkerThread::wait_until_cold ()
#6879 0x0000555ba751d02b in std::sys_common::backtrace::__rust_begin_short_backtrace ()
#6880 0x0000555ba751957e in core::ops::function::FnOnce::call_once{{vtable.shim}} ()
#6881 0x0000555ba75f7735 in alloc::boxed::{impl#44}::call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/library/alloc/src/boxed.rs:1691
#6882 alloc::boxed::{impl#44}::call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> () at /rustc/4e0d3973fafdfb1c51011bc74e44257b5e3863f1/l
install / join: if you call into your first pool, and that makes another blocking call into the second pool, it will all be synced as they return.
Use one of the try_* methods, and check for a flag inside your handler function:
(0..100).into_par_iter()
    .try_for_each(|x| {
        if sigint_raised() {
            return Err(());
        }
        // ... process x ...
        Ok(())
    });
sigint_raised would probably be implemented by checking some atomic variable; the SIGINT handler would set that variable.
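A minimal sketch of that flag mechanism, with the signal handler's store simulated by a plain call (a real program would install the handler via something like the signal-hook crate; that part is an assumption, not shown here):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical flag; a real SIGINT handler would perform the store
// when Ctrl-C arrives.
static SIGINT_FLAG: AtomicBool = AtomicBool::new(false);

fn sigint_raised() -> bool {
    SIGINT_FLAG.load(Ordering::Relaxed)
}

fn main() {
    assert!(!sigint_raised());
    // Simulate the handler firing:
    SIGINT_FLAG.store(true, Ordering::Relaxed);
    assert!(sigint_raised());
}
```

An atomic load per item is cheap enough that checking it inside the parallel closure adds negligible overhead.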
xs.into_par_iter()... The problem is, the length of xs is at most 4, and the items are very heavy, so I want to make sure that rayon splits each of the vec's elements into a separate stealable job. What's the right API for that?
src/iter/plumbing/, if you want to go reading.
.with_min_len is useful if every work unit is small; otherwise there's no need to touch anything?
std::ops::ControlFlow, which is already integrated, but not in the latest version from May 21. However, rayon-core appears to explicitly prohibit this mixing of different versions. Is there a workaround for this restriction?
If you use [patch...] to override with git rayon, it might still use the published rayon-core, but I'm not sure. The best answer is that I should just publish a new release for you. :)
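For reference, one plausible (untested) shape of that workaround is to patch both crates to the same git source in Cargo.toml, since rayon-core's version check rejects a mix of git rayon with published rayon-core. The repository URL here is the upstream rayon repo; pin a rev or branch as needed:

```toml
# Hypothetical Cargo.toml patch: take rayon AND rayon-core from the same
# git checkout so their versions stay in lockstep.
[patch.crates-io]
rayon = { git = "https://github.com/rayon-rs/rayon" }
rayon-core = { git = "https://github.com/rayon-rs/rayon" }
```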