self->current_mailbox_element()->move_content_to_message() to have something to hold onto.
caf::binary_serializer, and the output of that changed between 0.17 and 0.18, so we're in the process of transitioning our database state in a forward-compatible way.
Totally separate question:
self->set_down_handler(...);
auto handle = self->spawn<caf::monitored>(my_typed_actor, args...);

typed_actor<...>::behavior_type my_typed_actor(typed_actor<...>::pointer self, Args... args) {
  if (!validate(args...)) {
    self->quit(make_error(...));
    return typed_actor<...>::behavior_type::make_empty_behavior();
  }
  return {
    // ...
  };
}
If the validation inside the my_typed_actor function fails, the down handler is never triggered, even though the actor shuts down correctly. Is this a bug, or are we simply abusing the API here?
inspect. So I guess the middleman can just transfer the serialized bytes from the Arrow API without incurring the serialization latency inside the middleman itself. May I know if this is true? But yeah, for sure I will benchmark a bit to see the real effect. :)
caf.scheduler.max-threads (https://actor-framework.readthedocs.io/en/stable/ConfiguringActorApplications.html).
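For reference, a minimal sketch of setting that option in a config file; the value here is just an example, and the exact syntax is described in the linked configuration docs:

```
caf {
  scheduler {
    max-threads = 4
  }
}
```

The same option can also be passed on the command line, e.g. --caf.scheduler.max-threads=4.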
caf::async::{consumer,producer}_resource<T>. That neatly solves what I wanted to do.
sphinx-build in the manual directory and you'll have the version that has some initial documentation for the new flow API. :)
I see. I can see how a new stream<T> built on top of data flows can work much better than the existing streaming API, which was a pain to debug and had lots of shutdown-order issues.
If I understand data flows correctly, they must be fully configured before their hosting actors launch. This is a limitation compared to the existing experimental streaming API, which was fully configurable externally and integrated into the typed actor API. Will this limitation also exist for the new stream<T>? I'm curious about your plans on the streaming front in general.
Based on my experiments, we would likely need the new streams, because flows cannot replace the old streams yet, unless the old API is staying for 0.19. The lack of dynamic setup of flows between actors after they have started will be a major hassle for us, because it requires rewriting how we start our ingestion pipeline. I'll let you know if I encounter any major issues, but so far I've seen none in the playground I've set up for my experiments.
If you need any design input for the new streams, I'm always open to chat; we're really deep into the streaming internals by now. I'd also be really interested in a design doc if you have something like that.