@peddinavk one question on which your input would be interesting. If you want to do this, do you want the contract that only one subscriber is allowed for this topic? Then the single subscriber could always change the data, and multiple subscribers would be an error. Or would you like the functionality that a subscriber can request a writable reference, and if there are other readers a copy is made behind the scenes? The first option is probably easier.
Yes, the 1st option would suit us, as there is only one modifier, but there can be more than one end subscriber since they are only readers.
Thank you @budrus, @sebastian:reparts.org for your replies. Please allow me to provide more details about this use case, which is similar to the "chain" mentioned by @sebastian:reparts.org. A V2X stack (in automotive) is similar to a network stack in user space, structured in layers (AccessLayer(AL) --> NetworkLayer(NL) --> MessagingLayer(ML) --> SafetyApps(SA)). W.r.t. data access they look like a chain: (AL)pub --> (NL)mod/sub/pub --> (ML)mod/sub/pub --> (SA)multiple subs. Usually each layer in a network stack strips its respective header and routes the payload to the upper layers; in a V2X stack it's the same, but there is an extra catch: some of these layers unpack the data and create a new header before passing it to the next layer. For example, the NL header is ASN.1-serialized to save over-the-air bandwidth, so the NL has to deserialize the header, create and attach a new header to the payload, and send it to the ML, where the ML verifies/decrypts the payload and provides the final safety data to the safety apps. Safety apps like FCW (Forward Collision Warning) and ICW (Intersection Collision Warning) use the same "safety payload" to determine the threat, but these apps only consume the data and won't modify it, as they are the last users in the chain. We are looking for a zero-copy mechanism for such a layered stack, operating in the same shared memory segment created by the AL and released by the final-layer apps (the safety apps).
Out of curiosity, has this group come across this kind of use case (a chain model of pub/sub) in the past?
@lygstate: iceoryx has a central daemon called RouDi. This daemon creates all the resources like shared memory and takes care of the service discovery. Without this daemon, applications will not run.
Yeah, I've seen that. Is it possible to make RouDi not a daemon, embedding it into each running process and having it freed by the last running process?
In other words, what would I need to do to get RouDi embedded into each process, so it can be used like DDS on a single machine?
Increasing `IOX_MAX_SUBSCRIBERS`, for example? Or, for a `Listener`, changing the max allowed subscribers to a much higher value like `255` (uint8_t max)? Will it lead to extra iterations over all 255 entries instead of over 128 entries? We have a system with a lot of subscribers for certain topics, and while I can eventually reduce some of them, I have tweaked these values for a rough and dirty prototype. I would like to know if I'm paying hidden costs.
@nikhilm:matrix.org the short answer is: it is very likely that you will not see any increase in runtime cost unless you massively increase the number to something like 16384 or higher, and even then the increase should be in the range of microseconds. As long as the number is less than 4096 (the page size), the runtime should be more or less identical.
Here is why: when you increase the number of supported attachments (`MAX_NUMBER_OF_EVENTS_PER_LISTENER`) you also have to increase `MAX_NUMBER_OF_NOTIFIERS_PER_CONDITION_VARIABLE`, and with that you increase the size of an array of bools. It is not yet optimized, but in the end 8 bools could be stored in one byte, and then 256 entries would take only 32 bytes. For now it is an array of 256 bytes, one byte for each bool.
Usually the page size is 4096 bytes, so the CPU should fetch the whole array in one go and then start working. As long as you do not increase the number of events beyond 4096, you should see no increase in the memory-to-CPU transfer overhead; if you do go beyond 4096, expect roughly an additional 100 ns (something in that range) of overhead for every extra page. See the article below on why I came up with this number.
The CPU then checks every entry for whether it is true; if so, the entry id is stored in a vector, and the listener later only checks and handles the ids where something actually happened. In the worst case the vector could be as large as the number of events, but then you have other serious problems in the system, like a very heavy load where the listener is rarely scheduled. The normal case is one id in the vector, or maybe two.
This check is performed directly on the CPU without acquiring new pages and is so fast that the difference between 128 and 1024 entries should not be measurable, since the next step would again be to load a page and do real work, which takes much longer than checking a bunch of bools.
The final big task is to transfer the vector of ids back to memory, but since it rarely contains much, that should also complete in one go, as it should be smaller than the page size.
Here is a nice article on how long a read from memory into the CPU cache takes: https://formulusblack.com/blog/compute-performance-distance-of-data-as-a-measure-of-latency/
`static bool PoshRuntime::isRuntimeInitialised()` - I'd like to be able to check if it's not initialised, so I can prevent the construction of my publishers/subscribers without getting terminated.
I just opened a draft PR #997 to discuss the new service discovery user API for v2.0.0, and I need your help! What do you think is the best fitting name for the class which provides information about the current services in the system?
Any other ideas? Would appreciate your feedback! Also, because many will be gone soonish, I wish you Merry Christmas and a Happy New Year! :santa: :christmas_tree:
The merge window is closing and v2.0 will likely land at the end of this week. We still need a name for it! In our established tradition of naming each release alphabetically after ice cream flavours, please participate in the following poll: https://nuudel.digitalcourage.de/KHMZu2N1RxQQJTOD
v2.0.0 has landed! More info is available on the release page. One of the new features is request/response communication, closing the gap towards AUTOSAR's `ara::com` :car: :truck: :tractor: Come & join the developer meetup on Thursday if you have questions!