    Matija Tudan
    @mtudan
    Hi all, has anyone tried to cross-compile the helloworld example using Yocto?
    eboasson
    @eboasson
    @gbiggs_gitlab sorry to have overlooked your question. I'm not sure how to find out: https://raw.githubusercontent.com/ros2/ros2/galactic/ros2.repos just points to the branch, but when I look at https://github.com/ros2-gbp/cyclonedds-release/tree/release/galactic%2Fcyclonedds it doesn't seem to have been updated. I guess I should have made a PR somewhere ...
    In any case we're waiting for approval from the Eclipse Foundation to release the 0.8.0 version, which we are already trying to update Humble to. Given that OR has a preference for depending on officially released versions over some arbitrary tags, I think there's a good chance Galactic, too, will switch.
    @mtudan I see you also found a relevant GitHub issue, which is what I would have suggested: I don't think many people have been doing this, so that issue currently gives you the best chance of getting a response from someone who knows more about using Cyclone on Yocto.
    Michael Pöhnl
    @budrus
    I would be interested in the information included in the SampleInfo when using the CXX API. I think it is this one https://github.com/eclipse-cyclonedds/cyclonedds-cxx/blob/master/src/ddscxx/include/dds/sub/detail/TSampleInfoImpl.hpp. Do you have more documentation, or would that be in the DDS spec?
    eboasson
    @eboasson
    We should have more documentation, but in this case the spec is indeed a good place to look because it spells out the interpretation of the various fields in great detail in section 2.2.2.5.1.1 "Interpretation of the SampleInfo". The translation to the C++ API is then pretty straightforward.
    https://cyclonedds.io/docs/cyclonedds-cxx/0.8/ddscxx.html#_CPPv4N3dds3sub10SampleInfoE really is just a very short summary of what it says in the spec
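    For illustration, a minimal sketch using the C API (the C++ SampleInfo carries the same information; the one-sample buffer and the printed fields are just examples):

```c
#include <stdio.h>
#include <inttypes.h>
#include "dds/dds.h"

/* Take at most one sample from an existing reader and print a few of the
 * SampleInfo fields described in section 2.2.2.5.1.1 of the DDS spec. */
static void print_sample_info(dds_entity_t reader)
{
  void *samples[1] = { NULL };   /* let Cyclone loan the sample memory */
  dds_sample_info_t infos[1];
  dds_return_t n = dds_take(reader, samples, infos, 1, 1);
  if (n > 0)
  {
    printf("valid_data: %d\n", (int) infos[0].valid_data);
    printf("sample/view/instance state: %d/%d/%d\n",
           (int) infos[0].sample_state, (int) infos[0].view_state, (int) infos[0].instance_state);
    printf("source_timestamp: %" PRId64 "\n", infos[0].source_timestamp);
    printf("disposed_generation_count: %" PRIu32 "\n", infos[0].disposed_generation_count);
    (void) dds_return_loan(reader, samples, n);
  }
}
```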
    Michael Pöhnl
    @budrus
    Thanks @eboasson
    sgf201
    @sgf201
    Hi all, has anyone sorted out which projects are using Cyclone DDS, especially those that are also using iceoryx?
    eboasson
    @eboasson
    @sgf201 I am not aware of anyone who has compiled such a list. https://iot.eclipse.org/adopters/ could be a starting point (both Cyclone DDS and Iceoryx are Eclipse projects listed there) but I also know for a fact that there are projects/companies using it that have not uploaded their logos.
    sgf201
    @sgf201:matrix.org
    Thanks @eboasson
    Michael Pöhnl
    @budrus
    I'm wondering what happens if Reliability is set to RELIABLE, the reader History to KEEP_LAST N, and there is an overflow of the reader's cache. The spec says "DataWriter may block if the modification would cause data to be lost". Would samples that are in the reader cache but not yet read or taken be regarded as lost samples if there is an overflow? I.e., would a reader that does not read the samples in its cache fast enough block the writer? If not, what would then be the way to go if there is a reader that cannot keep pace with a writer? Setting History to KEEP_ALL and hoping that the resource limits are big enough?
    eboasson
    @eboasson
    @budrus For a KEEP_LAST reader, pushing data out of the history is perfectly acceptable and is not considered losing data. Blocking occurs when continuing would result in exceeding a "resource limit". See, e.g., 2.2.2.4.2.11, 2.2.3.18 and 2.2.3.19. If you look at those, it is possible to do, e.g.: depth = K, max_samples_per_instance = K, max_samples = K and max_instances = N > K and then you could fill up the history for one instance (= key value) and be unable to store a second instance. In other words, in the presence of keys, KEEP_LAST doesn't guarantee there won't be blocking.
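    For concreteness, a hedged sketch of that QoS combination with the C API (K and N are placeholder values):

```c
#include "dds/dds.h"

#define K 4   /* history depth and per-instance/total sample limits */
#define N 16  /* instance limit, N > K */

/* Reader QoS where filling the history of one instance (K samples) already
 * exhausts max_samples, so a second instance cannot be stored. */
static dds_qos_t *make_reader_qos(void)
{
  dds_qos_t *qos = dds_create_qos();
  dds_qset_reliability(qos, DDS_RELIABILITY_RELIABLE, DDS_MSECS(100));
  dds_qset_history(qos, DDS_HISTORY_KEEP_LAST, K);
  /* max_samples = K, max_instances = N, max_samples_per_instance = K */
  dds_qset_resource_limits(qos, K, N, K);
  return qos;
}
```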
    Michael Pöhnl
    @budrus
    OK, thanks @eboasson. I'm wondering what the best Reliability strategy is for iceoryx in Cyclone. We currently set the iceoryx queue size to the KEEP_LAST depth of the reader's History QoS. We can assume that iceoryx will only lose data when there is a queue overflow, which drops the oldest sample to store the latest one (with default settings). We now plan to use the blocking feature in iceoryx when RELIABLE is set. This will block the publisher until there is free space in the queue again, which is not a nice thing for the publisher. Maybe it is unlikely to occur as long as the listener thread gets scheduled and can transfer the samples from the iceoryx queue to the reader cache. But to me the overflow behavior of the iceoryx queue would be the same as the overflow behavior the reader cache would have with RELIABLE. So one could argue that a reliable reader with KEEP_LAST will lose samples if it doesn't consume fast enough, and that it makes little difference if we already drop in the iceoryx queue. This would avoid writers getting blocked if a reliable reader is not fast enough. Using the blocking feature in iceoryx would be needed if we want to ensure that the reader does not lose any sample, even when the price is slowing down the writer. How would you solve the latter use case with DDS, and is the plan we have for reliability with iceoryx a good one or not?
    eboasson
    @eboasson

    @budrus It depends on what we think of the DDS spec and to what extent that spec should be followed to the letter. I tend to believe there are really only a few sensible ways of using DDS (and this does align with what I have seen at users' sites). The first distinction is "shared data space" vs "message queue".

    For a shared data space you essentially always end up with KEEP_LAST 1 on reader & writer, with keys used to distinguish objects of the same topic with independent lifetimes or update times. This works fine, all you care about is the most recent value, and so:

    • for a topic that has no key field: it maps to the Iceoryx history without blocking;
    • for a topic that does have key fields: there is no real equivalent in Iceoryx, so one needs to allow blocking. Increasing the subscriber history depth a bit makes sense to reduce the risk of blocking.

    For a message queueing pattern, you typically end up with KEEP_ALL in DDS, so blocking in Iceoryx is the sensible thing to do.

    KEEP_LAST N>1 is rarely used in DDS systems because it rarely actually adds anything of value. It is popular in ROS 2 and in my view that is because ROS 2 is lacking support for keys. The body of existing code written against that limitation and the direction ROS 2 is moving in don't make me think that's likely to change, and so ROS 2 is in my considered opinion going down a dead end. What exactly we do and what exact guarantees we give is something that should be determined by how it is actually used, rather than by how the DDS spec defines it.

    DDS says that with KEEP_LAST N, eventually the reader should see, for each key value, the last N values published by "the" writer. (Note that if there are multiple writers, that guarantees very little.) All you can do as an application is sample your reader history, and so you have to be prepared to handle gaps in the sequence. That means the guarantee DDS gives you is really meaningful only if there is a single writer, a reader that waits until several samples have been collected before reading, and then requires that those samples were consecutive. I've never yet encountered a system where that requirement existed ...

    And so I believe the DDS specification gives a stronger guarantee than it should. The correct specification would be that the reader samples the output from the writers and pushes those samples into its history. Then gaps can occur anywhere.

    With that model, you could just use KEEP_LAST M (sic!) in Iceoryx without blocking and still be correct. With the DDS model, you ought to use M >= N. Then use blocking for topics with key fields and non-blocking for topics that have no key fields.
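    To make the two patterns a bit more tangible, a rough sketch in the C API (the function names and the max_blocking_time values are only illustrative):

```c
#include "dds/dds.h"

/* "Shared data space": only the latest value per key value matters. */
static dds_qos_t *data_space_qos(void)
{
  dds_qos_t *qos = dds_create_qos();
  dds_qset_reliability(qos, DDS_RELIABILITY_RELIABLE, DDS_MSECS(100));
  dds_qset_history(qos, DDS_HISTORY_KEEP_LAST, 1);
  return qos;
}

/* "Message queue": every sample matters, so the writer may block. */
static dds_qos_t *message_queue_qos(void)
{
  dds_qos_t *qos = dds_create_qos();
  dds_qset_reliability(qos, DDS_RELIABILITY_RELIABLE, DDS_SECS(1));
  dds_qset_history(qos, DDS_HISTORY_KEEP_ALL, 0);
  return qos;
}
```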

    Michael Pöhnl
    @budrus
    That was a lot of information, thanks. In automotive, I've often seen applications getting triggered by an event that has a different frequency than the writers that provide messages for their readers. Often the "shared data space" applies, so KEEP_LAST 1 is fine. If Reliability is set to RELIABLE, we would start blocking the writer when Cyclone does not get the samples out of the iceoryx queue fast enough. In the case where there are no keys, dropping the oldest sample in the iceoryx queue should be fine. So maybe we should not use the blocking feature of iceoryx in this case. I think today iceoryx won't be used if the topic has keys or with KEEP_ALL. So then we may have no case where the blocking feature of iceoryx makes sense. Sure, this could change when KEEP_ALL or keys are supported.
    sgf201
    @sgf201
    When Cyclone DDS + iceoryx work together, which one is responsible for cross-host communication, Cyclone DDS or iceoryx?
    Michael Pöhnl
    @budrus
    Network communication is done with DDS RTPS from Cyclone. iceoryx is only used within a host, between threads and processes, if the QoS settings are compatible. iceoryx needs shared memory.
    sgf201
    @sgf201
    @budrus thanks, so does the data travel like "msg(A) -> shm (from A RouDi) -> cyclonedds -> net -> cyclonedds -> shm (from B RouDi) -> reader(B)" or like "msg(A) -> shm (from A RouDi) -> cyclonedds -> net -> cyclonedds -> reader(B)"?
    Michael Pöhnl
    @budrus
    No, Cyclone can access the network stack from each process. If writer and reader match over the network, the message is passed without shm being involved. So Cyclone can send it to both channels, shm and network, when you pass a message.
    sgf201
    @sgf201
    the msg(A) has been filled into a chunk (loaned from RouDi) and all writers and readers are initialized with iceoryx support
    Michael Pöhnl
    @budrus
    Cyclone will serialize this loaned message and send it over the network if there is a reader on another host
    sgf201
    @sgf201
    and all the readers on B will get the msg directly from Cyclone (the RouDi of B will do nothing in the whole read operation), right?
    Michael Pöhnl
    @budrus
    I think so. Please be aware that RouDi itself never does anything in the actual data transfer; it only sets everything up. But yes, for what you describe with loaning of messages, you will use the shared memory of host A to store the message.
    sgf201
    @sgf201
    Clear, thanks a lot.
    That means, if a msg comes from another host, enabling iceoryx will not reduce memory copies for those msg read operations.
    MatthiasKillat
    @MatthiasKillat
    @budrus Yes, for Keep All iceoryx is never used. I also modified the upcoming changes in eclipse-cyclonedds/cyclonedds#1180 to not block if iceoryx is used in reliable communication. This means we will lose data in case of queue (buffer) overflow.
    MatthiasKillat
    @MatthiasKillat

    To add more on the data path: there are certain QoS which support shared memory (e.g. Best Effort, Volatile, Keep Last 10). Assume the QoS are fine. Then the data type we want to send is considered: there is fixed-size data (size known at compile time, PODs, fixed-size arrays etc.) or dynamically sized data (the size is known at runtime only, like sequences).

    In both cases we use shared memory, but for fixed size data we do not even need a copy (zero-copy). For dynamic size data we serialize into shared memory at the writer and deserialize from it at the reader.

    The QoS supported are expanded to Transient Local in eclipse-cyclonedds/cyclonedds#1180 which is not yet merged to master but will be soon.

    And yes, if receiving data from another host (machine) we cannot use iceoryx at all and will use the network.
    In this case we always serialize and deserialize and maybe copy to/from buffers used to send the data over the network interface.
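    As a rough sketch of an iceoryx-friendly writer with the C API (whether shared memory is actually used also depends on the build and configuration; "FixedMsg" and its descriptor stand in for your own IDL-generated type):

```c
#include "dds/dds.h"

/* Writer with QoS from the list above (Best Effort / Volatile / Keep Last 10);
 * with a fixed-size type this path can avoid serialization entirely. */
static dds_entity_t make_shm_friendly_writer(dds_entity_t participant,
                                             const dds_topic_descriptor_t *fixed_msg_desc)
{
  dds_qos_t *qos = dds_create_qos();
  dds_qset_reliability(qos, DDS_RELIABILITY_BEST_EFFORT, 0);
  dds_qset_durability(qos, DDS_DURABILITY_VOLATILE);
  dds_qset_history(qos, DDS_HISTORY_KEEP_LAST, 10);
  dds_entity_t topic = dds_create_topic(participant, fixed_msg_desc, "FixedMsg", NULL, NULL);
  dds_entity_t writer = dds_create_writer(participant, topic, qos, NULL);
  dds_delete_qos(qos);
  return writer;
}
```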
    zemskymax
    @zemskymax
    Hi all, I would like to connect a WPF (.Net) application to ROS 2 Galactic (with the Cyclone DDS RMW). The communication should be simple pub/sub (without client and service implementations).
    What would be the simplest way to approach this task? Would you suggest creating the binding on top of the C++ or the C Cyclone DDS implementations?
    What Cyclone version should I use, 0.8.2 or an older one? Thanks
    eboasson
    @eboasson

    The C++ binding is a wrapper around the C binding so in the end it is all the same (if .Net agrees with either) so I would say it is best to use the one that you feel most comfortable with. I personally would use the C one, but then I know that one much better and I also know people who'd definitely go the other way.

    To make simple pub/sub work, you don't need to do anything special, just make sure your topic and type names/definitions match between the ROS 2 world and DDS because there are some translations going on. See, e.g., https://github.com/eclipse-cyclonedds/cyclonedds/issues/518#issuecomment-630901396
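    For example, a hypothetical sketch of matching the ROS 2 naming from plain C ("rt/" is the prefix ROS 2 puts on topic names and "std_msgs::msg::dds_::String_" is the mangled type name; ros_string_desc is a placeholder for a matching IDL-generated descriptor you would have to create yourself):

```c
#include "dds/dds.h"

/* ROS 2 topic "/chatter" appears on the DDS side as "rt/chatter"; the type name
 * is mangled to "std_msgs::msg::dds_::String_". */
static dds_entity_t make_ros2_chatter_topic(dds_entity_t participant,
                                            const dds_topic_descriptor_t *ros_string_desc)
{
  return dds_create_topic(participant, ros_string_desc, "rt/chatter", NULL, NULL);
}
```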

    sgf201
    @sgf201
    Are there any tool recommendations for use with Cyclone DDS? For example: 1. recording all data; 2. something lightweight showing the bandwidth of the current DDS data.
    eboasson
    @eboasson
    Not currently, assuming with (1) you mean something other than a packet capture. A recorder would be pretty straightforward in Python (look at what writers exist, create readers for new topics/types, record whatever arrives). For (2) the data is not currently being made available by Cyclone in a convenient form, but again it depends on what exactly you want to see: capture filters can be used for the lightest-weight independent bandwidth measurement (i.e., tcpdump 'udp[8:4]=0x52545053' captures all RTPS data, so it really is but a tiny step to get the bandwidth instead of a capture)
    aohwang
    @aohwang
    Hi all, I see the ControlTopic option in docs/manual/options.md, but I don't know what the difference is when the option is enabled or not. Could you give me some hints?
    aohwang
    @aohwang
    Another question: I see the function "trigger_recv_threads", which sends a byte to the receive threads. My question is: why do we need to trigger the receive threads in some situations? Thanks
    aohwang
    @aohwang
    in q_receive.c
    aohwang
    @aohwang
    Sorry, I have figured out what the function trigger_recv_threads is used for.
    eboasson
    @eboasson

    ControlTopic is a setting inherited from OpenSplice and there it determines whether or not a topic for controlling some aspects of the DDSI stack is created. When I restored it in Cyclone it was no longer a topic and the "enable" attribute means nothing — I forgot to clean that up, sorry ...

    What you can do with it is make a Cyclone domain start "deaf", "mute" or both and either automatically reset this after some time or do it manually. Sometimes it comes in handy to be able to start it knowing there cannot be any communication, especially in testing.
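    If I remember the C API correctly, there is also a programmatic counterpart, dds_domain_set_deafmute(); treat the exact name and signature here as an assumption and check dds.h:

```c
#include <stdbool.h>
#include "dds/dds.h"

/* Make the domain this participant belongs to deaf and mute, and have that
 * automatically reset after 10 seconds (assumed signature, see dds.h). */
static void go_silent_for_a_while(dds_entity_t participant)
{
  (void) dds_domain_set_deafmute(participant, true, true, DDS_SECS(10));
}
```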

    All trigger_recv_threads needs to do is to make something happen to unblock the select() and recvmsg() calls that Cyclone does to wait for data from the network. For the select I have a clean solution, for the recvmsg() calls the best I could come up with is sending a minimal packet and hoping that it arrives (perhaps it always does, but it certainly usually does).

    Some people have asked the sensible question if it would not be better to send an empty RTPS message instead of a single byte. For Cyclone's purposes it makes no difference (all it needs is for the blocking call to return), but when you look at a packet capture a proper RTPS message header with no other content is probably less surprising.
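    Purely as an illustration of that trick (not Cyclone's actual code): waking a thread blocked in recvmsg()/recvfrom() by sending a one-byte datagram to the socket's own port on loopback could look like this:

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Send a minimal datagram to our own receive socket so a blocked
 * recvmsg()/recvfrom() on it returns. */
static void wake_receiver(int sock, uint16_t port)
{
  struct sockaddr_in self;
  memset(&self, 0, sizeof(self));
  self.sin_family = AF_INET;
  self.sin_port = htons(port);
  self.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  const uint8_t byte = 0;
  (void) sendto(sock, &byte, 1, 0, (const struct sockaddr *) &self, sizeof(self));
}
```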

    aohwang
    @aohwang
    Thanks @eboasson :smile:
    aohwang
    @aohwang:matrix.org
    Hi @eboasson: should this loop be removed? https://github.com/eclipse-cyclonedds/cyclonedds/blob/master/src/core/ddsi/src/q_addrset.c#L347 It seems to be redundant because of line 332.
    sgf201
    @sgf201:matrix.org
    Hi all, is there a description of the pubsub tool? I think it is a tool for sending and receiving DDS messages, but I cannot understand the tool's help output.
    eboasson
    @eboasson

    @aohwang:matrix.org a bit late, but still, better late than never: that loop is not redundant, but I kinda agree that the writing is a bit clumsy. Basically, if you have an address that matches a "local" interface, the first loop does an early return, and this second loop only comes into play if none of the addresses matches something local.

    A number of issues have popped up where it would mistakenly assume that routable addresses would be routed correctly by the kernel. Fortunately, so far they could be worked around by enabling the DontRoute configuration option, which causes it to completely ignore any addresses not matching a local interface. But (so far) that's more a problem of having an insufficiently strong preference for local addresses than of this bit of code being broken.

    @sgf201:matrix.org yes, there is, but not in the C code base. I thought I had already removed the C program, as the Python version (which does have documentation: https://github.com/eclipse-cyclonedds/cyclonedds-python/blob/master/docs/source/tools.pubsub.rst) has now really outgrown it.

    The pubsub tool is something I originally wrote for OpenSplice and then it got ported to Cyclone (and some useful functionality stripped out in the process). Most of it still fits, so https://github.com/eboasson/opensplice-tools could still be used to get some more information for the C version.

    aohwang
    @aohwang
    Hi @eboasson, I made a comment on pull request eclipse-cyclonedds/cyclonedds#1239. What do you think of it? Thanks
    sgf201
    @sgf201:matrix.org
    Hi, is it possible to use iceoryx to send one kind of topic and not the others? For example, using iceoryx for long messages and not for short messages.
    Michael Pöhnl
    @budrus
    @sgf201 Not explicitly as of today. But as iceoryx is not used for all possible QoS settings, a trick would be to use a QoS setting that is incompatible with iceoryx; e.g., KEEP_ALL or a key field in the message would do it. Not really a clean solution, though.
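    A sketch of that workaround with the C API (the function name is just for illustration): give the writer for such a topic a KEEP_ALL history so it is incompatible with iceoryx and takes the network path, while other writers keep an iceoryx-compatible KEEP_LAST QoS:

```c
#include "dds/dds.h"

/* Writer whose QoS (KEEP_ALL) is not supported by iceoryx, so data for this
 * topic bypasses shared memory. */
static dds_entity_t make_non_shm_writer(dds_entity_t participant, dds_entity_t topic)
{
  dds_qos_t *qos = dds_create_qos();
  dds_qset_history(qos, DDS_HISTORY_KEEP_ALL, 0);
  dds_entity_t writer = dds_create_writer(participant, topic, qos, NULL);
  dds_delete_qos(qos);
  return writer;
}
```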
    sgf201
    @sgf201:matrix.org
    When we send a small msg like an 8-byte state, is it faster to use DDS directly than to use iceoryx, since loaning memory also requires some operations?
    Thijs Miedema
    @thijsmie
    Hi all! We're slowly migrating away from using gitter and setting up a community on discord, feel free to join! https://discord.gg/BkRYQPpZVV