It's working using a router on the host PC. I can communicate over ROS between the native PC and the container :)
I'd like to get it going over NAT without the router though, if that's possible.
It should have been working with your first configuration, without the router.
Maybe you have been confused by `ros2 topic list` not working because of non-routed discovery traffic?
Can you please give it a try again, starting the ROS2 nodes on each side and checking if the data flows? You can also set the `RUST_LOG=debug` environment variable to see the DDS discovery information and the established zenoh routes.
I don't understand why you need to "publish a topic in the container to make ROS think it exists". Does ROS create the DDS subscription only once it discovers the DDS publication? That would explain the behaviour you see.
But that's not what I saw when testing with the ROS2 turtlesim + teleop: both declare their DDS subscriptions at startup anyway.
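To illustrate what I mean (a minimal rclpy sketch, not the actual turtlesim code): the subscription, and the DDS reader behind it, is declared as soon as the node starts, whether or not a matching publisher has been discovered yet.

```python
# Minimal sketch: the DDS reader and its discovery announcement are
# created at node startup, before any matching publisher exists.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

rclpy.init()
node = Node('early_subscriber')
# The DDS subscription is declared right here, immediately.
node.create_subscription(String, 'hello_world', lambda m: print(m.data), 10)
rclpy.spin(node)
```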
The case you were testing will work the way you describe, but that's effectively for a finished system. It works because the nodes you run on both sides of the bridge explicitly open up the topics they want. But there are a lot of things in ROS that don't work that way. For example, no ROS developer is going to be happy if the developer tools don't work, and the developer tools won't work if introspection information isn't available. (There are important use cases in ROS where that information must be available without needing a shell on the actual robot.) That's why `ros2 topic list` won't show topics that are actually available - it doesn't subscribe to them to list them. It's also why `ros2 topic echo` won't echo a topic that actually exists on the far side of the bridge: it thinks the topic doesn't exist because it can't see it in the discovered topics, and so it refuses to subscribe to it. And it's why rviz will not be able to visualise data by topic - it doesn't know the topics exist, so the developer must know them in advance.
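To make that concrete, here is a rough rclpy sketch of how such tooling obtains its topic list (assumed code, using the standard rclpy graph API): it queries the locally discovered graph and never subscribes, so topics known only on the far side of the bridge simply never appear.

```python
# Sketch: introspection reads the node's local discovery cache; it never
# subscribes. Topics that exist only beyond the bridge are not listed.
import time
import rclpy
from rclpy.node import Node

rclpy.init()
node = Node('graph_inspector')
time.sleep(1.0)  # give DDS discovery a moment to populate the graph cache
for name, types in node.get_topic_names_and_types():
    print(name, types)
node.destroy_node()
rclpy.shutdown()
```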
`ros2 topic *` commands that rely on DDS discovery?
The `ros2 topic` command is implemented here:
`ros2 topic list` won't work, but I've confirmed that running the zenoh-dds-bridge in peer mode across NAT without a router on the host PC works just fine. Thanks!
When you run the bridge (setting the `RUST_LOG=debug` environment variable prior to starting it), do you see in the logs a route being created for the topic you're subscribing to?
New route: DDS '…' => zenoh '…'
You should also see a log message with `DiscoveredPublication` and the name of your topic.
I have found the log messages.
First I started the zenoh-bridge-dds, and it gave me the 'New route: DDS …' message.
Then I started the ROS2 node hello_publisher with the topic name hello_world.
The log shows:
```
[2021-05-17T09:52:36Z DEBUG zenoh_bridge_dds] DiscoveredPublication(rt/hello_world, std_msgs::msg::dds_::String_, None
[2021-05-17T09:52:36Z DEBUG zenoh_bridge_dds] Declaring resource /rt/hello_world
[2021-05-17T09:52:36Z DEBUG zenoh::net::routing::resource] Register resource /rt/hello_world
[2021-05-17T09:52:36Z INFO zenoh_bridge_dds] New route: DDS 'rt/hello_world' => zenoh '/rt/hello_world' (rid=16) with type 'std_msgs::msg::dds_::String_'
[2021-05-17T09:52:36Z DEBUG zplugin_dds] Local Domain Participant IH = 10456347892458033829
[2021-05-17T09:52:36Z DEBUG zplugin_dds] Discovery data from Participant with IH = 10456347892458033829
[2021-05-17T09:52:36Z DEBUG zplugin_dds] Discovered endpoint is keyless: true
[2021-05-17T09:52:36Z DEBUG zplugin_dds] Ignoring discovery from local participant: rt/hello_world
```
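For context, the hello_publisher node was presumably something along these lines (a sketch: only the topic name hello_world and the std_msgs String type come from the log above; everything else is assumed). Note that the DDS topic 'rt/hello_world' in the log is the ROS topic 'hello_world' with ROS2's "rt/" prefix.

```python
# Assumed shape of the hello_publisher node; topic and type match the log.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class HelloPublisher(Node):
    def __init__(self):
        super().__init__('hello_publisher')
        self.pub = self.create_publisher(String, 'hello_world', 10)
        self.create_timer(1.0, self.tick)  # publish once per second

    def tick(self):
        msg = String()
        msg.data = 'hello world'
        self.pub.publish(msg)

rclpy.init()
rclpy.spin(HelloPublisher())
```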
After that I started the Python script on my local machine, and the log showed:
```
New session link established from 6F2AC0C5A89D4050AC9473305CD6E25C: tcp/xx.xx.9.50:7447 => tcp/xx.xx.8.224:33470
```
I think the connection is already established, and the zenoh bridge has also found the publisher on topic hello_world.
```
[2021-05-17T10:05:43Z DEBUG zenoh::net::routing::resource] Register resource rt/hello_world
[2021-05-17T10:05:43Z DEBUG zenoh::net::routing::pubsub] Register peer subscription rt/hello_world (peer: 6F2AC0C5A89D4050AC9473305CD6E25C)
```
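The Python script on the local machine was essentially the matching subscriber, along these lines (again a sketch; only the topic name and type are taken from the logs):

```python
# Assumed shape of the local subscriber script; it subscribes directly, so
# data flows through the bridge even though 'ros2 topic list' shows nothing.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

rclpy.init()
node = Node('hello_subscriber')
node.create_subscription(String, 'hello_world',
                         lambda msg: node.get_logger().info(msg.data), 10)
rclpy.spin(node)
```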
JEnoch (Julien Enoch):
> The ROS2 daemon is only queried by the ROS2 CLI tools. Regular ROS2 applications directly use their own ROS2 graph cache managed by rcl+rmw.
This is not necessarily true. The daemon was mainly made for the CLI tools to keep them responsive and accurate, but other applications can use it if they want to. I think that rviz uses the daemon, actually.
What we are now considering is to allow the user to configure a zenoh bridge with a set of topics for which they want the discovered entities to be propagated to a remote bridge. Upon receiving this discovery information, the remote bridge will create the corresponding DDS entities, allowing DDS advertisement and completion of the ROS2 graph by all ROS2 nodes.
Of course, a user will be able to set "*" for this set of "propagated" topics, meaning that all DDS entities will be proactively created, but with consequences for discovery time and scalability.
I think this gives the best of both worlds. It allows us to fence off different sections of a widely-distributed application, while also allowing the dynamic nature of ROS to be maintained where it is needed. And for introspecting an unknown system, "*" can be used.
Hi! We have been testing rmw_zenoh in the past week, and we were able to run a basic distributed application in the following setup:
ROS1 node -> ros1_bridge (with the ROS2 side running rmw_zenoh) -> Zenoh Router -> Android emulator -> ROS2 node (running with rmw_zenoh)
We are also looking at adding support for shared-memory transport in rmw_zenoh (we got delayed this week by other projects). From what I have seen so far, I think that rmw_zenoh for ROS2 apps could provide a superior solution compared to using DDS on the ROS2 side and then bridging it over Zenoh: the bridging approach just brings in unnecessary complexity when the whole system could run Zenoh as the ROS2 backend. Obviously, rmw_zenoh is not yet production-ready, and as we discussed here before, it might make sense to reimplement it in Rust instead of using the Zenoh-C binding, but in the long run this might become a better solution.