I built zenoh for aarch64 for a Pi 4 (2 GB), started the router with ./zenohd, and followed the steps in the documentation for setting up a local memory storage, using the --mem-storage /demo/example/** parameter for zenohd. I tried putting "hello world" through the API, but upon querying the data I get [], although the paths are the same.
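For reference, here is a sketch of the REST round-trip I would expect to work (assuming zenohd 0.5 with the REST plugin on its default port 8000; the key name below is just an illustration). One thing to check: the key you put must fall under the storage selector /demo/example/**, otherwise the storage never sees the value and the query returns [].

```shell
# Assumes zenohd was started with: ./zenohd --mem-storage '/demo/example/**'
# Put a value under a key covered by the storage selector:
curl -X PUT -d 'hello world' http://localhost:8000/demo/example/test
# Query it back; keys outside /demo/example/** are not stored and return []:
curl 'http://localhost:8000/demo/example/**'
```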
RUST_LOG=debug ./target/release/zenoh-bridge-dds -m peer
ros2 run examples_rclcpp_minimal_publisher publisher_member_function
<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd">
  <Domain id="any">
    <Discovery>
      <ParticipantIndex>auto</ParticipantIndex>
      <MaxAutoParticipantIndex>30</MaxAutoParticipantIndex>
    </Discovery>
  </Domain>
</CycloneDDS>
I built zenoh for arm64; if someone has a Raspberry Pi 4 with Docker lying around, please give docker run --init -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp shantanoodesai/zenoh a spin.
zenoh as well as Docker images can be built effortlessly for all the common platforms: amd64, arm64, i686, armv7 + armv6.
Hi all, just a heads up: the Docker Alpine image for zenoh (~45 MB) builds fine, but it fails to run. I keep getting:
Error Relocating: /usr/local/bin/zenohd: __register_atfork: symbol not found
Error Relocating: /usr/local/bin/zenohd: __res_init: symbol not found
Does someone have an idea regarding this incompatibility with Alpine Docker containers?
For Alpine I didn't try to target x86_64-unknown-linux-gnu.
But I managed to make it work targeting x86_64-unknown-linux-musl, defining RUSTFLAGS='-Ctarget-feature=-crt-static' (to force dynamic linking of the plugins/backends libs, as the default is static with MUSL) and having the following in my ~/.cargo/config:
[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"
Also, I'm running on macOS with musl-cross installed.
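Putting those pieces together, the build invocation would look roughly like this (a sketch under the assumptions above; paths and toolchain names may differ on your setup):

```shell
# Sketch: cross-compile zenoh for x86_64 musl from macOS, assuming
# musl-cross is installed and ~/.cargo/config points the musl target
# at the x86_64-linux-musl-gcc linker.
rustup target add x86_64-unknown-linux-musl
# Force dynamic linking so the plugin/backend libs can be loaded at runtime:
export RUSTFLAGS='-C target-feature=-crt-static'
cargo build --release --target x86_64-unknown-linux-musl
```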
Hmm, I changed the build to the aarch64-unknown-linux-musl target but I get:
error: cannot produce cdylib for `zenoh-plugin-storages v0.5.0-dev (/project/plugins/zenoh-plugin-storages)` as the target `aarch64-unknown-linux-musl` does not support these crate types
Maybe I am drifting in the wrong direction.
Success on Debian-slim (rpi4): docker run --init -d -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp shantanoodesai/zenoh:latest for anyone willing to try. Please note there are changes to the docker image, so I advise running it in detached mode with the -d option. If you wish to give the in-memory example a try:
docker run --init -d -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp shantanoodesai/zenoh:latest --mem-storage demo/example/**
Everything mentioned in the getting started page works with the -d option.
The eclipse/zenoh image (alpine x86_64 based) is only 5.5 MB. Did you build in release mode?
zenoh::net::Session with a declare_querying_subscriber() function. You can see an example of usage here. The QueryingSubscriber starts with a query on the same resource it subscribes to (but it could be configured to query another resource). The results of the query are merged/sorted/deduplicated with the live publications that may occur in parallel, before being delivered to the QueryingSubscriber user. At any time, the user can re-issue a query and the same behaviour will re-occur.
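That merge/deduplicate step can be sketched in plain Python (an illustration of the described behaviour, not the actual zenoh implementation; keys, timestamps, and values below are made up):

```python
# Sketch of QueryingSubscriber's merge/sort/deduplicate behaviour:
# each sample is (key, timestamp, value); for every key we keep the
# sample with the newest timestamp, whether it came from the initial
# query or from a live publication arriving in parallel.

def merge_samples(query_results, live_publications):
    latest = {}
    for key, ts, value in query_results + live_publications:
        if key not in latest or ts > latest[key][0]:
            latest[key] = (ts, value)
    # deliver in a deterministic (sorted-by-key) order
    return [(k, ts, v) for k, (ts, v) in sorted(latest.items())]

# Historical values returned by the initial query...
query = [("/demo/a", 1, "old-a"), ("/demo/b", 2, "old-b")]
# ...merged with a live publication that arrived in parallel:
live = [("/demo/a", 5, "new-a")]

# /demo/a keeps the newer live sample; /demo/b keeps the queried one.
print(merge_samples(query, live))
```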
Regarding musl cross compilation for aarch64: apparently the cdylib crate type was causing problems when compiling for aarch64-unknown-linux-musl, but on my local x86_64 machine I replaced the crates that had cdylib with staticlib and it seems to cross-compile. I now get *.a files as opposed to *.so files.

Hi, everyone! I just tried the example in "your first zenoh app". First, I launch the dockerized router:
$ podman run --init -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp eclipse/zenoh --mem-storage='/myhome/**'
Then, I launch the Python script that produces temperature readings, and I see:
Traceback (most recent call last):
File "/***/zenoh-server.py", line 18, in <module>
z = Zenoh({})
zenoh.ZError: IO error (Unable to bind udp port 224.0.0.224:7447) at /root/.cargo/registry/src/github.com-1ecc6299db9ec823/zenoh-0.5.0-beta.8/src/net/runtime/orchestrator.rs:350. - Caused by Address already in use (os error 98)
Everything is running locally. Am I doing something wrong? Shouldn't the peer find the router?
Adding "multicast_address": "224.0.0.224:7448" to the config works! Thanks, @Mallets
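In the 0.5 Python API used above, that works out to a one-line config change (a sketch; the session is otherwise configured as in the failing script):

```python
from zenoh import Zenoh  # zenoh 0.5 Python API, as used elsewhere in this thread

# The router already bound the default multicast address 224.0.0.224:7447,
# so let this peer do its multicast scouting on port 7448 instead:
z = Zenoh({"multicast_address": "224.0.0.224:7448"})
```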
udp/131.176.207.40:7447 or tcp/131.176.207.40:7447. A broker is just an entity mediating communications between two clients.
Hi @gardenest, you can define the topology by manually interconnecting the different peers with each other. To do so, you need to configure each zenoh session with the following:
Here is an example in Python:
from zenoh import Zenoh

z = Zenoh({"listener": "tcp/0.0.0.0:7447",
           "peer": "tcp/127.0.0.1:7448,tcp/127.0.0.1:7449",
           "multicast_scouting": "false",
           "peers_autoconnect": "false"})
$ cat config
multicast_scouting=false
peers_autoconnect=false
Then you can run zn_sub.py and zn_pub.py like this:
python3 examples/zenoh-net/zn_sub.py -c config -l tcp/0.0.0.0:7447
python3 examples/zenoh-net/zn_pub.py -c config -e tcp/127.0.0.1:7447