`arm64`/`x86` platforms without having to maintain distinct Dockerfiles for each platform.
For the `*.so` you built this way, which maximal GLIBC version do they require (check with
`ldd -v zenohd *.so | grep GLIBC`)? I managed to cross-build binaries for
`armv7l` using a specific Dockerfile, but unfortunately the resulting
`libzplugin_storages.so` requires GLIBC 2.29, which is greater than the default one on Raspberry Pi OS (2.28).
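The GLIBC check above can be wrapped in a small helper; this is a hedged sketch (the `max_glibc` name is mine, and the binary names in the usage comments are illustrative). The function is pure text processing, so the output of either `ldd -v` or `objdump -T` can be piped in:

```shell
# Sketch: print the highest GLIBC_x.y symbol version a binary depends on.
# Useful before deploying to Raspberry Pi OS, which ships GLIBC 2.28.
max_glibc() {
  grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1
}

# Illustrative usage against the cross-built artifacts:
#   ldd -v zenohd | max_glibc
#   objdump -T libzplugin_storages.so | max_glibc
```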
Hello @gardenest, this is indeed strange. You should see all reachable routers and peers with `zn_scout`.
There is a difference between `zn_info` and `zn_scout` that could explain this behavior:
So maybe some of your peers are not reachable through multicast but are discovered by `zn_info` and other peers through gossip discovery (probably via the router).
```rust
use zenoh::net::*;

let mut config = config::peer();
config.insert(config::ZN_MULTICAST_ADDRESS_KEY, "126.96.36.199:7448".to_string());
let session = open(config).await.unwrap();
```
`aarch64` for a Pi4 (2GB) and I started the router
`./zenohd` and followed the steps in the documentation for setting up a local memory storage and tried putting
`hello world` via the API. Upon querying the data I get
`[ ]` although the paths are the same.
`--mem-storage /demo/example/**` parameter for
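For reference, one way to reproduce the put/query flow above is through zenoh's REST plugin (port 8000 in the docker commands of this thread). The `rest_url` helper below is a hypothetical convenience of mine, not part of zenoh:

```shell
# Hypothetical helper: build a URL against the zenoh REST plugin,
# assumed to listen on localhost:8000 as in the docker runs above.
ZENOH_REST="${ZENOH_REST:-http://localhost:8000}"

rest_url() {
  printf '%s%s\n' "$ZENOH_REST" "$1"
}

# With zenohd started with --mem-storage '/demo/example/**', one would then:
#   curl -X PUT -d 'hello world' "$(rest_url /demo/example/test)"
#   curl "$(rest_url '/demo/example/**')"
```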
RUST_LOG=debug ./target/release/zenoh-bridge-dds -m peer
ros2 run examples_rclcpp_minimal_publisher publisher_member_function
```xml
<?xml version="1.0" encoding="UTF-8"?>
<CycloneDDS xmlns="https://cdds.io/config"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd">
  <Domain id="any">
    <Discovery>
      <ParticipantIndex>auto</ParticipantIndex>
      <MaxAutoParticipantIndex>30</MaxAutoParticipantIndex>
    </Discovery>
  </Domain>
</CycloneDDS>
```
`arm64` and if someone has a Raspberry Pi 4 with Docker lying around, please give
`docker run --init -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp shantanoodesai/zenoh` a spin.
`zenoh` as well as Docker images can be built effortlessly for all possible platforms
Hi all, just a heads up. The Docker Alpine image for zenoh (~45MB) builds well, however it is not able to run because I keep getting
```
Error Relocating: /usr/local/bin/zenohd: __register_atfork: symbol not found
Error Relocating: /usr/local/bin/zenohd: __res_init: symbol not found
```
Does someone have an idea regarding this incompatibility with Alpine Docker containers?
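`__register_atfork` and `__res_init` appear to be glibc-internal symbols, so the errors suggest a glibc-linked `zenohd` running against Alpine's musl. A hedged way to check which loader a binary expects (the `loader_flavor` helper name is mine):

```shell
# Classify the dynamic loader an ELF binary requests: glibc's loader is
# named ld-linux*, musl's ld-musl*. Feed in the "Requesting program
# interpreter" line printed by `readelf -l <bin>`.
loader_flavor() {
  case "$1" in
    *ld-musl*)  echo musl ;;
    *ld-linux*) echo glibc ;;
    *)          echo unknown ;;
  esac
}

# Illustrative usage:
#   loader_flavor "$(readelf -l /usr/local/bin/zenohd | grep -i interpreter)"
```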
For Alpine I didn’t try to target
But I managed to make it work targeting
`RUSTFLAGS='-Ctarget-feature=-crt-static'` (to force dynamic linking of the plugin/backend libs, as the default is static with MUSL) and having the following in my
```toml
[target.x86_64-unknown-linux-musl]
linker = "x86_64-linux-musl-gcc"
```
Also I'm running on macOS with musl-cross installed.
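Putting that together, a sketch of the build environment described above; the `cargo` invocation is left commented because it requires the full musl-cross toolchain (assumed to provide `x86_64-linux-musl-gcc`, per the linker config):

```shell
# Force dynamic linking of the C runtime so cdylib plugins can be
# produced with a *-musl target (static crt is the MUSL default).
export RUSTFLAGS='-Ctarget-feature=-crt-static'

# Then, with the linker entry above in .cargo config:
#   cargo build --release --target x86_64-unknown-linux-musl
```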
Hmm, I changed the build for the
`aarch64-unknown-linux-musl` target but I get
error: cannot produce cdylib for `zenoh-plugin-storages v0.5.0-dev (/project/plugins/zenoh-plugin-storages)` as the target `aarch64-unknown-linux-musl` does not support these crate types
Maybe I am drifting in the wrong direction
Success on Debian-slim (rpi4)
`docker run --init -d -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp shantanoodesai/zenoh:latest` for anyone willing to try. Please note there are changes to the Docker image, so I advise running it in detached mode with
`-d`. If you wish to give the in-memory example a try:
docker run --init -d -p 7447:7447/tcp -p 7447:7447/udp -p 8000:8000/tcp shantanoodesai/zenoh:latest --mem-storage demo/example/**
Everything mentioned in the getting started page works.