    Julien Enoch
    @JEnoch
    While waiting for a fix, can't you do your tests with a fixed IP (i.e. starting a router with -e tcp/<ip_of_other_router>:7447)?
    Luigi Rodorigo
    @lrodorigo

    Ok, I will test it later.
    So I deduce that multicast discovery between routers is still an 'experimental feature'.

    Just an observation:
    this is not so clear from the Getting Started section of the zenoh.io website...
    It would be better to state that explicitly; otherwise it seems that the "hello world" example between two routers, even in the simplest network configuration, is not working at all.

    OlivierHecart
    @OlivierHecart

    Hi @lrodorigo,
    You can connect multiple routers with each other manually using the -e option. For example, for 2 routers:

    host1> zenohd
    host2> zenohd -e tcp/host1:7447

    Router discovery is deactivated by default so that users keep full control over the router network topology.
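
    If you do want automatic router discovery, later messages in this thread mention a routers_autoconnect_multicast option set through zenohd's config file. A hypothetical sketch (the key name is taken from this discussion; the exact config file format and flag may differ):

    # zenoh.properties (assumed properties-style config enabling router discovery)
    routers_autoconnect_multicast=true

    host1> zenohd -c zenoh.properties
    host2> zenohd -c zenoh.properties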

    Luigi Rodorigo
    @lrodorigo
    Thanks Olivier, now it is clear to me.
    I was only pointing out that this should be made clearer in the Getting Started guide [here: https://zenoh.io/docs/getting-started/quick-test/]
    OlivierHecart
    @OlivierHecart
    Yep we need to improve that. Thanks for the advice.
    Luigi Rodorigo
    @lrodorigo
    Another question:
    in order to persist all data on each router, the storage must be re-created on each router after the zenohd process has started. Is that correct?
    OlivierHecart
    @OlivierHecart
    That's correct.
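    For instance, reusing the admin REST call that appears later in this thread, each router would get its own storage after its zenohd starts (a sketch; host names and the storage name are illustrative):

    host1> curl -X PUT -H 'content-type:application/properties' -d 'path_expr=/demo/**' http://host1:8000/@/router/local/plugin/storages/backend/memory/storage/my-storage
    host2> curl -X PUT -H 'content-type:application/properties' -d 'path_expr=/demo/**' http://host2:8000/@/router/local/plugin/storages/backend/memory/storage/my-storage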
    Luigi Rodorigo
    @lrodorigo
    Sorry for another question:
    What is the Zenoh behaviour with respect to the leading slash of a path?
    Are /demo/first and demo/first both allowed? If yes, are they both seen as the same path?
    Julien Enoch
    @JEnoch
    Both are allowed. In the zenoh API, demo/first is a Path relative to the Workspace prefix. If not specified, the default prefix of a Workspace is /.
    Luigi Rodorigo
    @lrodorigo
    Quite clear, many thanks.
    Julien Enoch
    @JEnoch
    @lrodorigo : Olivier just pushed a fix for the "routers_autoconnect_multicast + REST plugin" issue. You can pull and build master now, or wait ~1h for a new eclipse/zenoh:master image to be built and pushed to Docker Hub.
    @gardenest : if you use routers_autoconnect_multicast=true too, you should also pull the master branch for this fix.
    Luigi Rodorigo
    @lrodorigo

    Sorry for annoying you with really noob questions, but I am also having problems with the z_put.py/z_get.py/z_sub.py examples (from https://github.com/eclipse-zenoh/zenoh-python/blob/master/examples/zenoh/).

    In particular the following happens:
    1) I start the z_sub.py script
    2) I run z_put.py script (without additional parameters)
    3) z_sub.py correctly invokes the callback
    4) if I run z_get.py, no data are received at all.

    Openning session...
    New workspace...
    Get Data from '/demo/example/**'...
    
    Process finished with exit code 0

    Moreover (while the zenohd process with a [working] REST plugin is running):
    0) I start z_sub.py -e tcp/localhost:7447
    1) I run z_put.py -e tcp/localhost:7447
    2) I run curl -X PUT -d 'Hello Data!' http://localhost:8000/demo/example/test2
    3) The z_sub.py invokes the callback ONLY for the data sent from the z_put.py script (and not for the data sent by the REST API)
    4) The key put from z_put.py is retrieved via the API at http://localhost:8000/demo/** but the content is blank (while the content received by z_sub.py is, correctly, 'Put from Python!')

    Am I missing something?

    Julien Enoch
    @JEnoch
    For your first use case, you need to have a running zenoh router with an appropriate storage configured to store the publication and reply to the get (e.g. start the router with a memory storage: zenohd --mem-storage '/demo/**').
    Is that your case?
    Luigi Rodorigo
    @lrodorigo
    Ok, just tried with a running zenohd router (and a mem storage created by curl -X PUT -H 'content-type:application/properties' -d 'path_expr=/demo/example/**' http://localhost:8000/@/router/local/plugin/storages/backend/memory/storage/my-storage), but unfortunately the result is the same.
    Also tried to add -e tcp/localhost:7447 to z_put and z_get (though in this case multicast discovery should work anyway?)
    Julien Enoch
    @JEnoch
    If you installed zenoh-python via pip install eclipse-zenoh you might have compatibility issues with a router built from the master branch (sorry, we're in a large phase of changes since the last 0.5.0-beta.8 release, including in the protocol and APIs…).
    Can you please try to re-install it with:
    pip install https://github.com/eclipse-zenoh/zenoh-python/zipball/master
    Julien Enoch
    @JEnoch
    I just tried a similar scenario using router and zenoh-python from master branch, and it worked for me:
    1. zenohd --mem-storage '/demo/**'
    2. python3 ./z_sub.py
    3. python3 ./z_put.py => z_sub receives the publication
    4. curl -X PUT -d 'Hello Data!' http://localhost:8000/demo/example/test2 => z_sub receives the publication
    5. curl http://localhost:8000/demo/** => result is:
      [
      { "key": "/demo/example/zenoh-python-put", "value": "Put from Python!", "encoding": "text/plain", "time": "2021-07-07T13:48:04.130432996Z/D2D25874C4884368A11E63673229BA46" },
      { "key": "/demo/example/test2", "value": "Hello Data!", "encoding": "application/x-www-form-urlencoded", "time": "2021-07-07T13:48:08.151674997Z/D2D25874C4884368A11E63673229BA46" }
      ]
    Can you try the same ?
    To answer your question: -e tcp/localhost:7447 is not necessary for z_put and z_get since they are configured in peer mode by default and will discover the router via multicast.
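    So the explicit endpoint is only needed when multicast discovery is unavailable (as turns out to be the case with Docker later in this thread); a sketch mirroring the invocations used above:

    python3 ./z_sub.py -e tcp/localhost:7447
    python3 ./z_put.py -e tcp/localhost:7447
    python3 ./z_get.py -e tcp/localhost:7447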
    Luigi Rodorigo
    @lrodorigo
    After installing the Python library from the master branch, things are going better.
    Julien Enoch
    @JEnoch
    Glad to read that! :smiley:
    Luigi Rodorigo
    @lrodorigo
    If I run zenohd with --mem-storage '/demo/**' everything works. If I run the zenohd Docker container and then create the storage using curl -X PUT -H 'content-type:application/properties' -d 'path_expr=/demo/example/**' http://localhost:8000/@/router/local/plugin/storages/backend/memory/storage/my-storage, the GET request does not see the data sent from z_put.py (but it does see the data sent via HTTP PUT requests)
    kydos
    @kydos
    @lrodorigo it would be good if you could look at the admin information in both cases. In other words, look at the difference when you do a get on the URI: http://localhost:8000/@/router/local/plugin/storages/backend/memory/storage/**
    BTW, also notice that in the first case the storage path is /demo/** while in the second you are creating a storage on /demo/example/**.
    Thus, depending on what you're trying to put, it may not be surprising that it does not get stored. What is the key you are putting?
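    To illustrate the point with hypothetical keys: given one storage on /demo/example/** and another on /demo/**, a put under /demo/example/ is kept by both, while a put elsewhere under /demo/ is kept only by the /demo/** storage:

    curl -X PUT -d 'stored by both storages' http://localhost:8000/demo/example/foo
    curl -X PUT -d 'stored only by the /demo/** storage' http://localhost:8000/demo/other/foo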
    Julien Enoch
    @JEnoch
    @lrodorigo the issue with the Docker container might still be related to the "routers_autoconnect_multicast + REST plugin" issue mentioned above, which prevents the storages plugin from working properly. The new eclipse/zenoh:master image including Olivier's fix for this issue is now ready. Please pull it and re-do your test.
    Luigi Rodorigo
    @lrodorigo

    Yes, I noticed that, and actually the reported metadata of the two storages are correct:

    /@/router/FB96238589C74C1AAADBB8359A9A24A8/plugin/storages/backend/memory/storage/my-storage
    {"path_expr":"/demo/example/**"}

    and

    /@/router/1FE3B25A0F674180A0424BEFC8F46314/plugin/storages/backend/memory/storage/mem-storage-1
    {"path_expr":"/demo/**"}

    Anyway, I am using z_put.py with the default --path parameter (/demo/example/zenoh-python-put), and I am publishing with curl on /demo/example/zenoh-python-put.

    Thanks @JEnoch, I am going to pull the new Docker master image.
    Anyway, I am not using the routers_autoconnect_multicast option (the current config file is empty).
    OlivierHecart
    @OlivierHecart
    @lrodorigo what could cause the issue is bad multicast discovery between z_put.py and the router inside the Docker container (we experienced problems with Docker and multicast in the past). Could you try with z_put.py -e tcp/localhost:7447?
    Luigi Rodorigo
    @lrodorigo
    Ok, but I am using the --net=host option on the Docker container.
    Oh, I found it!
    If I add the --privileged option to the container, the REST API receives the data sent from z_put.py.
    Julien Enoch
    @JEnoch
    Wow, I didn’t know this one… Are you running Docker on Linux? With sudo?
    Luigi Rodorigo
    @lrodorigo
    Docker on Linux.
    The Docker daemon is started by systemd, and my user is a member of the docker group (so I don't have to launch the Docker client using sudo).
    Is there some shared memory access?
    Julien Enoch
    @JEnoch
    Not by default. zenoh supports shared memory transport, but it’s not active by default.
    Luca Cominardi
    @Mallets
    A minor correction: the shared memory transport is active by default, but data needs to be created in a shared memory region.
    Luigi Rodorigo
    @lrodorigo
    Ok... so you may want to investigate it (--privileged is really a mess from the security point of view).
    Luca Cominardi
    @Mallets
    If you take the basic examples that you can find here, you can see that there is a zn_pub_shm example that allows publishing over shared memory.
    Luigi Rodorigo
    @lrodorigo
    Ok.
    Anyway, the issue is related to multicast discovery.
    If I specify -e tcp/localhost:7447 the REST API works correctly. So --net=host is not enough, and --privileged is required to enable discovery from the Docker container... is there some L2/L3 socket usage? (But in that case even the standalone executable would require the CAP_NET_RAW capability.)
    OlivierHecart
    @OlivierHecart
    zenoh uses the socket2 and async-std crates for UDP socket creation/manipulation and the pnet crate to discover local network interfaces.
    I don't believe that socket2 or async-std do anything fancy, but maybe pnet does. It may be worth checking whether our Docker/multicast related issues are related to network interface discovery or not.
    Julien Enoch
    @JEnoch
    The answer might be here. Can you try to replace --privileged with --cap-add NET_BROADCAST and/or --cap-add NET_ADMIN?
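    A hypothetical way to narrow it down is to reuse the docker run command from the test below with one capability at a time (untested):

    docker run --net host --init --cap-add NET_BROADCAST eclipse/zenoh:master --mem-storage '/demo/**'
    docker run --net host --init --cap-add NET_ADMIN eclipse/zenoh:master --mem-storage '/demo/**'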
    Julien Enoch
    @JEnoch

    @lrodorigo : I tried to reproduce the issue with:

    • an Ubuntu 20.04 VM (my host is a Mac)
    • Docker 20.10.3 within, the user being in the "docker" group
    • ran docker run --net host --init eclipse/zenoh:master --mem-storage '/demo/**'
    • ran z_put.py + z_get.py on my Mac

    This worked, without the --privileged option.
    Are you running a different Docker version ?

    halfbit
    @halfbit:matrix.org
    super cool on the querying subscriber, will have to try it out
    Luigi Rodorigo
    @lrodorigo

    I don't know what's happened, but now everything is working as expected even without --privileged. But I kept the PC shut down during the night.

    I'll inform you if the strange behaviour occurs again. Many thanks for your precious support.

    halfbit
    @halfbit:matrix.org
    Is the router and how it works documented somewhere? Any published benchmarks?
    OlivierHecart
    @OlivierHecart
    What kind of benchmarks are you looking for ?
    halfbit
    @halfbit:matrix.org
    That's the real question here, I guess, isn't it hah
    I think I'd have to concoct the network setup for my benchmark to really know...