    Luigi Rodorigo
    @lrodorigo
    the two peers are not able to discover each other
    Julien Enoch
    @JEnoch
    By "peers" you mean "routers", right? Are you only starting 2 zenohd processes, or also zenoh applications (i.e. using a zenoh API)?
    I mean the eclipse/zenoh container you run is a zenoh router: internally it runs the zenohd process.
    Luigi Rodorigo
    @lrodorigo
    Of course, sorry for the misleading term.
    Yes, I meant router; it is running the zenohd process.
    Each host is running the same eclipse/zenoh container with the mentioned docker run command.
    Julien Enoch
    @JEnoch
    OK, thanks for the clarification. Actually, I see that both the routers_autoconnect_multicast and routers_autoconnect_gossip options were introduced after the 0.5.0-beta.8 release. Thus, they are not yet supported in the eclipse/zenoh:latest image.
    I kicked off a build of the master branch that supports those options. You should be able to run an eclipse/zenoh:master container soon. I'll let you know as soon as it's available.
    Luigi Rodorigo
    @lrodorigo
    Ok, now I remember that in the previous test I was using a build made from the master branch,
    but it was not loading the REST plugin (because I didn't copy the .so files to the other host), so I started using the Docker container...
    Luigi Rodorigo
    @lrodorigo

    Are these two options actually needed in order to enable multicast discovery?

    routers_autoconnect_multicast=true
    routers_autoconnect_gossip=true

    They seem to be quite undocumented (I found them only in this chat).

    Julien Enoch
    @JEnoch
    If all your routers are using routers_autoconnect_multicast=true and multicast is working for all, you don’t need routers_autoconnect_gossip=true.
    Julien Enoch
    @JEnoch
    @lrodorigo : the eclipse/zenoh:master image is now ready.
    Unfortunately, after a few tests, I discovered that using routers_autoconnect_multicast=true causes a side effect that prevents the REST plugin from starting correctly…
    We will investigate this and come back to you soon.
    While waiting for a fix, could you do your tests with a fixed IP (i.e. starting a router with -e tcp/<ip_of_other_router>:7447)?
    Luigi Rodorigo
    @lrodorigo

    Ok, I will test it later.
    So, I deduce that the multicast discovery of the routers is still an 'experimental feature'.

    Just an observation:
    this is not so clear from the Getting Started section of the zenoh.io website...
    it would be better to state that clearly; otherwise it seems that the "hello world" example between two routers, even in the simplest network configuration, does not work at all.

    OlivierHecart
    @OlivierHecart

    Hi @lrodorigo,
    You can connect multiple routers with each other manually using the -e option. For example, for 2 routers:

    host1> zenohd
    host2> zenohd -e tcp/host1:7447

    Router discovery is deactivated by default so that users keep full control over the router network topology.

    Luigi Rodorigo
    @lrodorigo
    Thanks Olivier, now it is clear to me.
    I was only reporting that it should be more clear on the Getting Started guide [ here: https://zenoh.io/docs/getting-started/quick-test/ ]
    OlivierHecart
    @OlivierHecart
    Yep, we need to improve that. Thanks for the feedback.
    Luigi Rodorigo
    @lrodorigo
    Another question:
    in order to persist data on each router, the storage must be recreated on each router after the zenohd process has started. Is that correct?
    OlivierHecart
    @OlivierHecart
    That's correct.
    Luigi Rodorigo
    @lrodorigo
    Sorry for another question:
    What is the zenoh behaviour with respect to the leading slash of a path?
    Are /demo/first and demo/first both allowed? If yes, are both seen as the same path?
    Julien Enoch
    @JEnoch
    Both are allowed. Using the zenoh API demo/first is a relative Path to the Workspace prefix. If not specified, the default prefix of a Workspace is /.
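    A quick way to picture this rule (resolve_path below is a hypothetical helper of ours, not a zenoh-python function; it just mimics the "relative path + Workspace prefix" resolution Julien describes):

```python
# Hypothetical helper (NOT part of the zenoh API) mimicking how a
# relative path resolves against a Workspace prefix (default "/").
def resolve_path(path: str, workspace_prefix: str = "/") -> str:
    if path.startswith("/"):
        return path  # absolute path: used as-is
    prefix = workspace_prefix.rstrip("/")
    return f"{prefix}/{path}"

print(resolve_path("demo/first"))      # /demo/first (default prefix "/")
print(resolve_path("/demo/first"))     # /demo/first (already absolute)
print(resolve_path("first", "/demo"))  # /demo/first (prefix "/demo")
```

    So with the default Workspace prefix, demo/first and /demo/first end up designating the same path.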
    Luigi Rodorigo
    @lrodorigo
    Quite clear, many thanks.
    Julien Enoch
    @JEnoch
    @lrodorigo : Olivier just pushed a fix for the "routers_autoconnect_multicast + REST plugin" issue. You can pull and build master now, or wait ~1h for a new eclipse/zenoh:master image to be built and pushed to Docker Hub.
    @gardenest : if you use routers_autoconnect_multicast=true too, you should also pull the master branch for this fix.
    Luigi Rodorigo
    @lrodorigo

    Sorry to bother you with really noob questions, but I am also having problems with the z_put.py/z_get.py/z_sub.py examples (from https://github.com/eclipse-zenoh/zenoh-python/blob/master/examples/zenoh/).

    In particular the following happens:
    1) I start the z_sub.py script
    2) I run z_put.py script (without additional parameters)
    3) z_sub.py correctly invokes the callback
    4) If I run z_get.py, no data are received at all:

    Openning session...
    New workspace...
    Get Data from '/demo/example/**'...
    
    Process finished with exit code 0

    Moreover (while the zenohd process with a [working] REST plugin is running):
    0) I start z_sub.py -e tcp/localhost:7447
    1) I run z_put.py -e tcp/localhost:7447
    2) I run curl -X PUT -d 'Hello Data!' http://localhost:8000/demo/example/test2
    3) z_sub.py invokes the callback ONLY for the data sent from the z_put.py script (and not for the data sent via the REST API)
    4) The key put from z_put.py is retrieved via the API at http://localhost:8000/demo/** but its content is blank (while the content received by z_sub.py is, correctly, 'Put from Python!')

    Am I missing something?

    Julien Enoch
    @JEnoch
    For your first use case, you need to have a running zenoh router with an appropriate storage configured to store the publication and reply to the get (e.g. start the router with a memory storage: zenohd --mem-storage '/demo/**').
    Is that your case?
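    As a toy illustration of why the get returns nothing without a storage (this is a simplified model of ours, not the zenoh API: subscribers receive live publications, while a get is answered by storages that recorded earlier puts):

```python
# Toy model (NOT the zenoh API): subscribers see live publications,
# while a get is answered only by storages that recorded the put.
class ToyRouter:
    def __init__(self):
        self.callbacks = []  # subscriber callbacks
        self.storages = []   # each storage is a plain dict

    def subscribe(self, callback):
        self.callbacks.append(callback)

    def add_storage(self):
        self.storages.append({})

    def put(self, key, value):
        for cb in self.callbacks:
            cb(key, value)      # live delivery to subscribers
        for store in self.storages:
            store[key] = value  # recorded for later gets

    def get(self, key):
        return [s[key] for s in self.storages if key in s]

router = ToyRouter()
received = []
router.subscribe(lambda k, v: received.append(v))

# No storage yet: the subscriber sees the put, but the get finds nothing.
router.put("/demo/example/test", "hello")
print(received)                          # ['hello']
print(router.get("/demo/example/test"))  # []

# With a storage configured, the same put is recorded and the get succeeds.
router.add_storage()
router.put("/demo/example/test", "hello again")
print(router.get("/demo/example/test"))  # ['hello again']
```

    This mirrors the symptom above: z_sub.py saw the publication, but z_get.py had no storage to answer it.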
    Luigi Rodorigo
    @lrodorigo
    Ok, I just tried with a running zenohd router (and a mem storage created by curl -X PUT -H 'content-type:application/properties' -d 'path_expr=/demo/example/**' http://localhost:8000/@/router/local/plugin/storages/backend/memory/storage/my-storage), but unfortunately the result is the same.
    I also tried adding -e tcp/localhost:7447 to z_put and z_get (though shouldn't the multicast discovery work in this case?)
    Julien Enoch
    @JEnoch
    If you installed zenoh-python via pip install eclipse-zenoh you might have compatibility issues with a router built from the master branch. (Sorry, we're in a large phase of changes since the last 0.5.0-beta.8 release, including changes to the protocol and APIs…)
    Can you please try to re-install it with:
    pip install https://github.com/eclipse-zenoh/zenoh-python/zipball/master
    Julien Enoch
    @JEnoch
    I just tried a similar scenario using router and zenoh-python from master branch, and it worked for me:
    1. zenohd --mem-storage '/demo/**'
    2. python3 ./z_sub.py
    3. python3 ./z_put.py => z_sub receives the publication
    4. curl -X PUT -d 'Hello Data!' http://localhost:8000/demo/example/test2 => z_sub receives the publication
    5. curl http://localhost:8000/demo/** => result is:
      [
      { "key": "/demo/example/zenoh-python-put", "value": "Put from Python!", "encoding": "text/plain", "time": "2021-07-07T13:48:04.130432996Z/D2D25874C4884368A11E63673229BA46" },
      { "key": "/demo/example/test2", "value": "Hello Data!", "encoding": "application/x-www-form-urlencoded", "time": "2021-07-07T13:48:08.151674997Z/D2D25874C4884368A11E63673229BA46" }
      ]
    Can you try the same ?
    To answer your question: -e tcp/localhost:7447 is not necessary for z_put and z_get since they are configured in peer mode by default and will do multicast discovery and discover the router.
    Luigi Rodorigo
    @lrodorigo
    After installing the Python library from the master branch, things are going better.
    Julien Enoch
    @JEnoch
    Glad to read that ! :smiley:
    Luigi Rodorigo
    @lrodorigo
    If I run zenohd with zenohd --mem-storage '/demo/**', everything works. But if I run the zenohd Docker container and then create the storage using curl -X PUT -H 'content-type:application/properties' -d 'path_expr=/demo/example/**' http://localhost:8000/@/router/local/plugin/storages/backend/memory/storage/my-storage, the GET request does not see the data sent from z_put.py (though it does see the data sent via PUT HTTP requests).
    kydos
    @kydos
    @lrodorigo it would be good if you could look at the admin information in both cases. In other words, look at what is different when you do a get on the URI: http://localhost:8000/@/router/local/plugin/storages/backend/memory/storage/**
    BTW, also notice that in the first case the storage path is /demo/** while in the second you are creating a storage on /demo/example/**.
    Thus, depending on what you're trying to put, it may not be surprising that it does not get stored. What is the key you are putting?
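    For completeness, here is a rough sketch of how such path expressions match keys (expr_to_regex and matches are our own helpers, not the zenoh API; in zenoh path expressions, * matches a single path segment and ** matches any number of segments):

```python
import re

# Rough sketch of zenoh-style path expression matching (our own helper,
# NOT the zenoh API): '*' matches one path segment, '**' matches any
# number of segments.
def expr_to_regex(expr: str) -> "re.Pattern":
    out = []
    i = 0
    while i < len(expr):
        if expr.startswith("**", i):
            out.append(".*")      # any number of segments
            i += 2
        elif expr[i] == "*":
            out.append("[^/]*")   # a single segment (no '/')
            i += 1
        else:
            out.append(re.escape(expr[i]))
            i += 1
    return re.compile("".join(out) + "$")

def matches(expr: str, key: str) -> bool:
    return bool(expr_to_regex(expr).match(key))

key = "/demo/example/zenoh-python-put"   # z_put.py's default path
print(matches("/demo/**", key))          # True
print(matches("/demo/example/**", key))  # True
print(matches("/demo/*", key))           # False (two segments below /demo)
```

    With these rules, z_put.py's default key /demo/example/zenoh-python-put falls under both /demo/** and /demo/example/**, so either storage should cover it.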
    Julien Enoch
    @JEnoch
    @lrodorigo the issue with the docker container might still be related to the "routers_autoconnect_multicast + REST plugin" issue mentioned above, which prevents the storages plugin from working properly. The new eclipse/zenoh:master image including Olivier's fix for this issue is now ready. Please pull it and re-do your test.
    Luigi Rodorigo
    @lrodorigo

    Yes, I noticed that, and actually the reported metadata of the two storages are correct:

    /@/router/FB96238589C74C1AAADBB8359A9A24A8/plugin/storages/backend/memory/storage/my-storage
    {"path_expr":"/demo/example/**"}

    and

    /@/router/1FE3B25A0F674180A0424BEFC8F46314/plugin/storages/backend/memory/storage/mem-storage-1
    {"path_expr":"/demo/**"}

    Anyway I am using z_put.py with the default --path parameter (/demo/example/zenoh-python-put), and I am publishing with curl on /demo/example/zenoh-python-put

    Thanks @JEnoch, I am going to pull the new docker master image.
    Note that I am not using the routers_autoconnect_multicast option (the current config file is empty).
    OlivierHecart
    @OlivierHecart
    @lrodorigo what could cause the issue is bad multicast discovery between z_put.py and the router inside the docker container (we have experienced problems with docker and multicast in the past). Could you try with z_put.py -e tcp/localhost:7447?
    Luigi Rodorigo
    @lrodorigo
    Ok, but I am using the --net=host option on the docker container
    Oh, I found it!
    If I add the --privileged option to the container, the REST API receives the data sent from z_put.py
    Julien Enoch
    @JEnoch
    Wow, I didn't know this one… Are you running Docker on Linux? With sudo?
    Luigi Rodorigo
    @lrodorigo
    Docker on linux.
    The docker daemon is started by systemd, and my user is a member of the docker group (so I don't have to launch the docker client using sudo)
    Is there some shared memory access?
    Julien Enoch
    @JEnoch
    Not by default. zenoh supports shared memory transport, but it’s not active by default.
    Luca Cominardi
    @Mallets
    A minor correction, shared memory transport is active by default but data needs to be created in a shared memory region
    Luigi Rodorigo
    @lrodorigo
    ok... so it may be worth investigating (--privileged is really a mess from a security point of view)
    Luca Cominardi
    @Mallets
    if you take the basic examples that you can find here, you can see that there is a zn_pub_shm example that allows publishing over shared memory