    Luca Cominardi
    @Mallets
    one quick test you could do is to have 2 publishers and 2 subscribers: one pair sending/receiving actual data and one pair sending/receiving probes at high rate (100-1000 Hz)
    Tom Burdick
    @teburd
    I'll try that out
    Luca Cominardi
    @Mallets
    this is to artificially increase the load and ‘trick’ the scheduler
    in any case, yes… that’s a big difference between an RTOS and a non-realtime OS :)
    Tom Burdick
    @teburd
    yeah, always a disappointment when going back and forth hah. I did see Zephyr support on the roadmap, that's pretty nice
    thank you all for the great info, going to create some experiments for myself, see how things look, really appreciate it
    very cool project
    kydos
    @kydos
    Sounds good @bfrog, let us know if you have any questions. For the rest, we always welcome ideas, and suggestions for new features / uses.
    BTW, the Zephyr support is available as part of zenoh-pico; that was a pull request from @esteve which was merged a few weeks ago.
    Tom Burdick
    @teburd
    neat!
    Tom Burdick
    @teburd
    amazing what a difference --release makes here for those zn_pub/sub_thr programs
    30k msgs/s in debug, 700k msgs/s in release
    some major compiler voodoo?
    everything inlined into a single call? like wow
    kydos
    @kydos
    @bfrog, FYI, we are about to merge a change that should further improve performance for small data. To make a long story short, we found out that Rust async futures tend to bloat the stack and consequently drop performance the deeper the async call stack becomes. We’ve done some refactoring to keep the stack size small and early measurements are extremely promising. The merge should happen sometime toward the end of next week. Thus stay tuned.
    As for debug vs release, our flags are set to make sure that debug compiles fast while release does global optimisation and CPU-specific code generation.
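As an illustrative sketch (these are not necessarily zenoh's actual settings), a Cargo release profile along these lines is how global optimisation is typically enabled:

```toml
# Hypothetical release profile; the project's real flags may differ.
[profile.release]
opt-level = 3       # full optimisation
lto = true          # link-time (whole-program) optimisation
codegen-units = 1   # optimise across the whole crate at once
```

CPU-specific code generation is then usually requested at build time, e.g. `RUSTFLAGS="-C target-cpu=native" cargo build --release`.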
    Tom Burdick
    @teburd
    nice, yeah I know the NATS client folks (same author as the sled database!) actually dropped using async as they found it didn't give them the benefits they were looking for
    that's different though... single socket client really
    would be interesting to see a nats comparison as well
    awesome
    kydos
    @kydos
    We ended up just constraining the async portion. If you are curious, you can take a look at this branch
    We’ve done measurements with NATS in the past, and we’ll do them again once we merge these changes. Two things to keep in mind. With NATS you have to explicitly flush from the user API; zenoh does the batching automatically. Additionally, our protocol has some higher-level primitives, so our routing code is a bit more complicated… If you consider all of that, our performance is not so bad. But I am really looking forward to seeing where we get with the new async/sync split.
    Tom Burdick
    @teburd
    is it possible to use tokio instead of async-std?
    seems like most things are pretty tied into async-std
    Luca Cominardi
    @Mallets
    async-std’s runtime is capable of coexisting with tokio… in fact some network-specific aspects (like the QUIC implementation) leverage tokio
    so, if you need to use some other tokio-based library on your side, just make sure to enable the ‘tokio*’ feature of async-std when compiling: https://docs.rs/async-std/1.9.0/async_std/index.html
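For example, a dependency declaration along these lines should be enough (the `tokio1` feature name matches async-std 1.9 with a tokio 1.x runtime; check the linked docs for the variant that fits your tokio version):

```toml
# Illustrative Cargo.toml fragment: enable async-std's tokio compatibility feature
[dependencies]
async-std = { version = "1.9", features = ["tokio1"] }
tokio = { version = "1", features = ["net", "rt"] }
```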
    Geoff
    @geoff_or:matrix.org [m]
    What ports do I need to open on a firewall and set up port forwarding for to get Zenoh to talk from one PC to another PC that is on a separate subnet behind a firewall/NAT setup?
    Luca Cominardi
    @Mallets
    By default it uses port 7447
    Geoff
    @geoff_or:matrix.org [m]
    UDP?
    Luca Cominardi
    @Mallets
    But if they are behind a firewall/NAT setup they will not be able to dynamically discover each other via the multicast scouting, so you need to point the zenoh peers at the right IP address
    TCP for connections (by default)
    Geoff
    @geoff_or:matrix.org [m]
    that's not a problem, I have a fixed network layout so setting peers manually is fine
    Luca Cominardi
    @Mallets
    UDP for discovery… if the firewall allows multicast messages to pass through, that should also work
    Geoff
    @geoff_or:matrix.org [m]
    Oh Zenoh uses TCP? I thought it was all UDP
    Luca Cominardi
    @Mallets
    ok, so TCP/7447 is the default one
    no no, zenoh is multi-protocol… by default it uses TCP for communication but it can also operate on top of UDP (best effort only), QUIC, TLS and Unix sockets
    Geoff
    @geoff_or:matrix.org [m]
    OK. I guess if it's using TCP by default then that's fine
    Luca Cominardi
    @Mallets
    by default goes over plain TCP
    Geoff
    @geoff_or:matrix.org [m]
    it'll be more reliable going through the NAT
    Luca Cominardi
    @Mallets
    in general yes
    Geoff
    @geoff_or:matrix.org [m]
    Thanks! I'll give it a go
    Luca Cominardi
    @Mallets
    Great, let us know if you have any problem!
    Julien Enoch
    @JEnoch
    Note that if you use the zenoh router and want to access its REST API through the NAT, you’ll also need to open TCP/8000
    Luca Cominardi
    @Mallets
    thanks @JEnoch , I forgot about that one
    Oliver Layer
    @OliLay
    Hi guys, a question about minimal overhead of a Zenoh data message. On zenoh.io it is stated that "the minimal wire overhead of a data message is 4 bytes.".
    Is this actually possible? I looked at some zenoh data messages in Wireshark, using the dissector, from which I calculated an overhead of 13 bytes. (I used the Python API to publish something)
    Are the 4 bytes maybe just for raw zenoh.net usage? (leaving out stuff like encoding, etc.)
    In particular, I wonder how this is solved for the SN field, since it alone takes 4 bytes in the frame.
    Luca Cominardi
    @Mallets
    Hi @OliLay , 4 bytes is in fact the minimal overhead achievable in zenoh data messages, however this requires a special configuration.
    Luca Cominardi
    @Mallets
    As you correctly identified, the SN field takes more bytes… let me explain briefly how it works:
    • An SN is encoded as variable-length (VLE). That means its footprint on the wire depends on its value: a 64-bit integer always takes 8 bytes when encoded as-is, but encoded as VLE the value 0 takes just 1 byte on the wire (even though it is a 64-bit integer).
    • To limit the number of bytes the SN can take, two zenoh endpoints can negotiate its resolution at session establishment. Given how VLE works, an SN resolution of 128 guarantees the SN always sticks to 1 byte max.
    By default, zenoh uses a 4-byte SN resolution to cope with high throughput: the larger the SN window, the more messages you can have in flight, which increases throughput. In short, limiting the SN resolution is very beneficial when operating over constrained networks at low throughput.
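The VLE idea above can be sketched as a standard 7-bits-per-byte encoding with a continuation bit (zenoh's actual wire format may differ in detail; this is just to illustrate why the value 0 costs 1 byte, and why a resolution of 128 caps the SN at 1 byte):

```rust
// Sketch of a variable-length encoding (VLE): 7 payload bits per byte,
// high bit set when more bytes follow (LEB128-style). Illustrative only.
fn vle_encode(mut v: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte); // last byte: continuation bit clear
            break;
        }
        out.push(byte | 0x80); // more bytes follow
    }
    out
}

fn main() {
    assert_eq!(vle_encode(0), vec![0x00]); // value 0: one byte, even for a u64
    assert_eq!(vle_encode(127).len(), 1);  // any SN below a resolution of 128 fits in 1 byte
    assert_eq!(vle_encode(128), vec![0x80, 0x01]); // larger values grow byte by byte
    assert_eq!(vle_encode(u64::MAX).len(), 10);    // worst case for 64 bits
    println!("ok");
}
```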