    Tom Burdick
    @teburd
    30k msgs/s in debug, 700k msgs/s in release
    some major compiler voodoo?
    everything inlined into a single call? like wow
    kydos
    @kydos
    @bfrog, FYI, we are about to merge a change that should further improve performance for small data. To make a long story short, we found out that Rust async futures tend to bloat the stack, and as a consequence performance drops the deeper the async call stack becomes. We’ve done some refactoring to keep the stack size small, and early measurements are extremely promising. The merge should happen sometime toward the end of next week, so stay tuned.
    For what concerns debug vs release: our flags are set to make sure that debug compiles faster, while release does global optimisation and CPU-specific code generation.
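The future bloat kydos describes can be seen in a self-contained sketch (illustrative only, not zenoh code): every local that lives across an `.await` is stored in the async fn's state machine, and awaiting a callee embeds the callee's entire future, so deep async call chains produce large futures.

```rust
use std::mem::size_of_val;

async fn leaf() {}

async fn level1() -> u8 {
    let buf = [1u8; 256]; // lives across the await => stored in the future
    leaf().await;
    buf[0]
}

async fn level2() -> u8 {
    let extra = [1u8; 64]; // this level's own state...
    level1().await + extra[0] // ...plus the whole embedded level1 future
}

fn main() {
    // Futures are inert values; measuring their size needs no runtime.
    println!("level1 future: {} bytes", size_of_val(&level1()));
    println!("level2 future: {} bytes", size_of_val(&level2()));
    assert!(size_of_val(&level1()) >= 256);
    assert!(size_of_val(&level2()) >= size_of_val(&level1()));
}
```

Each additional async layer can only grow the composed future, which is consistent with the refactoring described above: keep the async portion shallow and fenced off.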
    Tom Burdick
    @teburd
    nice, yeah I know the nats client folks (same author as sled database!) actually dropped using async as they found it didn't give them the benefits they were looking for
    that's different though... single socket client really
    would be interesting to see a nats comparison as well
    awesome
    kydos
    @kydos
    We ended up just constraining the async portion. If you are curious, you can take a look at this branch
    We’ve done measurements with NATS in the past, and we’ll do them again once we merge these changes. Two things to keep in mind: with NATS you have to explicitly flush from the user API, while zenoh does the batching automatically. Additionally, our protocol has some higher-level primitives, thus our routing code is a bit more complicated… If you consider all of that, our performance is not so bad. But I am really looking forward to seeing where we get with the new async/sync split.
    Tom Burdick
    @teburd
    is it possible to use tokio instead of async-std?
    seems like most things are pretty tied into async-std
    Luca Cominardi
    @Mallets
    the async-std runtime is capable of coexisting with tokio… in fact some network-specific aspects (like the QUIC implementation) leverage tokio
    so, if you need to use some other tokio-based library on your side, just make sure to enable the ‘tokio*’ feature of async-std when compiling: https://docs.rs/async-std/1.9.0/async_std/index.html
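Concretely, per the async-std 1.9 docs linked above, that compatibility is enabled through a Cargo feature. A sketch of the dependency declaration (the `tokio02` feature name is taken from the async-std 1.9 feature list and targets tokio 0.2; check the docs for the feature matching your tokio version):

```toml
[dependencies]
# async-std with the tokio 0.2 compatibility runtime enabled
async-std = { version = "1.9", features = ["tokio02"] }
```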
    Geoff
    @geoff_or:matrix.org [m]
    What ports do I need to open on a firewall and set up port forwarding for to get Zenoh to talk from one PC to another PC that is on a separate subnet behind a firewall/NAT setup?
    Luca Cominardi
    @Mallets
    By default it uses port 7447
    Geoff
    @geoff_or:matrix.org [m]
    UDP?
    Luca Cominardi
    @Mallets
    But if they are behind a firewall/NAT setup they will not be able to dynamically discover each other via the multicast scouting, so you need to point the zenoh peers at the right IP address
    TCP for connections (by default)
    Geoff
    @geoff_or:matrix.org [m]
    that's not a problem, I have a fixed network layout so setting peers manually is fine
    Luca Cominardi
    @Mallets
    UDP for discovery… if the firewall allows multicast messages to pass through, that should also work
    Geoff
    @geoff_or:matrix.org [m]
    Oh Zenoh uses TCP? I thought it was all UDP
    Luca Cominardi
    @Mallets
    ok, so TCP/7447 is the default one
    no no, zenoh is multi-protocol… by default it uses TCP for communication, but it can also operate on top of UDP (best effort only), QUIC, TLS and Unix sockets
    Geoff
    @geoff_or:matrix.org [m]
    OK. I guess if it's using TCP by default then that's fine
    Luca Cominardi
    @Mallets
    by default goes over plain TCP
    Geoff
    @geoff_or:matrix.org [m]
    it'll be more reliable going through the NAT
    Luca Cominardi
    @Mallets
    in general yes
    Geoff
    @geoff_or:matrix.org [m]
    Thanks! I'll give it a go
    Luca Cominardi
    @Mallets
    Great, let us know if you have any problem!
    Julien Enoch
    @JEnoch
    Note that if you use the zenoh router and want to access its REST API through the NAT, you’ll also need to open TCP/8000
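Pulling together the ports mentioned in this thread, a sketch of the firewall openings (iptables syntax is illustrative; adapt to your firewall):

```shell
# zenoh peer/router sessions: TCP/7447 by default
iptables -A INPUT -p tcp --dport 7447 -j ACCEPT
# router REST API through the NAT: TCP/8000 (per @JEnoch's note)
iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
# UDP is used for multicast scouting; its port is not given in this thread,
# and scouting generally won't traverse NAT anyway, so in this setup the
# peers are configured with explicit addresses instead.
```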
    Luca Cominardi
    @Mallets
    thanks @JEnoch , I forgot about that one
    Oliver Layer
    @OliLay
    Hi guys, a question about minimal overhead of a Zenoh data message. On zenoh.io it is stated that "the minimal wire overhead of a data message is 4 bytes.".
    is this actually possible? I looked at some zenoh data messages in Wireshark, using the dissector, and calculated an overhead of 13 bytes. (I used the Python API to publish something.)
    Are the 4 bytes maybe just for raw zenoh.net usage? (leaving out stuff like encoding, etc.)
    In particular, I wonder how this is solved for the SN field, since it alone takes 4 bytes in the frame.
    Luca Cominardi
    @Mallets
    Hi @OliLay, 4 bytes is in fact the minimal overhead achievable in zenoh data messages; however, this requires a special configuration.
    Luca Cominardi
    @Mallets
    As you correctly identified, the SN field takes more bytes… let me explain briefly how it works:
    • An SN is encoded as variable-length (VLE). That means that its footprint on the wire depends on its value. A 64-bit integer always takes 8 bytes when encoded as-is; when encoded as VLE, the size on the wire depends on the value. E.g., encoding the value 0 always takes 1 byte on the wire (even if it is a 64-bit integer).
    • To limit the number of bytes the SN can take, two zenoh endpoints can negotiate its resolution at session establishment. Given how VLE works, an SN resolution of 128 guarantees the SN always fits in 1 byte.
    By default, zenoh uses a 4-byte resolution for the SN. This is to cope with high throughput: the larger the SN window, the more messages you can have in flight, and hence the higher the throughput. So, in short, limiting the SN resolution is very beneficial when operating over constrained networks at low throughput.
    But this is only one piece of the story. Configuring the SN in this way limits the wire overhead to 2 bytes. So, where are the other 2 bytes?
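The size behavior described above can be sketched with a LEB128-style encoder (an assumption for illustration; zenoh's actual VLE format may differ in details): 7 value bits per byte, with the high bit flagging a continuation.

```rust
// LEB128-style variable-length encoding sketch: small values take
// few bytes on the wire regardless of the integer's in-memory width.
fn vle_encode(mut v: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (v & 0x7f) as u8; // low 7 bits
        v >>= 7;
        if v == 0 {
            out.push(byte); // last byte: high bit clear
            break;
        }
        out.push(byte | 0x80); // more bytes follow
    }
    out
}

fn main() {
    assert_eq!(vle_encode(0).len(), 1);   // value 0: 1 byte, even for a u64
    assert_eq!(vle_encode(127).len(), 1); // resolution 128 => always 1 byte
    assert_eq!(vle_encode(128).len(), 2);
    // Under this scheme, a 4-byte SN covers values up to 2^28 - 1
    assert_eq!(vle_encode((1u64 << 28) - 1).len(), 4);
    println!("all VLE size checks passed");
}
```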
    Luca Cominardi
    @Mallets
    This is in the data message itself and how resources are represented:
    • In zenoh, each resource is represented by a key, in the form of a URI. E.g. /home/sensor/temperature
    • The key can be arbitrarily long. This is very handy for the user, who is completely free to define their own key structure, as complex/nested as they want.
    • However, transmitting the whole key is not wire-efficient at all. For this reason, zenoh has a mechanism to dynamically map the complete string form of the key to a more compact integer form, simply called a resource ID. This gives us a wire-efficient way of identifying resources using simple integers sent on the wire. The resource ID is encoded as VLE, hence the possibility of using only 1 byte.
    • To use this wire-efficient representation, you may need to use the declare_resource() primitive in zenoh, which does exactly that under the hood.
    So, summarizing, 4 bytes is the minimal overhead that zenoh can achieve thanks to:
    1) SN resolution negotiated and limited to 1 byte. This might have an impact on the achievable throughput due to the maximum number of messages that can be in flight at the same time.
    2) The efficient mapping between string and integer resources on the wire.
    @OliLay I hope this clarifies your doubts
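The string-to-integer mapping described above can be mocked in a few lines. This is a hypothetical sketch, not zenoh's implementation; `declare_resource` here only mimics the role of the zenoh primitive of the same name, to show the wire-size win.

```rust
use std::collections::HashMap;

// Mock of a resource table mapping long string keys to compact integer IDs.
struct ResourceTable {
    next_id: u64,
    ids: HashMap<String, u64>,
}

impl ResourceTable {
    fn new() -> Self {
        Self { next_id: 1, ids: HashMap::new() }
    }

    /// Declare a resource once; subsequent messages carry only the ID.
    fn declare_resource(&mut self, key: &str) -> u64 {
        if let Some(&id) = self.ids.get(key) {
            return id; // stable mapping: same key, same ID
        }
        let id = self.next_id;
        self.next_id += 1;
        self.ids.insert(key.to_string(), id);
        id
    }
}

fn main() {
    let mut table = ResourceTable::new();
    let key = "/home/sensor/temperature";
    let id = table.declare_resource(key);
    // The full key costs its string length on the wire every time;
    // a small ID, VLE-encoded, can cost a single byte.
    println!("key: {} bytes on the wire; id {} fits in 1 VLE byte", key.len(), id);
    assert!(id < 128);
    assert_eq!(table.declare_resource(key), id);
}
```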
    halfbit
    @halfbit:matrix.org [m]
    that's really really nice
    halfbit
    @halfbit:matrix.org [m]
    with that sync branch, I get 1.5 million msgs/sec with the shm thr samples, amazing
    so... yeah I think removing some async stuff there really did do some magic
    Luca Cominardi
    @Mallets
    Yes, that’s what we discovered by investigating the async stack in depth. It’s very powerful, but has a huge cost when used everywhere. So, the sync branch is really an ongoing effort to fence the async part into a few specific parts of zenoh.
    halfbit
    @halfbit:matrix.org [m]
    makes a lot of sense
    Oliver Layer
    @OliLay
    Hi @Mallets. Thank you very much for the information, which makes the protocol much clearer for me now. I was a bit confused because I had captured zenoh TCP traffic, and as I saw in the source code, TCP requires both the frame length and the payload length to be prepended. Leaving that aside (e.g. using UDP) and using a 1-byte SN as you said, I get how you can have a 4-byte data message on the wire. Thanks again :)
    kydos
    @kydos

    > with that sync branch, I get 1.5 million msgs/sec with the shm thr samples, amazing

    Using shared memory is useful for payloads that are at least 1 KB. Otherwise, try without shared memory for smaller data… you should get even higher throughput.