@cite-reader This is incredible feedback. It's my second day with ZMQ and I really liked it until I reached the STREAM socket. You've saved me a lot of effort.
I will check things with java.net.Socket and other libs for my current use case, MQTT.
I'll keep up my general learning with ZMQ, as it offers so many socket types to try for different use cases.
Hi, is ZMQ a good option for real-time browser-based video/audio streaming (reference: https://github.com/zeromq/JSMQ)? If yes, what would be the general capacity (number of streams ZMQ can handle, say each payload is 50 kB per 100 ms) on an example Linode machine with 16 GB of RAM and 6 cores? Can it handle 1000, or at least 500, concurrent streams on such a server?
At the moment I'm already using WebSockets to stream audio/video, but the big issue I face is that the more clients connect to the "video channel", the slower the video playback becomes for all the other clients. I'm also wondering whether ZMQ is vulnerable to this limitation?
Off the top of my head, cloud functions are extremely transient, which means something else in the messaging topology needs to be stable. Either subscribers get listed in the cloud function's config, they register themselves in some KV store you have lying around, or you put a broker on a compute engine instance. Depending on your deliverability needs, that broker might be as simple as running zmq_proxy between an XPUB and an XSUB socket, or you might need to run with DEALER/ROUTER and write custom code to handle acknowledgements and maybe disk persistence.
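A minimal sketch of that XPUB/XSUB proxy in pyzmq, assuming everything sits on one box over TCP (the ports are placeholders):
```python
import zmq

ctx = zmq.Context.instance()

xsub = ctx.socket(zmq.XSUB)     # publishers connect here
xsub.bind("tcp://*:5556")

xpub = ctx.socket(zmq.XPUB)     # subscribers connect here
xpub.bind("tcp://*:5557")

# Blocks forever: forwards publications from XSUB to XPUB and relays
# subscription messages back the other way.
zmq.proxy(xsub, xpub)
```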
If you do want stronger deliverability guarantees than you can get out of PUB/SUB sockets by themselves, it's worth weighing the development work of building all that against just using GCP's "Cloud Pub/Sub" product.
I'm trying to develop asynchronous RPC and am wondering if there's a reason to use PAIR vs. DEALER/ROUTER. It seems like PAIR would be the natural choice because both sides can initiate communication, whereas my understanding of DEALER/ROUTER is that you would send using DEALER and reply using ROUTER. Or you could set up DEALER <> DEALER, I guess.
I guess my main question is: is there any reason not to use PAIR for this use case, even though it's not strictly a multi-threading use case?
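For context, this is roughly the PAIR shape I had in mind; a minimal inproc sketch (the endpoint name and messages are made up):
```python
import threading
import zmq

ctx = zmq.Context.instance()

def worker():
    s = ctx.socket(zmq.PAIR)
    s.connect("inproc://rpc-pair")       # arbitrary endpoint name
    s.send_string("request from worker") # either side may initiate
    print("worker got:", s.recv_string())

main = ctx.socket(zmq.PAIR)
main.bind("inproc://rpc-pair")           # bind before the peer connects
t = threading.Thread(target=worker)
t.start()

print("main got:", main.recv_string())
main.send_string("reply from main")
t.join()
```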
Okay, that's a good point. I would need to know how many clients would connect at design time. I guess my design is more accurately a single server with N clients that need to make concurrent requests to the server (ideally just message sending). Another point I should keep in mind is that the docs say PAIR is unsuitable for TCP-based communication, due to the lack of reconnection.
Given that, I think the more appropriate design is DEALER<>ROUTER. Here the clients would send requests over DEALER sockets, and the service would process them and return the responses via its ROUTER socket using the routing id. There may be a reason why certain recommendations are made in the docs :sweat_smile:
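As a sanity check for myself, here's a rough pyzmq sketch of that DEALER<>ROUTER shape (endpoint and payloads are illustrative, not real code):
```python
import threading
import zmq

ctx = zmq.Context.instance()

def server():
    router = ctx.socket(zmq.ROUTER)
    router.bind("tcp://*:5555")
    while True:
        # ROUTER prefixes each incoming message with the peer's routing id.
        routing_id, request = router.recv_multipart()
        reply = b"reply to " + request
        # Sending that id back first routes the reply to the right client.
        router.send_multipart([routing_id, reply])

threading.Thread(target=server, daemon=True).start()

client = ctx.socket(zmq.DEALER)
client.connect("tcp://localhost:5555")
client.send(b"request 1")   # DEALER itself sends no routing id
print(client.recv())        # receives just the reply payload
```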
I'm seeing some unusual behavior with our ZMQ setup and I'm curious if anyone has ideas on where to start poking around. We are using v3.2 (upgrading is on the roadmap, but not any time super soon) and also using jzmq, which we build against our own zmq build. We have services using jzmq that have a ROUTER socket listening for client connections and a second ROUTER that is used to distribute work to a bunch of worker threads. This is all on Linux. On the client side, we are primarily Windows and use cppzmq (generally inside a library wrapped in Python bindings) to talk to the services. Clients send messages using a DEALER socket and then poll with a given timeout waiting on a reply (the server will send an ack message to the client if the request is taking a long time to process, to keep it from timing out). If there is a long enough period without receiving a reply from the server (either the ack or the answer to the query), an exception is raised and the socket is eventually remade. Generally this has all seemed to work fine.
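For reference, the client-side pattern is roughly like this simplified pyzmq sketch (endpoint, timeout, and message contents are made up, not our actual code):
```python
import zmq

TIMEOUT_MS = 5000                          # illustrative value
ctx = zmq.Context.instance()

sock = ctx.socket(zmq.DEALER)
sock.connect("tcp://service-host:5555")    # placeholder endpoint
sock.send(b"query")

poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)

while True:
    events = dict(poller.poll(TIMEOUT_MS))
    if sock not in events:
        # No ack and no answer within the window: raise, and the caller
        # eventually closes and remakes the socket.
        raise RuntimeError("service timed out")
    reply = sock.recv()
    if reply == b"ACK":                    # server is still working; keep waiting
        continue
    print("answer:", reply)
    break
```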
Recently, with the whole pandemic, we've had lots of people transition to working over VPN and have now started to notice some unusual behavior on the service side. Inspecting the open sockets on the service side, I saw well over 3000 open connections today (much higher than normal) to one of our services, and many IPs with 30-60 open sockets, almost exclusively from machines with VPN-related IP addresses. Investigating some of those client IPs, it seems that in many cases the computer that made those sockets is no longer the machine that currently holds the VPN address (our VPN system is a big pool of IPs, so there is no guarantee the same machine gets the same IP every day). So something in ZMQ is hanging onto those connections even though the client socket has clearly been cleaned up.
From this Stack Overflow post it seems like there might be something happening with messages being queued but never sent on a ROUTER (https://stackoverflow.com/questions/28558274/how-to-drop-inactive-disconnected-peers-in-zmq). Does that seem like a likely candidate for what is happening here? Are there things I can do on the ZMQ side to try and prevent this, given that I don't have any control over the client IPs or when they might drop off the VPN (either intentionally or because the Internet is cruel)?
You can use the ZMQ_TCP_KEEPALIVE family of socket options to tighten that detection window.
Once you've upgraded to a newer libzmq, you can also add ZMQ_HEARTBEAT_TIMEOUT to your repertoire of tools to deal with it.
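Something along these lines, as a sketch; the values are illustrative, and the heartbeat options need a newer libzmq than your 3.2, so they only apply after the upgrade:
```python
import zmq

ctx = zmq.Context.instance()
router = ctx.socket(zmq.ROUTER)

# TCP-level keepalives: have the OS probe idle peers and drop
# connections that stop answering.
router.setsockopt(zmq.TCP_KEEPALIVE, 1)
router.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 60)    # seconds idle before probing
router.setsockopt(zmq.TCP_KEEPALIVE_INTVL, 10)   # seconds between probes
router.setsockopt(zmq.TCP_KEEPALIVE_CNT, 3)      # failed probes before dropping

# ZMTP-level heartbeats (newer libzmq only): the library pings each peer
# and closes the connection if no reply arrives within the timeout.
router.setsockopt(zmq.HEARTBEAT_IVL, 10000)      # ms between pings
router.setsockopt(zmq.HEARTBEAT_TIMEOUT, 30000)  # ms to wait for a reply

router.bind("tcp://*:5555")
```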