Hello, libuv beginner here
My test application creates TCP connections to an HTTP server, sends a GET request upon connection, and begins reading the stream. It really doesn't care what's inside the stream, so it always passes the same buffer to alloc_cb (it never actually calls malloc).
This seems to be working (the HTTP server sends data to all these connections). My problem is that this test application takes 100% of one core (I'm running the test on a Mac).
I'm using the default loop and tried running uv_run both with
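For reference, a minimal sketch of the buffer-reuse pattern described above (one static buffer handed out by alloc_cb, no per-read allocation); the names `shared_buf`, `alloc_cb`, and `read_cb` are illustrative, not from the original post:

```c
#include <uv.h>

/* One shared buffer reused for every read: the payload is never
 * inspected, so there is no need to allocate per read. */
static char shared_buf[65536];

static void alloc_cb(uv_handle_t *handle, size_t suggested_size, uv_buf_t *buf) {
  (void) handle;
  (void) suggested_size;
  *buf = uv_buf_init(shared_buf, sizeof(shared_buf));
}

static void read_cb(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
  (void) buf;
  if (nread < 0)  /* UV_EOF or a real error: stop and close the connection. */
    uv_close((uv_handle_t *) stream, NULL);
  /* nread == 0 just means "no data right now"; nread > 0 is ignored here. */
}
```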
nread may be zero if there is no more data to be read. If addr is NULL, it indicates there is nothing to read (the callback shouldn't do anything); if not NULL, it indicates that an empty datagram was received from the host at addr. The flags parameter may be UV_UDP_PARTIAL if the buffer provided by your allocator was not large enough to hold the data. In this case the OS will discard the data that could not fit (that's UDP for you!).
When and why would I receive such a callback?
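This callback is the one registered via uv_udp_recv_start(). A sketch of how the cases from the quoted passage map onto the callback body (the function name `on_recv` is mine):

```c
#include <uv.h>

static void on_recv(uv_udp_t *handle, ssize_t nread, const uv_buf_t *buf,
                    const struct sockaddr *addr, unsigned flags) {
  (void) handle;
  if (nread == 0 && addr == NULL)
    return;                  /* nothing was read; libuv is just handing the buffer back */
  if (nread == 0 && addr != NULL)
    return;                  /* an empty datagram arrived from the host at addr */
  if (flags & UV_UDP_PARTIAL) {
    /* buf was too small; the OS discarded the data that could not fit */
  }
  if (nread > 0) {
    /* process buf->base[0 .. nread) */
  }
}
```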
I have two libuv questions that I hope someone can help me with:

1. Is there a way to send data to work running via uv_queue_work in the thread pool? Essentially the opposite of what uv_async_send does, which, if I am not mistaken, is to send information back to the event loop.
2. How can I get the C++ thread id?

Thanks a tonne!
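On the first question: as far as I know libuv has no dedicated loop-to-worker messaging API, but the usual pattern is to hang your input on the uv_work_t before queuing it. A minimal sketch under that assumption (the `job_t` struct and callback names are mine):

```c
#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

typedef struct {
  uv_work_t req;  /* libuv work request; req.data points back at this struct */
  int input;      /* data handed TO the worker */
  int result;     /* data handed back to the event loop */
} job_t;

/* Runs on a thread-pool thread. */
static void work_cb(uv_work_t *req) {
  job_t *job = (job_t *) req->data;
  job->result = job->input * 2;  /* stand-in for expensive work */
}

/* Runs back on the event-loop thread once the worker finishes. */
static void after_work_cb(uv_work_t *req, int status) {
  job_t *job = (job_t *) req->data;
  if (status == 0)
    printf("worker produced %d\n", job->result);
  free(job);
}

int main(void) {
  uv_loop_t *loop = uv_default_loop();
  job_t *job = malloc(sizeof(*job));
  job->input = 21;
  job->req.data = job;
  uv_queue_work(loop, &job->req, work_cb, after_work_cb);
  return uv_run(loop, UV_RUN_DEFAULT);
}
```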
@indutny Do you know who would know about TCP packet coalescing? Is this something Linux kernels do? Does libuv do it anywhere?
For context, I'm trying to find the root cause of a performance issue in MagicScript where some devices get data callbacks of up to 64 KB each, while other devices get them at around 1.2 KB (roughly the typical MTU size for a TCP packet).
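As far as I know, libuv itself does not merge reads: a read callback simply receives however much data the kernel happened to have buffered on the socket, which is one way per-device callback sizes can differ. On the send side the kernel can coalesce small writes (Nagle's algorithm), and the one knob libuv exposes for that is uv_tcp_nodelay; a minimal sketch, in case that turns out to be relevant:

```c
#include <uv.h>

/* Disable Nagle's algorithm so the kernel sends small writes
 * immediately instead of coalescing them into larger segments. */
int disable_nagle(uv_tcp_t *handle) {
  return uv_tcp_nodelay(handle, 1);  /* 1 = enable TCP_NODELAY */
}
```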