Hello, libuv beginner here
My test application creates TCP connections to an HTTP server, sends a GET request upon connection, and begins reading the stream. It really doesn't care what's inside the stream, so it always passes the same buffer to `alloc_cb` (it never actually calls `malloc`).
This seems to be working (the HTTP server sends data to all these connections). My problem is that this test application takes 100% of one core (I'm running the test on a Mac).
I'm using the default loop and tried running `uv_run` both with
`nread` may be zero if there is no more data to be read. If `addr` is NULL, it indicates there is nothing to read (the callback shouldn't do anything); if not NULL, it indicates that an empty datagram was received from the host at `addr`. The `flags` parameter may be `UV_UDP_PARTIAL` if the buffer provided by your allocator was not large enough to hold the data. In this case the OS will discard the data that could not fit (that's UDP for you!).
When and why can I receive such a callback?
I have two libuv questions that I hope someone can help me with:
`uv_queue_work` in the thread pool? Essentially the opposite of what
`uv_async_send` does, which, if I am not mistaken, is to send information back to the event loop.
C++ thread id?
Thanks a tonne!
@indutny Do you know who would know about TCP packet coalescing? Is this a thing Linux kernels do? Does libuv do it anywhere?
For context, I'm trying to find the root cause of a performance issue in MagicScript where some devices get data callbacks of up to 64 KB each, while other devices get them at around 1.2 KB (the typical MTU size for TCP packets).
In my `httpSignature.sign()` call, I provided my key (the privateKey.pem contents as a string), my keyId, and headersToSign.
I don't think the keyId affects the signature, but the key/headers dictate how the signature gets produced. I did not specify an algorithm in my case, so the key.createSign() function automatically detected the algorithm to use based on the key's type.
In the end, `httpSignature.sign()` produces a signature that gets added to my request's headers. I've compared the signature I got from Joyent's library against the OpenSSL tool and saw that there is a difference in the signature value. Theoretically, shouldn't `httpSignature.sign()` give a signature that is the same as the OpenSSL tool's?
I've used the same stringToSign produced by `httpSignature.sign()`, and with the OpenSSL tool I ran this command:
printf '%b' "$myStringToSign" | openssl dgst -sha256 -sign private.pem | openssl enc -e -a | tr -d '\n'
where $myStringToSign is the environment variable that I set to the same value as stringToSign from `httpSignature.sign()`, and private.pem is my private key file.
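Not the original poster's setup, but a self-contained way to sanity-check the OpenSSL side of that pipeline with a throwaway key (the key, file names, and signing string below are all made up for illustration): sign a fixed string, base64-encode it on one line as in the command above, then decode and verify with the matching public key. If this round-trips, the discrepancy is most likely in the exact bytes being signed rather than in OpenSSL itself; note in particular that `printf '%b'` expands backslash escapes such as `\n`, so the shell variable must hold the same literal bytes (including real newlines between header lines) that `httpSignature.sign()` signed, and the key type/digest must match what the library auto-detected.

```shell
# Hedged sketch with a throwaway RSA key and a made-up stringToSign,
# just to confirm that sign -> base64 -> decode -> verify round-trips.
set -e
tmp=$(mktemp -d)

# Throwaway 2048-bit RSA key pair (NOT the poster's privateKey.pem).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 \
  -out "$tmp/private.pem" 2>/dev/null
openssl pkey -in "$tmp/private.pem" -pubout -out "$tmp/public.pem"

# A made-up signing string; in http-signature this would be the
# chosen header lines joined by real newline bytes.
printf 'date: Thu, 05 Jan 2017 21:31:40 GMT' > "$tmp/tosign.txt"

# Sign with SHA-256, then base64-encode on a single line,
# mirroring the original "openssl enc -e -a | tr -d '\n'" step.
openssl dgst -sha256 -sign "$tmp/private.pem" \
  -out "$tmp/sig.bin" "$tmp/tosign.txt"
b64=$(openssl enc -e -a -in "$tmp/sig.bin" | tr -d '\n')
echo "signature: $b64"

# Round-trip: decode the single-line base64 and verify it
# against the public key over the exact same input bytes.
printf '%s' "$b64" | openssl enc -d -a -A -out "$tmp/sig2.bin"
openssl dgst -sha256 -verify "$tmp/public.pem" \
  -signature "$tmp/sig2.bin" "$tmp/tosign.txt"
```

If the round-trip verifies but the library's signature still differs, comparing the byte length of `$myStringToSign` against the stringToSign the library reports is a quick way to spot escape-expansion or line-ending mismatches.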