These are chat archives for fanout/pushpin

Oct 2018
Justin Karneges
Oct 24 2018 02:12
@yukw777 are there multiple publishers or one publisher? sounds like multiple otherwise I'm not sure how you'd increase load. do you get occasional latency with a single publisher?
cd-rnukala
Oct 24 2018 14:05
Hi all
I'm running into a strange issue.
If I have JSON with Chinese characters in the payload,
I'm receiving an error 400 Bad Request, invalid JSON. Is there anything specific I need to set on Pushpin?
httpControlRespond(req, 400, "Bad Request", "Body is not valid JSON.\n");
this is where it is failing
cd-rnukala
Oct 24 2018 14:22
@jkarneges any inputs from you on this?
Justin Karneges
Oct 24 2018 15:26
@cd-rnukala be sure your request body is encoded as UTF-8. I'm not sure what else the problem could be
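A minimal sketch of what that looks like on the publisher side (Python for illustration; the channel name is an assumption, the /publish endpoint appears later in this thread):

```python
import json

# A publish payload containing Chinese characters; channel "test" is illustrative.
payload = {
    "items": [
        {
            "channel": "test",
            "formats": {"http-stream": {"content": "汉语\n"}},
        }
    ]
}

# Option 1: let json.dumps escape non-ASCII to \uXXXX -- always-safe ASCII bytes.
body_escaped = json.dumps(payload).encode("ascii")

# Option 2: keep the raw characters, in which case the bytes MUST be UTF-8.
body_utf8 = json.dumps(payload, ensure_ascii=False).encode("utf-8")

# Both forms decode to the same structure.
assert json.loads(body_escaped) == json.loads(body_utf8)
```

Either body can then be POSTed with `Content-Type: application/json`, e.g. `curl -H 'Content-Type: application/json' --data-binary @data.json http://localhost:5561/publish`. A body encoded in anything other than UTF-8 (e.g. a legacy Chinese codepage) would produce exactly the 400 above.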
Peter Yu
Oct 24 2018 15:26
@jkarneges one publisher as in the backend that makes POST calls to pushpin? There's only one pushpin and one backend in our setup. One goroutine is subscribed to redis channels and keeps relaying messages to pushpin in a for loop. We're increasing the load by increasing the frequency of redis pub/sub messages. If we spin up multiple goroutines to relay messages to pushpin (effectively multiple publishers), pushpin's latency becomes even worse. Some messages even take 5 seconds.
we did verify that distributing the load by having multiple pods behind a load balancer helps.
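One thing worth checking in a relay loop like that is whether every redis message becomes its own POST. The publish body's "items" field is an array, so several queued messages can go out in a single request. A sketch of batching (Python for illustration; channel name and helper are hypothetical):

```python
import json

def make_publish_body(messages, channel="test"):
    """Pack several pending messages into one publish request body.

    Draining the queue into one POST per batch, instead of one POST
    per message, cuts the number of HTTP round trips the relay pays.
    """
    return json.dumps({
        "items": [
            {"channel": channel,
             "formats": {"http-stream": {"content": m + "\n"}}}
            for m in messages
        ]
    }).encode("utf-8")

body = make_publish_body(["a", "b", "c"])
assert len(json.loads(body)["items"]) == 3
```

Whether batching is acceptable depends on how much extra delivery delay the application can tolerate while a batch fills.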
Justin Karneges
Oct 24 2018 15:29
and if you have exactly one goroutine doing publishing, do you see varied latency?
Peter Yu
Oct 24 2018 15:31
yeah, less frequently than with multiple goroutines, but it definitely happens
i just ran one test
some messages taking about 2 seconds to come back from pushpin
Justin Karneges
Oct 24 2018 15:34
2 second round trip time for a POST request, when there are no subscribed clients and only 1 publisher? that is nuts
Peter Yu
Oct 24 2018 15:35
oh sorry, this is my original set up
so there are clients
we did try bombing pushpin with one subscriber and one publisher
setting message_rate to 0 seemed to help in that case
but as soon as we added more subscribers, it suffered the same problem
let me send you the simple test script we used. maybe we're doing something wrong.
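For reference, the message_rate setting mentioned above lives in the [handler] section of pushpin.conf (exact defaults and semantics may vary by version):

```ini
[handler]
# per-connection message rate limit; setting it to 0, as tried in the
# single-subscriber test above, appeared to relax the throttling
message_rate=0
```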
Justin Karneges
Oct 24 2018 15:39
I just tried apache bench and didn't see any issue with this minimal test:
ab -c1 -n100000 -T application/json -p data.json http://localhost:5561/publish
where data.json is this:
{
  "items": [
    {
      "channel": "test",
      "formats": {
        "http-stream": {
          "content": "汉语\n"
        }
      }
    }
  ]
}
no subscribers, 1 publisher (the -c1), and got 2800 rps with max 8ms latency
Justin Karneges
Oct 24 2018 15:47
with -c 50 (50 concurrent publishers): 6600 rps with 48ms max latency (99% within 15ms)
tiny message of course
Justin Karneges
Oct 24 2018 15:53
i think to get 300ms-2s latency there'd need to be a network issue, or publisher-side issue
(note that I ran the above test on localhost)
Peter Yu
Oct 24 2018 16:16
@jkarneges sorry it took a while. wanted to clean things up a bit and add some helpful comments for you.
i'll try running apache bench too just to be sure
Peter Yu
Oct 24 2018 16:34
ah this is interesting. I'm running apache bench on my laptop (latest macbook pro) against pushpin, and sending 100k messages timed out on me, so I reduced it to 20k and got the following results:
Server Software:
Server Hostname:
Server Port:            5561

Document Path:          /publish
Document Length:        10 bytes

Concurrency Level:      1
Time taken for tests:   34.439 seconds
Complete requests:      20000
Failed requests:        0
Total transferred:      1880000 bytes
Total body sent:        5900000
HTML transferred:       200000 bytes
Requests per second:    580.73 [#/sec] (mean)
Time per request:       1.722 [ms] (mean)
Time per request:       1.722 [ms] (mean, across all concurrent requests)
Transfer rate:          53.31 [Kbytes/sec] received
                        167.30 kb/s sent
                        220.61 kb/s total

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2 148.7      0   19610
Processing:     0    0   1.0      0     121
Waiting:        0    0   1.0      0     120
Total:          0    2 148.7      0   19611

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%  19611 (longest request)
@jkarneges ^
could you maybe send me your pushpin configuration?
cd-rnukala
Oct 24 2018 16:36
@jkarneges What configuration changes do I need to make for Pushpin to enable it to send a payload with UTF-16 encoding?
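I'm not aware of a Pushpin setting for UTF-16 request bodies; per the earlier reply, the publish endpoint expects UTF-8 JSON. A sketch of transcoding at the publisher instead (the UTF-16 input bytes here are hypothetical):

```python
import json

# Hypothetical payload that arrives UTF-16 encoded (BOM included).
utf16_body = json.dumps({"content": "汉语"}, ensure_ascii=False).encode("utf-16")

# Transcode before publishing: decode with the source encoding,
# then re-encode as UTF-8 for the /publish request body.
utf8_body = utf16_body.decode("utf-16").encode("utf-8")

assert json.loads(utf8_body) == {"content": "汉语"}
```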
Peter Yu
Oct 24 2018 17:34
@jkarneges when I ran pushpin separately on an EC2 instance, it seems to work fine... so confused right now.