These are chat archives for cherrypy/cherrypy

May 2017
Dan Vinakovsky
May 24 2017 02:16
Hi all
I came across what I believe to be a bug/performance limitation. Based on some googling, it may have been brought up several years back (I'll create an issue on Github if it wasn't / you guys aren't aware of it)
These two posts explain the situation very well:
in a nutshell, with the default settings, if you fire off 100 simultaneous requests to the cherrypy http server, it will very quickly respond to 10 of them, then do nothing for roughly 10 seconds, then process another batch of 10
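The batching effect described above can be sketched with a plain-stdlib toy, not CherryPy itself: a fixed pool of 10 workers (matching cherrypy's default `server.thread_pool` of 10) serving 100 simultaneous "requests". The per-request service time is an illustrative assumption.

```python
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 10        # cherrypy's default server.thread_pool is 10
NUM_REQUESTS = 100
SERVICE_TIME = 0.05   # illustrative stand-in for per-request service time

def handle(i):
    # Stand-in for one HTTP request being serviced by a worker thread.
    time.sleep(SERVICE_TIME)
    return i

start = time.monotonic()
with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(handle, range(NUM_REQUESTS)))
elapsed = time.monotonic() - start

# 100 requests / 10 workers -> ~10 sequential batches, so wall-clock time
# is roughly (NUM_REQUESTS / POOL_SIZE) * SERVICE_TIME, not SERVICE_TIME.
print(f"served {len(results)} requests in {elapsed:.2f}s")
```

The arithmetic is the point: with a fixed pool, requests beyond the pool size queue up and are served in batches, which is why the 100-request burst drains 10 at a time.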
Dan Vinakovsky
May 24 2017 02:22
I think I understand why this happens, and I've been able to mitigate it somewhat by upping the thread pool size, reducing the socket timeout to 5 seconds (from the default of 10), and configuring cherrypy to respond with the "Connection" http header set to "close"
that said, this is of course not ideal, as I've essentially degraded my web app to HTTP/1.0 behavior (one request, one response per connection)
is there something obvious that i'm missing, or should i swap out the default HTTP server for something else?
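For reference, the mitigations described above map onto a CherryPy config file along these lines. The values are illustrative, and using the `tools.response_headers` tool is one assumed way to force `Connection: close`:

```ini
[global]
# Larger worker pool (default is 10)
server.thread_pool = 30
# Shorter socket timeout (default is 10 seconds)
server.socket_timeout = 5

[/]
# Force "Connection: close" on every response
tools.response_headers.on = True
tools.response_headers.headers = [("Connection", "close")]
```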
David Allouche
May 24 2017 11:35
@hexaclock I, and I think most people using cherrypy on services with significant traffic, use it in a reverse-proxy configuration with a dedicated web server (apache, nginx, etc.) on the public side. Here, I use Apache with "SetEnv proxy-nokeepalive 1". That has no impact on quality of service; it only affects communication between Apache and cherrypy.
Whenever people ask questions about ssl support or anything related to relatively large scale deployments, the answer is usually "meh, use a reverse proxy". So there is little motivation to fix those issues.
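The Apache side of the setup described above looks roughly like the following. Apart from `SetEnv proxy-nokeepalive 1`, the backend address/port and the surrounding mod_proxy directives are assumptions about a typical deployment:

```apache
<VirtualHost *:80>
    # Disable keep-alive only on the Apache -> cherrypy hop;
    # keep-alive toward clients is unaffected.
    SetEnv proxy-nokeepalive 1
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```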
Jason R. Coombs
May 24 2017 12:57
@hexaclock: There’s also a dynamic thread pool that will allow the pool size to grow based on demand.
That may serve your needs.
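For readers of the archive: CherryPy's built-in server exposes `server.thread_pool` (the starting pool size) and `server.thread_pool_max` (a growth cap, where -1 means unbounded). Assuming this is the growable pool being referred to, the config would look roughly like:

```ini
[global]
server.thread_pool = 10
# Allow the pool to grow up to 100 threads under load (-1 = no cap)
server.thread_pool_max = 100
```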
Dan Vinakovsky
May 24 2017 22:18
Thanks for the quick feedback :)
I tried the dynpool, but it didn't quite work in my situation
I suppose I'll go with nginx and use options like keepalive & max_conns to control the number of open connections between nginx & cherrypy, and let nginx do the heavy lifting between itself & the clients
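The nginx side of the plan above would look roughly like this sketch; the upstream address, `max_conns` value, and keepalive cache size are illustrative assumptions (`proxy_http_version 1.1` plus a cleared `Connection` header are needed for upstream keep-alive to work):

```nginx
upstream cherrypy_app {
    # Cap concurrent connections to the cherrypy backend
    server 127.0.0.1:8080 max_conns=20;
    # Number of idle keep-alive connections cached to the backend
    keepalive 10;
}

server {
    listen 80;
    location / {
        proxy_pass http://cherrypy_app;
        # Required for keep-alive to the upstream
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```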