Paul Sadauskas
@paul
i dunno, I'm just running it locally from docker for this testing.
Paul Sadauskas
@paul
if I point the http benchmark at it directly, looks like I get about 2500req/s.
Samuel Williams
@ioquatix

@paul so

Falcon handled Requests/sec: 3453.9907

batching to influx DB can handle about 2500 requests per second?

Or is that benchmark just a no op
Paul Sadauskas
@paul

right, so influxdb is written in golang, and the http benchmark tool I'm using, [hey](https://github.com/rakyll/hey), is also written in go. I think influxdb allows unlimited connections by default, and I ran hey with concurrency=20. Doing hey -> influxdb directly with a tiny payload of 1 metric results in hey reporting Requests/sec: 2634.1636. Doing hey -> falcon -> async-http -> influxdb with a payload that gets parsed, decoded then re-encoded into the same final payload to influxdb results in Requests/sec: 1745.9014 (unbuffered, no connection limit). Adding in the buffering, it does Requests/sec: 3453.9907.

The parsing, decoding and encoding is all happening synchronously in the app, the http post to influx is wrapped in an Async. So it seems async and/or async-http is able to "spawn" ~1750 fibers/sec, but then finalize the requests at a somewhat slower rate, because it eventually runs out of memory.

image.png
the buffering has totally solved the problem, though.
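The buffering approach Paul describes can be sketched in plain Ruby (stdlib only; this is an illustration of the batching idea, not falcon's or async-http's actual API — `batches` stands in for one HTTP POST to influxdb per batch):

```ruby
# Requests push metrics onto a bounded queue; a single writer drains
# them in batches. SizedQueue#push blocks when the buffer is full,
# which supplies the back-pressure the unbuffered version lacked.
STOP = Object.new
buffer = Thread::SizedQueue.new(1000)
batches = []

writer = Thread.new do
  batch = []
  loop do
    metric = buffer.pop
    if metric.equal?(STOP)
      batches << batch.dup unless batch.empty?
      break
    end
    batch << metric
    if batch.size >= 25
      batches << batch.dup # stand-in for one POST to influxdb
      batch.clear
    end
  end
end

100.times { |i| buffer.push("metric_#{i}") }
buffer.push(STOP)
writer.join
```

With 100 metrics and a batch size of 25, the writer makes 4 upstream writes instead of 100.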
Samuel Williams
@ioquatix
@Paul in order to complete the request, the HTTP body needs to be flushed. If influx db is getting overloaded, it will cause falcon to get overloaded. We should probably think about whether there is a graceful way to apply some back-pressure here. Basically, you are losing the back-pressure by making an async task but not waiting on it, but even more, falcon itself will keep on accepting requests because it can, while the upstream is getting slower and slower, i.e. eventually exhausting memory.
The natural relationship is that accepting requests should slow down if influx db can't handle them fast enough, but ensuring that back-pressure relationship seems a bit tricky to me... I'm not quite sure how you'd feed that back to the falcon accept loop reliably. The simplest option would be to put a limit (i.e. a semaphore) around new connections at some fixed upper bound, e.g. in your case 3000 req/s or thereabouts.
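The semaphore idea can be sketched without the async gem (which provides Async::Semaphore for exactly this shape of problem). Here is a plain-Ruby counting semaphore capping in-flight upstream requests — an illustration only, not falcon's internals:

```ruby
# A minimal counting semaphore: at most `limit` callers run the
# block concurrently; the rest wait, which is the back-pressure.
class SimpleSemaphore
  def initialize(limit)
    @limit = limit
    @count = 0
    @mutex = Mutex.new
    @cond = ConditionVariable.new
  end

  def acquire
    @mutex.synchronize do
      @cond.wait(@mutex) while @count >= @limit
      @count += 1
    end
    begin
      yield
    ensure
      @mutex.synchronize do
        @count -= 1
        @cond.signal
      end
    end
  end
end

semaphore = SimpleSemaphore.new(2)
peak = 0
active = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    semaphore.acquire do
      lock.synchronize { active += 1; peak = [peak, active].max }
      sleep 0.01 # stand-in for the POST to influxdb
      lock.synchronize { active -= 1 }
    end
  end
end
threads.each(&:join)
```

Even with 10 concurrent callers, `peak` never exceeds the limit of 2.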
Paul Sadauskas
@paul
This is not the normal case, but in my particular case, I do not want falcon slowing down or 504'ing requests, because heroku will eventually stop sending the requests completely. Ideally, I want some way to be made aware of the backpressure, so I can scale up the number of falcon host servers I'm running, or notify my users they need to upscale their influxdb server. Like I was saying earlier, some kind of Exception, like "Unable to obtain a connection from the pool within 5 seconds", that I can rescue and handle with alerting or Sentry or something.
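The rescuable failure Paul is asking for could be sketched like this, using stdlib Timeout. `post_to_influx` and `notify_sentry` are hypothetical stand-ins, and the 5-second limit is taken from his example:

```ruby
require "timeout"

# Instead of wedging, turn a slow upstream into a rescued error
# that can be reported (Sentry, alerting, etc.).
def forward_metric(payload, timeout: 5)
  Timeout.timeout(timeout) do
    post_to_influx(payload) # hypothetical upstream write
  end
rescue Timeout::Error => error
  notify_sentry("Unable to reach influxdb within #{timeout}s", error)
  :degraded
end
```

The caller gets either the upstream result or a `:degraded` marker it can act on, rather than a hung request.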
Samuel Williams
@ioquatix
That makes sense.
Paul Sadauskas
@paul
Something else that would be useful is allowing falcon to crash in certain scenarios. When I was running out of memory, falcon would just get wedged: it wouldn't answer any requests (so caddy just returned 504), but it also didn't crash, so that systemd could restart it and recover.
Samuel Williams
@ioquatix
What error was it throwing?
We could probably augment that to propagate certain exceptions back out, which probably makes sense
Samuel Williams
@ioquatix
The design isolates the individual tasks, but you could kill the entire server by calling #stop. When falcon got wedged, was it actually responding at all, or was it fully dead, i.e. if you stopped making requests, would it recover?
Samuel Williams
@ioquatix
Probably the best way to deal with this is a watchdog task or even an external system.
falcon has a supervisor, it could kill unresponsive server instances.
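The watchdog idea can be sketched in plain Ruby: the server touches a heartbeat on every request, and a separate thread exits the process (so systemd can restart it) when the heartbeat goes stale. The `on_stale` hook here is injectable only so the sketch doesn't have to call `Process.exit!` — this is not falcon's supervisor, just the shape of the idea:

```ruby
# Exit (default) or invoke a callback if no heartbeat arrives
# within `stale_after` seconds.
class Watchdog
  def initialize(stale_after: 30, interval: 1, &on_stale)
    @last_beat = Time.now
    @on_stale = on_stale || -> { Process.exit!(1) }
    @thread = Thread.new do
      loop do
        sleep interval
        if Time.now - @last_beat > stale_after
          @on_stale.call
          break
        end
      end
    end
  end

  # Call from the request path to prove the server is alive.
  def beat
    @last_beat = Time.now
  end

  def stop
    @thread.kill
  end
end
```

A wedged server stops calling `beat`, so the watchdog fires even though the process hasn't crashed on its own.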
William T. Nelson
@wtn
I would totally convert tabs to 2*?\s in the ruby files
Samuel Williams
@ioquatix
I hear that 🌭 for indentation is all the rage now
CuQi1H7XEAEYJq_.jpeg
William T. Nelson
@wtn
I have a feature suggestion: wildcard name matching for falcon virtual hosts.
Samuel Williams
@ioquatix
@wtn How should it work?
William T. Nelson
@wtn

For example, rack '*.example.com' would serve requests for hostnames host.example.com, post.example.com and so on.

It would be helpful for apps that run on a large and/or variable number of subdomains (of one domain) from one instance. A multi-hostname or wildcard SSL certificate would be necessary.

I'm experimenting with falcon features for virtual hosting and proxying. It can now handle some of the things that nginx does, and with much less configuration! So I'm excited!
William T. Nelson
@wtn
Here's the documentation for the nginx feature: https://nginx.org/en/docs/http/server_names.html#wildcard_names
Globbing with * could be confusing. As a simpler option, allowing the form rack '.example.com' using .example.com as a special wildcard name would be very helpful. Or, allowing a list of hostnames.
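The matching rule being proposed can be pinned down with a small sketch — a leading `*.` (or a bare leading `.`, nginx-style) matches any subdomain of the given domain, while a plain name must match exactly. This is illustration only, not falcon's actual API:

```ruby
# Match a hostname against a wildcard pattern like "*.example.com"
# or ".example.com"; plain patterns require an exact match.
def wildcard_match?(pattern, hostname)
  if pattern.start_with?("*.")
    suffix = pattern[1..] # ".example.com"
    hostname.end_with?(suffix) && hostname.length > suffix.length
  elsif pattern.start_with?(".")
    hostname.end_with?(pattern) && hostname.length > pattern.length
  else
    pattern == hostname
  end
end
```

As in nginx, `*.example.com` matches `host.example.com` but not `example.com` itself, and the suffix check is anchored at a dot so `badexample.com` doesn't match.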
Samuel Williams
@ioquatix
I see
How does SSL work in this case, do you also use wildcard certificates?
Probably using a regexp would make more sense.
however you'd then need to deal with situations where multiple rack apps match the same request...
Samuel Williams
@ioquatix
I guess what you want is some kind of proxy alias
Samuel Williams
@ioquatix
Something like
    server_name   ~^(www\.)?(.+)$;

    location / {
        root   /sites/$2;
    }
William T. Nelson
@wtn
Yes, I think that would be equivalent.
Alternatively, something like an nginx default server; I want falcon to bind to all network interfaces and use the certificate I specify, and allow requests regardless of hostname. My rack app can take it from there.
David Ortiz
@davidor
Hello, quick question. Is there an equivalent to Puma's before_fork in falcon? Thanks
David Ortiz
@davidor
@ioquatix do you know?
Samuel Williams
@ioquatix
@davidor there is no such thing because it’s not needed. Each application instance is loaded in its own thread/process.
David Ortiz
@davidor
@ioquatix so is there a way to run some code just once even when running multiple falcon workers?
Samuel Williams
@ioquatix
yes
falcon serve --preload "runonce.rb"
you can also use preload option in falcon.rb when using falcon host/falcon virtual
@davidor does that help you?
falcon will also load/require all gems in your group :preload { gem "foo" } from your Gemfile
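Putting the pieces above together, a `falcon.rb` for `falcon host`/`falcon virtual` might look roughly like this — a hedged sketch, since the exact DSL depends on the falcon version; `example.com` and `preload.rb` are placeholders:

```ruby
#!/usr/bin/env -S falcon host
# falcon.rb -- sketch of a host configuration with a preload script.
load :rack

rack "example.com" do
  # Loaded once before workers are forked, so one-time setup
  # (warming caches, requiring gems) runs a single time:
  preload "preload.rb"
end
```

The `--preload` flag on `falcon serve` shown above is the command-line equivalent of this config option.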
David Ortiz
@davidor
@ioquatix that's what I need. Thanks!
Samuel Williams
@ioquatix
Awesome!
Here is an example I'm using to preload rails: https://github.com/rubyapi/rubyapi/blob/master/config/preload.rb