    Jacob Alheid
    @shakefu
    @jonashaag - first, love this project! We're trying to use it instead of uwsgi and have had awesome success so far. Unfortunately we just hit the SERVER_PORT int bug in our deploy. I saw that a fix is already merged to master, but not released. I was wondering if you had a timeline for that fix to be released?
    Jonas Haag
    @jonashaag
    Cool, yes, that will be released in the next few days
    Jacob Alheid
    @shakefu
    :+1:
    Jonas Haag
    @jonashaag
    2.0.5 out!
    Stuart Reynolds
    @stuz5000
    Does bjoern support concurrent connections? Looking at https://blog.appdynamics.com/engineering/a-performance-analysis-of-python-wsgi-servers-part-2/ -- it says yes, but https://github.com/jonashaag/bjoern says no. I'm looking for a threaded server that can make use of shared in-memory caches.
    Jonas Haag
    @jonashaag

    @stuz5000 concurrent connections != threading != shared memory. What are you trying to accomplish?

    bjoern can easily support a large number of concurrent connections. It doesn't support threads or multiprocessing. However, you can spawn multiple bjoern workers listening on the same port with SO_REUSEPORT (https://lwn.net/Articles/542629/), or you can fork multiple workers (see tests/fork.py).
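
    A minimal sketch of the fork approach, loosely modelled on the tests/fork.py pattern mentioned above (the worker count, host, and port here are made up):

    import os
    import bjoern

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('Hello from worker %d\n' % os.getpid()).encode()]

    NUM_WORKERS = 4

    # Bind the listening socket once in the parent...
    bjoern.listen(app, '127.0.0.1', 8080)

    # ...then fork workers that all accept() on the shared socket.
    for _ in range(NUM_WORKERS):
        if os.fork() == 0:
            try:
                bjoern.run()   # child: enter the event loop
            finally:
                os._exit(0)

    for _ in range(NUM_WORKERS):
        os.wait()              # parent: wait for the workers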

    Stuart Reynolds
    @stuz5000
    Thanks @jonashaag -- I'm turning a (number-crunching) API into a web service. The API makes use of in-memory caches (call memoization) to be efficient and avoid duplicate work for the same request. Also, cache misses either enter no-GIL sections, or are processed with multiprocessing or on other hosts, before being returned in the HTTP response. I think it's natural here to have multiple threads answering the same query (because of the latency gains from the memory cache, and low concern about the GIL). However, if I run a single bjoern.run and then make some long-running API call, the next HTTP request isn't answered until the first completes (e.g. if I request /sleepseconds/1, I get exactly one response per second with N concurrent requests). Forking new workers means no shared (in-memory) cache and unnecessarily high latency. I tried multiple threads, each running bjoern.run(wsgi_application, host, port, reuse_port=True), but this gives:

    ev_signal_start: Assertion `("libev: a signal must not be attached to two different loops"
    Jonas Haag
    @jonashaag

    @stuz5000 From your web application's point of view you've got a typical I/O-bound model here. bjoern doesn't take care of application I/O out of the box -- you'll have to make your application yield manually, with a loop like

    while not done():
      yield

    which will be polled by the bjoern event loop. But this doesn't scale well and wastes loads of CPU. You'll be much better off using something like gunicorn with gevent/eventlet workers, meinheld, or something asyncio-based like uvloop.
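
    A minimal sketch of that pattern as a WSGI generator app (start_background_job, done(), and result() are hypothetical stand-ins for however the long-running work is actually launched):

    def app(environ, start_response):
        job = start_background_job(environ)  # hypothetical: launch the work elsewhere
        start_response('200 OK', [('Content-Type', 'text/plain')])
        while not job.done():
            yield b''          # empty chunk; gives control back to the event loop
        yield job.result()     # final body chunk once the work has finished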

    You should really be using something like memcached for caching, not shared memory. Don't even start with shared memory; it will cause you all kinds of problems as you evolve your application.
    Jonas Haag
    @jonashaag
    At least use something that is disk based, not process based. Trust me, it's a terrible idea: you'll make lots of assumptions in your code that only hold true for a shared-memory cache, and then you'll never be able to scale your deployment.
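
    A minimal sketch of the kind of out-of-process caching suggested here, using the third-party pymemcache client (an assumption; any memcached client would do) against a memcached running on localhost:

    import pickle
    from pymemcache.client.base import Client

    cache = Client(('localhost', 11211))

    def memoized(key, compute):
        # Return the cached value for `key`, computing and storing it on a miss.
        raw = cache.get(key)          # None on a cache miss
        if raw is not None:
            return pickle.loads(raw)
        value = compute()
        cache.set(key, pickle.dumps(value), expire=300)  # 5-minute TTL
        return value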
    Stuart Reynolds
    @stuz5000
    @jonashaag Perhaps -- what you say is right for many applications. In fact we already back our in-process memory caches with Redis, but sensible caching depends on usage patterns. Loading the serialized objects on local misses that hit Redis still takes some 30 seconds to warm up the servers. Out-of-process caching is not free because of the serialization costs. Very simply, if our process is stateless then we are (badly) I/O bound while waiting for large objects to deserialize from out-of-process caches.
    Jonas Haag
    @jonashaag
    @stuz5000 may I ask what serialisation format you're using?
    Stuart Reynolds
    @stuz5000
    It depends on the objects. We have machine learning models (cPickled) and large data tables (also cPickle). We could speed this up 3-4x with effort, but without in-process caching the server would still be I/O bound even when reading from memcached or Redis. So -- I'm looking at solutions giving parallelism with shared state (= threads).
    Jacob Alheid
    @shakefu
    Huh. You must have some very large pickled objects if it takes you 30 seconds to load them.
    Stuart Reynolds
    @stuz5000
    Yes -- we try to fill up the available memory with them to minimize cache misses.
    Rizogg
    @Rizogg_twitter
    Is there any documentation on using bjoern with nginx and Django?
    Madhukar S Holla
    @madhukar01
    +1 ^^
    Any documentation on how to set up bjoern with nginx and Django?
    Gabriel Domanowski
    @GGG1998
    Hi! A few hours ago bjoern worked OK, but now I get AttributeError: module 'bjoern' has no attribute 'run'. I can import bjoern, but it seems like no methods are available.
    @madhukar01 use a socket to connect nginx with bjoern
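
    A minimal sketch of that setup, assuming the unix-socket address form from the bjoern README ('myproject.settings' and the socket path are placeholders; nginx would proxy_pass to the same socket):

    import os
    import bjoern
    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
    application = get_wsgi_application()

    # Serve the Django WSGI app on a unix socket for nginx to proxy to.
    bjoern.run(application, 'unix:/tmp/bjoern.sock')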
    Jonas Haag
    @jonashaag
    @GGG1998 paste dir(bjoern)
    ohenepee
    @ohenepee
    Hello guys... does anyone have a Server-Sent Events example I can borrow (that's if it's possible on bjoern)?
    Lev Rubel
    @levchik
    Hi everyone. Is it true that bjoern fully utilizes only one core? If I wanted to make use of all available cores, how would I do that? My application makes heavy use of requests to external services, so I wondered if I could scale bjoern that way.
    Taavi Laanemaa
    @tlaanemaa
    Hello

    I was hoping maybe someone can help me out. I must be doing something wrong here, but I've tried everything and can't figure it out.
    I'm getting the following error:

    Traceback (most recent call last):
      File "./src/bjoern.py", line 5, in <module>
        bjoern.run(api.app)
    AttributeError: module 'bjoern' has no attribute 'run'

    My bjoern.py file contains the following:

    import bjoern
    import api
    
    if __name__ == '__main__':
        bjoern.run(api.app)

    Thanks!

    Taavi Laanemaa
    @tlaanemaa
    Come to think of it, I should probably post that as an issue on GitHub so that others who might have a similar problem can find it
    Goldstein
    @GoldsteinE
    Hi! There was a problem with old Python interpreters causing a segfault. I patched it in #173, but there is probably a need to add checks for null pointers returned from Python C API functions everywhere.
    Александр Сербул
    @AlexanderSerbul
    Good evening! Thank you very much for the bjoern WSGI server! I looked at the sources -- is it true that calling a slow WSGI app (say, one that sleeps for 30 seconds) would block the WSGI server from executing other, say 50 ms, scripts from other clients, due to blocking in 'wsgi_call_application'?
    Jonas Haag
    @jonashaag
    Yes, as is the case with any web server, if you mean "slow" = high CPU load.
    If "slow" = waiting for I/O, you might want to use multiple forks or "receive steering" with bjoern, or you might want to use a web server with special support for async I/O.
    Александр Сербул
    @AlexanderSerbul
    @jonashaag Thank you very much for clarification!
    Tyler Moore
    @devopsec
    @jonashaag We were thinking about implementing openssl-wrapped sockets on this project for backend HTTPS. Would this conflict with Ryan's parser or any of the middleware code?
    Jonas Haag
    @jonashaag
    @devopsec not sure, but why don't you simply put a reverse proxy in front that does SSL termination?
    Tyler Moore
    @devopsec
    That is definitely a solution, but it seems like a heavy resource solely for SSL termination. Thanks for the suggestion; we're going to test the performance when scaling multiple NGINX -> bjoern containers.
    Jonas Haag
    @jonashaag
    Heavy resource usage? Have you ever run an nginx server? :D It requires like 1 MB of RAM.
    Siddharth Jain
    @SidJain1412
    Hey, I'm testing a CPU-intensive Flask app using bjoern. I have a 16-core machine, but bjoern only utilises 8 cores. Why is this, and can I make some changes so that it utilises all cores? (As a comparison, waitress uses 16 cores but performs slightly worse.)
    Jonas Haag
    @jonashaag
    bjoern uses at most 1 core unless you start it in threaded mode or are using forks
    what concurrency mode are you using?
    Siddharth Jain
    @SidJain1412
    thanks for the quick reply, I just ran it using bjoern.run(app, "0.0.0.0", port=80)
    Jonas Haag
    @jonashaag
    In this setup bjoern will use 1 CPU core
    The best way to add concurrency is to use forks or multiple instances ("receive steering"); see the wiki for details
    Siddharth Jain
    @SidJain1412
    I'll check out forks, but it seems to be using 8 cores. And by "wiki" do you mean https://github.com/jonashaag/bjoern/wiki ?
    Kleber Rocha
    @klinux
    Hi, is anyone using Flask with bjoern? I have a problem with flash messages; they simply don't work.
    Siddharth Jain
    @SidJain1412
    @klinux you mean flash messages don't work when using bjoern, and they do when using flask to run the app?
    Kleber Rocha
    @klinux
    @SidJain1412 I'm sorry, I found out what happened: the problem is the OIDC config on nginx; the issue is with redirects.
    Interview
    @interviewer_gitlab
    Is it recommended to run bjoern directly without nginx, or with nginx, in production? (for Flask)
    @ankitarya10 did you replace gunicorn with bjoern?
    Jonas Haag
    @jonashaag
    @interviewer_gitlab you should always put a reverse proxy (e.g. nginx) in front of the WSGI server