    ankitarya10
    @ankitarya10
    Hi guys, I am evaluating bjoern to replace our current gunicorn setup. I was wondering how I can deploy an application under a sub-path, e.g. xyz.com/service1, xyz.com/service2? In gunicorn I can use SCRIPT_NAME.
    PS: I noticed urls actually redirect somewhere, I posted them just as examples.
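    [Editor's note] bjoern itself doesn't seem to offer a SCRIPT_NAME option, but the standard WSGI workaround is a small middleware that moves the prefix from PATH_INFO into SCRIPT_NAME before the app sees the request. A hedged sketch; `PrefixMiddleware` and the `/service1` prefix are illustrative names, not part of bjoern:

    ```python
    # Hypothetical middleware: mount a WSGI app under a URL prefix by
    # shifting the prefix from PATH_INFO into SCRIPT_NAME, similar to
    # what gunicorn's SCRIPT_NAME environment variable achieves.
    class PrefixMiddleware:
        def __init__(self, app, prefix):
            self.app = app
            self.prefix = prefix.rstrip("/")

        def __call__(self, environ, start_response):
            path = environ.get("PATH_INFO", "")
            if path.startswith(self.prefix):
                environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + self.prefix
                environ["PATH_INFO"] = path[len(self.prefix):]
                return self.app(environ, start_response)
            start_response("404 Not Found", [("Content-Type", "text/plain")])
            return [b"Not Found"]
    ```

    Each service would then be wrapped before being passed to bjoern.run, e.g. `bjoern.run(PrefixMiddleware(service1_app, "/service1"), host, port)`.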
    Kevin
    @kevex91
    Hi Jonas, thanks for all the hard work. We've been using bjoern with our Flask based REST API. Was wondering if threading was in the pipeline? Or what the future plans are for bjoern since there hasn't been an update in a few months.
    Jonas Haag
    @jonashaag
    WTF is wrong with Gitter, I never get email notifications... sorry guys!
    @kevex91 no threading plans and no other plans either. It works and has all the features I need, so no reason to add anything.
    Kevin
    @kevex91

    Hi again, using Python 2.7.13 [PyPy 5.7.1 with GCC 6.3.0] I haven't been able to install through pip or setup.py install.

    I keep getting a whole bunch of --> bjoern/filewrapper.c:41:23: error: ‘PyFileObject’

    Is it a PyPy issue?

    Jonas Haag
    @jonashaag
    @kevex91 That's entirely possible; to be honest, I don't think it will work with PyPy
    Kevin
    @kevex91
    Ah, ok. Just trying to squeeze a little bit more out of the code, is all.
    Harsh Patel
    @hp685
    I'm evaluating various WSGI servers and I'm fairly new to bjoern.
    I'm wondering if bjoern is an asynchronous server, and if so, how to handle concurrent connections without the use of threads/processes
    To be more specific, the scenario entails that requests spend most of the time in application code
    @jonashaag
    Jonas Haag
    @jonashaag
    @hp685 in application code = waiting for I/O? or CPU-intensive stuff?
    Harsh Patel
    @hp685
    @jonashaag Network I/O. I figure I need multiple workers as in tests/fork.py.
    Also your comment on one of the issues about libev being an implementation detail and not something exposed to client code clarified it a whole lot. I was looking to get concurrency by leveraging libev
    Jonas Haag
    @jonashaag
    @hp685 yeah I guess you'll have to use something that exposes that to client code, like aiohttp, uvloop, gevent, etc
    Harsh Patel
    @hp685
    👍🏼
    Jay States
    @iamjstates
    hello - anyone around? I have a question about permissions and user/groups
    Anthony
    @tribals
    Hi folks!
    Does anyone use it in production? If so, with which workloads?
    I'm very interested in using it in my own project, but a little afraid: is the project still maintained?
    Seb
    @SebJansen
    @jonashaag
    Do you know how I can make bjoern available from the command line, like Gunicorn is?
    Example: $ gunicorn myproject.wsgi
    pip installing bjoern doesn't create the $ bjoern command
    Jonas Haag
    @jonashaag
    That's not something that's implemented in bjoern, although it might be a good idea. PR welcome. Generally, you'll have to have an if __name__ == '__main__' section at the bottom of bjoern.py (or your module), and then use something like argparse to create a command-line parser. Then it's available with python -m bjoern .... Having it in PATH directly like with gunicorn doesn't really add anything to that.
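    [Editor's note] A hedged sketch of the wrapper Jonas describes: parse a gunicorn-style "module:callable" spec with argparse, import the WSGI app, and hand it to bjoern.run. The argument names and defaults are illustrative, not an existing bjoern CLI:

    ```python
    # Hypothetical stand-in for a "bjoern" command, runnable via
    # python -m <thismodule> myproject.wsgi:application
    import argparse
    import importlib

    def load_app(spec):
        """Resolve 'package.module:callable' (default callable: 'application')."""
        module_name, _, attr = spec.partition(":")
        module = importlib.import_module(module_name)
        return getattr(module, attr or "application")

    def main(argv=None):
        parser = argparse.ArgumentParser(prog="bjoern")
        parser.add_argument("app", help="WSGI app as 'module:callable'")
        parser.add_argument("--host", default="127.0.0.1")
        parser.add_argument("--port", type=int, default=8080)
        args = parser.parse_args(argv)
        import bjoern  # imported lazily so load_app can be reused elsewhere
        bjoern.run(load_app(args.app), args.host, args.port)

    if __name__ == "__main__":
        main()
    ```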
    Andrew
    @andrewllyons
    Hi everyone! Having trouble getting this to install on python3. I'm running on OSX 10.13.3 and have libev installed via homebrew. In fact installing using the python 2.7 version of pip (pip2) actually works, but attempting via pip3 can't find the libev header (bjoern/request.h:4:10: fatal error: 'ev.h' file not found). Any ideas?
    @jonashaag
    Jonas Haag
    @jonashaag
    Hm, works for me with the exact same setup. What's your pip3 version, @andrewllyons?
    Andrew
    @andrewllyons
    Hey @jonashaag thanks for the reply! Sorry for the late one on my end, I'm based in Australia so it was the weekend for me. My pip3 is version 9.0.1. My machine has both python2 and python3 (and pip2 & pip3 respectively) and will install via pip2, just not pip3. It seems strange that pip2 can detect my libev headers but pip3 can't
    Jacob Alheid
    @shakefu
    @jonashaag - 1st - love this project! We're trying to use it instead of uwsgi and had awesome success so far. Unfortunately we just hit the SERVER_PORT int bug in our deploy. I saw that a fix is already merged to master, but not released. I was wondering if you had a timeline for that fix to be released?
    Jonas Haag
    @jonashaag
    Cool, yes, that will be released in the next few days
    Jacob Alheid
    @shakefu
    :+1:
    Jonas Haag
    @jonashaag
    2.0.5 out!
    Stuart Reynolds
    @stuz5000
    Does bjoern support concurrent connections? Looking at https://blog.appdynamics.com/engineering/a-performance-analysis-of-python-wsgi-servers-part-2/ -- it says yes, but https://github.com/jonashaag/bjoern says no. Looking for a threaded server that can make use of shared in-memory caches.
    Jonas Haag
    @jonashaag

    @stuz5000 concurrent connections != threading != shared memory. What are you trying to accomplish?

    bjoern can easily support a large number of concurrent connections. It doesn't support threads or multiprocessing. However you can spawn multiple bjoern workers to listen on the same port with SO_REUSEPORT (https://lwn.net/Articles/542629/), or you can fork multiple workers (see tests/fork.py)
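    [Editor's note] A hedged sketch of the fork pattern Jonas points to (cf. tests/fork.py in the bjoern repo): bind the listening socket once in the parent with bjoern.listen, fork N workers, and run the event loop in each child on the shared socket. Worker count and port are illustrative:

    ```python
    # Pre-fork pattern: one listening socket, several worker processes.
    import os

    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [("Hello from PID %d\n" % os.getpid()).encode()]

    def serve_forked(host="0.0.0.0", port=8080, num_workers=4):
        import bjoern  # imported lazily so this module stays importable without bjoern
        bjoern.listen(app, host, port)      # bind once, before forking
        pids = []
        for _ in range(num_workers):
            pid = os.fork()
            if pid == 0:                    # child: enter the event loop
                try:
                    bjoern.run()
                finally:
                    os._exit(0)
            pids.append(pid)
        for pid in pids:                    # parent: wait for the workers
            os.waitpid(pid, 0)

    if __name__ == "__main__":
        serve_forked()
    ```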

    Stuart Reynolds
    @stuz5000
    Thanks @jonashaag -- I'm turning a (number-crunching) API into a webservice. The API makes use of in-memory caches (call memoization) to be efficient and avoid duplicate work for the same request. Also, cache misses either enter no-GIL sections, or are processed with multiprocessing or on other hosts before being returned in the HTTP response. I think it's natural here to have multiple threads answering the same query (because of the latency gains from the memory cache, and low concern about the GIL). However, if I run a single bjoern.run and then make some long-running API call, the next HTTP request isn't answered until the first completes (e.g. if I request /sleepseconds/1, I get exactly one response per second with N concurrent requests). Forking new workers means no shared (in-memory) cache and unnecessarily high latency. I tried multiple threads, each running bjoern.run(wsgi_application, host, port, reuse_port=True); this gives: ev_signal_start: Assertion `("libev: a signal must not be attached to two different loops"
    Jonas Haag
    @jonashaag

    @stuz5000 From your web application's point of view you've got a typical I/O bound model here. bjoern does not care about I/O out of the box -- you'll have to make your application yield manually: A loop like

    while not done():
      yield

    which will be polled by the bjoern event loop. But this doesn't scale well and wastes loads of CPU. You'll be much better off using something like gunicorn with gevent/eventlet workers, meinheld, or something asyncio-based like uvloop.

    You should really be using something like memcache for caching, not shared memory. Don't even start with shared memory, it will cause you all kinds of problems when you evolve your application.
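    [Editor's note] To make the yield pattern above concrete, a hedged sketch of a WSGI app whose response body is a generator; each yield hands control back to the event loop between chunks, so other connections can be serviced in between. `polling_app` is an illustrative name, not bjoern API:

    ```python
    # WSGI app returning a generator body: the server pulls one chunk at
    # a time, and control returns to the event loop between chunks.
    def polling_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])

        def body():
            for i in range(3):
                # do a small slice of work here, then yield to the event loop
                yield b"chunk %d\n" % i

        return body()
    ```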
    Jonas Haag
    @jonashaag
    At least use something that is disk-based, not process-based. Trust me, it's a terrible idea; you'll make lots of assumptions in your code that only hold true for a shared-memory cache, and then you'll be unable to ever scale your deployment
    Stuart Reynolds
    @stuz5000
    @jonashaag Perhaps --- what you say is right for many applications. In fact we already back our in-process memory caches with Redis, but sensible caching depends on usage patterns. Loading the serialized objects on a local miss (even with Redis hits) still takes some 30 seconds to warm up the servers. Out-of-process caching is not free because of the serialization costs. Very simply, if our process is stateless then we are (badly) I/O bound while waiting for large objects to deserialize from out-of-process caches.
    Jonas Haag
    @jonashaag
    @stuz5000 may I ask what serialisation format you're using?
    Stuart Reynolds
    @stuz5000
    It depends on the objects. We have machine learning models (cPickled) and large data tables (also cPickle). We could speed this up 3-4x with effort, but without in-process caching the server would still be I/O bound even when reading from memcache or Redis. So -- looking at solutions giving parallelism with shared state (= threads).
    Jacob Alheid
    @shakefu
    Huh. You must have some very large pickled objects if it takes you 30 seconds to load them.
    Stuart Reynolds
    @stuz5000
    Yes -- we try to fill up the available memory with them to minimize cache misses.
    Rizogg
    @Rizogg_twitter
    is there any documentation on using bjoern with nginx and Django?
    Madhukar S Holla
    @madhukar01
    +1 ^^
    Any documentation on setting up bjoern with nginx and Django?
    Gabriel Domanowski
    @GGG1998
    Hi! A few hours ago bjoern worked OK, but now I get AttributeError: module 'bjoern' has no attribute 'run'. I can import bjoern, but it seems like no methods are available
    @madhukar01 use a socket to connect nginx with bjoern
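    [Editor's note] A hedged sketch of that setup: bjoern serves the Django WSGI app on a unix socket, and nginx proxies to it. The project name and socket path are placeholders; the assumption here is that bjoern accepts a "unix:<path>" address, and the nginx side is a standard proxy_pass to a unix socket upstream (shown as a comment):

    ```python
    # Serve a Django project's WSGI app with bjoern on a unix socket.
    #
    # Matching nginx location block (standard proxy setup, assumed):
    #   location / { proxy_pass http://unix:/tmp/bjoern.sock; }
    import bjoern
    from myproject.wsgi import application  # Django's generated WSGI module (placeholder)

    bjoern.run(application, "unix:/tmp/bjoern.sock")
    ```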
    Jonas Haag
    @jonashaag
    @GGG1998 paste dir(bjoern)
    ohenepee
    @ohenepee
    Hello guys... does anyone have a Server-Sent Events example I can borrow (that's if it's possible with bjoern)?
    Lev Rubel
    @levchik
    Hi everyone. I was wondering: does bjoern fully utilize only one core? If I wanted to make use of all available cores, how would I do that? My application makes heavy use of requests to external services, so I wondered if I could scale bjoern that way.
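    [Editor's note] bjoern runs a single-threaded event loop, so one bjoern.run uses one core. A hedged sketch of using all cores with SO_REUSEPORT, as mentioned earlier in this room (assumptions: Linux >= 3.9 and a bjoern build supporting the reuse_port flag); one process is started per CPU, each binding the same port:

    ```python
    # One worker process per core, all binding the same port via SO_REUSEPORT.
    import multiprocessing

    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"hello\n"]

    def worker(host, port):
        import bjoern  # imported in the child so this module stays importable
        bjoern.run(application, host, port, reuse_port=True)

    if __name__ == "__main__":
        procs = [
            multiprocessing.Process(target=worker, args=("0.0.0.0", 8080))
            for _ in range(multiprocessing.cpu_count())
        ]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
    ```

    The kernel then load-balances incoming connections across the listening processes; note that, as discussed above, the processes share no in-memory state.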