Nathaniel J. Smith
@njsmith
so SO_REUSEADDR just tells the kernel to stop worrying about this case, because it's not really an issue. In fact on Windows they don't even support the weird SO_REUSEADDR-disabled semantics.
(but Windows does have a SO_REUSEADDR constant, and if you use it then it turns on some wacky insecure thing, so on Windows the correct thing to do is just never touch SO_REUSEADDR at all)
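As a concrete illustration of that advice (a minimal sketch; the address and port are arbitrary):

import socket
import sys

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if sys.platform != "win32":
    # On POSIX this just lets you rebind a port stuck in TIME_WAIT;
    # on Windows SO_REUSEADDR means something different (and insecure),
    # so there we don't touch it at all.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 8080))
sock.listen()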
Andrew Svetlov
@asvetlov
@njsmith thanks for the explanation!
calbot
@calbot
There doesn't appear to be a way to stream command bytes to redis via generator/consumer... Is there a way that I'm not seeing?
oocydo
@oocydo
Hello chat! How can I handle multiple concurrent TCP connections transferring very large binary files? I want the server to listen to the port, accept incoming connections and receive data from multiple clients at once
TCP is not mandatory, HTTP is okay too, however, TCP is preferred
Also, I would like to receive files in chunks, like BufferedInputStream does, not all at once
The problem is that I do not know anything about sockets, networking, protocols and Python
Andrew Svetlov
@asvetlov
You spotted the problem correctly. Sorry, you cannot work with TCP without knowledge of sockets and networking.
@calbot Please explain "stream command bytes to redis". I have no idea what you mean.
crazyheinz
@crazyheinz
hi, I'm trying to install aiohttp on my macOS and I get the error: Package u'aiohttp' requires a different Python: 2.7.16
I'm a noob in Python and don't know how to fix this
Kai Blin
@kblin
If you're on a mac, first get a non-broken Python install
I'd suggest installing homebrew https://brew.sh/ and grabbing python3 from there
probably also install a non-broken python2 for good measure; the version OSX ships with is unusable
oocydo
@oocydo
@asvetlov So there is no library that can handle my problem in a few lines of code while staying high-level?
Kai Blin
@kblin
@oocydo https://gist.github.com/giefko/2fa22e01ff98e72a5be2 is the first google hit, maybe see what's happening there?
you might want to look into using sendfile or some other zero copy API to bypass userspace if you want higher performance
oocydo
@oocydo
@kblin That gist has the server sending files to multiple clients, not vice versa
Kai Blin
@kblin
sure, just turn around the logic then
I personally wouldn't use Python for this, but I think I'd also just use some existing protocol/servers
but if you basically just want a server that sends raw bytes at any client connecting, that should be straightforward
oocydo
@oocydo
@kblin
Client:
Should send thousands of very large blobs to the server

Server:
Should save files coming from clients to disk. There might be multiple clients connected and transferring files at once

And it should be implemented in Python. So does the gist you sent above fit my needs?

As there are thousands of blobs and their size might be well over 100 MB, the server needs to support asynchronous I/O
oocydo
@oocydo
The gist is not asynchronous. Will just running it in another thread be okay?
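A minimal asyncio sketch of the kind of receiving server discussed above, assuming one output file per connection; the file naming and the inline blocking disk writes are simplifications:

import asyncio

CHUNK_SIZE = 64 * 1024  # receive in chunks, never the whole file at once

async def handle_client(reader, writer):
    # Each client connection runs in its own coroutine, so many
    # transfers can be in flight concurrently without threads.
    host, port = writer.get_extra_info("peername")
    with open("upload_{}_{}.bin".format(host, port), "wb") as f:
        while True:
            chunk = await reader.read(CHUNK_SIZE)
            if not chunk:  # client closed the connection
                break
            f.write(chunk)  # blocking disk write; fine for a sketch
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())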
Wasdf
@waasdf
Hey! I just have one question about how to manage middlewares correctly: is it possible to apply a middleware only to specific handlers? Currently I'm checking the JWT token to deny unauthorized requests, but I want to make the Swagger API documentation public, without requiring the JWT token
I can manually detect where the request comes from first and then pass it to another function/middleware or directly to the handler, but I think that's not the best approach ;/
Oleh Kuchuk
@hzlmn
You can add something like a whitelist to your middleware that passes through routes like /swagger, for example.
Or you can use https://github.com/hzlmn/aiohttp-jwt which already has this feature ready for you ;)
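A rough sketch of the whitelist idea, assuming aiohttp 3.x middlewares; PUBLIC_PREFIXES and verify_jwt are hypothetical names:

from aiohttp import web

# Hypothetical whitelist of public route prefixes
PUBLIC_PREFIXES = ("/swagger", "/docs")

@web.middleware
async def jwt_middleware(request, handler):
    # Let whitelisted (public) routes through without a token
    if request.path.startswith(PUBLIC_PREFIXES):
        return await handler(request)
    token = request.headers.get("Authorization")
    if not token or not verify_jwt(token):  # verify_jwt: hypothetical helper
        raise web.HTTPUnauthorized()
    return await handler(request)

app = web.Application(middlewares=[jwt_middleware])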
Wasdf
@waasdf
wow, thanks!
Aviram Hassan
@aviramha
Regarding aioredis - What is the difference between multi-exec and pipeline?
Andrew Svetlov
@asvetlov
pipeline doesn't execute the MULTI/EXEC Redis commands; it just sends the next plain command without waiting for the answer to the previous one.
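A small sketch of the difference, assuming the aioredis 1.x API:

import asyncio
import aioredis  # assuming aioredis 1.x

async def main():
    redis = await aioredis.create_redis_pool("redis://localhost")

    # pipeline(): batches commands without waiting for each reply,
    # but does NOT wrap them in MULTI/EXEC -- no atomicity guarantee.
    pipe = redis.pipeline()
    pipe.set("key", "1")
    pipe.incr("counter")
    await pipe.execute()

    # multi_exec(): same batching, but wrapped in MULTI/EXEC so the
    # server applies the commands atomically.
    tr = redis.multi_exec()
    tr.set("key", "2")
    tr.incr("counter")
    await tr.execute()

    redis.close()
    await redis.wait_closed()

asyncio.run(main())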
oocydo
@oocydo
Hello chat! Is there some example of a server for asynchronous file uploading over HTTP?
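No example follows in the chat, but a minimal aiohttp sketch in the spirit of the docs' multipart handling might look like this; the /upload route and the single file field per request are assumptions:

from aiohttp import web

async def handle_upload(request):
    # Read the multipart body field by field, chunk by chunk,
    # so large files never need to fit in memory
    reader = await request.multipart()
    field = await reader.next()
    size = 0
    with open(field.filename, "wb") as f:
        while True:
            chunk = await field.read_chunk()  # 8192 bytes by default
            if not chunk:
                break
            size += len(chunk)
            f.write(chunk)
    return web.Response(text="{} bytes saved".format(size))

app = web.Application()
app.add_routes([web.post("/upload", handle_upload)])
web.run_app(app)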
Aviram Hassan
@aviramha
Thanks @asvetlov
Joongi Kim
@achimnol
um.. is it safe to call a native async function from a coroutine-based async function (one decorated with @asyncio.coroutine that uses yield from)?
I need to make a library call my native async function, and the library uses coroutine-based async functions...
It raises a runtime error saying my coroutine was never awaited.
A toy example with just two coroutine functions (native / generator) works fine, with a deprecation warning... hmmm
Andrew Svetlov
@asvetlov
Yes, it's safe. The deprecation warning was added recently, in Python 3.8, to encourage migration
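A toy example of the pattern described above (works on Python 3.4+; 3.8 adds the DeprecationWarning for the decorator):

import asyncio

async def native():
    await asyncio.sleep(0.1)
    return 42

@asyncio.coroutine  # Python 3.8+ warns that this decorator is deprecated
def legacy():
    # yield from works fine on a native coroutine object
    result = yield from native()
    return result

loop = asyncio.get_event_loop()
print(loop.run_until_complete(legacy()))  # 42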
jitunair18
@jitunair18
Hi folks, I am trying to use aiohttp to do a GET request X number of times. I notice that every time I run it the performance varies hugely. Sometimes the work takes 10Y seconds to complete, and Y on the very next run; try again after some time and it goes back to 10Y seconds. I don't see any such variation if I just run it with regular requests in a synchronous fashion.
Andrew Svetlov
@asvetlov
Sorry, I can't tell what's going on on your side remotely. The client tracing API can help with analyzing where the time is spent: http://docs.aiohttp.org/en/stable/client_advanced.html#client-tracing
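A minimal tracing sketch using aiohttp's TraceConfig signals; the printed messages and target URL are illustrative:

import asyncio
import time
import aiohttp

async def on_request_start(session, ctx, params):
    ctx.start = time.monotonic()

async def on_connection_create_end(session, ctx, params):
    # fires only when a brand-new connection is opened
    print("Connection created")

async def on_connection_reuseconn(session, ctx, params):
    # fires when a pooled connection is reused instead
    print("Connection reused")

async def on_request_end(session, ctx, params):
    print("Request took {:.3f}s".format(time.monotonic() - ctx.start))

async def main():
    trace_config = aiohttp.TraceConfig()
    trace_config.on_request_start.append(on_request_start)
    trace_config.on_connection_create_end.append(on_connection_create_end)
    trace_config.on_connection_reuseconn.append(on_connection_reuseconn)
    trace_config.on_request_end.append(on_request_end)
    async with aiohttp.ClientSession(trace_configs=[trace_config]) as session:
        async with session.get("https://example.com") as resp:
            await resp.text()

asyncio.run(main())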
jitunair18
@jitunair18
thanks @asvetlov let me try that and get back
jitunair18
@jitunair18
run test order details:test_async_order_details1
Starting request
Connection created
run test order details:test_async_order_details10
Starting request
Connection created
@asvetlov I see that connections are getting created each time for every task, even though I am trying to reuse the same session. I was assuming that signal would fire only once when sessions get reused.
Andrew Svetlov
@asvetlov
do you use https connections?
jitunair18
@jitunair18
@asvetlov yes we do; authentication is through a username and API key passed in the request headers
async def test_async_order_details1(session):
    print("run test order details:{}".format(inspect.stack()[0][3]))
    search_index_route = "{}/endpoint".format(api_host)  # placeholder path
    headers = {'Content-type': 'application/json', "Accept": "text/json", "username": "sample",
               "apikey": "sample"}
    async with session.get(search_index_route, headers=headers) as resp:
        data = await resp.text()
        return data
Sample client; imagine 100 such test tasks. On one run, await resp.text() takes a long time to process with all the requests queued up; on the next run it's much, much faster than requests.
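For reference, a sketch of how 100 such tasks might be gathered on one shared session (test_async_order_details1 is taken from the snippet above):

import asyncio
import aiohttp

async def main():
    # One shared session, so the connection pool and any TLS
    # handshakes can be reused across all 100 tasks
    async with aiohttp.ClientSession() as session:
        tasks = [test_async_order_details1(session) for _ in range(100)]
        results = await asyncio.gather(*tasks)
        print(len(results), "responses")

asyncio.run(main())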
jitunair18
@jitunair18
100 such synchronous tasks with requests take about 27-28 seconds (consistently); the same amount of work with aiohttp takes 9 seconds on one run, then 150 seconds on the next
def test_order_details():
    print("run test order details:{}".format(inspect.stack()[0][3]))
    search_index_route = "{}/path".format(api_host)
    resp = session1.get(search_index_route,
                        headers={'Content-type': 'application/json', "Accept": "text/json",
                                 "username": "sample", "apikey": "sample"})
    print(resp.text())