Alex Grönholm
@agronholm
I've added a deprecation warning to its README
yes
@ItsLiamCo_twitter
@njsmith ah alright thank you! I'm still highly biased
For trio that is, as I just never managed to find asyncio efficient in the same way trio is. That's why I think I'll stick to watching out for future httpx and asks commits rather than trying to move everything to aiohttp
Asks is great apart from a few issues such as proxy support and handling redirects, while httpx looks promising with its active maintenance
But I have yet to make httpx work with nurseries and the library itself has yet to implement proper parallel requests
Nathaniel J. Smith
@njsmith
I wouldn't worry about that "parallel requests" thing. I think most people don't really want their HTTP library implementing its own concurrency. You want your HTTP library to give you primitives to work with single requests, and then you want to use those from inside a more general concurrency library.
IIUC the "parallel requests" mentioned in their docs is an idea for implementing some basic concurrency in the library itself for users who for some reason can't use a "real" concurrency framework, but nobody else has anything like that and it's fine.
yes
@ItsLiamCo_twitter
Yea in the end I chatted a bit with
Florimond and he showed me that there was a request pool limit that wasn't explicitly explained in the docs, so I kept getting blank exceptions because I was trying to make too many requests at once with a session
But it's all good now
Nathaniel J. Smith
@njsmith
oh, that's good to know
Ryan Zeigler
@rzeigler

question, I'm sitting and reading the docs, and it says to not, as a general rule, pass a timeout parameter. Is there an idiomatic way of applying a timeout to the entry of a context manager block rather than all operations within it. I'm thinking specifically of something like

    with connect_to_something() as conn:
        await do_many_things_with_conn()

It's unclear to me how I would apply a cancellation scope to the conn context manager from calling code without passing in a timeout.

Nathaniel J. Smith
@njsmith
I guess it would be possible to define a helper that takes an arbitrary async context manager and applies a timeout to entering it
but yeah, if you specifically want timeouts on some sub-part of a larger operation and that needs to be integrated into the operation's logic, then passing in a timeout argument can make sense
Ryan Zeigler
@rzeigler
i think I conceptually like the idea of decorating the context manager better than passing in a timeout
Ryan Zeigler
@rzeigler
also, is there an equivalent to asyncio.gather?
Nathaniel J. Smith
@njsmith
not a direct equivalent
nurseries have the basic "wait for a bunch of stuff to finish" part built in, and then for collecting up the results it depends on what you want
(there are a bunch of details that vary depending on what you're doing... how do you want to keep track of which result goes with which operation? do you want them incrementally as they complete? if one of the worker tasks has an unhandled exception, what do you want to do?)
we probably will add some helpers eventually, maybe using https://github.com/python-trio/trimeter, but that hasn't happened yet
L. Kärkkäinen
@Tronic
Is httpx still able to re-use one TCP/QUIC connection for those parallel HTTP/2 or HTTP/3 requests if you just stick them into a nursery? Also, a nursery's return value handling (or lack thereof) still makes it quite impractical.
Ross Rochford
@RossRochford_twitter

@rzeigler It's also possible to do the following:

  • create a 'timeout' send/receive channel pair
  • launch a task that sleeps for N seconds, then sends a message on the timeout send channel
  • inside the nursery, wait for a message on the timeout receive channel and call cancel() on the nursery's cancel_scope when one arrives

Joshua Oreman
@oremanj
That seems like a rather roundabout way of saying nursery.cancel_scope.deadline = trio.current_time() + N :-)
Ross Rochford
@RossRochford_twitter
Ah, I did not know that this is possible : )
Nathaniel J. Smith
@njsmith
Is anyone up for replying to https://trio.discourse.group/t/develope-assembly-line/235 ? I've been meaning to but haven't had enough brain
Alex Grönholm
@agronholm
I'll give it a shot
Nathaniel J. Smith
@njsmith
@agronholm [♥ reaction]
Matt Ettus
@MattEttus
Is having a short-lived nursery within a task which is itself within a nursery a reasonable thing to do? Or is it better to add the subtasks to the app's main nursery?
Nathaniel J. Smith
@njsmith
@MattEttus "having a short-lived nursery within a task which is itself within a nursery" <-- that's exactly how nurseries are designed to be used
Lojack
@lojack5
Hey posting here 'cause I'm not sure where else to ask. I've been off-and-on working on running wxPython on top of trio, and would like some input. Mostly just about if I'm abusing trio nurseries, but also other criticism is welcome. Current work is here.
Tom Christie
@tomchristie
"IIUC the "parallel requests" mentioned in their docs is an idea for implementing some basic concurrency in the library itself for users who for some reason can't use a "real" concurrency framework, but nobody else has anything like that and it's fine." - This is worth digging into, yeah...
The parallel requests proposal in httpx (which might well just get dropped) was there for two reasons...
  • To allow existing threaded-concurrency frameworks to be able to make large numbers of requests concurrently. (Because it's using async under the hood, rather than spinning up a thread-per-request.)
  • Because asyncio at 3.8 is a bit naff, and still doesn't give you a nursery primitive, so providing folks with a sensible "we'll keep HTTP request tasks properly scoped for you and deal with failure cases in dependable ways" option seemed worthwhile.
Tom Christie
@tomchristie
The first one of those two was the main driver for the proposal. As it is, I'd expect it's probably something we'll just drop - it's not really needed. (And it's better that asyncio focuses on getting the basics in place, rather than us having to work around that on behalf of our users.)
@ItsLiamCo_twitter - "I have yet to make httpx work with nurseries" - That's surprising. Any chance you'd be able to reduce that down to a simple reproducible case, and open an issue around that?
András Mózes
@mozesa
@lojack5
@lojack5 hello, I have a
@lojack5 Have you seen PySimpleGui?
It has WX part as well. Earlier I used trio with PySimpleGui and it was really easy to work with.
Alex Grönholm
@agronholm
@tomchristie just to make sure, have you considered anyio's task groups?
Tom Christie
@tomchristie
That's a good point actually, I'll have to take a look at them properly sometime.
Alex Grönholm
@agronholm
if you are able to write the code to work against anyio, you get all three (trio, curio and asyncio) for "free"
Tom Christie
@tomchristie
It's somewhat moot for httpx, since we'd already done the "make this work on trio and asyncio" part by the time we knew about that. (We could perfectly well switch over to anyio, but it doesn't buy us a lot.) But the task groups specifically I'm interested in, since e.g. if I'm telling folks how to spawn tasks in an asyncio context, I really want to be able to point them at a sensible primitive like that.
I believe that Yury Selivanov has an asyncio TaskGroup implementation essentially complete, but it's not available yet.
Alex Grönholm
@agronholm
wouldn't that just add another dependency? if so, why not use anyio just for that?
Tom Christie
@tomchristie
"why not use anyio just for that" - I hadn't realised that anyio had a TaskGroups implementation, and yeah, I'm interested in it now, exactly because of that.
Nathaniel J. Smith
@njsmith
@tomchristie :wave: hey, good to see you, welcome
Btw folks, we have several PRs ready for review, both simple and not-so-simple :-)
Lojack
@lojack5
@mozesa while that looks interesting, it's not really feasible in my case. I'm working on a codebase of around 111K sloc, with about 13K sloc directly related to GUI stuff.