jtrakk
@jtrakk
@belm0 Very cool! Thanks for sharing this
John Belmonte
@belm0
no, it's using the trio example server from h11 :-p
(best I could do 1 year ago when I wrote it)
unlikely this code could be used unmodified in another project, which is why it's labeled proof-of-concept
Nathaniel J. Smith
@njsmith
@belm0 ahh heh :-) makes sense
John Belmonte
@belm0
one nice thing about this approach is that the draw() method you implement is sync, so with respect to concurrency it's working with an atomic snapshot of your program's state
Kyle Lawlor
@wgwz
just discovered this project today and thought it might be interesting to try to create some trio documentation with: https://mybinder.readthedocs.io/en/latest/using.html#generating-interactive-open-source-package-documentation
Nathaniel J. Smith
@njsmith
@wgwz oh yeah, binder is pretty cool! (I used to work with the folks working on it.) maybe it would be useful for our hypothetical future "cookbook" documentation? it does require using jupyter notebooks, and I'm not sure if our users are familiar with those or how much work it would take to integrate them into our sphinx docs. And they take like ~30 seconds to start, so it's not something you can use to easily fiddle with examples directly while reading the docs.
Peter Sutton
@dj-foxxy

What's the easiest way to run a bunch of tasks in a given order with at most n in parallel?

async def main():
    async with trio.open_nursery() as nursery:
        for i in range(1000):
            nursery.start_soon(slow_lookup, i)

In the example, other parts of the code are awaiting for a specific value j to be mapped to an i. Lower values of i are much more likely to map to any j, so I'd like to start them in order. Does that make sense?

Matthias Urlichs
@smurfix
It does. You probably want a CapacityLimiter.
You also want to use await nursery.start(), so that the initial step of slow_lookup can tell the spawning loop when it may proceed with the next i.
Peter Sutton
@dj-foxxy
Thanks @smurfix , that's what I'll use. Seems to work well, same speed to complete all lookups but everything is mapped almost instantly now (len(js) <<< len(is)).
(I'm scanning a network looking for a small set of MACs)
Kyle Lawlor
@wgwz

@njsmith yeah, that part in particular caught my eye. "interactive open source documentation" sounds great. hmm, yeah those are good points. although i do think the design of jupyter notebooks is pretty user-friendly. i think that even in the case of unfamiliarity with jupyter and low-level of experience with python, the barrier to figuring out how to use it would be fairly low.

to address all the points you made which are valid and i don't disagree with, perhaps the ~30 seconds to load up the notebook isn't that bad. i'm going to assume that most users who would be interested in using these are users with less experience in trio than those who already know a lot about async python. when i read through the trio documentation, it takes me a while to get through. i think that is primarily a result of the density of topics. so this means for people like myself, i will already commit an hour or two to reading the documentation at a time.

so i wouldn't mind opening a new tab for the examples i'm reading through and having to wait for it to load. it also makes it possible for me to study how trio is working outside of my typical development environment (my laptop). for example i could be at the library and be able to use the browser at the library.
Nathaniel J. Smith
@njsmith
@wgwz yeah, the core idea of having interactive examples in the docs sounds really cool. There are just a bunch of details and I'm having trouble imagining how they would all work out
I guess in addition to binder, there are other services that do interactive programming in your browser, like repl.it and glitch and probably others I don't know about. I wonder if any of them are particularly good for this use case.
Dave Hirschfeld
@dhirschfeld
Have you come across https://github.com/minrk/thebelab? It seems to be aimed at this use case but I haven't used it myself so couldn't say how well it fits the bill...
Nathaniel J. Smith
@njsmith
Hah, of course Min has gone and implemented exactly the thing we were talking about
I wonder if it's ready for real use
It looks like the live REPL on python.org uses https://www.pythonanywhere.com/
Nathaniel J. Smith
@njsmith
huh, I think this might be the first time guido has posted on the trio tracker. I wonder how he found the thread :-)
aratz-lasa
@aratz-lasa
Awesome!
Benjamin Kane
@bbkane
Hello - is there a list of companies/projects using Trio in production? https://github.com/python-trio/trio/wiki/Testimonials is mostly people, not projects/companies. Is it being used for PyPI?
oakkitten
@oakkitten
regarding #1208, why is Queue not considered a candidate to replace Channel?
oakkitten
@oakkitten
Tube, Hose, Pipe, these all transfer liquid-ish substances, and i agree that Channel is similar, but one wouldn't say a Queue of water
Nathaniel J. Smith
@njsmith
@oakkitten Queue also has a big conflict with queue.Queue and asyncio.Queue unfortunately...
oakkitten
@oakkitten
what kind of a conflict?
Nathaniel J. Smith
@njsmith
(we used to have a trio.Queue in fact, and channels were originally created to replace it, before we realized that they were a more general concept. So I guess that history makes it not the first thing that comes to mind :-))
The same kind of issue as we're talking about with asyncio.Stream: the interface we want is different and incompatible with the stdlib thing, so we probably don't want to use the same name
Justin Turner Arthur
@JustinTArthur
I’m curious why both asyncio and node.js examples have you awaiting drain after write instead of before.
oakkitten
@oakkitten
i see now, thanks
Nathaniel J. Smith
@njsmith
new "good first issue", in case anyone is looking for one: python-trio/snekomatic#22
John Belmonte
@belm0
Quentin Pradet
@pquentin
@bbkane https://github.com/groove-x/ is a company
See
argh mobile gitter
See the link above my message
Quentin Pradet
@pquentin
@bbkane also, yes, I think PyPI is using https://github.com/pypa/linehaul in production, and https://github.com/HyperionGray is using trio
@belm0 nice slides!
Nathaniel J. Smith
@njsmith
IIRC pypi is no longer using the trio-based linehaul – they used it for a few years and were really happy, but then their logging infrastructure changed and no longer needed a linehaul-shaped piece
Davide Rizzo
@sorcio
To be fair stream-of-objects reminds me more of conveyor belts than fluid flow, although I like the hydraulics metaphor of flow, pressure, etc
Will Clark
@willcl-ark
Hello. I have used Trio to make a TCP proxy, where the aim is to route the data over a mesh network and then out to the wider internet... It's all working OK, however I don't think I am handling larger streams of data properly: I can't see the receive buffer size in Trio, but I suspect that I am sending out via my external link before the entire stream is received. My example code is here: https://bpaste.net/show/em1Q . I suspect that L37-L45 is the part that is sending before the entire stream is received... So my question is, is trio automatically waiting on the full stream during for data in server_stream or is this receiving a fixed amount and do I need to add code in here to decipher the stream length and buffer it before sending onwards?
Tim Stumbaugh
@tjstum_gitlab
@willcl-ark one of the great things about trio is that there aren't really any hidden buffers. nothing is actually read from the underlying TCP connection until you call receive_some. the async for loop that the Stream interface offers is just a convenient shorthand for writing a loop sort of like this:
while True:
    data = await conn.receive_some()
    if not data:
        break
    # do something with data
TCP itself doesn't offer any assurances that one party's "send" lines up with the other party's "receive"
and there aren't any message boundaries that come "for free." so you can definitely see cases where it looks like you get "part of" one message in one call to receive_some and then the "rest of" the message in the next call (plus maybe part of the subsequent message!)
The usual way to handle this sort of situation is to have a parser, or something that's keeping track of how much data (how many bytes) there "should be"
Tim Stumbaugh
@tjstum_gitlab
and in your case, not calling send_jumbo until you have determined that you have a complete message. One very customary approach is to send a 4-byte length prefix, followed by the bytes. That way, the receiver can figure out where one "message" ends.
So I guess this is all a very long-winded way of saying "you need to add code to decipher the stream length and buffer"
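The 4-byte length prefix plus "keep track of how many bytes there should be" approach @tjstum_gitlab describes can be sketched as a small framing layer. This is an illustrative sketch, not code from the bpaste link; the names frame and FrameParser are made up. You'd feed every chunk that comes out of receive_some into the parser and only pass complete messages onward:

```python
import struct


def frame(payload: bytes) -> bytes:
    """Prefix a payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload


class FrameParser:
    """Accumulates raw TCP chunks and yields only complete messages."""

    def __init__(self):
        self._buf = bytearray()

    def feed(self, data: bytes) -> list[bytes]:
        """Add one receive_some() chunk; return any messages now complete."""
        self._buf += data
        messages = []
        while len(self._buf) >= 4:
            (length,) = struct.unpack_from(">I", self._buf)
            if len(self._buf) < 4 + length:
                break  # the rest of this message hasn't arrived yet
            messages.append(bytes(self._buf[4 : 4 + length]))
            del self._buf[: 4 + length]
        return messages
```

In the proxy from the question, that would mean calling something like send_jumbo only for each complete message feed() returns, rather than for each raw chunk the async for loop yields.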
Will Clark
@willcl-ark
Thanks @tjstum_gitlab that's very helpful. So are you saying that the async for data in server_stream: on L37 is waiting until the stream ends on its own (as you have in your pseudocode)? because that seems to conflict with the second part of your response :)
Tim Stumbaugh
@tjstum_gitlab
Each time you go through the for loop, you get "some amount of the data." If the stream has completely "finished" (usually because your peer called .close or the network broke or something), then the for loop terminates
So yes, I think...