Kyle Lawlor
@wgwz
so i wouldn't mind opening a new tab for the examples i'm reading through and having to wait for it to load. it also makes it possible for me to study how trio is working outside of my typical development environment (my laptop). for example i could be at the library and be able to use the browser at the library.
Nathaniel J. Smith
@njsmith
@wgwz yeah, the core idea of having interactive examples in the docs sounds really cool. There are just a bunch of details and I'm having trouble imagining how they would all work out
I guess in addition to binder, there are other services that do interactive programming in your browser, like repl.it and glitch and probably others I don't know about. I wonder if any of them are particularly good for this use case.
Dave Hirschfeld
@dhirschfeld
Have you come across https://github.com/minrk/thebelab? It seems to be aimed at this use case, but I haven't used it myself so I couldn't say how well it fits the bill..
Nathaniel J. Smith
@njsmith
Hah, of course Min has gone and implemented exactly the thing we were talking about
I wonder if it's ready for real use
It looks like the live REPL on python.org uses https://www.pythonanywhere.com/
Nathaniel J. Smith
@njsmith
huh, I think this might be the first time guido has posted on the trio tracker. I wonder how he found the thread :-)
aratz-lasa
@aratz-lasa
Awesome!
Benjamin Kane
@bbkane
Hello - is there a list of companies/projects using Trio in production? https://github.com/python-trio/trio/wiki/Testimonials is mostly people, not projects/companies. Is it being used for PyPI?
oakkitten
@oakkitten
regarding #1208, why is Queue not considered a candidate to replace Channel?
oakkitten
@oakkitten
Tube, Hose, Pipe, these all transfer liquid-ish substances, and i agree that Channel is similar, but one wouldn't say a Queue of water
Nathaniel J. Smith
@njsmith
@oakkitten Queue also has a big conflict with queue.Queue and asyncio.Queue unfortunately...
oakkitten
@oakkitten
what kind of a conflict?
Nathaniel J. Smith
@njsmith
(we used to have a trio.Queue in fact, and channels were originally created to replace it, before we realized that they were a more general concept. So I guess that history makes it not the first thing that comes to mind :-))
The same kind of issue as we're talking about with asyncio.Stream: the interface we want is different and incompatible with the stdlib thing, so we probably don't want to use the same name
Justin Turner Arthur
@JustinTArthur
I’m curious why both asyncio and node.js examples have you awaiting drain after write instead of before.
oakkitten
@oakkitten
i see now, thanks
Nathaniel J. Smith
@njsmith
new "good first issue", in case anyone is looking for one: python-trio/snekomatic#22
John Belmonte
@belm0
Quentin Pradet
@pquentin
@bbkane https://github.com/groove-x/ is a company
See
argh mobile gitter
See the link above my message
Quentin Pradet
@pquentin
@bbkane also, yes, I think PyPI is using https://github.com/pypa/linehaul in production, and https://github.com/HyperionGray is using trio
@belm0 nice slides!
Nathaniel J. Smith
@njsmith
IIRC pypi is no longer using the trio-based linehaul – they used it for a few years and were really happy, but then their logging infrastructure changed and no longer needed a linehaul-shaped piece
Davide Rizzo
@sorcio
To be fair stream-of-objects reminds me more of conveyor belts than fluid flow, although I like the hydraulics metaphor of flow, pressure, etc
Will Clark
@willcl-ark
Hello. I have used Trio to make a TCP proxy, where the aim is to route the data over a mesh network and then out to the wider internet... It's all working OK, however I don't think I am handling larger streams of data properly: I can't see the receive buffer size in Trio, but I suspect that I am sending out via my external link before the entire stream is received. My example code is here: https://bpaste.net/show/em1Q . I suspect that L37-L45 is the part that is sending before the entire stream is received... So my question is, is trio automatically waiting on the full stream during for data in server_stream or is this receiving a fixed amount and do I need to add code in here to decipher the stream length and buffer it before sending onwards?
Tim Stumbaugh
@tjstum_gitlab
@willcl-ark one of the great things about trio is that there aren't really any hidden buffers. nothing is actually read from the underlying TCP connection until you call receive_some. the async for loop that the Stream interface offers is just a convenient shorthand for writing a loop sort of like this:
while True:
    data = await conn.receive_some()
    if not data:
        break
    # do something with data
TCP itself doesn't offer any assurances that one party's "send" lines up with the other party's "receive"
and there aren't any message boundaries that come "for free." so you can definitely see cases where it looks like you get "part of" one message in one call to receive_some and then the "rest of" the message in the next call (plus maybe part of the subsequent message!)
The usual way to handle this sort of situation is to have a parser, or something that's keeping track of how much data (how many bytes) there "should be"
Tim Stumbaugh
@tjstum_gitlab
and in your case, not calling send_jumbo until you have determined that you have a complete message. One very customary approach is to send a 4-byte length prefix, followed by the bytes. That way, the receiver can figure out where one "message" ends.
So I guess this is all a very long-winded way of saying "you need to add code to decipher the stream length and buffer"
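[A minimal sketch of the 4-byte length-prefix framing Tim describes. The helper names `frame`/`unframe` and the big-endian header format are illustrative choices, not anything trio provides:]

```python
import struct

HEADER = struct.Struct(">I")  # 4-byte big-endian unsigned length prefix

def frame(payload: bytes) -> bytes:
    # Prepend the payload's length so the receiver knows where it ends.
    return HEADER.pack(len(payload)) + payload

def unframe(buf: bytearray):
    # Pop one complete message off the front of buf, or return None if
    # the full frame hasn't arrived yet (header or body still partial).
    if len(buf) < HEADER.size:
        return None
    (length,) = HEADER.unpack(buf[:HEADER.size])
    if len(buf) < HEADER.size + length:
        return None
    msg = bytes(buf[HEADER.size:HEADER.size + length])
    del buf[:HEADER.size + length]
    return msg
```

[In the receive loop you'd append each `receive_some` chunk to a `bytearray` and call `unframe` repeatedly until it returns None, only then going back to the network for more bytes.]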
Will Clark
@willcl-ark
Thanks @tjstum_gitlab that's very helpful. So are you saying that the async for data in server_stream: on L37 is waiting until the stream ends on its own (as you have in your pseudocode), because that seems to conflict with the second part of your response :)
Tim Stumbaugh
@tjstum_gitlab
Each time you go through the for loop, you get "some amount of the data." If the stream has completely "finished" (usually because your peer called .close or the network broke or something), then the for loop terminates
So yes, I think...
Will Clark
@willcl-ark
OK understood, so I need to parse the length of the stream, buffer it, and receive until I have the correct amount before sending, inside of that for loop
Tim Stumbaugh
@tjstum_gitlab
L45 seems to be immediately sending whatever recently arrived chunk you've gotten (after preceding it with the MAGIC value, of course)
Will Clark
@willcl-ark
yep, that's where I've gone wrong then
Do you happen to know how most people get the length of a regular TCP stream? is there something in trio or another library, or do I need to be struct.unpack-ing it manually?
Tim Stumbaugh
@tjstum_gitlab
python-trio/trio#796 is the relevant issue in trio here
I've used approaches ranging from a stateful parser ("sans I/O" is a relevant term here) to just a dumb function that tries to decode an entire bytestring in one shot (and returns None if it's incomplete)
the latter is simple to implement but accidentally quadratic
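[The "dumb one-shot decode" pattern Tim mentions might look like this, using JSON framing purely for illustration. The quadratic cost comes from re-parsing the entire accumulated buffer every time a new chunk arrives; note this sketch also conflates "incomplete" with "invalid":]

```python
import json

def try_decode(buf: bytes):
    # Try to parse the whole accumulated buffer as one JSON document.
    # Returns the decoded object, or None if the bytes so far don't
    # form a complete document yet.
    try:
        return json.loads(buf)
    except ValueError:
        return None
```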
NoskLo
@nosklo_gitlab
There is no way to get the length of a TCP stream. It has no predefined length. It could go on forever.
That's why you need a higher level protocol on top to do the framing
Nathaniel J. Smith
@njsmith
I guess if you only want to transmit one message, you could define your protocol so that it just sends all the data and then closes the connection, and the receiving side would handle it by just reading repeatedly until the connection is reported closed
That's how early versions of HTTP worked
But it's not a very popular approach, because usually you want to send multiple messages and you don't want to have to set up a new connection for each one
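[That close-the-connection-ends-the-message scheme is easy to sketch. `stream` here stands in for any object with trio's `ReceiveStream.receive_some` semantics, where an empty read means the peer closed:]

```python
async def receive_whole_message(stream) -> bytes:
    # Read until the peer closes its end; receive_some returning b""
    # signals EOF, so the concatenation of chunks is the full message.
    chunks = []
    while True:
        data = await stream.receive_some()
        if not data:
            break
        chunks.append(data)
    return b"".join(chunks)
```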