These are chat archives for ipython/ipython

23rd Mar 2015
Jason Grout
@jasongrout
Mar 23 2015 17:39
@minrk - we have an ioloop question. Is something like this possible?
x = f()
print x
where f() is a function that yields back to the ioloop for some asynchronous, long processing, before returning a value.
Min RK
@minrk
Mar 23 2015 17:41
With coroutines, yes
x = yield f()
nbviewer, jupyterhub are full of them. Not IPython, though.
Do you want to do this in a cell? If so, there's a catch.
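A minimal sketch of that coroutine pattern (illustrative only, assuming tornado's gen.coroutine; f and main are made-up names, and gen.sleep stands in for real async work):

from tornado import gen
from tornado.ioloop import IOLoop

@gen.coroutine
def f():
    # yield control back to the IOLoop while the slow work happens
    yield gen.sleep(1)
    raise gen.Return(42)  # Python-2-compatible "return" from a coroutine

@gen.coroutine
def main():
    x = yield f()  # reads sequentially, but does not block the IOLoop
    print(x)

IOLoop.current().run_sync(main)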
Sylvain Corlay
@SylvainCorlay
Mar 23 2015 17:43
@jasongrout noo don't do that
just because something is possible doesn't mean we should do it :)
Jason Grout
@jasongrout
Mar 23 2015 17:46
@minrk - I'm thinking of something more like the parallel things, I think. I'd like it to appear to be blocking, in the cell, but in reality be asynchronous, as far as the event loop is concerned.
Sylvain Corlay
@SylvainCorlay
Mar 23 2015 17:46
:fire:
Min RK
@minrk
Mar 23 2015 17:47
That works in the parallel code because the eventloop is in another (C++) thread.
It doesn't work with ioloop, because it's all in one thread.
Jason Grout
@jasongrout
Mar 23 2015 17:47
ah, hmmm.
Min RK
@minrk
Mar 23 2015 17:48
To do it with a single-thread eventloop, the async bits must be explicit.
Or gevent-style, where they are implicit and hidden, which is the worst of all.
Jason Grout
@jasongrout
Mar 23 2015 17:49
how does it work with the parallel stuff in another thread? So basically you have two event loops running?
Min RK
@minrk
Mar 23 2015 17:49
The other thread is a C++ thread inside libzmq.
No other Python code is running.
Oh, wait. You could totally do it the same way.
What operations are you waiting on?
Jason Grout
@jasongrout
Mar 23 2015 17:50
For an example, say I'm doing a parallel computation and want to wait for the result.
Min RK
@minrk
Mar 23 2015 17:50
It cannot be things on comms, or anything via the existing IPython channels, but if it's other activities (external APIs, etc.), then you can.
Jason Grout
@jasongrout
Mar 23 2015 17:51
yep, no comms, nothing on the existing socket / websocket channel
Min RK
@minrk
Mar 23 2015 17:51
ok, perfect. And it's tornado IOLoop that you are using?
Jason Grout
@jasongrout
Mar 23 2015 17:51
Or perhaps I'm waiting for disk io to finish, etc.
yes, well, we're using zmqshell
so basically, yes
Min RK
@minrk
Mar 23 2015 17:52
You aren't going to be using the main IOLoop, so what zmqshell uses doesn't matter.
Jason Grout
@jasongrout
Mar 23 2015 17:52
ah, you're asking what the second event loop is? Sure, tornado, trollius, or whatever
maybe even our own for loop :)
Min RK
@minrk
Mar 23 2015 17:52
Create another thread, and run a different eventloop there. Then you can wait with whatever API you like - Future, AsyncResult, etc.
That's basically how the parallel stuff works. Messages are coming in on a background thread, but when you ask to wait for a result, events propagate from the background thread to the Python one.
You can do more in the background thread if it's Python
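A rough sketch of that setup (here using an asyncio loop in the background thread instead of a second tornado IOLoop, just to keep the example short; background_loop, slow_work, and f are made-up names):

import asyncio
import threading

# dedicated event loop running in a daemon thread
background_loop = asyncio.new_event_loop()
threading.Thread(target=background_loop.run_forever, daemon=True).start()

async def slow_work():
    await asyncio.sleep(1)  # stands in for real async I/O
    return 42

def f():
    # hand the coroutine to the background loop, then block this thread
    # (e.g. the kernel's main thread) until its result is ready
    future = asyncio.run_coroutine_threadsafe(slow_work(), background_loop)
    return future.result()  # a concurrent.futures.Future

x = f()
print(x)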
Armin Burgmeier
@aburgm
Mar 23 2015 17:54
Can I, for example, send updates to a widget from the background thread?
Jason Grout
@jasongrout
Mar 23 2015 17:54

so I have

x = f()
print x

f() creates another thread, runs an event loop, which does its thing asynchronously. Then f() waits for the thread to finish before returning...

Min RK
@minrk
Mar 23 2015 17:54
Yup
Or you run a persistent background thread and hand off tasks to it, depending on the nature of what you are doing.
Jason Grout
@jasongrout
Mar 23 2015 17:55
right; hmm, duh, it seems rather obvious, I guess. Seems like pretty much the purpose of threads...
I guess the key piece I wasn't thinking about was blocking on thread completion in the main loop.
But the main IOLoop is still blocked.
Min RK
@minrk
Mar 23 2015 17:56
Yes
AsyncResult provides a decent API, and you can use multiprocessing.pool.ThreadPool to submit work to threads.
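For example, a short sketch (slow_task is a made-up stand-in for real work):

from multiprocessing.pool import ThreadPool

def slow_task(n):
    return n * n  # stands in for disk I/O, an external API call, etc.

pool = ThreadPool(4)  # a few worker threads
result = pool.apply_async(slow_task, (6,))  # returns an AsyncResult immediately
print(result.get())  # blocks the calling thread until the work is done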
Jason Grout
@jasongrout
Mar 23 2015 17:57
hmmm...probably no way to yield back to the main IOLoop while waiting for the thread to complete
right?
Min RK
@minrk
Mar 23 2015 17:57
Right, not without cooperating with the main thread. But that can't be done cleanly inside a cell.
Since yielding to the main loop means other cells can start executing.
Jason Grout
@jasongrout
Mar 23 2015 17:58
right...hmmm.
so if I am updating a widget (like @aburgm just asked about), which has to be done from the main thread because it's a message out over the session connection, that updating has to stop when a cell is executing, right?
otherwise we lose the synchronicity of cell execution
Min RK
@minrk
Mar 23 2015 18:00
yes, 100%
each update is essentially another tiny execution
Armin Burgmeier
@aburgm
Mar 23 2015 18:02
In an ideal world, we would be able to yield control back to the IPython event loop, but stop it from executing new cells until the current cell resumes and finishes
Min RK
@minrk
Mar 23 2015 18:06
That may be possible in the future. It would require significant changes in how messages and state are handled in the kernel, so it likely won't be soon.
Jason Grout
@jasongrout
Mar 23 2015 18:08
okay, thanks.
Min RK
@minrk
Mar 23 2015 18:09
This is a limitation of the IPython kernel, not one of the message protocol as a whole, so it is fixable.
Jason Grout
@jasongrout
Mar 23 2015 18:10
that's great to hear :)
Min RK
@minrk
Mar 23 2015 18:11
A change that shouldn't be too difficult is how we store the parent headers for associating results and side effects with the request that caused them.
Since it is guaranteed that there is always only one request processing at a time, they are stored in a class attribute on the kernel.
If that guarantee is removed, we would have to change how we associate parent messages with their descendants.
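A grossly simplified illustration of that bookkeeping (not the actual kernel code; ToyKernel and its methods are made up):

class ToyKernel(object):
    # one request at a time, so a single stored parent header is enough
    _parent_header = None

    def handle_execute_request(self, msg):
        ToyKernel._parent_header = msg['header']  # remember which request this is
        self.send('stream', {'text': 'hello\n'})
        self.send('execute_reply', {'status': 'ok'})

    def send(self, msg_type, content):
        # every output and reply is tagged with the stored parent header,
        # which is how the frontend routes it back to the right cell
        print(msg_type, content, 'parent:', ToyKernel._parent_header)

k = ToyKernel()
k.handle_execute_request({'header': {'msg_id': 'abc123'}})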
Jason Grout
@jasongrout
Mar 23 2015 19:17
oh, right. I was trying to think of where this "every cell blocks the event loop" assumption is used
Sylvain Corlay
@SylvainCorlay
Mar 23 2015 19:22
when you separate out traitlets, it could be a good occasion to do an aggressive PEP 8 pass
since it is going to invalidate PRs anyway
Jason Grout
@jasongrout
Mar 23 2015 19:30
typo PR fix of the day: ipython/ipython#8119 (should be really easy to review :)
Jason Grout
@jasongrout
Mar 23 2015 20:40
When I start an IPython notebook on a computer where another server is already running on the default :8888 port, it says "The port 8888 is already in use, trying another random port.". And then it always chooses port 8889. I think we need to check our random number generator. Perhaps we're using this one: https://xkcd.com/221/
Fernando Perez
@fperez
Mar 23 2015 21:06
it actually used to be random :) At some point the port choosing got changed, but nobody updated the message...
Christopher Dubeau
@ChrisDubeau
Mar 23 2015 21:16
I have been working on solving issue ipython/ipython#8089, but when I want to get to the image view I can't find the proper path. I am probably overlooking it in the simplest of areas, but if someone could push me in the right direction, that would be appreciated.
Matthias Bussonnier
@Carreau
Mar 23 2015 21:25
@jasongrout @fperez the first 3 tries are not random: if it cannot bind after 3 consecutive ports, it starts picking at random 5 times and then aborts (5 and 3 IIRC, but might be different).
This was requested by people that whitelist ranges of ports and don't want too much randomness ;-)
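Roughly this kind of fallback, sketched (not the notebook server's actual code; pick_port is a made-up helper):

import random
import socket

def pick_port(base_port=8888, sequential_tries=3, random_tries=5):
    candidates = [base_port + i for i in range(sequential_tries)]
    candidates += [random.randint(49152, 65535) for _ in range(random_tries)]
    for port in candidates:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(('127.0.0.1', port))
            return port  # free port found
        except socket.error:
            continue  # in use, try the next candidate
        finally:
            sock.close()
    raise RuntimeError('no free port found')

print(pick_port())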
Christopher Dubeau
@ChrisDubeau
Mar 23 2015 21:28
OK, I'm just blind today, sorry about that.
Fernando Perez
@fperez
Mar 23 2015 21:28
thanks for the clarification, @Carreau. Early on it was fully random (hence the msg), but indeed people complained.
mgunter6
@mgunter6
Mar 23 2015 22:10
Where can I find an example of using kernel clients in the IPython 3 API?
Specifically, I have code like the following, and I want to know the proper way to use the object returned from k.client().
from IPython.kernel.multikernelmanager import MultiKernelManager
m = MultiKernelManager()
id = m.start_kernel()
k = m.get_kernel(id)
c = k.client()
Thomas Kluyver
@takluyver
Mar 23 2015 22:44
@mgunter6 I'm not sure if there are very many examples, but take a look at the execute preprocessor in nbconvert - that uses it: https://github.com/ipython/ipython/blob/master/IPython/nbconvert/preprocessors/execute.py
mgunter6
@mgunter6
Mar 23 2015 23:00
Cool. Thanks. Is it possible something in MultiKernelManager isn't fully implemented in IPython 3? I find that when I use the code above, the client automatically sends a "kernel_info_request" message but does not read the response (perhaps because the code above yields an IOLoopKernelManager (for k) and I'm testing this using the IPython console, which, as I understand it, doesn't run an IOLoop).
Plus I'm having some issues where v3 of MultiKernelManager is restarting kernels after I call shutdown_kernel(id). I'll probably just use start_new_kernel directly and scrap my usage of MultiKernelManager.
Matthias Bussonnier
@Carreau
Mar 23 2015 23:05
the kernel_info request is there to know which version of the protocol the kernel speaks, I guess.
mgunter6
@mgunter6
Mar 23 2015 23:06

Understood. The issue is that I do this:

from IPython.kernel.multikernelmanager import MultiKernelManager
m = MultiKernelManager()
id = m.start_kernel()
k = m.get_kernel(id)
c = k.client()
c.execute('blah')
c.get_shell_msg()

The shell message received on the last line is the kernel_info_reply message, not the execute_reply I expect.
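One possible way to cope with the stray reply in the meantime, sketched against the snippet above: execute() returns the request's msg_id, so shell messages whose parent doesn't match it can be skipped.

msg_id = c.execute('blah')
while True:
    reply = c.get_shell_msg(timeout=10)
    if reply['parent_header'].get('msg_id') == msg_id:
        break  # this is the execute_reply for our request
print(reply['msg_type'])  # the unrelated kernel_info_reply was skipped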

Matthias Bussonnier
@Carreau
Mar 23 2015 23:07
Oh, I see.
mgunter6
@mgunter6
Mar 23 2015 23:09
This issue is new with IPython 3.
Matthias Bussonnier
@Carreau
Mar 23 2015 23:09
Do open a bug report; if it's not something to fix, it's at least something to document.
mgunter6
@mgunter6
Mar 23 2015 23:09
Where should I open a bug report?
Matthias Bussonnier
@Carreau
Mar 23 2015 23:12
The IPython repo seems like the right place.