These are chat archives for dereneaton/ipyrad

11th
Apr 2016
Isaac Overcast
@isaacovercast
Apr 11 2016 13:25
Pushed a new commit to apply3 that uses the failure flag to test job completion:
    from ipyparallel import Dependency

    # watchdog: cleanup_and_die is only scheduled to run after all of the
    # concat jobs (tmpids) have failed; if they succeed it never runs
    concat_check_deps = Dependency(tmpids, failure=True, success=False)
    with lbview.temp_flags(after=concat_check_deps):
        concat_check = lbview.apply(cleanup_and_die, ["Concat"])
tmpids are the message ids of the concat jobs. This will run the cleanup_and_die function only if all of those jobs fail. If the jobs succeed it won't run, and if you then try concat_check.get() it'll throw an ImpossibleDependency exception.
cleanup_and_die() is still a dummy function at this point; it doesn't do anything but print().
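For reference, checking that watchdog result on the client could look something like the sketch below. Depending on the ipyparallel version the error may surface directly as ImpossibleDependency or wrapped in a RemoteError whose ename is "ImpossibleDependency", so this handles both:
    from ipyparallel import error

    if concat_check.ready():
        try:
            concat_check.get()      # watchdog ran, so the concat jobs failed
        except error.ImpossibleDependency:
            pass                    # concat jobs succeeded; watchdog never ran
        except error.RemoteError as e:
            if e.ename == "ImpossibleDependency":
                pass                # same thing, wrapped by the client
            else:
                raise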
Isaac Overcast
@isaacovercast
Apr 11 2016 17:45
I'm getting some traction on error checking for the new step3, but I want to check in with you about strategy before I code up a bunch of stuff. Let me know when you have a few minutes...
Deren Eaton
@dereneaton
Apr 11 2016 17:46
Now is good. What's up?
Isaac Overcast
@isaacovercast
Apr 11 2016 17:51
OK, so the code above will create a task that will only get run if any of the tasks in tmpids fails. Ideally I'd like to create a nanny task like this for every func in step3 so it'll die, let us know where, and also fetch some metadata from the failed tasks. My first attempt naively tried to throw an IPyradError inside cleanup_and_die, but obviously that doesn't work because it's getting run on an engine, so it doesn't throw to the main loop. I'm trying to figure out the most graceful way of killing ipyrad from inside an ipyclient, or another idea for how to check the results.
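Just to illustrate the behavior (not proposed code, and assuming IPyradError is importable on the engines): anything raised inside a function passed to lbview.apply() is captured on the engine and only re-raised back on the client when the result is fetched:
    def cleanup_and_die(msg):
        # this raise happens on the remote engine, not in the main ipyrad process
        raise IPyradError("step3 job failed: {}".format(msg))

    ar = lbview.apply(cleanup_and_die, ["Concat"])
    # ...the main loop keeps going, nothing is thrown here...
    ar.get()    # only now does the error surface, as a RemoteError on the client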
The other idea I had was gathering up all the asyncresults from all the "nanny" tasks and checking their results inside the while 1 loop in apply_jobs(). Seems more annoying, but would make it easier to shut down on failure.
Deren Eaton
@dereneaton
Apr 11 2016 18:07
Can't we have one cleanup_and_die command that can take all tmpids?
Isaac Overcast
@isaacovercast
Apr 11 2016 18:11
That is a capital idea!
Yeah... good idea. That should work.
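Something like this, maybe (just a sketch: all_tmpids would be the combined msg_ids from every step3 func, the *_ids names are placeholders, and passing all=False to Dependency should make the watchdog fire as soon as any one job fails instead of waiting for all of them):
    # one nanny task watching every step3 job
    all_tmpids = derep_ids + cluster_ids + align_ids    # placeholder lists of msg_ids
    die_dep = Dependency(all_tmpids, failure=True, success=False, all=False)
    with lbview.temp_flags(after=die_dep):
        nanny = lbview.apply(cleanup_and_die, ["step3"])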
Deren Eaton
@dereneaton
Apr 11 2016 18:17
Inside the function, I still don't know the best way to kill everything.
Isaac Overcast
@isaacovercast
Apr 11 2016 18:21
With just one asyncresult to check it's a bit easier: poll the status of the asyncresult, and when it returns something other than None you can handle it there. I think I have a good idea; I'll take a crack at it and let you know when it's ready, hopefully today...
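Roughly what I have in mind inside apply_jobs() (sketch; step3_results and the sleep interval are placeholders, nanny is the single watchdog from above, and this ignores the edge case where the nanny gets aborted as unreachable right as the last job finishes):
    import time

    while not all(ar.ready() for ar in step3_results):
        if nanny.ready():
            # the nanny only becomes runnable after a job fails, so if it
            # has finished we know something went wrong
            raise IPyradError("a step3 job failed; shutting down")
        time.sleep(1)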
Deren Eaton
@dereneaton
Apr 11 2016 18:25
Yeah, won't this function intercept jobs so that they don't continue down the queue? So we don't need to kill anything; printing the error ends execution on that sample. But it should check for things to clean up.
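So cleanup_and_die could end up being something like this (purely a sketch; the tmpdir argument and the *.tmp pattern are placeholders for whatever step3 actually leaves behind):
    import glob
    import os

    def cleanup_and_die(msg, tmpdir):
        # report the failure and remove leftover temp files so a rerun starts clean
        print("Error in step3: {}".format(msg))
        for tmpfile in glob.glob(os.path.join(tmpdir, "*.tmp")):
            os.remove(tmpfile)
        return msg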