These are chat archives for dereneaton/ipyrad

Aug 2017
Aug 21 2017 15:30
@dereneaton Deren, couple questions about using the ipyrad API. To start Jupyter, do I just submit the batch script (slurm_jupyter.sbatch) each time I want to reconnect? And what is the "profile" that I'm setting for ipcluster (and ipyrad)?
Aug 21 2017 16:41
@dereneaton I've been running ipyrad intensely on our HTC network for about a year. Long story short, we recently had to switch to having the working directory for ipyrad on the execute node instead of a large drive mounted to the HTC network. My job keeps failing and all signs point to running out of disk space, but both the ipyrad and HTC error messages are blank. Is ipyrad coded to print an error message if it runs out of disk space to write to? If definitely yes, then I might have other issues to hunt for, but if not, things make sense.
Deren Eaton
Aug 21 2017 16:44
@toczydlowski No, if you run out of disk while running ipyrad it will fail and not print a very coherent error message. It is a kind of difficult error to catch when working in parallel. Super easy to fix though. Just change your 'project_dir' in your ipyrad parameters to be in a scratch directory, or wherever it is that you have lots of space allocated to you.
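Since ipyrad won't report disk exhaustion clearly, one workaround is a quick pre-flight check of free space in the intended project_dir before launching an assembly. A minimal sketch using only the standard library (the directory path and the 1 GiB threshold here are placeholders, not values from the conversation):

```python
import shutil
import tempfile

def check_free_space(path, required_gb):
    """Return True if `path` has at least `required_gb` GiB free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3

# Example: verify a scratch directory (hypothetical path) before
# pointing ipyrad's 'project_dir' parameter at it.
project_dir = tempfile.gettempdir()
if not check_free_space(project_dir, 1):
    print(f"Warning: less than 1 GiB free in {project_dir}")
```

Running a check like this in the batch script before the assembly starts gives a readable warning instead of a blank failure mid-run.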
Aug 21 2017 19:20
@dereneaton Thanks Deren. Yeah, I've requested a lot more space (and memory) and resubmitted the job. Job matching and running takes a while with computing requests of this size - partly why I was hoping for a definitive message before going down this road - but hoping this does the trick! Glad to know my diagnosis still seems correct!