These are chat archives for dereneaton/ipyrad

Sep 2017
Sep 25 2017 16:57

Hello all, I was able to run my assembly all the way to step 7, but looking at the -r output, all of my samples are still stuck at state 5 (except one sample with very few reads, which got filtered out at step 2). I looked at the step 6 error file and found the following:


DEBUG:ipyrad.core.assembly:Sample UMBM78642 not in proper state.
INFO:ipyrad.assemble.cluster_across:checkpoint = 0
INFO:ipyrad.assemble.cluster_across:substeps = [1, 2, 3, 4, 5, 6, 7]
INFO:ipyrad.assemble.cluster_across:building reads file -- loading utemp file into mem
INFO:ipyrad.assemble.cluster_across:starting alignments
INFO:ipyrad.assemble.cluster_across:multicat -- building full database
INFO:ipyrad.assemble.cluster_across:in the multicat
INFO:ipyrad.assemble.cluster_across:maxlen inside build_h5_array is 150
INFO:ipyrad.assemble.cluster_across:nloci inside build_h5_array is 616069
INFO:ipyrad.assemble.cluster_across:chunks in build_h5_array: 1512
INFO:ipyrad.assemble.cluster_across:nloci is 616069
INFO:ipyrad.assemble.cluster_across:chunks is 1512
ERROR:ipyrad.core.assembly:EngineError(Engine '42061c7d-64ae-4091-8ec0-da03cb64ec0c' died while running task u'e0c3a6e1-9927-4eb1-98a7-1a166084fb10')
ERROR:ipyrad.core.assembly:shutdown warning: [Errno 3] No such process
INFO:ipyrad:debugging turned off
Any idea what could be going wrong here? Thanks!

Ollie White
Sep 25 2017 17:16

Hello, I can use the ipyrad pipeline via the command line with no problem, but I'm having some trouble getting the API version to run on my university HPC. I am using an interactive session linked to my Jupyter notebook. I seem to have an error with ipcluster. See the output of the ipyrad_log.txt file below:

2017-09-25 17:10:45,386     pid=1457     []    ERROR         No ipcluster instance found. This may be a problem with your installation
    setup. I would recommend that you contact the ipyrad developers. 

2017-09-25 17:43:48,929     pid=10652     []    ERROR         No ipcluster instance found. This may be a problem with your installation
    setup. I would recommend that you contact the ipyrad developers.

Any thoughts on this would be appreciated. Cheers, Ollie

Deren Eaton
Sep 25 2017 17:21
Hi @Ollie_W_White_twitter, did you start an ipcluster instance? The simplest way, if your notebook is running on a node with several cores allocated to you, is to go to the Jupyter dashboard, select New Terminal, and then in the new terminal run ipcluster start. Then when you call .run() in ipyrad it will distribute work across the ipcluster cores.
We still need to add more documentation about ipcluster. If you just want to use cores on a single node (usually up to about 16 or 24), follow the instructions above. But if you want to connect to cores across multiple nodes using MPI, then you will need to start ipcluster from a job submission script. Unlike the CLI, which launches ipcluster automatically, we leave it to the user to start and stop ipcluster when using the API because it gives you a little more control.
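The terminal steps Deren describes can be sketched as a short snippet (the engine count and sleep duration are placeholders; match them to the cores actually allocated to your session):

```shell
# In a terminal opened from the Jupyter dashboard (New -> Terminal):
ipcluster start --n 8 --daemonize   # 8 is a placeholder; use your core count
sleep 30                            # give the engines time to register
# ...now calling .run() in the notebook will distribute work across them...
# when the job is finished, shut the engines down:
ipcluster stop
```

This is a job-session fragment rather than a standalone script; the --daemonize flag keeps ipcluster running in the background so the terminal can be closed.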
Hi @emhudson, it's possible that one of the engines died because it hit a memory error. Your data set looks pretty big, so it's possible. Do you know how much memory you allocated to the job?
Ollie White
Sep 25 2017 17:29
Cheers @dereneaton, that got it working now, thank you! I didn't realise I needed to get ipcluster running beforehand. Is it okay to run the API pipeline across a number of different interactive HPC sessions? Using qsub scripts, the clustering step took over 24 hours, so I thought it would be good to break it up a bit.
Sep 25 2017 18:21

Hi @dereneaton, here's my complete srun file for step six:

#!/bin/bash
#SBATCH --qos=normal
#SBATCH --time=7-00:00:00
#SBATCH --verbose                    ### Verbosity logs error information into the error file
#SBATCH --job-name=JGBS6             ### Job Name
#SBATCH --ntasks=120
#SBATCH --mem=16000
#SBATCH --output=jasonsGBS6.output.out
#SBATCH --error=jasonsGBS6.error.err
#SBATCH --mail-type=ALL

module load anaconda/2.7
source activate ipyrad

export HOME=/work/shizuka/ehudson3

ipcluster start --n 240 --engines=MPI --ip=* --daemonize
sleep 240
ipyrad -p params-jasonsGBS.txt -s6 -d -c 240 --ipcluster

(Sorry for the (lack of) formatting!)
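A back-of-envelope check on the script above (a sketch, assuming SLURM's --mem is a per-node figure in MB): dividing the allocation across the requested engines gives the rough per-engine memory headroom, which is relevant to the EngineError earlier in this thread.

```python
# Per-engine memory share for the job above: the SBATCH --mem=16000 (MB)
# allocation divided across the 240 ipcluster engines that were started.
mem_mb = 16000        # from the SBATCH --mem=16000 line
n_engines = 240       # from ipcluster start --n 240
per_engine_mb = mem_mb / n_engines
print(f"~{per_engine_mb:.1f} MB per engine")  # prints "~66.7 MB per engine"
```

A share that small makes a memory-related engine death plausible, as Deren suggests.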
Sep 25 2017 22:26
@isaacovercast @dereneaton Hi Isaac and Deren. I'm getting ready to switch from V0.5.15 to the most recent version, in part to fix the vcf coding issues I ran into that we were talking about last week. I also ran into another issue: I just ran out of disk space on step 6, indexing clusters (at about 30% complete). The project directory got to almost 700GB, and the input sample directory is 230GB. (I am running this on a cluster with -c 30 -t 2 and about a TB of disk space.) My input .fq files were not gzipped, so I am going to gzip those to save space... but does the code in the latest version use less disk space than V0.5.15? I saw notes about better clean-ups, but wasn't sure what that meant or how big a difference it would make.

Also on space saving: I usually run steps 1,2,3 as one job and then 4,5,6,7 as a second job. In V0.5.15 I get an error when running steps 4,5,6,7 if ipyrad can't find the raw fastq path. Do all steps still require access to the directory with the raw fastq files? If not, I won't copy it over to the execute node for later steps, which would save time and disk space.

I have 315 samples, averaging about 500MB apiece. Zipped into one tarball they are about 50GB (unzipped, the 230GB mentioned above). Any guesses as to the max disk space this would use in the latest version? The raw reads are already filtered/trimmed, FYI. Thanks!!
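For the gzip step, a minimal sketch (the demo directory and file names here are made up for illustration; in practice you would point gzip, or pigz for parallel compression, at your raw fastq directory):

```shell
# Create a throwaway one-read fastq and compress it in place;
# gzip replaces sample.fq with sample.fq.gz and frees the original.
mkdir -p demo_raws
printf '@r1\nACGT\n+\nIIII\n' > demo_raws/sample.fq
gzip demo_raws/sample.fq
ls demo_raws
```

ipyrad generally reads gzipped fastq input directly, so compressing the raws should not require anything beyond updating the fastq path glob if it names a bare .fq extension.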