These are chat archives for dereneaton/ipyrad

5th Oct 2017
congliu0514
@congliu0514
Oct 05 2017 00:07
@isaacovercast I used -b to create a new branch and re-ran ipyrad from step 4 with max_alleles_consens = 1
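A minimal sketch of the same branch-and-rerun workflow through ipyrad's Python API, assuming placeholder assembly and branch names (the CLI -b flag does the branching step directly):

import ipyrad as ip

# load the saved assembly and create a branch under a new name
data = ip.load_json("old_assembly.json")
new = data.branch("new_branch")

# change the parameter on the new branch only, then re-run steps 4 through 7
new.set_params("max_alleles_consens", 1)
new.run("4567")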
Ollie White
@Ollie_W_White_twitter
Oct 05 2017 13:32

Hi @isaacovercast, I think my disk space should be sufficient. I am running ipyrad on a compute node of our university HPC. Below is the tail of my .o file after running one of the scripts from steps 3 through 6.

Requested resource limits are neednodes=1:ppn=16,nodes=1:ppn=16,pmem=4000mb,walltime=48:00:00
Used resource limits are cput=00:13:07,mem=285480kb,vmem=1311520kb,walltime=03:05:44

Not sure what is happening, but it seems to work fine when I run each branch one at a time.

Isaac Overcast
@isaacovercast
Oct 05 2017 16:20
@rfolkert Yes, that's true. I've never tried setting these values to 0, but there's no reason that wouldn't work.
@Ollie_W_White_twitter Disk space may look fine, but sometimes clusters have quotas imposed per user. Not sure if that's the issue. So you got everything to run fine now?
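One quick check from the node itself is the free space the OS reports for the working directory; this is only a sketch and shows filesystem-level free space, so a per-user quota can still be the limit even when this number looks large:

import shutil

# total/used/free bytes for the filesystem holding the current working directory
usage = shutil.disk_usage(".")
print("free GB: %.1f" % (usage.free / 1e9))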
Isaac Overcast
@isaacovercast
Oct 05 2017 16:37
@congliu0514 I have no idea why this would happen. Can you email me the params files for both runs, and one of the .consens.gz files from the _consens directory for each run?
nspope
@nspope
Oct 05 2017 21:41
@eachambers that's an error from the HDF5 library (h5py is the Python package wrapping this library). As a first step, try reinstalling h5py through conda. You might also try opening a connection to a file with h5py to see if you can replicate the error outside of ipyrad ... for example, go to the <assemblyName>_across output folder, in which there should be a file called assemblyName.clust.hdf5. Then, in a Python REPL:
import h5py
# try opening the clust database read-only; if this raises the same
# HDF5 error, the problem can be reproduced outside of ipyrad
h5py.File("assemblyName.clust.hdf5", 'r')