These are chat archives for dereneaton/ipyrad

25th May 2018
Saritonia
@Saritonia
May 25 09:15

Dear @isaacovercast and @dereneaton. Hope you are well. I am trying to run step 6 in ipyrad with a dataset of 294 paired-end GBS samples, but an unexpected error has occurred. The error message is below.

Begin run: 2018-05-22 17:30
Using args {'preview': False, 'force': False, 'threads': 2, 'results': False, 'quiet': False, 'merge': None, 'ipcluster': None, 'cores': 0, 'params': 'params-pseudocistus.txt', 'branch': None, 'steps': '4567', 'debug': False, 'new': None, 'MPI': False}
Platform info: ('Linux', 'nodo92', '3.10.0-327.3.1.el7.x86_64', '#1 SMP Wed Dec 9 14:09:15 UTC 2015', 'x86_64')
2018-05-23 11:34:56,769 pid=17476 [assembly.py] ERROR EngineError(Engine '4e8e9dbf-6a9d323c8723626a0c3969e6' died while running task u'3c29cf0d-ce6fb36b8b850fc16aef8ee2')

I contacted the supercomputing service where I am running the analysis, and they told me the error is not due to a lack of resources on the cluster. Do you have any idea about the cause of the problem and possible solutions? Let me know if you need additional information about the analysis settings. I look forward to your answer. Have a lovely day!!

Isaac Overcast
@isaacovercast
May 25 16:20
@pedroamandrade You shouldn't need to install cutadapt by hand, as the ipyrad conda install is totally self-contained. I would bet the cutadapt version you installed is incompatible with the one we use internally. Try conda remove cutadapt and then run ipyrad again and I bet it'll work. If that doesn't work, try conda install -c ipyrad cutadapt, which will install the version of cutadapt we host in the ipyrad anaconda channel.
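The fix described above, sketched as shell commands (the two conda commands come straight from the message; the re-run step depends on your own params file and assembly state):

```shell
# Remove the hand-installed cutadapt so ipyrad falls back to its bundled version.
conda remove cutadapt

# Re-run ipyrad; if it still fails, install the cutadapt build
# hosted in the ipyrad anaconda channel:
conda install -c ipyrad cutadapt
```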
@Saritonia Did the supercomputing service people tell you why they thought this wasn't a lack of resources? I've seen this error dozens of times before during step 6, and it is almost always because an ipyparallel engine died after RAM was exhausted. How many samples are you running? How much RAM are you allocating? I am almost certain this is a RAM issue. Be sure you are allocating at LEAST 4GB of RAM per core. If your dataset is "big" you'll need more than that, of course.
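As a config sketch of the 4GB-per-core advice, here is a hypothetical SLURM submission script (the scheduler, partition defaults, core count, and params filename are assumptions; adjust for your cluster — the key line is --mem-per-cpu):

```shell
#!/bin/bash
# Hypothetical SLURM job sketch: request at least 4 GB of RAM per core,
# as suggested above. A 16-core job therefore gets 64 GB total.
#SBATCH --job-name=ipyrad-step6
#SBATCH --ntasks=16
#SBATCH --mem-per-cpu=4G
#SBATCH --time=48:00:00

# Run step 6 with as many cores as the job was allocated.
ipyrad -p params-pseudocistus.txt -s 6 -c 16
```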
Felipe Zapata
@zapataf
May 25 19:39
Hi @dereneaton @isaacovercast I am trying to analyze a 3RAD data set (96 samples), but I am running into this error in step 3 when I run in debug mode: [cluster_within.py] WARNING Bad derephandle. I've never seen this issue before. Any ideas? Happy to provide the params file, data, etc.
Isaac Overcast
@isaacovercast
May 25 20:44
Does this only happen in debug mode?
Are you sure you're not running out of disk space?
Felipe Zapata
@zapataf
May 25 20:52
Hi Isaac, I am running in debug mode because without it I only get an error with no indication of where it comes from. Nope, it's not about disk space. I tried this on our HPC (2TB of space) and a local machine (about 600GB of space).