These are chat archives for dereneaton/ipyrad

4th
Dec 2016
Isaac Overcast
@isaacovercast
Dec 04 2016 00:21
Hi Mariana, if you are importing already-demultiplexed files then the duplicate barcodes don't matter. You can just have one barcodes file that includes the sample name and barcode of all the samples. Then you can point the sorted_fastq_path parameter to the location of your demultiplexed fastq files and run step 1. It should import the demux'd fastq files fine. You shouldn't have to worry about the R2 index, since it shouldn't be inline with the sequence, if I'm understanding correctly.
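A rough sketch of that setup (the folder and params-file names here are just placeholders, not anything from your run): set the sorted_fastq_path parameter in your params file to a glob that matches the demultiplexed files, leave raw_fastq_path empty, and run step 1.

    ./demux_fastqs/*.fastq.gz    ## [4] [sorted_fastq_path]: location of demultiplexed/sorted fastq files

    ## import the sorted fastqs
    ipyrad -p params-combined.txt -s 1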
Isaac Overcast
@isaacovercast
Dec 04 2016 00:38
@ksil91 Hm, that's tricky. There's not really a way to restart halfway through that step. Getting more walltime would be one way, but I understand that isn't always possible. If you can allocate more cores, that should speed up the job, which could be another way of working around walltime limits.
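For example (the params-file name, step number, and core count here are just illustrative), you'd request a larger allocation from your scheduler and then tell ipyrad how many cores to use with the -c flag:

    ## run the step on 40 cores instead of the default
    ipyrad -p params-data.txt -s 3 -c 40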
I'll have to think about whether there's another way...
Mariana Mira Vasconcellos
@marimira
Dec 04 2016 20:50
@isaacovercast Thank you very much! The way that worked for me was to create a folder for each of my individual pools and run step 1 on each of them separately, since they share the same inline barcodes. Then I created another ipyrad folder to run the analysis starting from the demultiplexed files generated for each pool in step 1. I copied all the demultiplexed files from each pool into a single folder and pointed the sorted_fastq_path parameter at it. That worked great! Another possibility I considered was merging the runs before step 2 using "ipyrad -m", but I didn't quite understand from the help how to set that up. Anyway, it worked without needing to merge in ipyrad.
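Roughly what I did, in case it helps anyone (the folder and params-file names are just examples from my setup, not anything ipyrad requires):

    ## step 1 separately for each pool, each with its own params file
    ipyrad -p params-pool1.txt -s 1
    ipyrad -p params-pool2.txt -s 1

    ## collect the per-pool demultiplexed fastqs into one folder
    mkdir all_demux
    cp pool1_fastqs/*.gz pool2_fastqs/*.gz all_demux/

    ## new params file with sorted_fastq_path set to all_demux/*.gz, then run step 1 again
    ipyrad -p params-combined.txt -s 1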
Isaac Overcast
@isaacovercast
Dec 04 2016 22:42
@marimira Awesome! Glad you got it working... Yeah, I've been meaning to improve the docs for how to merge. It's on the to-do list.
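In the meantime, the merge syntax looks roughly like this (the assembly and params-file names here are placeholders): pass -m a name for the merged assembly followed by the params files of the assemblies to merge, then continue from step 2 using the params file it writes out.

    ## merge the step-1 assemblies from each pool into a new assembly
    ipyrad -m pools_merged params-pool1.txt params-pool2.txt

    ## continue the merged assembly from step 2 onward
    ipyrad -p params-pools_merged.txt -s 234567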