These are chat archives for dereneaton/ipyrad

Jan 2018
Peter B. Pearman
Jan 16 2018 14:14
Let me add, I'd like ipyrad to be able to access the GPU, assuming that I have installed cuda drivers for the Nvidia card on the Ubuntu machine.
Saritonia
Jan 16 2018 16:21
Hi again Deren and Isaac. Like Nitish Narula, my ipyrad run is going to be cancelled by the scheduler on the cluster in a few days due to time limits. In my case, I am running step 3. When I used pyRAD, I repeated this step independently with only the samples that could not finish within-sample clustering, and then I added those samples to the clust.XX folder. Can I do the same with ipyrad? Will it affect the stats files or any following steps? Or could I restart the analysis and continue with only the samples that did not finish step 3? Thank you in advance!!
Deren Eaton
Jan 16 2018 17:59
@pbpearman, No, ipyrad doesn't make use of the GPUs at this time, only CPUs. That would be cool though!
Peter B. Pearman
Jan 16 2018 18:07
@eaton-lab Thanks for looking into this. Is there anything I can test on my side? Or something I can send you?
Isaac Overcast
Jan 16 2018 19:02
@pbpearman That's a great idea, but the parallelization engine we currently use (ipyparallel) doesn't appear to have pycuda support, so my sense is it would be exceedingly difficult to implement pyCUDA in ipyrad (it would require a total refactoring of the parallelization model). We do accept pull requests though :+1:
Isaac Overcast
Jan 16 2018 19:10
@Saritonia One option would be to create 2 branches with 1/2 the samples in each branch, then run step 3 on each branch and then merge them after step 3. The problem you'll encounter is that step 6 is also resource intensive, and this branching/merging strategy won't work at that point. It would be better to try allocating more resources to your ipyrad run. If you can add more cores and ensure you have enough RAM allocated (~4GB per core) then it'll speed up the assembly.
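The branch-and-merge strategy described above can be sketched with the ipyrad CLI's `-b` (branch), `-s` (steps), and `-m` (merge) options. The params-file and sample names below are placeholders for illustration:

```shell
# Create two branches, each holding roughly half of the samples
# (sample names here are hypothetical)
ipyrad -p params-full.txt -b half1 sampleA sampleB sampleC
ipyrad -p params-full.txt -b half2 sampleD sampleE sampleF

# Run step 3 (within-sample clustering) on each branch separately,
# e.g. as separate scheduler jobs so each fits within the time limit
ipyrad -p params-half1.txt -s 3 -c 8
ipyrad -p params-half2.txt -s 3 -c 8

# After step 3 completes on both, merge the branches back together
# and continue the remaining steps on the merged assembly
ipyrad -m merged params-half1.txt params-half2.txt
ipyrad -p params-merged.txt -s 4567 -c 8
```

As Isaac notes, this split only helps for step 3; step 6 operates across all samples at once, so allocating more cores and ~4 GB RAM per core is the more general fix.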