Hi all. We just set up fMRIPrep in our lab. We run it via Singularity (one large .img file) in a cluster environment. I'm trying to figure out the pipeline's typical resource requirements on a cluster. Sometimes the pipeline runs smoothly and finishes in about an hour; sometimes it won't finish at all. A couple of questions:
Say I have one subject with ~1000 standard BOLD volumes + a T1, and my machine has 12 cores and 30 GB of memory. What is a typical time to process such a dataset with default parameters? Is it 1 h, 5 h, or 10 h? Any rough estimate is appreciated.
What is the typical increase in runtime when ICA-AROMA and/or FreeSurfer recon-all is added? Will it double or triple the time?
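For context, here is a hedged sketch of how one might cap fMRIPrep's resource use in a SLURM job so the scheduler doesn't kill it. The paths, image name, and subject label are placeholders; `--nthreads`, `--omp-nthreads`, `--mem_mb`, and `--fs-no-reconall` are real fMRIPrep flags (newer versions spell some of them `--nprocs` and `--mem-mb`), but exact values should match your allocation:

```shell
#!/bin/bash
#SBATCH --cpus-per-task=12
#SBATCH --mem=30G
#SBATCH --time=24:00:00

# Hypothetical paths and image name; adjust to your cluster layout.
# --nthreads / --mem_mb cap fMRIPrep slightly below the SLURM allocation
# so transient overshoot doesn't get the job killed.
singularity run --cleanenv /path/to/fmriprep.img \
    /data/bids /data/derivatives participant \
    --participant-label sub-01 \
    --nthreads 12 --omp-nthreads 8 --mem_mb 28000 \
    --fs-no-reconall   # skip FreeSurfer recon-all to shorten runtime
```

Dropping `--fs-no-reconall` (i.e., enabling FreeSurfer) is typically the single largest addition to runtime.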
Chris Filo Gorgolewski
Hey Janne - we try to answer most user support questions on https://neurostars.org/tags/fmriprep, so the answers are easier for other users to find. It would be great if you could repost the question there. Someone will answer you soon.