Well thank you! It’s been fun!
So you could have increased the `for` ranges in the notebook and come up with something massive?
By the way, I would always recommend adding `anon=True` to the filesystem (faster init; it should be the default) and maybe also setting `chunks=` in open_dataset. For the latter, the best values to choose depend on your analysis, but it can make a big difference to load times if the original chunks are small. That's why it's nice to eventually write Intake specs, to hide this kind of detail.
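Something like this, roughly (a minimal sketch: the bucket path, engine, and chunk sizes are placeholders you'd adapt to the actual MUR files, not exact values):

```python
import s3fs
import xarray as xr

# Public bucket: anon=True skips the credential lookup,
# so filesystem initialization is faster.
fs = s3fs.S3FileSystem(anon=True)

# Placeholder pattern -- substitute the real file listing.
files = fs.glob("s3://some-bucket/mur/*.nc")
file_objs = [fs.open(f) for f in files]

# Pick chunks to suit your analysis; if the files' native chunks
# are small, larger dask chunks here can speed up loading a lot.
ds = xr.open_mfdataset(
    file_objs,
    engine="h5netcdf",
    combine="by_coords",
    chunks={"time": 1, "lat": 1000, "lon": 1000},
)
```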
Two tests: one with 3 years of data (~900 files) runs fine, but when I scale to 7000 files... no errors, but it isn't running okay...
900 files: https://jupyter.qhub.esipfed.org/user/cgentemann/doc/tree/shared/users/cgentemann/notebooks/cloud_mur_v41_3yr.ipynb
7000 files: https://jupyter.qhub.esipfed.org/user/cgentemann/doc/tree/shared/users/cgentemann/notebooks/cloud_mur_v41-all.ipynb
Any ideas on what might be wrong?