In compute_study.md, it is indicated that we should "Duplicate each study for 2, 4, 8, and 16 workers per node (reducing chunk size proportionally)". But I do not recall this reduction of chunk size for each increase of workers per node in utils.py (benchmarks/datasets.py), and the code therefore assumes that the total dataset size will be equal to (chunk size) * (number of nodes) * (number of workers per node).
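For concreteness, here is a minimal sketch of the arithmetic in question (hypothetical numbers, not the actual code in utils.py or benchmarks/datasets.py): if the chunk size is held fixed, the total dataset size grows with the number of workers per node, whereas reducing the chunk size proportionally keeps the total constant.

```python
# Minimal sketch of the chunk-size arithmetic discussed above
# (hypothetical numbers, not the benchmark's actual code).
BASE_CHUNK_MB = 256   # hypothetical chunk size for 1 worker per node
N_NODES = 4           # hypothetical node count

for workers_per_node in (2, 4, 8, 16):
    # without the reduction: total grows with workers per node
    fixed_total_mb = BASE_CHUNK_MB * N_NODES * workers_per_node
    # with proportional reduction: total stays constant
    reduced_chunk_mb = BASE_CHUNK_MB / workers_per_node
    reduced_total_mb = reduced_chunk_mb * N_NODES * workers_per_node
    print(f"{workers_per_node:2d} workers/node: "
          f"fixed chunk -> {fixed_total_mb} MB total, "
          f"reduced chunk ({reduced_chunk_mb:.0f} MB) -> {reduced_total_mb:.0f} MB total")
```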
The compute_study.md writeup was meant to sketch out what would be needed to generate some preliminary scaling studies for various "common" operations done with xarray and dask. The result of each study should be a plot of "Number of Nodes" vs "Operation Runtime". However, the "Operation Runtime" depends on much more than "Number of Nodes", including "Number of Workers per Node", "Number of Threads per Worker", "Total Number of Chunks", "Chunk Size", etc.
In the compute_study.md document, I tried to find a way to fix all of the other parameters such that each study was "fair." I chose 1 "Chunk per Worker" and 1 "Thread per Worker", and I chose to vary the "Chunk Size" and "Number of Workers per Node". Later we tried to come up with a way of varying the "Chunking Scheme" (i.e., chunk over all dimensions, chunk over only spatial dimensions, chunk over only time), too. But we still need to generate data that looks at how these numbers vary with "Chunks per Worker" and "Threads per Worker".
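As a rough sketch of that parameterization (hypothetical values and names, not the benchmark's actual configuration), the study combinations could be enumerated like this, holding "Chunks per Worker" and "Threads per Worker" fixed at 1 while the other parameters vary:

```python
# Hypothetical enumeration of the scaling-study parameter grid described above.
import itertools

n_nodes = [1, 2, 4, 8]
workers_per_node = [2, 4, 8, 16]
chunking_schemes = ["all-dims", "spatial-only", "time-only"]
chunks_per_worker = 1    # held fixed for "fairness"
threads_per_worker = 1   # held fixed for "fairness"

for nodes, wpn, scheme in itertools.product(n_nodes, workers_per_node,
                                            chunking_schemes):
    total_chunks = nodes * wpn * chunks_per_worker
    print(f"nodes={nodes:2d}  workers/node={wpn:2d}  "
          f"scheme={scheme:12s}  total chunks={total_chunks}")
```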
Is anyone else seeing "Error displaying widget" with e.g. the dask_kubernetes.KubeCluster widget (or any other widget)? It looks like this is related to ipywidgets==7.5. I have a Pangeo environment with jupyterlab=0.35, tornado=5.1.1 and dask_labextension==0.3.3, because I noticed that this was a working configuration at some point, but I'm not sure it is still the recommended configuration.
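In case it helps to compare environments, here is a small sketch (assuming the packages are installed in the same environment as the notebook kernel) for printing the versions that are actually in use:

```python
# Print installed versions of the packages mentioned above to compare
# against the known-working configuration.
import pkg_resources

for pkg in ["ipywidgets", "jupyterlab", "tornado",
            "dask_labextension", "dask_kubernetes"]:
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "not installed")
```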
method_name                               count
io.k8s.core.v1.pods.attach.create              8
io.k8s.core.v1.pods.binding.create        291582
io.k8s.core.v1.pods.create                 66682
io.k8s.core.v1.pods.delete                656336
io.k8s.core.v1.pods.eviction.create       312632
io.k8s.core.v1.pods.exec.create              204
io.k8s.core.v1.pods.get                      109
io.k8s.core.v1.pods.portforward.create      3974
io.k8s.core.v1.pods.status.patch              17
io.k8s.core.v1.services.proxy.get              6
How do binding.create and eviction.create differ from just create?