This channel is rarely used. For other channels to contact the dask community, please see https://docs.dask.org/en/stable/support.html
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/comm/core.py", line 204, in _raise
raise IOError(msg)
OSError: Timed out trying to connect to 'tcp://10.46.128.14:36210' after 10 s: in <distributed.comm.tcp.TCPConnector object at 0x7feb07069128>: ConnectionRefusedError: [Errno 111] Connection refused
tornado.application - ERROR - Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOLoop object at 0x7feb77f9b278>>, <Task finished coro=<Worker.heartbeat() done, defined at /opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/worker.py:866> exception=OSError("Timed out trying to connect to 'tcp://10.46.128.14:36210' after 10 s: in <distributed.comm.tcp.TCPConnector object at 0x7feb07444240>: ConnectionRefusedError: [Errno 111] Connection refused")>)
Traceback (most recent call last):
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/comm/core.py", line 215, in connect
quiet_exceptions=EnvironmentError,
tornado.util.TimeoutError: Timeout
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/tornado/ioloop.py", line 743, in _run_callback
ret = callback()
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/tornado/ioloop.py", line 767, in _discard_future_result
future.result()
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/worker.py", line 875, in heartbeat
metrics=await self.get_metrics(),
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/core.py", line 747, in send_recv_from_rpc
comm = await self.pool.connect(self.addr)
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/core.py", line 874, in connect
connection_args=self.connection_args,
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/comm/core.py", line 227, in connect
_raise(error)
File "/opt/lsst/software/stack/python/miniconda3-4.5.12/envs/lsst-scipipe-1172c30/lib/python3.7/site-packages/distributed/comm/core.py", line 204, in _raise
raise IOError(msg)
OSError: Timed out trying to connect to 'tcp://10.46.128.14:36210' after 10 s: in <distributed.comm.tcp.TCPConnector object at 0x7feb07444240>: ConnectionRefusedError: [Errno 111] Connection refused
Hey, I just posted a question on SO: https://stackoverflow.com/questions/57760355/how-should-i-load-a-memory-intensive-helper-object-per-worker-in-dask-distribute
Summary: I have a big object that I need once on each worker and don't want to reinitialize on every function call. Can Dask handle this?
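One pattern that might work here, sketched below as an assumption rather than a definitive answer (it assumes the distributed scheduler; BigHelper and the cached attribute name are placeholders for illustration): build the object lazily inside the task and cache it on the worker, so each worker constructs it once and reuses it for later calls.
from dask.distributed import Client, get_worker

class BigHelper:
    """Placeholder for the memory-intensive object."""
    def __init__(self):
        self.table = list(range(1_000_000))   # pretend this is expensive to build
    def score(self, x):
        return x * 2

def _cached_helper():
    worker = get_worker()                     # only callable from inside a running task
    if not hasattr(worker, "_big_helper"):    # arbitrary attribute name for this sketch
        worker._big_helper = BigHelper()      # built once per worker, then reused
    return worker._big_helper

def task(x):
    return _cached_helper().score(x)

if __name__ == "__main__":
    client = Client()                         # local cluster, just for illustration
    print(client.gather(client.map(task, range(10))))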
Hello everyone. Just wanted to know whether I can perform Dask array computations inside a delayed function, something like this:
def dask_delayed_example(X):
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return X - mean

X = df.to_dask_array(lengths=True)
res = dask_delayed_example(X)
Is Dask delayed only meant for parallelizing computation or can it also be used to perform computation on larger-than-memory datasets in a distributed manner?
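For comparison, here is a sketch of the same centering written directly against dask.array, with no delayed wrapper: the array operations are already lazy and are executed chunk by chunk when compute() is called, so the data never has to fit in memory at once. The random array below is only a stand-in for df.to_dask_array(lengths=True).
import numpy as np
import dask.array as da

# Stand-in for df.to_dask_array(lengths=True)
X = da.from_array(np.random.random((10_000, 50)), chunks=(1_000, 50))

mean = X.mean(axis=0)        # lazy
centered = X - mean          # still lazy; nothing is loaded yet
result = centered.compute()  # runs chunk by chunk (or use .persist() on a cluster)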
Hi there! I'm trying to use dask-ssh as follows:
% dask-ssh --scheduler master1 node1
---------------------------------------------------------------
Dask.distributed v1.27.0
Worker nodes:
0: node1
scheduler node: master1:8786
---------------------------------------------------------------
[ scheduler master1:8786 ] : /home/applis/anaconda/envs/py3v19.04/bin/python -m distributed.cli.dask_scheduler --port 8786
[ worker node1 ] : /home/applis/anaconda/envs/py3v19.04/bin/python -m None master1:8786 --nthreads 0 --host node1 --memory-limit auto
but I cannot connect a client to the scheduler, and the logs don't provide any more information. Client('master1:8786') fails with OSError: Timed out trying to connect..., and I cannot access the web UI at master1:8787. However, starting dask-scheduler and dask-worker manually works fine. Any suggestions?
da.from_array(large_numpy_array) was blowing up workers because, counterintuitively, large_numpy_array is not actually partitioned along the expected chunks: each worker appears to get a full copy of large_numpy_array regardless of the chunking. from_array(large_array, chunks=chunks)[0].compute(), for example, does not distribute the data to workers the way one would expect from the chunking. I gather the recommended workaround is to scatter the array instead, is that right?
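A sketch of one workaround that is often suggested for this situation (assuming a distributed Client; the shapes and the block-slicing helper are made up for illustration): ship the NumPy array to the cluster once with Client.scatter, then build the dask array from delayed slices of the scattered copy, so the task graph no longer embeds the full array.
import numpy as np
import dask.array as da
from dask import delayed
from dask.distributed import Client

client = Client()                                  # assumes a running scheduler
large_numpy_array = np.random.random((10_000, 100))

remote = client.scatter(large_numpy_array)         # one upload, tracked as a future

@delayed
def block(arr, i, rows):
    return arr[i * rows:(i + 1) * rows]            # the future resolves to the array here

rows = 1_000
X = da.concatenate(
    [da.from_delayed(block(remote, i, rows), shape=(rows, 100), dtype=float)
     for i in range(10)],
    axis=0,
)
print(X[0].compute())                              # graph carries references, not the data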