-- Found APRUTIL: /usr/lib/x86_64-linux-gnu/libaprutil-1.so
-- Could NOT find TBB (missing: TBB_LIBRARIES TBB_INCLUDE_DIR)
CMake Error at cmake_modules/commonSetup.cmake:703 (message):
TBB requested but package not found
Call Stack (most recent call first):
-- Configuring incomplete, errors occurred!
See also "/home/aranjan/HPCC/Build5/CMakeFiles/CMakeOutput.log".
See also "/home/aranjan/HPCC/Build5/CMakeFiles/CMakeError.log".
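For reference, here is a sketch of the two workarounds I am aware of for this configure error (assumptions: a Debian/Ubuntu build host where the TBB dev package is named libtbb-dev, and that HPCC's CMake option for TBB is USE_TBB — adjust both if your setup differs):

```shell
# Option 1: install the TBB development headers/libraries so CMake can find them
# (libtbb-dev is the Debian/Ubuntu package name; use your distro's equivalent)
sudo apt-get install -y libtbb-dev

# Option 2: reconfigure with TBB support disabled
# (USE_TBB is my assumption about the HPCC CMake option name;
#  ~/HPCC/HPCC-Platform is a placeholder for the actual source directory)
cd ~/HPCC/Build5
cmake -DUSE_TBB=OFF ~/HPCC/HPCC-Platform
```

After either option, re-running cmake from the build directory should get past the "TBB requested but package not found" error.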
Hi guys, I have 1 master node and 2 slave nodes.
Processes running on Master:
mydafilesrv ( pid 2252 ) is running ...
myeclagent ( pid 3533 ) is running ...
myesp ( pid 5159 ) is running ...
mysasha ( pid 6796 ) is running ...
mythor ( pid 21232 ) is running with 2 slave process(es) ...
Processes running on Slave:
mydafilesrv ( pid 2096 ) is running ...
mydali ( pid 3349 ) is running ...
myeclccserver ( pid 4779 ) is running ...
Initially, the CPU utilization for all 3 nodes is 0% (100% idle state).
When I start running a dataGeneration ECL script, the CPU utilization of the master node reaches 40-50%, while the CPU utilization on the slaves remains at 0%.
The network utilization on all 3 nodes is similar, yet the disk utilization on the master alone is very high.
I am confused. I expected the slave processes to be doing all the data-generation and disk-writing work, but the low-level metrics don't indicate that. Can someone please share some insight?
Only HPCC-related user processes are running on these 3 instances.
I consistently see eclagent as the top process on the master.
I periodically see daserver and thorslave as the top processes on the slaves.
I have been trying to spray data using the dfuplus command.
Here are the details of my HPCC setup.
Master IP address: 172.31.45.14
Slave 1 IP address: 172.31.33.152
Slave 2 IP address: 172.31.42.187
On my master node, I have a 30 GB kmeans-related dataset in CSV format, spread across 30 files.
The directory is : /mnt/var/lib/HPCCSystems/dataset/kmeans_30GB/
Using the dfuplus command, I am trying to spray the same data across my slave nodes.
Here is the command I am running for an individual file.
$ sudo dfuplus action=spray srcip=172.31.45.14 srcfile=/mnt/var/lib/HPCCSystems/dataset/kmeans_30GB/file1 dstname=kmeans::dataset::file1 dstcluster=mythor server=http://172.31.45.14:8010 format=csv
Checking for local Dali File Server on port 7100
Variable spraying from /mnt/var/lib/HPCCSystems/dataset/kmeans_30GB/file1 on 172.31.45.14:7100 to kmeans::dataset::file1
Submitted WUID D20180423-152513
D20180423-152513 status: queued
Failed: No Drop Zone on '172.31.45.14' configured at '/mnt/var/lib/HPCCSystems/dataset/kmeans_30GB/file1'.
The job fails with the "No Drop Zone" error shown above.
My ECL Watch console shows a single dropzone entry, mydropzone, pointing at 172.31.33.152.
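Since the only configured dropzone is on 172.31.33.152, I am guessing the file has to be read from that host's dropzone path rather than from the master. A minimal sketch of what I intend to try (assumptions: /var/lib/HPCCSystems/mydropzone is the default dropzone location, which may differ on this setup):

```shell
# Copy the file onto the host that actually has the configured dropzone
# (/var/lib/HPCCSystems/mydropzone is the default dropzone path; adjust if yours differs)
scp /mnt/var/lib/HPCCSystems/dataset/kmeans_30GB/file1 \
    172.31.33.152:/var/lib/HPCCSystems/mydropzone/

# Spray from the dropzone host instead of the master
dfuplus action=spray srcip=172.31.33.152 \
    srcfile=/var/lib/HPCCSystems/mydropzone/file1 \
    dstname=kmeans::dataset::file1 dstcluster=mythor \
    server=http://172.31.45.14:8010 format=csv
```

Alternatively, I suppose a dropzone could be added for 172.31.45.14 via Configuration Manager so the files can be sprayed from where they already sit.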