@/all we have finally acquired an Apache Gitter channel. Please direct your questions to https://gitter.im/apache/toree
root@ubuntu-1gb-sgp1-01:/home# spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/02/08 04:03:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/02/08 04:03:12 WARN Utils: Your hostname, ubuntu-1gb-sgp1-01 resolves to a loopback address: 127.0.1.1; using 10.15.0.5 instead (on interface eth0)
17/02/08 04:03:12 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
17/02/08 04:03:42 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/02/08 04:03:43 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/02/08 04:03:46 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.15.0.5:4040
Spark context available as 'sc' (master = local[*], app id = local-1486526597417).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/
Hi, I am stuck submitting a Spark job to the Hadoop YARN master in cluster mode.
Please find my environment setup below.
I have a Linux machine with 128 GB of RAM, a 2 TB hard disk, and 2x16 cores.
I have set up Cloudera Hadoop containers on a Docker mount point of 50 GB (this mount point is almost full). I have one datanode, one namenode, and one YARN master container running.
I am submitting the Spark job from my host machine to run an Rscript in cluster mode. The R server and libraries are set up on the datanode.
When I submit the Spark job it remains in the ACCEPTED state for a long time. Please find the spark-submit command I am using below:
spark-submit --master yarn --name RechargeModel --deploy-mode cluster --executor-memory 3G --num-executors 4 rechargemodel.R
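A common cause of an application sitting in ACCEPTED is that YARN cannot allocate the containers the submit command asks for. As a rough sketch of the resource math behind the command above (assuming Spark 2.x defaults, where the per-executor memory overhead is max(384 MB, 10% of executor memory)):

```shell
#!/usr/bin/env bash
# Rough memory math for the spark-submit above (a sketch, not exact):
# YARN must find one container per executor, each sized at
# executor memory plus Spark's default memory overhead.
EXECUTOR_MEM_MB=3072   # --executor-memory 3G
NUM_EXECUTORS=4        # --num-executors 4

# Spark 2.x default overhead: max(384 MB, 10% of executor memory)
OVERHEAD_MB=$(( EXECUTOR_MEM_MB / 10 > 384 ? EXECUTOR_MEM_MB / 10 : 384 ))
PER_CONTAINER_MB=$(( EXECUTOR_MEM_MB + OVERHEAD_MB ))
TOTAL_MB=$(( NUM_EXECUTORS * PER_CONTAINER_MB ))

echo "Per executor container: ${PER_CONTAINER_MB} MB"
echo "Total for executors:    ${TOTAL_MB} MB"
# Cluster mode also needs a container for the driver (1 GB + overhead
# by default), on top of the total above.
```

So this submission needs roughly 14 GB of free YARN memory across the node managers, plus the driver container. If the containers' NodeManager memory (`yarn.nodemanager.resource.memory-mb`) adds up to less than that, the application waits in ACCEPTED; `yarn node -list` and the ResourceManager UI show the available capacity.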