    Chip Senkbeil
    @chipsenkbeil
    test
    test
    Chip Senkbeil
    @chipsenkbeil
    test
    Marius van Niekerk
    @mariusvniekerk
    test
    Chip Senkbeil
    @chipsenkbeil
    I wonder if I got the bot banned. >_>
    Marius van Niekerk
    @mariusvniekerk
    haha
    while(true) post ?
    Chip Senkbeil
    @chipsenkbeil
    Well, I had the bot post that message whenever it received a message in this channel.
    But, I forgot to filter out its own messages.
    So, it was replying to itself until it stopped.
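The self-reply loop Chip describes can be avoided with a simple sender check before responding. A minimal sketch (the `ChatMessage` type and the bot name are illustrative assumptions, not the actual bot's API):

```scala
// Minimal sketch: ignore the bot's own messages so it cannot reply to itself.
// ChatMessage and the bot name here are assumptions for illustration.
case class ChatMessage(sender: String, text: String)

def shouldReply(botName: String, msg: ChatMessage): Boolean =
  msg.sender != botName // filter out the bot's own posts before responding
```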
    Marius van Niekerk
    @mariusvniekerk
    oh
    nice
    Chip Senkbeil
    @chipsenkbeil
    Now if I could just get the bot to respond again.
    Marius van Niekerk
    @mariusvniekerk
    yeah
    Chip Senkbeil
    @chipsenkbeil
    @apache-toree-bot respond
    @apache-toree-bot please respond
    Guess I'll try later.
    Marius van Niekerk
    @mariusvniekerk
    yeah
    it's being rude
    Chip Senkbeil
    @chipsenkbeil
    @EvanOman, it never hurts to open an issue
    Chip Senkbeil
    @chipsenkbeil
    test
    singravi
    @singravi
    Hi all, is there any place from where I can download a master build for Spark 2.0 with Scala 2.11? I am having issues building the branch as I do not have the required version of sbt (constantly getting the 0.13.9 error)
    Chip Senkbeil
    @chipsenkbeil
    @singravi, we have not released 0.2.0 yet. We're in the process of finishing up the vote for the 0.1.x branch. Then we will move on to 0.2.x promptly.
    singravi
    @singravi
    Understood. However, is there any way to get a build of the master branch with Spark 2.0 and Scala 2.11?
    I am stuck and get the same error as https://issues.apache.org/jira/browse/TOREE-336
    Chip Senkbeil
    @chipsenkbeil
    @/all we have finally acquired an Apache Gitter channel. Please direct your questions to https://gitter.im/apache/toree
    @/all I will be updating the description to indicate so. I am also working on a bot to remind everyone to switch channels.
    aremirata
    @aremirata
    @all, how can I install apache toree scala on yarn?
    There is an error loading the kernel
    Evan Oman
    @EvanOman
    @aremirata
    @/all we have finally acquired an Apache Gitter channel. Please direct your questions to https://gitter.im/apache/toree
    aremirata
    @aremirata
    thanks for letting me know
    Chip Senkbeil
    @chipsenkbeil
    Sorry, just updated our repository to point to the new Gitter channel.
    Roshani Nagmote
    @Roshrini
    Hi, I want to add an external jar sitting on my local machine in Jupyter Scala Toree. I tried %AddJar /path/to/jar but it gives an error: Magic AddJar failed to execute with error:
    no protocol. Can anyone help me with this?
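The "no protocol" message is how Java's URL parser complains about a bare filesystem path, since `%AddJar` takes a URL rather than a path. A plausible fix (an assumption based on the error text, not a confirmed answer from the channel) is to add a `file:` scheme:

```
%AddJar file:/path/to/jar
```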
    Corey Stubbs
    @Lull3rSkat3r
    Hi @Roshrini can you ask your question in the https://gitter.im/apache/toree channel?
    Roshani Nagmote
    @Roshrini
    sure.
    Mingsterism
    @mingsterism
    hi guys
    anyone knows why i get this error.
    root@ubuntu-1gb-sgp1-01:/home# spark-shell
    Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
    Setting default log level to "WARN".
    To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
    17/02/08 04:03:11 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    17/02/08 04:03:12 WARN Utils: Your hostname, ubuntu-1gb-sgp1-01 resolves to a loopback address: 127.0.1.1; using 10.15.0.5 instead (on interface eth0)
    17/02/08 04:03:12 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    17/02/08 04:03:42 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
    17/02/08 04:03:43 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
    17/02/08 04:03:46 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
    Spark context Web UI available at http://10.15.0.5:4040
    Spark context available as 'sc' (master = local[*], app id = local-1486526597417).
    Spark session available as 'spark'.
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/  '_/
       /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
          /_/
    Irving Duran
    @blacknred0
    @mingsterism this is due to your /etc/hosts having a different IP address; your 127.0.0.1 is not the same as 10.15.0.5
    you can fix it over there, or update the Spark environment by assigning SPARK_LOCAL_IP
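Concretely, the two fixes Irving mentions might look like this (addresses taken from the log above; treat them as examples):

```shell
# Option 1: pin Spark's bind address explicitly (e.g. in conf/spark-env.sh)
export SPARK_LOCAL_IP=10.15.0.5

# Option 2: map the hostname to the routable address in /etc/hosts, e.g.:
# 10.15.0.5   ubuntu-1gb-sgp1-01
```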
    cristina-grosu
    @cristina-grosu
    Have you seen this type of error so far?
    [screenshot of the error]
    Irving Duran
    @blacknred0
    I don't get this... why, when I do "$SPARK_HOME/bin/spark-submit --class "org.class" --master spark://w.x.y.z:6066" and I look at the UI, is the driver there and saying it is running, but nothing is happening? Thoughts?
    Chip Senkbeil
    @chipsenkbeil
    @/all we have moved to https://gitter.im/apache/toree
    Arnold1
    @Arnold1
    hi, how can you insert a field into a dataset? i want to Insert field(s) with constant value(s)...
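For the question above, the usual DataFrame approach is `withColumn` with `lit`. A short sketch (the column names and values are made up for illustration, and it assumes an existing SparkSession named `spark`):

```scala
// Sketch: add constant-valued fields to a Dataset/DataFrame with lit().
import org.apache.spark.sql.functions.lit

val ds = spark.range(3)               // any existing Dataset
val withConstants = ds
  .withColumn("source", lit("batch")) // constant string field
  .withColumn("version", lit(1))      // constant integer field
```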
    Shion
    @ShionAt
    Please support my attempt to change the world of the project, spread it, baptize it
    https://github.com/ShionAt/Keys
    https://twitter.com/ShionKeys
    vijendra singh
    @viju0731_twitter
    hi
    how do i tune spark data driven param
    Lijo Varghese
    @lijoev

    Hi, I am stuck in my work submitting a Spark job to a Hadoop YARN master in cluster mode.
    Please find my environment setup below.

    I have a Linux machine with 128 GB of RAM, a 2 TB hard disk, and 2x16 cores.
    I have set up Cloudera Hadoop containers on a Docker mount point having 50 GB (this mount point is almost full). I have one datanode, one namenode, and one yarnmaster container running.
    I am submitting the Spark job from my host machine to run an Rscript in cluster mode. The R server and libraries are set up on the datanode.
    When I submit the Spark job it remains in the ACCEPTED state for a long time. Please find the spark-submit command I am using below:
    spark-submit --master yarn --name RechargeModel --deploy-mode cluster --executor-memory 3G --num-executors 4 rechargemodel.R
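An application stuck in ACCEPTED usually means YARN cannot allocate the requested containers; with 4 executors at 3 GB each (plus memory overhead and the driver) on a single, nearly full node, that seems plausible here. Some diagnostics to try (standard YARN/Spark commands, offered as a sketch rather than a confirmed fix):

```shell
# See how many nodes YARN reports and their states
yarn node -list

# List applications still waiting to be scheduled
yarn application -list -appStates ACCEPTED

# Retry with a smaller footprint to test whether scheduling is the bottleneck
spark-submit --master yarn --deploy-mode cluster \
  --executor-memory 1G --num-executors 1 rechargemodel.R
```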