These are chat archives for thunder-project/thunder

14th Jun 2015
Jeremy Freeman
@freeman-lab
Jun 14 2015 06:13
@lilumb everything should be working now if you reinstall, let me know if it isn't. You might want to first call pip uninstall thunder-python before calling pip install thunder-python again.
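in case it's useful, here's a minimal sketch of that reinstall sequence driven from Python so it targets the same interpreter (the -y flag just skips pip's uninstall confirmation prompt):

import subprocess, sys

# remove any stale copy, then pull a fresh one from PyPI
subprocess.check_call([sys.executable, "-m", "pip", "uninstall", "-y", "thunder-python"])
subprocess.check_call([sys.executable, "-m", "pip", "install", "thunder-python"])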
lilumb
@lilumb
Jun 14 2015 13:38

Installation completed successfully - thank you!
I tried a simple example and got the error below:

>>> data = tsc.loadExample('fish-series')
15/06/14 09:35:34 INFO MemoryStore: ensureFreeSpace(263992) called with curMem=0, maxMem=278302556
15/06/14 09:35:34 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 257.8 KB, free 265.2 MB)
15/06/14 09:35:34 INFO MemoryStore: ensureFreeSpace(29820) called with curMem=263992, maxMem=278302556
15/06/14 09:35:34 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 29.1 KB, free 265.1 MB)
15/06/14 09:35:34 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on il-bcm71.cm.cluster:45590 (size: 29.1 KB, free: 265.4 MB)
15/06/14 09:35:34 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/06/14 09:35:34 INFO SparkContext: Created broadcast 0 from newAPIHadoopFile at PythonRDD.scala:497
15/06/14 09:35:34 INFO MemoryStore: ensureFreeSpace(263880) called with curMem=293812, maxMem=278302556
15/06/14 09:35:34 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 257.7 KB, free 264.9 MB)
15/06/14 09:35:34 INFO MemoryStore: ensureFreeSpace(29778) called with curMem=557692, maxMem=278302556
15/06/14 09:35:34 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 29.1 KB, free 264.8 MB)
15/06/14 09:35:34 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on il-bcm71.cm.cluster:45590 (size: 29.1 KB, free: 265.4 MB)
15/06/14 09:35:34 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/06/14 09:35:34 INFO SparkContext: Created broadcast 1 from broadcast at PythonRDD.scala:454
15/06/14 09:35:34 INFO FileInputFormat: Total input paths to process : 1
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "build/bdist.macosx-10.5-x86_64/egg/thunder/utils/context.py", line 583, in loadExample
  File "build/bdist.macosx-10.5-x86_64/egg/thunder/utils/context.py", line 96, in loadSeries
  File "build/bdist.macosx-10.5-x86_64/egg/thunder/rdds/fileio/seriesloader.py", line 212, in fromBinary
  File "/cm/shared/apps/hadoop/Apache/spark-1.3.0-bin-hadoop2.4/python/pyspark/context.py", line 522, in newAPIHadoopFile
    jconf, batchSize)
  File "/cm/shared/apps/hadoop/Apache/spark-1.3.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/cm/shared/apps/hadoop/Apache/spark-1.3.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopFile.
: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    at thunder.util.io.hadoop.FixedLengthBinaryInputFormat$.getRecordLength(FixedLengthBinaryInputFormat.scala:32)
    at thunder.util.io.hadoop.FixedLengthBinaryInputFormat.isSplitable(FixedLengthBinaryInputFormat.scala:73)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:95)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.RDD.take(RDD.scala:1156)
    at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:205)
    at org.apache.spark.api.python.PythonRDD$.newAPIHadoopFile(PythonRDD.scala:457)
    at org.apache.spark.api.python.PythonRDD.newAPIHadoopFile(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
    at py4j.Gateway.invoke(Gateway.java:259)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:207)
    at java.lang.Thread.run(Thread.java:745)

Might you have any insight into this error? As you can likely decipher, I'm using Spark 1.3.0 in standalone mode.

andrew giessel
@andrewgiessel
Jun 14 2015 14:16
Your Spark is compiled against Hadoop v2 and Thunder needs v1
Check the FAQ
It's crashing when it tries to do HDFS stuff
You can install a second Spark and point Thunder to it with the SPARK_HOME env var
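e.g. something like this at the top of your script, before thunder/pyspark gets imported (the path is just a placeholder, point it at wherever you unpack the Hadoop 1.x build):

import os

# must be set before pyspark is imported so the right Spark gets picked up
os.environ["SPARK_HOME"] = "/path/to/spark-1.3.0-bin-hadoop1"

exporting SPARK_HOME in your shell before launching does the same thing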
Jeremy Freeman
@freeman-lab
Jun 14 2015 15:30
yup, the version "compiled for Hadoop 1.x" should work
That's one of several options when you go to download Spark
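if you want to double-check which Hadoop your current Spark was built against, something like this from the pyspark/thunder shell should tell you (assuming sc is the live SparkContext):

# query Hadoop's VersionInfo class through the py4j gateway
print(sc._jvm.org.apache.hadoop.util.VersionInfo.getVersion())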
lilumb
@lilumb
Jun 14 2015 16:07
Thanks to @andrewgiessel & @freeman-lab. I will follow up as you've suggested.