Aleksei Alefirov

Ok, managed to solve it.

"sh.almond" % "scala-interpreter" % V.almond cross CrossVersion.full

worked fine for me; the problem was a missing jvm-repr dependency, which, thankfully, is not news in this chat.
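For reference, a minimal build.sbt sketch of this kind of setup; the version numbers and the jvm-repr coordinates are illustrative, not confirmed by the conversation:

```scala
// build.sbt sketch -- versions are illustrative.
// `cross CrossVersion.full` resolves scala-interpreter against the full
// Scala version (e.g. _2.12.8), which is how almond publishes it.
libraryDependencies ++= Seq(
  "sh.almond" % "scala-interpreter" % "0.9.1" cross CrossVersion.full,
  // the piece reported missing above: jvm-repr
  "com.github.jupyter" % "jvm-repr" % "0.4.0"
)
```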

Aleksei Alefirov

Hi again,
I'd like to use ScalaInterpreter, but when I create it, I get

    at almond.amm.AmmInterpreter$.$anonfun$apply$3(AmmInterpreter.scala:130)
Caused by: scala.reflect.internal.FatalError: Error accessing /usr/lib/jvm/java-1.8.0-openjdk-
    at scala.tools.nsc.classpath.AggregateClassPath.$anonfun$list$3(AggregateClassPath.scala:99)

Can this somehow be fixed? Maybe with the correct ScalaInterpreterParams?

Aleksei Alefirov

Hello again,
Playing with classloaders (both the ScalaInterpreter param and the current thread context one), I've managed to initialize the ScalaInterpreter.
But trying to execute code there fails for me with a compilation failure.
For the code val x = 1 I get the following error:

cmd0.sc:7: exception during macro expansion: 
java.lang.IllegalArgumentException: argument type mismatch

  .printOnChange(x, "x", _root_.scala.None, _root_.scala.None, _root_.scala.None)) }

I looked at the almond code a bit, experimented, and set the ScalaInterpreter params autoUpdateLazyVals and autoUpdateVars both to false, and got a slightly different error:

cmd0.sc:8: exception during macro expansion: 
java.lang.IllegalArgumentException: argument type mismatch

          .print(x, "x", _root_.scala.None)

Maybe someone can give me a clue how to solve this? Thanks.

Hao Sun

java.lang.ClassCastException: almond.ReplApiImpl cannot be cast to ammonite.repl.ReplAPI
I am not sure what is happening. I was trying to load a locally published jar and I got this.

Seems like cats works..
(I am on almond 0.9.0 and scala 2.12.8)

Hao Sun
Solved. I had Ammonite in my jar; excluding it fixed my issue.
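For anyone hitting the same cast error, a hedged build.sbt sketch of excluding Ammonite from a published library; the coordinates are placeholders, and excluding all of com.lihaoyi is a blunt instrument that would also drop other com.lihaoyi libraries:

```scala
// build.sbt sketch -- "com.example" / "my-lib" are placeholders.
// Excluding Ammonite from the jar avoids shipping a second copy of
// ammonite.repl.ReplAPI that clashes with the one almond provides.
libraryDependencies += ("com.example" %% "my-lib" % "0.1.0")
  .excludeAll(ExclusionRule(organization = "com.lihaoyi"))
```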
Adam Davidson
"Local clusters, Mesos, and Kubernetes, aren't supported by ammonite-spark yet". We use Spark on k8s here, just wondering if this means it does not work at all, or whether you can use it but without ammonite features?
Hubert Plociniczak
almond 0.9.1 (compared to 0.9.0) adds a metabrowse dependency to the build. The latter, in turn, brings in a scala-compiler 2.13.1 dependency. This is problematic for custom kernels that are still on 2.13.0, because Ammonite cross-compiles against the full Scala version (not just the binary one).
In the end one gets NoSuchMethodError/NoClassDefFoundError exceptions during initialization of the kernel because of compiler interface differences. Would that be considered a bug or a feature? I'm guessing metabrowse would need to be cross-compiled against the full Scala version to solve this properly.
Michał Gołębiewski
Hello guys, I've encountered a problem when I try to run the Scala kernel in my Jupyter Notebook:
./almond --log debug --connection-file scala.kernel.json 
DEBUG ScalaKernel$ Auto dependency:
  Trigger: Module(org.apache.spark, *)
  Adds: Dependency(Module(sh.almond, almond-spark_2.12), 0.9.1)
DEBUG ScalaKernel$ Creating interpreter
DEBUG ScalaKernel$ Created interpreter
DEBUG ScalaKernel$ Running kernel
DEBUG ScalaKernel$ Initializing interpreter (background)
INFO AmmInterpreter$ Creating Ammonite interpreter
DEBUG AmmInterpreter$ Initializing interpreter predef
Compiling (synthetic)/ammonite/predef/interpBridge.sc
DEBUG ZeromqConnection Opening channels for ConnectionParameters(,tcp,33763,35867,59313,42939,44325,****,Some(hmac-sha256),Some(scala))
ERROR AmmInterpreter$ Caught exception while initializing interpreter
java.lang.NoSuchMethodError: scala.tools.nsc.classpath.ZipAndJarClassPathFactory$.create(Lscala/reflect/io/AbstractFile;Lscala/tools/nsc/Settings;)Lscala/tools/nsc/util/ClassPath;

here is the rest of the error:


and here are version numbers:

$ scala -version
Scala code runner version 2.12.8 -- Copyright 2002-2018, LAMP/EPFL and Lightbend, Inc.

$ java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (build 1.8.0_242-8u242-b08-0ubuntu3~18.04-b08)
OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)

$ javac -version
javac 1.8.0_242
Hubert Plociniczak
@mjgolebiewski That's similar to what I was encountering, except you seem to be on 2.12.x and you are getting scala-reflect binary incompatibilities. I also fought with those once I had resolved the scala-compiler compatibility issues.
How did you build that particular almond kernel? Is that from a fresh clone of the repo?
I can show you the (not so pretty) workaround I used, but I think this should be solved in general in almond itself, by fetching dependencies with full scala version, not just the binary compatible one.
oddodaoddo
@mjgolebiewski I ran into the same problem on my FreeBSD dev machine. Just for fun, I tried SCALA_VERSION=2.13.1 ALMOND_VERSION=0.9.1 and it works.
Michał Gołębiewski
@hubertp @oddodaoddo changing the Scala version helped. Thank you guys!
Michail Chatzis
Good morning everyone, I have run into a problem during installation. I would appreciate any help, thank you!
Sudheer Doppalapudi
Hi Team, I am trying to use Mermaid JS inside a Scala shell launched with the help of almond. I want to draw diagrams right from the Jupyter notebooks. Has anybody tried this or had any luck with it? Thanks in advance.
Shriraj Bhardwaj
I tried SCALA_VERSION=2.13.1 and ALMOND_VERSION=0.9.1; this breaks if you try to import spark-sql for Scala, so I downgraded ALMOND_VERSION to 0.8.3, which worked well. If anyone is looking to work with spark-sql, use SCALA_VERSION=2.12.8 and ALMOND_VERSION=0.8.3.
Will Udstrand
Are there any updates on the issue @mjgolebiewski was seeing? I have tried installing almond with SCALA_VERSION=2.12.8 ALMOND_VERSION=0.8.3, SCALA_VERSION=2.12.8 ALMOND_VERSION=0.9.1, and SCALA_VERSION=2.13.1 ALMOND_VERSION=0.9.1, all to no avail.
David Bouyssié
I'm trying to use Almond on Google Colab. I have followed this tutorial https://gist.github.com/shadaj/323ad2393b46c1b71df435728a052c24 but it doesn't seem to work anymore. Any workaround?
Alfonso Roa
@sbhardwaj-mt I hope you have already found a solution, but Spark is not compiled for Scala 2.13; you must use a 2.12.x Scala version. Scala 2.12.10 with almond 0.9.1 will work.
Wojtek Pituła
Hey, I'm building a library (with sbt) and would like to add it to my almond container (with a custom Dockerfile). I assume sbt publish (on the host) -> coursier fetch (in the Dockerfile) would work, but I would like to skip pushing the artifact to an external repo. Do you know how (and which artifacts) to copy from the sbt target so they will be properly picked up by the kernel (i.e. Ammonite and coursier)?
Sören Brunk
@Krever you could do an sbt publishLocal and then copy the local ivy dir into the docker image at build time.
You can publish to a different dir than ~/.ivy2/local with sbt to avoid copying too much stuff into the docker image. See https://www.scala-sbt.org/1.x/docs/Publishing.html
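A hedged Dockerfile sketch of that approach; the base image, user home, and paths are assumptions to adjust for your setup:

```dockerfile
# On the host, before building the image:
#   sbt publishLocal
#   cp -r ~/.ivy2/local ./ivy-local
FROM almondsh/almond:latest
# Copy the locally published artifacts so Ammonite/coursier resolution
# inside the kernel can find them without an external repo.
COPY ivy-local/ /home/jovyan/.ivy2/local/
```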
Sören Brunk
We're using a somewhat similar method (a bit more involved, i.e. using a multi-stage docker build) to create an almond docker image without having published almond artifacts (for snapshot builds).
Alfonso Roa
A question maybe related more to coursier: is there a way to download some dependencies (when creating a container) and access them for use in almond? The objective is to create the Docker image with some dependencies already present and not have to download them with ivy every time.
Sören Brunk
@alfonsorr adding a RUN coursier fetch <lib> in the docker build should do the trick to pre-populate the coursier cache. almond uses the coursier API through Ammonite, so it should then find the cached artifacts in the running container.
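As a sketch, assuming a coursier launcher is available in the base image and with illustrative library coordinates:

```dockerfile
FROM almondsh/almond:latest
# Pre-populate the coursier cache at image build time so that
# `import $ivy` in the notebook finds the artifacts without downloading.
RUN coursier fetch org.typelevel::cats-core:2.1.0
```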
Wojtek Pituła
@sbrunk thanks!
Wojtek Pituła
I have one more question, hopefully a simple yes/no one, but only partially related to almond. Is it possible to store notebook state? I have a use case in which I need to wait in the middle of a notebook for a potentially very long time, so I would like to stop it and, after I get a completion event, revive it from where it stopped.
Ammonite session persistence comes to mind, but I'm curious if anyone has actually tried that.
Alfonso Roa
@sbrunk great, thanks!
Alfonso Roa
I also added coursier fetch --sources <lib> to skip all the downloads in the notebook; it works great, thanks!
Sören Brunk
@alfonsorr I just remembered that you can even take it one step further. You can execute a notebook programmatically using nbconvert in the docker build, which will do a coursier fetch as well. The advantage is that it will stay in sync with your import $ivy ... statements in the notebook.
I did that for my Scala Days talk about almond because I used a notebook as live slides. https://github.com/sbrunk/scaladays-2019/blob/master/scripts/jupyter.sh#L87
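That nbconvert step might look like this in a Dockerfile; the notebook path, output name, and kernel name are assumptions:

```dockerfile
# Executing the notebook at build time runs its `import $ivy` lines,
# fetching the dependencies into the coursier cache -- and it stays
# in sync with whatever the notebook actually imports.
RUN jupyter nbconvert --to notebook --execute \
      --ExecutePreprocessor.kernel_name=scala \
      notebooks/deps.ipynb --output deps-executed.ipynb
```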
@Krever I think saved ammonite sessions don't persist through different runs of ammonite but I'm not sure
Wojtek Pituła
Yeah, seems like it ;/ I just went through the docs and can't find anything about out-of-process persistence.
Alfonso Roa
I was thinking of launching a Scala script with the imports, but that idea is much better.
hygt
Hello, this is more related to ammonite-spark, but I'm trying to run the REPL, and then hopefully notebooks, on my company cluster; it's running Hortonworks/Cloudera HDP 2.6.5.
Creating the Spark session fails:
Exception in thread "main" java.lang.NoClassDefFoundError: scala/MatchError
    at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:833)
    at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Caused by: java.lang.ClassNotFoundException: scala.MatchError
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 2 more
This looks like a typical binary compatibility conflict, but I have no idea how to track it down.
Sören Brunk
@hygt do the Scala versions of your Spark installation and Ammonite match? Most Spark distributions are still on Scala 2.11 (Spark 2.4 supports Scala 2.12, but it's not the default), while Ammonite dropped Scala 2.11 support after version 1.6.7.
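Mismatches like this come down to binary versions: Spark artifacts are published per binary Scala version ("2.12"), while Ammonite and almond are published per full version ("2.12.8"). A small self-contained helper (not part of almond) to print what your kernel is actually running:

```scala
object VersionCheck {
  // Extract the binary version ("2.12") from a full version ("2.12.8").
  def binaryVersion(full: String): String =
    full.split('.').take(2).mkString(".")

  def main(args: Array[String]): Unit = {
    // Version of the Scala library currently on the classpath.
    val running = scala.util.Properties.versionNumberString
    println(s"Running Scala $running (binary ${binaryVersion(running)})")
  }
}
```

Compare its output against the Scala version printed in your Spark distribution's banner.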
hygt
Yes, I know.
I've rebuilt Spark and it seems to work now
I'm not using the provided Spark binaries, exactly because of Scala 2.11
now I get a connectivity error (from the executor to the client?) but I guess that's unrelated, let's do more digging...
so I've tried many things but I always have issues if I load local Spark jars
unsetting SPARK_HOME gets me a little further
so maybe I should publish my patched version of Spark to our local artifactory and let AmmoniteSparkSession fetch the jars
Alfonso Roa
Can you show the imports and the way you create the SparkSession?
And be sure to use a 2.11 kernel and Spark version 2.3.2.
hygt
I'm using Scala 2.12, Spark 2.4.4 and ammonite-spark 0.9.0.
I was trying something like this:
import $ivy.`sh.almond::ammonite-spark:0.9.0`
import ammonite.ops._

val sparkHome = sys.env("SPARK_HOME")
val sparkJars = ls ! Path(sparkHome) / 'jars


import org.apache.spark.sql._

val spark =
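For reference, the usual ammonite-spark pattern for creating the session looks roughly like this; the master and config values are illustrative, following the general shape of the ammonite-spark README rather than anything confirmed in this conversation:

```scala
import $ivy.`sh.almond::ammonite-spark:0.9.0`
import org.apache.spark.sql._

val spark = {
  AmmoniteSparkSession.builder()
    .master("yarn")                          // illustrative
    .config("spark.executor.instances", "2") // illustrative
    .getOrCreate()
}
```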