Sören Brunk
@sbrunk
@vherasme you can use the -v option to mount a host directory into your container. See https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
Sören Brunk
@sbrunk
docker run -it --rm -v /sysroot/home/victor/Documentos/:/home/jovyan/data -p 8888:8888 almondsh/almond:latest should work for you
Victor M
@vherasme
Thanks a lot. One last thing, it won't allow me to create new notebooks: An error occurred while creating a new notebook. Permission denied: data/Untitled.ipynb
I was able to create the notebook in the work directory. Will this notebook disappear once I quit?
Sören Brunk
@sbrunk
Yes only the data directory will persist. You should check the write permissions of your documents dir. Or you could also mount another directory too.
Victor M
@vherasme

Yes only the data directory will persist. You should check the write permissions of your documents dir. Or you could also mount another directory too.

Thanks a lot. It's all working now

michalrudko
@mrjoseph84
Hi All, is there currently a way to install the almond.sh kernel in offline mode? I am unfortunately behind the corporate firewall and cannot install it with the command suggested in the docs, ./coursier launch almond -- --install. I saw the issue on GitHub: almond-sh/almond#145, however the link in the answer is no longer valid. I'd appreciate tips on how to go about it. Thanks!
Sören Brunk
@sbrunk
@mrjoseph84 You could try to generate a standalone launcher as described in the docs: https://almond.sh/docs/install-other#all-included-launcher
michalrudko
@mrjoseph84

@mrjoseph84 You could try to generate a standalone launcher as described in the docs: https://almond.sh/docs/install-other#all-included-launcher

Thanks! I'll check that.

michalrudko
@mrjoseph84
Hello, is there any way to pass the Spark conf parameters via environment variables, so that you don't need to specify them in the notebook? For PySpark it's PYSPARK_SUBMIT_ARGS pyspark-shell, for Zeppelin it's SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh (https://zeppelin.apache.org/docs/0.5.5-incubating/interpreter/spark.html). Is there any way to pass the envs in a similar way in the Almond kernel (e.g. via some Ammonite configs)?
Alexandre Archambault
@alexarchambault
@mrjoseph84 SPARK_CONF_DEFAULTS is taken into account. It should contain a path. The file at that path is read, and should contain lines like spark.foo value (a space separating the property name from the value). Lines starting with # are ignored.
That doesn't allow specifying the options directly in an environment variable though. But support for that could be added. It should be a matter of adding some logic reading an env var around here.
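For reference, a file pointed to by SPARK_CONF_DEFAULTS would look something like the sketch below; the property names and values are only illustrative, not taken from the conversation:

```properties
# Read at session start when SPARK_CONF_DEFAULTS points at this file.
# Format: property name, a single space, then the value.
# Lines starting with # are ignored.
spark.executor.instances 4
spark.executor.memory 2g
```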
michalrudko
@mrjoseph84
@alexarchambault thanks! This is exactly what we used.
sandrolabruzzo
@sandrolabruzzo
Hi All, I need help. I'm having problems running Spark in YARN mode on Jupyter using almond; I always get the error: Caused by: java.io.IOException: No FileSystem for scheme: http
If I use the Ammonite shell instead and import the same library, it works.
This is my snippet of code:

import $ivy.`org.apache.spark::spark-sql:2.4.0`
import $ivy.`sh.almond::ammonite-spark:0.5.0`
import org.apache.spark.sql._

val spark = {
  AmmoniteSparkSession.builder()
    .master("yarn")
    .config("spark.executor.instances", "20")
    .config("spark.executor.memory", "2g")
    .getOrCreate()
}
def sc = spark.sparkContext

val rdd = sc.parallelize(1 to 100000000, 100)

val n = rdd.map(_ + 1).sum()
I use almond:0.5.0 --scala 2.11.12
ammonite:1.6.7
I'm getting Error: Unable to access jarfile /opt/conda/share/jupyter/kernels/scala/launcher.jar after installing; I know I can probably just chmod +r /opt/conda/share/jupyter/kernels/scala/ but did I do something wrong for it to be installed with -rwx------?
5 replies
Is it possible to get widgets along the lines of ipython widget's file upload or select? I haven't seen any examples doing more than string inputs.
@nonickfx_gitlab
Hi all, I'd like to store snippets of code from a cell in Jupyter directly to a text file. In IPython, one could do that using magic commands (%...). Any pendant to that in Almond? I found some discussions from 2016 stating it's not possible, but maybe things changed in the meantime? Thanks
4 replies
Brian Howard
@bhoward
Hi, I'm trying to use the almond kernel with jupyter-book, but it doesn't like the ANSI color codes generated by ammonite. It is supposed to be possible to turn off the colors with interp.colors() = ammonite.util.Colors.BlackWhite, but that doesn't seem to have any effect from within Jupyter (no errors, it just doesn't turn off the color). It does work from an ammonite REPL. Any ideas? I don't want to file an issue before I know whether this is on the almond side or the ammonite side (and there's already an issue with jupyter-book to handle colors, but I don't think it's going to happen soon).
7 replies
Tommaso Schiavinotto
@Teudimundo_gitlab
Hi there, I'm importing a script in a notebook through import $file.script. It looks like once the script has been compiled successfully and imported, it will no longer be reimported when the cell is re-evaluated, even if the script changes. Is there a way to force recompilation or a reimport of the script without restarting the kernel?
2 replies
Olivier Deckers
@olivierdeckers
Hi, it looks like almond is not downloading runtime dependencies of libraries. If I run:

import $ivy.`io.grpc:grpc-core:1.30.2`
io.perfmark.PerfMark

I get "object perfmark is not a member of package io", while perfmark should be a runtime dependency for grpc-core, and should be downloaded and put on the classpath as well. Is there some setting to change this behavior?
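In the meantime, one possible workaround is to add the runtime-scoped dependency explicitly in a cell. This is only a sketch: the io.perfmark:perfmark-api artifact and the 0.19.0 version are assumptions on my part, so check the POM of io.grpc:grpc-core:1.30.2 for what it actually declares.

```scala
// Workaround sketch (almond/Ammonite session): pull the runtime-scoped
// dependency in explicitly, since only compile-scoped deps are fetched.
// The perfmark version below is an assumption - verify it against the
// grpc-core POM before relying on it.
import $ivy.`io.grpc:grpc-core:1.30.2`
import $ivy.`io.perfmark:perfmark-api:0.19.0`

io.perfmark.PerfMark  // should now resolve
```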

Anton Sviridov
@keynmol

apologies if this is too deep of a jupyter vs. jupyterlab question.

I have a custom kernel written on top of Almond and it works great in old jupyter notebook.

The main crux is that in execute I start a computation asynchronously and immediately return success:

val id: String = ...

ExecuteResult.Success(
  DisplayData.markdown("_awaiting results_").withId(s"$id-output")
)

And then the async process will use $id-output to render the results as html.

All the other output updates work in JupyterLab, apart from rendering the final output. Has anything changed in Almond's/Jupyter's API?
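For the final output, almond's updatable displays may be an alternative to managing display ids by hand. A sketch, assuming the `almond.display.Markdown` helper with the `withContent`/`update` methods described in the almond docs; the exact method names should be double-checked against the almond version in use:

```scala
import almond.display.Markdown

// Show a placeholder right away and keep the handle around.
val handle = Markdown("_awaiting results_")
handle.display()

// Later, from the async computation, overwrite the same output in place:
handle.withContent("**done**").update()
```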

Matthew Hoggan
@mehoggan

Hi, when trying to add a repository in Almond I keep getting:

cmd1.sc:3: object repositories is not a member of package ammonite.interp
val res1_1 = interp.repositories() ++= Seq(/*...*/)

from this code:

import coursierapi._

interp.repositories() ++= Seq(/*...*/)

How do I fix this?
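For comparison, the form shown in the almond docs uses `coursierapi.MavenRepository`; the error above suggests the name `interp` is resolving to the `ammonite.interp` package rather than the kernel's interpreter API, possibly because of a conflicting import earlier in the session. A sketch (the jitpack URL is only an illustrative repository; this runs inside an almond session only):

```scala
// Per the almond docs: extra resolvers for subsequent `import $ivy`
// statements are added via coursierapi. The URL is just an example.
import coursierapi._

interp.repositories() ++= Seq(
  MavenRepository.of("https://jitpack.io")
)
```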
Florian Magin
@fmagin
Does the Almond Kernel support embedding into another application? I.e. some application that starts it as a thread so the kernel has access to all the classes of the JVM context (via the same class loader, so the static variables are shared)
20 replies
I added support for this in the Kotlin Jupyter Kernel recently (Kotlin/kotlin-jupyter#102) and have a real use case for this
Florian Magin
@fmagin
It seems like it should support embedding like I described, if the class loader option (-i, as far as I understand) is set to false. I'll give this a try some time.
Lanking
@lanking520

I am trying to use Spark 3.0 with a local standalone cluster setup. I simply created 1 master and 1 worker locally. However, the job keeps crashing with this error:

20/11/23 15:55:24 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Unable to create executor due to null
...
Caused by: java.io.IOException: No FileSystem for scheme: http
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)

It seems all jars are uploaded to the Spark server, and Spark is trying to fetch them.

import $ivy.`org.apache.spark:spark-sql_2.12:3.0.0`
import org.apache.spark.sql._

val spark = {
  NotebookSparkSession.builder()
    .master("spark://localhost:7077")
    .getOrCreate()
}
spark.conf.getAll.foreach(pair => println(pair._1 + ":" + pair._2))
def sc = spark.sparkContext
val rdd = sc.parallelize(1 to 100000000, 100)
val n = rdd.map(_ + 1).sum()

You can reproduce this with the above code (please create a standalone cluster beforehand):

curl -O https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop2.7.tgz
tar zxvf spark-3.0.0-bin-hadoop2.7.tgz
mv spark-3.0.0-bin-hadoop2.7/ spark
export SPARK_MASTER_HOST=localhost
export SPARK_WORKER_INSTANCES=1
./spark/sbin/start-master.sh
./spark/sbin/start-slave.sh spark://localhost:7077

5 replies
This is tested on both CentOS 7 and Mac with JDK 11, Scala 2.12.10, Almond (latest) + Almond Spark (0.10.9)
Joaquín Chemile
@jchemile
Hello! Which versions of Jupyter Notebook does Almond work with? I'm facing this problem:

Traceback (most recent call last):
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\web.py", line 1699, in _execute
    result = await result
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\handlers.py", line 73, in post
    type=mtype))
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 79, in create_session
    kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 92, in start_kernel_for_session
    self.kernel_manager.start_kernel(path=kernel_path, kernel_name=kernel_name)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 160, in start_kernel
    super(MappingKernelManager, self).start_kernel(**kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\multikernelmanager.py", line 110, in start_kernel
    km.start_kernel(**kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\manager.py", line 259, in start_kernel
    **kw)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\manager.py", line 204, in _launch_kernel
    return launch_kernel(kernel_cmd, **kw)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\launcher.py", line 138, in launch_kernel
    proc = Popen(cmd, **kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Users\joaqchem\anaconda3\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

I have Python 3.7.3, Scala 2.11.12 and Almond 0.6.0
Anton Sviridov
@keynmol
I think there might be a subtle problem with Almond and 2.13. It depends on ammonite's repl artifact:

[info] org.jline:jline-terminal:3.14.1
[info] +-com.lihaoyi:ammonite-repl_2.13.3:2.2.0-4-4bd225e
[info] | +-sh.almond:scala-interpreter_2.13.3:0.10.9 [S]

which actually uses jline 3.14.1. I think that conflicts with Scala's own jline (which was changed to 3.15.0 in 2.13), which isn't binary compatible, so missinglink barks:

[error] Category: FIELD NOT FOUND
[error] In artifact: jline-3.15.0.jar
[error] In class: org.jline.utils.Log
[error] In method: isEnabled(java.util.logging.Level):122
[error] Access to: org.jline.utils.Log.logger
[error] Problem: Field not found: logger
[error] Found in: jline-terminal-3.14.1.jar

Even the most recent releases of ammonite-repl use 3.14.1, which means merely updating almond won't help on 2.13. This is perhaps obscure, but I just want to check with @alexarchambault that I'm understanding this correctly? And transitive exclusion might be the only way to fix it? Or silencing missinglink.
Joaquín Chemile
@jchemile
Thanks!!
Anton Sviridov
@keynmol
oh sorry @jchemile, that's unrelated to your problem :) With regard to which version - I sort of just get whatever pip3 installs, and it works. Maybe try a newer almond version?
JasonVranek
@JasonVranek
Hello! I currently use the Scala embedded DSL 'chisel' to do hardware design and feel like a notebook environment lends itself well to this. I just had a couple of questions: 1) is there a way to split a class across multiple cells? (some of these hardware modules end up being hundreds of lines). I'm currently tinkering with implicit conversions to add methods but open to other options. 2) I am very excited about nbdev (https://github.com/fastai/nbdev) and am curious if there have been any attempts to port some of these features to Scala. Thanks!
Anton Sviridov
@keynmol
Always dreamt of having a job where I can use chisel :D 1) does defining multiple traits and then mixing them in in one final class work with chisel?
JasonVranek
@JasonVranek
It does! Thanks Anton, I just tried this and it does work. A much, much cleaner way of achieving this. (btw chisel is awesome)
Anton Sviridov
@keynmol
Fantastic :) And I think your idea about chisel in notebooks makes a lot of sense, I should try it.
JasonVranek
@JasonVranek
Not quite my idea - they've already set up almond + chisel as an onboarding tool here: https://github.com/freechipsproject/chisel-bootcamp but I'm unaware of anyone currently using this for more serious development
nicknn7
@nicknn7
Hi everyone, we are looking to increase the heap space available to the Scala kernel. When running ./almond --install --command "java -XX:MaxRAMPercentage=80.0 -jar almond" --copy-launcher true on installation, the kernel.json file gets generated correctly (i.e. it includes the -XX:... option), but for some reason, inside the notebook, Runtime.getRuntime().maxMemory()/1000000000 always yields 32GB, no matter which RAM percentage we pass to the installer. The kernel is installed globally. When we pass a small number, it seems to have an effect, but for any percentage corresponding to more than 32GB, it always stays at 32GB. Any ideas what we're doing wrong? Any advice appreciated. Thanks!
2 replies
André L. F. Pinto
@andrelfpinto
Hello, I am trying to run this code:

case class Foo(id: Long, name: String)
val constructor = classOf[Foo].getConstructors()(0)
val args = Array[AnyRef](new java.lang.Integer(1), "Foobar")
val instance = constructor.newInstance(args: _*).asInstanceOf[Foo]

However, I am getting:

java.lang.IllegalArgumentException: wrong number of arguments
  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  ammonite.$sess.cmd1$Helper.<init>(cmd1.sc:2)
  ammonite.$sess.cmd1$.<clinit>(cmd1.sc:7)

I see that the constructor is:

java.lang.reflect.Constructor[?0] = public ammonite.$sess.cmd0$Helper$Foo(ammonite.$sess.cmd0$Helper,long,java.lang.String)

Where can I get this Helper?
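Two separate things are going on in that snippet. Ammonite/almond compiles each cell inside a wrapper class (the cmd0$Helper in the signature), so a case class defined in a cell becomes an inner class whose first constructor parameter is the enclosing wrapper instance. Separately, id is declared as Long, so it must be boxed as java.lang.Long, not java.lang.Integer. The sketch below shows the same reflection working outside the REPL wrapper, where Foo is top-level and has no enclosing instance:

```scala
// Standalone sketch: at top level (outside the REPL's Helper wrapper),
// Foo has no enclosing instance, so the constructor takes exactly
// (long, String). Note the boxing: id is a Long, so a java.lang.Long
// must be passed - boxing it as java.lang.Integer would also fail,
// with "argument type mismatch".
case class Foo(id: Long, name: String)

object ReflectDemo {
  def main(args: Array[String]): Unit = {
    val ctor = classOf[Foo].getConstructors()(0)
    val ctorArgs = Array[AnyRef](java.lang.Long.valueOf(1L), "Foobar")
    val instance = ctor.newInstance(ctorArgs: _*).asInstanceOf[Foo]
    assert(instance == Foo(1L, "Foobar"))
    println(instance)
  }
}
```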
Pham Nguyen
@akizminet
Hello, I'm new to this kernel. I don't know how to hide the progress bar of almond-spark.
Kien Dang
@kiendang
Hi, may I know how to get contextual help working? I passed both --metabrowse to the launcher and --sources --default=true to coursier but it doesn't work for me.
Kien Dang
@kiendang
Ah, I was using Scala 2.13.4 whereas metabrowse only supports 2.13.1.
RndMnkIII
@RndMnkIII
How could I use the Jupyter widgets (IntSlider, TextBox, ...) in JupyterLab with the Almond kernel?