Henry
@hygt
also our data scientists would rather use notebooks :smiley:
Pedro Larroy
@larroy
is almond working? I'm trying to run it in Jupyter Notebook and hitting all kinds of problems
First I bumped into this issue: almond-sh/almond#508
now it seems the kernel is hanging
is there a way to debug it?
I separated my statements into smaller chunks and now it seems to work
weird
Wojtek Pituła
@Krever
how would you approach rendering a basic graph diagram? just some nodes and edges
Wojtek Pituła
@Krever
@alexarchambault I see that in master Ammonite is at version 2.0.4, but cs resolve -t sh.almond:scala-kernel_2.13.1:0.9.1 shows it at 1.7.4. Could we have a release with the newer version?
Sören Brunk
@sbrunk
@Krever Alex has just released 0.10.0 which updates ammonite to 2.1.4
Wojtek Pituła
@Krever
Great, thanks for the ping!
Victor M
@vherasme
Hello People
I am running almond with docker with: docker run -it --rm -p 8888:8888 almondsh/almond:latest. How can I access my local file system?
I want to read a file in /sysroot/home/victor/Documentos/test.csv
Sören Brunk
@sbrunk
@vherasme you can use the -v option to mount a host directory into your container. See https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
Sören Brunk
@sbrunk
docker run -it --rm -v /sysroot/home/victor/Documentos/:/home/jovyan/data -p 8888:8888 almondsh/almond:latest should work for you
Victor M
@vherasme
Thanks a lot. One last thing, it won't allow me to create new notebooks: An error occurred while creating a new notebook. Permission denied: data/Untitled.ipynb
I was able to create the notebook in the work directory. Will this notebook disappear once I quit?
Sören Brunk
@sbrunk
Yes, only the data directory will persist. You should check the write permissions of your documents dir, or you could mount another directory instead.
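The permission error above usually means the user inside the container (jovyan in the Jupyter Docker images) can't write to the bind-mounted host directory. A minimal sketch of one possible fix, demonstrated on a scratch directory rather than the real host path (/sysroot/home/victor/Documentos in the messages above):

```shell
# Sketch: make the mounted directory writable for the container user.
# A scratch directory stands in for the real host path here.
DOCS_DIR=$(mktemp -d)
chmod o+w "$DOCS_DIR"   # grant write to "other" so the container user can create notebooks
stat -c '%a %n' "$DOCS_DIR"
```

Running the container with matching uid/gid options is another route, but making the directory writable is the least invasive check.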
Victor M
@vherasme

Yes only the data directory will persist. You should check the write permissions of your documents dir. Or you could also mount another directory too.

Thanks a lot. It's all working now

michalrudko
@michalrudko
Hi All, is there currently a way to install the almond.sh kernel in offline mode? I am unfortunately behind the corporate firewall and cannot install it with the command suggested in the docs: ./coursier launch almond -- --install. I saw the issue on GitHub: almond-sh/almond#145, however the link in the answer is no longer valid. I'd appreciate tips on how to go about it. Thanks!
Sören Brunk
@sbrunk
@mrjoseph84 You could try to generate a standalone launcher as described in the docs: https://almond.sh/docs/install-other#all-included-launcher
michalrudko
@michalrudko

@mrjoseph84 You could try to generate a standalone launcher as described in the docs: https://almond.sh/docs/install-other#all-included-launcher

Thanks! I'll check that.

michalrudko
@michalrudko
Hello, is there any way to pass the Spark conf parameters via environment variables so that you don't need to specify them in the notebook? For PySpark it's PYSPARK_SUBMIT_ARGS pyspark-shell, for Zeppelin it's SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh (https://zeppelin.apache.org/docs/0.5.5-incubating/interpreter/spark.html). Is there any way to pass the envs in a similar way to the Almond kernel (e.g. via some Ammonite configs)?
Alexandre Archambault
@alexarchambault
@mrjoseph84 SPARK_CONF_DEFAULTS is taken into account. It should contain a path. The file at that path is read, and should contain lines like spark.foo value (a space separating the property name from the value). Lines starting with # are ignored.
That doesn't allow specifying the options directly in an environment variable though, but support for that could be added. It should be a matter of adding some logic reading an env var around here.
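Based on the description above, a defaults file could be put together like this (the two spark.executor.* properties are only examples; the file path is a placeholder):

```shell
# Write a defaults file in the format described above:
# "spark.property value" pairs, one per line; '#' lines are ignored.
CONF_FILE=$(mktemp)
cat > "$CONF_FILE" <<'EOF'
# executor sizing
spark.executor.instances 20
spark.executor.memory 2g
EOF

# Point the kernel at the file; the variable holds a path, not the contents.
export SPARK_CONF_DEFAULTS="$CONF_FILE"
grep -v '^#' "$CONF_FILE"
```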
michalrudko
@michalrudko
@alexarchambault thanks! This is exactly what we used.
sandrolabruzzo
@sandrolabruzzo
Hi All, I need help. I have problems running Spark in YARN mode in Jupyter using almond; I always get the error: Caused by: java.io.IOException: No FileSystem for scheme: http
If I use the Ammonite shell and import the same library instead, it works
this is my snippet of code:
import $ivy.`org.apache.spark::spark-sql:2.4.0`
import $ivy.`sh.almond::ammonite-spark:0.5.0` 
import org.apache.spark.sql._

val spark = {
  AmmoniteSparkSession.builder()
    .master("yarn")    
    .config("spark.executor.instances", "20")
    .config("spark.executor.memory", "2g")
    .getOrCreate()
}
def sc = spark.sparkContext

val rdd = sc.parallelize(1 to 100000000, 100)

val n = rdd.map(_ + 1).sum()
I use almond:0.5.0 --scala 2.11.12
ammonite:1.6.7
Chad Selph
@chadselph
I'm getting Error: Unable to access jarfile /opt/conda/share/jupyter/kernels/scala/launcher.jar after installing; I know I can probably just chmod +r /opt/conda/share/jupyter/kernels/scala/ but did I do something wrong for it to be installed with -rwx------?
5 replies
Chad Selph
@chadselph
Is it possible to get widgets along the lines of ipython widget's file upload or select? I haven't seen any examples doing more than string inputs.
1 reply
Vadim G
@nonickfx_gitlab
Hi all, I'd like to store snippets of code from a cell in Jupyter directly to a text file. In IPython, one could do that using magic commands (%...). Is there any equivalent in Almond? I found some discussions from 2016 stating it's not possible, but maybe things changed in the meantime? Thanks
4 replies
Brian Howard
@bhoward
Hi, I'm trying to use the almond kernel with jupyter-book, but it doesn't like the ANSI color codes generated by ammonite. It is supposed to be possible to turn off the colors with interp.colors() = ammonite.util.Colors.BlackWhite, but that doesn't seem to have any effect from within Jupyter (no errors, it just doesn't turn off the color). It does work from an ammonite REPL. Any ideas? I don't want to file an issue before I know whether this is on the almond side or the ammonite side (and there's already an issue with jupyter-book to handle colors, but I don't think it's going to happen soon).
7 replies
Tommaso Schiavinotto
@Teudimundo_gitlab
Hi there, I'm importing a script in a notebook through import $file.script. It looks like once the script is compiled and imported successfully, it is no longer reimported when the cell is re-evaluated, even if the script has changed. Is there a way to force recompilation or reimport of the script without restarting the kernel?
2 replies
Olivier Deckers
@olivierdeckers

Hi, it looks like almond is not downloading runtime dependencies of libraries: If I run

import $ivy.`io.grpc:grpc-core:1.30.2`
io.perfmark.PerfMark

I get "object perfmark is not a member of package io", while perfmark should be a runtime dependency for grpc-core, and should be downloaded and put on the classpath as well. Is there some setting to change this behavior?

Anton Sviridov
@keynmol

apologies if this is too deep of a jupyter vs. jupyterlab question.

I have a custom kernel written on top of Almond and it works great in old jupyter notebook.

The main crux is that in execute I start a computation asynchronously and immediately return success:

val id: String = ...

ExecuteResult.Success(
  DisplayData.markdown("_awaiting results_").withId(s"$id-output"))

And then the async process will use $id-output to render the results as html.

all the other output updates work in jupyterlab apart from rendering the final output. Has anything changed in Almond's/Jupyter's API?

1 reply
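For reference, the update half of the pattern described above can be sketched with almond's display helpers. This is a sketch only, assuming a recent almond where the almond.display API is available; it has to run inside a notebook session, not as a standalone program, and method names may differ across versions:

```scala
// Sketch: runs inside an almond notebook cell, not standalone.
import almond.display.Markdown

// Show a placeholder; the helper keeps track of its own display id.
val placeholder = Markdown("_awaiting results_")
placeholder.display()

// Later, from the async computation, replace the content in place:
placeholder.withContent("**results ready**").update()
```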
Matthew Hoggan
@mehoggan

Hi, when trying to add a repository in Almond I keep getting:

cmd1.sc:3: object repositories is not a member of package ammonite.interp
val res1_1 = interp.repositories() ++= Seq(/*...*/)

Current code reads:

import coursierapi._

interp.repositories() ++= Seq(/*...*/)
How do I fix this?
1 reply
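For comparison, the form shown in the almond docs for adding a custom repository looks like the sketch below. The repository URL is a placeholder, and the "object repositories is not a member" error above may indicate an older Ammonite/almond where this API is not available, so version matters here:

```scala
// Sketch per the almond docs: add an extra Maven repository to the session.
// The URL is hypothetical; replace it with your own repository.
import coursierapi.MavenRepository

interp.repositories() ++= Seq(
  MavenRepository.of("https://example.com/maven-repo")
)
```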
Florian Magin
@fmagin
Does the Almond Kernel support embedding into another application? I.e. some application that starts it as a thread so the kernel has access to all the classes of the JVM context (via the same class loader, so the static variables are shared)
20 replies
I added support for this in the Kotlin Jupyter Kernel recently (Kotlin/kotlin-jupyter#102) and have a real use case for this
Florian Magin
@fmagin
it seems like it should support embedding like I described, if the class loader option (-i as far as I understand) is set to false. Will give this a try some time
Lanking
@lanking520

I am trying to use Spark 3.0 with a local standalone cluster setup. I simply create 1 master and 1 worker locally. However, the job keeps crashing with this issue:

20/11/23 15:55:24 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Unable to create executor due to null
...
Caused by: java.io.IOException: No FileSystem for scheme: http
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)

It seems all jars are uploaded to the Spark remote server and Spark tries to fetch them.

import $ivy.`org.apache.spark:spark-sql_2.12:3.0.0`
import org.apache.spark.sql._

val spark = {
  NotebookSparkSession.builder()
    .master("spark://localhost:7077")
    .getOrCreate()
}
spark.conf.getAll.foreach(pair => println(pair._1 + ":" + pair._2))
def sc = spark.sparkContext
val rdd = sc.parallelize(1 to 100000000, 100)
val n = rdd.map(_ + 1).sum()

You can reproduce this with the above code (please create a standalone cluster beforehand):

curl -O https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop2.7.tgz
tar zxvf spark-3.0.0-bin-hadoop2.7.tgz
mv spark-3.0.0-bin-hadoop2.7/ spark
export SPARK_MASTER_HOST=localhost
export SPARK_WORKER_INSTANCES=1
./spark/sbin/start-master.sh
./spark/sbin/start-slave.sh spark://localhost:7077
5 replies
This is tested on both CentOS 7 and Mac with JDK 11, Scala 2.12.10, Almond (latest) + Almond Spark (0.10.9)
Joaquín Chemile
@jchemile

Hello! Which versions of Jupyter Notebook does Almond work with? I'm facing this problem:

Traceback (most recent call last):
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\web.py", line 1699, in _execute
    result = await result
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\handlers.py", line 73, in post
    type=mtype))
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 79, in create_session
    kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 92, in start_kernel_for_session
    self.kernel_manager.start_kernel(path=kernel_path, kernel_name=kernel_name)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
    yielded = next(result)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 160, in start_kernel
    super(MappingKernelManager, self).start_kernel(**kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\multikernelmanager.py", line 110, in start_kernel
    km.start_kernel(**kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\manager.py", line 259, in start_kernel
    **kw)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\manager.py", line 204, in _launch_kernel
    return launch_kernel(kernel_cmd, **kw)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\launcher.py", line 138, in launch_kernel
    proc = Popen(cmd, **kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Users\joaqchem\anaconda3\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

I have Python 3.7.3, Scala 2.11.12 and Almond 0.6.0

Anton Sviridov
@keynmol

I think there might be a subtle problem with Almond and 2.13.

It depends on ammonite's repl artifact:

[info] org.jline:jline-terminal:3.14.1
[info]   +-com.lihaoyi:ammonite-repl_2.13.3:2.2.0-4-4bd225e
[info]   | +-sh.almond:scala-interpreter_2.13.3:0.10.9 [S]

Which actually uses jline 3.14.1, but I think that conflicts with Scala's own jline (which was bumped to 3.15.0 in 2.13), which isn't binary compatible, so missinglink complains:

[error] Category: FIELD NOT FOUND
[error]   In artifact: jline-3.15.0.jar
[error]     In class: org.jline.utils.Log
[error]       In method:  isEnabled(java.util.logging.Level):122
[error]       Access to: org.jline.utils.Log.logger
[error]       Problem: Field not found: logger
[error]       Found in: jline-terminal-3.14.1.jar

Even the most recent releases of ammonite-repl use 3.14.1, which means merely updating almond won't help on 2.13

This is perhaps obscure, but I just want to check with @alexarchambault that I'm understanding this correctly. Is a transitive exclusion the only way to fix it? Or silencing missinglink.
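If the diagnosis above is right, one possible workaround in an sbt build would be to override or exclude the transitive jline. This is a sketch only, with the version numbers taken from the dependency report above; whether either option is actually safe at runtime would need testing:

```scala
// build.sbt sketch: pin jline-terminal to the version Scala 2.13 expects,
// overriding the 3.14.1 pulled in via ammonite-repl.
dependencyOverrides += "org.jline" % "jline-terminal" % "3.15.0"

// Alternatively, exclude it from the almond interpreter dependency:
libraryDependencies += ("sh.almond" %% "scala-interpreter" % "0.10.9")
  .exclude("org.jline", "jline-terminal")
```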

Joaquín Chemile
@jchemile
Thanks!!
Anton Sviridov
@keynmol
oh sorry @jchemile that's unrelated to your problem :)
with regards to which version - I sort of just get whatever pip3 installs, and it works. Maybe try using a newer almond version?