Andrew
@sheerluck

@Krever

%%javascript
var kernel = IPython.notebook.kernel;
var thename = window.document.getElementById("notebook_name").innerHTML;
var command = "theNotebook = " + "'"+thename+"'";
kernel.execute(command);

Wojtek Pituła
@Krever
Awesome, thanks!
I wanted to hide it in my scala lib, but will figure sth out...
Sören Brunk
@sbrunk
@Krever you totally can. Here’s the almond version:
kernel.publish.js("""
  var thename = window.document.getElementById("notebook_name").innerHTML;
  var command = 'val theNotebook = "'+thename+'"';
  Jupyter.notebook.kernel.execute(command);
""")
Note that it only works in the classic notebook, not in JupyterLab due to security restrictions
Sören Brunk
@sbrunk
But you can put it into a library by adding scala-kernel-api as a provided dependency as described in the docs. https://almond.sh/docs/api-access-instances#from-a-library
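As a rough sketch of the provided-dependency setup the docs describe, a build.sbt entry could look like this (the version number is an assumption; match it to your installed kernel, and check the linked docs for the exact coordinates):

```scala
// build.sbt (sketch): depend on the almond kernel API at compile time only.
// "0.10.0" is an assumed version -- use the one matching your installed kernel.
libraryDependencies +=
  ("sh.almond" % "scala-kernel-api" % "0.10.0" % Provided)
    .cross(CrossVersion.full) // the kernel API is published per full Scala version
```

Marking it Provided keeps the API off your library's runtime classpath; the kernel supplies it when the notebook runs.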
@hygt Does it work for you now? Anyway, thanks for sharing what you’ve found trying to solve these issues.
Wojtek Pituła
@Krever
@sbrunk thanks, I didn't know such things were possible. I'll have to figure out something that works in JupyterLab, but now I have all the pieces on the almond side.
Henry
@hygt
@sbrunk yes I've gotten to a point where it works. I've built the Almond launcher with coursier's --assembly-rule exclude-pattern ... to get rid of Jackson and Json4s classes. No more binary compatibility issues
my setup is tricky because our codebase shares way too much code between our services and Spark jobs
we can get around bin compat issues with sbt-assembly shading rules
but I was a bit tired of moving 100+ MB fat JARs around just to do some exploration with the Spark shell
Henry
@hygt
also our data scientists would rather use notebooks :smiley:
Pedro Larroy
@larroy
is almond working? I'm trying to run it in jupyter notebook and finding all kinds of problems
First I bumped into this issue: almond-sh/almond#508
now it seems the kernel is hanging
is there a way to debug it?
I separated my statements into smaller chunks and now it seems to work
weird
Wojtek Pituła
@Krever
how would you approach rendering a basic graph diagram? just some nodes and edges
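One possible approach (a sketch, not an established almond recipe): build an inline SVG from the nodes and edges in plain Scala, then hand the resulting string to the notebook's HTML display. The SVG-building part below is plain Scala; how you display the string (e.g. via the kernel's publish/display API, as with kernel.publish.js earlier in this thread) is an assumption to check against the almond docs.

```scala
// Sketch: render a tiny node/edge graph as an inline SVG string.
case class Node(id: String, x: Int, y: Int)
case class Edge(from: String, to: String)

def graphSvg(nodes: Seq[Node], edges: Seq[Edge], w: Int = 300, h: Int = 200): String = {
  val byId = nodes.map(n => n.id -> n).toMap
  // Draw edges first so nodes render on top of them.
  val lines = edges.map { e =>
    val (a, b) = (byId(e.from), byId(e.to))
    s"""<line x1="${a.x}" y1="${a.y}" x2="${b.x}" y2="${b.y}" stroke="gray"/>"""
  }
  val circles = nodes.map { n =>
    s"""<circle cx="${n.x}" cy="${n.y}" r="12" fill="steelblue"/>""" +
    s"""<text x="${n.x}" y="${n.y + 4}" text-anchor="middle" fill="white">${n.id}</text>"""
  }
  s"""<svg width="$w" height="$h">${(lines ++ circles).mkString}</svg>"""
}

val svg = graphSvg(
  Seq(Node("A", 60, 60), Node("B", 200, 60), Node("C", 130, 150)),
  Seq(Edge("A", "B"), Edge("B", "C"), Edge("A", "C"))
)
```

For anything bigger than a handful of nodes, a JS library (e.g. something d3-based) published via the kernel's JS display would scale better than hand-rolled SVG.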
Wojtek Pituła
@Krever
@alexarchambault I see that on master Ammonite is at version 2.0.4, but cs resolve -t sh.almond:scala-kernel_2.13.1:0.9.1 shows it at 1.7.4. Could we have a release with the newer version?
Sören Brunk
@sbrunk
@Krever Alex has just released 0.10.0 which updates ammonite to 2.1.4
Wojtek Pituła
@Krever
Great, thanks for the ping!
Victor M
@vherasme
Hello People
I am running almond with docker with: docker run -it --rm -p 8888:8888 almondsh/almond:latest. How can I access my local file system?
I want to read a file in /sysroot/home/victor/Documentos/test.csv
Sören Brunk
@sbrunk
@vherasme you can use the -v option to mount a host directory into your container. See https://docs.docker.com/storage/bind-mounts/#start-a-container-with-a-bind-mount
Sören Brunk
@sbrunk
docker run -it --rm -v /sysroot/home/victor/Documentos/:/home/jovyan/data -p 8888:8888 almondsh/almond:latest should work for you
Victor M
@vherasme
Thanks a lot. One last thing, it won't allow me to create new notebooks: An error occurred while creating a new notebook. Permission denied: data/Untitled.ipynb
I was able to create the notebook in the work directory. Will this notebook disappear once I quit?
Sören Brunk
@sbrunk
Yes, only the data directory will persist. You should check the write permissions of your documents dir, or mount another directory instead.
Victor M
@vherasme

Yes, only the data directory will persist. You should check the write permissions of your documents dir, or mount another directory instead.

Thanks a lot. It's all working now

michalrudko
@mrjoseph84
Hi All, is there currently a way to install the almond.sh kernel in offline mode? I am unfortunately behind a corporate firewall and cannot install it with the command suggested in the docs: ./coursier launch almond -- --install. I saw the issue on GitHub: almond-sh/almond#145, however the link in the answer is not valid anymore. I'd appreciate tips on how to go about it. Thanks!
Sören Brunk
@sbrunk
@mrjoseph84 You could try to generate a standalone launcher as described in the docs: https://almond.sh/docs/install-other#all-included-launcher
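For the record, generating such a launcher looks roughly like this (a sketch; the exact flags, kernel coordinates, and version are assumptions, so check the linked docs for the authoritative command):

```shell
# Sketch (coordinates/version assumed -- see the almond install docs).
# On a machine WITH internet access, build a launcher embedding all JARs:
cs bootstrap --standalone \
  sh.almond:scala-kernel_2.12.11:0.10.0 \
  -o almond

# Copy the resulting ./almond file behind the firewall, then install the kernel:
./almond --install
```

Because --standalone packs the dependencies into the launcher itself, the install step needs no network access.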
michalrudko
@mrjoseph84

@mrjoseph84 You could try to generate a standalone launcher as described in the docs: https://almond.sh/docs/install-other#all-included-launcher

Thanks! I'll check that.

michalrudko
@mrjoseph84
Hello, is there any way to pass Spark conf parameters via environment variables, so that you don't need to specify them in the notebook? For PySpark it's PYSPARK_SUBMIT_ARGS pyspark-shell, for Zeppelin it's SPARK_SUBMIT_OPTIONS in conf/zeppelin-env.sh (https://zeppelin.apache.org/docs/0.5.5-incubating/interpreter/spark.html). Is there any way to pass the envs in a similar way in the Almond kernel (e.g. via some Ammonite configs)?
Alexandre Archambault
@alexarchambault
@mrjoseph84 SPARK_CONF_DEFAULTS is taken into account. It should contain a path; the file at that path is read and should contain lines like spark.foo value (a space separating the property name from the value). Lines starting with # are ignored.
That doesn't allow specifying the options directly in an environment variable, though. But support for that could be added; it should be a matter of adding some logic to read an env var around here.
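Based on that description, the file SPARK_CONF_DEFAULTS points to would look something like this (property values and the path are purely illustrative):

```
# Example contents of the file that SPARK_CONF_DEFAULTS points to.
# One property per line: name, a space, then the value.
# Lines starting with '#' are ignored.
spark.executor.instances 20
spark.executor.memory 2g

# And the environment variable itself (path is illustrative):
#   export SPARK_CONF_DEFAULTS=/etc/almond/spark-defaults.conf
```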
michalrudko
@mrjoseph84
@alexarchambault thanks! This is exactly what we used.
sandrolabruzzo
@sandrolabruzzo
Hi All, I need help. I have some problems running Spark in YARN mode on Jupyter using almond; I always get the error: Caused by: java.io.IOException: No FileSystem for scheme: http
If I use the Ammonite shell and import the same library instead, it works
this is my snippet of code:
import $ivy.`org.apache.spark::spark-sql:2.4.0`
import $ivy.`sh.almond::ammonite-spark:0.5.0` 
import org.apache.spark.sql._

val spark = {
  AmmoniteSparkSession.builder()
    .master("yarn")    
    .config("spark.executor.instances", "20")
    .config("spark.executor.memory", "2g")
    .getOrCreate()
}
def sc = spark.sparkContext

val rdd = sc.parallelize(1 to 100000000, 100)

val n = rdd.map(_ + 1).sum()
I use almond:0.5.0 --scala 2.11.12
ammonite:1.6.7
Chad Selph
@chadselph
I'm getting Error: Unable to access jarfile /opt/conda/share/jupyter/kernels/scala/launcher.jar after installing; I know I can probably just chmod +r /opt/conda/share/jupyter/kernels/scala/ but did I do something wrong for it to be installed with -rwx------?
Chad Selph
@chadselph
Is it possible to get widgets along the lines of ipython widget's file upload or select? I haven't seen any examples doing more than string inputs.
Vadim G
@nonickfx_gitlab
Hi all, I'd like to store snippets of code from a cell in Jupyter directly to a text file. In IPython you can do that with magic commands (%...). Is there any equivalent in Almond? I found some discussions from 2016 stating it's not possible, but maybe things have changed in the meantime? Thanks
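Absent an IPython-style %%writefile magic, one workaround (a plain-Scala sketch, nothing almond-specific; the file name is illustrative) is to have the cell save the snippet itself:

```scala
// Sketch: save a code snippet to a file from within a cell using plain java.nio.
import java.nio.file.{Files, Paths}
import java.nio.charset.StandardCharsets

val snippet = "def inc(x: Int): Int = x + 1\n"

// Path is illustrative -- pick wherever the snippet should live.
val target = Paths.get("snippet.sc")
Files.write(target, snippet.getBytes(StandardCharsets.UTF_8))

// Read it back to confirm the round trip.
val saved = new String(Files.readAllBytes(target), StandardCharsets.UTF_8)
```

The obvious drawback versus %%writefile is that the snippet must be a string literal rather than the cell's own source; capturing the live cell source would need kernel support.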
Brian Howard
@bhoward
Hi, I'm trying to use the almond kernel with jupyter-book, but it doesn't like the ANSI color codes generated by ammonite. It is supposed to be possible to turn off the colors with interp.colors() = ammonite.util.Colors.BlackWhite, but that doesn't seem to have any effect from within Jupyter (no errors, it just doesn't turn off the color). It does work from an ammonite REPL. Any ideas? I don't want to file an issue before I know whether this is on the almond side or the ammonite side (and there's already an issue with jupyter-book to handle colors, but I don't think it's going to happen soon).
Tommaso Schiavinotto
@Teudimundo_gitlab
Hi there, I'm importing a script into a notebook with import $file.script. It looks like once the script has compiled successfully and been imported, it is no longer reimported when the cell is re-evaluated, even if the script has changed. Is there a way to force recompilation or reimport of the script without restarting the kernel?
Olivier Deckers
@olivierdeckers

Hi, it looks like almond is not downloading runtime dependencies of libraries: If I run

import $ivy.`io.grpc:grpc-core:1.30.2`
io.perfmark.PerfMark

I get "object perfmark is not a member of package io", even though perfmark is a runtime dependency of grpc-core and should be downloaded and put on the classpath as well. Is there a setting to change this behavior?
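Until the resolution behavior is settled, one workaround is to import the runtime dependency explicitly in the notebook. This is a sketch: the perfmark coordinates and version below are assumptions, so check grpc-core's POM for the actual ones.

```scala
// Notebook-only sketch: $ivy imports work in almond/Ammonite, not plain Scala.
import $ivy.`io.grpc:grpc-core:1.30.2`
// Version assumed -- check which perfmark-api version grpc-core 1.30.2 declares.
import $ivy.`io.perfmark:perfmark-api:0.19.0`

io.perfmark.PerfMark // should now resolve
```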