Olivier Deckers
@olivierdeckers

Hi, it looks like almond is not downloading runtime dependencies of libraries. If I run

import $ivy.`io.grpc:grpc-core:1.30.2`
io.perfmark.PerfMark

I get "object perfmark is not a member of package io", while perfmark should be a runtime dependency for grpc-core, and should be downloaded and put on the classpath as well. Is there some setting to change this behavior?
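A hedged workaround sketch, in case it helps: pulling the runtime-scoped dependency in explicitly puts it on the classpath. The io.perfmark:perfmark-api:0.19.0 coordinates below are an assumption about what grpc-core 1.30.2 expects, not something confirmed in this chat.

import $ivy.`io.grpc:grpc-core:1.30.2`
// explicitly add the runtime-scoped dependency that isn't being resolved
import $ivy.`io.perfmark:perfmark-api:0.19.0`

classOf[io.perfmark.PerfMark]  // resolves once the jar is on the classpath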

Anton Sviridov
@keynmol

apologies if this is too deep of a jupyter vs. jupyterlab question.

I have a custom kernel written on top of Almond and it works great in old jupyter notebook.

The main crux is that in execute I start a computation asynchronously and immediately return success:

          val id: String = ...

          ExecuteResult.Success(
            DisplayData.markdown("_awaiting results_").withId(s"$id-output"))

And then the async process will use $id-output to render the results as html.

All the other output updates work in JupyterLab apart from rendering the final output. Has anything changed in Almond's/Jupyter's API?

1 reply
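For reference, a hedged sketch of the update path being described, assuming the OutputHandler passed to the interpreter's execute is what the async task uses later; the updateDisplay and DisplayData.html names are taken from memory of almond's interpreter API and may differ between versions. One thing worth checking (also an assumption): JupyterLab may only apply update_display_data to outputs published as display_data, not to the execute_result itself, so publishing the placeholder through the OutputHandler instead of returning it might behave differently.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

import almond.interpreter.ExecuteResult
import almond.interpreter.api.{DisplayData, OutputHandler}

// Sketch: return a placeholder keyed by an explicit display id, then have the
// async computation overwrite that same id through the OutputHandler.
def runAsync(outputHandler: OutputHandler, compute: Future[String]): ExecuteResult = {
  val id = java.util.UUID.randomUUID().toString

  compute.foreach { resultHtml =>
    outputHandler.updateDisplay(  // assumed API, see note above
      DisplayData.html(resultHtml).withId(s"$id-output"))
  }

  ExecuteResult.Success(
    DisplayData.markdown("_awaiting results_").withId(s"$id-output"))
}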
Matthew Hoggan
@mehoggan

Hi, when trying to add a repository in Almond I keep getting:

cmd1.sc:3: object repositories is not a member of package ammonite.interp
val res1_1 = interp.repositories() ++= Seq(/*...*/)

Current code reads:

import coursierapi._

interp.repositories() ++= Seq(/*...*/)

How do I fix this?
1 reply
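A hedged sketch of the usage that code is aiming for (the repository URL is a hypothetical example). The error message suggests the name interp is being resolved to the ammonite.interp package rather than the kernel's interp API, so running the import and the mutation in separate cells, as below, is one thing worth trying; that diagnosis is an assumption, not a confirmed fix.

// cell 1
import coursierapi._

// cell 2 -- the URL below is a hypothetical example
interp.repositories() ++= Seq(
  MavenRepository.of("https://repo.example.com/maven2")
)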
Florian Magin
@fmagin
Does the Almond kernel support embedding into another application? I.e. some application that starts it as a thread, so the kernel has access to all the classes of the JVM context (via the same class loader, so static variables are shared).
20 replies
I added support for this in the Kotlin Jupyter Kernel recently (Kotlin/kotlin-jupyter#102) and have a real use case for this
Florian Magin
@fmagin
It seems like it should support embedding like I described, if the class loader option (-i, as far as I understand) is set to false. Will give this a try sometime.
Lanking
@lanking520

I am trying to use Spark 3.0 with a local standalone cluster setup. I simply create 1 master and 1 worker locally. However, the job keeps crashing with this issue:

20/11/23 15:55:24 ERROR CoarseGrainedExecutorBackend: Executor self-exiting due to : Unable to create executor due to null
...
Caused by: java.io.IOException: No FileSystem for scheme: http
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)

It seems all the jars are uploaded to the Spark remote server and Spark is trying to fetch them.

import $ivy.`org.apache.spark:spark-sql_2.12:3.0.0`
import org.apache.spark.sql._

val spark = {
  NotebookSparkSession.builder()
    .master("spark://localhost:7077")
    .getOrCreate()
}
spark.conf.getAll.foreach(pair => println(pair._1 + ":" + pair._2))
def sc = spark.sparkContext
val rdd = sc.parallelize(1 to 100000000, 100)
val n = rdd.map(_ + 1).sum()

You can reproduce this by using the code above (please create a standalone cluster beforehand):

curl -O https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop2.7.tgz
tar zxvf spark-3.0.0-bin-hadoop2.7.tgz
mv spark-3.0.0-bin-hadoop2.7/ spark
export SPARK_MASTER_HOST=localhost
export SPARK_WORKER_INSTANCES=1
./spark/sbin/start-master.sh
./spark/sbin/start-slave.sh spark://localhost:7077
This is tested on both CentOS 7 and Mac with JDK 11, Scala 2.12.10, Almond (latest) + Almond Spark (0.10.9).
5 replies
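A hedged workaround sketch for the "No FileSystem for scheme: http" part: the executors fetch the notebook-served jars over HTTP, and Hadoop 2.7 registers no FileSystem for that scheme, while Hadoop 2.9+/3.x ship org.apache.hadoop.fs.http.HttpFileSystem. Using a spark-3.0.0-bin-hadoop3.2 distribution, or registering that implementation explicitly as below, may help; this is an assumption, not a fix confirmed in this thread.

import $ivy.`org.apache.spark:spark-sql_2.12:3.0.0`
import $ivy.`sh.almond::almond-spark:0.10.9`
import org.apache.spark.sql._

val spark = {
  NotebookSparkSession.builder()
    .master("spark://localhost:7077")
    // requires a Hadoop version that actually provides HttpFileSystem (2.9+/3.x)
    .config("spark.hadoop.fs.http.impl", "org.apache.hadoop.fs.http.HttpFileSystem")
    .getOrCreate()
}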
Joaquín Chemile
@jchemile

Hello! Which version of Jupyter Notebook does Almond work with? I'm asking because I'm facing this problem:

Traceback (most recent call last):
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\web.py", line 1699, in _execute
    result = await result
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\handlers.py", line 73, in post
    type=mtype))
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 79, in create_session
    kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\sessions\sessionmanager.py", line 92, in start_kernel_for_session
    self.kernel_manager.start_kernel(path=kernel_path, kernel_name=kernel_name)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 729, in run
    value = future.result()
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\tornado\gen.py", line 209, in wrapper
    yielded = next(result)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\notebook\services\kernels\kernelmanager.py", line 160, in start_kernel
    super(MappingKernelManager, self).start_kernel(**kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\multikernelmanager.py", line 110, in start_kernel
    km.start_kernel(**kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\manager.py", line 259, in start_kernel
    **kw)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\manager.py", line 204, in _launch_kernel
    return launch_kernel(kernel_cmd, **kw)
  File "C:\Users\joaqchem\anaconda3\lib\site-packages\jupyter_client\launcher.py", line 138, in launch_kernel
    proc = Popen(cmd, **kwargs)
  File "C:\Users\joaqchem\anaconda3\lib\subprocess.py", line 775, in __init__
    restore_signals, start_new_session)
  File "C:\Users\joaqchem\anaconda3\lib\subprocess.py", line 1178, in _execute_child
    startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified

I have Python 3.7.3, Scala 2.11.12, and Almond 0.6.0.

Anton Sviridov
@keynmol

I think there might be a subtle problem with Almond and 2.13.

It depends on ammonite's repl artifact:

[info] org.jline:jline-terminal:3.14.1
[info]   +-com.lihaoyi:ammonite-repl_2.13.3:2.2.0-4-4bd225e
[info]   | +-sh.almond:scala-interpreter_2.13.3:0.10.9 [S]

Which actually uses jline 3.14.1; I think that conflicts with Scala's own jline (bumped to 3.15.0 in 2.13), which isn't binary compatible, so missinglink barks:

[error] Category: FIELD NOT FOUND
[error]   In artifact: jline-3.15.0.jar
[error]     In class: org.jline.utils.Log
[error]       In method:  isEnabled(java.util.logging.Level):122
[error]       Access to: org.jline.utils.Log.logger
[error]       Problem: Field not found: logger
[error]       Found in: jline-terminal-3.14.1.jar

Even the most recent releases of ammonite-repl use 3.14.1, which means merely updating almond won't help on 2.13.

This is perhaps obscure, but I just want to check with @alexarchambault that I'm understanding this correctly. Is a transitive exclusion the only way to fix it? Or silencing missinglink.
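For what it's worth, a hedged sbt-side sketch of the two usual options, neither of which is confirmed here: force the newer jline everywhere, or exclude the transitive jline-terminal that ammonite-repl pins.

// Option 1: override the jline version across the whole dependency graph
dependencyOverrides += "org.jline" % "jline-terminal" % "3.15.0"

// Option 2: drop the transitive jline-terminal coming in via almond/ammonite-repl
libraryDependencies += ("sh.almond" % "scala-interpreter" % "0.10.9")
  .cross(CrossVersion.full)
  .exclude("org.jline", "jline-terminal")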

Joaquín Chemile
@jchemile
Thanks!!
Anton Sviridov
@keynmol
oh sorry @jchemile that's unrelated to your problem :)
With regards to which version: I sort of just get whatever pip3 installs, and it works. Maybe try using a newer almond version?
JasonVranek
@JasonVranek
Hello! I currently use the Scala embedded DSL 'chisel' to do hardware design and feel like a notebook environment lends itself well to this. I just had a couple of questions: 1) Is there a way to split a class across multiple cells? (Some of these hardware modules end up being hundreds of lines.) I'm currently tinkering with implicit conversions to add methods but am open to other options. 2) I am very excited about nbdev (https://github.com/fastai/nbdev) and am curious whether there have been any attempts to port some of these features to Scala. Thanks!
Anton Sviridov
@keynmol
Always dreamt of having a job where I can use chisel :D 1) Does defining multiple traits and then mixing them into one final class work with chisel?
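A minimal sketch of that trait-mixin approach in plain Scala (no chisel imports, names are illustrative): each trait can live in its own cell, and one final class mixes them together.

// cell 1
trait Alu { def add(a: Int, b: Int): Int = a + b }

// cell 2
trait Shifter { def shl(a: Int, n: Int): Int = a << n }

// cell 3 -- the "hundreds of lines" get split across the trait cells above
class Datapath extends Alu with Shifter

new Datapath().add(2, 3)  // 5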
JasonVranek
@JasonVranek
It does! Thanks Anton, I just tried this and it does work. Much much cleaner way of achieving this. (btw chisel is awesome)
Anton Sviridov
@keynmol
Fantastic :) And I think your idea about chisel in notebooks makes a lot of sense, I should try it.
JasonVranek
@JasonVranek
Not quite my idea - they've already set up almond + chisel as an onboarding tool you can mess around with here: https://github.com/freechipsproject/chisel-bootcamp. But I'm unaware of anyone currently using this for more serious development.
nicknn7
@nicknn7
Hi everyone, we are looking to increase the heap space available to the Scala kernel. When running ./almond --install --command "java -XX:MaxRAMPercentage=80.0 -jar almond" --copy-launcher true at installation, the kernel.json file gets generated correctly (i.e. it includes the -XX:... flag), but for some reason, inside the notebook, Runtime.getRuntime().maxMemory()/1000000000 always yields 32 GB, no matter which RAM percentage we pass to the installer. Any ideas what we're doing wrong? The kernel is being installed globally. When we pass a small number it seems to have an effect, but once the percentage corresponds to more than 32 GB it always stays at 32 GB. Any advice appreciated. Thanks!
2 replies
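A small diagnostic sketch, using only standard JDK APIs, to confirm from inside the notebook which flags the kernel JVM actually received and what heap it ended up with:

import java.lang.management.ManagementFactory

// JVM flags the kernel process was started with -- -XX:MaxRAMPercentage should appear here
ManagementFactory.getRuntimeMXBean.getInputArguments.forEach(arg => println(arg))

// effective max heap, in GiB
println(Runtime.getRuntime.maxMemory.toDouble / (1024L * 1024 * 1024))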
André L. F. Pinto
@andrelfpinto
Hello, I am trying to run this code:
case class Foo(id:Long, name:String)
val constructor = classOf[Foo].getConstructors()(0)
val args = Array[AnyRef](new java.lang.Integer(1), "Foobar")
val instance = constructor.newInstance(args:_*).asInstanceOf[Foo]
However, I am getting:
java.lang.IllegalArgumentException: wrong number of arguments
  sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  ammonite.$sess.cmd1$Helper.<init>(cmd1.sc:2)
  ammonite.$sess.cmd1$.<clinit>(cmd1.sc:7)
I see that the constructor is:
java.lang.reflect.Constructor[?0] = public ammonite.$sess.cmd0$Helper$Foo(ammonite.$sess.cmd0$Helper,long,java.lang.String)
Where can I get this Helper?
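A hedged sketch of one way around it: the printed signature shows the real constructor takes a hidden outer parameter (the cell's Helper wrapper, which user code has no direct handle on), and the long parameter also needs a java.lang.Long rather than an Integer. Going through the companion's apply via reflection side-steps both, since the companion instance already carries the outer reference.

case class Foo(id: Long, name: String)

// either the typed apply(long, String) or its bridge works when invoked reflectively
val applyMethod = Foo.getClass.getMethods.find(_.getName == "apply").get

// note java.lang.Long, not Integer, to match the Long field
val instance = applyMethod
  .invoke(Foo, java.lang.Long.valueOf(1L), "Foobar")
  .asInstanceOf[Foo]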
Pham Nguyen
@akizminet
Hello, I'm new to this kernel. I don't know how to hide the progress bar of almond-spark.
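A hedged sketch of the knob the almond-spark builder exposes; note that a later message in this log reports it only taking effect with AmmoniteSparkSession, not NotebookSparkSession.

// assumes almond-spark is already loaded in the session
import org.apache.spark.sql._

val spark = NotebookSparkSession.builder()
  .progressBars(false)   // toggle almond-spark's progress bar output
  .master("local[*]")
  .getOrCreate()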
Kien Dang
@kiendang
Hi, may I know how to get contextual help working? I passed both --metabrowse to the launcher and --sources --default=true to coursier, but it doesn't work for me.
Kien Dang
@kiendang
Ah, I was using Scala 2.13.4, whereas metabrowse only supports 2.13.1.
RndMnkIII
@RndMnkIII
How could I use the Jupyter widgets (IntSlider, TextBox, ...) in JupyterLab with the Almond kernel?
RndMnkIII
@RndMnkIII
[image attachment: image.png]
Drawing on a canvas using JavaScript from the Almond DisplayData API.
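A hedged sketch of that kind of canvas-plus-JavaScript output, assuming the almond.display.Html helper available in recent kernels (the canvas id and drawing code are illustrative):

import almond.display.Html

Html("""
  <canvas id="demo-canvas" width="200" height="100"></canvas>
  <script>
    const ctx = document.getElementById("demo-canvas").getContext("2d");
    ctx.fillRect(20, 20, 160, 60);
  </script>
""")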
Quafadas
@Quafadas
Has anyone seen this before? Any hints on what I might have done wrong?
almond-sh/almond#784
Quafadas
@Quafadas

Excitingly, I've now managed to dance through the minefield of my corporate setup, and got almond working. Yeyz...
I can't figure out how to enable "metabrowse" or any form of code completion though.

RUN ./almond --install --log info --metabrowse --id scala2_13 --display-name "Scala 2.13" --global --jupyter-path /opt/conda/share/jupyter/kernels

Should that work? Is there anything else to configure in Jupyter itself?
ah almond-sh/almond#441

2 replies
Quafadas
@Quafadas
Finally, the "variable inspector" is a fabulous tool, I have to say... however, if I add it to the kernel, then case class compilation appears to fail.
case class Something(i: Int)
​
cmd3.sc:20: not found: value res3_1
  .declareVariable("res3_1", res3_1); Iterator() },
                             ^
Compilation Failed
It works wonderfully for "fundamental" types, though.
Quafadas
@Quafadas
Man I'm too slow to read github, sorry for spam
ZanyRumata
@ZanyRumata
hey guys!
I'm really stuck on how to hide progress bars in Spark tasks.
Sometimes the info is really annoying.
I already tried progressBars(false), but it doesn't seem to work.
Okay, I just asked a question and finally found an answer:
NotebookSparkSession - progressBars has no effect (no errors, but also no change).
AmmoniteSparkSession - allows hiding the progress bars without any trouble.
I think it's a bug, but I'm not sure)
Quafadas
@Quafadas

I've been trying to install the exciting 0.11.2 release ... but I get

 Cannot find default main class. Specify one with -M or --main-class.

Against the command

/tmp/cs bootstrap almond:0.11.2 --scala 2.13.4 -i user --default=true --sources -o almond

Can anyone see what might be wrong here?

Quafadas
@Quafadas
Should someone else bump into the same issue: the coursier docs appear to lay out a solution, adding a main class directly:
 -M almond.ScalaKernel
Quafadas
@Quafadas
@alexarchambault To my understanding you are the protagonist behind coursier and almond. Thank you for your work. They are wonderful.
Quafadas
@Quafadas
When working with large variables, is there a way to view the entire variable? Currently it gets truncated after some number of lines.
4 replies
Quafadas
@Quafadas

Feedback on almond-sh/almond#813 welcome...

Anyone else tried this?

notmike
@notmike-5
Hi, I am trying to install almond for Jupyter Notebook running on Arch Linux. I get the error Cannot find default main class. Specify one with -M or --main-class. Using the suggestion of adding -M almond.ScalaKernel, it appears to install, and I see the Scala kernel option in IPython Notebook. However, when I select this kernel I get a 500 internal server error.
notmike
@notmike-5
plz to helping me :(
Quafadas
@Quafadas

A couple of questions: would macros work in almond?

object Report {
    implicit val pickler : ReadWriter[Report] = upickle.default.macroRW[Report]
}

Should that work? I get

cmd24.sc:2: trait CaseObjectContext is a trait; does not take constructor arguments
    implicit val pickler : ReadWriter[LossReport] = upickle.default.macroRW[LossReport]

Also, I can't get tab completion to work... has anyone got it working successfully? Any tips?

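Regarding the macroRW error above: a minimal sketch of the pattern that is expected to work, assuming the upickle version loaded in the notebook matches the one Ammonite itself depends on (a clash between the two is one plausible, unconfirmed cause of the CaseObjectContext error) and that the case class and its companion are defined in the same cell.

import upickle.default._

case class Report(name: String, value: Double)

object Report {
  implicit val pickler: ReadWriter[Report] = macroRW[Report]
}

write(Report("ok", 1.0))  // serialize a value to JSON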
Quafadas
@Quafadas
I have followed the docs here, https://jupyterlab-lsp.readthedocs.io/en/latest/Configuring.html, but can't find any diagnostics on why it isn't working :-(.