@DerThorsten, I was brainstorming with @SylvainCorlay how to best implement a Fortran compiler in WASM. I have a demo here: https://certik.gitlab.io/lfortran/wasm_demo.html, this does parsing, AST and ASR (Abstract Semantic Representation), but not backend yet. Would you recommend to use LLVM in WASM, or write our own backend for WASM?
So that we can connect it with jupyterlite via XEUS.
Thorsten Beier
@DerThorsten
Hi @certik, so LLVM-based things can also be compiled to WASM; in particular, Julia can be compiled to WASM that way: https://github.com/Keno/julia-wasm. But compiling LLVM itself to WASM is not much fun. A custom backend might be easier to compile to WASM and might yield smaller WASM builds.
Ondřej Čertík
@certik
How fast is LLVM via WASM?
My experience is that my own x86 backend is about 20x faster than the LLVM x86 backend.
Thorsten Beier
@DerThorsten
I have no experience / data on that side. So far I have only compiled/tried non-LLVM-based languages to WASM.
Ondřej Čertík
@certik
I am trying the Julia version you sent now.
The prompt is immediate on my computer. Things like sin(0.5) return immediately.
The other downside of LLVM might be the large WASM download. Right now, LFortran is about 1MB in WASM, so it loads fast.
Thorsten Beier
@DerThorsten
Yeah, a big size is actually really problematic. But 1MB is awesome!
Ondřej Čertík
@certik
My browser says the Julia REPL has about 50MB download!
Huge.
How does Lua do it?
How does it compile to WASM?
Thorsten Beier
@DerThorsten
It's written in C, so it pretty much compiles to WASM out of the box.
Ondřej Čertík
@certik
Yes, you can compile existing C++ codes to WASM using emscripten. That is what I used in the above LFortran demo. My question is how to generate WASM from within LFortran?
Emscripten is simply using Clang (LLVM) and its WASM backend.
So that is the LLVM route.
So I would need to get LLVM itself to run in WASM first, so that we can use it.
I was hoping there might be some good way to generate the WASM binary format right away, sort of like I generate x86 code by emitting the machine code into std::string.
Does the latest version of xeus-cling support CUDA and is there usage documentation on how to point cling at my CUDA drivers?
Theodore Aptekarev
@piiq
Hi! I am looking for a way to make the SlicerJupyter kernel (made using xeus) available in a Google Cloud managed JupyterLab instance (they call them AI Notebooks). Can someone advise whether I can use the xeus-python kernel from a Docker container?
@ndevenish
I want to get the mamba list but for the environment that would be installed by mamba install, because I want to get the resolved package list for a different platform. Is there a way to get this, and does it mean digging into the mamba API?
Wolf Vollprecht
@wolfv
@ndevenish you can just use --dry-run
and to change the platform you can use CONDA_SUBDIR=win-64 mamba create -n blabla mypackage --dry-run
to get a proper list is ... probably not easily doable right now
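One possible approximation of a "proper list", sketched below and untested here: combining `--dry-run` with `--json` to capture the solver's transaction as machine-readable output. The assumption (carried over from conda's CLI, which mamba mirrors) is that `--json` prints the planned transaction instead of applying it; the environment name `scratch` and package `mypackage` are placeholders.

```
# Solve for another platform without installing anything, and capture the
# resolved transaction as JSON instead of the interactive table:
CONDA_SUBDIR=win-64 mamba create -n scratch mypackage --dry-run --json > solution.json

# The fetch/link actions in solution.json list each resolved package
# (name, version, build, channel), which can be post-processed into a
# `mamba list`-style table.
```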