    Ondřej Čertík
    @certik
    So that we can connect it with jupyterlite via XEUS.
    Thorsten Beier
    @DerThorsten
    Hi @certik, so LLVM-based things can also be compiled to wasm; in particular, Julia can be compiled to wasm that way: https://github.com/Keno/julia-wasm. But compiling LLVM to wasm is not much fun. A custom backend might be easier to compile to wasm and might yield smaller wasm builds.
    Ondřej Čertík
    @certik
    How fast is LLVM via WASM?
    My experience is that my own x86 backend is about 20x faster than the LLVM backend targeting x86.
    Thorsten Beier
    @DerThorsten
    I have no experience/data on that side. So far I have only compiled/tried non-LLVM-based languages in wasm.
    Ondřej Čertík
    @certik
    I am trying the Julia version you sent now.
    The prompt is immediate on my computer. Things like sin(0.5) return immediately.
    The other downside of LLVM might be the large WASM download. Right now, LFortran is about 1MB in WASM, so it loads fast.
    Thorsten Beier
    @DerThorsten
    Yeah, a big size is actually really problematic. But 1MB is awesome!
    Ondřej Čertík
    @certik
    My browser says the Julia REPL is about a 50MB download!
    Huge.
    How does Lua do it?
    How does it compile to WASM?
    Thorsten Beier
    @DerThorsten
    It's written in C, so it pretty much compiles to WASM out of the box.
    Ondřej Čertík
    @certik
    Oh, it is interpreted?
    Thorsten Beier
    @DerThorsten
    Yeah, Lua is interpreted.
    Ondřej Čertík
    @certik
    So the backend of Lua simply interprets it.
    LFortran's LLVM backend compiles to machine code, loads it, and executes it, just like Julia does.
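    For illustration, a minimal sketch of that compile-load-execute flow using LLVM's ORC LLJIT API (hypothetical code, not LFortran's actual backend; the API shown is LLVM 15+ and has changed across versions):

```cpp
// Sketch: JIT-compile an LLVM IR module to machine code in memory and
// call it. Hypothetical example, not LFortran code; LLVM 15+ ORC API.
#include <llvm/ExecutionEngine/Orc/LLJIT.h>
#include <llvm/IRReader/IRReader.h>
#include <llvm/Support/SourceMgr.h>
#include <llvm/Support/TargetSelect.h>
#include <cstdio>
#include <memory>

int main() {
    llvm::InitializeNativeTarget();
    llvm::InitializeNativeTargetAsmPrinter();

    // IR for a function that just returns the double constant 42.0
    auto ctx = std::make_unique<llvm::LLVMContext>();
    llvm::SMDiagnostic err;
    std::unique_ptr<llvm::Module> mod = llvm::parseIR(
        llvm::MemoryBufferRef("define double @f() {\n"
                              "  ret double 4.200000e+01\n"
                              "}\n", "jit_mod"),
        err, *ctx);

    auto jit = llvm::cantFail(llvm::orc::LLJITBuilder().create());
    llvm::cantFail(jit->addIRModule(
        llvm::orc::ThreadSafeModule(std::move(mod), std::move(ctx))));

    // Machine code is generated on first lookup; the call itself then
    // runs at native speed.
    auto addr = llvm::cantFail(jit->lookup("f"));
    auto *f = addr.toPtr<double (*)()>();
    std::printf("%f\n", f());  // prints 42.000000
}
```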
    Thorsten Beier
    @DerThorsten
    Or a bytecode VM, in Lua's case.
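    For a sense of why that ports so easily, a minimal sketch of embedding Lua from C++ (assuming a stock Lua installation): the API is plain C, so the same code builds natively or under Emscripten, and the chunk is compiled to bytecode for Lua's VM rather than to machine code.

```cpp
// Sketch: embed the Lua interpreter. The API is plain C, so this
// compiles with Emscripten essentially unchanged.
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
}

int main() {
    lua_State *L = luaL_newstate();           // fresh VM instance
    luaL_openlibs(L);                         // load standard libraries
    // Compiles the chunk to Lua bytecode and runs it on the VM;
    // no machine code is generated at any point.
    luaL_dostring(L, "print(math.sin(0.5))");
    lua_close(L);
    return 0;
}
```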
    Ondřej Čertík
    @certik
    We could write an interpreter, but it might be easier (and faster) to write a WASM backend that generates WASM on the fly and executes it.
    Do you have any experience generating WASM from C++?
    Thorsten Beier
    @DerThorsten
    I do that with https://emscripten.org/ and I have compiled a few things with it.
    Ondřej Čertík
    @certik
    Yes, you can compile existing C++ code to WASM using emscripten. That is what I used in the above LFortran demo. My question is how to generate WASM from within LFortran?
    Emscripten is simply using Clang (LLVM) and its WASM backend.
    So that is the LLVM route.
    So I would need to get LLVM itself to run in WASM first, so that we can use it.
    I was hoping there might be some good way to generate the WASM binary format right away, sort of like I generate x86 code by emitting the machine code into std::string.
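    That turns out to be quite doable. As a minimal sketch (hypothetical code, not LFortran's), here is the same emit-bytes-into-a-std::string idea applied to the WASM binary format: a complete module exporting one function f that returns the i32 constant 42.

```cpp
// Sketch: hand-encode a WASM binary module into a std::string,
// analogous to emitting x86 machine code. Hypothetical example.
#include <cstdint>
#include <fstream>
#include <string>

// Unsigned LEB128, used for all sizes and indices in the format
void emit_uleb(std::string &s, uint64_t v) {
    do {
        uint8_t b = v & 0x7F;
        v >>= 7;
        if (v) b |= 0x80;
        s += (char)b;
    } while (v);
}

// Signed LEB128, used for i32.const immediates
void emit_sleb(std::string &s, int64_t v) {
    while (true) {
        uint8_t b = v & 0x7F;
        v >>= 7;
        bool done = (v == 0 && !(b & 0x40)) || (v == -1 && (b & 0x40));
        if (!done) b |= 0x80;
        s += (char)b;
        if (done) break;
    }
}

// A section is a one-byte id, the LEB-encoded byte length, then the body
void emit_section(std::string &out, uint8_t id, const std::string &body) {
    out += (char)id;
    emit_uleb(out, body.size());
    out += body;
}

int main() {
    std::string w;
    w += std::string("\0asm", 4);              // magic
    w += std::string("\x01\x00\x00\x00", 4);   // version 1

    // Type section (1): one functype () -> i32
    emit_section(w, 1, std::string({0x01, 0x60, 0x00, 0x01, 0x7F}));

    // Function section (3): one function, of type index 0
    emit_section(w, 3, std::string({0x01, 0x00}));

    // Export section (7): export function 0 under the name "f"
    std::string exp;
    exp += (char)0x01;                          // one export
    emit_uleb(exp, 1); exp += 'f';              // name
    exp += (char)0x00; exp += (char)0x00;       // kind=func, index 0
    emit_section(w, 7, exp);

    // Code section (10): one body: no locals; i32.const 42; end
    std::string body;
    body += (char)0x00;                         // no local declarations
    body += (char)0x41; emit_sleb(body, 42);    // i32.const 42
    body += (char)0x0B;                         // end
    std::string code;
    code += (char)0x01;                         // one function body
    emit_uleb(code, body.size());
    code += body;
    emit_section(w, 10, code);

    std::ofstream("f.wasm", std::ios::binary).write(w.data(), w.size());
}
```

    Loading the resulting f.wasm in a browser with WebAssembly.instantiateStreaming then yields an exports.f() that returns 42.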
    Wolf Vollprecht
    @wolfv
    Looks interesting.
    But you might already have found it too, since it was an obvious google :)
    Thorsten Beier
    @DerThorsten
    @certik there is also https://github.com/binji/wasm-clang, where they generate wasm on the fly from C++.
    Wolf Vollprecht
    @wolfv
    I'd also be curious whether we can leverage this: https://web.dev/ps-on-the-web/#webassembly-debugging
    Ondřej Čertík
    @certik
    Lobster is interesting, I'll play with it.
    I think wasm-clang is using LLVM also.
    So the two approaches are to either get LLVM running, or to write our own backend the way Lobster does.
    Ondřej Čertík
    @certik
    It's hard to believe that it is less than 800 lines of code.
    Our x86 assembly class that does something similar for x86 machine code is longer than that, and we support fewer features: https://gitlab.com/lfortran/lfortran/-/blob/master/src/lfortran/codegen/x86_assembler.h.
    Allen Townsend
    @alstown
    Does the latest version of xeus-cling support CUDA, and is there usage documentation on how to point cling at my CUDA drivers?
    Theodore Aptekarev
    @piiq
    Hi! I am looking for a way to make the SlicerJupyter kernel (made using xeus) available in a Google Cloud managed JupyterLab instance (they call them AI Notebooks).
    Can someone advise whether I can use the xeus-python kernel from a Docker container?
    Wolf Vollprecht
    @wolfv_:matrix.org
    @piiq: can you use conda packages there?
    Nicholas Devenish
    @ndevenish
    I want to get the mamba list output, but for the environment that would be installed by mamba install, because I want the resolved package list for a different platform. Is there a way to get this, and does it mean digging into the mamba API?
    Wolf Vollprecht
    @wolfv
    @ndevenish you can just use --dry-run
    and to change the platform you can use CONDA_SUBDIR=win-64 mamba create -n blabla mypackage --dry-run
    Getting a proper list is... probably not easily doable right now.
    But you can get JSON output with --json and then parse that further.
    However, note that there are some virtual packages dependent on the platform you are on, and that might change the resolution. conda-lock has implemented these things properly: https://github.com/conda-incubator/conda-lock
    Nicholas Devenish
    @ndevenish
    @wolfv thanks! I had tried --json, but unfortunately none of the variants I tried gave useful information. CONDA_SUBDIR (instead of manual channels and --override-channels) and --dry-run (instead of --download-only) seem to have done the trick, thanks! (For building bundled releases we "pin" everything in the conda dependency list, and not all of our platforms are available in our CI.)