tgingold on ci
Test.yml: add llvm in PATH for … (compare)
tgingold on ci
Test.yml: list llvm package on … (compare)
tgingold on master
Fix issue #2126, add handling o… Merge pull request #2127 from C… (compare)
umarcor on master
ci/Test: typo (compare)
Not even sure what those are used for currently (here)
If you extend the sidebar on the right, there is a sequence of notifications about commits, issues, etc. Not all users pay attention to it, but it's useful for maintaining organisations, since activity can go unnoticed on very busy days.
The complement to that feature is the ability to reference issues or PRs through #. I know this one is a client-side thing, so it can be implemented in Element or any other client. Yet, I'd like someone to point to an example in any Matrix client.
@edbordin I guess that most users nowadays expect to use the tools more than once. Therefore, other packaging/distribution solutions are more desirable for them. However, fpga-toolchain is very interesting for workshops where you don't know which computers people will bring, or where you cannot install much software on them. In those cases, fpga-toolchain can be provided on a pendrive, along with a development board. It's also interesting because the set of tools is neither minimal nor too large; hence, it allows workshops with multiple languages and also formal verification exercises. When you deprecate it, fomu-workshop will be affected. Yet, I'm adding documentation about how to use containers. 3-4 years ago, containers were not supported on any Windows, and Gitpod did not exist. They are supported now. Hence, that's something that Sean and Tim will need to evaluate.
can the hdl containers be run without root privileges?
Yes. Those are OCI containers (https://opencontainers.org/). They should work with any compatible runtime (docker, podman, nerdctl/containerd, whatever k8s uses...). I use them with docker on Windows and with podman on Fedora.
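For instance, a rootless invocation with podman is identical to the docker one, since both consume the same OCI images. This is only a sketch: the image name below is illustrative, not necessarily the real registry path (see https://hdl.github.io/containers/ for the actual registries and tags).

```shell
# Run a tool from an OCI image, rootless, with podman; mount the current
# design directory and work inside it. 'ghcr.io/hdl/ghdl' is illustrative.
podman run --rm -v "$PWD":/src -w /src ghcr.io/hdl/ghdl ghdl --version

# The exact same line works with docker (or nerdctl), since all of these
# runtimes consume the same OCI images:
docker run --rm -v "$PWD":/src -w /src ghcr.io/hdl/ghdl ghdl --version
```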
There is 1GB of Ubuntu system libs included in the tarball, and all the executables are wrapped in bash/perl scripts for overriding the linker and library paths.
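The wrapper technique can be sketched as follows. This is a hypothetical layout for illustration, NOT the actual oss-cad-suite scripts: each entry in bin/ is a shim that prepends the bundled lib/ directory to the loader search path and then exec's the real binary from libexec/. The sketch builds a tiny stand-in bundle in a temp dir and runs the wrapper once.

```shell
#!/usr/bin/env bash
# Sketch of a relocatable-bundle wrapper (hypothetical layout, not the
# actual oss-cad-suite implementation).
set -euo pipefail

root="$(mktemp -d)"                       # stand-in for the extracted bundle
mkdir -p "$root/bin" "$root/lib" "$root/libexec"

# The "real" tool: here it just reports what the dynamic loader would see.
cat > "$root/libexec/mytool" <<'EOF'
#!/usr/bin/env bash
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
EOF

# The wrapper placed in bin/, which users actually invoke: it resolves its
# own location, prepends the bundled lib/ dir, then exec's the real binary.
cat > "$root/bin/mytool" <<'EOF'
#!/usr/bin/env bash
HERE="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
export LD_LIBRARY_PATH="$HERE/../lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/../libexec/$(basename "$0")" "$@"
EOF

chmod +x "$root/bin/mytool" "$root/libexec/mytool"
"$root/bin/mytool"    # prints the bundled lib/ dir first in the path
```

Real bundles additionally override the ELF interpreter (the dynamic loader itself), which is why some wrappers are perl rather than plain bash.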
Hi, sorry for the late reply. Actually, there is less than 200MB of libraries and other resources taken from Ubuntu 20.04, and a plain Docker image for it is 134MB without any of the needed libraries, so a Docker image would probably be larger than the distribution files. Also note that the python2 and python3 we distribute are not taken from Ubuntu but built ourselves, so we can have the same version for all OSes; some software also links to Python and requires that specific version. So yes, that is about 200MB of various files, but the size would be the same in a Docker image.
happy to move this discussion to hdl if it's more appropriate there
@edbordin, I believe that GHDL is the tricky piece. Most other projects are "typical" C/C++/Python tools, so the build and cross-compilation procedures are easier. Hence, it's up to you to discuss it here or in hdl/community. Most of the interested users do read both channels anyway.
actually there is less than 200MB of libraries and other resources taken from Ubuntu 20.04
@mmicko maybe the difference is because I looked at the size of the oss-cad-suite-linux-x64-20210524 after extracting it? That's 1.06GB.
and plain docker for it is 134MB without any needed library, so a docker image would probably be larger than the distribution files.
I would say the size using containers would be the same, or the difference would be negligible. My concern is not the size, but the duplication of the environment with a hand-made isolation procedure. In fpga-toolchain, most tools are compiled statically so that the environment is embedded in each tool. Containers do provide a proven isolation mechanism for system libraries (since that's their main purpose). In oss-cad-suite, everything is handled as you would do in a container, an AppImage, a Flatpak, etc., but then no specific isolation tool is used. As commented in YosysHQ/oss-cad-suite-build#1, that is a lot of work! Containers/AppImage/Flatpak are non-trivial projects with many developers. oss-cad-suite is almost a single-person project.
If you want a hierarchical and fixed development model, your approach is acceptable. You decide which tools to provide, you take care of preparing all of them, your clients stick to the use cases you explicitly support. If they want other use cases, they either improve your bundle or they hire you to do so. However, you will need to develop all the isolation related tweaks yourself, or you will need a significant effort for documenting all of it, so that potential contributors can help. Otherwise, it will be difficult for anyone to add their own additional system tools/libraries that interact with the ones in the bundle.
Moreover, if you wanted to introduce versioning of the tools, so that users could downgrade specific tools in the bundle, that would be something you would need to implement and document yourself. Doing so is already a huge effort even if existing package managers are used.
I guess that Docker would be nice to have, at least for CI usage, but to be honest, in that case I would for example remove the GUI tools, which would lower the dependencies and make it much smaller in size.
There are container images for CI usage, others for local usage and others with GUI tools. Size grows, respectively. CI images need to be small, specific and fast. Local images need to be complete and have handy tools. GUI tools need X11, Qt, Gtk... The advantage of containers is that all of those can be provided at the same time, and they can be composed. The outcome is that each user can pick the image that best fits. However, from a maintainer/development point of view, all the users are using "the same" image. Conceptually, each "collection" in hdl/containers is a single several-GB image that is presented in pieces of varying sizes.
See https://hdl.github.io/containers/#_tools_with_gui and https://github.com/ghdl/docker/blob/master/USE_CASES.md#environments-with-guis. Or any of the following Gitpod workspaces: https://gitpod.io/#https://github.com/cocotb/cocotb, https://gitpod.io/#https://github.com/umarcor/msea.
Note that Gitpod uses VSCode or Theia. That is, YosysHQ could provide a custom IDE that any user could launch either in the browser or locally. Moreover, it might be a specific flavour of TerosHDL. I very honestly think that would provide much market value to the product.
Please, do not take any of this as plain criticism. I do care about oss-cad-suite and I do believe it has some very relevant characteristics:
I would love to have the GHDL plugin for yosys built and distributed. The problem is that it is Ada-based, and while it is possible to cross-compile it to other Linux variants, there are issues for the Darwin and Windows targets.
As commented in YosysHQ/oss-cad-suite-build#1, for distribution on Windows I very strongly recommend using MSYS2. If you don't want users to install MSYS2 explicitly, you can hide it from them with a wrapper. That is what projects like Ruby and Qt do. GTKWave provides it too. You don't need to use the upstream recipes if you don't want to, but you can tweak those.
Recently, UCRT64 was added to MSYS2, which complements MINGW32, MINGW64 and CLANG64. UCRT64 should reduce possible corner cases when combining binaries with others built with VS.
For Linux, I avoid cross-compilation by using QEMU along with containers. See dbhi/qus and dbhi/docker. QEMU's user mode does not receive as much love as the system mode; hence, there might be issues with "weird" system signals. However, it works fairly well, and ARM support was improved significantly in the last two years. On the other hand, there are no official images for RISC-V yet. @carlosedp is pushing hard in that area: carlosedp/riscv-bringup.
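As a sketch, that QEMU-plus-containers setup is a one-off binfmt_misc registration followed by plain container runs; the flags below are as documented in dbhi/qus, and a working container runtime is assumed:

```shell
# One-off: register QEMU's user-mode interpreters through binfmt_misc,
# using the helper image from dbhi/qus (needs a privileged container):
docker run --rm --privileged aptman/qus -s -- -p aarch64

# From then on, foreign-arch images run transparently on the x86-64 host:
docker run --rm --platform linux/arm64 ubuntu:20.04 uname -m  # should print "aarch64"
```

Builds then happen "natively" inside the foreign-arch container, so no cross-toolchain is needed.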
For sure I need to use the LLVM-based build, but the problem with it is that it tries to precompile the VHDL libraries with the created binaries, which is not possible for all targets, since there is no QEMU for macOS (Darwin).
You have issues cross-building for macOS on ARM from Linux on x86-64?
Would appreciate any help in making a minimal ghdl plugin build that can work on all targets.
The plugin is not an issue: you can embed it into yosys. The plugin depends on the libghdl shared library. That is not an issue either, since you can build it. The problem is that almost all VHDL designs depend on the standard and IEEE libraries. That's why, in practice, ghdl-yosys-plugin is said to depend on GHDL (on the installation, not the binary).
Theoretically, I'd say you could avoid building the libraries by providing a specific target to make. Then you would need to bundle GHDL, the shared libs, the headers, the includes and the VHDL sources of the libraries. Lastly, you would need to provide a script for building the libraries, similar to https://github.com/ghdl/ghdl/tree/master/scripts/vendors, which users would need to run once after extracting the bundle. However, I don't think anyone has used that workflow in practice...
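Such a post-extraction script might look roughly like this. It is only a sketch, NOT an existing GHDL script: the bundle layout, the `src/ieee` path, and the destination directory are all hypothetical, although the `ghdl -a --work=... --workdir=...` analysis flags are the ones the vendors scripts use.

```shell
#!/usr/bin/env bash
# Hypothetical one-time library build, run by the user after extracting
# the bundle. Paths and source layout are illustrative only.
set -euo pipefail
BUNDLE="${1:?usage: build-libs.sh <extracted-bundle-dir>}"
GHDL="$BUNDLE/bin/ghdl"
WORKDIR="$BUNDLE/lib/ghdl"

mkdir -p "$WORKDIR"
# Analyze each source into the target library; --work names the library
# and --workdir is where the compiled .cf files are written.
for src in "$BUNDLE"/src/ieee/*.vhdl; do
  "$GHDL" -a --std=08 --work=ieee --workdir="$WORKDIR" "$src"
done
```

The upside is that the precompiled libraries always match the binaries that analyzed them, which is exactly what breaks when precompiling for a target you cannot execute.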
I also apologize if it sounded for a time like I was not listening; it was more the pressure to finish things and have them ready for customers, while keeping the community in mind as well.
It's ok. The pressure is very different, since you want to provide a specific set of tools on some specific platforms for some specific users in a given timeframe. Conversely, in hdl/containers and hdl/MINGW-packages there is no specific set of tools, platforms, users or timeframe. That is the point: to use the canonical solutions, even if it takes longer to achieve a complete solution. That is because no one is explicitly funding those solutions, so the time that can be demanded is zero. Well, I must say that Google has been donating a container registry with "unlimited" resources since a couple of days ago. I need to start using it, though.
I am pretty sure there are cross-compilers for the Windows targets.
@tgingold maybe you mean https://github.com/hackfin/ghdl-cross.mk ? I think that is different. That generates a GHDL to be executed on Linux, which will produce simulation models for Windows. But GHDL itself is not cross-compiled.
@mmicko, absolutely no pressure or rush at all. Having a solid and stable open source EDA ecosystem is a mid-long term project that requires a lot of people to coordinate. We will get there, step by step.