Jim Lewis
BTW, .pro files are executable Tcl. This is the difference between the .pro format and our older formats.
Unai Martinez-Corral

Do core files specify what to run in the simulator?

@JimLewis, it depends on which core files. In the ones currently supported by FuseSoC, I think it does. See, for instance, https://github.com/m-kru/fsva/blob/master/dummy_fusesoc_lib/osvvm/div_by_3.core#L21-L26. However, in https://github.com/umarcor/osvb/blob/main/AXI4Stream/AXI4Stream.core the "targets" are specified elsewhere.

Unai Martinez-Corral
In https://github.com/antonblanchard/microwatt/blob/master/microwatt.core, a *.core file is used for synthesis and implementation with FuseSoC/edalize, but simulation is done with a Makefile, as well as implementation for some boards including Lattice devices.


I want to understand the similarities with regard to in-GUI commands that both OSVVM and VUnit provide. @GlenNicholls and nfranque (can't remember the specific nick) were working on that in the context of VUnit only. Does cocotb provide any feature in this regard?

If you want/need more info about the objective of that lmk

Unai Martinez-Corral
:thumbsup: :heart:
Jim Lewis
@GlenNicholls OSVVM's in-GUI commands are documented in the GitHub readme: https://github.com/OSVVM/OSVVM-Scripts/tree/master or in PDF: https://github.com/OSVVM/Documentation/blob/master/Script_user_guide.pdf
In either case it is short and simple.
Where can I find the container for Cadence Xcelium? :smile:
They call that one the garbage bin 😏
Unai Martinez-Corral
@svenn71 we cannot provide containers for non-open-source software. We are not allowed to. However, we can host dockerfiles for users to build the containers themselves. I am not aware of anyone using Xcelium containers, tho. I've heard of or used ModelSim/QuestaSim, Precision, Vivado and Radiant, but not Xcelium.
I am very familiar with the licensing model of the big three
I wish Netflix would read through John Cooley's DeepChip and have somebody create a TV series like Mad Men for the EDA business throughout the 80s, 90s, 00s and 10s. I think it would be a fun thing to watch.
Tim Ansell
svenn71: I'm pretty sure nobody would believe it
I didn't believe Mad Men either, but then I started googling .....
Unai Martinez-Corral

how is doit for python compared to rake for ruby regarding defining interdependence of tools and tasks in a workflow?

@svenn71 I'm not familiar with rake. However, some years ago I prototyped dbhi/run. One of the tools that I analysed was magefile, which is defined as:

a make/rake-like build tool using Go. You write plain-old go functions, and Mage automatically uses them as Makefile-like runnable targets.

See the feature comparison I did back then between how I wanted run to behave and what mage was offering: https://github.com/dbhi/run#similar-projects. In hindsight, Golang was probably not the best choice for writing the prototype of run. So, my personal target is to achieve dbhi.github.io/concept/run using Python and without writing anything from scratch (not again). doit feels very similar to magefile, except it's Python instead of Go. Therefore, I blindly expect both of them to be very similar to rake.
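To make the comparison concrete, here is a minimal, hypothetical doit task file; the task names and actions are made up for illustration and are not taken from run or mage:

```python
# dodo.py -- minimal, hypothetical doit example.
# doit discovers plain functions whose names start with 'task_' and reads
# the returned dict as task metadata (actions, targets, dependencies).

def task_compile():
    """Compile the design (placeholder shell action)."""
    return {
        "actions": ["echo compiling"],
        "targets": ["build/out.bin"],
    }

def task_simulate():
    """Run the simulation; depends on the compile task."""
    return {
        "actions": ["echo simulating"],
        "task_dep": ["compile"],  # inter-task dependency, like rake/mage targets
    }
```

Running `doit simulate` in the same directory would execute compile first, which is the kind of task interdependence rake and magefile provide.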

Kaleb Barrett
Does anyone have a document on the justifications for the design of AXI? A colleague and I were discussing it earlier and we are both curious. Perhaps they are buried in implications in the spec, but I'd like something more easily readable, if that exists.
Patrick Lehmann
you mean why AXI is built as it is built?
Kaleb Barrett
Yes. For example, "Why separate W and AW?", "Why separate, but required, read and write channel sets?", "Why are bursts decided by client instead of the bus arbitrator?", "Why master/slave instead of omnidirectional?", etc. Some of them become obvious if you think it through, others aren't, and I'd like to see how AXI became what it became.
Patrick Lehmann
Why separate W and AW?
  • for read, address and data are separated
  • to keep parity, write follows this
  • you can send address first, data first, or both at the same time
  • in case of bursts, buffering is different for addresses and data: reduced FIFO size for max outstanding bursts vs. max data capacity
1 reply
Patrick Lehmann
Why separate, but required, read and write channel sets?
  • you can have AXI-S to AXI-MM writers and independent AXI-MM to AXI-S readers
  • you can have independent buffering on write and read paths

Why are bursts decided by client instead of the bus arbitrator?

Can you explain this?

1 reply

Why master/slave instead of omnidirectional?

Omnidirectional does not exist, it's
master => slave
slave <= master
just merged into one.

no precharged buses anymore?
Rodrigo A. Melo
@ktbarrett I have had the same doubts since I started using AXI. @Paebbels I really appreciate all that you can answer about that :-D
So, in the case of the write channel, is it valid to specify wdata first and then, in the next cycle, awaddr? The spec seems to suggest address first, then data, but I saw IPs performing both operations at the same time. I have never seen data first, then address.
I'm pretty sure the order doesn't matter. I seem to remember chasing down a bug a long time ago where my AXI slave would hang because it didn't support both cases.
Patrick Lehmann
the spec allows it; many IPs don't support it, some crash on it
it's ok to hold back ready on data if your IP expects both at the same time
fun fact, with address first and data later, you can massively reduce the speed in Xilinx interconnects, because these smart IPs are just dumb ...
Kaleb Barrett

@Paebbels I'm not going to argue with you, but most of those justifications are not very compelling, or become non-sequiturs in the face of alternative implementations. Which is where we arrived when my colleague and I started discussing it.

Those interconnects (at least the Xilinx ones) get expensive and we have also noticed they don't perform all that great. I feel like implementations are limited by the inherent complexity in the task. Maybe it's just Xilinx? A lot of their IP is buggy, non-compliant, or just not very performant.

We have a much simpler memory mapped bus protocol we use in-house that beats Xilinx's AXI implementations on resources and performance. So I'm wondering why they opted for complexity over simplicity? I can't imagine they did it without cause, or out of ignorance. And that's why I asked for an authoritative source, so I could get answers "straight from the horse's mouth". Would you consider yourself an authoritative source on the subject?

1 reply
Patrick Lehmann
AXI is a protocol family from ARM Ltd and part of AMBA (Advanced Microcontroller Bus Architecture). It has:
  • APB - Advanced Peripheral Bus
  • AXI - Advanced eXtensible Interface Bus
  • AHB - Advanced High-performance Bus
Before Xilinx switched to ARM-based SoCs, which come with AXI interfaces and an AXI license for free, they used PowerPC with PLB (Processor Local Bus), and streaming was done with Xilinx's own protocol called LocalLink.
Patrick Lehmann
I think using AXI gives you almost the highest possible performance. OTOH, even AXI-Lite is sometimes oversized. I think APB would be a better fit then.
I'm not sure if ARM provides more insights on its AMBA site: https://www.arm.com/products/silicon-ip-system/embedded-system-design/amba-specifications
Does anyone have experience with AMBA 5 CHI? (the successor of AXI4/ACE)
Unai Martinez-Corral

@stnolting, @ktbarrett, I want to provide these refs to both of you in different channels, so I better write them here.

Most GitHub Actions workflows in ghdl/docker have workflow_dispatch and repository_dispatch events:

# https://github.com/ghdl/docker/blob/master/.github/workflows/cosim.yml
on:
  workflow_dispatch:
  repository_dispatch:
    types: [ cosim ]

The workflow_dispatch event has no arguments, but the repository_dispatch event has a field named types, where we put a keyword.
The following curl call is used for triggering workflows from other workflows in the same repo, where $1 is the keyword (I guess it might be a list too):

# https://github.com/ghdl/docker/blob/master/.github/trigger.sh

curl -X POST https://api.github.com/repos/ghdl/docker/dispatches \
-H "Content-Type: application/json" -H 'Accept: application/vnd.github.everest-preview+json' \
-H "Authorization: token ${GHDL_BOT_TOKEN}" \
--data "{\"event_type\": \"$1\"}"

As you see, GHDL_BOT_TOKEN is used (which is a secret configured in the repo). That's because the default ${{ github.token }} does not allow triggering other repos (they want to prevent inexperienced users from creating infinite loops of workflow triggers).

We do have a similar trigger at the end of the main workflow in ghdl/ghdl:

# https://github.com/ghdl/ghdl/blob/master/.github/workflows/Test.yml#L594-L600
    - name: '🔔 Trigger ghdl/docker'
      run: |
        curl -X POST https://api.github.com/repos/ghdl/docker/dispatches \
        -H 'Content-Type: application/json' \
        -H 'Accept: application/vnd.github.everest-preview+json' \
        -H "Authorization: token ${{ secrets.GHDL_BOT }}" \
        --data '{"event_type": "ghdl"}'

So, after each successful CI run of branch master (or tagged commit) in ghdl/ghdl, a cross-repo trigger is executed for
starting workflow 'ghdl' in ghdl/docker (https://github.com/ghdl/docker/blob/master/.github/workflows/buster.yml). That one triggers workflow 'vunit' at the end (https://github.com/ghdl/docker/blob/master/.github/workflows/buster.yml#L58-L60). Then, 'vunit' triggers workflow 'ext' (https://github.com/ghdl/docker/blob/master/.github/workflows/vunit.yml#L41-L43). That's how we have all the docker images from ghdl/docker always in sync with GHDL's master branch.

Naturally, as long as you have a token, you can achieve the same result using any scripting/programming language. It's just an API call. Should you have your own scheduling server, you could completely replace the cron jobs and handle them on your own.
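For instance, the curl call above could be sketched in plain Python (standard library only; the owner, repo, token and event type below are placeholders):

```python
# Sketch: the same repository_dispatch API call as the curl snippets above,
# built with the Python standard library. TOKEN is a placeholder, not a real
# secret; a real script would read it from the environment.
import json
import urllib.request

def build_dispatch_request(owner, repo, token, event_type):
    """Build the POST request for GitHub's repository dispatches endpoint."""
    url = f"https://api.github.com/repos/{owner}/{repo}/dispatches"
    data = json.dumps({"event_type": event_type}).encode()
    return urllib.request.Request(
        url,
        data=data,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/vnd.github.everest-preview+json",
            "Authorization": f"token {token}",
        },
        method="POST",
    )

# Actually sending it would be:
#   urllib.request.urlopen(build_dispatch_request("ghdl", "docker", token, "ghdl"))
```

The request mirrors the trigger.sh call field by field; only the HTTP client changes.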

Now, how to combine this with PRs/issues? issue-runner is a proof of concept that uses GitHub's API for retrieving the body of the first message in the issues. It extracts the code-blocks and executes them as a MWE inside a container. The main use case is to have continuous integration of the MWEs reported by users in GHDL. So, whenever GHDL is updated (or weekly/monthly) we can test all the open issue reports to check whether any of them was fixed.

issue-runner is a combination of three components:

What is not implemented in issue-runner (yet):

What should be changed:

  • Having the runner implemented in JS does not make sense: https://github.com/eine/issue-runner/blob/master/ts/runner.ts. That should be handled by golang (including the coloured summary), or maybe better, done with pydoit and/or pytest.
  • Most of the functionality might probably be handled through PyGitHub, instead of using JavaScript. tip is an example of a GitHub Action based on PyGitHub.

The MSYS2 organisation is a nice example of how to use GitHub Actions, dummy Releases and a scheduler server for keeping all the packages up to date and reacting to all the PRs. See Automated Build Process and the four dummy releases: msys2/msys2-autobuild/releases. That autobuilder is mostly based on PyGitHub.

Until recently, the default ${{ github.token }} in GitHub Actions workflows did not allow triggering other workflows (either in the same repo or in another one). Therefore, a Personal Access Token (PAT) with write permissions was required. For that reason, we did not exploit these repo cross-triggering possibilities across organisations (hdl, symbiflow, GHDL, VUnit). We are keeping them "isolated" for now, until we can have more sensible token/permission management. We don't want an abused bot to remove dozens of repos... It seems that GitHub is reworking the permissions in GitHub Actions, but there is no robust solution available yet.
At the same time, it seems that they are merging the functionality of repository_dispatch into workflow_dispatch, so I recommend reading the latest docs.

How to keep the testing of two or more repositories in sync?

  • In GHDL, we use cross-triggers between ghdl/ghdl and ghdl/docker. Other repos (ghdl/ghdl-cosim, ghdl/extended-tests, ghdl/ghdl-yosys-plugin) are NOT cross-triggered. We execute them through cron events, manual workflow_dispatch, or push/PR events. Therefore, all those tests are run post-mortem and do not affect the CI of the main repo.
  • In Icarus Verilog, there is repo ivtest which is NOT submoduled, but it is used in the CI workflow of the main repo (see https://github.com/steveicarus/iverilog/blob/master/.github/test.sh). Therefore, ivtest needs to be updated before iverilog and it needs to be kept in sync.
  • In organisation HDL, there is repo smoke-tests, which is added as a submodule in several repos (containers, MINGW-package...). Therefore, it needs to be updated before enhancements are tested in the "main" repos, but keeping sync is done through submodules.
  • In MSYS2, the autobuilder is executed every few hours, and it checks whether any job was scheduled. Antmicro uses a similar polling approach for coordinating external runners with GitHub Actions.
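The polling approach in the last bullet boils down to something like this sketch, where fetch_jobs and run_job are hypothetical stand-ins for the actual API calls:

```python
# Sketch of the polling pattern used by the MSYS2 autobuilder and by
# Antmicro's external runners: periodically ask a (placeholder) queue for
# pending jobs and execute them. 'fetch_jobs' and 'run_job' are hypothetical
# callables, not a real API.
def drain(fetch_jobs, run_job):
    """Run every currently pending job once; return how many were executed."""
    count = 0
    for job in fetch_jobs():
        run_job(job)
        count += 1
    return count

# A real autobuilder would invoke drain() from a cron job or systemd timer
# every few hours, rather than looping and sleeping in-process.
```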

I hope these references are useful for you. Please do ask if you want further details about any of them.

@umarcor Hi, sorry for the late reply, I finally got around to reading the link you sent me regarding pyCAPI: https://github.com/VHDL-LS/rust_hdl/issues/112#issuecomment-866038036
I am one of the devs for Xeda and was interested in adding support for pyCAPI alongside our current project definition flow, but I understand that pyCAPI is not fully completed, correct?
Kaleb Barrett
Is there a library for binding a message queue like ZeroMQ to SV via the DPI and to VHDL via the VHPI C FFI (I forget what it's called)?
Kaleb Barrett
Basically I'm trying to run a simulation as a service and want to use ZeroMQ for RPC and transaction queues. I know TLM exists to be transaction queues, but I don't think there is the same level of functionality that a message queue provides available for TLM. I could be wrong.
Does anybody know if yosys can skip the abc step and just map the native yosys logic blocks to their respective cells in a liberty file? I think the liberty parsing in abc is different from the liberty parsing in yosys 'read_liberty' function.
Tim Ansell
I'm about to present at the SBCCI / Chips in the Fields conference - https://sbmicro.org.br/chip-in-the-fields - Live stream @ https://www.youtube.com/cassriograndedosul -- Slides from the talk are at https://j.mp/sbcci21-sky130 -- Should be mostly familiar to everyone but does include some exciting updates people might be interested in...
Unai Martinez-Corral
Hi all! Although not active in the chat these last weeks, we did some exciting enhancements in HDL and GHDL. Let me post some "news":


  • Tools openFPGALoader and netgen were added.
  • Since Debian Bullseye was released, collection debian/bullseye was added. All the tools are available already.
    • In the near future, the default collection is expected to change from debian/buster to debian/bullseye. Therefore, I recommend users who are explicitly using Buster to switch to Bullseye.
  • Support for multiple architectures was added. We are using QEMU on GitHub Actions, so performance is a limiting factor.
    • For now, we are building openFPGALoader, GTKWave, verilator, magic, netgen, icestorm and arachne-pnr; all of them for amd64, arm64v8, ppc64le and s390x. Testers are very welcome!
    • We are not publishing manifests yet, hence, users of non-amd64 architectures need to use the explicit/full image name: REGISTRY/ARCHITECTURE/COLLECTION/IMAGE.
See the updated documentation: hdl.github.io/containers. Find further discussion in hdl/containers#40.



There was hard work on pyGHDL.dom and pyVHDLModel for supporting multiple identifiers and statements/clauses deeper in the hierarchy (instantiations, generates, blocks, processes, sequential statements, context, use,...). See ghdl/ghdl@dac2e4d.


  • Project was reworked and three subsections were added:

    • pyVHDLModelUtils: helpers/utilities built on top of pyGHDL.dom/pyVHDLModel: resolve, fmt and sphinx.
    • Open Source VHDL Design Explorer (OSVDE):
      • Updated for showing generics and architectures. Architectures are shown recursively, meaning that the hierarchy of the blocks/generates is shown. All generate types are supported (if, for and case) and the conditions/choices are shown.
        • Entity/component instances are neither resolved recursively nor cross-referenced (yet), because no name resolution was implemented. In the not-too-distant future, Tristan might implement partial resolution features in libghdl, to avoid reimplementing them in the middleware.
      • Some icons were updated: https://umarcor.github.io/osvb/apis/project/OSVDE.html#id1.
    • Documentation generation: a demo about using pyGHDL.dom in Sphinx.
      • A generic exec directive is used for executing arbitrary Python code (from pyVHDLModelUtils.sphinx).
      • The sources are loaded once with initDesign and the documentation for libraries and/or design units is generated with printDocumentationOf.
      • A subsection about Diagrams contains references to GHDL, Yosys and sphinxcontrib-hdl-diagrams.
  • Conceptual Model was added.

    • That is a 7+1 layer model, a result of the discussions we had during the last months. It's an early draft and still (maybe forever) subject to change. However, I hope it will make it easier to understand arbitrary references during the discussions.
      • In each layer, I added references to other sections of the documentation or to other repositories.
    • Electronic Design Automation Abstraction (EDA²) is used/presented/proposed. That is the expected reorganisation of the pyIPCMI codebase (and hopefully others) so that some pieces are reusable. The logo is still a draft; however, it illustrates the purpose: to create modules matching the "OSVB Model" (note the colours). In fact, the "OSVB Model" might be renamed to the "EDA² Model".
    • On a side but related note, I added a diagram of the vision for the structure of Hardware Studio, which is based on the projects/pieces of the OSVB/EDA² Model. Beware that the colours in that case are not an exact match. Nonetheless, it illustrates that the OSVB Model is not a complete and standalone stack; pieces/layers are expected to be (re)used depending on the target project.
      • "DOM utils" refers to pyVHDLUtils, thus pyHWS.lib.dom is mostly a clone of OSVDE (removing the tkinter stuff). Actually, tabulate is used in pyVHDLUtils.sphinx for generating the RST tables.