Jim Lewis
@JimLewis
In either case it is short and simple.
svenn71
@svenn71
Where can I find the container for Cadence Xcelium? :smile:
ktbarrett
@ktbarrett:matrix.org
[m]
They call that one the garbage bin 😏
Unai Martinez-Corral
@umarcor
@svenn71 we cannot provide containers for non-open-source software. We are not allowed to. However, we can host dockerfiles for users to build containers themselves. I am not aware of anyone using Xcelium containers, though. I've heard of or used ModelSim/QuestaSim, Precision, Vivado and Radiant, but not Xcelium.
svenn71
@svenn71
I am very familiar with the licensing model of the big three
svenn71
@svenn71
I wish Netflix would read through John Cooley's DeepChip and have somebody create a TV series like Mad Men for the EDA business throughout the 80s, 90s, 00s and 10s. I think it would be a fun thing to watch
Tim Ansell
@mithro
svenn71: I'm pretty sure nobody would believe it
svenn71
@svenn71
I didn't believe Mad Men either, but then I started googling .....
Unai Martinez-Corral
@umarcor

how is doit for python compared to rake for ruby regarding defining interdependence of tools and tasks in a workflow?

@svenn71 I'm not familiar with rake. However, some years ago I prototyped dbhi/run. One of the tools that I analysed was magefile, which is defined as:

a make/rake-like build tool using Go. You write plain-old go functions, and Mage automatically uses them as Makefile-like runnable targets.

See the feature comparison I did back then between how I wanted run to behave and what mage was offering: https://github.com/dbhi/run#similar-projects. In hindsight, Golang was probably not the best choice for writing the prototype of run. So, my personal target is to achieve dbhi.github.io/concept/run using Python and without writing anything from scratch (not again). doit feels very similar to magefile, except it's Python instead of Go. Therefore, I blindly expect both of them to be very similar to rake.
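
To illustrate (just a minimal sketch, not taken from any of the projects above; the file names and commands are made up), expressing interdependent tasks in doit looks like this:

# dodo.py: minimal doit sketch of interdependent tasks
# (file names and commands are illustrative only)

def task_synthesise():
    """Produce design.json from design.vhd."""
    return {
        "file_dep": ["design.vhd"],
        "targets": ["design.json"],
        "actions": ["yosys -m ghdl -p 'ghdl design.vhd -e design; write_json design.json'"],
    }

def task_report():
    """Runs only after 'synthesise' has succeeded."""
    return {
        "task_dep": ["synthesise"],
        "actions": ["echo synthesis done"],
    }

Running doit report would run synthesise first (and skip it if design.json is up to date), which is essentially the same dependency model rake and mage offer.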

Kaleb Barrett
@ktbarrett
Does anyone have a document on the justifications for the design of AXI? A colleague and I were discussing it earlier and we are both curious. Perhaps the justifications are buried as implications in the spec, but I'd like something more easily readable, if that exists.
Patrick Lehmann
@Paebbels
you mean why AXI is built as it is built?
Kaleb Barrett
@ktbarrett
Yes. For example, "Why separate W and AW?", "Why separate, but required, read and write channel sets?", "Why are bursts decided by the client instead of the bus arbitrator?", "Why master/slave instead of omnidirectional?", etc. Some of them become obvious if you think it through, others don't, and I'd like to see how AXI became what it became.
Patrick Lehmann
@Paebbels
Why separate W and AW?
  • for reads, address and data are separated.
  • for parity, write follows the same scheme
  • you can send address first, data first, or both at the same time
  • in the case of bursts, buffering is different for addresses and data: reduced FIFO size for max outstanding bursts vs. max data capacity
1 reply
Patrick Lehmann
@Paebbels
Why separate, but required, read and write channel sets?
  • you can have AXI-S to AXI-MM writers and independent AXI-MM to AXI-S readers
  • you can have independent buffering on write and read paths

Why are bursts decided by client instead of the bus arbitrator?

Can you explain this?

1 reply

Why master/slave instead of omnidirectional?

Omnidirectional does not exist, it's
master => slave
slave <= master
just merged into one.

svenn71
@svenn71
no precharged buses anymore?
Rodrigo A. Melo
@rodrigomelo9
@ktbarrett I have had the same doubts since I started using AXI. @Paebbels I really appreciate all that you can answer about that :-D
So, in the case of the write channel, is it valid to specify wdata first and then, in the next cycle, awaddr? The spec seems to suggest address first, then data, but I have seen IP performing both operations at the same time. I have never seen data first, then address.
GlenNicholls
@GlenNicholls
I'm pretty sure the order doesn't matter. I seem to remember chasing down a bug a long time ago where my AXI slave would hang because it didn't support both cases
Patrick Lehmann
@Paebbels
the spec allows it, but many IPs don't support it and some crash on it
it's ok to hold back ready on data if your IP expects both at the same time
fun fact, with address first and data later, you can massively reduce the speed in Xilinx interconnects, because these smart IPs are just dumb ...
Kaleb Barrett
@ktbarrett

@Paebbels I'm not going to argue with you, but most of those justifications are not very compelling or become non sequiturs in the face of alternative implementations, which is where we arrived when my colleague and I started discussing it.

Those interconnects (at least the Xilinx ones) get expensive and we have also noticed they don't perform all that great. I feel like implementations are limited by the inherent complexity in the task. Maybe it's just Xilinx? A lot of their IP is buggy, non-compliant, or just not very performant.

We have a much simpler memory mapped bus protocol we use in-house that beats Xilinx's AXI implementations on resources and performance. So I'm wondering why they opted for complexity over simplicity? I can't imagine they did it without cause, or out of ignorance. And that's why I asked for an authoritative source, so I could get answers "straight from the horse's mouth". Would you consider yourself an authoritative source on the subject?

1 reply
Patrick Lehmann
@Paebbels
AXI is a protocol family from ARM Ltd and part of AMBA (Advanced Microcontroller Bus Architecture). It has:
  • APB - Advanced Peripheral Bus
  • AXI - Advanced eXtensible Interface Bus
  • AHB - Advanced High-performance Bus
Before Xilinx switched to ARM-based SoCs, which come with AXI interfaces and an AXI license for free, they used PowerPC with PLB (Processor Local Bus), and streaming was done with Xilinx's own protocol called LocalLink.
Patrick Lehmann
@Paebbels
I think using AXI gives you almost the highest possible performance. OTOH, even AXI-Lite is sometimes oversized. I think APB would be a better fit then.
I'm not sure if ARM provides more insights on its AMBA site: https://www.arm.com/products/silicon-ip-system/embedded-system-design/amba-specifications
Nic30
@Nic30
Does anyone have experience with AMBA 5 CHI? (the successor of AXI4/ACE)
Unai Martinez-Corral
@umarcor

@stnolting, @ktbarrett, I want to provide these refs to both of you in different channels, so I better write them here.

Most GitHub Actions workflows in ghdl/docker have workflow_dispatch and repository_dispatch events:

# https://github.com/ghdl/docker/blob/master/.github/workflows/cosim.yml
  workflow_dispatch:
  repository_dispatch:
    types: [ cosim ]

The workflow_dispatch has no arguments, but the repository_dispatch has a field named types, where we put a keyword.
The following curl call is used for triggering workflows from other workflows in the same repo, where $1 is the keyword (I guess it might be a list too):

# https://github.com/ghdl/docker/blob/master/.github/trigger.sh

curl -X POST https://api.github.com/repos/ghdl/docker/dispatches \
-H "Content-Type: application/json" -H 'Accept: application/vnd.github.everest-preview+json' \
-H "Authorization: token ${GHDL_BOT_TOKEN}" \
--data "{\"event_type\": \"$1\"}"

As you see, GHDL_BOT_TOKEN is used (which is a secret configured in the repo). That's because the default ${{ github.token }} does not allow triggering other repos (they want to prevent inexperienced users from creating infinite loops of workflow triggers).

We do have a similar trigger at the end of the main workflow in ghdl/ghdl:

# https://github.com/ghdl/ghdl/blob/master/.github/workflows/Test.yml#L594-L600
    - name: '🔔 Trigger ghdl/docker'
      run: |
        curl -X POST https://api.github.com/repos/ghdl/docker/dispatches \
        -H 'Content-Type: application/json' \
        -H 'Accept: application/vnd.github.everest-preview+json' \
        -H "Authorization: token ${{ secrets.GHDL_BOT }}" \
        --data '{"event_type": "ghdl"}'

So, after each successful CI run of branch master (or tagged commit) in ghdl/ghdl, a cross-repo trigger is executed for
starting workflow 'ghdl' in ghdl/docker (https://github.com/ghdl/docker/blob/master/.github/workflows/buster.yml). That one triggers workflow 'vunit' at the end (https://github.com/ghdl/docker/blob/master/.github/workflows/buster.yml#L58-L60). Then, 'vunit' triggers workflow 'ext' (https://github.com/ghdl/docker/blob/master/.github/workflows/vunit.yml#L41-L43). That's how we have all the docker images from ghdl/docker always in sync with GHDL's master branch.

Naturally, as long as you have a token, you can achieve the same result using any scripting/programming language. It's just an API call. Should you have your own scheduling server, you could completely replace the cron jobs and handle them on your own.

Now, how to combine this with PRs/issues? issue-runner is a proof of concept that uses GitHub's API for retrieving the body of the first message of each issue. It extracts the code blocks and executes them as MWEs inside a container. The main use case is to have continuous integration of the MWEs reported by users in GHDL. So, whenever GHDL is updated (or weekly/monthly), we can test all the open issue reports to check whether any of them was fixed.
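
As a rough illustration of the idea only (this is not the actual implementation, which currently lives in TypeScript; the PyGitHub calls and the regex below are my own sketch):

# Sketch: fetch open issues and extract fenced code blocks (the MWEs) from their bodies
import re
from github import Github

gh = Github("<token>")                 # a PAT; placeholder
repo = gh.get_repo("ghdl/ghdl")
fence = re.compile(r"```[^\n]*\n(.*?)```", re.DOTALL)

for issue in repo.get_issues(state="open"):
    blocks = fence.findall(issue.body or "")
    if blocks:
        # each block would be written to a file and executed inside a container
        print(f"#{issue.number}: {len(blocks)} code block(s) to run")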

issue-runner is a combination of three components:

What is not implemented in issue-runner (yet):

What should be changed:

  • Having the runner implemented in JS does not make sense: https://github.com/eine/issue-runner/blob/master/ts/runner.ts. That should be handled by golang (including the coloured summary), or maybe better, done with pydoit and/or pytest.
  • Most of the functionality could probably be handled through PyGitHub instead of JavaScript. tip is an example of a GitHub Action based on PyGitHub (see the sketch just below).
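
Here is a hedged sketch of what such a PyGitHub-based trigger might look like, equivalent to the curl calls above (the token, repo and payload are placeholders; I believe create_repository_dispatch is the relevant call, but double-check the PyGitHub docs):

# Sketch only: repository_dispatch trigger via PyGitHub (placeholders, not production code)
from github import Github

gh = Github("<a PAT with repo scope>")      # the default github.token cannot cross-trigger
repo = gh.get_repo("ghdl/docker")           # target repository
repo.create_repository_dispatch(            # fires a repository_dispatch event
    event_type="cosim",                     # maps to the 'types' filter in the workflow
    client_payload={"reason": "manual test"},
)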

The MSYS2 organisation is a nice example of how to use GitHub Actions, dummy Releases and a scheduler server for keeping all the packages up to date and reacting to all the PRs. See Automated Build Process and the four dummy releases: msys2/msys2-autobuild/releases. That autobuilder is mostly based on PyGitHub.

Until recently, the default ${{ github.token }} in GitHub Actions workflows did not allow triggering other workflows (either in the same repo or in another one). Therefore, a Personal Access Token (PAT) with write permissions was required. For that reason, we did not exploit these repo cross-triggering possibilities across organisations (hdl, symbiflow, GHDL, VUnit). We are keeping them "isolated" for now, until we can have more sensible token/permission management. We don't want an abused bot to remove dozens of repos... It seems that GitHub is reworking the permissions in GitHub Actions, but there is no robust solution available yet.
At the same time, it seems that they are merging the functionality of repository_dispatch into workflow_dispatch, so I recommend reading the latest docs.

How to keep the testing of two or more repositories in sync?

  • In GHDL, we use cross-triggers between ghdl/ghdl and ghdl/docker. Other repos (ghdl/ghdl-cosim, ghdl/extended-tests, ghdl/ghdl-yosys-plugin) are NOT cross-triggered. We execute them through cron events, manual workflow_dispatch, or push/PR events. Therefore, all those tests are run post-mortem and do not affect the CI of the main repo.
  • In Icarus Verilog, there is repo ivtest which is NOT submoduled, but it is used in the CI workflow of the main repo (see https://github.com/steveicarus/iverilog/blob/master/.github/test.sh). Therefore, ivtest needs to be updated before iverilog and it needs to be kept in sync.
  • In organisation HDL, there is repo smoke-tests, which is added as a submodule in several repos (containers, MINGW-package...). Therefore, it needs to be updated before enhancements are tested in the "main" repos, but keeping sync is done through submodules.
  • In MSYS2, the autobuilder is executed every few hours, and it checks whether any job was scheduled. Antmicro uses a similar polling approach for coordinating external runners with GitHub Actions.

I hope these references are useful for you. Please do ask if you want further details about any of them.

Rishub
@rishubn
@umarcor Hi, sorry for the late reply, I finally got around to reading the link you sent me regarding pyCAPI: https://github.com/VHDL-LS/rust_hdl/issues/112#issuecomment-866038036
I am one of the devs for Xeda and was interested in adding support for pyCAPI alongside our current project definition flow - but I understand that pyCAPI is not fully complete, correct?
Kaleb Barrett
@ktbarrett
Is there a library for binding a message queue like ZeroMQ to SV via the DPI and to VHDL via the VHPI C FFI (I forget what it's called)?
Kaleb Barrett
@ktbarrett
Basically I'm trying to run a simulation as a service and want to use ZeroMQ for RPC and transaction queues. I know TLM exists to provide transaction queues, but I don't think the same level of functionality that a message queue provides is available for TLM. I could be wrong.
svenn71
@svenn71
Does anybody know if yosys can skip the abc step and just map the native yosys logic blocks to their respective cells in a liberty file? I think the liberty parsing in abc is different from the liberty parsing in yosys' read_liberty function.
Tim Ansell
@mithro
I'm about to present at the SBCCI / Chips in the Fields conference - https://sbmicro.org.br/chip-in-the-fields - Live stream @ https://www.youtube.com/cassriograndedosul -- Slides from the talk are at https://j.mp/sbcci21-sky130 -- Should be mostly familiar to everyone but does include some exciting updates people might be interested in...
Unai Martinez-Corral
@umarcor
Hi all! Although I have not been active in the chat these last weeks, we made some exciting enhancements in HDL and GHDL. Let me post some "news":

Containers

  • Tools openFPGALoader and netgen were added.
  • Since Debian Bullseye was released, collection debian/bullseye was added. All the tools are available already.
    • In the near future, the default collection is expected to change from debian/buster to debian/bullseye. Therefore, I recommend that users who explicitly use Buster switch to Bullseye.
  • Support for multiple architectures was added. We are using QEMU on GitHub Actions, so performance is a limiting factor.
    • For now, we are building openFPGALoader, GTKWave, verilator, magic, netgen, icestorm and arachne-pnr; all of them for amd64, arm64v8, ppc64le and s390x. Testers are very welcome!
    • We are not publishing manifests yet, hence, users of non-amd64 architectures need to use the explicit/full image name: REGISTRY/ARCHITECTURE/COLLECTION/IMAGE.
See the updated documentation: hdl.github.io/containers. Find further discussion in hdl/containers#40.

Awesome

GHDL

There was hard work on pyGHDL.dom and pyVHDLModel for supporting multiple identifiers and statements/clauses deeper in the hierarchy (instantiations, generates, blocks, processes, sequential statements, context, use,...). See ghdl/ghdl@dac2e4d.

OSVB

  • Project was reworked and three subsections were added:

    • pyVHDLModelUtils: helpers/utilities built on top of pyGHDL.dom/pyVHDLModel: resolve, fmt and sphinx.
    • Open Source VHDL Design Explorer (OSVDE):
      • Updated for showing generics and architectures. Architectures are shown recursively, meaning that the hierarchy of the blocks/generates is shown. All generate types are supported (if, for and case) and the conditions/choices are shown.
        • Entity/component instances are neither resolved recursively nor cross-referenced (yet), because no naive resolution was implemented. In the not-too-distant future, Tristan might implement partial resolution features in libghdl, to avoid reimplementing them in the middleware.
      • Some icons were updated: https://umarcor.github.io/osvb/apis/project/OSVDE.html#id1.
    • Documentation generation: a demo about using pyGHDL.dom in Sphinx.
      • A generic exec directive is used for executing arbitrary Python code (from pyVHDLModelUtils.sphinx).
      • The sources are loaded once with initDesign and the documentation for libraries and/or design units is generated with printDocumentationOf.
      • A subsection about Diagrams contains references to GHDL, Yosys and sphinxcontrib-hdl-diagrams.
  • Conceptual Model was added.

    • That is a 7+1 layer model resulting from the discussions we had during the last months. It's an early draft and still (maybe forever) subject to change. However, I hope it will make it easier to understand arbitrary references during the discussions.
      • In each layer, I added references to other sections of the documentation or to other repositories.
    • Electronic Design Automation Abstraction (EDA²) is used/presented/proposed. That is the expected reorganisation of the pyIPCMI codebase (and hopefully others) so that some pieces are reusable. The logo is still a draft; however, it illustrates the intent to create modules matching the "OSVB Model" (note the colours). In fact, the "OSVB Model" might be renamed to the "EDA² Model".
    • On a side but related note, I added a diagram of the vision for the structure of Hardware Studio, which is based on the projects/pieces of the OSVB/EDA² Model. Beware that the colours in that case are not an exact match. Nonetheless, it illustrates that the OSVB Model is not a complete and standalone stack, but that pieces/layers are expected to be (re)used depending on the target project.
      • "DOM utils" refers to pyVHDLUtils, thus pyHWS.lib.dom is mostly a clone of OSVDE (removing the tkinter stuff). Actually, tabulate is used in pyVHDLUtils.sphinx for generating the RST tables.
Unai Martinez-Corral
@umarcor

@umarcor Hi, sorry for the late reply, I finally got around to reading the link you sent me regarding pyCAPI: https://github.com/VHDL-LS/rust_hdl/issues/112#issuecomment-866038036
I am one of the devs for Xeda and was interested in adding support for pyCAPI alongside our current project definition flow - but I understand that pyCAPI is not fully complete, correct?

@rishubn, no worries. These tasks/topics take a long time and they are not the main priority for most of us. Hence, discussions tend to be asynchronous (sometimes very asynchronous).

As you say, pyCAPI is not fully complete and, ideally, it should not need to be completed. Let me explain: the motivation of pyCAPI is to propose making the .core (CAPI) format support available in FuseSoC reusable by other projects. There are two requirements for that to be possible:

  • Developers/maintainers of other projects need to be willing to use/support .core files. For that to happen, using that file format must not significantly limit the capabilities of the project. For instance, I'm not sure about wildcards being supported for defining the sources in the filesets.
  • Developers/maintainers of FuseSoC need to be willing to document the internal API of the CAPI format. The current CAPI2 Reference is a description for the users, the ones who are going to write YAML. We need the other side: the Python objects/classes that are produced after using the CAPI reader.

Should any of those conditions not be achievable, then we might need to define a new version of CAPI (or use some other name).

From a technical point of view the current pyCAPI serves two purposes:

  • Provide a working example of how a "typical" VUnit run script might be adapted to be used along with .core files: from https://github.com/umarcor/osvb/blob/main/AXI4Stream/test/vunit/run.py to https://github.com/umarcor/osvb/blob/main/AXI4Stream/test/vunit/run_capi.py. Should the pyCAPI.VUnit.AddCoreFilesets be implemented based on FuseSoC's API reader, the template-based run files generated by edalize would not be required. Users might use the "typical" VUnit entrypoints or optionally FuseSoC/edalize. In any case, they would keep the same .core files for simulation and for synthesis (not supported by VUnit).
  • Evaluate the capabilities of Python's dataclasses to (un)marshal JSON/YAML. Having built-in marshaling support would significantly reduce the maintenance of any parser/writer, making it trivial to support JSON, YAML or other similar formats. I must say I'm familiar with marshaling using golang, but not Python. The initial tests were bittersweet. The code is really clean, easy to read and maintain: https://github.com/umarcor/osvb/blob/main/mods/pyCAPI/__init__.py. It works nicely, as long as you expect "static" types. That is, each field "must" have a single expected type (class). Unions are not supported. That is unfortunate because several projects rely on items in a dictionary or a list possibly having different types. That's the case for FuseSoC's CAPI. (A minimal sketch of the approach follows below.)
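
Here is the kind of minimal sketch I mean (the class and field names are illustrative, not the actual pyCAPI classes; it assumes PyYAML is installed):

# Minimal sketch of dataclass-based unmarshaling of a YAML core description
from dataclasses import dataclass, field
from typing import List
import yaml

@dataclass
class Fileset:
    files: List[str] = field(default_factory=list)
    logical_name: str = "work"

@dataclass
class Core:
    name: str = ""
    filesets: List[Fileset] = field(default_factory=list)

def load_core(text: str) -> Core:
    data = yaml.safe_load(text)
    # each field has a single static type; unions (e.g. str OR dict) are not handled
    return Core(
        name=data.get("name", ""),
        filesets=[Fileset(**fs) for fs in data.get("filesets", [])],
    )

core = load_core("""
name: AXI4Stream
filesets:
  - files: [src/axi4stream.vhd]
    logical_name: lib
""")
print(core.filesets[0].files)  # ['src/axi4stream.vhd']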

Therefore, CAPI: 3 is used in https://github.com/umarcor/osvb/blob/main/AXI4Stream/AXI4Stream.core to somehow indicate that it is a possibly necessary evolution.

Moreover:

  • Several projects mix declarative and imperative features, sometimes in the same file. I believe that is the case of FuseSoC too, which has a kind of DSL (my_flag ? (my_string)). My perception is that we can and should avoid that by having the imperative features written in a different file and keeping the YAML declarative only. That's the purpose of the *.core + run.py example. However, if multiple other projects implemented some DSL, maybe my perception is wrong and that's required indeed.
  • Similarly, marshaling might be a sweet dream but not possible in practice.
  • The features that the YAML configuration file needs to support are directly related to the "Project" layer of the OSVB Model, which is expected to be split from pyIPCMI, as commented in https://umarcor.github.io/osvb/intro/model.html#electronic-design-automation-abstraction-eda2.

Overall, I am not experienced enough with FuseSoC and I cannot afford to dive into the codebase. Therefore, any help to confirm/correct my previous assumptions is very welcome. Meanwhile, I'm prioritising the CLI, EDA and Workflows "layers", which are potentially the most reusable by developers.

@rishubn, please, let me/us know about Xeda and the configurations you use. I'm personally more interested in the Python representation of the data after you parse the configuration files, rather than the actual TOML syntax. I saw that you define multiple targets/designs by providing a list of sources, a top unit and a clock.

  • How do you handle analysing/elaborating sources into different logical libraries?
  • How are constraints set for each design and board?
  • In general, what was the motivation for starting Xeda from scratch? I.e., what is done significantly differently compared to other similar projects?
Unai Martinez-Corral
@umarcor

Is there a library for binding a message queue like ZeroMQ to SV via the DPI and to VHDL via the VHPI C FFI (I forget what it's called)?
Basically I'm trying to run a simulation as a service and want to use ZeroMQ for RPC and transaction queues. I know TLM exists to provide transaction queues, but I don't think the same level of functionality that a message queue provides is available for TLM. I could be wrong.

@ktbarrett we had several discussions with @bradleyharden in GHDL's and VUnit's channels around 2018 Q4 and 2019. The context was "binding Python queues to VUnit queues" so we could achieve co-simulation "independently" of the language. The problem is having matching VHDL and C/Python representations of the data, i.e. knowing the type conversions and how to handle each type on the other side. Moreover, when using GHDL and VHPIDIRECT, the one who allocates memory needs to free it (you cannot allocate in C and free in Ada/VHDL). That's why I turned to the VASG to try to have those type conversions and memory constraints specified.

Unai Martinez-Corral
@umarcor
My initial motivation was having the VUnit verification components use external buffers for their internal data. That is, avoid copying test data from a foreign app into the testbench, only for it to be copied into the VC and finally sent to the UUT. Instead, provide the VC with a pointer to the location of the data in the foreign app. By the same token, read the results by accessing the data in the VC at the end.