Unai Martinez-Corral
@umarcor
They do have a Python and software background which I lack.
Unai Martinez-Corral
@umarcor

I would probably debase VUnit's regression/runner interface. It's not currently reusable for cocotb (or other) tests. The regression/runner only really works for HDL tests and the test spec and result formats are not documented.

@ktbarrett, yes, I am aware of that. However, there have been some relevant changes since we last talked about it four months ago:

  • The entrypoints to OSVVM tests are VHDL Configurations (which are not the same as the "configuration" term used in VUnit). Although OSVVM can also use entities, I want to be able to execute existing OSVVM entrypoints. @LarsAsplund is open to it if it implies that OSVVM will use/support VUnit's Python plumbing.
    • I'm working with @JimLewis to introduce him to CI using his Tcl plumbing on MSYS2 (OSVVM/OsvvmLibraries#2), which is the main workflow he uses and teaches. I added a Linux job too.
    • I want to convert OSVVM's *.pro files to *.core files and let VUnit use those (instead of distributing the core of OSVVM only with VUnit). I'm not sure whether OSVVM + FuseSoC makes sense, but that would just be a useful side outcome in case anyone needs it.
    • I want to understand the similarities with regard to the in-GUI commands that both OSVVM and VUnit provide. @GlenNicholls and nfranque (can't remember the specific nick) were working on that in the context of VUnit only. Does cocotb provide any feature in this regard?
  • I need to support custom environment variables for each testbench/test in VUnit. That's something a contributor of cocotb brought up (I remember his green avatar, the nick starts with 'j', and he reads several rooms frequently, but I cannot remember his nick now) in February. In May, I found that I needed that feature myself in a different use case.
  • pyVHDLModel makes it possible to generate a VHDL testbench programmatically after parsing an existing entity. The main blocker for VUnit running cocotb testbenches as-is was the need to autogenerate that testbench (or to require the users to do it). Now that can be hidden from the users. It is arguable whether implementing this is faster/easier than adapting VUnit's runner to execute cocotb's testbenches (similarly to running OSVVM's Configurations). I'm not making any claim in that regard.

Those are my notes for layer 3. I hope to propose reshaping VUnit's runner to cover those use cases and, hopefully, to run cocotb tests without a top-level testbench. But I still need several months of learning, including pydoit, Airflow and Patrick's layers 1-2.

Jim Lewis
@JimLewis
@umarcor WRT generating *.core files or anything else, it would be possible to create an application layer in the OSVVM scripts that does that. Look at VendorScripts_GHDL.tcl for an idea. Instead of performing an action that executes something, it could do a Tcl "puts" to output the appropriate .core file information.
Jim Lewis
@JimLewis
To understand what I mean, look at the GHDL log file created by an OSVVM .pro script: it contains a log of all the commands run and, for the simulations, the output.
Unai Martinez-Corral
@umarcor
@jwprice100 is the user whose nick I could not remember above.
Unai Martinez-Corral
@umarcor
@JimLewis I thought about using Patrick's PowerShell vendor scripts, which are equivalent to the Tcl scripts in that regard. Both of them parse the *.pro files. However, I could also do it manually once; using an editor with vertical/block selection is probably faster than reusing either the Tcl or the pwsh scripts. Nevertheless, it doesn't make much sense until those sources are usable, that is, until VUnit can execute OSVVM Configurations.
There is another recent motivation for doing it: instead of converting the *.pro files explicitly, use wildcards and have it as a "cursed" example for implementing dependency scanning based on pyVHDLModel :smiley:. I like that idea because it's an exciting demo and it's also more idiomatic to feed wildcards into VUnit, rather than explicit filesets.
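For illustration only, here is a minimal sketch of that wildcard style, assuming a VUnit run script; the library name and the path pattern are hypothetical, not taken from any existing project:

# run.py - hypothetical VUnit sketch; 'osvvm' and the glob below are made up
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("osvvm")
# Feed a wildcard instead of an explicit fileset; VUnit scans the dependencies
lib.add_source_files("OsvvmLibraries/osvvm/*.vhd")
vu.main()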
Jim Lewis
@JimLewis
Ok. I noticed a problem in the .core file someone manually generated for OSVVM: Osvvm.pro has a conditional in it.
Do core files specify what to run in the simulator?
BTW, .pro files are executable Tcl. That is the difference between the .pro format and our older formats.
Unai Martinez-Corral
@umarcor

Do core files specify what to run in the simulator?

@JimLewis, it depends on which core files. For the ones currently supported by FuseSoC, I think they do. See, for instance, https://github.com/m-kru/fsva/blob/master/dummy_fusesoc_lib/osvvm/div_by_3.core#L21-L26. However, in https://github.com/umarcor/osvb/blob/main/AXI4Stream/AXI4Stream.core the "targets" are specified elsewhere.

Unai Martinez-Corral
@umarcor
In https://github.com/antonblanchard/microwatt/blob/master/microwatt.core, a *.core file is used for synthesis and implementation with FuseSoC/edalize, but simulation, as well as implementation for some boards (including Lattice devices), is done with a Makefile.
GlenNicholls
@GlenNicholls

@umarcor

I want to understand the similarities with regard to in-GUI commands that both OSVVM and VUnit provide. @GlenNicholls and nfranque (can't remember the specific nick) were working on that in the context of VUnit only. Does cocotb provide any feature in this regard?

If you want/need more info about the objective of that lmk

Unai Martinez-Corral
@umarcor
:thumbsup: :heart:
Jim Lewis
@JimLewis
@GlenNicholls OSVVM's in-GUI commands are documented in the GitHub readme: https://github.com/OSVVM/OSVVM-Scripts/tree/master or in the PDF: https://github.com/OSVVM/Documentation/blob/master/Script_user_guide.pdf
In either case it is short and simple.
svenn71
@svenn71
Where can I find the container for Cadence Xcelium? :smile:
ktbarrett
@ktbarrett:matrix.org
[m]
They call that one the garbage bin 😏
Unai Martinez-Corral
@umarcor
@svenn71 we cannot provide containers for non-open-source software. We are not allowed to. However, we can host dockerfiles for users to build the containers themselves. I am not aware of anyone using Xcelium containers, tho. I've heard of or used ModelSim/QuestaSim, Precision, Vivado and Radiant ones, but not Xcelium.
svenn71
@svenn71
I am very familiar with the licensing model of the big three
svenn71
@svenn71
I wish Netflix would read through John Cooley's Deep Chip and have somebody create a TV series like Mad Men for the EDA business throughout the 80s, 90s, 00s and 10s. I think it would be a fun thing to watch
Tim Ansell
@mithro
svenn71: I'm pretty sure nobody would believe it
svenn71
@svenn71
I didn't believe Mad Men either, but then I started googling .....
Unai Martinez-Corral
@umarcor

how is doit for python compared to rake for ruby regarding defining interdependence of tools and tasks in a workflow?

@svenn71 I'm not familiar with rake. However, some years ago I prototyped dbhi/run. One of the tools that I analysed was magefile, which is defined as:

a make/rake-like build tool using Go. You write plain-old go functions, and Mage automatically uses them as Makefile-like runnable targets.

See the feature comparison I did back then between how I wanted run to behave and what mage offered: https://github.com/dbhi/run#similar-projects. In hindsight, Golang was probably not the best choice for writing the prototype of run. So, my personal target is to achieve dbhi.github.io/concept/run using Python and without writing anything from scratch (not again). doit feels very similar to magefile, except it's Python instead of Go. Therefore, I blindly expect both of them to be very similar to rake.
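To give a flavour of doit for someone coming from rake or mage, here is a minimal, hypothetical dodo.py; the task names and the GHDL commands are made up, not taken from any of the projects above:

# dodo.py - hypothetical pydoit example; run with `doit` or `doit sim`
def task_analyze():
    """Analyse a VHDL source (made-up GHDL command)."""
    return {
        "actions": ["ghdl -a tb_example.vhd"],
        "file_dep": ["tb_example.vhd"],
        "targets": ["work-obj93.cf"],
    }

def task_sim():
    """Elaborate and run the simulation; doit resolves the task dependency."""
    return {
        "actions": ["ghdl --elab-run tb_example"],
        "task_dep": ["analyze"],
    }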

Kaleb Barrett
@ktbarrett
Does anyone have a document on the justifications for the design of AXI? A colleague and I were discussing it earlier and we are both curious. Perhaps they are buried as implications in the spec, but I'd like something more easily readable, if that exists.
Patrick Lehmann
@Paebbels
you mean why AXI is built as it is built?
Kaleb Barrett
@ktbarrett
Yes. For example, "Why separate W and AW?", "Why separate, but required, read and write channel sets?", "Why are bursts decided by the client instead of the bus arbitrator?", "Why master/slave instead of omnidirectional?", etc. Some of them become obvious if you think them through, others don't, and I'd like to see how AXI became what it became.
Patrick Lehmann
@Paebbels
Why separate W and AW?
  • for reads, address and data are separated.
  • for parity, the write path follows the same split
  • you can send the address first, the data first, or both at the same time
  • in case of bursts, buffering is different for addresses and data: a reduced FIFO size for the maximum number of outstanding bursts vs. the maximum data capacity
Patrick Lehmann
@Paebbels
Why separate, but required, read and write channel sets?
  • you can have an AXI-S to AXI-MM writer and an independent AXI-MM to AXI-S reader
  • you can have independent buffering on write and read paths

Why are bursts decided by client instead of the bus arbitrator?

Can you explain this?


Why master/slave instead of omnidirectional?

Omnidirectional does not exist, it's
master => slave
slave <= master
just merged into one.

svenn71
@svenn71
no precharged buses anymore?
Rodrigo A. Melo
@rodrigomelo9
@ktbarrett I have had the same doubts since I started using AXI. @Paebbels I really appreciate all that you can answer about that :-D
So, in the case of the write channel, is it valid to provide wdata first and then, in the next cycle, awaddr? The spec seems to suggest first address, then data, but I have seen IP performing both operations at the same time. I have never seen first data, then address.
GlenNicholls
@GlenNicholls
I'm pretty sure the order doesn't matter. I seem to remember chasing down a bug a long time ago where my AXI slave would hang because it didn't support both cases
Patrick Lehmann
@Paebbels
the spec allows it, many IPs don't support it, some crash on it
it's ok to hold back ready on data if your IP expects both at the same time
fun fact, with address first and data later, you can massively reduce the speed in Xilinx interconnects, because these smart IPs are just dumb ...
Kaleb Barrett
@ktbarrett

@Paebbels I'm not going to argue with you, but most of those justifications are not very compelling, or become non sequiturs in the face of alternative implementations. Which is where we arrived when my colleague and I started discussing it.

Those interconnects (at least the Xilinx ones) get expensive and we have also noticed they don't perform all that well. I feel like implementations are limited by the inherent complexity of the task. Maybe it's just Xilinx? A lot of their IP is buggy, non-compliant, or just not very performant.

We have a much simpler memory-mapped bus protocol we use in-house that beats Xilinx's AXI implementations on resources and performance. So I'm wondering why they opted for complexity over simplicity. I can't imagine they did it without cause, or out of ignorance. And that's why I asked for an authoritative source, so I could get answers "straight from the horse's mouth". Would you consider yourself an authoritative source on the subject?

Patrick Lehmann
@Paebbels
AXI is a protocol family from ARM Ltd and part of AMBA (Advanced Microcontroller Bus Architecture). It has:
  • APB - Advanced Peripheral Bus
  • AXI - Advanced eXtensible Interface Bus
  • AHB - Advanced High-performance Bus
Before Xilinx switched to ARM-based SoCs, which come with AXI interfaces and an AXI license for free, they used PowerPC with PLB (Processor Local Bus), and streaming was done with Xilinx's own protocol called LocalLink.
Patrick Lehmann
@Paebbels
I think using AXI gives you almost the highest possible performance. OTOH, even AXI-Lite is sometimes oversized; I think APB would then be a better fit.
I'm not sure if ARM provides more insights on its AMBA site: https://www.arm.com/products/silicon-ip-system/embedded-system-design/amba-specifications
Nic30
@Nic30
Does anyone have experience with AMBA 5 CHI? (the successor of AXI4/ACE)
Unai Martinez-Corral
@umarcor

@stnolting, @ktbarrett, I want to provide these refs to both of you in different channels, so I'd better write them here.

Most GitHub Actions workflows in ghdl/docker have workflow_dispatch and repository_dispatch events:

# https://github.com/ghdl/docker/blob/master/.github/workflows/cosim.yml
on:
  workflow_dispatch:
  repository_dispatch:
    types: [ cosim ]

The workflow_dispatch event has no arguments, but repository_dispatch has a field named types, where we put a keyword.
The following curl call is used for triggering workflows from other workflows in the same repo, where $1 is the keyword (I guess it might be a list too):

# https://github.com/ghdl/docker/blob/master/.github/trigger.sh

curl -X POST https://api.github.com/repos/ghdl/docker/dispatches \
-H "Content-Type: application/json" -H 'Accept: application/vnd.github.everest-preview+json' \
-H "Authorization: token ${GHDL_BOT_TOKEN}" \
--data "{\"event_type\": \"$1\"}"

As you can see, GHDL_BOT_TOKEN is used (which is a secret configured in the repo). That's because the default ${{ github.token }} does not allow triggering other repos (they want to prevent inexperienced users from creating infinite loops of workflow triggers).

We do have a similar trigger at the end of the main workflow in ghdl/ghdl:

# https://github.com/ghdl/ghdl/blob/master/.github/workflows/Test.yml#L594-L600
    - name: '🔔 Trigger ghdl/docker'
      run: |
        curl -X POST https://api.github.com/repos/ghdl/docker/dispatches \
        -H 'Content-Type: application/json' \
        -H 'Accept: application/vnd.github.everest-preview+json' \
        -H "Authorization: token ${{ secrets.GHDL_BOT }}" \
        --data '{"event_type": "ghdl"}'

So, after each successful CI run of branch master (or a tagged commit) in ghdl/ghdl, a cross-repo trigger is executed for starting workflow 'ghdl' in ghdl/docker (https://github.com/ghdl/docker/blob/master/.github/workflows/buster.yml). That one triggers workflow 'vunit' at the end (https://github.com/ghdl/docker/blob/master/.github/workflows/buster.yml#L58-L60). Then, 'vunit' triggers workflow 'ext' (https://github.com/ghdl/docker/blob/master/.github/workflows/vunit.yml#L41-L43). That's how we have all the docker images from ghdl/docker always in sync with GHDL's master branch.

Naturally, as long as you have a token, you can achieve the same result using any scripting/programming language; it's just an API call. Should you have your own scheduling server, you could completely replace the cron jobs and handle them on your own.
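For instance, a rough Python equivalent of the curl calls above, assuming the requests package and the same token exported as an environment variable, would be:

# Hypothetical Python version of .github/trigger.sh (pip install requests)
import os
import requests

r = requests.post(
    "https://api.github.com/repos/ghdl/docker/dispatches",
    headers={
        "Accept": "application/vnd.github.everest-preview+json",
        "Authorization": f"token {os.environ['GHDL_BOT_TOKEN']}",
    },
    json={"event_type": "ghdl"},
)
r.raise_for_status()  # the API answers 204 No Content on success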

Now, how to combine this with PRs/issues? issue-runner is a proof of concept that uses GitHub's API for retrieving the body of the first message of each issue. It extracts the code blocks and executes them as an MWE inside a container. The main use case is to have continuous integration of the MWEs reported by users in GHDL. So, whenever GHDL is updated (or weekly/monthly) we can test all the open issue reports to check whether any of them was fixed.
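The core of that idea is just a couple of API calls; a hypothetical sketch with PyGitHub (not the actual issue-runner code, and both the token variable and the regex are illustrative) could look like:

# Hypothetical sketch of the issue-runner idea, based on PyGitHub
import os
import re
from github import Github

gh = Github(os.environ["GITHUB_TOKEN"])
repo = gh.get_repo("ghdl/ghdl")

for issue in repo.get_issues(state="open"):
    body = issue.body or ""
    # Naive extraction of the fenced code blocks from the first message
    blocks = re.findall(r"```[^\n]*\n(.*?)```", body, re.DOTALL)
    if blocks:
        print(f"#{issue.number}: {len(blocks)} code block(s) to run as an MWE")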

issue-runner is a combination of three components:

What is not implemented in issue-runner (yet):

What should be changed:

  • Having the runner implemented in JS does not make sense: https://github.com/eine/issue-runner/blob/master/ts/runner.ts. That should be handled by golang (including the coloured summary), or maybe better, done with pydoit and/or pytest.
  • Most of the functionality could probably be handled through PyGitHub, instead of using JavaScript. tip is an example of a GitHub Action based on PyGitHub.

The MSYS2 organisation is a nice example of how to use GitHub Actions, dummy releases and a scheduler server for keeping all the packages up to date and reacting to all the PRs. See Automated Build Process and the four dummy releases: msys2/msys2-autobuild/releases. That autobuilder is mostly based on PyGitHub.

Until recently, the default ${{ github.token }} in GitHub Actions workflows did not allow triggering other workflows (either in the same repo or in another one). Therefore, a Personal Access Token (PAT) with write permissions was required. For that reason, we did not exploit these cross-repo triggering possibilities across organisations (hdl, symbiflow, GHDL, VUnit). We are keeping them "isolated" for now, until we can have more sensible token/permission management. We don't want an abused bot to remove dozens of repos... It seems that GitHub is reworking the permissions in GitHub Actions, but there is no robust solution available yet.
At the same time, it seems that they are merging the functionality of repository_dispatch into workflow_dispatch, so I recommend reading the latest docs.

How to keep the testing of two or more repositories in sync?

  • In GHDL, we use cross-triggers between ghdl/ghdl and ghdl/docker. Other repos (ghdl/ghdl-cosim, ghdl/extended-tests, ghdl/ghdl-yosys-plugin) are NOT cross-triggered. We execute them through cron events, manual workflow_dispatch, or push/PR events. Therefore, all those tests are run post-mortem and do not affect the CI of the main repo.
  • In Icarus Verilog, there is the repo ivtest, which is NOT a submodule but is used in the CI workflow of the main repo (see https://github.com/steveicarus/iverilog/blob/master/.github/test.sh). Therefore, ivtest needs to be updated before iverilog, and it needs to be kept in sync.
  • In the hdl organisation, there is the repo smoke-tests, which is added as a submodule in several repos (containers, MINGW-package...). Therefore, it needs to be updated before enhancements are tested in the "main" repos, but keeping things in sync is done through submodules.
  • In MSYS2, the autobuilder is executed every few hours, and it checks whether any job was scheduled. Antmicro uses a similar polling approach for coordinating external runners with GitHub Actions.

I hope these references are useful for you. Please do ask if you want further details about any of them.

Rishub
@rishubn
@umarcor Hi, sorry for the late reply. I finally got around to reading the link you sent me regarding pyCAPI: https://github.com/VHDL-LS/rust_hdl/issues/112#issuecomment-866038036
I am one of the devs of Xeda and was interested in adding support for pyCAPI alongside our current project definition flow - but I understand that pyCAPI is not fully complete, correct?
Kaleb Barrett
@ktbarrett
Is there a library for binding a message queue like ZeroMQ to SV via the DPI and to VHDL via the VHPI C FFI (I forget what it's called)?
Kaleb Barrett
@ktbarrett
Basically I'm trying to run a simulation as a service and want to use ZeroMQ for RPC and transaction queues. I know TLM exists to provide transaction queues, but I don't think TLM offers the same level of functionality that a message queue provides. I could be wrong.
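For reference, the kind of RPC endpoint I have in mind is just a ZeroMQ REP socket; here is a minimal pyzmq sketch of the service side (the port and the JSON message format are made up):

# Minimal pyzmq sketch of the "simulation as a service" endpoint
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://*:5555")

while True:
    request = sock.recv_json()   # e.g. {"op": "write", "addr": 0, "data": 1}
    # ... forward the transaction into the simulation here ...
    sock.send_json({"status": "ok", "echo": request})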