Lars Asplund
@LarsAsplund
This is one common response bias. Mentor is not using its own customer lists for the survey, but every independent industry list will also include Mentor customers. Those customers know who's behind the study, and they tend to be more likely to respond to the survey and to answer more in line with Mentor's agenda, whether that agenda is true or not.
Lars Asplund
@LarsAsplund
By looking at GitHub I can determine what people actually do, not only what they say they do.
T. Meissner
@tmeissner
The only problem is that, for example, I only have my own spare-time projects on GitHub
Sadly, most projects in the FPGA/ASIC industry are closed or confidential
Commercial ones, that is
Every time Mentor calls for their survey, I take part. And every time, I mention the open-source frameworks. I usually also write something like "Better VHDL & PSL support" in the wish text field ;)
But I'm also not sure how representative these surveys are.
Lars Asplund
@LarsAsplund

The only problem is that, for example, I only have my own spare-time projects on GitHub
Sadly, most projects in the FPGA/ASIC industry are closed or confidential

What I need to measure, to make a comparison with the Wilson study, is the number of professionals who publish open source on GitHub and provide their testbenches. That's a very special group of people, but as long as it's a random, unbiased subset of the engineers working in the closed industry, it can be used to represent that industry

Kaleb Barrett
@ktbarrett
We in cocotb are discussing (cocotb/cocotb#1562) replacing our makefile-based test runner/build scripts with more developed test frameworks. We are not interested in integration with, or dependence on, a single framework, but in adding support for multiple frameworks. Right now we are looking at using edalize for build scripts and a popular Python testing framework such as pytest for regressions. However, seeing how popular VUnit is, it would please a lot of people if we could add support for testing with cocotb to VUnit. Is that something VUnit is interested in? I see cocotb mentioned in a few issues, but no serious contemplation of, or attempt at, integrating it.
Lars Asplund
@LarsAsplund

But I'm also not sure how representative these surveys are.

No one can be sure, because the data behind the survey is not public. I've said it before, but facts can only come from measurements on data that anyone can repeat and review. Anything else is an opinion

@ktbarrett There have been some discussions on how these tools could benefit from each other, but not more than that. I'm always interested in value-adding integrations, so I would welcome such a discussion with cocotb people involved. Right now my brain says bedtime, but let me look at that issue of yours tomorrow.
Lars Asplund
@LarsAsplund
@ktbarrett I agree with your analysis that this should be very non-intrusive so that we don't burden our separate development efforts with a new dependency. Just to be clear, what you're looking for is a command line experience where a user can list and run all or some tests and it doesn't matter if tests are implemented in VUnit or cocotb?
Bradley Harden
@bradleyharden
Is there no VHDL-2008 check package?
I assumed I could use check_equal with sfixed, but I guess that's not the case?
Kaleb Barrett
@ktbarrett

@LarsAsplund

I agree with your analysis that this should be very non-intrusive so that we don't burden our separate development efforts with a new dependency.

Yes, cocotb does not have to be shipped with VUnit; it can be used as long as it is installed in the user's Python environment. cocotb would appear to VUnit as nothing more than a VPI/VHPI/FLI cosim project.

@ktbarrett Just to be clear, what you're looking for is a command line experience where a user can list and run all or some tests and it doesn't matter if tests are implemented in VUnit or cocotb?

That is what we are looking for. We would like to take advantage of VUnit's capabilities for building and running tests with multiple simulators, and of its useful regression features like test parallelization and isolation.
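
For illustration, loading cocotb as a VPI plug-in from a run.py might look roughly like the sketch below. The library path is an assumption and cocotb's own configuration (e.g. which Python test module to run) is omitted; this is not a confirmed integration, it only shows how cocotb could appear to VUnit as a plain VPI cosim project:

  # Hypothetical sketch: point VUnit/GHDL at a pre-built cocotb VPI library
  from vunit import VUnit

  vu = VUnit.from_argv()
  lib = vu.add_library("lib")
  lib.add_source_files("src/*.vhd")

  # Ask GHDL to load the cocotb VPI plug-in at simulation time
  # (the library path is an assumption for this sketch)
  vu.set_sim_option("ghdl.sim_flags", ["--vpi=/path/to/libcocotbvpi.so"])

  vu.main()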

umarcor
@umarcor

@ktbarrett, I was about to post....

@ktbarrett, I've read some issues in cocotb recently (mostly through references by @cmarqu), and, as Lars said, we have briefly discussed it before. Overall I see that my work in VUnit/cosim and ghdl/ghdl-cosim is, once again, starting to partially converge with cocotb. For example, I'm now prototyping executing Python functions from VHDL (just let VHDL know the address of the function in memory, and use it, without any further argument-management annoyances). That's obviously equivalent to what cocotb allows by setting callbacks through VPI and using the API to get the arguments. However, I'm basing my work on VHPIDIRECT. This is, IMHO, a much simpler and easier-to-understand solution for users with a VHDL background. Unfortunately:

  • Features of VHPIDIRECT are limited compared to VPI/VHPI. Hence, for certain use cases, it is not a valid alternative.
  • VHPIDIRECT is not a standard interface (yet), so it might be a problem to support it in cocotb (cocotb/cocotb#1769). AFAIK, all the simulators/interfaces supported in cocotb have something in common: HDL sources do not need to be modified; everything is done in C and/or Python.
  • The helpers and classes in cocotb are much more advanced and easier to use than anything currently available in VUnit/cosim or ghdl/ghdl-cosim.

The way this is handled in plain VUnit is https://github.com/ghdl/ghdl-cosim/blob/master/vhpidirect/arrays/matrices/vunit_axis_vcs/run.py#L13. Of course, some additional VHDL files are added in line 9, which is where the VHPIDIRECT bindings are declared. However, there is no further need to pre-build shared libraries or to provide runtime arguments, as opposed to VPI. Nevertheless, if required (e.g. because the software is not written in a single file), it is straightforward to use check_call(['gcc', 'whatever']). For example: https://github.com/VUnit/cosim/blob/master/examples/buffer/run.py#L35-L44
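
In other words, a minimal sketch of that pattern (file names hypothetical):

  # Sketch of the VHPIDIRECT pattern: compile the C side up front with a
  # plain subprocess call, then hand the object to GHDL at elaboration time
  from subprocess import check_call
  from vunit import VUnit

  vu = VUnit.from_argv()
  lib = vu.add_library("lib")
  # VHDL sources, including the package declaring the VHPIDIRECT bindings
  lib.add_source_files("src/*.vhd")

  # Only needed when the software side is more than a single file
  check_call(["gcc", "-c", "-fPIC", "main.c", "-o", "main.o"])

  # Link the object in when GHDL elaborates the testbench
  vu.set_sim_option("ghdl.elab_flags", ["-Wl,main.o"])

  vu.main()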

Someone might ask what the purpose of VUnit/cosim is, given that examples such as the one above exist in ghdl-cosim. The answer is (re)usability and performance. On the one hand, Python (which is already a dependency of VUnit) makes it possible to package and distribute reusable co-simulation modules/resources more easily than the shell scripts or makefiles used in ghdl/ghdl-cosim. On the other hand, VUnit provides Verification Components written in VHDL, which rely on an internal integer_vector_ptr type. In VUnit/cosim, that external type is exposed to C through VHPIDIRECT, so that Verification Components can use a malloc'ed buffer as the source/sink of their queues. @bradleyharden and I have discussed it in several issues; the best entry point is #603 (note that there are multiple considerations about how similar/different cocotb is).

Moreover, while working on what was later split into VUnit/cosim, @kraigher raised legitimate concerns about adding features to VUnit's codebase for calling GCC or any other auxiliary tool. VUnit is, fundamentally, a test runner, not a project management or build tool (although it works nicely in that role). There is a general agreement on keeping it like that, and on upstreaming other features to projects such as tsfpga, edalize, fusesoc, VUnit/cosim, etc. In this sense, I think that cocotb fits as "the VPI/VHPI co-simulation extension of VUnit", a sibling of VUnit/cosim, "the VHPIDIRECT co-simulation extension".

The integration of VUnit/cosim in an existing run.py script is done by importing a class and using some of the methods it provides to automatically fill in arguments to VUnit's API calls. See the usage of COSIM in https://github.com/VUnit/cosim/blob/master/examples/buffer/run.py. I'm certainly not a good coder, and I'm sure that https://github.com/VUnit/cosim/tree/master/cosim can be improved a lot. Nonetheless, I hope the idea is clear enough.
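
As a rough illustration of the pattern (the class and method names here are hypothetical, not the actual VUnit/cosim API):

  # Illustrative only: a helper class computes the extra sources and
  # simulator flags, and the run.py merely forwards them to VUnit's API
  from vunit import VUnit
  from cosim import CoSim  # hypothetical import

  vu = VUnit.from_argv()
  cosim = CoSim()

  lib = vu.add_library("lib")
  lib.add_source_files("src/*.vhd")
  lib.add_source_files(cosim.vhdl_sources())  # bridge packages (hypothetical)

  vu.set_sim_option("ghdl.elab_flags", cosim.elab_flags())  # hypothetical

  vu.main()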

I would propose to integrate cocotb and VUnit following a similar approach. It would likely require extending the existing simulator APIs to allow providing some additional arguments (or maybe not). However, I'm not sure how Python coroutines are handled at runtime. In my prototypes with VHPIDIRECT, GHDL and the C sources are built into a single binary or shared library. If it's a binary (implying VHDL + C only), it gets executed as a regular VUnit test. If it's a shared library, the VUnit test only builds it; it is not executed. Then, a separate script is used to load the library in Python and do everything in "raw" Python (ctypes), including setting Python callbacks before calling ghdl_main. The advantage of using GHDL's VHPIDIRECT is that a single binary/shared library can be generated for runtime, but I'm afraid that this is not the case with other simulators. Instead, I'd suggest exploring test pre-hooks and post-hooks for cocotb to do its stuff before regular VUnit tests are executed. In fact, I'd like to explore this alternative as a cleanup of the VUnit/cosim examples.
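
As a rough sketch of that "raw" Python side (the shared-library name, the exported symbol and the callback signature are all hypothetical; the fixed point is that GHDL exposes ghdl_main when the design is built as a shared library):

  import ctypes

  ghdl = ctypes.CDLL("./tb.so")  # design built by GHDL as a shared library

  # Keep a reference to the callback so it is not garbage collected
  # while the simulation runs
  CALLBACK = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)

  def py_func(x):
      return x + 1

  cb = CALLBACK(py_func)

  # "Let VHDL know the address of the function": write the function pointer
  # to a variable exported by the library (symbol name hypothetical)
  ctypes.c_void_p.in_dll(ghdl, "py_func_addr").value = ctypes.cast(cb, ctypes.c_void_p).value

  # Run the simulation: ghdl_main(argc, argv)
  argv = (ctypes.c_char_p * 2)(b"tb", None)
  ghdl.ghdl_main(1, argv)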

There is still a very important point: do users need to use VUnit's style of declaring sources? Do they need to adapt all their existing cocotb scripts? More on this in a minute...

umarcor
@umarcor

Related to the integration with edalize, the point about how VUnit was added is that, AFAIU, edalize behaves as a "generator of run.py files" and as an "external trigger". If users create a "project", generate the run.py for VUnit, then add some features to it, and later need to add/remove some files from the design, a very good understanding of the ecosystem is required:

  • Regenerating the run.py file will update the set of sources, but will overwrite any modifications the user made.
  • Modifying the run.py manually might introduce inconsistencies compared to what edalize would have generated.
  • The "default" solution is to not modify run.py, but to define everything through the edalize interface. As a result:
    • Users might have issues sharing projects with VUnit users who don't know about edalize.
    • All VUnit features need to be accessible through edalize's interface.

Note that this is not criticism for its own sake. I want to fix/improve it, and I'd like others to discuss what the best solution would be. I have some ideas, but I could not prototype them because I don't know edalize well enough:

  • Require VUnit run.py files that are to be used by edalize to be loadable as modules. That is, to include if __name__ == '__main__' as in https://github.com/eine/vunit/blob/19c45d517243d8ef82d45d672f15218ff149c430/examples/vhdl/array_axis_vcs/run.py#L38-L39
  • Have edalize call add_library, add_source_files, etc. after loading the module but before calling main().
    • Alternatively, use a JSON file and let VUnit load the sources from it.
  • For this to work, main() needs to be wrapped in a try... except (see the sketch after this list).
  • Users can have something such as if not EDALIZE in their run.py files, in order to use alternative mechanisms to define the set of sources.
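
A minimal sketch of such a module-loadable run.py (library and source names hypothetical):

  from vunit import VUnit

  def create(argv=None):
      # Build the project object without running it, so an external tool
      # (edalize or anything else) can import this module, add or remove
      # sources, and call main() itself
      vu = VUnit.from_argv(argv)
      lib = vu.add_library("lib")
      lib.add_source_files("src/*.vhd")
      return vu

  if __name__ == "__main__":
      create().main()

Since VUnit's main() ends with sys.exit, an importing tool would wrap the call in a try... except SystemExit; that is the try... except mentioned above.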
So, overall, I'm describing two different approaches to integrate cocotb and VUnit:
  • The one used in VUnit/cosim, where the entrypoint is a regular run.py "on steroids" (provided by an optional external class).
  • The one proposed for edalize, where the entrypoint is external and a regular run.py with a few constraints is imported as a module.
umarcor
@umarcor
Note the importance of "a regular run.py". I'd like it to be possible for plain VUnit workflows, cocotb-extended workflows, edalize-extended workflows, etc. to all coexist in the same run.py. I believe it is important that different users, with different roles in the development, can share the same test-running infrastructure.
umarcor
@umarcor
@LarsAsplund, @ktbarrett, shall I copy the comments above to some RFC issue either in VUnit, VUnit/cosim or cocotb?
Kaleb Barrett
@ktbarrett
@umarcor I'm not entirely sure what you are getting at. cocotb uses VPI, VHPI, or FLI for communicating with simulators; I don't think we are planning on supporting VHPIDIRECT. And calling GCC is not necessary: as of cocotb v1.3, all C sources are built when the package is installed.
umarcor
@umarcor

cocotb uses VPI, VHPI, or FLI for communicating with simulators; I don't think we are planning on supporting VHPIDIRECT.

VPI/VHPI/FLI don't require any additional HDL sources to be added; instead, runtime arguments need to be provided. That's a relevant difference between cocotb and VUnit/cosim. There is no need for cocotb to support VHPIDIRECT, because that would directly overlap with VUnit/cosim, and also because of https://github.com/cocotb/cocotb/pull/1769#issuecomment-626045475

Apart from that, I believe that the integration of cocotb into VUnit should be shaped either like VUnit/cosim or as proposed for edalize. That's why I discussed how both integrations work.

And calling GCC is not necessary: as of cocotb v1.3, all C sources are built when the package is installed.

No, it's not required to build the libs each time. But cocotb needs to run some pre-test hooks, maybe some post-test hooks, and it needs to provide additional CLI arguments to the simulator interfaces through VUnit's API. If pre-test hooks and the existing API do not suffice, additional enhancements would need to be made in VUnit. Acceptance of such changes might be subject to the same "strategic criteria" that "calling GCC" was subject to.
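
For instance, a rough sketch of the pre-test hook idea using VUnit's existing pre_config mechanism (what the hook sets up for cocotb is purely illustrative, not an agreed design):

  import os
  from vunit import VUnit

  vu = VUnit.from_argv()
  lib = vu.add_library("lib")
  lib.add_source_files("src/*.vhd")

  def setup_cocotb(output_path):
      # Whatever cocotb needs before the simulator starts; the variable
      # names below are used illustratively
      os.environ["MODULE"] = "my_cocotb_tests"
      os.environ["TOPLEVEL"] = "tb_example"
      return True  # returning False marks the test as failed

  lib.test_bench("tb_example").set_pre_config(setup_cocotb)

  vu.main()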

umarcor
@umarcor
Moreover, the fact that the entry point is Python and that simulators execute callbacks written in Python is a characteristic common to cocotb and VUnit/cosim, although completely different C/C++ APIs are used under the hood.
Lars Asplund
@LarsAsplund

@bradleyharden

I assumed I could use check_equal with sfixed, but I guess that's not the case?

No, we've always added check_equal overloads on a need-to-have basis. Want to do a PR?

Bradley Harden
@bradleyharden
I can try to do that. Aren't the check functions generated programmatically? I thought I remembered seeing that somewhere.
Bradley Harden
@bradleyharden
I'm having trouble finding a way to implement this. I have some protocol checkers that monitor a bus. During a test, I want to reset the system, but doing so causes some of the protocol checks to fail. I want to temporarily disable those checks during reset, and then re-enable them afterwards. I can't seem to find a way to do that.
It looks like the closest thing would be to mock the loggers, but then I have to know exactly which errors appear and clear them
All other options appear to immediately fail the test once the loggers are re-enabled
I thought I was properly handling the check_enabled signal to disable it when reset occurs. But that doesn't seem to be working
Is that the right way to do this? Maybe I implemented that wrong?
Bradley Harden
@bradleyharden
Ah, it looks like the check enabled signal is more for being selective about which clock cycles to check. But once a window starts, there's no way to end it prematurely, right?
Bradley Harden
@bradleyharden
Ok, so I figured out what's going on. @LarsAsplund, I think you know the most about the check library, so maybe you can help. I am able to successfully disable the checkers during reset. The logging library function names aren't as intuitive as I would expect, but that's a separate issue. I realize now that I have a second problem. It looks like check_stable is opening an active window before the reset. Reset occurs, which changes the value of the stable signal, but it doesn't trigger the exit condition, so the check_stable FSM is still in the active window. I set the enable signal low during reset, so it doesn't cause a failure then. But when the checkers are re-enabled, it immediately throws an error. From what I can see in the code, there is no way to reset the check_stable FSM. Is that correct?
GlenNicholls
@GlenNicholls
@bradleyharden I had the same problem, which I asked about a few weeks ago, regarding AXI reset and AXI expecting a pop before a test finishes. I couldn't find anything that allowed me to easily disable these checks, and ultimately I just wrote my code around them. In all my tests, I popped AXI values to pass the rule (rule 9, I think) and had to manipulate my code for the reset so that it would pass. It was not ideal, but it worked.
As for the reset, I had to change my clock generator, since if the reset started high on the first cycle before the clock was active, I would fail the reset test. My workaround was to start the clock high for this case. The problem here is that my clock generator allows different phase/jitter/duty cycles, so eventually this will break if AXI is used in conjunction, but it satisfied the rule check.
Bradley Harden
@bradleyharden
I think it would be fairly simple to add a reset parameter to the check procedures to resolve this. I'll add it to the list of things I wish I had time to fix.
GlenNicholls
@GlenNicholls
With that said, I agree with you. In some cases, disabling all or selected checks should be much easier. In the test where I originally ran into these problems, the only times the test failed were when I was selectively testing different logic within the large design and didn't care about the protocol checks that were failing falsely. I'd open a PR for this, since more people are probably going to run into it as VUnit becomes more popular and VCs are more widely used.
Bradley Harden
@bradleyharden
I'm running custom checks, not one of the VCs, but the issue is still relevant
Honestly, even though I don't have any actual experience with it, it seems like formal verification is probably a better approach to checking protocols like this
Maybe @tmeissner can shed some light
T. Meissner
@tmeissner
Formal is very capable of checking protocol interfaces like AXI
kvantumnuly
@kvantumnuly
Hi all, I am using the default settings, but I do not see info or print messages in the console (they appear only when a test fails, etc.). I tried the --log-level parameter, but without success. What can cause this problem? Thanks!
eine
@eine
@kvantumnuly, did you try python run.py -v?
GlenNicholls
@GlenNicholls

I am using disable() to prevent an error from causing the simulation to fail, since I am trying to test against certain specifications. I then want to check the logs for the first error, so that I can see if the previous pass meets this spec. How do I check the logs to see if there was an error?

mock/unmock don't seem like the right solution to me, since, if there is an error I'm checking for, I don't want check_log to fail the simulation on passing cases.

GlenNicholls
@GlenNicholls
has_stop_count seems reasonable to check in every loop, but does it still increment the stop counter when a logger is mocked or disabled for the specified log level?
Lars Asplund
@LarsAsplund
@GlenNicholls I'm not sure I understand. Are you trying to suppress an error, or are you verifying that an error mechanism triggers when it's supposed to?
Lars Asplund
@LarsAsplund

@bradleyharden

From what I can see in the code, there is no way to reset the check_stable

No, there is no reset. Rather than adding a reset feature and activating that, would it be possible to activate the end event instead?

GlenNicholls
@GlenNicholls

@LarsAsplund I am trying to find the max rate that my system can support for many different clock rates. To do this, I will disable error and then look for the point when the first error is reported by a check. The latency is controlled by software, so I am ensuring that the max rate at each latency parameter meets the spec. I could do this manually, but that would be really time-consuming; with disable and some way of seeing whether the logger reported an error, it would be really easy.

Basically, all I'm looking for is a way to determine whether a specific log message was an error or not. When I disable error and then check the most recent message, I don't want the simulation to fail if it wasn't an error, hence why I'm not using mock/unmock.

Lars Asplund
@LarsAsplund

@GlenNicholls It seems like you're trying to do two things. One is to see that the design meets the spec, and the other is to see what the design can handle, that is, what margins you have.

Fundamental to VUnit is automated testing, and fundamental to automated testing is separating pass from fail. For that reason we want the pass/fail criteria to be very clean and simple. Basically, a failing check is an error, and a simulation stop before test_runner_cleanup is an error. Trying to work around this is sometimes possible, but we're intentionally making it a bit painful, to make sure that you know what you're doing and do not suppress errors by mistake.

In your case I would use a check to see that your design meets the requirements when the clock frequency is within range and then use info logs to report when you hit the design limit. Something like

  monitor : process
  begin
    wait until rising_edge(clk);
    -- Fail the test if the requirement is violated while the clock is in range
    check_implication(clk_freq_within_required_range, this_must_be_true);
    -- Outside the required range, just log where the design limit is hit
    if not clk_freq_within_required_range and not this_must_be_true then
      info("This is more than the design can handle");
    end if;
  end process;