The only problem is that, for example, I only have my own spare-time projects on GitHub.
Sadly, most projects in the FPGA/ASIC industry are closed or confidential.
What I need to measure, to make a comparison with the Wilson study, is the number of professionals who publish open source on GitHub and provide their testbenches. That's a very special group of people, but as long as it's a random, unbiased subset of the engineers working in the closed industry, it can be used to represent that industry.
`sfixed`, but I guess that's not the case?
I agree with your analysis that this should be very non-intrusive so that we don't burden our separate development efforts with a new dependency.
Yes, cocotb does not have to be shipped with VUnit; it can be used as long as it is installed in the user's Python environment. cocotb would appear to VUnit as nothing more than a VPI/VHPI/FLI cosim project.
@ktbarrett Just to be clear, what you're looking for is a command line experience where a user can list and run all or some tests and it doesn't matter if tests are implemented in VUnit or cocotb?
That is what we are looking for. We would like to take advantage of VUnit's capabilities of building and running tests with multiple simulators, and its useful regression features like test parallelization and isolation.
@ktbarrett, I was about to post....
@ktbarrett, I've read some issues in cocotb recently (mostly ones referenced by @cmarqu); as Lars said, we have briefly discussed it before, and overall I see that my work in VUnit/cosim and ghdl/ghdl-cosim is, once again, starting to partially converge with cocotb. For example, I'm now prototyping executing Python functions from VHDL (just let VHDL know the address of the function in memory, and use it, without any further argument-management annoyances). That's obviously equivalent to what cocotb allows by setting callbacks through VPI and using the API to get the arguments. However, I'm basing my work on VHPIDIRECT. This is, IMHO, a much simpler and easier-to-understand solution for users with a VHDL background. Unfortunately:
The way this is handled in plain VUnit is https://github.com/ghdl/ghdl-cosim/blob/master/vhpidirect/arrays/matrices/vunit_axis_vcs/run.py#L13. Of course, some additional VHDL files are added in line 9, which is where the VHPIDIRECT bindings are declared. However, there is no further need to pre-build shared libraries or to provide runtime arguments, as opposed to VPI. Nevertheless, if required (e.g. because the software is not written in a single file), it is straightforward to use `check_call(['gcc', 'whatever'])`. For example: https://github.com/VUnit/cosim/blob/master/examples/buffer/run.py#L35-L44
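For illustration, a pre-build step along those lines could be sketched as follows. This is a minimal, hypothetical helper; the file names and gcc flags are placeholders, not taken from the linked examples:

```python
from subprocess import check_call  # would actually run the build in a real run.py

def gcc_shared_lib_command(sources, out):
    """Build the gcc argument list for compiling C sources into a shared
    library that VHPIDIRECT bindings can later load (hypothetical helper)."""
    return ["gcc", "-shared", "-fPIC", "-o", out] + list(sources)

cmd = gcc_shared_lib_command(["caux.c"], "caux.so")
# In a real run.py this would execute before vu.main():
# check_call(cmd)
```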
Someone might ask what the purpose of VUnit/cosim is, given that examples such as the one above exist in ghdl-cosim. The answer is (re)usability and performance. On the one hand, Python (which is already a dependency of VUnit) makes it easier to package and distribute reusable co-simulation modules/resources than the shell scripts or makefiles used in ghdl/ghdl-cosim. On the other hand, VUnit provides Verification Components written in VHDL, which rely on an internal `integer_vector_ptr` type. In VUnit/cosim, that external type is exposed to C through VHPIDIRECT, so that Verification Components can use a malloc'ed buffer as the source/sink of their queues. @bradleyharden and I have discussed this in several issues; the best entry point is #603 (note that there are multiple considerations about how similar/different cocotb is).
Moreover, while working on what was later split off into VUnit/cosim, @kraigher raised his legitimate concerns about adding features to VUnit's codebase for calling GCC or any other auxiliary tool. VUnit is, fundamentally, a test runner, not a project management or project build tool (although it works nicely in that role). There is a general agreement on keeping it like that, and upstreaming other features to projects such as tsfpga, edalize, fusesoc, VUnit/cosim, etc. In this sense, I think that cocotb fits as "the VPI/VHPI co-simulation extension of VUnit", a sibling of VUnit/cosim, "the VHPIDIRECT co-simulation extension".
The integration of VUnit/cosim into an existing `run.py` script is done by importing a class and using some of the methods it provides to automatically fill in arguments to VUnit's API calls. See the usage of `COSIM` in https://github.com/VUnit/cosim/blob/master/examples/buffer/run.py. I'm certainly not a good coder, and I'm sure that https://github.com/VUnit/cosim/tree/master/cosim can be improved a lot. Nonetheless, I hope that the idea is clear enough.
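As a sketch of the idea only: the class and method names below are hypothetical stand-ins, not the actual VUnit/cosim API, but they illustrate the "import a class, let it fill VUnit's arguments" pattern:

```python
# Hypothetical sketch; CoSim is NOT the real VUnit/cosim class.
class CoSim:
    def __init__(self, c_sources):
        self.c_sources = list(c_sources)

    def vhdl_sources(self):
        # The real helper would return the bridging VHDL files
        # to hand to add_source_files.
        return ["cosim_pkg.vhd"]

    def elab_flags(self):
        # Extra flags forwarded to the simulator interface through VUnit's API,
        # here linking the C sources into the GHDL-elaborated binary.
        return ["-Wl," + src for src in self.c_sources]

cosim = CoSim(["main.c"])
# In a real run.py:
# vu.library("lib").add_source_files(cosim.vhdl_sources())
# vu.set_sim_option("ghdl.elab_flags", cosim.elab_flags())
```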
I would propose integrating cocotb and VUnit following a similar approach. It would likely require extending the existing simulator APIs to allow providing some additional arguments (or maybe not). However, I'm not sure about how Python coroutines are handled at runtime. In my prototypes with VHPIDIRECT, GHDL and the C sources are built into a single binary or shared library. If it's a binary (implying VHDL + C only), it gets executed as a regular VUnit test. If it's a shared library, the VUnit test only builds it; it is not executed. Then, a separate script is used to load the library in Python and do everything in "raw" Python (ctypes), including setting Python callbacks before calling `ghdl_main`. The advantage of using GHDL's VHPIDIRECT is that a single binary/shared library can be generated for runtime, but I'm afraid that this is not the case with other simulators. Instead, I'd suggest exploring the use of test pre-hooks and post-hooks for cocotb to "do its stuff" before regular VUnit tests are executed. In fact, I'd like to explore this alternative as a cleanup of the VUnit/cosim examples.
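The ctypes side of that flow can be sketched as follows. The library name and the `set_callback` symbol (with its signature) are made-up placeholders; only the `argc/argv` shape of `ghdl_main` follows GHDL's convention:

```python
import ctypes

def ghdl_argv(args):
    """Turn a list of Python strings into the C argv array that ghdl_main expects."""
    encoded = [a.encode() for a in args]
    return (ctypes.c_char_p * len(encoded))(*encoded)

argv = ghdl_argv(["tb_buffer", "--wave=dump.ghw"])

# Sketch only (needs the shared library produced by the VUnit build):
# lib = ctypes.CDLL("./tb_buffer.so")
# CB = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)
# cb = CB(lambda x: x + 1)       # keep a reference so it isn't garbage collected
# lib.set_callback(cb)           # hypothetical VHPIDIRECT-bound setter
# lib.ghdl_main(len(argv), argv) # hand control to GHDL's elaborated design
```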
There is still a very important point: do users need to use VUnit's style of declaring sources? Do they need to adapt all their existing cocotb scripts? More on this in a minute...
Related to the integration with edalize, the point with how VUnit was added is that, AFAIU, edalize behaves as a "generator of `run.py` files" and as an "external trigger". If users create a "project", generate the `run.py` for VUnit, then add some features to it, and later need to add/remove some files from the design, a very good understanding of the ecosystem is required:

- Regenerating the `run.py` file will update the set of sources, but will overwrite any modification that the user made.
- Manually editing the `run.py` might introduce inconsistencies compared to what edalize would have generated.
- The safest option is not to touch the generated `run.py`, but to define everything through the edalize interface. As a result:
Note that this is not bare criticism; I want to fix/improve it, and this is for others to discuss what the best solution would be. I have some ideas, but I could not prototype them because I don't know edalize well enough:
- Make the `run.py` files that are to be used by edalize loadable as modules. That is, include `if __name__ == '__main__'` as in https://github.com/eine/vunit/blob/19c45d517243d8ef82d45d672f15218ff149c430/examples/vhdl/array_axis_vcs/run.py#L38-L39
- edalize could then call `add_source_files`, etc. after loading the module but before calling `main()`.
- Calls to `main()` need to be wrapped in an `if not EDALIZE` in users' `run.py` files, in order to use alternative mechanisms to define the set of sources.
- Provide a `run.py` "on steroids" (through an optional external class).
- Import a regular `run.py`, with a few constraints, as a module.

Whatever the final shape of the extended `run.py`, I'd like it to be possible for plain VUnit workflows, the cocotb-extended workflow, the edalize-extended workflow, etc. to all coexist in the same `run.py`. I believe it is important that different users with different roles in the development can share the same test running infrastructure.
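A minimal sketch of such a module-friendly `run.py` could look like the following. The `EDALIZE` flag, its environment-variable mechanism, and the overall structure are illustrative assumptions, not an agreed interface:

```python
# run.py sketch: guard both the source definitions and main() so that an
# external tool (edalize, cocotb glue, ...) can import this file as a module,
# adjust the project, and trigger the run itself.
import os

EDALIZE = os.environ.get("EDALIZE", "") == "1"  # hypothetical opt-out flag

def create_project():
    """Set up libraries/sources; a real run.py would return the VUnit object."""
    # vu = VUnit.from_argv(); vu.add_library("lib").add_source_files("src/*.vhd")
    return {"libraries": {"lib": ["src/tb_example.vhd"]}}  # stand-in for VUnit

if not EDALIZE:
    project = create_project()  # the plain workflow defines its own sources

if __name__ == "__main__":
    pass  # a real run.py would call vu.main() here
```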
cocotb uses VPI, VHPI, or FLI for communicating with simulators; I don't think we are planning on supporting VHPIDIRECT.
VPI/VHPI/FLI don't require any additional HDL sources to be added; instead, runtime arguments need to be provided. That's a relevant difference between cocotb and VUnit/cosim. There is no need for cocotb to support VHPIDIRECT, because that would directly overlap with VUnit/cosim, and also because of https://github.com/cocotb/cocotb/pull/1769#issuecomment-626045475
Apart from that, I believe that the purpose of integrating cocotb into VUnit should be shaped either as VUnit/cosim or as proposed for edalize. That's why I discussed how both integrations work.
And calling GCC is not necessary, as of cocotb v1.3 all C sources are built when the package is installed.
No, it's not required to build libs each time. But cocotb needs to run some pre-test hooks, maybe some post-test hooks, and it needs to provide additional CLI arguments to the simulator interfaces through VUnit's API. If pre-test hooks and the existing API do not suffice, additional enhancements would need to be made in VUnit. Acceptance of such changes might be subject to the same "strategic criteria" that "calling GCC" was subject to.
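VUnit's existing per-config hooks could carry such steps; a hedged sketch follows. The `pre_config`/`post_check` parameters are VUnit's `add_config` API, but the hook bodies here are placeholders for whatever cocotb would actually need to set up:

```python
# Sketch of using VUnit's per-test hooks for cocotb-style setup/teardown.
def cocotb_pre(output_path):
    # Placeholder: e.g. export cocotb env vars, locate the VPI/VHPI plugin, ...
    return True  # returning False fails the test before simulation starts

def cocotb_post(output_path):
    # Placeholder: e.g. inspect cocotb's results file in output_path.
    return True

# In a real run.py:
# tb = vu.library("lib").test_bench("tb_example")
# tb.add_config("cocotb", pre_config=cocotb_pre, post_check=cocotb_post)
```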
I have a question about the `check` library, so maybe you can help. I am able to successfully disable the checkers during reset. The logging library function names aren't as intuitive as I would expect, but that's a separate issue. I realize now that I have a second problem. It looks like `check_stable` is opening an active window before the reset. Reset occurs, which changes the value of the stable signal, but it doesn't trigger the exit condition, so the `check_stable` FSM is still in the active window. I set the `enable` signal low during reset, so it doesn't cause a failure then. But when the checkers are re-enabled, it immediately throws an error. From what I can see in the code, there is no way to reset the `check_stable` FSM. Is that correct?
I am using `disable()` to prevent an error from causing the simulation to fail, since I am trying to test against certain specifications. I want to then check the logs for the first error so that I can see if the previous pass meets this spec. How do I check the logs to see if there was an error? `mock`/`unmock` don't seem like the right solution to me since, if there is an error that I'm checking for, I don't want `check_log` to fail the simulation on passing cases.
@LarsAsplund I am trying to find the max rate that my system can support for many different clock rates. To do this, I will disable `error`, and I want to see the point when the first error is reported by `check`. The latency is controlled by software, so I am ensuring the max rate at each latency parameter meets the spec. I could do this manually, but that would be really time-consuming; using `disable` and having some way of seeing if the logger reports an error would make this really easy. Basically, all I'm looking for is a way to determine if a specific log message was an error or not. When I disable `error` and then check the most recent message, I don't want the simulation to fail if it wasn't an error, hence why I'm not using `check_log`.
@GlenNicholls It seems like you're trying to do two things. One is to see that the design meets the spec, and the other is to see what the design can handle, that is, to see what margins you have.
Fundamental to VUnit is automated testing, and fundamental to automated testing is separating pass from fail. For that reason, we want the pass/fail criteria to be very clean and simple. Basically, a failing check is an error, and a simulation stop before `test_runner_cleanup` is an error. Trying to work around this is sometimes possible, but we're intentionally making it a bit painful to make sure that you know what you're doing and do not suppress errors by mistake.
In your case I would use a check to see that your design meets the requirements when the clock frequency is within range and then use info logs to report when you hit the design limit. Something like
```vhdl
monitor : process
begin
  wait until rising_edge(clk);
  check_implication(clk_freq_within_required_range, this_must_be_true);
  if not clk_freq_within_required_range and not this_must_be_true then
    info("This is more than the design can handle");
  end if;
end process;
```