These are chat archives for PerfDotNet/BenchmarkDotNet

Aug 2016
Kenneth Kasajian
Aug 26 2016 17:14
I've tried this tool and like it a lot. I looked at all the docs, but I'm trying to wrap my head around the best way to use it for my code-base. So I have several simple questions. Q1: There's an issue I'm looking for a solution to, and I'd like to know if this tool will address it. Specifically, if I have a method that was doing a lot of allocations, boxing, etc., and we optimized it so that it's now allocation-free, I would like to assert that somehow so that we don't regress. I've looked at dotMemory, which I think can do this by recording all GC allocation traffic via ETW. It takes a while to run for a simple method (like a minute or two, on a very fast computer). I think the number of processes you have open may also be an issue. It's similar to how PerfView will produce gigabyte-sized .etl files simply because it records EVERYTHING. I realize that BenchmarkDotNet is primarily about execution speed of code, but it seems to have some GC-pressure diagnostics as well. Any thoughts?
Following on from my Q1 above, one possible "workaround" I was considering: keep the original, non-optimized version of the code and measure that, then compare it to the optimized version in terms of speed and assert that it's 20% faster, or whatever tolerance we expect. But that seems kinda hacky.
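For context on Q1, BenchmarkDotNet can report allocated bytes per operation through its memory diagnoser, which gives a number you could watch for regressions even though the tool itself doesn't assert on it. A minimal sketch, assuming a version where the `MemoryDiagnoser` attribute is available (the benchmark methods here are purely illustrative):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// MemoryDiagnoser adds an "Allocated" column (bytes per operation)
// to the results table, so an allocation-free method shows 0 B.
[MemoryDiagnoser]
public class AllocationBenchmarks
{
    [Benchmark]
    public string Boxing() => 42.ToString(); // allocates a string each call

    [Benchmark]
    public int AllocationFree() => 42 + 1;   // no heap allocation expected
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<AllocationBenchmarks>();
}
```

Asserting "0 B allocated" in CI would still require reading the exported results (or a separate allocation-counting test); the diagnoser only measures and reports.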
Q2: When I benchmark something, is everything that I'm benchmarking intended to be in the same process? For instance, what if I have a client-server application and I'm benchmarking the client, waiting for it to call the server and come back? Some of the time the client is just waiting, and that's fine, because I want to benchmark the end-to-end time. However, that means I'm waiting for code outside of my process to complete. Will that work, or does BenchmarkDotNet require that everything being benchmarked run in-proc?
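On Q2, BenchmarkDotNet times the wall-clock duration of the benchmark method, so time spent blocked on an out-of-process call is included in the measurement. A minimal sketch of an end-to-end benchmark (the endpoint URL is hypothetical; substitute your own server):

```csharp
using System.Net.Http;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class EndToEndBenchmarks
{
    private readonly HttpClient client = new HttpClient();

    [Benchmark]
    public string RoundTrip() =>
        // The measured time covers the whole round trip, including the
        // interval where this process is just waiting on the server.
        client.GetStringAsync("http://localhost:5000/ping")
              .GetAwaiter().GetResult();
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<EndToEndBenchmarks>();
}
```

The caveat is that such numbers mix client, network, and server variance, so the statistics will be noisier than for in-proc micro-benchmarks.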