Although the code in that snippet is really cool, it makes me wince because I hate the thought of running such code in a unit test. Probably just me though.
Yeah, the only advantage of running them as part of a unit test is that it makes the CI tooling, pass/fail metrics, failing the build, etc. a bit easier. But it could just as easily be a separate .exe that is run via a script, like the BenchmarkDotNet.Samples exe
Want me to describe the scenario I was facing when I stumbled upon you guys?
So what scenario was it?
Yeah, I think I need to spend more time looking at what is/isn't extensible. Have you got any examples of extensibility outside the core repo?
At the moment, unless I've forgotten about something, there's no extensibility outside of the core repo. But it's a good question, maybe raise an issue and we can see what @AndreyAkinshin reckons
@moddingsupport, I find the same for libraries. The more extensible a package is, the less code required in the core repo and the more tinkering outsiders can do for their peculiar scenarios. :)
… ("Hey, your StdDev is too big, you should pay attention to it") and a toolchain system (custom Generator/Builder/Executor configurations). What do you think?
Guys, we now have a big amount of upcoming features, ideas, plans, and discussions. What do you think about a wiki page with a roadmap?
Yeah, we need to put this somewhere, even if it's only a GitHub discussion. For instance, I've got several ideas for new Diagnostics that can go on the list
… BenchmarkTask), the amount of warmup and target iterations (the last two parameters). E.g., targetIterationCount=5. I am trying to implement smart logic that will adjust these parameters based on your benchmark: if you really don't need super precision, benchmarking will finish quickly. Hopefully, the new logic will be published in December.
[BenchmarkTask(mode: BenchmarkMode.SingleRun, processCount: 1, warmupIterationCount: 1, targetIterationCount: 1)], i.e. "SingleRun"
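For anyone following along, here's a rough sketch of how those parameters might be used side by side. The attribute and parameter names (BenchmarkTask, processCount, warmupIterationCount, targetIterationCount) come from the messages above; the benchmark class, method, and the higher-precision counts are purely illustrative:

```csharp
// Sketch only: the "SingleRun" attribute is quoted from the chat above;
// the class/method and the second attribute's counts are made-up examples.

// Quick-and-dirty run: one process, one warmup, one target iteration.
[BenchmarkTask(mode: BenchmarkMode.SingleRun, processCount: 1,
               warmupIterationCount: 1, targetIterationCount: 1)]
// Hypothetical higher-precision run: more iterations, longer total time.
[BenchmarkTask(processCount: 3,
               warmupIterationCount: 5, targetIterationCount: 5)]
public class StringConcatBenchmark
{
    [Benchmark]
    public string Concat()
    {
        var s = "";
        for (int i = 0; i < 100; i++)
            s += i;
        return s;
    }
}
```

The trade-off is the one described above: fewer processes/iterations means faster feedback but a larger StdDev, more iterations means better precision at the cost of benchmark run time.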