Tommy Long
@smudge202
But in my mind, if I write CoreCLR-compatible tests, I want to be able to run them against all the targets I can. There are often fundamental differences between runtimes, let alone taking into account any #ifdefs.
I'd want the same for Benchmark, but seeing as tooling doesn't support it...
And given that most will run under dnx4* or net*, we can probably just try to resolve dnx451 compatibility?
Also, it's probably obvious, but WinDBG doesn't work on *nix, Raspberry Pi, yada, so there'll probably never be any work on making that stuff CoreCLR-compatible, because of the amount of maintenance/work it would introduce providing native assemblies across platforms.
Matt Warren
@mattwarren
re the Xunit-performance tool:
One advantage of BenchmarkDotNet is that we don't require a modified unit test runner to work. For instance, I just wrote an integration test that shows how you can use us in a unit test, so that the test will fail if one benchmark doesn't run quicker than another. See https://github.com/PerfDotNet/BenchmarkDotNet/blob/master/BenchmarkDotNet.IntegrationTests/PerformanceUnitTest.cs#L24-L31
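(For readers following along: a minimal sketch of the pattern that test demonstrates, assuming xUnit and a recent BenchmarkDotNet. The benchmark class and the `Summary` property names below are my own illustration rather than a copy of the linked file, since the API has shifted across versions.)

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Xunit;

public class SleepBenchmarks
{
    // Two deliberately unequal workloads so the test has something to assert.
    [Benchmark] public void Fast() => System.Threading.Thread.Sleep(10);
    [Benchmark] public void Slow() => System.Threading.Thread.Sleep(50);
}

public class PerformanceUnitTest
{
    [Fact]
    public void Fast_benchmark_must_beat_slow_benchmark()
    {
        var summary = BenchmarkRunner.Run<SleepBenchmarks>();

        // Pull the mean execution time of each benchmark out of the summary.
        double MeanOf(string name) => summary.Reports
            .Single(r => r.BenchmarkCase.Descriptor.WorkloadMethod.Name == name)
            .ResultStatistics.Mean;

        // Fail the unit test if the expected ordering does not hold.
        Assert.True(MeanOf("Fast") < MeanOf("Slow"));
    }
}
```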

I'd want the same for Benchmark, but seeing as tooling doesn't support it...

To be fair, the feature that dumps the assembly is probably only useful when you're investigating a benchmark, in which case you'll probably be running it in a console. I don't see it being that useful when you want to run benchmarks as part of (performance) unit tests.

Tommy Long
@smudge202
True. Trying to keep up with the xunit runner is a nightmare. A friend of mine has been trying to maintain a set of custom xunit attributes and associated runners, discoverers, and various other pieces of black magic. But there are so few examples online of xunit extensibility that are valid given the current API, which of course is constantly churning alongside DNX development.
Side note: Unit tests shouldn't take more than a few ms in my opinion. I'd never want benchmarking to run as part of my Ctrl+R, A ritual. :)
(Replace Ctrl+R, A with R# CI / NCrunch / etc)
Matt Warren
@mattwarren
Right back when @AndreyAkinshin and I first talked, we chatted about the 2 main scenarios for benchmarks:
1) when you are investigating something, so running it in a console, pasting results onto a GitHub issue, understanding why something is faster, etc
2) running perf tests as part of a CI build, to show you any regressions and ensure that one piece of code always runs faster than another piece of code, or faster than X milliseconds
I think we're currently very good for 1), but maybe need a bit of work for 2)
Tommy Long
@smudge202
Want me to describe the scenario I was facing when I stumbled upon you guys? :)
Matt Warren
@mattwarren

True. Trying to keep up with the xunit runner is a nightmare. A friend of mine has been trying to maintain a set of custom xunit attributes and associated runners, discoverers, and various other pieces of black magic. But there are so few examples online of xunit extensibility that are valid given the current API, which of course is constantly churning alongside DNX development.

Exactly, and you'd have to do that for every test runner that people wanted to use, compared to just writing code like this: https://github.com/PerfDotNet/BenchmarkDotNet/blob/master/BenchmarkDotNet.IntegrationTests/PerformanceUnitTest.cs#L24-L31

Tommy Long
@smudge202
Loosely fits with your second point I guess
Although the code in that snippet is really cool, it makes me wince because I hate the thought of running such code in a unit test. Probably just me though.
Matt Warren
@mattwarren

Side note: Unit tests shouldn't take more than a few ms in my opinion. I'd never want benchmarking to run as part of my Ctrl+R, A ritual. :)

Yeah, I agree. I see performance tests as similar to integration tests: probably only run as part of a nightly CI build, or when a developer explicitly wants to run them. I guess controlled via a [Trait] in xUnit.

Tommy Long
@smudge202
Yeah, I'd use Traits / Categories.
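(The [Trait] approach they're describing looks roughly like this in xUnit; the category name is arbitrary, and the console runner can then exclude it from the fast test run.)

```csharp
using Xunit;

public class PerformanceTests
{
    // Tagged so a CI script can include or exclude performance tests,
    // e.g.: xunit.console.exe Tests.dll -notrait "Category=Performance"
    [Fact]
    [Trait("Category", "Performance")]
    public void Module_level_performance_test()
    {
        // ... run BenchmarkDotNet here, as in the earlier sketch ...
    }
}
```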
Matt Warren
@mattwarren

Want me to describe the scenario I was facing when I stumbled upon you guys?

yes please

Tommy Long
@smudge202
I wish the MS attributes weren't sealed / were more extensible. :'(
Matt Warren
@mattwarren

Although the code in that snippet is really cool, it makes me wince because I hate the thought of running such code in a unit test. Probably just me though.

Yeah, the only advantage of running them as part of a unit test is that it makes the CI tooling, pass/fail metrics, failing the build, etc. a bit easier. But it could just as easily be a separate .exe that is run via a script, like the BenchmarkDotNet.Samples exe.

Tommy Long
@smudge202
To be able to pass/fail on metrics, is it assumed that the machine running the tests is identical to the one used for previous benchmarks?
I only ask because modern CI pipelines use an awful lot of virtualisation, which probably doesn't fare well for benchmarking?
Matt Warren
@mattwarren
If you want to compare across runs, then yes, the machines would need to have identical specs, so anything cloud-based is probably out :-(
From what I've heard about people who do this (maybe it was Roslyn, I can't remember), they have dedicated performance machines that run that part of the CI process on every build.

Want me to describe the scenario I was facing when I stumbled upon you guys?

So what scenario was it?

Tommy Long
@smudge202
In an ideal world (and this plays into the scenario of how/why we wanted to use benchmarking), I wanted the ability for our team to define benchmarks for various modules of our software. I'd want, through CI or otherwise, to push our NuGet packages (everything in the DNX world translates to an exe or NuGet package), along with a (no/low-code) bootstrapper, to a dedicated machine in the cloud (as the Roslyn team do - didn't know that before) in order to run the benchmarks in as close to an identical environment as possible. For these benchmark outputs, I'd ideally want to configure (no/low-code) a way to output to a storage mechanism (think Serilog sinks, so I could push to blob storage). Last on the list would be a no/low-code method of comparing said outputs.
All of that scenario is possible with DNX compatibility, though not so much the "no/low-code" parts, as I don't think much exists for output persistence / comparison?
Matt Warren
@mattwarren

All of that scenario is possible with DNX compatibility, though not so much the "no/low-code" parts, as I don't think much exists for output persistence / comparison?

Do you mean in BenchmarkDotNet?

Tommy Long
@smudge202
Yes
Please forgive me for all the stuff I've undoubtedly overlooked. :D
Matt Warren
@mattwarren
The main options at the moment are markdown and CSV export (see https://github.com/PerfDotNet/BenchmarkDotNet/tree/master/BenchmarkDotNet/Export), but it's relatively easy to extend that if needed.
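(A custom exporter is roughly this much code. The interface shape below, IExporter with ExportToLog/ExportToFiles, is taken from a later BenchmarkDotNet release, so treat the member names as assumptions; swapping the output target for a blob-storage client would give the "Serilog sinks" style persistence Tommy describes.)

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Exporters;
using BenchmarkDotNet.Loggers;
using BenchmarkDotNet.Reports;

public class OneLinePerBenchmarkExporter : IExporter
{
    public string Name => nameof(OneLinePerBenchmarkExporter);

    public void ExportToLog(Summary summary, ILogger logger)
    {
        // One line per benchmark: display name and mean time in nanoseconds.
        foreach (var report in summary.Reports)
            logger.WriteLine($"{report.BenchmarkCase.DisplayInfo}: {report.ResultStatistics?.Mean} ns");
    }

    public IEnumerable<string> ExportToFiles(Summary summary, ILogger consoleLogger)
    {
        // Write to disk (or push to remote storage) here and return any file paths produced.
        yield break;
    }
}
```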
Tommy Long
@smudge202
It's worth mentioning: I didn't want to benchmark snippets of code. I wanted a quick way to oversee modules/components at a high level, so that we could dig into the nitty-gritty if issues were highlighted.
Yeah, I think I need to spend more time looking at what is/is not extensible. Have you got any examples of extensibility outside the core repo?
Matt Warren
@mattwarren
Fair enough. BDN is definitely targeted at micro-benchmarks, but at the end of the day you can put whatever code you want inside a [Benchmark] method.
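(For instance, a module-level benchmark is just a [Benchmark] method that calls a whole component end to end. A sketch, with a trivial LINQ pipeline standing in for a real component:)

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class ModuleLevelBenchmarks
{
    // The computation below is a stand-in for exercising a whole
    // module/component end to end rather than a micro-snippet.
    [Benchmark]
    public int ProcessBatch() =>
        Enumerable.Range(0, 1000).Select(i => i * i).Sum();
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<ModuleLevelBenchmarks>();
}
```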
Tommy Long
@smudge202
i.e. how exactly can I push a new implementation of InterfaceX into the pipeline?
Matt Warren
@mattwarren

Yeah, I think I need to spend more time looking at what is/is not extensible. Have you got any examples of extensibility outside the core repo?

At the moment, unless I've forgotten about something, there's no extensibility outside of the core repo. But it's a good question; maybe raise an issue and we can see what @AndreyAkinshin reckons.

Tommy Long
@smudge202
Or perhaps there's a sample in the repo I've missed that shows extensibility?
In the same way that games are often more popular when they have good modding support, I find the same is true for libraries. The more extensible a package is, the less code required in the core repo and the more tinkering outsiders can do for their peculiar scenarios. :)
Matt Warren
@mattwarren
Yeah, it's definitely a good idea.
I have to log off now, glad you are liking BenchmarkDotNet!
Tommy Long
@smudge202
No prob. Thanks for the help.
Andrey Akinshin
@AndreyAkinshin
Extensibility is a really good idea. I want to implement a system of plugins such that our users can form a desired plugin set from the core, configure it, or write their own plugins. The first problem is the result export logic; next I want to implement a warning system (messages like "Hey, your StdDev is too big, you should pay attention to it") and a toolchain system (custom Generator/Builder/Executor configurations). What do you think?
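(To make that concrete, the user-facing shape of such a plugin set might look something like the sketch below. Every name in it is hypothetical, invented purely to illustrate the idea, and is not taken from the actual implementation.)

```csharp
using System.Collections.Generic;

// Placeholder for the results of one benchmark run.
public class BenchmarkReport { }

// Hypothetical plugin contracts for the export and warning systems.
public interface IExporterPlugin { void Export(BenchmarkReport report); }

public interface IWarningPlugin
{
    // e.g. "Hey, your StdDev is too big, you should pay attention to it"
    IEnumerable<string> GetWarnings(BenchmarkReport report);
}

// Users form a desired plugin set from the core defaults, or add their own.
public class PluginSet
{
    private readonly List<IExporterPlugin> exporters = new List<IExporterPlugin>();
    private readonly List<IWarningPlugin> warnings = new List<IWarningPlugin>();

    public PluginSet AddExporter(IExporterPlugin exporter) { exporters.Add(exporter); return this; }
    public PluginSet AddWarning(IWarningPlugin warning) { warnings.Add(warning); return this; }
}
```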
Tommy Long
@smudge202
Yes, yes, and more yes.
But how to go about it?
I toyed with creating an issue for extensibility, but I think I'll leave it to you fellas. :)
Andrey Akinshin
@AndreyAkinshin
Ok, I will implement some basic logic over the weekend.
Tommy Long
@smudge202
Let me know if I can do anything to help.
Andrey Akinshin
@AndreyAkinshin
Ok, thanks!
Andrey Akinshin
@AndreyAkinshin
Guys, we now have a large number of upcoming features, ideas, plans, and discussions. What do you think about a wiki page with a roadmap?
Andrey Akinshin
@AndreyAkinshin
@mattwarren, @smudge202, please, review a new plugin system: PerfDotNet/BenchmarkDotNet@7eb70a1
Tommy Long
@smudge202
Just scanning over that commit now, @AndreyAkinshin
Tommy Long
@smudge202
That commit doesn't paint the whole picture; I assume there were previous commits laying groundwork for a plugin mechanism.