Matt Warren
@mattwarren
But basically it's a .NET wrapper for the SOS extension for WinDBG, so it lets you analyse memory dumps and other low-level stuff like that
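To give a feel for it, typical usage is roughly this (just a sketch, the ClrMD API has moved around between versions so the exact method names might differ):

```csharp
using System;
using Microsoft.Diagnostics.Runtime;

class HeapDumpWalker
{
    static void Main(string[] args)
    {
        // Open a memory dump and attach ClrMD to the CLR inside it
        using (DataTarget target = DataTarget.LoadCrashDump(args[0]))
        {
            ClrRuntime runtime = target.ClrVersions[0].CreateRuntime();

            // Walk the managed heap, roughly what !dumpheap gives you in SOS/WinDBG
            ClrHeap heap = runtime.GetHeap();
            foreach (ulong address in heap.EnumerateObjectAddresses())
            {
                ClrType type = heap.GetObjectType(address);
                if (type != null)
                    Console.WriteLine("{0:X} {1} {2}", address, type.Name, type.GetSize(address));
            }
        }
    }
}
```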
Tommy Long
@smudge202
I need to follow the calls we make into ClrMD. I can't see any external references in the csproj? https://github.com/Microsoft/clrmd/blob/master/src/Microsoft.Diagnostics.Runtime/Microsoft.Diagnostics.Runtime.csproj
Does it P/Invoke into something win32?
(If you don't know, don't worry, I'm looking and asking questions at the same time)
So I guess porting will be quite a problem
I can only assume that the xunit.performance variant doesn't dig so deeply into the diagnostic output?
Matt Warren
@mattwarren
At the moment I'm including a custom version of the code they use to host at https://github.com/Microsoft/dotnetsamples/tree/master/Microsoft.Diagnostics.Runtime/CLRMD/
They've since re-factored everything and I haven't updated it yet. But all the code is in https://github.com/PerfDotNet/BenchmarkDotNet/blob/master/BenchmarkDotNet/Toolchain/BenchmarkCodeExtractor.cs

I can only assume that the xunit.performance variant doesn't dig so deeply into the diagnostic output?

I don't think they have anything similar at the moment, I've been tracking that project for a while and they are mostly focussed on making perf tests that run as part of a C.I build and let you know if there have been any perf regressions

I also tried to tell them about BenchmarkDotNet and suggest that they use that instead!! But I've not heard a response yet ;-) See https://github.com/dotnet/roslyn/issues/670#issuecomment-158365614

Tommy Long
@smudge202
Yeh. Huge portions of this may never become available on CoreCLR
That's not the end of the world thinking about it...
hmm, not ideal either...
Matt Warren
@mattwarren

Yeah I wondered if that would be the case, although it'll be a shame as it's a handy library for debugging memory dumps. Although you can always use SOS in WinDBG, it's just a bit more manual doing it that way

At least by loading it dynamically we can just refuse to load the "plug-in" dll if we're running on CoreCLR
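i.e. something along these lines (only a sketch, and the assembly/type names are made up for illustration):

```csharp
using System;
using System.Reflection;

static class DiagnosticsPluginLoader
{
    // Rough CoreCLR check: the core library there is System.Private.CoreLib,
    // whereas the full framework uses mscorlib
    static bool IsCoreClr =>
        typeof(object).GetTypeInfo().Assembly.GetName().Name == "System.Private.CoreLib";

    public static object TryLoadCodeExtractor()
    {
        if (IsCoreClr)
            return null; // just skip the ClrMD-based diagnostics on CoreCLR

        try
        {
            // Hypothetical plug-in assembly/type that wraps the ClrMD code
            var assembly = Assembly.Load("BenchmarkDotNet.Diagnostics");
            var type = assembly.GetType("BenchmarkDotNet.Diagnostics.BenchmarkCodeExtractor");
            return type != null ? Activator.CreateInstance(type) : null;
        }
        catch (Exception)
        {
            return null; // plug-in dll not present, carry on without it
        }
    }
}
```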

Tommy Long
@smudge202
I'm thinking most people will have, as an example, an aspnet5 website that targets dnx451 and dnxcore50. Their test projects will commonly only target dnx451 or perhaps even a full net4*, which wouldn't be too much work to get working (just resolve the DNX issue).
However, I've been harassing MS and Brad Wilson to update the VS tooling to allow not only cross compilation/analysis, but also modifications to Test Explorer and runtime to allow cross-testing.
Brad can't do anything until the VS team sort their stuff out
But in my mind, if I write my tests to be CoreCLR compatible, I want to be able to run them against all the targets I can. There are often fundamental differences between runtimes, let alone taking into account any #ifdefs
I'd want the same for Benchmark, but seeing as tooling doesn't support it...
And given that most will run under dnx4* or net*, we can probably just try to resolve the dnx451 compatibility?
Also, it's probably obvious, but WinDBG doesn't work on *nix, Raspberry Pi, etc., so they'll probably never work on making that stuff CoreCLR compatible, because of the amount of maintenance/work it would introduce providing native assemblies across platforms.
Matt Warren
@mattwarren
re the Xunit-performance tool:
One advantage of BenchmarkDotNet is that we don't require a modified unit test runner to work. For instance, I just wrote an integration test that shows how you can use us in a unit test, so that the test will fail if one benchmark doesn't run quicker than another. See https://github.com/PerfDotNet/BenchmarkDotNet/blob/master/BenchmarkDotNet.IntegrationTests/PerformanceUnitTest.cs#L24-L31
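The shape of it is roughly this (a sketch only; the report/statistics property names here are approximate and may not match the real API exactly):

```csharp
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Xunit;

public class PerformanceUnitTest
{
    [Fact]
    public void FastBenchmarkBeatsSlowBenchmark()
    {
        var summary = BenchmarkRunner.Run<TwoBenchmarks>();

        // Property names are approximate - the idea is just to pull out the
        // two reports and compare their timings
        var fast = summary.Reports.Single(r => r.Benchmark.Target.Method.Name == "Fast");
        var slow = summary.Reports.Single(r => r.Benchmark.Target.Method.Name == "Slow");

        Assert.True(fast.ResultStatistics.Median < slow.ResultStatistics.Median,
                    "Expected Fast to have a lower median time than Slow");
    }
}

public class TwoBenchmarks
{
    [Benchmark]
    public int Fast()
    {
        int sum = 0;
        for (int i = 0; i < 100; i++) sum += i;
        return sum;
    }

    [Benchmark]
    public int Slow()
    {
        int sum = 0;
        for (int i = 0; i < 100000; i++) sum += i;
        return sum;
    }
}
```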

I'd want the same for Benchmark, but seeing as tooling doesn't support it...

To be fair, the feature that dumps the assembly is probably only useful when you are investigating a benchmark, in which case you'll probably be running it in a console. I don't see it being that useful when you want to run Benchmarks as part of (performance) unit tests

Tommy Long
@smudge202
True. Trying to keep up with the xunit runner is a nightmare. A friend of mine has been trying to maintain a set of custom xunit attributes and associated runners, discoverers, and various other pieces of black magic. But there are so few examples online of xunit extensibility that are valid given the current API, which of course is constantly rotating alongside DNX development.
Side note: Unit tests shouldn't take more than a few ms in my opinion. I'd never want benchmarking to run as part of my Ctrl+R, A ritual. :)
(Replace Ctrl+R, A with R# CI / NCrunch / etc)
Matt Warren
@mattwarren
Right back when @AndreyAkinshin and I first talked, we chatted about the 2 main scenarios for benchmarks:
1) when you are investigating something, so running it in a console, pasting results onto a GitHub issue, understanding why something is faster, etc
2) running perf tests as part of a C.I build, to show you any regressions, ensure that 1 piece of code always runs faster than another piece of code or faster than X milliseconds
I think we're currently very good for 1), but maybe need a bit of work for 2)
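For 1) the usual shape is just a console app, e.g. (a rough sketch, attribute namespaces might differ by version):

```csharp
using System.Text;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class StringConcatVsStringBuilder
{
    [Benchmark]
    public string Concat()
    {
        var result = string.Empty;
        for (int i = 0; i < 100; i++) result += i;
        return result;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new StringBuilder();
        for (int i = 0; i < 100; i++) sb.Append(i);
        return sb.ToString();
    }
}

class Program
{
    // Run from a console, then paste the results table it prints into a GitHub issue
    static void Main() => BenchmarkRunner.Run<StringConcatVsStringBuilder>();
}
```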
Tommy Long
@smudge202
Want me to describe the scenario I was facing when I stumbled upon you guys? :)
Matt Warren
@mattwarren

True. Trying to keep up with the xunit runner is a nightmare. A friend of mine has been trying to maintain a set of custom xunit attributes and associated runners, discoverers, and various other pieces of black magic. But there are so few examples online of xunit extensibility that are valid given the current API, which of course is constantly rotating alongside DNX development.

Exactly, and you'd have to do that for every test runner that people wanted to use, compared to just writing code like this: https://github.com/PerfDotNet/BenchmarkDotNet/blob/master/BenchmarkDotNet.IntegrationTests/PerformanceUnitTest.cs#L24-L31

Tommy Long
@smudge202
Loosely fits with your second point I guess
Although the code in that snippet is really cool, it makes me wince because I hate the thought of running such code in a unit test. Probably just me though.
Matt Warren
@mattwarren

Side note: Unit tests shouldn't take more than a few ms in my opinion. I'd never want benchmarking to run as part of my Ctrl+R, A ritual. :)

Yeah I agree, I see performance tests as similar to integration tests, probably only run as part of a nightly C.I build, or when a developer explicitly wants to run them. I guess controlled via a [Trait] in Xunit
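e.g. tag them with a trait and only pull them in when you want (a sketch; the trait name is just a convention):

```csharp
using Xunit;

public class NightlyPerfTests
{
    [Fact]
    [Trait("Category", "Performance")]
    public void OrderPipeline_PerformanceRegressionCheck()
    {
        // run the BenchmarkDotNet-based perf check here; this only gets picked up
        // when the "Performance" trait is explicitly included by the runner
    }
}
```

Then the xunit console runner can include just those with `-trait "Category=Performance"`, or keep them out of the normal run with `-notrait "Category=Performance"`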

Tommy Long
@smudge202
Yeh, I'd use Traits / Categories.
Matt Warren
@mattwarren

Want me to describe the scenario I was facing when I stumbled upon you guys?

yes please

Tommy Long
@smudge202
I wish the MS attributes weren't sealed / more extensible. :'(
Matt Warren
@mattwarren

Although the code in that snippet is really cool, it makes me wince because I hate the thought of running such code in a unit test. Probably just me though.

Yeah, the only advantage of running them as part of a unit test is that it makes the C.I tooling, pass/fail metrics, failing the build, etc. a bit easier. But it could just as easily be a separate .exe that is run via a script, like the BenchmarkDotNet.Samples exe

Tommy Long
@smudge202
To be able to pass/fail on metrics, is it assumed that the machine running the tests is identical to the one used for previous benchmarks?
I only ask because modern CI pipelines utilise an awful lot of virtualisation, which probably doesn't fare well for benchmarking?
Matt Warren
@mattwarren
If you want to compare over runs, then yes the machines would need to have identical specs, so anything cloud based is probably out :-(
From what I've heard about people who do this (maybe it was Roslyn, I can't remember), they have dedicated performance machines that run that part of the C.I process on every build

Want me to describe the scenario I was facing when I stumbled upon you guys?

So what scenario was it?

Tommy Long
@smudge202
In an ideal world (and this plays into the scenario of how/why we wanted to use benchmarking), I wanted the ability for our team to define benchmarks for various modules of our software. I'd want, through CI or otherwise, to push our NuGet packages (everything in the DNX world translates to an exe or NuGet package) along with a (no/low code) bootstrapper to a dedicated machine in the cloud (as the Roslyn team do - didn't know that before) in order to run the benchmarks in as close to an identical environment as possible. For these benchmark outputs, I'd ideally want a (no/low code) way to configure output to a storage mechanism (think Serilog sinks, so I could push to blob storage). Last on the list would be a no/low code method of comparing said outputs.
All of that scenario is possible with DNX compatibility, though not so much the "no/low code" parts as I don't think much exists for output persistence / comparison?
Matt Warren
@mattwarren

All of that scenario is possible with DNX compatibility, though not so much the "no/low code" parts as I don't think much exists for output persistence / comparison?

Do you mean in BenchmarkDotNet?

Tommy Long
@smudge202
Yes
Please forgive me for all the stuff I've undoubtedly overlooked. :D
Matt Warren
@mattwarren
The main options at the moment are markdown and CSV export (see https://github.com/PerfDotNet/BenchmarkDotNet/tree/master/BenchmarkDotNet/Export), but it's relatively easy to extend that if needed
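e.g. a custom exporter is basically one class (a sketch only; I'm assuming an exporter interface along the lines of the current one, so the type and property names here may not match the Export folder exactly):

```csharp
using System.Collections.Generic;
using System.IO;
using BenchmarkDotNet.Exporters;
using BenchmarkDotNet.Loggers;
using BenchmarkDotNet.Reports;

// Sketch of a "push the results somewhere else" exporter; the IExporter shape
// and report property names are assumed and may not match the actual code
public class SemicolonSeparatedExporter : IExporter
{
    public string Name => "semicolon-csv";

    public void ExportToLog(Summary summary, ILogger logger)
    {
        foreach (var report in summary.Reports)
            logger.WriteLine(report.Benchmark.DisplayInfo + ";" + report.ResultStatistics.Median);
    }

    public IEnumerable<string> ExportToFiles(Summary summary, ILogger consoleLogger)
    {
        // One file per run - from here you could push it to blob storage, a database, etc.
        var path = Path.Combine(summary.ResultsDirectoryPath, "benchmarks.csv");
        using (var writer = new StreamWriter(path))
        {
            foreach (var report in summary.Reports)
                writer.WriteLine(report.Benchmark.DisplayInfo + ";" + report.ResultStatistics.Median);
        }
        yield return path;
    }
}
```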
Tommy Long
@smudge202
It's worth mentioning: I didn't want to benchmark snippets of code. I wanted a quick way to oversee modules/components at a high level, so that we could dig into the nitty-gritty if issues were highlighted.
Yeh, I think I need to spend more time looking at what is/is not extensible. Have you got any examples of extensibility outside the core repo?
Matt Warren
@mattwarren
Fair enough, BDN is definitely targeted at micro-benchmarks, but at the end of the day you can put whatever code you want inside a [Benchmark] method.
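e.g. nothing stops you pointing a [Benchmark] at a whole component rather than a snippet (types here are made-up stand-ins):

```csharp
using System.Threading;
using BenchmarkDotNet.Attributes;

// Hypothetical stand-in for a real module/component under test
public class OrderProcessor
{
    public void Process(int orderId) => Thread.Sleep(1); // pretend to do real work
}

public class OrderPipelineBenchmarks
{
    private readonly OrderProcessor processor = new OrderProcessor();

    // The benchmark body exercises the whole component, not a micro snippet
    [Benchmark]
    public void ProcessTypicalOrder() => processor.Process(42);
}
```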
Tommy Long
@smudge202
i.e. how exactly can I push a new implementation of InterfaceX into the pipeline?
Matt Warren
@mattwarren

Yeh, I think I need to spend more time looking at what is/is not extensible. Have you got any examples of extensibility outside the core repo?

At the moment, unless I've forgotten about something, there's no extensibility outside of the core repo. But it's a good question, maybe raise an issue and we can see what @AndreyAkinshin reckons