These are chat archives for PerfDotNet/BenchmarkDotNet

28th Jan 2016
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 01:30
@mattwarren, I want to update the F# test. Can you share a batch file that creates FSharpBenchmarkDotNet.exe from PretendFSharpTest.fs?
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 03:00
@mattwarren, I implemented the new Params in the refactoring branch: https://github.com/PerfDotNet/BenchmarkDotNet/tree/1665d8afafd219fcc8998a7c6e04cd7ff75ee15f#params Now we can add multiple Params attributes (you can use any type for them, not only int) and we will have nice params columns in the summary table. Also I added IterationMode.Diagnostic for you.
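As an illustration only (not code from the repo), a benchmark class using multiple Params attributes of different types might look roughly like the sketch below; the member names and values are made up, and it assumes the attribute is applied to public fields the way it is in later releases:

using System.Threading;
using BenchmarkDotNet.Attributes;

public class ParamsSketch
{
    [Params(10, 100)]        // int values
    public int DelayMs;

    [Params("Fast", "Slow")] // non-int values are allowed too
    public string Mode;

    [Benchmark]
    public void Run() => Thread.Sleep(Mode == "Fast" ? DelayMs : DelayMs * 2);
}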
@AndreyAkinshin I just uploaded the F# project that you can use to build FSharpBenchmarkDotNet.exe
I implemented the new Params in the refactoring branch: .... Now we can add multiple Params attributes (you can use any type for them, not only int)
Cool, thanks for doing that
Matt Warren
@mattwarren
Jan 28 2016 11:15
Been thinking about FSharpBenchmarkDotNet.exe and PretendFSharpTest.fs: we probably need to build it each time, otherwise we're going to get issues whenever we bump the version number. Currently FSharpBenchmarkDotNet.exe is built against 0.8.2, so it'll stop working in the next release.
I'll see what our options are for including/building F# code in the IntegrationTest project.
Matt Warren
@mattwarren
Jan 28 2016 16:16
I sorted out the F# stuff in PerfDotNet/BenchmarkDotNet@1403126, so it's all working properly now
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:17
Cool!
And I've almost finished my work on the speed/accuracy balance and overhead calculation.
BTW, check out the new "dry" mode: it is very useful for testing.
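For reference, a minimal sketch of running benchmarks with a dry configuration using the config API as it looked in later releases; ManualConfig, Job.Dry, and the benchmark class name here are illustrative and may not match the refactoring branch exactly:

using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

class Program
{
    static void Main()
    {
        // Dry mode: a single launch and a single iteration, useful for checking
        // that a benchmark compiles and runs, not for real measurements.
        var config = ManualConfig.Create(DefaultConfig.Instance).With(Job.Dry);
        BenchmarkRunner.Run<ParamsSketch>(config);
    }
}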
Matt Warren
@mattwarren
Jan 28 2016 16:29
yeah I saw that, it is v. useful. Plus I really like all the new config stuff, it's very well done!
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:30
Thanks.
Matt Warren
@mattwarren
Jan 28 2016 16:30
one quick thing, I just saw this in a benchmark:
IdleTarget 1: 32 op, 1140.34 ns, 35.6357 ns/op
IdleTarget 2: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 3: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 4: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 5: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 6: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 7: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 8: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 9: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 10: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 11: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 12: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 13: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 14: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 15: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 16: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 17: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 18: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 19: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 20: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 21: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 22: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 23: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 24: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 25: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 26: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 27: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 28: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 29: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 30: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 31: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 32: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 33: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 34: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 35: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 36: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 37: 32 op, 380.11 ns, 11.8786 ns/op
IdleTarget 38: 32 op, 380.11 ns, 11.8786 ns/op
Why does IdleTarget run so many times with identical ops and ns? Could it finish earlier?
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:31
Look at the first line: it takes 1140ns.
Now we have the following logic: the standard error for the idle target iterations should be less than 5% of the Mean value. (With a Mean of about 380 ns, that means iterating until the standard error drops below roughly 19 ns.)
Matt Warren
@mattwarren
Jan 28 2016 16:32
yeah, I see that, but the next 37 times are quicker? (BTW the benchmark is Benchmark: IntroBasic_Sleep if that makes a difference)
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:33
However, don't worry. In my local repo, I implemented logic that removes outliers. I will publish it soon.
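For readers following along, one common outlier-removal technique is Tukey's fences at 1.5 * IQR; the sketch below is only an illustration of that idea and may not match the exact logic that landed in the repo:

using System.Linq;

static class OutlierFilter
{
    public static double[] RemoveOutliers(double[] measurements)
    {
        var sorted = measurements.OrderBy(m => m).ToArray();
        double q1 = sorted[sorted.Length / 4];             // crude lower-quartile estimate
        double q3 = sorted[(3 * sorted.Length) / 4];       // crude upper-quartile estimate
        double iqr = q3 - q1;
        double min = q1 - 1.5 * iqr, max = q3 + 1.5 * iqr; // Tukey's fences
        return sorted.Where(m => m >= min && m <= max).ToArray();
    }
}

Applied to the log above, a filter like this would drop the slow 1140.34 ns first iteration and keep the stable 380.11 ns measurements.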
Matt Warren
@mattwarren
Jan 28 2016 16:33
cool, it's not a biggie. BTW it feels like benchmarks are running quicker now, is that right? (if so I think it's a good thing!)
No need for them to run longer than necessary (to be statistically accurate)
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:34
Yep.
Only very fast benchmarks (around ~10ns) take a lot of time now, because it is impossible to get accurate measurements for them quickly.
Matt Warren
@mattwarren
Jan 28 2016 16:36
yeah that makes sense
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:36
If you try to measure something that takes at least 100ns, it will be quick.
Matt Warren
@mattwarren
Jan 28 2016 16:38
One request: could you put a few comments in the code (mostly in MethodInvoker.cs) that explain the logic here, like "the standard error for the idle target iterations should be less than 5% of the Mean value" from above.
I can sort-of follow it by looking at the code itself, but a few explanations would help
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:39
Now I want to finish my refactoring and publish a new version. After that I will work on the source code: make it more readable, add comments, and so on.
Matt Warren
@mattwarren
Jan 28 2016 16:40
It's probably the deepest/most complex part of the code, well for me anyway :-)

After that I will work on the source code: make it more readable, add comments, and so on.

Cool

Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 16:42
However, I think the source code is readable for now. E.g., I have a const TargetIdleAutoMaxAcceptableError = 0.05 and the following lines:
// Idle and main iterations have different acceptable relative errors.
var maxAcceptableError = iterationMode.IsOneOf(IterationMode.IdleWarmup, IterationMode.IdleTarget)
    ? TargetIdleAutoMaxAcceptableError
    : TargetMainAutoMaxAcceptableError;
// ...
// Stop once we have enough iterations and the standard error is small
// enough relative to the mean (e.g. below 5% for idle iterations).
if (withoutOutliers.Length >= TargetAutoMinIterationCount &&
    statistics.StandardError < maxAcceptableError * statistics.Mean)
    break;
Matt Warren
@mattwarren
Jan 28 2016 16:43
yeah that's true, maybe I just need to spend a bit more time reading it
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 17:21
Ok, "Big API refactoring, Part 6" is on GitHub.
Andrey Akinshin
@AndreyAkinshin
Jan 28 2016 20:02
@mattwarren, I have the following ideas:
  1. Rename: ProcessCount -> LaunchCount
  2. Transform OperationsPerInvoke from an independent attribute into a parameter of the Benchmark attribute (a sketch of what that could look like follows below).
What do you think?
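To make idea 2 concrete, user code might look roughly like this if OperationsPerInvoke became a named parameter of the Benchmark attribute; this is purely an illustration of the proposal, not an existing API at the time of this chat:

using BenchmarkDotNet.Attributes;

public class UnrolledSketch
{
    // Hypothetical: the per-invoke operation count is declared on the
    // Benchmark attribute itself instead of via a separate attribute.
    [Benchmark(OperationsPerInvoke = 16)]
    public int SumUnrolled()
    {
        int sum = 0;
        for (int i = 0; i < 16; i++)
            sum += i;
        return sum;
    }
}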