Adam Sitnik
@adamsitnik
You could also add a custom build configuration
TeBeCo
@tebeco
i thought about that too, it's not a simple breaking change in my own lib, i'll try to see how it goes but it might complicate stuff just for the cost of a bench
while previous versions of this lib were already shipped, the exposed "contract" is still the same, so the .WithNuGet might do the trick if i consume a "local" RestoreSource
i'll just make sure PackOnBuild is enabled to consume the "nuget feed" versus the local feed
the breaking change is for MessagePack in my scenario so it's not a simple one ;)
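A minimal sketch of that combination, assuming the current Job fluent API (the package name, the version, and the MyBenchmarks class are placeholders):

using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;
using BenchmarkDotNet.Running;

// Placeholder benchmark class; WithNuGet pulls the named package from the configured restore sources,
// WithCustomBuildConfiguration switches to a custom MSBuild configuration as suggested above.
var config = DefaultConfig.Instance
    .AddJob(Job.Default
        .WithNuGet("MessagePack", "2.1.80")
        .WithCustomBuildConfiguration("BenchmarkConfig"));

BenchmarkRunner.Run<MyBenchmarks>(config);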
Collin J. Sutton
@insaneinside
Hey folks, new user here. Trying to benchmark something that (in setup) requires creation of a resource, whose location isn't known before it's created but needs to be communicated to the benchmark methods somehow. Is this (a) not possible, (b) a bad idea/wrong way to do things, (c) an indication that I'm using the wrong benchmarking library, or (d) all of the above?
Adam Sitnik
@adamsitnik
hi @insaneinside you are fine ;) what kind of resource are we talking about?
Collin J. Sutton
@insaneinside
@adamsitnik, I'm specifically comparing how a filesystem search heuristic scales depending on certain settings. My plan was to create a set of test directories somewhere in %TEMP% and compare performance between settings using [Params("scaledirname1", "scaledirname2", "scaledirname3")] etc., but I still need to communicate the root directory for each scale directory. (Yes, I'm aware that all sorts of things will affect benchmarks based on disk I/O, and am not worrying about it yet.)
Adam Sitnik
@adamsitnik
@insaneinside in that case, most probably the simplest solution would be to create the corresponding directories in the [GlobalSetup] method based on the value of [Params]
an alternative is to use env vars to pass something to the process
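A minimal sketch of that first approach, with hypothetical directory names and a placeholder Search benchmark:

using System.IO;
using System.Linq;
using BenchmarkDotNet.Attributes;

public class ScalingBenchmarks
{
    // Hypothetical scale directories, as in the [Params] idea above.
    [Params("scaledirname1", "scaledirname2", "scaledirname3")]
    public string ScaleDir;

    private string root;

    [GlobalSetup]
    public void Setup()
    {
        // The root is derived from the current [Params] value, so the benchmark
        // methods can simply read it from this field.
        root = Path.Combine(Path.GetTempPath(), "bdn-search", ScaleDir);
        Directory.CreateDirectory(root);
        // ... populate 'root' with the test files for this scale ...
    }

    [Benchmark]
    public int Search() => Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories).Count();
}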
Albert Hives
@ahives
Hi, is it possible to run multiple benchmark classes in the same Program.cs?
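One common pattern (a sketch; the benchmark class names are placeholders) is to hand the whole assembly to BenchmarkSwitcher, which lets you pick a class at run time, or to call BenchmarkRunner.Run once per class:

using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // Option 1: let the switcher discover every benchmark class in the assembly.
        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);

        // Option 2: run specific classes one after another.
        // BenchmarkRunner.Run<FirstBenchmarks>();
        // BenchmarkRunner.Run<SecondBenchmarks>();
    }
}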
Tomasz Kajetan Stańczak
@tkstanczak
hi, is it normal that after selecting a benchmark in the console from BenchmarkSwitcher all the other jobs are executed later?
also, is it possible to disable warnings?
like this -> The minimum observed iteration time is 1.9000 us which is very small. It's recommended to increase it.
Tomasz Kajetan Stańczak
@tkstanczak
the case here is that I know (more or less) what I am doing - I am trying to benchmark memory only and I do not care about the times
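For a memory-only run, something along these lines may be enough: a Dry job keeps the time spent low (at the cost of meaningless timings), while MemoryDiagnoser still reports allocations (class and benchmark are placeholders):

using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]  // adds the Gen0/Gen1/Gen2 and Allocated columns
[DryJob]           // single launch, single iteration: timings are rough, but that is fine here
public class AllocationBenchmarks
{
    [Benchmark]
    public byte[] Allocate() => new byte[4096];
}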
Tomasz Kajetan Stańczak
@tkstanczak
I know I can contribute :) but in case I can also make a comment that maybe you can place the table after the warnings and after the legend so there would be no need to scroll up after the benchmark run
the legend is only needed for the users at the beginning, warnings too, and then you often would run a benchmark and want to see the results after it finishes
I end up scrolling up and looking for tables after each run
Stephan Grein
@stephanmg
hey folks.
is there any update on the possibility to use BenchmarkDotNet with Unity?
Arunprasath
@csearun

While migrating a web application from .NET Core 2.1 to .NET Core 3.1, I am getting a runtime exception:

System.InvalidOperationException: 'The CORS protocol does not allow specifying a wildcard (any) origin and credentials at the same time. Configure the CORS policy by listing individual origins if credentials needs to be supported.'

How do I resolve this System.InvalidOperationException?
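As the exception text says, the fix is to list explicit origins instead of the wildcard whenever credentials are enabled; a minimal ConfigureServices sketch, with a placeholder origin URL:

using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
        options.AddPolicy("AllowClient", policy =>
            policy.WithOrigins("https://app.example.com") // explicit origin instead of AllowAnyOrigin()
                  .AllowAnyHeader()
                  .AllowAnyMethod()
                  .AllowCredentials()));                  // credentials are only allowed with listed origins
}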

TeBeCo
@tebeco
not sure you're in the right channel here, it doesn't seem related to benchmarking at all
Tebjan Halm
@tebjan
Hi, is there any guide on how to do a benchmark programmatically?
without using attributes... just: start measure (with params), do something (several times), stop, get results
TeBeCo
@tebeco
the idea of attributes, i suppose, is to avoid the buggy measurements you would get by directly calling the code
because you would have to warm up stuff and so on
Tebjan Halm
@tebjan
the problem is, i am in an environment that doesn't do static compilation/reflection. it's a live-programming environment that does compilation at runtime: https://visualprogramming.net/
so I can call warm-up things myself, if necessary
TeBeCo
@tebeco
why not generate code with the attributes then?
do you want benchmarks or metrics?
those are 2 different things
also you could open https://github.com/dotnet/BenchmarkDotNet
and try to extract the logic you're looking for
that might exist, i never took a look at it to be fair
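If the attributed types can be produced at run time (for instance by the runtime compilation Tebjan describes), one way to drive BenchmarkDotNet programmatically is to pass the compiled Type straight to the runner. A sketch, where GetRuntimeCompiledBenchmarkType is a hypothetical stand-in for that environment's compiler:

using System;
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Running;

// Hypothetical: the host environment compiles a class decorated with [Benchmark] attributes.
Type benchmarkType = GetRuntimeCompiledBenchmarkType();

var summary = BenchmarkRunner.Run(benchmarkType, DefaultConfig.Instance);
Console.WriteLine(summary.TotalTime);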
Tebjan Halm
@tebjan
yes, that would be my fallback strategy, I was hoping there was some kind of guide or example somewhere
TeBeCo
@tebeco
maybe opening an issue on the repo itself as a [QUESTION] ...
including details about your context and limitations might help you get a cleaner answer
Tebjan Halm
@tebjan
yes, thanks
i wonder if you could find something here
if you get an answer on that i'll be curious ^^
Tebjan Halm
@tebjan
@tebeco i've created an issue: dotnet/BenchmarkDotNet#1445
Sebastiano Mandalà
@sebas77
Jumping in quickly to ask what config you would suggest I use to get fast benchmark iterations without the results being meaningless. Running a benchmark on even simple code takes minutes.
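A sketch of the usual trade-off: the short-run (or dry) job attributes cut the launch, warmup and iteration counts drastically, at the cost of noisier numbers (class name is a placeholder):

using BenchmarkDotNet.Attributes;

[ShortRunJob]   // 1 launch, 3 warmup and 3 target iterations: much faster, less stable results
// [DryJob]     // single pass, fastest of all, really only good for smoke tests
public class QuickBenchmarks
{
    [Benchmark]
    public int Sum()
    {
        int total = 0;
        for (int i = 0; i < 1_000; i++) total += i;
        return total;
    }
}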
Jason Bock
@JasonBock

I just tried to create a couple of benchmark tests, and....I got this error:

error MSB4086: A numeric comparison was attempted on "$(LangVersion)" that evaluates to "latest" instead of a number, in condition "'$(LangVersion)' == '' Or '$(LangVersion)' < '7.3'".

I've never seen that one before :). Any ideas why I'm getting this, and how to get rid of it?

Jason Bock
@JasonBock
Seems like I'm getting that error because I have <LangVersion> set to latest. Sounds like a bug to me. I'll submit something to the GitHub issues list.
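A workaround that usually sidesteps the numeric comparison, assuming the project can live with a pinned language version, is to put an explicit number instead of latest in the csproj:

<PropertyGroup>
  <!-- a numeric value instead of "latest" keeps the '< 7.3' comparison valid -->
  <LangVersion>8.0</LangVersion>
</PropertyGroup>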
TeBeCo
@tebeco
Bad idea of the day ....
Can BenchmarkDotnet do bisect so that we could detect threshold in the code,
[BisectParam]
public ValueTuple<int, int> RequestSize = (8, 4096);

[Benchmark]
[BisectPivot(MemoryDiagnoserResult.Gen0)]
public void Foo()
{
    // Code will run with
    // 8 => Alloc ?
    // 4096 => Alloc ?
    // (4096-8)/2 => Alloc ?
    // ....
}
i'm not sure if that would make any sense to be fair
it would be a bit like a FlatMap/Reduce
Florian Verdonck
@nojaf
Hello
I created my first benchmark today and now I'm wondering what to do with it.
How many times should I run it? And what is the best strategy for saving the results after a CI build?
Because I guess you want to compare the results with previous runs, so what is the best way to do that?
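One common approach (a sketch; see the exporter docs for the exact names) is to emit a machine-readable report on every CI run and archive it, so the next run can be diffed against it:

using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
[JsonExporterAttribute.Full]        // writes BenchmarkDotNet.Artifacts/results/*-report-full.json
[MarkdownExporterAttribute.GitHub]  // plus a human-readable table for the build log
public class MyBenchmarks
{
    [Benchmark]
    public string Concat() => string.Concat("a", "b", "c");
}

The JSON files can then be stored as CI artifacts and compared run-to-run with whatever tooling or thresholds you prefer.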