    Stewart Stewart
    @stewSquared
    Could someone help me figure out what's going on?
    Konrad `ktoso` Malawski
    @ktoso
    use sbt 1.x I suppose
    (latest is 1.2.0)
    hm, though it seems also available for 0.13… not sure
    can you open a ticket with your build files shown?
    Georgi Krastev
    @joroKr21

    Hi, how do I load a resource into @State class? I have:

    benchmarks/jmh:resourceDirectory
    [info] .../benchmarks/src/test/resources

    But jmh:run still fails to find the resources. Running a simple (non-jmh) test does find the files. Any clue?
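    (For context, a sketch of what loading a resource from a @State class usually looks like, with a hypothetical file name; the resource is read once in @Setup via the class loader, which is why the classpath and forking configuration matter here.)

    import scala.io.Source
    import org.openjdk.jmh.annotations.{Scope, Setup, State}

    @State(Scope.Benchmark)
    class DataState {
      var lines: Vector[String] = Vector.empty

      @Setup
      def load(): Unit = {
        // "data.txt" is a hypothetical file under src/test/resources
        val in = getClass.getResourceAsStream("/data.txt")
        lines = Source.fromInputStream(in).getLines().toVector
      }
    }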

    Georgi Krastev
    @joroKr21
    Didn't change anything for me
    The class loader is the same anyway
    Georgi Krastev
    @joroKr21
    Ohh I think it's maybe fork in Test causing this?
    Georgi Krastev
    @joroKr21
    Hmm it's sbt/sbt#3963
    Georgi Krastev
    @joroKr21
    In the end I just consolidated my test data into one file, it's really too much hassle otherwise
    Jason Zaugg
    @retronym
    @ktoso Could you please tag/publish a new release when you get a moment? There are a decent number of improvements stacked up: https://github.com/ktoso/sbt-jmh/compare/v0.3.3..master
    Andriy Plokhotnyuk
    @plokhotnyuk
    @ktoso please also add a missing tag for v0.3.4
    Harmeet Singh(Taara)
    @harmeetsingh0013
    Hello guys, I am new to JMH and am trying to use the sbt-jmh plugin in my sample project.
    My sample project contains Java class files as well, so is it possible to benchmark a Java class with the sbt-jmh plugin?
    While running the command jmh:run -i 20 -wi 10 -f1 -t1 .*JavaUtility.* I got the error below:
    [warn] Multiple main classes detected.  Run 'show discoveredMainClasses' to see the list
    [info] Running (fork) org.openjdk.jmh.Main -i 20 -wi 10 -f1 -t1 .*JavaUtility.*
    [error] No matching benchmarks. Miss-spelled regexp?
    [error] Use EXTRA verbose mode to debug the pattern matching.
    [error] Nonzero exit code returned from runner: 1
    [error] (Jmh / run) Nonzero exit code returned from runner: 1
    [error] Total time: 2 s, completed 21 Apr, 2019 3:38:33 PM
    etienne
    @crakjie
    have you tried to run show discoveredMainClasses in sbt?
    Harmeet Singh(Taara)
    @harmeetsingh0013
    no
    but @crakjie show discoveredMainClasses is used to show the main classes in our project, what is the link with JMH?
    etienne
    @crakjie
    Maybe JMH can't choose the right main?
    Do you have functions annotated with @Benchmark in your JavaUtility file?
    Harmeet Singh(Taara)
    @harmeetsingh0013
    yes @crakjie I have
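    (For reference, a minimal sketch, with hypothetical names, of the shape sbt-jmh discovers: only classes with methods annotated with @Benchmark, compiled in the JMH source scope, produce entries that a filter like .*JavaUtility.* can match.)

    import org.openjdk.jmh.annotations.Benchmark

    // Hypothetical example: the class name matches the .*JavaUtility.* filter,
    // and JMH only discovers methods annotated with @Benchmark.
    class JavaUtilityBenchmark {

      @Benchmark
      def concatUtility(): String =
        "Java" + "Utility"   // placeholder for a call into the real JavaUtility class

    }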
    Diego E. Alonso Blas
    @diesalbla

    Good evening. I have one question regarding the benchmarks:
    I understand that one must run JMH benchmarks with several samples and after a few warmup iterations, to make sure that all the possible JVM optimisations (tiered compilation, the just-in-time compiler, inlining, etc.) have time to kick in, thus creating a scenario more similar to real execution. Many of those optimisations, such as inlining, are intended to improve time performance by reducing the number of instructions executed, the stack bytes allocated, or the number of control-flow branches, be it from conditionals or from virtual calls.
    However, when you want to measure memory allocation, using gc.alloc.rate.norm, I am not sure how many such optimisations the JVM is currently capable of.
    In other words, I cannot recall any optimisation currently in the JVM (I know there are projects, but they are not there yet) intended to avoid allocating objects on the heap.

    Which is why I ask: have you ever noticed the number of warmup iterations to affect the measure of memory allocation?

    Konrad `ktoso` Malawski
    @ktoso
    yeah, the JVM does things to avoid heap allocs if it can; it's usually due to escape analysis being able to prove an object does not escape some function's scope, so it can stack-allocate it
    wow, that was quite an old message
    I don't often hang out on gitter nowadays...
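    (As an illustration of that point, a sketch with hypothetical names of a benchmark where a short-lived object may be scalar-replaced by escape analysis once C2 kicks in.)

    import org.openjdk.jmh.annotations.{Benchmark, Scope, State}

    @State(Scope.Thread)
    class EscapeAnalysisBenchmark {
      var a: Int = 21
      var b: Int = 21

      // The tuple never escapes this method, so once C2 compiles it,
      // escape analysis may scalar-replace the allocation entirely.
      @Benchmark
      def sumViaTuple(): Int = {
        val p = (a, b)
        p._1 + p._2
      }
    }

    Comparing jmh:run -prof gc -wi 0 .*EscapeAnalysisBenchmark.* against a run with -wi 10 would show whether gc.alloc.rate.norm drops once warmup lets that optimisation kick in.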
    Muse Mekuria
    @sumew
    I'm trying to compare different algorithms with generated input values. The idea is to create random large data structures and compare implementations. Is JMH suited for something like this?
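    (A typical pattern for that use case, sketched with hypothetical names: generate the random data once per trial in a @Setup method on a @State class, optionally sized via @Param, so that data generation stays outside the measured code.)

    import java.util.concurrent.ThreadLocalRandom
    import org.openjdk.jmh.annotations._

    @State(Scope.Benchmark)
    class SortBenchmark {
      @Param(Array("1000", "100000"))
      var size: Int = _

      var data: Array[Int] = _

      // Runs once per trial, before any measurement.
      @Setup(Level.Trial)
      def generate(): Unit =
        data = Array.fill(size)(ThreadLocalRandom.current().nextInt())

      // Copy before sorting so every invocation sees the same unsorted input.
      @Benchmark
      def jdkSort(): Array[Int] = {
        val copy = data.clone()
        java.util.Arrays.sort(copy)
        copy
      }
    }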
    mvillafuertem
    @mvillafuertem
    Hi all, I'm trying to use sbt-jmh in the test scope, but I get this error. Do I need another dependency?
    [error] Exception in thread "main" java.lang.RuntimeException: ERROR: Unable to find the resource: /META-INF/BenchmarkList
    [error]         at org.openjdk.jmh.runner.AbstractResourceReader.getReaders(AbstractResourceReader.java:98)
    [error]         at org.openjdk.jmh.runner.BenchmarkList.find(BenchmarkList.java:122)
    [error]         at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:263)
    [error]         at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
    [error]         at org.openjdk.jmh.Main.main(Main.java:71)
    import org.openjdk.jmh.annotations.Benchmark

    class HelloBenchmark {
    
      @Benchmark
      def range(): Int =
        1.to(100000)
          .filter(_ % 2 == 0)
          .count(_.toString.length == 4)
    
      @Benchmark
      def iterator(): Int =
        Iterator.from(1)
          .takeWhile(_ < 100000)
          .filter(_ % 2 == 0)
          .count(_.toString.length == 4)
    
    }
    jmh:run .*Benchmark
        sourceDirectory in Jmh := (sourceDirectory in Test).value
        classDirectory in Jmh := (classDirectory in Test).value
        dependencyClasspath in Jmh := (dependencyClasspath in Test).value
        // rewire tasks, so that 'jmh:run' automatically invokes 'jmh:compile' (otherwise a clean 'jmh:run' would fail)
        compile in Jmh := (compile in Jmh).dependsOn(compile in Test).value
        run in Jmh := (run in Jmh).dependsOn(Keys.compile in Jmh).evaluated
    Tim Spence
    @TimWSpence

    I’m trying to get the async profiler working with flame graphs. I’ve tried:

    benchmarks/jmh:run -bm avgt -i 1 -wi 1 -f1 -t1 -prof async:libPath=/Users/tim/bin/async-profiler-1.8.1/build/libasyncProfiler.so .*TarBenchmark.*

    and that succeeds. But if I try

    benchmarks/jmh:run -bm avgt -i 1 -wi 1 -f1 -t1 -prof async:libPath=/Users/tim/bin/async-profiler-1.8.1/build/libasyncProfiler.so;dir=/tmp/result;output=flamegraph .*TarBenchmark.*

    I get

    …
    [info] Async profiler results:
    [info]   /Users/tim/work/postman-pat/benchmarks/com.itv.postman.pat.tar.TarBenchmark.run-AverageTime-n-10-size-1000000/summary-cpu.txt
    [info] # Run complete. Total time: 00:00:30
    [info] REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
    [info] why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
    [info] experiments, perform baseline and negative tests that provide experimental control, make sure
    [info] the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
    [info] Do not assume the numbers tell you what you want them to tell.
    [info] Benchmark                (n)   (size)  Mode  Cnt  Score   Error  Units
    [info] TarBenchmark.run          10  1000000  avgt       7.399           s/op
    [info] TarBenchmark.run:·async   10  1000000  avgt         NaN            ---
    [success] Total time: 31 s, completed 18 Sep 2020, 16:06:36
    [error] Expected ';'
    [error] output=flamegraph .*TarBenchmark.*
    [error]

    What am I doing wrong?!

    (That was actually quite a bit of text. The difference between the two commands is the extra args dir and output that I attempted to add.)
    Tim Spence
    @TimWSpence
    Anyone? :)
    Pau Alarcón
    @paualarco
    hello! What do you recommend for visualising benchmark results?
    https://jmh.morethan.io/ seems to be popular, however it only accepts JSON files; to generate results in this format, what I tried is to pass -rf json to the options when running the sbt jmh task.
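    (For reference, a run that writes a JSON report consumable by jmh.morethan.io might look like the following; the output path is illustrative, and -rff controls where the report file goes.)

    jmh:run -rf json -rff target/jmh-result.json .*Benchmark.*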
    Andriy Plokhotnyuk
    @plokhotnyuk
    @paualarco here is an example of using the JFreeChart library for plotting custom charts. It runs benchmarks and visualises results with a charts command. It adds the -rf and -rff options to all passed options and supplies them to the jmh:run task, then groups results per benchmark and plots the main score series to separate images.
    Pau Alarcón
    @paualarco
    thanks!
    but the example you provided does not run the benchmarks, right? It just creates visualisations given a file with the results.
    Andriy Plokhotnyuk
    @plokhotnyuk
    Please look here and here to see how it is called to run benchmarks and plot the results.
    Pau Alarcón
    @paualarco
    awesome, thank u!
    Pavan Lanka
    @pavibhai
    Is there a way to include the JMH dependencies in the test scope instead of compile? I currently see that when I include the plugin, the dependencies are added to compile.
    Andriy Plokhotnyuk
    @plokhotnyuk
    @pavibhai Hi, Pavan! In my view, benchmarks should be in the src directory and should have tests (in the test directory) that check that the benchmarked behavior is as expected.
    Pavan Lanka
    @pavibhai
    @plokhotnyuk Thanks Andriy, that makes sense. I was just exploring whether, in the absence of a separate module for benchmarks, it makes sense to include them in src/test. Maybe that is not possible.
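    (A sketch of the layout Andriy describes, with illustrative project names: a dedicated benchmarks module that enables JmhPlugin and depends on the code under test, keeping benchmark sources in that module's src/main/scala.)

    // build.sbt -- illustrative project names
    lazy val core = project.in(file("core"))

    lazy val benchmarks = project
      .in(file("benchmarks"))
      .enablePlugins(JmhPlugin)   // JmhPlugin is provided by the sbt-jmh plugin
      .dependsOn(core)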
    Camilo Ariza
    @aariza1
    Hello, anybody here?
    Lucien Iseli
    @Gorzen

    Hello, I have the same problem as @TimWSpence. I have the impression the problem is due to how arguments are given to the async profiler.
    With -prof async:opt1=value1,value2;opt2=value3, the issue is that the ; seems to be read by sbt as a command separator, so only the first argument reaches the async profiler and a junk command is then run. How can I fix it?
    How do you specify multiple arguments? For me it never works, e.g. giving dir, output, libPath and event.

    Example command:
    jmh:run -i 2 -wi 1 -f1 -t1 my.benchmark.ComputationBenchmark -prof async:libPath=/path/to/libasyncProfiler.dylib;output=jfr;event=cpu,allock,lock,cache-misses,itimer,wall;interval=500000;dir=results-profiler
    Error:

    [error] Expected ID character
    [error] Not a valid command: output (similar: about)
    [error] Expected project ID
    [error] Expected configuration
    [error] Expected ':'
    [error] Expected key
    [error] Not a valid key: output (similar: fileOutputs, earlyOutput, outputStrategy)
    [error] output=jfr
    [error]
    Andriy Plokhotnyuk
    @plokhotnyuk
    The nested quoting like this works fine for me:
    sbt 'jsoniter-scala-benchmarkJVM/jmh:run -prof "async:dir=target/async-reports;output=flamegraph;libPath=/opt/async-profiler/build/libasyncProfiler.so" -wi 10 -i 60 TwitterAPIReading.jsoniterScala'
    Lucien Iseli
    @Gorzen
    Thank you very much @plokhotnyuk ! It works :)
    I had tried to mess around with quotation marks but it turns out I did not try the right spot.
    Thanks :)
    Lucien Iseli
    @Gorzen
    Is it possible to profile multiple events? It seems to be possible in async-profiler but when I give multiple events to sbt-jmh I get an error:
    [error] Event name should not contain commas: cpu,allock,lock,itimer,wall
    Andriy Plokhotnyuk
    @plokhotnyuk
    It seems that only a limited number of secondary events are allowed in the latest integration of JMH with async-profiler, and they are toggled on by separate options (alloc and lock): https://github.com/openjdk/jmh/commit/a394dad4bc32ba2ff6fa21cedbe3ef8dfde7f2cd#diff-ef9f37ff4bcd29fb1513944a20d1b77bdd52308fabde5265b648bca8be1fd538R261-R269
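    (Based on the options added in that commit, a run that enables those secondary profiles alongside the primary event might look roughly like this; the exact option syntax may vary between JMH versions, so treat it as a sketch with an illustrative benchmark name.)

    sbt 'benchmarks/jmh:run -prof "async:output=flamegraph;event=cpu;alloc;lock" .*MyBenchmark.*'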
    Lucien Iseli
    @Gorzen
    Oh yes it would appear so. Thanks again for the very helpful answers @plokhotnyuk ! :smile: