jmh:run -i 20 -wi 10 -f1 -t1 .*JavaUtility.*
I got the error below:
[warn] Multiple main classes detected. Run 'show discoveredMainClasses' to see the list
[info] Running (fork) org.openjdk.jmh.Main -i 20 -wi 10 -f1 -t1 .*JavaUtility.*
[error] No matching benchmarks. Miss-spelled regexp?
[error] Use EXTRA verbose mode to debug the pattern matching.
[error] Nonzero exit code returned from runner: 1
[error] (Jmh / run) Nonzero exit code returned from runner: 1
[error] Total time: 2 s, completed 21 Apr, 2019 3:38:33 PM
show discoveredMainClasses
is used to show the main classes in our project, so what is its connection with JMH?
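For what it's worth, the [warn] line is just sbt noticing that several main classes are on the classpath; the regexp itself is matched by JMH against the benchmark entries generated into META-INF/BenchmarkList, not against main classes. A minimal sketch of a class that would produce a matching entry (the package, class and method names here are made up for illustration):
package bench

import org.openjdk.jmh.annotations.Benchmark

// Illustrative only: JMH would register this as
// "bench.JavaUtilityBenchmark.concat" in META-INF/BenchmarkList,
// which the pattern .*JavaUtility.* would then match.
class JavaUtilityBenchmark {

  @Benchmark
  def concat(): String = "value-" + System.nanoTime()
}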
Good evening. I have one question regarding the benchmarks:
I understand that one must run JMH benchmarks with several samples and after a few warmup iterations, just to make sure that all the JVM's optimisations (tiered compilation, the just-in-time compiler, inlining, etc.) have time to kick in, thus creating a scenario closer to real execution. Many of those optimisations, such as inlining, are aimed at improving time performance by reducing the number of instructions executed, the amount of stack used, or the number of control-flow branches, whether from conditionals or from virtual calls.
However, when you want to measure memory allocation using gc.alloc.rate.norm, I am not sure how many optimisations the JVM currently applies. In other words, I cannot recall any optimisation currently in the JVM (I know there are projects, but they are not there yet) intended to avoid allocating objects on the heap.
Which is why I ask: have you ever noticed the number of warmup iterations affecting the measured memory allocation?
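One way to probe this empirically (a sketch only; the class name and benchmark body below are invented) is to run the same allocation-heavy benchmark with the GC profiler and different warmup counts, then compare the reported gc.alloc.rate.norm:
import org.openjdk.jmh.annotations.Benchmark

// Illustrative benchmark with a deliberately allocation-heavy body:
// it builds intermediate collections and boxes values on every call.
class AllocationWarmupCheck {

  @Benchmark
  def boxed(): Int =
    (1 to 1000).map(_.toString).map(_.length).sum
}
Running something like jmh:run -prof gc -wi 0 -i 10 .*AllocationWarmupCheck.* and then the same with -wi 10 should show whether the per-operation allocation figure moves once the JIT has warmed up.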
[error] Exception in thread "main" java.lang.RuntimeException: ERROR: Unable to find the resource: /META-INF/BenchmarkList
[error] at org.openjdk.jmh.runner.AbstractResourceReader.getReaders(AbstractResourceReader.java:98)
[error] at org.openjdk.jmh.runner.BenchmarkList.find(BenchmarkList.java:122)
[error] at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:263)
[error] at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
[error] at org.openjdk.jmh.Main.main(Main.java:71)
import org.openjdk.jmh.annotations.Benchmark

class HelloBenchmark {

  @Benchmark
  def range(): Int =
    1.to(100000)
      .filter(_ % 2 == 0)
      .count(_.toString.length == 4)

  @Benchmark
  def iterator(): Int =
    Iterator.from(1)
      .takeWhile(_ < 100000)
      .filter(_ % 2 == 0)
      .count(_.toString.length == 4)
}
jmh:run .*Benchmark
sourceDirectory in Jmh := (sourceDirectory in Test).value
classDirectory in Jmh := (classDirectory in Test).value
dependencyClasspath in Jmh := (dependencyClasspath in Test).value
// rewire tasks, so that 'jmh:run' automatically invokes 'jmh:compile' (otherwise a clean 'jmh:run' would fail)
compile in Jmh := (compile in Jmh).dependsOn(compile in Test).value
run in Jmh := (run in Jmh).dependsOn(Keys.compile in Jmh).evaluated
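For context, these settings normally sit in a build.sbt for a module that has the sbt-jmh plugin enabled; a rough sketch (the plugin version and project name are placeholders, not taken from the thread):
// project/plugins.sbt -- version deliberately left as a placeholder
addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "<version>")

// build.sbt -- the project name is illustrative
lazy val benchmarks = (project in file("benchmarks"))
  .enablePlugins(JmhPlugin)
  .settings(
    // the five "... in Jmh := ... in Test" rewires from above go here
  )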
I’m trying to get the async profiler working with flame graphs. I’ve tried:
benchmarks/jmh:run -bm avgt -i 1 -wi 1 -f1 -t1 -prof async:libPath=/Users/tim/bin/async-profiler-1.8.1/build/libasyncProfiler.so .*TarBenchmark.*
and that succeeds. But if I try
benchmarks/jmh:run -bm avgt -i 1 -wi 1 -f1 -t1 -prof async:libPath=/Users/tim/bin/async-profiler-1.8.1/build/libasyncProfiler.so;dir=/tmp/result;output=flamegraph .*TarBenchmark.*
I get
…
[info] Async profiler results:
[info] /Users/tim/work/postman-pat/benchmarks/com.itv.postman.pat.tar.TarBenchmark.run-AverageTime-n-10-size-1000000/summary-cpu.txt
[info] # Run complete. Total time: 00:00:30
[info] REMEMBER: The numbers below are just data. To gain reusable insights, you need to follow up on
[info] why the numbers are the way they are. Use profilers (see -prof, -lprof), design factorial
[info] experiments, perform baseline and negative tests that provide experimental control, make sure
[info] the benchmarking environment is safe on JVM/OS/HW level, ask for reviews from the domain experts.
[info] Do not assume the numbers tell you what you want them to tell.
[info] Benchmark (n) (size) Mode Cnt Score Error Units
[info] TarBenchmark.run 10 1000000 avgt 7.399 s/op
[info] TarBenchmark.run:·async 10 1000000 avgt NaN ---
[success] Total time: 31 s, completed 18 Sep 2020, 16:06:36
[error] Expected ';'
[error] output=flamegraph .*TarBenchmark.*
[error]
What am I doing wrong?!
It's the dir and output options that I attempted to add. I also tried adding -rf json to the options when running the sbt jmh task.
Hello, I have the same problem as @TimWSpence. I have the impression the problem is due to how arguments are given to the async profiler: -prof async:opt1=value1,value2;opt2=value3. When sending the command, I have the strong impression the ; is read by sbt and treated as a command separator, so only the first argument reaches the async profiler and the rest is run as a junk command. How can this be fixed?
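If sbt's parser keeps eating the ';', one workaround (a sketch only, assuming a JMH version that bundles the async profiler integration; the paths and object name below are placeholders) is to bypass the sbt command line and hand the init line to JMH's Runner API as a plain string:
import org.openjdk.jmh.runner.Runner
import org.openjdk.jmh.runner.options.OptionsBuilder

// Hypothetical launcher: the profiler init line is an ordinary String here,
// so its ';' separators never pass through sbt's command parsing.
object ProfiledBenchmarks {
  def main(args: Array[String]): Unit = {
    val opts = new OptionsBuilder()
      .include(".*TarBenchmark.*")
      .addProfiler(
        "async",
        "libPath=/path/to/libasyncProfiler.so;output=flamegraph;dir=/tmp/result")
      .build()
    new Runner(opts).run()
  }
}
This can then be launched with runMain from whichever configuration compiles the benchmarks, so only the class name has to cross the sbt prompt.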
How do you specify multiple arguments? For me it never works, e.g. when giving dir, output, libPath and event. Example command:
jmh:run -i 2 -wi 1 -f1 -t1 my.benchmark.ComputationBenchmark -prof async:libPath=/path/to/libasyncProfiler.dylib;output=jfr;event=cpu,allock,lock,cache-misses,itimer,wall;interval=500000;dir=results-profiler
Error:
[error] Expected ID character
[error] Not a valid command: output (similar: about)
[error] Expected project ID
[error] Expected configuration
[error] Expected ':'
[error] Expected key
[error] Not a valid key: output (similar: fileOutputs, earlyOutput, outputStrategy)
[error] output=jfr
[error]
(see the handling of the alloc and lock events): https://github.com/openjdk/jmh/commit/a394dad4bc32ba2ff6fa21cedbe3ef8dfde7f2cd#diff-ef9f37ff4bcd29fb1513944a20d1b77bdd52308fabde5265b648bca8be1fd538R261-R269