Add microbenchmarking infrastructure #18891


Conversation

danielmitterdorfer (Member)

With this commit we add a benchmarks project that contains the necessary build infrastructure and an example benchmark. It is added as a separate project to avoid interfering with the regular build too much (especially sanity checks) and to keep the microbenchmarks isolated.

Microbenchmarks are generated with `gradle :benchmarks:jmhJar` and can be run with `gradle :benchmarks:jmh`.

We intentionally do not use the [jmh-gradle-plugin](https://github.com/melix/jmh-gradle-plugin) as it causes all sorts of problems (dependencies are not properly excluded, not all JMH parameters can be set) and it adds another abstraction layer that is not needed.

Closes #18242

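For context, a JMH microbenchmark of the kind this project is meant to host looks roughly like the sketch below. The class and the measured operation are illustrative only, not the example benchmark added in this commit:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Benchmark)
public class StringConcatBenchmark {
    private final String left = "micro";
    private final String right = "benchmark";

    // JMH's annotation processor (jmh-generator-annprocess) generates the
    // actual harness classes from methods annotated with @Benchmark; returning
    // the result prevents the JIT from dead-code-eliminating the operation.
    @Benchmark
    public String concat() {
        return left + right;
    }
}
```

The annotation processor is also why `jmh-generator-annprocess` shows up in the dependency discussion below.
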
danielmitterdorfer (Member, Author) commented Jun 15, 2016

@rjernst: I pushed initial support for microbenchmarking but I have two specific issues with `benchmarks/build.gradle`:

For some reason the transitive dependencies of JMH don't get picked up, so I have added them explicitly in the meantime. `gradle :benchmarks:dependencies` shows:

```
------------------------------------------------------------
Project :benchmarks - Elasticsearch subproject :benchmarks
------------------------------------------------------------

_transitive_org.openjdk.jmh:jmh-core:1.12
\--- org.openjdk.jmh:jmh-core:1.12
     +--- net.sf.jopt-simple:jopt-simple:4.6
     \--- org.apache.commons:commons-math3:3.2

_transitive_org.openjdk.jmh:jmh-generator-annprocess:1.12
\--- org.openjdk.jmh:jmh-generator-annprocess:1.12
     \--- org.openjdk.jmh:jmh-core:1.12
          +--- net.sf.jopt-simple:jopt-simple:4.6
          \--- org.apache.commons:commons-math3:3.2

[...]
compile - Dependencies for source set 'main'.
+--- org.elasticsearch:elasticsearch:5.0.0-alpha4-SNAPSHOT -> project :core
[...]
+--- org.openjdk.jmh:jmh-core:1.12
\--- org.openjdk.jmh:jmh-generator-annprocess:1.12
```

As you can see above, in the compile scope the transitive dependencies are gone. Their POM looks fine to me and I don't remember ever having problems with JMH dependencies, so I'm probably missing something very obvious. :)
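
For reference, the interim workaround mentioned above would look roughly like this in `benchmarks/build.gradle` (a sketch based on the versions in the report above, not necessarily the exact build file in this PR):

```groovy
dependencies {
    compile 'org.openjdk.jmh:jmh-core:1.12'
    compile 'org.openjdk.jmh:jmh-generator-annprocess:1.12'
    // workaround: JMH's transitive dependencies are not picked up,
    // so declare them explicitly for the time being
    compile 'net.sf.jopt-simple:jopt-simple:4.6'
    compile 'org.apache.commons:commons-math3:3.2'
}
```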

The next issue is that JMH generates classes during the build and uses all sorts of tricks which, as you might have guessed, trip our forbidden APIs check. I have tried various approaches, such as a custom SuppressForbidden annotation that gets applied to all generated benchmark (sub)classes (via @Inherited), but JMH also creates additional classes that do not derive from our classes. We could postprocess the generated Java files before they get compiled, but I am not sure it is worth the hassle. Ideally, I'd like to skip the forbidden APIs check for the benchmarks entirely, but the "best" option I've found so far is to set ignoreFailures to true (and I do not like that). Do you have a better idea?
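
For illustration, the @Inherited approach described above amounts to an annotation along these lines (a sketch; as noted, it does not help for generated classes that do not extend an annotated benchmark class):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Sketch of a suppression annotation. @Inherited propagates the annotation
// only to subclasses, so JMH-generated helper classes that do not extend an
// annotated benchmark class would still trip the forbidden APIs check.
@Inherited
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
public @interface SuppressForbidden {
}
```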

jpountz (Contributor) commented Jun 15, 2016

Since this is for developer use only and will not be shipped, I don't think we need to go through a formal review process. I suggest we merge it and then iterate directly on master.

danielmitterdorfer (Member, Author)

@jpountz I'm fine with that. @ywelsch also had a quick look, so I guess it's OK. I'll just wait a couple of hours so @rjernst has a chance to answer, but unless he is strongly against merging, I'll merge it into master tomorrow morning.

rjernst (Member) commented Jun 15, 2016

I agree with @jpountz that since this is just for developers, we should get it in to start with and iterate. I do think we should eventually integrate this more tightly into the build (e.g. as a benchmark task that can be run inside each project using elasticsearch.build; it does not have to be included in check). Part of the reason is that right now it requires pulling in every jar we have if we want to benchmark something outside of core. But that can all be worked on over time; this is a good start.

Regarding transitive dependencies: elasticsearch.build does magic to disallow transitive dependencies. We do this because for software we ship, we want to know exactly which dependencies we are shipping. For now, as long as benchmarks is a separate project, you might consider making it a plain 'java' project (and you can then apply the randomized testing plugin).
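
A minimal sketch of that suggestion, assuming the benchmarks project keeps its own build file (the stock Gradle 'java' plugin resolves transitive dependencies normally, unlike elasticsearch.build):

```groovy
// benchmarks/build.gradle: apply the stock 'java' plugin instead of
// elasticsearch.build so that transitive dependencies resolve normally
apply plugin: 'java'

dependencies {
    // jopt-simple and commons-math3 are now pulled in transitively
    compile 'org.openjdk.jmh:jmh-core:1.12'
    compile 'org.openjdk.jmh:jmh-generator-annprocess:1.12'
}
```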

danielmitterdorfer (Member, Author)

@rjernst: Thanks for your explanation. Then I think we're good to start now.

Just as a general note: while this allows us to benchmark on our own machines now, CI integration is not yet done, but I'll work on that with the infra team soon.

danielmitterdorfer merged commit 2c467fd into elastic:master on Jun 15, 2016
mark-vieira added the Team:Delivery label on Nov 11, 2020
Labels: :Delivery/Build, Team:Delivery, v5.0.0-alpha4