Commit a49a2a0

Use Kibana time series visualizations for benchmark results

With this commit, night_rally uses Kibana for visualizing nightly and release benchmark results. We also add an administration tool, e.g. for adding annotations. While we add support for these new visualizations now, we are in a transitory phase and will still populate and use the current charts for the time being. After a grace period, we will then switch to using Kibana exclusively.

Relates elastic#23

1 parent 04b863b

File tree

6 files changed: +535 −44 lines changed

README.md (+27 −23)
@@ -23,27 +23,32 @@ Now you can invoke night_rally regularly with the startup script `night_rally.sh

 #### Add an annotation

-To add an annotation, just find the right `*_annotation.json` file and add an annotation there. Here is an example record:
-
-```json
-{
-  "series": "GC young gen (sec)",
-  "x": "2016-08-08 06:10:01",
-  "shortText": "A",
-  "text": "Use 4GB heap instead of default"
-}
-```
+To add an annotation, use the admin tool. First find the correct trial timestamp by issuing `python3 admin.py list races --environment=nightly`; you will need this timestamp later. Below are examples for common cases:
+
+* Add an annotation for all charts of a specific nightly benchmark trial: `python3 admin.py add annotation --environment=nightly --trial-timestamp=20170502T220213Z --message="Just a test annotation"`
+* Add an annotation for all charts of one track for a specific nightly benchmark trial: `python3 admin.py add annotation --environment=nightly --trial-timestamp=20170502T220213Z --track=geonames --message="Just a test annotation for geonames"`
+* Add an annotation for a specific chart of one track for a specific nightly benchmark trial: `python3 admin.py add annotation --environment=nightly --trial-timestamp=20170502T220213Z --track=geonames --chart=io --message="Just a test annotation"`
+
+For more details, please issue `python3 admin.py add annotation --help`.
+
+**Note:** The admin tool also supports a dry-run mode for all commands that would change the data store. Just append `--dry-run`.
+
+**Note:** The new annotation will show up immediately.
+
+#### Remove an annotation

-* The series name has to match the series name in the CSV data file on the server (if no example is in the file you want to edit, inspect the S3 bucket `elasticsearch-benchmarks.elastic.co`).
-* In `x` you specify the timestamp where an annotation should appear. The timestamp format must be identical to the one in the example.
-* `shortText` is the annotation label.
-* `text` is the explanation that will be shown in the tooltip for this annotation.
+If you have made an error, you can also remove specific annotations by id:

-If you're finished, commit and push the change to `master` and the annotation will be shown after the next benchmark run.
+1. Issue `python3 admin.py list annotations --environment=nightly` and find the right annotation. Note that only the 20 most recent annotations are shown; you can show more by specifying `--limit=NUMBER`.
+2. Suppose the id of the annotation that we want to delete is `AVwM0jAA-dI09MVLDV39`. Then issue `python3 admin.py delete annotation --id=AVwM0jAA-dI09MVLDV39`.
+
+For more details, please issue `python3 admin.py delete annotation --help`.
+
+**Note:** The admin tool also supports a dry-run mode for all commands that would change the data store. Just append `--dry-run`.

 #### Add a new track

-For this three steps are needed:
+The following steps are necessary to add a new track:

 1. Copy a directory in `external/pages` and adjust the names accordingly.
 2. Adjust the menu structure in all other files (if this happens more often, we should think about using a template engine for that...)
@@ -54,14 +59,13 @@ If you're finished, please submit a PR. After the PR is merged, the new track wi

 #### Run a release benchmark

-Suppose we want to replace the (already published) results of the Elasticsearch release `5.3.0` with release `5.3.1` on our benchmark page.
+Suppose we want to publish a new release benchmark of the Elasticsearch release `5.3.1` on our benchmark page. To do that, start a new [macrobenchmark build](https://elasticsearch-ci.elastic.co/view/All/job/elastic+elasticsearch+master+macrobenchmark-periodic/) with the following parameters:

-1. Replace "5.3.0" with "5.3.1" in the `versions` array in each `index.html` in `external/pages`. Commit and push your changes (commit message convention: "Update comparison charts to 5.3.1")
-2. On the benchmark machine, issue the following command:
+* MODE: release
+* RELEASE: 5.3.1
+* TARGET_HOST: Just use the default value

-```
-night_rally.sh --target-host=target-551504.benchmark.hetzner-dc17.elasticnet.co:39200 --mode=comparison --release="5.3.1" --replace-release="5.3.0"
-```
+The results will show up automatically as soon as the build is finished.

 #### Run an ad-hoc benchmark

@@ -73,5 +77,5 @@ Suppose we want to publish the results of the commit hash `66202dc` in the Elast

 2. On the benchmark machine, issue the following command:

 ```
-night_rally.sh --target-host=target-551504.benchmark.hetzner-dc17.elasticnet.co:39200 --mode=adhoc --revision=66202dc --release="Lucene 7" --replace-release="Lucene 7
+night_rally.sh --target-host=target-551504.benchmark.hetzner-dc17.elasticnet.co:39200 --mode=adhoc --revision=66202dc --release="Lucene 7"
 ```
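
Taken together, the annotation commands documented in the README above form a short end-to-end workflow. Here is a sketch; the trial timestamp and annotation id are the placeholder values from the examples above, and `--dry-run` previews the write before applying it:

```
python3 admin.py list races --environment=nightly
python3 admin.py add annotation --environment=nightly --trial-timestamp=20170502T220213Z --track=geonames --chart=io --message="Just a test annotation" --dry-run
python3 admin.py add annotation --environment=nightly --trial-timestamp=20170502T220213Z --track=geonames --chart=io --message="Just a test annotation"
python3 admin.py list annotations --environment=nightly --track=geonames
python3 admin.py delete annotation --id=AVwM0jAA-dI09MVLDV39
```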

admin.py (new file, +289)

import os
import sys
import argparse
import client
# non-standard! requires setup.py!!
import tabulate


def list_races(es, args):
    limit = args.limit
    environment = args.environment
    track = args.track

    if args.track:
        print("Listing %d most recent races for track %s in environment %s.\n" % (limit, track, environment))
        query = {
            "query": {
                "bool": {
                    "filter": [
                        {
                            "term": {
                                "environment": environment
                            }
                        },
                        {
                            "term": {
                                "track": track
                            }
                        }
                    ]
                }
            }
        }
    else:
        print("Listing %d most recent races in environment %s.\n" % (limit, environment))
        query = {
            "query": {
                "term": {
                    "environment": environment
                }
            }
        }

    query["sort"] = [
        {
            "trial-timestamp": "desc"
        },
        {
            "track": "asc"
        },
        {
            "challenge": "asc"
        }
    ]

    result = es.search(index="rally-races-*", body=query, size=limit)
    races = []
    for hit in result["hits"]["hits"]:
        src = hit["_source"]
        races.append([src["trial-timestamp"], src["track"], src["challenge"], src["car"],
                      src["cluster"]["distribution-version"], src["user-tag"]])
    if races:
        print(tabulate.tabulate(races, headers=["Race Timestamp", "Track", "Challenge", "Car", "Version", "User Tag"]))
    else:
        print("No results")


def list_annotations(es, args):
    limit = args.limit
    environment = args.environment
    track = args.track
    if track:
        print("Listing %d most recent annotations in environment %s for track %s.\n" % (limit, environment, track))
        query = {
            "query": {
                "bool": {
                    "filter": [
                        {
                            "term": {
                                "environment": environment
                            }
                        },
                        {
                            "term": {
                                "track": track
                            }
                        }
                    ]
                }
            }
        }
    else:
        print("Listing %d most recent annotations in environment %s.\n" % (limit, environment))
        query = {
            "query": {
                "term": {
                    "environment": environment
                }
            }
        }
    query["sort"] = [
        {
            "trial-timestamp": "desc"
        },
        {
            "track": "asc"
        },
        {
            "chart": "asc"
        }
    ]

    result = es.search(index="rally-annotations", body=query, size=limit)
    annotations = []
    for hit in result["hits"]["hits"]:
        src = hit["_source"]
        annotations.append([hit["_id"], src["trial-timestamp"], src.get("track", ""), src.get("chart", ""), src["message"]])
    if annotations:
        print(tabulate.tabulate(annotations, headers=["Annotation Id", "Timestamp", "Track", "Chart", "Message"]))
    else:
        print("No results")


def add_annotation(es, args):
    environment = args.environment
    trial_timestamp = args.trial_timestamp
    track = args.track
    chart = args.chart
    message = args.message
    dry_run = args.dry_run

    if dry_run:
        print("Would add annotation with message [%s] for environment=[%s], trial timestamp=[%s], track=[%s], chart=[%s]" %
              (message, environment, trial_timestamp, track, chart))
    else:
        if not es.indices.exists(index="rally-annotations"):
            body = open("%s/resources/annotation-mapping.json" % os.path.dirname(os.path.realpath(__file__)), "rt").read()
            es.indices.create(index="rally-annotations", body=body)
        es.index(index="rally-annotations", doc_type="type", body={
            "environment": environment,
            "trial-timestamp": trial_timestamp,
            "track": track,
            "chart": chart,
            "message": message
        })


def delete_annotation(es, args):
    import elasticsearch
    annotations = args.id.split(",")
    if args.dry_run:
        if len(annotations) == 1:
            print("Would delete annotation with id [%s]." % annotations[0])
        else:
            print("Would delete %s annotations: %s." % (len(annotations), annotations))
    else:
        for annotation_id in annotations:
            try:
                es.delete(index="rally-annotations", doc_type="type", id=annotation_id)
                print("Successfully deleted [%s]." % annotation_id)
            except elasticsearch.TransportError as e:
                if e.status_code == 404:
                    print("Did not find [%s]." % annotation_id)
                else:
                    raise


def arg_parser():
    parser = argparse.ArgumentParser(description="Admin tool for Elasticsearch benchmarks",
                                     formatter_class=argparse.RawDescriptionHelpFormatter)

    subparsers = parser.add_subparsers(
        title="subcommands",
        dest="subcommand",
        help="")

    # list races --max-results=20
    # list annotations --max-results=20
    list_parser = subparsers.add_parser("list", help="List configuration options")
    list_parser.add_argument(
        "configuration",
        metavar="configuration",
        help="What the admin tool should list. Possible values are: races, annotations",
        choices=["races", "annotations"])

    list_parser.add_argument(
        "--limit",
        help="Limit the number of search results (default: 20).",
        default=20,
    )
    list_parser.add_argument(
        "--environment",
        help="Show only records from this environment",
        required=True
    )
    list_parser.add_argument(
        "--track",
        help="Show only records from this track",
        default=None
    )

    # If no "track" is given, annotate all tracks. "chart" indicates the graph;
    # if no chart is given, it is empty -> we need to write the queries so that we update all charts.
    #
    # add [annotation] --environment=nightly --trial-timestamp --track --chart --text
    add_parser = subparsers.add_parser("add", help="Add records")
    add_parser.add_argument(
        "configuration",
        metavar="configuration",
        help="",
        choices=["annotation"])
    add_parser.add_argument(
        "--dry-run",
        help="Just show what would be done but do not apply the operation.",
        default=False,
        action="store_true"
    )
    add_parser.add_argument(
        "--environment",
        help="Environment (default: nightly)",
        default="nightly"
    )
    add_parser.add_argument(
        "--trial-timestamp",
        help="Trial timestamp"
    )
    add_parser.add_argument(
        "--track",
        help="Track. If none given, applies to all tracks.",
        default=None
    )
    add_parser.add_argument(
        "--chart",
        help="Chart to target. If none given, applies to all charts.",
        choices=['query', 'script', 'stats', 'indexing', 'gc', 'index_times', 'merge_times', 'segment_count', 'segment_memory', 'io'],
        default=None
    )
    add_parser.add_argument(
        "--message",
        help="Annotation message",
        required=True
    )

    delete_parser = subparsers.add_parser("delete", help="Delete records")
    delete_parser.add_argument(
        "configuration",
        metavar="configuration",
        help="",
        choices=["annotation"])
    delete_parser.add_argument(
        "--dry-run",
        help="Just show what would be done but do not apply the operation.",
        default=False,
        action="store_true"
    )
    delete_parser.add_argument(
        "--id",
        help="Id of the annotation to delete. Separate multiple ids with a comma.",
        required=True
    )
    return parser


def main():
    parser = arg_parser()
    es = client.create_client()

    args = parser.parse_args()
    if args.subcommand == "list":
        if args.configuration == "races":
            list_races(es, args)
        elif args.configuration == "annotations":
            list_annotations(es, args)
        else:
            print("Do not know how to list [%s]" % args.configuration, file=sys.stderr)
            exit(1)
    elif args.subcommand == "add" and args.configuration == "annotation":
        add_annotation(es, args)
    elif args.subcommand == "delete" and args.configuration == "annotation":
        delete_annotation(es, args)
    else:
        parser.print_help(file=sys.stderr)
        exit(1)


if __name__ == '__main__':
    main()
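
For reference, `add_annotation` indexes a document of the following shape into the `rally-annotations` index. This is a sketch assembled from the code above; the field values are the placeholder examples from the README:

```json
{
  "environment": "nightly",
  "trial-timestamp": "20170502T220213Z",
  "track": "geonames",
  "chart": "io",
  "message": "Just a test annotation"
}
```

If `--track` or `--chart` is omitted, the corresponding field is stored as `null`, which is how an annotation applies to all tracks or all charts.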

client.py (new file, +31)

def create_client():
    import configparser
    import os
    # non-standard! requires setup.py!!
    import elasticsearch
    import certifi

    def load():
        config = configparser.ConfigParser(interpolation=configparser.ExtendedInterpolation())
        config.read("%s/resources/rally-template.ini" % os.path.dirname(os.path.realpath(__file__)))
        return config

    complete_cfg = load()
    cfg = complete_cfg["reporting"]
    if cfg["datastore.secure"] == "True":
        secure = True
    elif cfg["datastore.secure"] == "False":
        secure = False
    else:
        raise ValueError("Setting [datastore.secure] is neither [True] nor [False] but [%s]" % cfg["datastore.secure"])
    hosts = [
        {
            "host": cfg["datastore.host"],
            "port": cfg["datastore.port"],
            "use_ssl": secure
        }
    ]
    http_auth = (cfg["datastore.user"], cfg["datastore.password"]) if secure else None
    certs = certifi.where() if secure else None

    return elasticsearch.Elasticsearch(hosts=hosts, http_auth=http_auth, ca_certs=certs)
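
`create_client` reads its connection settings from `resources/rally-template.ini`. A minimal sketch of the `[reporting]` section it expects, based on the keys read above (host, port, and credentials are placeholder values):

```ini
[reporting]
datastore.secure = True
datastore.host = localhost
datastore.port = 9243
datastore.user = rally
datastore.password = changeme
```

With `datastore.secure = True`, the client uses SSL, basic authentication, and the `certifi` CA bundle; with `False`, it connects without credentials.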
