@@ -11,13 +11,28 @@ distributes these calculations across your cluster. You can then feed this
aggregated data into the {ml-features} instead of raw results, which
reduces the volume of data that must be considered while detecting anomalies.

- There are some limitations to using aggregations in {dfeeds}, however.
- Your aggregation must include a `date_histogram` aggregation, which in turn must
- contain a `max` aggregation on the time field. This requirement ensures that the
- aggregated data is a time series and the timestamp of each bucket is the time
- of the last record in the bucket. If you use a terms aggregation and the
- cardinality of a term is high, then the aggregation might not be effective and
- you might want to just use the default search and scroll behavior.
+ TIP: If you use a terms aggregation and the cardinality of a term is high, the
+ aggregation might not be effective and you might want to just use the default
+ search and scroll behavior.
+
+ There are some limitations to using aggregations in {dfeeds}. Your aggregation
+ must include a `date_histogram` aggregation, which in turn must contain a `max`
+ aggregation on the time field. This requirement ensures that the aggregated data
+ is a time series and the timestamp of each bucket is the time of the last record
+ in the bucket.
+
+ You must also consider the interval of the date histogram aggregation carefully.
+ The bucket span of your {anomaly-job} must be divisible by the value of the
+ `calendar_interval` or `fixed_interval` in your aggregation (with no remainder).
+ If you specify a `frequency` for your {dfeed}, it must also be divisible by this
+ interval.
+
+ TIP: As a rule of thumb, if your detectors use <<ml-metric-functions,metric>> or
+ <<ml-sum-functions,sum>> analytical functions, set the date histogram
+ aggregation interval to a tenth of the bucket span. This suggestion creates
+ finer, more granular time buckets, which are ideal for this type of analysis. If
+ your detectors use <<ml-count-functions,count>> or <<ml-rare-functions,rare>>
+ functions, set the interval to the same value as the bucket span.
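The divisibility rules described above can be sketched as a quick check. This is an illustrative Python snippet, not part of any {ml-features} API; it assumes the duration settings have already been parsed into whole seconds.

```python
# Illustrative check of the interval rules described above (assumption:
# bucket_span, interval, and frequency are pre-parsed into whole seconds).
def check_datafeed_intervals(bucket_span, interval, frequency=None):
    """Raise ValueError if the settings violate the divisibility rules."""
    if bucket_span % interval != 0:
        raise ValueError("bucket_span must be divisible by the "
                         "date_histogram interval (no remainder)")
    if frequency is not None and frequency % interval != 0:
        raise ValueError("frequency must be divisible by the "
                         "date_histogram interval (no remainder)")
    return True

# A 10-minute bucket_span with a 1-minute interval and a 5-minute frequency
# satisfies both rules:
check_datafeed_intervals(bucket_span=600, interval=60, frequency=300)
```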

When you create or update an {anomaly-job}, you can include the names of
aggregations, for example:
@@ -143,9 +158,9 @@ pipeline aggregation to find the first order derivative of the counter
----------------------------------
// NOTCONSOLE

- {dfeeds-cap} not only supports multi-bucket aggregations, but also single bucket aggregations.
- The following shows two `filter` aggregations, each gathering the number of unique entries for
- the `error` field.
+ {dfeeds-cap} not only supports multi-bucket aggregations, but also single bucket
+ aggregations. The following shows two `filter` aggregations, each gathering the
+ number of unique entries for the `error` field.

[source,js]
----------------------------------
@@ -225,14 +240,15 @@ When you define an aggregation in a {dfeed}, it must have the following form:
----------------------------------
// NOTCONSOLE

- The top level aggregation must be either a {ref}/search-aggregations-bucket.html[Bucket Aggregation]
- containing as single sub-aggregation that is a `date_histogram` or the top level aggregation
- is the required `date_histogram`. There must be exactly 1 `date_histogram` aggregation.
+ The top level aggregation must be either a
+ {ref}/search-aggregations-bucket.html[bucket aggregation] containing a single
+ sub-aggregation that is a `date_histogram` or the top level aggregation is the
+ required `date_histogram`. There must be exactly one `date_histogram` aggregation.

For more information, see
- {ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date Histogram Aggregation ].
+ {ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date histogram aggregation].

- NOTE: The `time_zone` parameter in the date histogram aggregation must be set to `UTC`,
- which is the default value.
+ NOTE: The `time_zone` parameter in the date histogram aggregation must be set to
+ `UTC`, which is the default value.

Each histogram bucket has a key, which is the bucket start time. This key cannot
be used for aggregations in {dfeeds}, however, because they need to know the
@@ -269,16 +285,9 @@ By default, {es} limits the maximum number of terms returned to 10000. For high
cardinality fields, the query might not run. It might return errors related to
circuit breaking exceptions that indicate that the data is too large. In such
cases, do not use aggregations in your {dfeed}. For more
- information, see {ref}/search-aggregations-bucket-terms-aggregation.html[Terms Aggregation].
-
- You can also optionally specify multiple sub-aggregations.
- The sub-aggregations are aggregated for the buckets that were created by their
- parent aggregation. For more information, see
- {ref}/search-aggregations.html[Aggregations].
+ information, see
+ {ref}/search-aggregations-bucket-terms-aggregation.html[Terms aggregation].

- TIP: If your detectors use metric or sum analytical functions, set the
- `interval` of the date histogram aggregation to a tenth of the `bucket_span`
- that was defined in the job. This suggestion creates finer, more granular time
- buckets, which are ideal for this type of analysis. If your detectors use count
- or rare functions, set `interval` to the same value as `bucket_span`. For more
- information about analytical functions, see <<ml-functions>>.
+ You can also optionally specify multiple sub-aggregations. The sub-aggregations
+ are aggregated for the buckets that were created by their parent aggregation.
+ For more information, see {ref}/search-aggregations.html[Aggregations].
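As a sketch of the shape this describes, an aggregation body with an extra sub-aggregation nested under the required `date_histogram` might look like the following. The field names `time` and `bytes` are hypothetical, and this is a plain Python dict for illustration, not client code.

```python
# Hypothetical aggregation body: the required date_histogram with a max on
# the time field, plus one optional extra sub-aggregation on another field.
aggs = {
    "buckets": {
        "date_histogram": {"field": "time", "fixed_interval": "60s"},
        "aggregations": {
            "time": {"max": {"field": "time"}},        # required max on time
            "avg_bytes": {"avg": {"field": "bytes"}},  # optional sub-aggregation
        },
    }
}
```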