
Commit ba605f4

Merge branch 'master' into feature/eql
2 parents fe1b478 + 1511486

44 files changed: +550 -283 lines changed


docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -183,7 +183,7 @@ number of significant digits). This means that if data is recorded with values f
 microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond for values up to
 1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour).
 
-The HDR Histogram can be used by specifying the `method` parameter in the request:
+The HDR Histogram can be used by specifying the `hdr` object in the request:
 
 [source,console]
 --------------------------------------------------
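(The console example that follows in the source file is not shown in this hunk.) As a sketch of the renamed parameter, a minimal percentile ranks request that enables the HDR histogram through the `hdr` object might look like the following; the index name, field, and values are illustrative assumptions, not part of this commit:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [500, 600],
        "hdr": {
          "number_of_significant_value_digits": 3
        }
      }
    }
  }
}
--------------------------------------------------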

docs/reference/ml/anomaly-detection/aggregations.asciidoc

Lines changed: 37 additions & 28 deletions
@@ -11,13 +11,28 @@ distributes these calculations across your cluster. You can then feed this
 aggregated data into the {ml-features} instead of raw results, which
 reduces the volume of data that must be considered while detecting anomalies.
 
-There are some limitations to using aggregations in {dfeeds}, however.
-Your aggregation must include a `date_histogram` aggregation, which in turn must
-contain a `max` aggregation on the time field. This requirement ensures that the
-aggregated data is a time series and the timestamp of each bucket is the time
-of the last record in the bucket. If you use a terms aggregation and the
-cardinality of a term is high, then the aggregation might not be effective and
-you might want to just use the default search and scroll behavior.
+TIP: If you use a terms aggregation and the cardinality of a term is high, the
+aggregation might not be effective and you might want to just use the default
+search and scroll behavior.
+
+There are some limitations to using aggregations in {dfeeds}. Your aggregation
+must include a `date_histogram` aggregation, which in turn must contain a `max`
+aggregation on the time field. This requirement ensures that the aggregated data
+is a time series and the timestamp of each bucket is the time of the last record
+in the bucket.
+
+You must also consider the interval of the date histogram aggregation carefully.
+The bucket span of your {anomaly-job} must be divisible by the value of the
+`calendar_interval` or `fixed_interval` in your aggregation (with no remainder).
+If you specify a `frequency` for your {dfeed}, it must also be divisible by this
+interval.
+
+TIP: As a rule of thumb, if your detectors use <<ml-metric-functions,metric>> or
+<<ml-sum-functions,sum>> analytical functions, set the date histogram
+aggregation interval to a tenth of the bucket span. This suggestion creates
+finer, more granular time buckets, which are ideal for this type of analysis. If
+your detectors use <<ml-count-functions,count>> or <<ml-rare-functions,rare>>
+functions, set the interval to the same value as the bucket span.
 
 When you create or update an {anomaly-job}, you can include the names of
 aggregations, for example:
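(The example referenced here lies outside the hunks shown in this diff.) Based on the requirements described above, a minimal datafeed aggregation of the required shape might look like this sketch; the `buckets` aggregation name, the `time` field, and the `360s` interval are illustrative assumptions:

[source,js]
----------------------------------
"aggregations": {
  "buckets": {
    "date_histogram": {
      "field": "time",
      "fixed_interval": "360s",
      "time_zone": "UTC"
    },
    "aggregations": {
      "time": {
        "max": { "field": "time" }
      }
    }
  }
}
----------------------------------
// NOTCONSOLE

With an interval of `360s`, a bucket span such as `1800s` (1800 / 360 = 5, no remainder) would satisfy the divisibility requirement described above.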
@@ -143,9 +158,9 @@ pipeline aggregation to find the first order derivative of the counter
 ----------------------------------
 // NOTCONSOLE
 
-{dfeeds-cap} not only supports multi-bucket aggregations, but also single bucket aggregations.
-The following shows two `filter` aggregations, each gathering the number of unique entries for
-the `error` field.
+{dfeeds-cap} not only supports multi-bucket aggregations, but also single bucket
+aggregations. The following shows two `filter` aggregations, each gathering the
+number of unique entries for the `error` field.
 
 [source,js]
 ----------------------------------
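(The filter example itself is outside this hunk.) A sketch of the pattern the text describes, with two `filter` aggregations that each wrap a `cardinality` aggregation on the `error` field, might look like this; the term values and aggregation names are hypothetical:

[source,js]
----------------------------------
{
  "error_type_1": {
    "filter": { "term": { "error": "error1" } },
    "aggregations": {
      "unique_errors": { "cardinality": { "field": "error" } }
    }
  },
  "error_type_2": {
    "filter": { "term": { "error": "error2" } },
    "aggregations": {
      "unique_errors": { "cardinality": { "field": "error" } }
    }
  }
}
----------------------------------
// NOTCONSOLE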
@@ -225,14 +240,15 @@ When you define an aggregation in a {dfeed}, it must have the following form:
 ----------------------------------
 // NOTCONSOLE
 
-The top level aggregation must be either a {ref}/search-aggregations-bucket.html[Bucket Aggregation]
-containing as single sub-aggregation that is a `date_histogram` or the top level aggregation
-is the required `date_histogram`. There must be exactly 1 `date_histogram` aggregation.
+The top level aggregation must be either a
+{ref}/search-aggregations-bucket.html[bucket aggregation] containing a single
+sub-aggregation that is a `date_histogram` or the top level aggregation is the
+required `date_histogram`. There must be exactly 1 `date_histogram` aggregation.
 For more information, see
-{ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date Histogram Aggregation].
+{ref}/search-aggregations-bucket-datehistogram-aggregation.html[Date histogram aggregation].
 
-NOTE: The `time_zone` parameter in the date histogram aggregation must be set to `UTC`,
-which is the default value.
+NOTE: The `time_zone` parameter in the date histogram aggregation must be set to
+`UTC`, which is the default value.
 
 Each histogram bucket has a key, which is the bucket start time. This key cannot
 be used for aggregations in {dfeeds}, however, because they need to know the
@@ -269,16 +285,9 @@ By default, {es} limits the maximum number of terms returned to 10000. For high
 cardinality fields, the query might not run. It might return errors related to
 circuit breaking exceptions that indicate that the data is too large. In such
 cases, do not use aggregations in your {dfeed}. For more
-information, see {ref}/search-aggregations-bucket-terms-aggregation.html[Terms Aggregation].
-
-You can also optionally specify multiple sub-aggregations.
-The sub-aggregations are aggregated for the buckets that were created by their
-parent aggregation. For more information, see
-{ref}/search-aggregations.html[Aggregations].
+information, see
+{ref}/search-aggregations-bucket-terms-aggregation.html[Terms aggregation].
 
-TIP: If your detectors use metric or sum analytical functions, set the
-`interval` of the date histogram aggregation to a tenth of the `bucket_span`
-that was defined in the job. This suggestion creates finer, more granular time
-buckets, which are ideal for this type of analysis. If your detectors use count
-or rare functions, set `interval` to the same value as `bucket_span`. For more
-information about analytical functions, see <<ml-functions>>.
+You can also optionally specify multiple sub-aggregations. The sub-aggregations
+are aggregated for the buckets that were created by their parent aggregation.
+For more information, see {ref}/search-aggregations.html[Aggregations].

docs/reference/ml/anomaly-detection/apis/put-datafeed.asciidoc

Lines changed: 6 additions & 25 deletions
@@ -26,7 +26,12 @@ cluster privileges to use this API. See
 [[ml-put-datafeed-desc]]
 ==== {api-description-title}
 
-You can associate only one {dfeed} to each {anomaly-job}.
+{ml-docs}/ml-dfeeds.html[{dfeeds-cap}] retrieve data from {es} for analysis by
+an {anomaly-job}. You can associate only one {dfeed} to each {anomaly-job}.
+
+The {dfeed} contains a query that runs at a defined interval (`frequency`). If
+you are concerned about delayed data, you can add a delay (`query_delay`) at
+each interval. See {ml-docs}/ml-delayed-data-detection.html[Handling delayed data].
 
 [IMPORTANT]
 ====
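As a companion to the added description, a minimal create-datafeed request that sets both time-related parameters might look like this sketch; the datafeed ID, job ID, index name, and concrete values are illustrative assumptions, not part of this commit:

[source,console]
--------------------------------------------------
PUT _ml/datafeeds/datafeed-total-requests
{
  "job_id": "total-requests",
  "indices": ["server-metrics"],
  "query": { "match_all": {} },
  "frequency": "150s",
  "query_delay": "90s"
}
--------------------------------------------------

Here the query runs every 150 seconds, starting 90 seconds after the end of each interval so that late-arriving data has time to be indexed.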
@@ -64,11 +69,6 @@ include::{docdir}/ml/ml-shared.asciidoc[tag=delayed-data-check-config]
 `frequency`::
 (Optional, <<time-units, time units>>)
 include::{docdir}/ml/ml-shared.asciidoc[tag=frequency]
-+
---
-To learn more about the relationship between time related settings, see
-<<ml-put-datafeed-time-related-settings>>.
---
 
 `indices`::
 (Required, array)

@@ -89,11 +89,6 @@ include::{docdir}/ml/ml-shared.asciidoc[tag=query]
 `query_delay`::
 (Optional, <<time-units, time units>>)
 include::{docdir}/ml/ml-shared.asciidoc[tag=query-delay]
-+
---
-To learn more about the relationship between time related settings, see
-<<ml-put-datafeed-time-related-settings>>.
---
 
 `script_fields`::
 (Optional, object)

@@ -103,20 +98,6 @@ include::{docdir}/ml/ml-shared.asciidoc[tag=script-fields]
 (Optional, unsigned integer)
 include::{docdir}/ml/ml-shared.asciidoc[tag=scroll-size]
 
-
-[[ml-put-datafeed-time-related-settings]]
-===== Interaction between time-related settings
-
-Time-related settings have the following relationships:
-
-* Queries run at `query_delay` after the end of
-each `frequency`.
-
-* When `frequency` is shorter than `bucket_span` of the associated job, interim
-results for the last (partial) bucket are written, and then overwritten by the
-full bucket results eventually.
-
-
 [[ml-put-datafeed-example]]
 ==== {api-examples-title}

docs/reference/ml/anomaly-detection/apis/put-job.asciidoc

Lines changed: 0 additions & 5 deletions
@@ -49,11 +49,6 @@ include::{docdir}/ml/ml-shared.asciidoc[tag=analysis-config]
 `analysis_config`.`bucket_span`:::
 (<<time-units,time units>>)
 include::{docdir}/ml/ml-shared.asciidoc[tag=bucket-span]
-+
---
-To learn more about the relationship between time related settings, see
-<<ml-put-datafeed-time-related-settings>>.
---
 
 `analysis_config`.`categorization_field_name`:::
 (string)

docs/reference/ml/ml-shared.asciidoc

Lines changed: 11 additions & 5 deletions
@@ -1,6 +1,6 @@
 tag::aggregations[]
 If set, the {dfeed} performs aggregation searches. Support for aggregations is
-limited and should only be used with low cardinality data. For more information,
+limited and should be used only with low cardinality data. For more information,
 see
 {ml-docs}/ml-configuring-aggregation.html[Aggregating data for faster performance].
 end::aggregations[]

@@ -149,8 +149,10 @@ end::background-persist-interval[]
 
 tag::bucket-span[]
 The size of the interval that the analysis is aggregated into, typically between
-`5m` and `1h`. The default value is `5m`. For more information about time units,
-see <<time-units>>.
+`5m` and `1h`. The default value is `5m`. If the {anomaly-job} uses a {dfeed}
+with {ml-docs}/ml-configuring-aggregation.html[aggregations], this value must be
+divisible by the interval of the date histogram aggregation. For more
+information, see {ml-docs}/ml-buckets.html[Buckets].
 end::bucket-span[]
 
 tag::bucket-span-results[]
@@ -605,7 +607,10 @@ tag::frequency[]
 The interval at which scheduled queries are made while the {dfeed} runs in real
 time. The default value is either the bucket span for short bucket spans, or,
 for longer bucket spans, a sensible fraction of the bucket span. For example:
-`150s`.
+`150s`. When `frequency` is shorter than the bucket span, interim results for
+the last (partial) bucket are written then eventually overwritten by the full
+bucket results. If the {dfeed} uses aggregations, this value must be divisible
+by the interval of the date histogram aggregation.
 end::frequency[]
 
 tag::from[]
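To make the divisibility rules in the `bucket-span` and `frequency` descriptions concrete: with a date histogram `fixed_interval` of `90s`, a `bucket_span` of `900s` (900 / 90 = 10) and a `frequency` of `450s` (450 / 90 = 5) both divide evenly, whereas a `frequency` of `120s` would not. A datafeed fragment combining these illustrative values might look like this sketch (the field and aggregation names are assumptions):

[source,js]
----------------------------------
"frequency": "450s",
"aggregations": {
  "buckets": {
    "date_histogram": { "field": "time", "fixed_interval": "90s" },
    "aggregations": { "time": { "max": { "field": "time" } } }
  }
}
----------------------------------
// NOTCONSOLE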
@@ -949,7 +954,8 @@ The number of seconds behind real time that data is queried. For example, if
 data from 10:04 a.m. might not be searchable in {es} until 10:06 a.m., set this
 property to 120 seconds. The default value is randomly selected between `60s`
 and `120s`. This randomness improves the query performance when there are
-multiple jobs running on the same node.
+multiple jobs running on the same node. For more information, see
+{ml-docs}/ml-delayed-data-detection.html[Handling delayed data].
 end::query-delay[]
 
 tag::renormalization-window-days[]

server/src/main/java/org/elasticsearch/index/engine/CombinedDocValues.java

Lines changed: 5 additions & 5 deletions
@@ -47,6 +47,7 @@ final class CombinedDocValues {
     long docVersion(int segmentDocId) throws IOException {
         assert versionDV.docID() < segmentDocId;
         if (versionDV.advanceExact(segmentDocId) == false) {
+            assert false : "DocValues for field [" + VersionFieldMapper.NAME + "] is not found";
             throw new IllegalStateException("DocValues for field [" + VersionFieldMapper.NAME + "] is not found");
         }
         return versionDV.longValue();

@@ -55,19 +56,18 @@ long docVersion(int segmentDocId) throws IOException {
     long docSeqNo(int segmentDocId) throws IOException {
         assert seqNoDV.docID() < segmentDocId;
         if (seqNoDV.advanceExact(segmentDocId) == false) {
+            assert false : "DocValues for field [" + SeqNoFieldMapper.NAME + "] is not found";
             throw new IllegalStateException("DocValues for field [" + SeqNoFieldMapper.NAME + "] is not found");
         }
         return seqNoDV.longValue();
     }
 
     long docPrimaryTerm(int segmentDocId) throws IOException {
-        if (primaryTermDV == null) {
-            return -1L;
-        }
+        // We exclude non-root nested documents when querying changes, every returned document must have primary term.
         assert primaryTermDV.docID() < segmentDocId;
-        // Use -1 for docs which don't have primary term. The caller considers those docs as nested docs.
         if (primaryTermDV.advanceExact(segmentDocId) == false) {
-            return -1;
+            assert false : "DocValues for field [" + SeqNoFieldMapper.PRIMARY_TERM_NAME + "] is not found";
+            throw new IllegalStateException("DocValues for field [" + SeqNoFieldMapper.PRIMARY_TERM_NAME + "] is not found");
         }
         return primaryTermDV.longValue();
     }

server/src/main/java/org/elasticsearch/index/engine/InternalEngine.java

Lines changed: 9 additions & 5 deletions
@@ -38,6 +38,8 @@
 import org.apache.lucene.index.ShuffleForcedMergePolicy;
 import org.apache.lucene.index.SoftDeletesRetentionMergePolicy;
 import org.apache.lucene.index.Term;
+import org.apache.lucene.search.BooleanClause;
+import org.apache.lucene.search.BooleanQuery;
 import org.apache.lucene.search.DocIdSetIterator;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.Query;

@@ -63,6 +65,7 @@
 import org.elasticsearch.common.lucene.LoggerInfoStream;
 import org.elasticsearch.common.lucene.Lucene;
 import org.elasticsearch.common.lucene.index.ElasticsearchDirectoryReader;
+import org.elasticsearch.common.lucene.search.Queries;
 import org.elasticsearch.common.lucene.uid.Versions;
 import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver;
 import org.elasticsearch.common.lucene.uid.VersionsAndSeqNoResolver.DocIdAndSeqNo;

@@ -2731,8 +2734,12 @@ private boolean assertMaxSeqNoOfUpdatesIsAdvanced(Term id, long seqNo, boolean a
     private void restoreVersionMapAndCheckpointTracker(DirectoryReader directoryReader) throws IOException {
         final IndexSearcher searcher = new IndexSearcher(directoryReader);
         searcher.setQueryCache(null);
-        final Query query = LongPoint.newRangeQuery(SeqNoFieldMapper.NAME, getPersistedLocalCheckpoint() + 1, Long.MAX_VALUE);
-        final Weight weight = searcher.createWeight(query, ScoreMode.COMPLETE_NO_SCORES, 1.0f);
+        final Query query = new BooleanQuery.Builder()
+            .add(LongPoint.newRangeQuery(
+                SeqNoFieldMapper.NAME, getPersistedLocalCheckpoint() + 1, Long.MAX_VALUE), BooleanClause.Occur.MUST)
+            .add(Queries.newNonNestedFilter(), BooleanClause.Occur.MUST) // exclude non-root nested documents
+            .build();
+        final Weight weight = searcher.createWeight(searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1.0f);
         for (LeafReaderContext leaf : directoryReader.leaves()) {
             final Scorer scorer = weight.scorer(leaf);
             if (scorer == null) {

@@ -2744,9 +2751,6 @@ private void restoreVersionMapAndCheckpointTracker(DirectoryReader directoryRead
             int docId;
             while ((docId = iterator.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
                 final long primaryTerm = dv.docPrimaryTerm(docId);
-                if (primaryTerm == -1L) {
-                    continue; // skip children docs which do not have primary term
-                }
                 final long seqNo = dv.docSeqNo(docId);
                 localCheckpointTracker.markSeqNoAsProcessed(seqNo);
                 localCheckpointTracker.markSeqNoAsPersisted(seqNo);

server/src/main/java/org/elasticsearch/index/engine/LuceneChangesSnapshot.java

Lines changed: 8 additions & 6 deletions
@@ -24,6 +24,8 @@
 import org.apache.lucene.index.LeafReaderContext;
 import org.apache.lucene.index.NumericDocValues;
 import org.apache.lucene.index.Term;
+import org.apache.lucene.search.BooleanClause;
+import org.apache.lucene.search.BooleanQuery;
 import org.apache.lucene.search.IndexSearcher;
 import org.apache.lucene.search.Query;
 import org.apache.lucene.search.ScoreDoc;

@@ -33,6 +35,7 @@
 import org.apache.lucene.util.ArrayUtil;
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.lucene.Lucene;
+import org.elasticsearch.common.lucene.search.Queries;
 import org.elasticsearch.core.internal.io.IOUtils;
 import org.elasticsearch.index.fieldvisitor.FieldsVisitor;
 import org.elasticsearch.index.mapper.IdFieldMapper;

@@ -210,7 +213,10 @@ private void fillParallelArray(ScoreDoc[] scoreDocs, ParallelArray parallelArray
     }
 
     private TopDocs searchOperations(ScoreDoc after) throws IOException {
-        final Query rangeQuery = LongPoint.newRangeQuery(SeqNoFieldMapper.NAME, Math.max(fromSeqNo, lastSeenSeqNo), toSeqNo);
+        final Query rangeQuery = new BooleanQuery.Builder()
+            .add(LongPoint.newRangeQuery(SeqNoFieldMapper.NAME, Math.max(fromSeqNo, lastSeenSeqNo), toSeqNo), BooleanClause.Occur.MUST)
+            .add(Queries.newNonNestedFilter(), BooleanClause.Occur.MUST) // exclude non-root nested documents
+            .build();
         final Sort sortedBySeqNo = new Sort(new SortField(SeqNoFieldMapper.NAME, SortField.Type.LONG));
         return indexSearcher.searchAfter(after, rangeQuery, searchBatchSize, sortedBySeqNo);
     }

@@ -219,11 +225,7 @@ private Translog.Operation readDocAsOp(int docIndex) throws IOException {
         final LeafReaderContext leaf = parallelArray.leafReaderContexts[docIndex];
         final int segmentDocID = scoreDocs[docIndex].doc - leaf.docBase;
         final long primaryTerm = parallelArray.primaryTerm[docIndex];
-        // We don't have to read the nested child documents - those docs don't have primary terms.
-        if (primaryTerm == -1) {
-            skippedOperations++;
-            return null;
-        }
+        assert primaryTerm > 0 : "nested child document must be excluded";
         final long seqNo = parallelArray.seqNo[docIndex];
         // Only pick the first seen seq#
         if (seqNo == lastSeenSeqNo) {

server/src/main/java/org/elasticsearch/index/mapper/DocumentParser.java

Lines changed: 8 additions & 8 deletions
@@ -24,6 +24,7 @@
 import org.elasticsearch.ElasticsearchParseException;
 import org.elasticsearch.common.Strings;
 import org.elasticsearch.common.collect.Tuple;
+import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.time.DateFormatter;
 import org.elasticsearch.common.xcontent.LoggingDeprecationHandler;
 import org.elasticsearch.common.xcontent.XContentHelper;

@@ -417,21 +418,23 @@ private static void innerParseObject(ParseContext context, ObjectMapper mapper,
     private static void nested(ParseContext context, ObjectMapper.Nested nested) {
         ParseContext.Document nestedDoc = context.doc();
         ParseContext.Document parentDoc = nestedDoc.getParent();
+        Settings settings = context.indexSettings().getSettings();
         if (nested.isIncludeInParent()) {
-            addFields(nestedDoc, parentDoc);
+            addFields(settings, nestedDoc, parentDoc);
         }
         if (nested.isIncludeInRoot()) {
             ParseContext.Document rootDoc = context.rootDoc();
             // don't add it twice, if its included in parent, and we are handling the master doc...
             if (!nested.isIncludeInParent() || parentDoc != rootDoc) {
-                addFields(nestedDoc, rootDoc);
+                addFields(settings, nestedDoc, rootDoc);
             }
         }
     }
 
-    private static void addFields(ParseContext.Document nestedDoc, ParseContext.Document rootDoc) {
+    private static void addFields(Settings settings, ParseContext.Document nestedDoc, ParseContext.Document rootDoc) {
+        String nestedPathFieldName = NestedPathFieldMapper.name(settings);
         for (IndexableField field : nestedDoc.getFields()) {
-            if (!field.name().equals(TypeFieldMapper.NAME)) {
+            if (field.name().equals(nestedPathFieldName) == false) {
                 rootDoc.add(field);
             }
         }

@@ -457,10 +460,7 @@ private static ParseContext nestedContext(ParseContext context, ObjectMapper map
             throw new IllegalStateException("The root document of a nested document should have an _id field");
         }
 
-        // the type of the nested doc starts with __, so we can identify that its a nested one in filters
-        // note, we don't prefix it with the type of the doc since it allows us to execute a nested query
-        // across types (for example, with similar nested objects)
-        nestedDoc.add(new Field(TypeFieldMapper.NAME, mapper.nestedTypePathAsString(), TypeFieldMapper.Defaults.FIELD_TYPE));
+        nestedDoc.add(NestedPathFieldMapper.field(context.indexSettings().getSettings(), mapper.nestedTypePath()));
         return context;
     }