Commit fa48ccd

Merge branch 'master' into feature/runtime_fields
2 parents: 949d1de + 24786ac

113 files changed: +4563 additions, -2348 deletions


build.gradle

Lines changed: 2 additions & 2 deletions
@@ -174,8 +174,8 @@ tasks.register("verifyVersions") {
  * after the backport of the backcompat code is complete.
  */

-boolean bwc_tests_enabled = true
-final String bwc_tests_disabled_issue = "" /* place a PR link here when committing bwc changes */
+boolean bwc_tests_enabled = false
+final String bwc_tests_disabled_issue = "https://github.com/elastic/elasticsearch/pull/59293" /* place a PR link here when committing bwc changes */
 if (bwc_tests_enabled == false) {
   if (bwc_tests_disabled_issue.isEmpty()) {
     throw new GradleException("bwc_tests_disabled_issue must be set when bwc_tests_enabled == false")

docs/plugins/analysis-icu.asciidoc

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ include::install_remove.asciidoc[]
 ==== ICU Analyzer

 The `icu_analyzer` analyzer performs basic normalization, tokenization and character folding, using the
-`icu_normalizer` char filter, `icu_tokenizer` and `icu_normalizer` token filter
+`icu_normalizer` char filter, `icu_tokenizer` and `icu_folding` token filter

 The following parameters are accepted:

docs/reference/how-to/indexing-speed.asciidoc

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ gets indexed and when it becomes visible, increasing the

 If you have a large amount of data that you want to load all at once into
 Elasticsearch, it may be beneficial to set `index.number_of_replicas` to `0` in
-order to speep up indexing. Having no replicas means that losing a single node
+order to speed up indexing. Having no replicas means that losing a single node
 may incur data loss, so it is important that the data lives elsewhere so that
 this initial load can be retried in case of an issue. Once the initial load is
 finished, you can set `index.number_of_replicas` back to its original value.
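The doc change above describes a two-step settings flip around a bulk load. As a minimal illustration (the index name is whatever index you are loading), the body of a `PUT /<index>/_settings` request before the load would be:

```json
{
  "index": {
    "number_of_replicas": 0
  }
}
```

After the initial load finishes, send the same request again with `number_of_replicas` set back to its original value so the data is replicated.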

docs/reference/indices/get-data-stream.asciidoc

Lines changed: 4 additions & 3 deletions
@@ -85,9 +85,10 @@ GET /_data_stream/my-data-stream
 ==== {api-path-parms-title}

 `<data-stream>`::
-(Required, string)
-Name of the data stream to retrieve.
-Wildcard (`*`) expressions are supported.
+(Optional, string)
+Comma-separated list of data stream names used to limit the request. Wildcard
+(`*`) expressions are supported. If omitted, all data streams will be
+returned.

 [role="child_attributes"]
 [[get-data-stream-api-response-body]]

docs/reference/snapshot-restore/apis/create-snapshot-api.asciidoc

Lines changed: 2 additions & 5 deletions
@@ -45,11 +45,11 @@ cluster, as well as the cluster state. You can change this behavior by
 specifying a list of data streams and indices to back up in the body of the
 snapshot request.

-NOTE: You must register a snapshot before performing snapshot and restore operations. Use the <<put-snapshot-repo-api,put snapshot repository API>> to register new repositories and update existing ones.
+NOTE: You must register a snapshot repository before performing snapshot and restore operations. Use the <<put-snapshot-repo-api,put snapshot repository API>> to register new repositories and update existing ones.

 The snapshot process is incremental. When creating a snapshot, {es} analyzes the list of files that are already stored in the repository and copies only files that were created or changed since the last snapshot. This process allows multiple snapshots to be preserved in the repository in a compact form.

-The snapshot process is executed in non-blocking fashion, so all indexing and searching operations can run concurrently against the data stream or index that {es} is snapshotting. Only one snapshot process can run in the cluster at any time.
+The snapshot process is executed in non-blocking fashion, so all indexing and searching operations can run concurrently against the data stream or index that {es} is snapshotting.

 A snapshot represents a point-in-time view of the moment when the snapshot was created. No records that were added to a data stream or index after the snapshot process started will be present in the snapshot.

@@ -124,9 +124,6 @@ If `true`, allows taking a partial snapshot of indices with unavailable shards.
 If `true`, the request returns a response when the snapshot is complete.
 If `false`, the request returns a response when the snapshot initializes.
 Defaults to `false`.
-+
-NOTE: During snapshot initialization, information about all
-previous snapshots is loaded into memory. In large repositories, this load time can cause requests to take several seconds (or even minutes) to return a response, even if the `wait_for_completion` parameter is `false`.

 [[create-snapshot-api-example]]
 ==== {api-examples-title}

docs/reference/transform/painless-examples.asciidoc

Lines changed: 29 additions & 27 deletions
@@ -106,7 +106,7 @@ You can retrieve the last value in a similar way:

 [discrete]
 [[painless-time-features]]
-==== Getting time features as scripted fields
+==== Getting time features by using aggregations

 This snippet shows how to extract time based features by using Painless in a
 {transform}. The snippet uses an index where `@timestamp` is defined as a `date`
@@ -115,37 +115,39 @@ type field.
 [source,js]
 --------------------------------------------------
 "aggregations": {
-  "script_fields": {
-    "hour_of_day": { <1>
-      "script": {
-        "lang": "painless",
-        "source": """
-          ZonedDateTime date = doc['@timestamp'].value; <2>
-          return date.getHour(); <3>
-        """
-      }
-    },
-    "month_of_year": { <4>
-      "script": {
-        "lang": "painless",
-        "source": """
-          ZonedDateTime date = doc['@timestamp'].value; <5>
-          return date.getMonthValue(); <6>
-        """
-      }
+  "avg_hour_of_day": { <1>
+    "avg":{
+      "script": { <2>
+        "source": """
+          ZonedDateTime date = doc['@timestamp'].value; <3>
+          return date.getHour(); <4>
+        """
+      }
     }
-  },
-  ...
+  },
+  "avg_month_of_year": { <5>
+    "avg":{
+      "script": { <6>
+        "source": """
+          ZonedDateTime date = doc['@timestamp'].value; <7>
+          return date.getMonthValue(); <8>
+        """
+      }
+    }
+  },
+  ...
 }
 --------------------------------------------------
 // NOTCONSOLE

-<1> Contains the Painless script that returns the hour of the day.
-<2> Sets `date` based on the timestamp of the document.
-<3> Returns the hour value from `date`.
-<4> Contains the Painless script that returns the month of the year.
-<5> Sets `date` based on the timestamp of the document.
-<6> Returns the month value from `date`.
+<1> Name of the aggregation.
+<2> Contains the Painless script that returns the hour of the day.
+<3> Sets `date` based on the timestamp of the document.
+<4> Returns the hour value from `date`.
+<5> Name of the aggregation.
+<6> Contains the Painless script that returns the month of the year.
+<7> Sets `date` based on the timestamp of the document.
+<8> Returns the month value from `date`.


 [discrete]
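Both Painless scripts in the hunk above derive a calendar field from a `ZonedDateTime`. Stripped of the aggregation wrapper, the same extraction can be sketched in plain `java.time` (an illustrative stand-in, not Elasticsearch code; the class and method names are made up for the example):

```java
import java.time.ZonedDateTime;
import java.time.ZoneOffset;

public class TimeFeatures {
    // Mirrors the first script: return the hour of day from a timestamp.
    static int hourOfDay(ZonedDateTime date) {
        return date.getHour();
    }

    // Mirrors the second script: return the month of year from a timestamp.
    static int monthOfYear(ZonedDateTime date) {
        return date.getMonthValue();
    }

    public static void main(String[] args) {
        ZonedDateTime ts = ZonedDateTime.of(2020, 7, 9, 14, 30, 0, 0, ZoneOffset.UTC);
        System.out.println(hourOfDay(ts));   // 14
        System.out.println(monthOfYear(ts)); // 7
    }
}
```

In the transform, an `avg` aggregation then averages these per-document values across each bucket, which is why the change renames the keys to `avg_hour_of_day` and `avg_month_of_year`.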

libs/x-content/src/main/java/org/elasticsearch/common/xcontent/AbstractObjectParser.java

Lines changed: 6 additions & 0 deletions
@@ -210,6 +210,12 @@ public void declareLong(BiConsumer<Value, Long> consumer, ParseField field) {
     declareField(consumer, p -> p.longValue(), field, ValueType.LONG);
 }

+public void declareLongOrNull(BiConsumer<Value, Long> consumer, long nullValue, ParseField field) {
+    // Using a method reference here angers some compilers
+    declareField(consumer, p -> p.currentToken() == XContentParser.Token.VALUE_NULL ? nullValue : p.longValue(),
+        field, ValueType.LONG_OR_NULL);
+}
+
 public void declareInt(BiConsumer<Value, Integer> consumer, ParseField field) {
     // Using a method reference here angers some compilers
     declareField(consumer, p -> p.intValue(), field, ValueType.INT);
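The new `declareLongOrNull` substitutes a caller-supplied default when the parser sees an explicit JSON `null`, instead of failing in `longValue()`. Stripped of the XContent machinery, the null-to-default pattern reduces to the following sketch (a hypothetical stand-in, not the real parser API; the string `"null"` plays the role of `XContentParser.Token.VALUE_NULL`):

```java
public class NullOrLongDemo {
    // If the token is an explicit null, return the supplied default;
    // otherwise parse it as a long, as declareLongOrNull does.
    static long parseLongOrNull(String token, long nullValue) {
        return "null".equals(token) ? nullValue : Long.parseLong(token);
    }

    public static void main(String[] args) {
        System.out.println(parseLongOrNull("42", -1L));   // 42
        System.out.println(parseLongOrNull("null", -1L)); // -1
    }
}
```

This lets callers map `"field": null` to a sentinel (here `-1`) rather than rejecting the document.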

modules/rank-eval/src/test/resources/rest-api-spec/test/rank_eval/50_data_streams.yml

Lines changed: 0 additions & 100 deletions
This file was deleted.

qa/full-cluster-restart/src/test/java/org/elasticsearch/upgrades/FullClusterRestartIT.java

Lines changed: 2 additions & 60 deletions
@@ -19,23 +19,17 @@

 package org.elasticsearch.upgrades;

-import org.apache.http.entity.ContentType;
-import org.apache.http.entity.StringEntity;
 import org.apache.http.util.EntityUtils;
 import org.elasticsearch.Version;
 import org.elasticsearch.client.Request;
 import org.elasticsearch.client.Response;
 import org.elasticsearch.client.RestClient;
-import org.elasticsearch.cluster.metadata.DataStream;
 import org.elasticsearch.cluster.metadata.IndexMetadata;
 import org.elasticsearch.cluster.metadata.MetadataIndexStateService;
-import org.elasticsearch.cluster.metadata.Template;
 import org.elasticsearch.common.Booleans;
 import org.elasticsearch.common.CheckedFunction;
 import org.elasticsearch.common.Strings;
-import org.elasticsearch.common.compress.CompressedXContent;
 import org.elasticsearch.common.settings.Settings;
-import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.common.xcontent.json.JsonXContent;
 import org.elasticsearch.common.xcontent.support.XContentMapValues;
@@ -49,7 +43,6 @@
 import java.util.ArrayList;
 import java.util.Base64;
 import java.util.Collection;
-import java.util.Date;
 import java.util.HashMap;
 import java.util.HashSet;
 import java.util.List;
@@ -616,7 +609,7 @@ void assertTotalHits(int expectedTotalHits, Map<?, ?> response) {
     assertEquals(response.toString(), expectedTotalHits, actualTotalHits);
 }

-int extractTotalHits(Map<?, ?> response) {
+static int extractTotalHits(Map<?, ?> response) {
     return (Integer) XContentMapValues.extractValue("hits.total.value", response);
 }

@@ -1392,63 +1385,12 @@ public void testResize() throws Exception {
     }
 }

-private void assertNumHits(String index, int numHits, int totalShards) throws IOException {
+public static void assertNumHits(String index, int numHits, int totalShards) throws IOException {
     Map<String, Object> resp = entityAsMap(client().performRequest(new Request("GET", "/" + index + "/_search")));
     assertNoFailures(resp);
     assertThat(XContentMapValues.extractValue("_shards.total", resp), equalTo(totalShards));
     assertThat(XContentMapValues.extractValue("_shards.successful", resp), equalTo(totalShards));
     assertThat(extractTotalHits(resp), equalTo(numHits));
 }

-@SuppressWarnings("unchecked")
-public void testDataStreams() throws Exception {
-    assumeTrue("no data streams in versions before " + Version.V_7_9_0, getOldClusterVersion().onOrAfter(Version.V_7_9_0));
-    if (isRunningAgainstOldCluster()) {
-        String mapping = "{\n" +
-            " \"properties\": {\n" +
-            "   \"@timestamp\": {\n" +
-            "     \"type\": \"date\"\n" +
-            "   }\n" +
-            " }\n" +
-            " }";
-        Template template = new Template(null, new CompressedXContent(mapping), null);
-        createComposableTemplate(client(), "dst", "ds", template);
-
-        Request indexRequest = new Request("POST", "/ds/_doc/1?op_type=create&refresh");
-        XContentBuilder builder = JsonXContent.contentBuilder().startObject()
-            .field("f", "v")
-            .field("@timestamp", new Date())
-            .endObject();
-        indexRequest.setJsonEntity(Strings.toString(builder));
-        assertOK(client().performRequest(indexRequest));
-    }
-
-    Request getDataStream = new Request("GET", "/_data_stream/ds");
-    Response response = client().performRequest(getDataStream);
-    assertOK(response);
-    List<Object> dataStreams = (List<Object>) entityAsMap(response).get("data_streams");
-    assertEquals(1, dataStreams.size());
-    Map<String, Object> ds = (Map<String, Object>) dataStreams.get(0);
-    List<Map<String, String>> indices = (List<Map<String, String>>) ds.get("indices");
-    assertEquals("ds", ds.get("name"));
-    assertEquals(1, indices.size());
-    assertEquals(DataStream.getDefaultBackingIndexName("ds", 1), indices.get(0).get("index_name"));
-    assertNumHits("ds", 1, 1);
-}
-
-private static void createComposableTemplate(RestClient client, String templateName, String indexPattern, Template template)
-    throws IOException {
-    XContentBuilder builder = jsonBuilder();
-    template.toXContent(builder, ToXContent.EMPTY_PARAMS);
-    StringEntity templateJSON = new StringEntity(
-        String.format(Locale.ROOT, "{\n" +
-            " \"index_patterns\": \"%s\",\n" +
-            " \"data_stream\": { \"timestamp_field\": \"@timestamp\" },\n" +
-            " \"template\": %s\n" +
-            "}", indexPattern, Strings.toString(builder)),
-        ContentType.APPLICATION_JSON);
-    Request createIndexTemplateRequest = new Request("PUT", "_index_template/" + templateName);
-    createIndexTemplateRequest.setEntity(templateJSON);
-    client.performRequest(createIndexTemplateRequest);
-}
 }
