
Commit 0a63e21 (2 parents: 89cd1cc + f938c42)

Merge branch 'master' into bump-compiler-to-jdk-10

* master:
  Fix BWC issue for PreSyncedFlushResponse
  Remove BytesArray and BytesReference usage from XContentFactory (elastic#29151)
  Add pluggable XContentBuilder writers and human readable writers (elastic#29120)
  Add unreleased version 6.2.4 (elastic#29171)
  Add unreleased version 6.1.5 (elastic#29168)
  Add a note about using the `retry_failed` flag before accepting data loss (elastic#29160)
  Fix typo in percolate-query.asciidoc (elastic#29155)
  Require HTTP::Tiny 0.070 for release notes script

File tree: 55 files changed, +314 −198 lines


dev-tools/es_release_notes.pl (1 addition, 1 deletion)

@@ -18,7 +18,7 @@
 use strict;
 use warnings;

-use HTTP::Tiny;
+use HTTP::Tiny 0.070;
 use IO::Socket::SSL 1.52;
 use utf8;

docs/reference/cluster/reroute.asciidoc (20 additions, 14 deletions)

@@ -83,8 +83,26 @@ Reasons why a primary shard cannot be automatically allocated include the following:
 the cluster. To prevent data loss, the system does not automatically promote a stale
 shard copy to primary.

-As a manual override, two commands to forcefully allocate primary shards
-are available:
+[float]
+=== Retry failed shards
+
+The cluster will attempt to allocate a shard a maximum of
+`index.allocation.max_retries` times in a row (defaults to `5`), before giving
+up and leaving the shard unallocated. This scenario can be caused by
+structural problems such as having an analyzer which refers to a stopwords
+file which doesn't exist on all nodes.
+
+Once the problem has been corrected, allocation can be manually retried by
+calling the <<cluster-reroute,`reroute`>> API with `?retry_failed`, which
+will attempt a single retry round for these shards.
+
+[float]
+=== Forced allocation on unrecoverable errors
+
+The following two commands are dangerous and may result in data loss. They are
+meant to be used in cases where the original data can not be recovered and the cluster
+administrator accepts the loss. If you have suffered a temporary issue that has been
+fixed, please see the `retry_failed` flag described above.

 `allocate_stale_primary`::
     Allocate a primary shard to a node that holds a stale copy. Accepts the

@@ -108,15 +126,3 @@ are available:
     this command requires the special field `accept_data_loss` to be
     explicitly set to `true` for it to work.

-[float]
-=== Retry failed shards
-
-The cluster will attempt to allocate a shard a maximum of
-`index.allocation.max_retries` times in a row (defaults to `5`), before giving
-up and leaving the shard unallocated. This scenario can be caused by
-structural problems such as having an analyzer which refers to a stopwords
-file which doesn't exist on all nodes.
-
-Once the problem has been corrected, allocation can be manually retried by
-calling the <<cluster-reroute,`reroute`>> API with `?retry_failed`, which
-will attempt a single retry round for these shards.
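The retry semantics described in the relocated docs section (try up to `index.allocation.max_retries` times, then give up until `?retry_failed` resets the counter) can be sketched roughly as follows. This is an illustrative model only; `ShardAllocator`, `tryAllocate`, and `retryFailed` are hypothetical names, not Elasticsearch's actual allocator API:

```java
// Hypothetical sketch of the retry-then-give-up allocation behaviour.
public class ShardAllocator {
    private final int maxRetries; // models index.allocation.max_retries (default 5)
    private int failedAttempts = 0;
    private boolean allocated = false;

    public ShardAllocator(int maxRetries) {
        this.maxRetries = maxRetries;
    }

    /** One allocation attempt; returns whether the shard is now allocated. */
    public boolean tryAllocate(boolean attemptSucceeds) {
        if (!allocated && failedAttempts < maxRetries) {
            if (attemptSucceeds) {
                allocated = true;
            } else {
                failedAttempts++; // after maxRetries failures the shard stays unallocated
            }
        }
        return allocated;
    }

    /** Models POST /_cluster/reroute?retry_failed: reset the counter for one more round. */
    public void retryFailed() {
        failedAttempts = 0;
    }

    public boolean givenUp() {
        return !allocated && failedAttempts >= maxRetries;
    }
}
```

The point of the docs change is ordering: operators should reach for this counter reset first, and only fall back to the data-loss commands when the original data truly cannot be recovered.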

docs/reference/query-dsl/percolate-query.asciidoc (1 addition, 1 deletion)

@@ -37,7 +37,7 @@ the `percolator` query before it gets indexed into a temporary index.
 The `query` field is used for indexing the query documents. It will hold a
 json object that represents an actual Elasticsearch query. The `query` field
 has been configured to use the <<percolator,percolator field type>>. This field
-type understands the query dsl and stored the query in such a way that it can be
+type understands the query dsl and stores the query in such a way that it can be
 used later on to match documents defined on the `percolate` query.

 Register a query in the percolator:

modules/percolator/src/main/java/org/elasticsearch/percolator/PercolateQueryBuilder.java (3 additions, 3 deletions)

@@ -133,7 +133,7 @@ public class PercolateQueryBuilder extends AbstractQueryBuilder<PercolateQueryBuilder> {
      */
     @Deprecated
     public PercolateQueryBuilder(String field, String documentType, BytesReference document) {
-        this(field, documentType, Collections.singletonList(document), XContentFactory.xContentType(document));
+        this(field, documentType, Collections.singletonList(document), XContentHelper.xContentType(document));
     }

     /**

@@ -276,7 +276,7 @@ public PercolateQueryBuilder(String field, String documentType, String indexedDo
             if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
                 documentXContentType = in.readEnum(XContentType.class);
             } else {
-                documentXContentType = XContentFactory.xContentType(documents.iterator().next());
+                documentXContentType = XContentHelper.xContentType(documents.iterator().next());
             }
         } else {
             documentXContentType = null;

@@ -525,7 +525,7 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) {
                 return this; // not executed yet
             } else {
                 return new PercolateQueryBuilder(field, documentType, Collections.singletonList(source),
-                    XContentFactory.xContentType(source));
+                    XContentHelper.xContentType(source));
             }
         }
         GetRequest getRequest = new GetRequest(indexedDocumentIndex, indexedDocumentType, indexedDocumentId);

server/src/main/java/org/elasticsearch/Version.java (12 additions, 0 deletions)

@@ -147,6 +147,10 @@ public class Version implements Comparable<Version> {
     public static final Version V_6_1_2 = new Version(V_6_1_2_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
     public static final int V_6_1_3_ID = 6010399;
     public static final Version V_6_1_3 = new Version(V_6_1_3_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
+    public static final int V_6_1_4_ID = 6010499;
+    public static final Version V_6_1_4 = new Version(V_6_1_4_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
+    public static final int V_6_1_5_ID = 6010599;
+    public static final Version V_6_1_5 = new Version(V_6_1_5_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
     public static final int V_6_2_0_ID = 6020099;
     public static final Version V_6_2_0 = new Version(V_6_2_0_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
     public static final int V_6_2_1_ID = 6020199;

@@ -155,6 +159,8 @@ public class Version implements Comparable<Version> {
     public static final Version V_6_2_2 = new Version(V_6_2_2_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
     public static final int V_6_2_3_ID = 6020399;
     public static final Version V_6_2_3 = new Version(V_6_2_3_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
+    public static final int V_6_2_4_ID = 6020499;
+    public static final Version V_6_2_4 = new Version(V_6_2_4_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
     public static final int V_6_3_0_ID = 6030099;
     public static final Version V_6_3_0 = new Version(V_6_3_0_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
     public static final int V_7_0_0_alpha1_ID = 7000001;

@@ -177,6 +183,8 @@ public static Version fromId(int id) {
             return V_7_0_0_alpha1;
         case V_6_3_0_ID:
             return V_6_3_0;
+        case V_6_2_4_ID:
+            return V_6_2_4;
         case V_6_2_3_ID:
             return V_6_2_3;
         case V_6_2_2_ID:

@@ -185,6 +193,10 @@ public static Version fromId(int id) {
             return V_6_2_1;
         case V_6_2_0_ID:
             return V_6_2_0;
+        case V_6_1_5_ID:
+            return V_6_1_5;
+        case V_6_1_4_ID:
+            return V_6_1_4;
         case V_6_1_3_ID:
             return V_6_1_3;
         case V_6_1_2_ID:
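The new constants follow the version-ID scheme visible in the surrounding code: two decimal digits each for minor, revision, and build, so 6.1.4 (release build marker 99) becomes 6010499, and 7.0.0-alpha1 (build 01) becomes 7000001. A minimal sketch of that encoding (the `versionId` helper is illustrative, not an Elasticsearch method):

```java
// Sketch of the version-ID encoding used by the constants above:
// major * 1_000_000 + minor * 10_000 + revision * 100 + build.
public class VersionIds {
    static int versionId(int major, int minor, int revision, int build) {
        return major * 1_000_000 + minor * 10_000 + revision * 100 + build;
    }

    public static void main(String[] args) {
        System.out.println(versionId(6, 1, 4, 99)); // V_6_1_4_ID = 6010499
        System.out.println(versionId(7, 0, 0, 1));  // V_7_0_0_alpha1_ID = 7000001
    }
}
```

This is why the two new 6.1.x IDs slot numerically between `V_6_1_3_ID` (6010399) and `V_6_2_0_ID` (6020099), keeping ID order identical to version order.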

server/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java (1 addition, 1 deletion)

@@ -245,7 +245,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
         builder.field(DELAYED_UNASSIGNED_SHARDS, getDelayedUnassignedShards());
         builder.field(NUMBER_OF_PENDING_TASKS, getNumberOfPendingTasks());
         builder.field(NUMBER_OF_IN_FLIGHT_FETCH, getNumberOfInFlightFetch());
-        builder.timeValueField(TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
+        builder.humanReadableField(TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
         builder.percentageField(ACTIVE_SHARDS_PERCENT_AS_NUMBER, ACTIVE_SHARDS_PERCENT, getActiveShardsPercent());

         String level = params.param("level", "cluster");
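This `timeValueField` → `humanReadableField` change (and the similar ones below) comes from the pluggable human-readable writers of elastic#29120: the builder emits the raw machine-readable field, plus a human-readable sibling when requested. A hypothetical, much-simplified sketch of that dual-field pattern (a `Map` stands in for the real `XContentBuilder`, and the formatting rule is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative model of humanReadableField(rawName, readableName, value):
// always emit the raw value; add a "15s"-style sibling only in human mode.
public class HumanReadableFields {
    static Map<String, Object> timeFields(String rawName, String readableName,
                                          long millis, boolean human) {
        Map<String, Object> fields = new LinkedHashMap<>();
        if (human) {
            // toy formatting: whole seconds as "Ns", otherwise "Nms"
            fields.put(readableName, millis % 1000 == 0 ? (millis / 1000) + "s" : millis + "ms");
        }
        fields.put(rawName, millis); // machine-readable value is always present
        return fields;
    }
}
```

The design point is that clients parsing the response can always rely on the `*_in_millis` field, while `?human=true` output stays pleasant to read.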

server/src/main/java/org/elasticsearch/action/admin/cluster/node/info/NodesInfoResponse.java (1 addition, 1 deletion)

@@ -68,7 +68,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
             builder.field("version", nodeInfo.getVersion());
             builder.field("build_hash", nodeInfo.getBuild().shortHash());
             if (nodeInfo.getTotalIndexingBuffer() != null) {
-                builder.byteSizeField("total_indexing_buffer", "total_indexing_buffer_in_bytes", nodeInfo.getTotalIndexingBuffer());
+                builder.humanReadableField("total_indexing_buffer", "total_indexing_buffer_in_bytes", nodeInfo.getTotalIndexingBuffer());
             }

             builder.startArray("roles");

server/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotStats.java (2 additions, 1 deletion)

@@ -22,6 +22,7 @@
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.io.stream.Streamable;
+import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.ToXContentFragment;
 import org.elasticsearch.common.xcontent.XContentBuilder;

@@ -143,7 +144,7 @@ public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
         builder.byteSizeField(Fields.TOTAL_SIZE_IN_BYTES, Fields.TOTAL_SIZE, getTotalSize());
         builder.byteSizeField(Fields.PROCESSED_SIZE_IN_BYTES, Fields.PROCESSED_SIZE, getProcessedSize());
         builder.field(Fields.START_TIME_IN_MILLIS, getStartTime());
-        builder.timeValueField(Fields.TIME_IN_MILLIS, Fields.TIME, getTime());
+        builder.humanReadableField(Fields.TIME_IN_MILLIS, Fields.TIME, new TimeValue(getTime()));
         builder.endObject();
         return builder;
     }

server/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java (1 addition, 1 deletion)

@@ -488,7 +488,7 @@ static final class Fields {
         @Override
         public XContentBuilder toXContent(XContentBuilder builder, Params params)
             throws IOException {
-            builder.timeValueField(Fields.MAX_UPTIME_IN_MILLIS, Fields.MAX_UPTIME, maxUptime);
+            builder.humanReadableField(Fields.MAX_UPTIME_IN_MILLIS, Fields.MAX_UPTIME, new TimeValue(maxUptime));
             builder.startArray(Fields.VERSIONS);
             for (ObjectIntCursor<JvmVersion> v : versions) {
                 builder.startObject();

server/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java (1 addition, 1 deletion)

@@ -125,7 +125,7 @@ public void readFrom(StreamInput in) throws IOException {
         if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
             xContentType = in.readEnum(XContentType.class);
         } else {
-            xContentType = XContentFactory.xContentType(content);
+            xContentType = XContentHelper.xContentType(content);
         }
         if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) {
             context = in.readOptionalString();

server/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java (1 addition, 1 deletion)

@@ -198,7 +198,7 @@ static void toXContent(XContentBuilder builder, Sort sort) throws IOException {
     static void toXContent(XContentBuilder builder, Accountable tree) throws IOException {
         builder.startObject();
         builder.field(Fields.DESCRIPTION, tree.toString());
-        builder.byteSizeField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(tree.ramBytesUsed()));
+        builder.humanReadableField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(tree.ramBytesUsed()));
         Collection<Accountable> children = tree.getChildResources();
         if (children.isEmpty() == false) {
             builder.startArray(Fields.CHILDREN);

server/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java (3 additions, 3 deletions)

@@ -25,7 +25,7 @@
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
 import org.elasticsearch.common.xcontent.XContentType;

 import java.io.IOException;

@@ -43,7 +43,7 @@ public class PutPipelineRequest extends AcknowledgedRequest<PutPipelineRequest>
      */
     @Deprecated
     public PutPipelineRequest(String id, BytesReference source) {
-        this(id, source, XContentFactory.xContentType(source));
+        this(id, source, XContentHelper.xContentType(source));
     }

     /**

@@ -83,7 +83,7 @@ public void readFrom(StreamInput in) throws IOException {
         if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
             xContentType = in.readEnum(XContentType.class);
         } else {
-            xContentType = XContentFactory.xContentType(source);
+            xContentType = XContentHelper.xContentType(source);
         }
     }

server/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java (3 additions, 4 deletions)

@@ -25,8 +25,7 @@
 import org.elasticsearch.common.bytes.BytesReference;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.io.stream.Writeable;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.index.VersionType;
 import org.elasticsearch.ingest.ConfigurationUtils;

@@ -56,7 +55,7 @@ public class SimulatePipelineRequest extends ActionRequest {
      */
     @Deprecated
     public SimulatePipelineRequest(BytesReference source) {
-        this(source, XContentFactory.xContentType(source));
+        this(source, XContentHelper.xContentType(source));
     }

     /**

@@ -78,7 +77,7 @@ public SimulatePipelineRequest(BytesReference source, XContentType xContentType)
         if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
             xContentType = in.readEnum(XContentType.class);
         } else {
-            xContentType = XContentFactory.xContentType(source);
+            xContentType = XContentHelper.xContentType(source);
         }
     }

server/src/main/java/org/elasticsearch/action/termvectors/TermVectorsRequest.java (3 additions, 3 deletions)

@@ -35,7 +35,7 @@
 import org.elasticsearch.common.lucene.uid.Versions;
 import org.elasticsearch.common.util.set.Sets;
 import org.elasticsearch.common.xcontent.XContentBuilder;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
 import org.elasticsearch.common.xcontent.XContentParser;
 import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.index.VersionType;

@@ -265,7 +265,7 @@ public TermVectorsRequest doc(XContentBuilder documentBuilder) {
      */
     @Deprecated
     public TermVectorsRequest doc(BytesReference doc, boolean generateRandomId) {
-        return this.doc(doc, generateRandomId, XContentFactory.xContentType(doc));
+        return this.doc(doc, generateRandomId, XContentHelper.xContentType(doc));
     }

     /**

@@ -518,7 +518,7 @@ public void readFrom(StreamInput in) throws IOException {
         if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
             xContentType = in.readEnum(XContentType.class);
         } else {
-            xContentType = XContentFactory.xContentType(doc);
+            xContentType = XContentHelper.xContentType(doc);
         }
     }
     routing = in.readOptionalString();

server/src/main/java/org/elasticsearch/cluster/SnapshotDeletionsInProgress.java (2 additions, 1 deletion)

@@ -24,6 +24,7 @@
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
 import org.elasticsearch.common.io.stream.Writeable;
+import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.snapshots.Snapshot;

@@ -145,7 +146,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
         {
             builder.field("repository", entry.snapshot.getRepository());
             builder.field("snapshot", entry.snapshot.getSnapshotId().getName());
-            builder.timeValueField("start_time_millis", "start_time", entry.startTime);
+            builder.humanReadableField("start_time_millis", "start_time", new TimeValue(entry.startTime));
             builder.field("repository_state_id", entry.repositoryStateId);
         }
         builder.endObject();

server/src/main/java/org/elasticsearch/cluster/SnapshotsInProgress.java (2 additions, 1 deletion)

@@ -27,6 +27,7 @@
 import org.elasticsearch.common.collect.ImmutableOpenMap;
 import org.elasticsearch.common.io.stream.StreamInput;
 import org.elasticsearch.common.io.stream.StreamOutput;
+import org.elasticsearch.common.unit.TimeValue;
 import org.elasticsearch.common.xcontent.ToXContent;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.index.shard.ShardId;

@@ -512,7 +513,7 @@ public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params params) {
             }
         }
         builder.endArray();
-        builder.timeValueField(START_TIME_MILLIS, START_TIME, entry.startTime());
+        builder.humanReadableField(START_TIME_MILLIS, START_TIME, new TimeValue(entry.startTime()));
         builder.field(REPOSITORY_STATE_ID, entry.getRepositoryStateId());
         builder.startArray(SHARDS);
         {

server/src/main/java/org/elasticsearch/cluster/routing/allocation/AllocateUnassignedDecision.java (4 additions, 2 deletions)

@@ -289,8 +289,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
             builder.field("allocation_id", allocationId);
         }
         if (allocationStatus == AllocationStatus.DELAYED_ALLOCATION) {
-            builder.timeValueField("configured_delay_in_millis", "configured_delay", TimeValue.timeValueMillis(configuredDelayInMillis));
-            builder.timeValueField("remaining_delay_in_millis", "remaining_delay", TimeValue.timeValueMillis(remainingDelayInMillis));
+            builder.humanReadableField("configured_delay_in_millis", "configured_delay",
+                TimeValue.timeValueMillis(configuredDelayInMillis));
+            builder.humanReadableField("remaining_delay_in_millis", "remaining_delay",
+                TimeValue.timeValueMillis(remainingDelayInMillis));
         }
         nodeDecisionsToXContent(nodeDecisions, builder, params);
         return builder;

server/src/main/java/org/elasticsearch/common/compress/CompressorFactory.java (3 additions, 3 deletions)

@@ -24,7 +24,7 @@
 import org.elasticsearch.common.io.Streams;
 import org.elasticsearch.common.io.stream.BytesStreamOutput;
 import org.elasticsearch.common.io.stream.StreamInput;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
 import org.elasticsearch.common.xcontent.XContentType;

 import java.io.IOException;

@@ -44,11 +44,11 @@ public static Compressor compressor(BytesReference bytes) {
             // bytes should be either detected as compressed or as xcontent,
             // if we have bytes that can be either detected as compressed or
             // as a xcontent, we have a problem
-            assert XContentFactory.xContentType(bytes) == null;
+            assert XContentHelper.xContentType(bytes) == null;
             return COMPRESSOR;
         }

-        XContentType contentType = XContentFactory.xContentType(bytes);
+        XContentType contentType = XContentHelper.xContentType(bytes);
         if (contentType == null) {
             if (isAncient(bytes)) {
                 throw new IllegalStateException("unsupported compression: index was created before v2.0.0.beta1 and wasn't upgraded?");
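The `XContentFactory.xContentType` → `XContentHelper.xContentType` migration throughout this commit concerns content-type detection: the format is identified by sniffing leading bytes rather than parsing the payload. A much-simplified, illustrative sketch of that idea (the real detector handles more formats, such as CBOR, plus leading whitespace and partial headers; `ContentTypeSniffer` is a hypothetical name):

```java
import java.nio.charset.StandardCharsets;

// Illustrative content-type sniffing in the spirit of XContentHelper.xContentType:
// inspect only the first few bytes of the payload.
public class ContentTypeSniffer {
    enum Type { JSON, SMILE, YAML, UNKNOWN }

    static Type sniff(byte[] bytes) {
        if (bytes.length == 0) {
            return Type.UNKNOWN;
        }
        if (bytes[0] == '{') {
            return Type.JSON; // JSON objects start with '{'
        }
        if (bytes.length > 1 && bytes[0] == ':' && bytes[1] == ')') {
            return Type.SMILE; // SMILE streams begin with the ":)" header
        }
        if (bytes.length > 2 && bytes[0] == '-' && bytes[1] == '-' && bytes[2] == '-') {
            return Type.YAML; // YAML documents conventionally start with "---"
        }
        return Type.UNKNOWN;
    }

    public static void main(String[] args) {
        System.out.println(sniff("{\"field\":1}".getBytes(StandardCharsets.UTF_8)));
    }
}
```

In `CompressorFactory` this sniffing doubles as a sanity check: bytes that look like a compressed stream must not simultaneously sniff as a known x-content format.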
