Commit 2d3772c

Merge remote-tracking branch 'elastic/master' into retention-leases-version
* elastic/master:
  Do not set up NodeAndClusterIdStateListener in test (elastic#38110)
  ML: better handle task state race condition (elastic#38040)
  Soft-deletes policy should always fetch latest leases (elastic#37940)
  Handle scheduler exceptions (elastic#38014)
  Minor logging improvements (elastic#38084)
  Fix Painless void return bug (elastic#38046)
  Update PutFollowAction serialization post-backport (elastic#37989)
  fix a few versionAdded values in ElasticsearchExceptions (elastic#37877)
  Reenable BWC tests after backport of elastic#37899 (elastic#38093)
  Mute failing test
  Mute failing test
  Fail start on obsolete indices documentation (elastic#37786)
  SQL: Implement FIRST/LAST aggregate functions (elastic#37936)
  Un-mute NoMasterNodeIT.testNoMasterActionsWriteMasterBlock
  remove unused parser fields in RemoteResponseParsers
2 parents 6ab494d + 28b5c7c commit 2d3772c

115 files changed: +2475 additions, -848 deletions

build.gradle

Lines changed: 2 additions & 2 deletions
@@ -159,8 +159,8 @@ task verifyVersions {
   * the enabled state of every bwc task. It should be set back to true
   * after the backport of the backcompat code is complete.
   */
- final boolean bwc_tests_enabled = false
- final String bwc_tests_disabled_issue = "https://github.com/elastic/elasticsearch/pull/37899" /* place a PR link here when committing bwc changes */
+ final boolean bwc_tests_enabled = true
+ final String bwc_tests_disabled_issue = "" /* place a PR link here when committing bwc changes */
  if (bwc_tests_enabled == false) {
    if (bwc_tests_disabled_issue.isEmpty()) {
      throw new GradleException("bwc_tests_disabled_issue must be set when bwc_tests_enabled == false")

docs/reference/migration/migrate_7_0.asciidoc

Lines changed: 2 additions & 0 deletions
@@ -26,6 +26,7 @@ See also <<release-highlights>> and <<es-release-notes>>.
 * <<breaking_70_restclient_changes>>
 * <<breaking_70_low_level_restclient_changes>>
 * <<breaking_70_logging_changes>>
+* <<breaking_70_node_changes>>
 
 [float]
 === Indices created before 7.0
@@ -60,3 +61,4 @@ include::migrate_7_0/snapshotstats.asciidoc[]
 include::migrate_7_0/restclient.asciidoc[]
 include::migrate_7_0/low_level_restclient.asciidoc[]
 include::migrate_7_0/logging.asciidoc[]
+include::migrate_7_0/node.asciidoc[]
docs/reference/migration/migrate_7_0/node.asciidoc

Lines changed: 15 additions & 0 deletions

@@ -0,0 +1,15 @@
+[float]
+[[breaking_70_node_changes]]
+=== Node start up
+
+[float]
+==== Nodes with left-behind data or metadata refuse to start
+Repurposing an existing node by changing node.master or node.data to false can leave lingering on-disk metadata and
+data around, which will not be accessible by the node's new role. Besides storing inaccessible data, this can lead
+to situations where dangling indices are imported even though the node might not be able to host any shards, leading
+to a red cluster health. To avoid this,
+
+* nodes with on-disk shard data and node.data set to false will refuse to start
+* nodes with on-disk index/shard data and both node.master and node.data set to false will refuse to start
+
+Beware that such role changes done prior to the 7.0 upgrade could prevent node start up in 7.0.
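
The refusal described above amounts to two checks of the node's configured roles against what is left on disk. A minimal Java sketch of that decision logic, with hypothetical names and illustrative only (the real guard lives in Elasticsearch's node start-up path):

// Hypothetical sketch of the start-up guard described above; names are
// illustrative, not Elasticsearch's actual API.
final class NodeStartupGuard {

    static void checkForLeftBehindData(boolean masterRole, boolean dataRole,
                                       boolean hasShardData, boolean hasIndexMetadata) {
        // A node with node.data set to false must not still hold shard data on disk.
        if (dataRole == false && hasShardData) {
            throw new IllegalStateException(
                "node with node.data=false cannot start with on-disk shard data");
        }
        // A node with both node.master and node.data set to false must hold
        // neither index metadata nor shard data.
        if (masterRole == false && dataRole == false && (hasShardData || hasIndexMetadata)) {
            throw new IllegalStateException(
                "node with node.master=false and node.data=false cannot start with on-disk index/shard data");
        }
    }
}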

docs/reference/sql/functions/aggs.asciidoc

Lines changed: 198 additions & 0 deletions
@@ -113,6 +113,196 @@ Returns the total number of _distinct non-null_ values in input values.
 include-tagged::{sql-specs}/docs.csv-spec[aggCountDistinct]
 --------------------------------------------------
 
+[[sql-functions-aggs-first]]
+===== `FIRST/FIRST_VALUE`
+
+.Synopsis:
+[source, sql]
+----------------------------------------------
+FIRST(field_name<1>[, ordering_field_name]<2>)
+----------------------------------------------
+
+*Input*:
+
+<1> target field for the aggregation
+<2> optional field used for ordering
+
+*Output*: same type as the input
+
+.Description:
+
+Returns the first **non-NULL** value (if such exists) of the `field_name` input column sorted by
+the `ordering_field_name` column. If `ordering_field_name` is not provided, only the `field_name`
+column is used for the sorting. E.g.:
+
+[cols="<,<"]
+|===
+s| a    | b
+
+| 100  | 1
+| 200  | 1
+| 1    | 2
+| 2    | 2
+| 10   | null
+| 20   | null
+| null | null
+|===
+
+[source, sql]
+----------------------
+SELECT FIRST(a) FROM t
+----------------------
+
+will result in:
+[cols="<"]
+|===
+s| FIRST(a)
+| 1
+|===
+
+and
+
+[source, sql]
+-------------------------
+SELECT FIRST(a, b) FROM t
+-------------------------
+
+will result in:
+[cols="<"]
+|===
+s| FIRST(a, b)
+| 100
+|===
+
+
+["source","sql",subs="attributes,macros"]
+-----------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[firstWithOneArg]
+-----------------------------------------------------------
+
+["source","sql",subs="attributes,macros"]
+--------------------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[firstWithOneArgAndGroupBy]
+--------------------------------------------------------------------
+
+["source","sql",subs="attributes,macros"]
+-----------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[firstWithTwoArgs]
+-----------------------------------------------------------
+
+["source","sql",subs="attributes,macros"]
+---------------------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[firstWithTwoArgsAndGroupBy]
+---------------------------------------------------------------------
+
+`FIRST_VALUE` is a name alias and can be used instead of `FIRST`, e.g.:
+
+["source","sql",subs="attributes,macros"]
+--------------------------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[firstValueWithTwoArgsAndGroupBy]
+--------------------------------------------------------------------------
+
+[NOTE]
+`FIRST` cannot be used in a `HAVING` clause.
+[NOTE]
+`FIRST` cannot be used with columns of type <<text, `text`>> unless
+the field is also <<before-enabling-fielddata,saved as a keyword>>.
+
+[[sql-functions-aggs-last]]
+===== `LAST/LAST_VALUE`
+
+.Synopsis:
+[source, sql]
+--------------------------------------------------
+LAST(field_name<1>[, ordering_field_name]<2>)
+--------------------------------------------------
+
+*Input*:
+
+<1> target field for the aggregation
+<2> optional field used for ordering
+
+*Output*: same type as the input
+
+.Description:
+
+It's the inverse of <<sql-functions-aggs-first>>. Returns the last **non-NULL** value (if such exists) of the
+`field_name` input column sorted descending by the `ordering_field_name` column. If `ordering_field_name` is not
+provided, only the `field_name` column is used for the sorting. E.g.:
+
+[cols="<,<"]
+|===
+s| a    | b
+
+| 10   | 1
+| 20   | 1
+| 1    | 2
+| 2    | 2
+| 100  | null
+| 200  | null
+| null | null
+|===
+
+[source, sql]
+------------------------
+SELECT LAST(a) FROM t
+------------------------
+
+will result in:
+[cols="<"]
+|===
+s| LAST(a)
+| 200
+|===
+
+and
+
+[source, sql]
+------------------------
+SELECT LAST(a, b) FROM t
+------------------------
+
+will result in:
+[cols="<"]
+|===
+s| LAST(a, b)
+| 2
+|===
+
+
+["source","sql",subs="attributes,macros"]
+-----------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[lastWithOneArg]
+-----------------------------------------------------------
+
+["source","sql",subs="attributes,macros"]
+-------------------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[lastWithOneArgAndGroupBy]
+-------------------------------------------------------------------
+
+["source","sql",subs="attributes,macros"]
+-----------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[lastWithTwoArgs]
+-----------------------------------------------------------
+
+["source","sql",subs="attributes,macros"]
+--------------------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[lastWithTwoArgsAndGroupBy]
+--------------------------------------------------------------------
+
+`LAST_VALUE` is a name alias and can be used instead of `LAST`, e.g.:
+
+["source","sql",subs="attributes,macros"]
+-------------------------------------------------------------------------
+include-tagged::{sql-specs}/docs.csv-spec[lastValueWithTwoArgsAndGroupBy]
+-------------------------------------------------------------------------
+
+[NOTE]
+`LAST` cannot be used in a `HAVING` clause.
+[NOTE]
+`LAST` cannot be used with columns of type <<text, `text`>> unless
+the field is also <<before-enabling-fielddata,saved as a keyword>>.
+
 [[sql-functions-aggs-max]]
 ===== `MAX`
 
@@ -137,6 +327,10 @@ Returns the maximum value across input values in the field `field_name`.
 include-tagged::{sql-specs}/docs.csv-spec[aggMax]
 --------------------------------------------------
 
+[NOTE]
+`MAX` on a field of type <<text, `text`>> or <<keyword, `keyword`>> is translated into
+<<sql-functions-aggs-last>> and therefore, it cannot be used in a `HAVING` clause.
+
 [[sql-functions-aggs-min]]
 ===== `MIN`
 
@@ -161,6 +355,10 @@ Returns the minimum value across input values in the field `field_name`.
 include-tagged::{sql-specs}/docs.csv-spec[aggMin]
 --------------------------------------------------
 
+[NOTE]
+`MIN` on a field of type <<text, `text`>> or <<keyword, `keyword`>> is translated into
+<<sql-functions-aggs-first>> and therefore, it cannot be used in a `HAVING` clause.
+
 [[sql-functions-aggs-sum]]
 ===== `SUM`
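
Taken together, the new docs define FIRST/LAST as the first (or last) non-NULL value of the target column under a sort on the ordering column, then on the target column itself. The following Java sketch models those documented semantics on the example table; it is an illustration with a hypothetical in-memory Row type, not the Elasticsearch SQL implementation:

import java.util.Comparator;
import java.util.List;
import java.util.Objects;
import java.util.Optional;

// Illustrative model of the documented FIRST/LAST semantics; not the
// Elasticsearch SQL engine. Requires Java 16+ for records.
final class FirstLastModel {

    // A row with target column `a` and optional ordering column `b`; nulls allowed.
    record Row(Long a, Long b) {}

    // FIRST(a, b) / LAST(a, b): sort by b, then by a (LAST reverses both keys;
    // nulls sort last either way) and return the first non-null `a`.
    static Optional<Long> firstOrLast(List<Row> rows, boolean last) {
        Comparator<Long> key = last
            ? Comparator.nullsLast(Comparator.<Long>reverseOrder())
            : Comparator.nullsLast(Comparator.<Long>naturalOrder());
        Comparator<Row> order = Comparator.comparing(Row::b, key).thenComparing(Row::a, key);
        return rows.stream().sorted(order).map(Row::a).filter(Objects::nonNull).findFirst();
    }

    public static void main(String[] args) {
        List<Row> t = List.of(          // the FIRST example table from the docs
            new Row(100L, 1L), new Row(200L, 1L),
            new Row(1L, 2L), new Row(2L, 2L),
            new Row(10L, null), new Row(20L, null), new Row(null, null));
        System.out.println(firstOrLast(t, false)); // Optional[100], like SELECT FIRST(a, b) FROM t
        System.out.println(firstOrLast(t, true));  // Optional[2], like SELECT LAST(a, b) FROM t
    }
}

For the one-argument forms, the sort key is the target column alone, which on the same data yields 1 for FIRST(a) and 200 for LAST(a), matching the documented results.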

docs/reference/sql/limitations.asciidoc

Lines changed: 7 additions & 0 deletions
@@ -90,3 +90,10 @@ include-tagged::{sql-specs}/docs.csv-spec[limitationSubSelectRewritten]
 
 But, if the sub-select would include a `GROUP BY` or `HAVING` or the enclosing `SELECT` would be more complex than `SELECT X
 FROM (SELECT ...) WHERE [simple_condition]`, this is currently **un-supported**.
+
+[float]
+=== Using <<sql-functions-aggs-first,`FIRST`>>/<<sql-functions-aggs-last,`LAST`>> aggregation functions in a `HAVING` clause
+
+Using `FIRST` and `LAST` in the `HAVING` clause is not supported. The same applies to
+<<sql-functions-aggs-min,`MIN`>> and <<sql-functions-aggs-max,`MAX`>> when their target column
+is of type <<keyword, `keyword`>> as they are internally translated to `FIRST` and `LAST`.

libs/grok/src/main/java/org/elasticsearch/grok/ThreadWatchdog.java

Lines changed: 6 additions & 7 deletions
@@ -20,10 +20,9 @@
 
 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
-import java.util.function.BiFunction;
+import java.util.function.BiConsumer;
 import java.util.function.LongSupplier;
 
 /**
@@ -68,7 +67,7 @@ public interface ThreadWatchdog {
     static ThreadWatchdog newInstance(long interval,
                                       long maxExecutionTime,
                                       LongSupplier relativeTimeSupplier,
-                                      BiFunction<Long, Runnable, ScheduledFuture<?>> scheduler) {
+                                      BiConsumer<Long, Runnable> scheduler) {
         return new Default(interval, maxExecutionTime, relativeTimeSupplier, scheduler);
     }
 
@@ -105,15 +104,15 @@ class Default implements ThreadWatchdog {
         private final long interval;
         private final long maxExecutionTime;
         private final LongSupplier relativeTimeSupplier;
-        private final BiFunction<Long, Runnable, ScheduledFuture<?>> scheduler;
+        private final BiConsumer<Long, Runnable> scheduler;
         private final AtomicInteger registered = new AtomicInteger(0);
         private final AtomicBoolean running = new AtomicBoolean(false);
         final ConcurrentHashMap<Thread, Long> registry = new ConcurrentHashMap<>();
 
         private Default(long interval,
                         long maxExecutionTime,
                         LongSupplier relativeTimeSupplier,
-                        BiFunction<Long, Runnable, ScheduledFuture<?>> scheduler) {
+                        BiConsumer<Long, Runnable> scheduler) {
             this.interval = interval;
             this.maxExecutionTime = maxExecutionTime;
             this.relativeTimeSupplier = relativeTimeSupplier;
@@ -124,7 +123,7 @@ public void register() {
             registered.getAndIncrement();
             Long previousValue = registry.put(Thread.currentThread(), relativeTimeSupplier.getAsLong());
             if (running.compareAndSet(false, true) == true) {
-                scheduler.apply(interval, this::interruptLongRunningExecutions);
+                scheduler.accept(interval, this::interruptLongRunningExecutions);
             }
             assert previousValue == null;
         }
@@ -149,7 +148,7 @@ private void interruptLongRunningExecutions() {
                 }
             }
             if (registered.get() > 0) {
-                scheduler.apply(interval, this::interruptLongRunningExecutions);
+                scheduler.accept(interval, this::interruptLongRunningExecutions);
             } else {
                 running.set(false);
             }
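
The interface change above narrows the scheduler contract: the watchdog only ever discarded the ScheduledFuture returned by the old BiFunction, so a BiConsumer<Long, Runnable> taking a delay in milliseconds and a task is sufficient. A minimal sketch of wiring the new signature to a standard ScheduledExecutorService; the adapter lambda is assumed glue code, not part of this commit:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BiConsumer;

import org.elasticsearch.grok.ThreadWatchdog;

// Assumed glue code: adapting a ScheduledExecutorService to the
// BiConsumer<Long, Runnable> contract that ThreadWatchdog.newInstance now takes.
final class WatchdogWiring {
    public static void main(String[] args) {
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        // Schedule the command once after `delay` milliseconds; the ScheduledFuture
        // returned by schedule() is simply dropped, which is why BiConsumer suffices.
        BiConsumer<Long, Runnable> scheduler =
            (delay, command) -> executor.schedule(command, delay, TimeUnit.MILLISECONDS);
        ThreadWatchdog watchdog = ThreadWatchdog.newInstance(
            1_000, 10_000, System::currentTimeMillis, scheduler);
        // Threads then call watchdog.register() before matching, and the watchdog
        // periodically interrupts any execution exceeding maxExecutionTime.
    }
}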

libs/grok/src/test/java/org/elasticsearch/grok/GrokTests.java

Lines changed: 2 additions & 4 deletions
@@ -27,9 +27,8 @@
 import java.util.List;
 import java.util.Map;
 import java.util.TreeMap;
-import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.function.BiFunction;
+import java.util.function.BiConsumer;
 
 import static org.hamcrest.Matchers.equalTo;
 import static org.hamcrest.Matchers.is;
@@ -418,7 +417,7 @@ public void testExponentialExpressions() {
             "Zustand->ABGESCHLOSSEN Kassennummer->%{WORD:param9} Bonnummer->%{WORD:param10} Datum->%{DATESTAMP_OTHER:param11}";
         String logLine = "Bonsuche mit folgender Anfrage: Belegart->[EINGESCHRAENKTER_VERKAUF, VERKAUF, NACHERFASSUNG] " +
             "Zustand->ABGESCHLOSSEN Kassennummer->2 Bonnummer->6362 Datum->Mon Jan 08 00:00:00 UTC 2018";
-        BiFunction<Long, Runnable, ScheduledFuture<?>> scheduler = (delay, command) -> {
+        BiConsumer<Long, Runnable> scheduler = (delay, command) -> {
             try {
                 Thread.sleep(delay);
             } catch (InterruptedException e) {
@@ -430,7 +429,6 @@ public void testExponentialExpressions() {
                 }
             });
             t.start();
-            return null;
         };
         Grok grok = new Grok(basePatterns, grokPattern, ThreadWatchdog.newInstance(10, 200, System::currentTimeMillis, scheduler));
         Exception e = expectThrows(RuntimeException.class, () -> grok.captures(logLine));

libs/grok/src/test/java/org/elasticsearch/grok/ThreadWatchdogTests.java

Lines changed: 0 additions & 1 deletion
@@ -51,7 +51,6 @@ public void testInterrupt() throws Exception {
             }
         });
         thread.start();
-        return null;
     });
 
     Map<?, ?> registry = ((ThreadWatchdog.Default) watchdog).registry;

modules/ingest-common/src/main/java/org/elasticsearch/ingest/common/IngestCommonPlugin.java

Lines changed: 2 additions & 1 deletion
@@ -111,7 +111,8 @@ public List<Setting<?>> getSettings() {
     private static ThreadWatchdog createGrokThreadWatchdog(Processor.Parameters parameters) {
         long intervalMillis = WATCHDOG_INTERVAL.get(parameters.env.settings()).getMillis();
         long maxExecutionTimeMillis = WATCHDOG_MAX_EXECUTION_TIME.get(parameters.env.settings()).getMillis();
-        return ThreadWatchdog.newInstance(intervalMillis, maxExecutionTimeMillis, parameters.relativeTimeSupplier, parameters.scheduler);
+        return ThreadWatchdog.newInstance(intervalMillis, maxExecutionTimeMillis,
+            parameters.relativeTimeSupplier, parameters.scheduler::apply);
     }
 
 }
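
The only caller-side change needed is `parameters.scheduler::apply`: a method reference to a two-argument function can satisfy a `BiConsumer` target type because Java lets a method reference discard its return value. A standalone illustration of that adaptation, using generic types rather than the Elasticsearch ones:

import java.util.function.BiConsumer;
import java.util.function.BiFunction;

// Standalone illustration: a BiFunction's `apply` satisfies a BiConsumer target
// type via a method reference, with the return value silently dropped.
final class MethodRefAdaptation {
    public static void main(String[] args) {
        BiFunction<Long, Runnable, String> scheduleWithResult = (delay, task) -> {
            task.run();                        // stand-in for real scheduling
            return "scheduled after " + delay; // result the consumer will ignore
        };
        BiConsumer<Long, Runnable> fireAndForget = scheduleWithResult::apply;
        fireAndForget.accept(5L, () -> System.out.println("task ran"));
    }
}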

modules/lang-painless/src/main/antlr/PainlessParser.g4

Lines changed: 1 addition & 1 deletion
@@ -55,7 +55,7 @@ dstatement
     | declaration # decl
     | CONTINUE # continue
     | BREAK # break
-    | RETURN expression # return
+    | RETURN expression? # return
     | THROW expression # throw
     | expression # expr
     ;
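
Making the `expression` after `RETURN` optional is what fixes the "Painless void return bug" (elastic#38046): a `void` function can now contain a bare `return;`. Painless statement syntax mirrors Java here, so an equivalent Java shape shows what the grammar now admits (illustrative, not a test from this commit):

// Java analogue of the construct the relaxed rule permits: an early bare
// `return` from a void method, previously unparseable in Painless void contexts.
final class BareReturn {
    static void printPositive(long value) {
        if (value <= 0) {
            return; // bare return, no expression
        }
        System.out.println("value=" + value);
    }

    public static void main(String[] args) {
        printPositive(-1); // prints nothing
        printPositive(42); // prints value=42
    }
}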
