Commit 5d99157

Merge branch 'master' into ccr
* master:
  Mute ML upgrade test (#30458)
  Stop forking javac (#30462)
  Client: Deprecate many argument performRequest (#30315)
  Docs: Use task_id in examples of tasks (#30436)
  Security: Rename IndexLifecycleManager to SecurityIndexManager (#30442)
  [Docs] Fix typo in cardinality-aggregation.asciidoc (#30434)
  Avoid NPE in `more_like_this` when field has zero tokens (#30365)
  Build: Switch to building javadoc with html5 (#30440)
  Add a quick tour of the project to CONTRIBUTING (#30187)
  Reindex: Use request flavored methods (#30317)
  Silence SplitIndexIT.testSplitIndexPrimaryTerm test failure. (#30432)
  Auto-expand replicas when adding or removing nodes (#30423)
  Docs: fix changelog merge
  Fix line length violation in cache tests
  Add stricter geohash parsing (#30376)
  Add failing test for core cache deadlock
  [DOCS] convert forcemerge snippet
  Update forcemerge.asciidoc (#30113)
  Added zentity to the list of API extension plugins (#29143)
  Fix the search request default operation behavior doc (#29302) (#29405)
2 parents 75719ac + 4b36ea7 commit 5d99157

60 files changed: +772 −463 lines


CONTRIBUTING.md

+90 −1
@@ -41,7 +41,7 @@ We enjoy working with contributors to get their code accepted. There are many ap
 Note that it is unlikely the project will merge refactors for the sake of refactoring. These
 types of pull requests have a high cost to maintainers in reviewing and testing with little
 to no tangible benefit. This especially includes changes generated by tools. For example,
-converting all generic interface instances to use the diamond operator.
+converting all generic interface instances to use the diamond operator.
 
 The process for contributing to any of the [Elastic repositories](https://github.com/elastic/) is similar. Details for individual projects can be found below.
@@ -209,6 +209,95 @@ Before submitting your changes, run the test suite to make sure that nothing is
 ./gradlew check
 ```
 
+### Project layout
+
+This repository is split into many top level directories. The most important
+ones are:
+
+#### `docs`
+Documentation for the project.
+
+#### `distribution`
+Builds our tar and zip archives and our rpm and deb packages.
+
+#### `libs`
+Libraries used to build other parts of the project. These are meant to be
+internal rather than general purpose. We have no plans to
+[semver](https://semver.org/) their APIs or accept feature requests for them.
+We publish them to Maven Central because they are dependencies of our plugin
+test framework, high level rest client, and jdbc driver, but they really aren't
+general purpose enough to *belong* in Maven Central. We're still working out
+what to do here.
+
+#### `modules`
+Features that are shipped with Elasticsearch by default but are not built into
+the server. We typically separate features from the server because they require
+permissions that we don't believe *all* of Elasticsearch should have or because
+they depend on libraries that we don't believe *all* of Elasticsearch should
+depend on.
+
+For example, reindex requires the `connect` permission so it can perform
+reindex-from-remote, but we don't believe that *all* of Elasticsearch should
+have the `connect` permission. For another example, Painless is implemented
+using antlr4 and asm, and we don't believe that *all* of Elasticsearch should
+have access to them.
+
+#### `plugins`
+Officially supported plugins to Elasticsearch. We decide that a feature should
+be a plugin rather than shipped as a module when we feel that it is only
+important to a subset of users, especially if it requires extra dependencies.
+
+The canonical example of this is the ICU analysis plugin. It is important for
+folks who want the fairly language-neutral ICU analyzer, but the library that
+implements the analyzer is 11MB, so we don't ship it with Elasticsearch by
+default.
+
+Another example is the `discovery-gce` plugin. It is *vital* to folks running
+in [GCP](https://cloud.google.com/) but useless otherwise, and it depends on a
+dozen extra jars.
+
+#### `qa`
+Honestly this is kind of in flux and we're not 100% sure where we'll end up.
+Right now the directory contains
+* Tests that require multiple modules or plugins to work
+* Tests that form a cluster made up of multiple versions of Elasticsearch like
+full cluster restart, rolling restarts, and mixed version tests
+* Tests that test the Elasticsearch clients in "interesting" places like the
+`wildfly` project.
+* Tests that test Elasticsearch in funny configurations like with ingest
+disabled
+* Tests that need to do strange things like install plugins that throw
+uncaught `Throwable`s or add a shutdown hook
+But we're not convinced that all of these things *belong* in the qa directory.
+We're fairly sure that tests that require multiple modules or plugins to work
+should just pick a "home" plugin. We're fairly sure that the multi-version
+tests *do* belong in qa. Beyond that, we're not sure. If you want to add a new
+qa project, open a PR and be ready to discuss options.
+
+#### `server`
+The server component of Elasticsearch that contains all of the modules and
+plugins. Right now things like the high level rest client depend on the server,
+but we'd like to fix that in the future.
+
+#### `test`
+Our test framework and test fixtures. We use the test framework for testing the
+server, the plugins, the modules, and pretty much everything else. We publish
+the test framework so folks who develop Elasticsearch plugins can use it to
+test their plugins. The test fixtures are external processes that we start
+before running specific tests that rely on them.
+
+For example, we have an hdfs test that uses mini-hdfs to test our
+repository-hdfs plugin.
+
+#### `x-pack`
+Commercially licensed code that integrates with the rest of Elasticsearch. The
+`docs` subdirectory functions just like the top level `docs` subdirectory and
+the `qa` subdirectory functions just like the top level `qa` subdirectory. The
+`plugin` subdirectory contains the x-pack module which runs inside the
+Elasticsearch process. The `transport-client` subdirectory contains extensions
+to Elasticsearch's standard transport client to work properly with x-pack.
+
 Contributing as part of a class
 -------------------------------
 In general Elasticsearch is happy to accept contributions that were created as
buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy

+14 −4
@@ -497,10 +497,15 @@ class BuildPlugin implements Plugin<Project> {
         project.afterEvaluate {
             project.tasks.withType(JavaCompile) {
                 final JavaVersion targetCompatibilityVersion = JavaVersion.toVersion(it.targetCompatibility)
-                // we fork because compiling lots of different classes in a shared jvm can eventually trigger GC overhead limitations
-                options.fork = true
-                options.forkOptions.javaHome = new File(project.compilerJavaHome)
-                options.forkOptions.memoryMaximumSize = "512m"
+                final compilerJavaHomeFile = new File(project.compilerJavaHome)
+                // we only fork if the Gradle JDK is not the same as the compiler JDK
+                if (compilerJavaHomeFile.canonicalPath == Jvm.current().javaHome.canonicalPath) {
+                    options.fork = false
+                } else {
+                    options.fork = true
+                    options.forkOptions.javaHome = compilerJavaHomeFile
+                    options.forkOptions.memoryMaximumSize = "512m"
+                }
                 if (targetCompatibilityVersion == JavaVersion.VERSION_1_8) {
                     // compile with compact 3 profile by default
                     // NOTE: this is just a compile time check: does not replace testing with a compact3 JRE
@@ -549,6 +554,11 @@ class BuildPlugin implements Plugin<Project> {
             javadoc.classpath = javadoc.getClasspath().filter { f ->
                 return classes.contains(f) == false
             }
+            /*
+             * Generate docs using html5 to suppress a warning from `javadoc`
+             * that the default will change to html5 in the future.
+             */
+            javadoc.options.addBooleanOption('html5', true)
         }
         configureJavadocJar(project)
     }
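
As a rough, standalone illustration of the fork check above (not the build's actual wiring; the `compiler.java.home` system property below is just a stand-in for however the compiler JDK gets configured), the same canonical-path comparison can be exercised in plain Java:

```java
import java.io.File;
import java.io.IOException;

public class ForkCheck {
    public static void main(String[] args) throws IOException {
        // Stand-in for the configured compiler JDK; falls back to the running JVM's home.
        String compilerJavaHome = System.getProperty("compiler.java.home", System.getProperty("java.home"));
        File compilerHome = new File(compilerJavaHome);
        File currentJvmHome = new File(System.getProperty("java.home"));
        // Canonical paths resolve symlinks, so a symlinked JDK and the installation it
        // points at count as the same JVM, and javac would not be forked in that case.
        boolean fork = !compilerHome.getCanonicalPath().equals(currentJvmHome.getCanonicalPath());
        System.out.println("fork javac: " + fork);
    }
}
```

When the two homes match, the build lets the Gradle JVM run javac in process instead of paying for a fork, which is what "Stop forking javac" (#30462) is about.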

client/rest/src/main/java/org/elasticsearch/client/RestClient.java

+16
@@ -210,7 +210,9 @@ public void performRequestAsync(Request request, ResponseListener responseListen
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Header... headers) throws IOException {
         Request request = new Request(method, endpoint);
         request.setHeaders(headers);
@@ -229,7 +231,9 @@ public Response performRequest(String method, String endpoint, Header... headers
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Map<String, String> params, Header... headers) throws IOException {
         Request request = new Request(method, endpoint);
         addParameters(request, params);
@@ -252,7 +256,9 @@ public Response performRequest(String method, String endpoint, Map<String, Strin
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Map<String, String> params,
                                    HttpEntity entity, Header... headers) throws IOException {
         Request request = new Request(method, endpoint);
@@ -289,7 +295,9 @@ public Response performRequest(String method, String endpoint, Map<String, Strin
      * @throws IOException in case of a problem or the connection was aborted
      * @throws ClientProtocolException in case of an http protocol error
      * @throws ResponseException in case Elasticsearch responded with a status code that indicated an error
+     * @deprecated prefer {@link #performRequest(Request)}
      */
+    @Deprecated
     public Response performRequest(String method, String endpoint, Map<String, String> params,
                                    HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
                                    Header... headers) throws IOException {
@@ -310,7 +318,9 @@ public Response performRequest(String method, String endpoint, Map<String, Strin
      * @param endpoint the path of the request (without host and port)
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, ResponseListener responseListener, Header... headers) {
         Request request;
         try {
@@ -333,7 +343,9 @@ public void performRequestAsync(String method, String endpoint, ResponseListener
      * @param params the query_string parameters
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, Map<String, String> params,
                                     ResponseListener responseListener, Header... headers) {
         Request request;
@@ -361,7 +373,9 @@ public void performRequestAsync(String method, String endpoint, Map<String, Stri
      * @param entity the body of the request, null if not applicable
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, Map<String, String> params,
                                     HttpEntity entity, ResponseListener responseListener, Header... headers) {
         Request request;
@@ -394,7 +408,9 @@ public void performRequestAsync(String method, String endpoint, Map<String, Stri
      * connection on the client side.
      * @param responseListener the {@link ResponseListener} to notify when the request is completed or fails
      * @param headers the optional request headers
+     * @deprecated prefer {@link #performRequestAsync(Request, ResponseListener)}
      */
+    @Deprecated
     public void performRequestAsync(String method, String endpoint, Map<String, String> params,
                                     HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory,
                                     ResponseListener responseListener, Header... headers) {
docs/CHANGELOG.asciidoc

+20 −5
@@ -3,7 +3,7 @@
 
 [partintro]
 --
-// To add a release, copy and paste the template text 
+// To add a release, copy and paste the template text
 // and add a link to the new section. Note that release subheads must
 // be floated and sections cannot be empty.
 
@@ -104,6 +104,8 @@ ones that the user is authorized to access in case field level security is enabl
 [float]
 === Bug Fixes
 
+Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])
+
 Fixed prerelease version of elasticsearch in the `deb` package to sort before GA versions
 ({pull}29000[#29000])
 
@@ -137,8 +139,11 @@ coming[6.4.0]
 //[float]
 //=== Breaking Java Changes
 
-//[float]
-//=== Deprecations
+[float]
+=== Deprecations
+
+Deprecated multi-argument versions of the request methods in the RestClient.
+Prefer the "Request" object flavored methods. ({pull}30315[#30315])
 
 [float]
 === New Features
@@ -155,8 +160,8 @@ analysis module. ({pull}30397[#30397])
 
 {ref-64}/breaking_64_api_changes.html#copy-source-settings-on-resize[Allow copying source settings on index resize operations] ({pull}30255[#30255])
 
-Added new "Request" object flavored request methods. Prefer these instead of the
-multi-argument versions. ({pull}29623[#29623])
+Added new "Request" object flavored request methods in the RestClient. Prefer
+these instead of the multi-argument versions. ({pull}29623[#29623])
 
 The cluster state listener to decide if watcher should be
 stopped/started/paused now runs far less code in an executor but is more
@@ -169,6 +174,8 @@ Added put index template API to the high level rest client ({pull}30400[#30400])
 [float]
 === Bug Fixes
 
+Fix NPE in 'more_like_this' when field has zero tokens ({pull}30365[#30365])
+
 Do not ignore request analysis/similarity settings on index resize operations when the source index already contains such settings ({pull}30216[#30216])
 
 Fix NPE when CumulativeSum agg encounters null value/empty bucket ({pull}29641[#29641])
@@ -177,10 +184,18 @@ Machine Learning::
 
 * Account for gaps in data counts after job is reopened ({pull}30294[#30294])
 
+Add validation that geohashes are not empty and don't contain unsupported characters ({pull}30376[#30376])
+
 Rollup::
 * Validate timezone in range queries to ensure they match the selected job when
 searching ({pull}30338[#30338])
 
+
+Allocation::
+
+Auto-expand replicas when adding or removing nodes to prevent shard copies from
+being dropped and resynced when a data node rejoins the cluster ({pull}30423[#30423])
+
 //[float]
 //=== Regressions
 
docs/plugins/api.asciidoc

+3
@@ -19,6 +19,9 @@ A number of plugins have been contributed by our community:
 
 * https://github.com/YannBrrd/elasticsearch-entity-resolution[Entity Resolution Plugin]:
   Uses http://github.com/larsga/Duke[Duke] for duplication detection (by Yann Barraud)
+
+* https://github.com/zentity-io/zentity[Entity Resolution Plugin] (https://zentity.io[zentity]):
+  Real-time entity resolution with pure Elasticsearch (by Dave Moore)
 
 * https://github.com/NLPchina/elasticsearch-sql/[SQL language Plugin]:
   Allows Elasticsearch to be queried with SQL (by nlpcn)

docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc

+1 −1
@@ -63,7 +63,7 @@ POST /sales/_search?size=0
 defines a unique count below which counts are expected to be close to
 accurate. Above this value, counts might become a bit more fuzzy. The maximum
 supported value is 40000, thresholds above this number will have the same
-effect as a threshold of 40000. The default values is +3000+.
+effect as a threshold of 40000. The default value is +3000+.
 
 ==== Counts are approximate
 
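Since the paragraph above describes how `precision_threshold` bounds the range of accurate counts, here is a minimal sketch of setting it explicitly, sent through the low-level REST client (it assumes a node at localhost:9200 and a `sales` index with a `type` field, matching the docs' own example; the threshold of 100 is arbitrary):

```java
import org.apache.http.HttpHost;
import org.apache.http.entity.ContentType;
import org.apache.http.nio.entity.NStringEntity;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class CardinalityPrecisionExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/sales/_search");
            request.addParameter("size", "0");
            // precision_threshold trades memory for accuracy; values above 40000
            // behave like 40000, and the default is 3000.
            String body = "{\n" +
                "  \"aggs\": {\n" +
                "    \"type_count\": {\n" +
                "      \"cardinality\": { \"field\": \"type\", \"precision_threshold\": 100 }\n" +
                "    }\n" +
                "  }\n" +
                "}";
            request.setEntity(new NStringEntity(body, ContentType.APPLICATION_JSON));
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```
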
docs/reference/cluster/tasks.asciidoc

+4 −2
@@ -64,9 +64,10 @@ It is also possible to retrieve information for a particular task:
 
 [source,js]
 --------------------------------------------------
-GET _tasks/task_id:1 <1>
+GET _tasks/task_id <1>
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 // TEST[catch:missing]
 
 <1> This will return a 404 if the task isn't found.
@@ -75,9 +76,10 @@ Or to retrieve all children of a particular task:
 
 [source,js]
 --------------------------------------------------
-GET _tasks?parent_task_id=parentTaskId:1 <1>
+GET _tasks?parent_task_id=parent_task_id <1>
 --------------------------------------------------
 // CONSOLE
+// TEST[s/=parent_task_id/=node_id:1/]
 
 <1> This won't return a 404 if the parent isn't found.
 
docs/reference/docs/delete-by-query.asciidoc

+6 −3
@@ -357,9 +357,10 @@ With the task id you can look up the task directly:
 
 [source,js]
 --------------------------------------------------
-GET /_tasks/taskId:1
+GET /_tasks/task_id
 --------------------------------------------------
 // CONSOLE
+// TEST[s/task_id/node_id:1/]
 // TEST[catch:missing]
 
 The advantage of this API is that it integrates with `wait_for_completion=false`
@@ -378,8 +379,9 @@ Any Delete By Query can be canceled using the <<tasks,Task Cancel API>>:
 
 [source,js]
 --------------------------------------------------
-POST _tasks/task_id:1/_cancel
+POST _tasks/task_id/_cancel
 --------------------------------------------------
+// TEST[s/task_id/node_id:1/]
 // CONSOLE
 
 The `task_id` can be found using the tasks API above.
@@ -397,8 +399,9 @@ using the `_rethrottle` API:
 
 [source,js]
 --------------------------------------------------
-POST _delete_by_query/task_id:1/_rethrottle?requests_per_second=-1
+POST _delete_by_query/task_id/_rethrottle?requests_per_second=-1
 --------------------------------------------------
+// TEST[s/task_id/node_id:1/]
 // CONSOLE
 
 The `task_id` can be found using the tasks API above.
