[ML][Inference] adding more options to inference processor #48545


Conversation

benwtrent (Member)

This adds an option to include or exclude model metadata in the document as it passes through the processor.

Additionally, if the target field already exists we no longer overwrite it; the new values are added under the existing path instead.
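
To picture the behavior described above, here is a minimal, hypothetical sketch (not the processor's actual code). It assumes a plain map-backed document; the key names "result" and "model_id" and the helper name are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the behavior described above: write the inference
// result under a configurable parent field, optionally add model metadata, and
// reuse the parent map if it already exists instead of overwriting it.
public class InferenceFieldSketch {

    @SuppressWarnings("unchecked")
    static void writeResult(Map<String, Object> document,
                            String targetField,        // e.g. "ml.my_processor"
                            Object inferenceResult,
                            String modelId,
                            boolean includeModelMetadata) {
        Map<String, Object> target = (Map<String, Object>)
            document.computeIfAbsent(targetField, k -> new HashMap<String, Object>());
        target.put("result", inferenceResult);          // illustrative key name
        if (includeModelMetadata) {
            target.put("model_id", modelId);            // illustrative key name
        }
    }

    public static void main(String[] args) {
        Map<String, Object> document = new HashMap<>();
        Map<String, Object> existing = new HashMap<>();
        existing.put("earlier_entry", "kept");
        document.put("ml.my_processor", existing);

        writeResult(document, "ml.my_processor", "class_a", "my-model", true);
        // The pre-existing entry survives alongside the new result and metadata.
        System.out.println(document);
    }
}
```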

@elasticmachine (Collaborator)

Pinging @elastic/ml-core (:ml)

@przemekwitek (Contributor) left a comment


LGTM

if (modelInfoField != null) {
    ingestDocument.setFieldValue(modelInfoField, modelInfo);
if (includeModelMetadata) {
    ingestDocument.setFieldValue(modelInfoField + "." + MODEL_ID, modelId);

It seems that modelInfoField should never be null?
Does it make sense to add requireNonNull in the constructor?
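
If the constructor were to enforce that invariant, the standard approach is Objects.requireNonNull. A minimal sketch of the reviewer's suggestion, with a hypothetical class name and error message (not the actual processor constructor):

```java
import java.util.Objects;

// Sketch of the reviewer's suggestion: fail fast in the constructor instead of
// null-checking later at write time. Class name and message are hypothetical.
class InferenceProcessorSketch {
    private final String modelInfoField;

    InferenceProcessorSketch(String modelInfoField) {
        this.modelInfoField = Objects.requireNonNull(modelInfoField, "[model_info_field] must not be null");
    }
}
```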

if (modelInfoField != null && tag != null) {
String modelInfoField = ConfigurationUtils.readStringProperty(TYPE, tag, config, MODEL_INFO_FIELD, "ml");
// If multiple inference processors are in the same pipeline, it is wise to tag them
// The tag will keep metadata entries from stepping on eachother

Suggested change
// The tag will keep metadata entries from stepping on eachother
// The tag will keep metadata entries from stepping on each other
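
The comment above explains why tagging matters when several inference processors share one pipeline: each tag gives a processor its own metadata path. A hypothetical illustration of that idea follows; the defaulting rule is inferred from the test value "ml.my_processor" and is not the factory's actual code.

```java
// Hypothetical sketch: resolve a per-processor metadata field so that tagged
// processors in the same pipeline do not step on each other's entries.
public class TaggedFieldSketch {

    // Assumed rule, inferred from the test value "ml.my_processor":
    // append the tag to the parent field when a tag is set.
    static String resolveModelInfoField(String parentField, String tag) {
        return (tag == null || tag.isEmpty()) ? parentField : parentField + "." + tag;
    }

    public static void main(String[] args) {
        System.out.println(resolveModelInfoField("ml", "my_processor"));    // ml.my_processor
        System.out.println(resolveModelInfoField("ml", "other_processor")); // ml.other_processor
        System.out.println(resolveModelInfoField("ml", null));              // ml
    }
}
```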

@@ -142,7 +179,8 @@ public void testGenerateRequestWithEmptyMapping() {
modelId,
new ClassificationConfig(topNClasses),
Collections.emptyMap(),
null);
"ml.my_processor",
false);

Suggested change (indentation only):
false);
false);

@benwtrent benwtrent merged commit 484886a into elastic:feature/ml-inference Oct 28, 2019
@benwtrent benwtrent deleted the feature/ml-inference-adding-processor-options branch October 28, 2019 11:38
benwtrent added a commit that referenced this pull request Nov 18, 2019
* [ML][Inference] adds lazy model loader and inference (#47410)

This adds a couple of things:

- A model loader service that is accessible via transport calls. This service will load in models and cache them. They will stay loaded until a processor no longer references them
- A Model class and its first sub-class LocalModel. Used to cache model information and run inference.
- Transport action and handler for requests to infer against a local model
Related Feature PRs: 
* [ML][Inference] Adjust inference configuration option API (#47812)

* [ML][Inference] adds logistic_regression output aggregator (#48075)

* [ML][Inference] Adding read/del trained models (#47882)

* [ML][Inference] Adding inference ingest processor (#47859)

* [ML][Inference] fixing classification inference for ensemble (#48463)

* [ML][Inference] Adding model memory estimations (#48323)

* [ML][Inference] adding more options to inference processor (#48545)

* [ML][Inference] handle string values better in feature extraction (#48584)

* [ML][Inference] Adding _stats endpoint for inference (#48492)

* [ML][Inference] add inference processors and trained models to usage (#47869)

* [ML][Inference] add new flag for optionally including model definition (#48718)

* [ML][Inference] adding license checks (#49056)

* [ML][Inference] Adding memory and compute estimates to inference (#48955)
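
The commit message above describes a model loader service that keeps models cached until no processor references them. Below is a minimal, hypothetical sketch of that reference-counting idea; none of these names come from the actual service.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch of a reference-counted model cache: a model stays loaded
// while at least one processor references it and is evicted when the last
// reference is released.
class ModelCacheSketch<M> {

    private static final class Entry<T> {
        final T model;
        int refCount;
        Entry(T model) { this.model = model; }
    }

    private final Map<String, Entry<M>> cache = new ConcurrentHashMap<>();
    private final Function<String, M> loader; // stands in for the real model-loading path

    ModelCacheSketch(Function<String, M> loader) {
        this.loader = loader;
    }

    // Called when a processor starts using a model.
    synchronized M acquire(String modelId) {
        Entry<M> entry = cache.computeIfAbsent(modelId, id -> new Entry<>(loader.apply(id)));
        entry.refCount++;
        return entry.model;
    }

    // Called when a processor no longer references the model.
    synchronized void release(String modelId) {
        Entry<M> entry = cache.get(modelId);
        if (entry != null && --entry.refCount <= 0) {
            cache.remove(modelId);
        }
    }
}
```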
benwtrent added a commit to benwtrent/elasticsearch that referenced this pull request Nov 18, 2019
* [ML][Inference] adds lazy model loader and inference (elastic#47410)

benwtrent added a commit that referenced this pull request Nov 18, 2019
* [ML] ML Model Inference Ingest Processor (#49052)

* [ML][Inference] adds lazy model loader and inference (#47410)

This adds a couple of things:

- A model loader service that is accessible via transport calls. This service will load in models and cache them. They will stay loaded until a processor no longer references them
- A Model class and its first sub-class LocalModel. Used to cache model information and run inference.
- Transport action and handler for requests to infer against a local model
Related Feature PRs:

* [ML][Inference] Adjust inference configuration option API (#47812)

* [ML][Inference] adds logistic_regression output aggregator (#48075)

* [ML][Inference] Adding read/del trained models (#47882)

* [ML][Inference] Adding inference ingest processor (#47859)

* [ML][Inference] fixing classification inference for ensemble (#48463)

* [ML][Inference] Adding model memory estimations (#48323)

* [ML][Inference] adding more options to inference processor (#48545)

* [ML][Inference] handle string values better in feature extraction (#48584)

* [ML][Inference] Adding _stats endpoint for inference (#48492)

* [ML][Inference] add inference processors and trained models to usage (#47869)

* [ML][Inference] add new flag for optionally including model definition (#48718)

* [ML][Inference] adding license checks (#49056)

* [ML][Inference] Adding memory and compute estimates to inference (#48955)

* fixing version of indexed docs for model inference