Commit 57492e2

[DOCS] Adds inference conceptual documentation (elastic#758)
1 parent d226d2e commit 57492e2

File tree

2 files changed: 49 additions, 0 deletions

docs/en/stack/ml/df-analytics/ml-dfa-concepts.asciidoc

Lines changed: 2 additions & 0 deletions
@@ -8,9 +8,11 @@ feature and the corresponding {evaluatedf-api}.
* <<dfa-outlier-detection>>
* <<dfa-regression>>
* <<dfa-classification>>
+* <<ml-inference>>
* <<ml-dfanalytics-evaluate>>

include::dfa-outlierdetection.asciidoc[]
include::dfa-regression.asciidoc[]
include::dfa-classification.asciidoc[]
+include::ml-inference.asciidoc[]
include::evaluatedf-api.asciidoc[]

docs/en/stack/ml/df-analytics/ml-inference.asciidoc

Lines changed: 47 additions & 0 deletions

@@ -0,0 +1,47 @@
[role="xpack"]
[[ml-inference]]
=== {infer-cap}

experimental[]

{infer-cap} is a {ml} feature that enables you to use supervised {ml}
processes – such as <<dfa-regression>> or <<dfa-classification>> – not only as
a batch analysis but also in a continuous fashion. This means that {infer}
makes it possible to use trained {ml} models against incoming data.

For instance, suppose you have an online service and you would like to predict
whether a customer is likely to churn. You have an index with historical data –
information on customer behavior over the years in your business – and a
{classification} model that is trained on this data. The new information comes
into a destination index of a {ctransform}. With {infer}, you can perform the
{classanalysis} against the new data with the same input fields that you've
trained the model on, and get a prediction.

Let's take a closer look at the machinery behind {infer}.


[discrete]
==== Trained {ml} models as functions

When you create a {dfanalytics-job} that executes a supervised process, you
train a {ml} model on a training dataset so that it can make predictions on
data points that it has never seen. The models that are created by
{dfanalytics} are stored as {es} documents in internal indices. In other
words, the characteristics of your trained models are saved and ready to be
used as functions.

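As an illustration only, you could look up such a stored model by its ID with
the get trained models API. The model ID below is a hypothetical placeholder
and the endpoint can differ between versions, so refer to the {ml}
{dfanalytics} API documentation for the exact request:

[source,console]
----
GET _ml/trained_models/my-churn-classification-model
----
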

[discrete]
==== {infer-cap} processor

{infer-cap} is a processor specified in an {ref}/pipeline.html[ingest pipeline].
It uses a stored {dfanalytics} model to infer against the data that is being
ingested in the pipeline. The model is used on the
{ref}/ingest.html[ingest node]. {infer-cap} pre-processes the data by using the
model and provides a prediction. After this step, the pipeline continues
executing (if there are any other processors in the pipeline); finally, the new
data together with the results are indexed into the destination index.

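As a rough sketch only, a pipeline that uses an {infer} processor could be
defined along the following lines. The pipeline name, model ID, and target
field are hypothetical placeholders, and the available configuration options
depend on your version; the {infer} processor documentation linked below is
the authoritative reference:

[source,console]
----
PUT _ingest/pipeline/churn-predictions
{
  "description": "Enrich incoming documents with churn predictions",
  "processors": [
    {
      "inference": {
        "model_id": "my-churn-classification-model",
        "target_field": "ml.inference.churn",
        "inference_config": {
          "classification": {}
        }
      }
    }
  ]
}
----

Documents that are ingested through such a pipeline would then carry the
prediction of the model in the configured target field.
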

Check the {ref}/inference-processor.html[{infer} processor] and
{ref}/ml-df-analytics-apis.html[the {ml} {dfanalytics} API documentation] to
learn more about the feature.

0 commit comments
