@@ -42,31 +42,32 @@ result field to be present.
[[ml-evaluate-dfanalytics-request-body]]
==== {api-request-body-title}

+ `index`::
+ (Required, object) Defines the `index` in which the evaluation will be
+ performed.
+
+ `query`::
+ (Optional, object) A query clause that retrieves a subset of data from the
+ source index. See <<query-dsl>>.
+
`evaluation`::
- (Required, object) Defines the type of evaluation you want to perform. The
- value of this object can be different depending on the type of evaluation you
- want to perform. See <<ml-evaluate-dfanalytics-resources>>.
+ (Required, object) Defines the type of evaluation you want to perform.
+ See <<ml-evaluate-dfanalytics-resources>>.
+
--
Available evaluation types:
+
* `binary_soft_classification`
* `regression`
* `classification`
- --
- `index`::
- (Required, object) Defines the `index` in which the evaluation will be
- performed.
-
- `query`::
- (Optional, object) A query clause that retrieves a subset of data from the
- source index. See <<query-dsl>>.
+ --
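As a sketch of how the request-body parameters above fit together, the following shows a hypothetical evaluation call. The index name, query, and the parameters inside `binary_soft_classification` (`actual_field`, `predicted_probability_field`) are illustrative assumptions, not taken from this page:

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "my-analytics-dest",                       <1>
  "query": { "term": { "dataset": "validation" } },   <2>
  "evaluation": {                                     <3>
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
----
<1> Hypothetical destination index of a {dfanalytics} job.
<2> Optional query restricting the evaluation to a subset of documents.
<3> One of the available evaluation types.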

[[ml-evaluate-dfanalytics-resources]]
==== {dfanalytics-cap} evaluation resources

[[binary-sc-resources]]
- ===== Binary soft classification configuration objects
+ ===== Binary soft classification evaluation objects

Binary soft classification evaluates the results of an analysis which outputs
the probability that each document belongs to a certain class. For example, in
@@ -87,20 +88,20 @@ document is an outlier.
(Optional, object) Specifies the metrics that are used for the evaluation.
Available metrics:

- `auc_roc`::
+ `auc_roc`:::
(Optional, object) The AUC ROC (area under the curve of the receiver
operating characteristic) score and optionally the curve. Default value is
{"includes_curve": false}.

- `precision`::
+ `precision`:::
(Optional, object) Set the different thresholds of the {olscore} at where
the metric is calculated. Default value is {"at": [0.25, 0.50, 0.75]}.

- `recall`::
+ `recall`:::
(Optional, object) Set the different thresholds of the {olscore} at where
the metric is calculated. Default value is {"at": [0.25, 0.50, 0.75]}.

- `confusion_matrix`::
+ `confusion_matrix`:::
(Optional, object) Set the different thresholds of the {olscore} at where
the metrics (`tp` - true positive, `fp` - false positive, `tn` - true
negative, `fn` - false negative) are calculated. Default value is
@@ -122,9 +123,18 @@ which outputs a prediction of values.
in other words the results of the {regression} analysis.

`metrics`::
- (Required, object) Specifies the metrics that are used for the evaluation.
- Available metrics are `r_squared` and `mean_squared_error`.
-
+ (Optional, object) Specifies the metrics that are used for the evaluation.
+ Available metrics:
+
+ `mean_squared_error`:::
+ (Optional, object) Average squared difference between the predicted values
+ and the actual (ground truth) values. Read more on
+ https://en.wikipedia.org/wiki/Mean_squared_error[Wikipedia].
+
+ `r_squared`:::
+ (Optional, object) Proportion of the variance in the dependent variable that
+ is predictable from the independent variables. Read more on
+ https://en.wikipedia.org/wiki/Coefficient_of_determination[Wikipedia].
+
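A hedged sketch of a {regression} evaluation body that requests both metrics. The index and field names are hypothetical, and an empty object is assumed to select a metric with its default behavior:

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "house-prices-dest",
  "evaluation": {
    "regression": {
      "actual_field": "price",                  <1>
      "predicted_field": "ml.price_prediction", <2>
      "metrics": {
        "mean_squared_error": {},
        "r_squared": {}
      }
    }
  }
}
----
<1> Hypothetical field holding the ground truth values.
<2> Hypothetical field holding the {regression} analysis results.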

[[classification-evaluation-resources]]
==== {classification-cap} evaluation objects
@@ -134,8 +144,8 @@ outputs a prediction that identifies to which of the classes each document
belongs.

`actual_field`::
- (Required, string) The field of the `index` which contains the ground truth.
- The data type of this field must be keyword.
+ (Required, string) The field of the `index` which contains the ground truth.
+ The data type of this field must be categorical.

`predicted_field`::
(Required, string) The field in the `index` that contains the predicted value,
@@ -146,8 +156,20 @@ belongs.
example, `predicted_field` : `ml.animal_class_prediction.keyword`.

`metrics`::
- (Required, object) Specifies the metrics that are used for the evaluation.
- Available metric is `multiclass_confusion_matrix`.
+ (Optional, object) Specifies the metrics that are used for the evaluation.
+ Available metrics:
+
+ `accuracy`:::
+ (Optional, object) Accuracy of predictions (per-class and overall).
+
+ `precision`:::
+ (Optional, object) Precision of predictions (per-class and average).
+
+ `recall`:::
+ (Optional, object) Recall of predictions (per-class and average).
+
+ `multiclass_confusion_matrix`:::
+ (Optional, object) The multiclass confusion matrix.
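A sketch of a {classification} evaluation body requesting the multiclass confusion matrix. The `predicted_field` value is the example given above; the index name and `actual_field` value are hypothetical:

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "animal-classes-dest",
  "evaluation": {
    "classification": {
      "actual_field": "animal_class",                               <1>
      "predicted_field": "ml.animal_class_prediction.keyword",      <2>
      "metrics": {
        "multiclass_confusion_matrix": {}
      }
    }
  }
}
----
<1> Hypothetical categorical field containing the ground truth.
<2> The predicted-value field, as in the example above.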

////