@@ -42,14 +42,6 @@ result field to be present.
[[ml-evaluate-dfanalytics-request-body]]
==== {api-request-body-title}

- `index`::
- (Required, object) Defines the `index` in which the evaluation will be
- performed.
-
- `query`::
- (Optional, object) A query clause that retrieves a subset of data from the
- source index. See <<query-dsl>>.
-
`evaluation`::
(Required, object) Defines the type of evaluation you want to perform.
See <<ml-evaluate-dfanalytics-resources>>.
@@ -63,6 +55,14 @@ Available evaluation types:

--

+ `index`::
+ (Required, object) Defines the `index` in which the evaluation will be
+ performed.
+
+ `query`::
+ (Optional, object) A query clause that retrieves a subset of data from the
+ source index. See <<query-dsl>>.
+
[[ml-evaluate-dfanalytics-resources]]
==== {dfanalytics-cap} evaluation resources

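
To make the reordered request body concrete, a request along the following lines would exercise the `index`, `query`, and `evaluation` parameters described above. This is a minimal sketch: the `_ml/data_frame/_evaluate` endpoint path, the index name, and the `outlier_detection` fields (`actual_field`, `predicted_probability_field`) are illustrative assumptions, not values taken from this change.

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "my_dest_index",                        <1>
  "query": { "term": { "dataset": "training" } },  <2>
  "evaluation": {                                  <3>
    "outlier_detection": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
----
<1> Hypothetical destination index that holds the analytics results.
<2> Optional query that restricts the evaluation to a subset of the index.
<3> The evaluation definition; the field names shown here are illustrative only.
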
@@ -93,19 +93,19 @@ document is an outlier.
operating characteristic) score and optionally the curve. Default value is
{"includes_curve": false}.

+ `confusion_matrix`:::
+ (Optional, object) Set the different thresholds of the {olscore} at where
+ the metrics (`tp` - true positive, `fp` - false positive, `tn` - true
+ negative, `fn` - false negative) are calculated. Default value is
+ {"at": [0.25, 0.50, 0.75]}.
+
`precision`:::
(Optional, object) Set the different thresholds of the {olscore} at where
the metric is calculated. Default value is {"at": [0.25, 0.50, 0.75]}.

`recall`:::
(Optional, object) Set the different thresholds of the {olscore} at where
the metric is calculated. Default value is {"at": [0.25, 0.50, 0.75]}.
-
- `confusion_matrix`:::
- (Optional, object) Set the different thresholds of the {olscore} at where
- the metrics (`tp` - true positive, `fp` - false positive, `tn` - true
- negative, `fn` - false negative) are calculated. Default value is
- {"at": [0.25, 0.50, 0.75]}.


[[regression-evaluation-resources]]
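
As an illustration of the reordered {oldetection} metrics, an evaluation could override the default thresholds roughly as below. The `at` and `includes_curve` options come from the text above; the `auc_roc` metric name, the index, and the two field names are assumptions made only for this sketch.

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "my_dest_index",
  "evaluation": {
    "outlier_detection": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score",
      "metrics": {
        "auc_roc": { "includes_curve": true },
        "confusion_matrix": { "at": [ 0.5 ] },
        "precision": { "at": [ 0.5, 0.75 ] },
        "recall": { "at": [ 0.5, 0.75 ] }
      }
    }
  }
}
----
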
@@ -128,11 +128,11 @@ which outputs a prediction of values.

`mean_squared_error`:::
(Optional, object) Average squared difference between the predicted values and the actual (`ground truth`) value.
- Read more on https://en.wikipedia.org/wiki/Mean_squared_error[Wikipedia]
+ For more information, read https://en.wikipedia.org/wiki/Mean_squared_error[this wiki article].

`r_squared`:::
(Optional, object) Proportion of the variance in the dependent variable that is predictable from the independent variables.
- Read more on https://en.wikipedia.org/wiki/Coefficient_of_determination[Wikipedia]
+ For more information, read https://en.wikipedia.org/wiki/Coefficient_of_determination[this wiki article].


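
For the {regression} metrics above, a request of roughly this shape would compute both values. It is a sketch only: the index, `actual_field`, and `predicted_field` names are placeholders and do not appear in this change.

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "house_price_predictions",
  "evaluation": {
    "regression": {
      "actual_field": "price",
      "predicted_field": "ml.price_prediction",
      "metrics": {
        "mean_squared_error": {},
        "r_squared": {}
      }
    }
  }
}
----
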
@@ -160,16 +160,16 @@ belongs.
Available metrics:

`accuracy`:::
- (Optional, object) Accuracy of predictions (per-class and overall)
+ (Optional, object) Accuracy of predictions (per-class and overall).
+
+ `multiclass_confusion_matrix`:::
+ (Optional, object) Multiclass confusion matrix.

`precision`:::
- (Optional, object) Precision of predictions (per-class and average)
+ (Optional, object) Precision of predictions (per-class and average).

`recall`:::
- (Optional, object) Recall of predictions (per-class and average)
-
- `multiclass_confusion_matrix`:::
- (Optional, object) Multiclass confusion matrix
+ (Optional, object) Recall of predictions (per-class and average).


////
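
Similarly, a {classification} evaluation requesting all four metrics listed in the last hunk might look like the sketch below; again, the index and field names are placeholders rather than values taken from this change.

[source,console]
----
POST _ml/data_frame/_evaluate
{
  "index": "animal_class_predictions",
  "evaluation": {
    "classification": {
      "actual_field": "animal_class",
      "predicted_field": "ml.animal_class_prediction",
      "metrics": {
        "accuracy": {},
        "multiclass_confusion_matrix": {},
        "precision": {},
        "recall": {}
      }
    }
  }
}
----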