doc/metrics.rst

The :func:`make_index_balanced_accuracy` :cite:`garcia2012effectiveness` can
wrap any metric and give more importance to a specific class using the
parameter ``alpha``.
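
As a minimal sketch, wrapping :func:`geometric_mean_score` (the choice of
wrapped metric and the labels below are illustrative):

.. code-block:: python

    from imblearn.metrics import (
        geometric_mean_score,
        make_index_balanced_accuracy,
    )

    # Wrap the geometric mean score with the index balanced accuracy;
    # ``alpha`` weights the dominance correction applied to the metric.
    iba = make_index_balanced_accuracy(alpha=0.1, squared=True)(geometric_mean_score)

    y_true = [0, 0, 0, 0, 1, 1]
    y_pred = [0, 0, 1, 0, 1, 1]
    score = iba(y_true, y_pred)  # a float between 0 and 1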

.. _macro_averaged_mean_absolute_error:

Macro-Averaged Mean Absolute Error (MA-MAE)
-------------------------------------------

Ordinal classification is used when there is a rank among classes, for example
levels of functionality or movie ratings.

The :func:`macro_averaged_mean_absolute_error` :cite:`esuli2009ordinal` is used
for imbalanced ordinal classification. We compute the MAE for each class and
average them, giving an equal weight to each class.
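
As a minimal sketch with made-up ordinal labels (the values are illustrative):

.. code-block:: python

    from imblearn.metrics import macro_averaged_mean_absolute_error

    # Ordinal labels, e.g. ratings from 1 to 3; class 3 is rare.
    y_true = [1, 1, 1, 2, 2, 3]
    y_pred = [1, 2, 1, 2, 2, 2]

    # Per-class MAE is 1/3 for class 1, 0 for class 2, and 1 for class 3;
    # MA-MAE is their unweighted mean, so the rare class 3 is not drowned out.
    score = macro_averaged_mean_absolute_error(y_true, y_pred)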

.. _classification_report:

Summary of important metrics
----------------------------

The :func:`classification_report_imbalanced` will compute a set of metrics
per class and summarize it in a table. The parameter ``output_dict`` allows
getting either a string or a Python dictionary. This dictionary can be reused
to create a pandas ``DataFrame``, for instance.
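
For instance, with illustrative labels:

.. code-block:: python

    import pandas as pd

    from imblearn.metrics import classification_report_imbalanced

    y_true = [0, 0, 0, 0, 1, 1, 2]
    y_pred = [0, 0, 0, 1, 1, 1, 2]

    # With ``output_dict=False`` (the default), a pre-formatted string is
    # returned.
    print(classification_report_imbalanced(y_true, y_pred))

    # With ``output_dict=True``, a dictionary is returned instead, which can
    # be turned into a DataFrame for further processing.
    report = classification_report_imbalanced(y_true, y_pred, output_dict=True)
    df = pd.DataFrame(report)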