
Commit 26eeabe

Author: GitName
Commit message: review
1 parent: de54ace

File tree

2 files changed: +20 -16 lines changed

    doc/metrics.rst (+12 -7)
    imblearn/metrics/_classification.py (+8 -9)

Diff for: doc/metrics.rst (+12 -7)
@@ -45,6 +45,18 @@ The :func:`make_index_balanced_accuracy` :cite:`garcia2012effectiveness` can
 wrap any metric and give more importance to a specific class using the
 parameter ``alpha``.
 
+.. _macro_averaged_mean_absolute_error:
+
+Macro-Averaged Mean Absolute Error (MA-MAE)
+-------------------------------------------
+
+Ordinal classification is used when there is a rank among classes, for example
+levels of functionality or movie ratings.
+
+The :func:`macro_averaged_mean_absolute_error` :cite:`esuli2009ordinal` is used
+for imbalanced ordinal classification. We compute the MAE for each class and
+average them, giving an equal weight to each class.
+
 .. _classification_report:
 
 Summary of important metrics
@@ -54,10 +66,3 @@ The :func:`classification_report_imbalanced` will compute a set of metrics
 per class and summarize it in a table. The parameter `output_dict` allows
 to get a string or a Python dictionary. This dictionary can be reused to create
 a Pandas dataframe for instance.
-
-.. _macro_averaged_mean_absolute_error:
-
-Macro-Averaged Mean Absolute Error (MA-MAE)
--------------------------------------------
-
-The :func:`macro_averaged_mean_absolute_error` :cite:`esuli2009ordinal` is used for imbalanced ordinal classification. We compute each MAE for each class and average them, giving an equal weight to each class.
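
As a sanity check on the per-class averaging described above, here is a short sketch reproducing the metric with plain NumPy (ours, not part of the commit; the helper name `ma_mae_by_hand` is hypothetical):

    import numpy as np
    from sklearn.metrics import mean_absolute_error

    def ma_mae_by_hand(y_true, y_pred):
        """Mean of the per-class MAEs, each class weighted equally."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        per_class_mae = [
            mean_absolute_error(y_true[y_true == c], y_pred[y_true == c])
            for c in np.unique(y_true)
        ]
        return np.mean(per_class_mae)

    # Imbalanced ordinal labels: class 1 has one sample, class 2 has three.
    # Per-class MAEs are 0.0 and 1/3, so the macro average is 1/6.
    print(ma_mae_by_hand([1, 2, 2, 2], [1, 2, 1, 2]))  # 0.16666666666666666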

Diff for: imblearn/metrics/_classification.py (+8 -9)
@@ -1012,10 +1012,10 @@ def macro_averaged_mean_absolute_error(y_true, y_pred):
 
     Parameters
     ----------
-    y_true : 1d array-like, or label indicator array / sparse matrix
+    y_true : array-like of shape (n_samples,) or (n_samples, n_outputs)
         Ground truth (correct) target values.
 
-    y_pred : 1d array-like, or label indicator array / sparse matrix
+    y_pred : array-like of shape (n_samples,) or (n_samples, n_outputs)
         Estimated targets as returned by a classifier.
 
     Returns
@@ -1033,20 +1033,19 @@ def macro_averaged_mean_absolute_error(y_true, y_pred):
     >>> y_true_imbalanced = [1, 2, 2, 2]
     >>> y_pred = [1, 2, 1, 2]
     >>> mean_absolute_error(y_true_balanced, y_pred)
-    0.5
+    0.5
     >>> mean_absolute_error(y_true_imbalanced, y_pred)
-    0.25
+    0.25
     >>> macro_averaged_mean_absolute_error(y_true_balanced, y_pred)
-    0.5
+    0.5
     >>> macro_averaged_mean_absolute_error(y_true_imbalanced, y_pred)
-    0.16666666666666666
+    0.16666666666666666
 
     """
     all_mae = []
-    y_true = np.array(y_true)
-    y_pred = np.array(y_pred)
+    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
     for class_to_predict in np.unique(y_true):
-        index_class_to_predict = np.where(y_true == class_to_predict)[0]
+        index_class_to_predict = np.flatnonzero(y_true == class_to_predict)
         mae_class = mean_absolute_error(y_true[index_class_to_predict],
                                         y_pred[index_class_to_predict])
         all_mae.append(mae_class)
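
Two notes on the implementation change, with a quick equivalence check (ours, not part of the commit):

    import numpy as np

    y_true = np.array([1, 2, 2, 2])
    mask = y_true == 2

    # For a 1-D boolean mask, np.flatnonzero(mask) returns the same index
    # array as np.where(mask)[0]; it reads directly as "indices of the
    # True entries".
    assert np.array_equal(np.where(mask)[0], np.flatnonzero(mask))
    print(np.flatnonzero(mask))  # [1 2 3]

    # np.asarray also avoids a copy when the input is already an ndarray,
    # while np.array copies by default.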
