Commit e8fa731

Updated AveragedPerceptron default iterations from 1 to 10 (#5258)
* updated averaged perceptron default tries from 1 to 10
* Update CmdParser.cs
* Update CmdParser.cs
* reverted changes to CmdParser.cs
* reverted changes to CmdParser.cs
1 parent 63a1714 commit e8fa731

70 files changed: +13485 −11898 lines


src/Microsoft.ML.StandardTrainers/Standard/Online/AveragedPerceptron.cs (+5)
@@ -82,6 +82,11 @@ public sealed class AveragedPerceptronTrainer : AveragedLinearTrainer<BinaryPred
         /// </summary>
         public sealed class Options : AveragedLinearOptions
         {
+            public Options()
+            {
+                NumberOfIterations = 10;
+            }
+
             /// <summary>
             /// A custom <a href="https://en.wikipedia.org/wiki/Loss_function">loss</a>.
             /// </summary>
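The effect of this new default is easiest to see in the algorithm itself. Below is a minimal, illustrative Python sketch of an averaged perceptron — not ML.NET's implementation — where the `number_of_iterations` parameter plays the role of the `NumberOfIterations` option whose default this commit raises from 1 to 10. The toy data set and margin are assumptions for the sake of the example.

```python
import numpy as np

def averaged_perceptron(X, y, number_of_iterations=10):
    """Illustrative averaged perceptron for labels in {-1, +1}.

    `number_of_iterations` is the number of full passes over the data.
    The returned parameters are the running average of (w, b) over all
    update steps, which damps the oscillation of the plain perceptron.
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    w_sum = np.zeros(n_features)
    b_sum = 0.0
    steps = 0
    for _ in range(number_of_iterations):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: perceptron update
                w += yi * xi
                b += yi
            w_sum += w
            b_sum += b
            steps += 1
    return w_sum / steps, b_sum / steps

# Toy, linearly separable data: the label is the sign of the first feature,
# with an explicit margin of 0.5 added along that axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)
X[:, 0] += y * 0.5

w, b = averaged_perceptron(X, y, number_of_iterations=10)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

With a single pass (the old default) the averaged weights are dominated by the noisy early updates; running ten passes lets the averaged vector settle near a good separator, which is what the baseline metric changes below reflect.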

test/BaselineOutput/Common/AveragedPerceptron/AveragedPerceptron-CV-breast-cancer.PAVcalibration-out.txt (+22 −22)
@@ -1,10 +1,10 @@
 maml.exe CV tr=AveragedPerceptron threads=- cali=PAV dout=%Output% data=%Data% seed=1
 Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off.
-Warning: Skipped 8 instances with missing features during training (over 1 iterations; 8 inst/iter)
+Warning: Skipped 80 instances with missing features during training (over 10 iterations; 8 inst/iter)
 Training calibrator.
-PAV calibrator: piecewise function approximation has 6 components.
+PAV calibrator: piecewise function approximation has 5 components.
 Automatically adding a MinMax normalization transform, use 'norm=Warn' or 'norm=No' to turn this behavior off.
-Warning: Skipped 8 instances with missing features during training (over 1 iterations; 8 inst/iter)
+Warning: Skipped 80 instances with missing features during training (over 10 iterations; 8 inst/iter)
 Training calibrator.
 PAV calibrator: piecewise function approximation has 6 components.
 Warning: The predictor produced non-finite prediction values on 8 instances during testing. Possible causes: abnormal data or the predictor is numerically unstable.
@@ -13,43 +13,43 @@ Confusion table
 ||======================
 PREDICTED || positive | negative | Recall
 TRUTH ||======================
-positive || 132 | 2 | 0.9851
+positive || 133 | 1 | 0.9925
 negative || 9 | 211 | 0.9591
 ||======================
-Precision || 0.9362 | 0.9906 |
-OVERALL 0/1 ACCURACY: 0.968927
+Precision || 0.9366 | 0.9953 |
+OVERALL 0/1 ACCURACY: 0.971751
 LOG LOSS/instance: Infinity
 Test-set entropy (prior Log-Loss/instance): 0.956998
 LOG-LOSS REDUCTION (RIG): -Infinity
-AUC: 0.992809
+AUC: 0.994403
 Warning: The predictor produced non-finite prediction values on 8 instances during testing. Possible causes: abnormal data or the predictor is numerically unstable.
 TEST POSITIVE RATIO: 0.3191 (105.0/(105.0+224.0))
 Confusion table
 ||======================
 PREDICTED || positive | negative | Recall
 TRUTH ||======================
-positive || 102 | 3 | 0.9714
-negative || 4 | 220 | 0.9821
+positive || 100 | 5 | 0.9524
+negative || 3 | 221 | 0.9866
 ||======================
-Precision || 0.9623 | 0.9865 |
-OVERALL 0/1 ACCURACY: 0.978723
-LOG LOSS/instance: 0.239330
+Precision || 0.9709 | 0.9779 |
+OVERALL 0/1 ACCURACY: 0.975684
+LOG LOSS/instance: 0.227705
 Test-set entropy (prior Log-Loss/instance): 0.903454
-LOG-LOSS REDUCTION (RIG): 0.735095
-AUC: 0.997279
+LOG-LOSS REDUCTION (RIG): 0.747961
+AUC: 0.997619

 OVERALL RESULTS
 ---------------------------------------
-AUC: 0.995044 (0.0022)
-Accuracy: 0.973825 (0.0049)
-Positive precision: 0.949217 (0.0130)
-Positive recall: 0.978252 (0.0068)
-Negative precision: 0.988579 (0.0020)
-Negative recall: 0.970617 (0.0115)
+AUC: 0.996011 (0.0016)
+Accuracy: 0.973718 (0.0020)
+Positive precision: 0.953747 (0.0171)
+Positive recall: 0.972459 (0.0201)
+Negative precision: 0.986580 (0.0087)
+Negative recall: 0.972849 (0.0138)
 Log-loss: Infinity (NaN)
 Log-loss reduction: -Infinity (NaN)
-F1 Score: 0.963412 (0.0034)
-AUPRC: 0.990172 (0.0037)
+F1 Score: 0.962653 (0.0011)
+AUPRC: 0.992269 (0.0025)

 ---------------------------------------
 Physical memory usage(MB): %Number%
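As a sanity check on the updated baseline, the first fold's headline metrics can be recomputed from its confusion table (`positive || 133 | 1`, `negative || 9 | 211`). The short Python snippet below reproduces the precision, recall, and accuracy figures shown above:

```python
# Confusion-table values copied from the first CV fold of the updated
# baseline: 133 TP, 1 FN, 9 FP, 211 TN.
tp, fn, fp, tn = 133, 1, 9, 211

precision = tp / (tp + fp)                  # positive-class precision
recall = tp / (tp + fn)                     # positive-class recall (table's 0.9925)
accuracy = (tp + tn) / (tp + fn + fp + tn)  # overall 0/1 accuracy
```

Rounded to the baseline's precision, these give 0.9366, 0.9925, and 0.971751, matching the updated output.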
(additional baseline summary file; filename not shown in this excerpt)

@@ -1,4 +1,4 @@
 AveragedPerceptron
 AUC Accuracy Positive precision Positive recall Negative precision Negative recall Log-loss Log-loss reduction F1 Score AUPRC Learner Name Train Dataset Test Dataset Results File Run Time Physical Memory Virtual Memory Command Line Settings
-0.995044 0.973825 0.949217 0.978252 0.988579 0.970617 Infinity -Infinity 0.963412 0.990172 AveragedPerceptron %Data% %Output% 99 0 0 maml.exe CV tr=AveragedPerceptron threads=- cali=PAV dout=%Output% data=%Data% seed=1
+0.996011 0.973718 0.953747 0.972459 0.98658 0.972849 Infinity -Infinity 0.962653 0.992269 AveragedPerceptron %Data% %Output% 99 0 0 maml.exe CV tr=AveragedPerceptron threads=- cali=PAV dout=%Output% data=%Data% seed=1
