+maml.exe CV tr=FieldAwareFactorizationMachine{d=5 shuf- norm-} col[Feature]=DupFeatures threads=- norm=No dout=%Output% data=%Data% seed=1 xf=Copy{col=DupFeatures:Features} xf=MinMax{col=Features col=DupFeatures}
+Not adding a normalizer.
+Warning: Skipped 8 examples with bad label/weight/features in training set
+Not training a calibrator because it is not needed.
+Not adding a normalizer.
+Warning: Skipped 8 examples with bad label/weight/features in training set
+Not training a calibrator because it is not needed.
+Warning: The predictor produced non-finite prediction values on 8 instances during testing. Possible causes: abnormal data or the predictor is numerically unstable.
+TEST POSITIVE RATIO: 0.3785 (134.0/(134.0+220.0))
+Confusion table
+          ||======================
+PREDICTED || positive | negative | Recall
+TRUTH     ||======================
+ positive ||      122 |       12 | 0.9104
+ negative ||        4 |      216 | 0.9818
+          ||======================
+Precision ||   0.9683 |   0.9474 |
+OVERALL 0/1 ACCURACY: 0.954802
+LOG LOSS/instance: 0.259660
+Test-set entropy (prior Log-Loss/instance): 0.956998
+LOG-LOSS REDUCTION (RIG): 72.867233
+AUC: 0.984973
+Warning: The predictor produced non-finite prediction values on 8 instances during testing. Possible causes: abnormal data or the predictor is numerically unstable.
+TEST POSITIVE RATIO: 0.3191 (105.0/(105.0+224.0))
+Confusion table
+          ||======================
+PREDICTED || positive | negative | Recall
+TRUTH     ||======================
+ positive ||       92 |       13 | 0.8762
+ negative ||        2 |      222 | 0.9911
+          ||======================
+Precision ||   0.9787 |   0.9447 |
+OVERALL 0/1 ACCURACY: 0.954407
+LOG LOSS/instance: 0.260480
+Test-set entropy (prior Log-Loss/instance): 0.903454
+LOG-LOSS REDUCTION (RIG): 71.168362
+AUC: 0.967049
+
+OVERALL RESULTS
+---------------------------------------
+AUC: 0.976011 (0.0090)
+Accuracy: 0.954605 (0.0002)
+Positive precision: 0.973489 (0.0052)
+Positive recall: 0.893319 (0.0171)
+Negative precision: 0.946025 (0.0013)
+Negative recall: 0.986445 (0.0046)
+Log-loss: 0.260070 (0.0004)
+Log-loss reduction: 72.017798 (0.8494)
+F1 Score: 0.931542 (0.0069)
+AUPRC: 0.974115 (0.0054)
+
+---------------------------------------
+Physical memory usage(MB): %Number%
+Virtual memory usage(MB): %Number%
+%DateTime% Time elapsed(s): %Number%
+
+--- Progress log ---
+[1] 'Normalize' started.
+[1] (%Time%) 337 examples
+[1] 'Normalize' finished in %Time%.
+[2] 'Training' started.
+[2] (%Time%) 1 iterations, 329 examples Training-loss: 0.371414389819699
+[2] (%Time%) 2 iterations, 329 examples Training-loss: 0.225137821503565
+[2] (%Time%) 3 iterations, 329 examples Training-loss: 0.197323119398265
+[2] (%Time%) 4 iterations, 329 examples Training-loss: 0.183649426646222
+[2] (%Time%) 5 iterations, 329 examples Training-loss: 0.174400635825405
+[2] 'Training' finished in %Time%.
+[3] 'Normalize #2' started.
+[3] (%Time%) 362 examples
+[3] 'Normalize #2' finished in %Time%.
+[4] 'Training #2' started.
+[4] (%Time%) 1 iterations, 354 examples Training-loss: 0.35872800705401
+[4] (%Time%) 2 iterations, 354 examples Training-loss: 0.239609312114266
+[4] (%Time%) 3 iterations, 354 examples Training-loss: 0.210775498912242
+[4] (%Time%) 4 iterations, 354 examples Training-loss: 0.19625903089058
+[4] (%Time%) 5 iterations, 354 examples Training-loss: 0.187121580244397
+[4] 'Training #2' finished in %Time%.