/// The <see cref="IEstimator{TTransformer}"/> to predict a target using a linear logistic regression model trained with the L-BFGS method.
/// </summary>
/// <remarks>
/// <format type="text/markdown"><![CDATA[
/// To create this trainer, use [LbfgsLogisticRegression](xref:Microsoft.ML.StandardTrainersCatalog.LbfgsLogisticRegression(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,System.String,System.String,System.String,System.Single,System.Single,System.Single,System.Int32,System.Boolean))
/// or [LbfgsLogisticRegression(Options)](xref:Microsoft.ML.StandardTrainersCatalog.LbfgsLogisticRegression(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.LbfgsLogisticRegressionBinaryTrainer.Options)).
///
/// ### Trainer Characteristics
/// |  |  |
/// | -- | -- |
/// | Machine learning task | Binary classification |
/// | Is normalization required? | Yes |
/// | Is caching required? | No |
/// | Required NuGet in addition to Microsoft.ML | None |
///
/// ### Scoring Function
/// Linear logistic regression is a variant of the linear model. It maps the feature vector $\boldsymbol{x} \in {\mathbb R}^n$ to a scalar via $\hat{y}\left(\boldsymbol{x}\right) = \boldsymbol{w}^T \boldsymbol{x} + b = \sum_{j=1}^n w_j x_j + b$,
/// where $x_j$ is the $j$-th feature's value, the $j$-th element of $\boldsymbol{w}$ is the $j$-th feature's coefficient, and $b$ is a learnable bias.
/// The corresponding probability of getting a true label is $\frac{1}{1 + e^{-\hat{y}\left(\boldsymbol{x}\right)}}$.
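///
/// As a minimal illustration of the scoring function (the feature values, weights, and bias below are made-up placeholders, not taken from these docs):
/// ```csharp
/// using System;
///
/// // Hypothetical 3-dimensional feature vector, weight vector, and bias.
/// float[] x = { 1.0f, 2.0f, 0.5f };
/// float[] w = { 0.25f, -0.5f, 1.5f };
/// float b = 0.1f;
///
/// // Score: w^T x + b.
/// float score = b;
/// for (int j = 0; j < x.Length; j++)
///     score += w[j] * x[j];
///
/// // Probability of the positive (true) label via the logistic (sigmoid) function.
/// double probability = 1.0 / (1.0 + Math.Exp(-score));
/// Console.WriteLine($"score = {score}, probability = {probability}");
/// ```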
///
/// ### Training Algorithm Details
/// The optimization technique implemented is based on [the limited memory Broyden-Fletcher-Goldfarb-Shanno method (L-BFGS)](https://en.wikipedia.org/wiki/Limited-memory_BFGS).
/// L-BFGS is a [quasi-Newton method](https://en.wikipedia.org/wiki/Quasi-Newton_method) that replaces the expensive computation of the Hessian matrix with an approximation, while still enjoying a fast convergence rate like [Newton's method](https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization), where the full Hessian matrix is computed.
/// Since the L-BFGS approximation uses only a limited amount of historical states to compute the next step direction, it is especially suited to problems with high-dimensional feature vectors.
/// The number of historical states is a user-specified parameter: using a larger number may lead to a better approximation of the Hessian matrix, but also a higher computational cost per step.
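///
/// For example, a minimal sketch of setting this parameter through the trainer's advanced options (the column names and the value 50 are placeholder assumptions, not recommendations):
/// ```csharp
/// using Microsoft.ML;
/// using Microsoft.ML.Trainers;
///
/// var mlContext = new MLContext();
///
/// // A larger HistorySize keeps more past states: a better Hessian
/// // approximation per step, at a higher memory and compute cost.
/// var trainer = mlContext.BinaryClassification.Trainers.LbfgsLogisticRegression(
///     new LbfgsLogisticRegressionBinaryTrainer.Options
///     {
///         LabelColumnName = "Label",      // placeholder column names
///         FeatureColumnName = "Features",
///         HistorySize = 50                // default is 20
///     });
/// ```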
///
/// Regularization is a method that can render an ill-posed problem more tractable by imposing constraints. These constraints supplement the data with additional information and prevent overfitting by penalizing the model's magnitude, usually measured by some norm function.
/// This can improve the generalization of the model learned by selecting the optimal complexity in the bias-variance tradeoff.
/// Regularization works by adding a penalty associated with the coefficient values to the error of the hypothesis.
/// An accurate model with extreme coefficient values would be penalized more, but a less accurate model with more conservative values would be penalized less.
///
/// This learner supports [elastic net regularization](https://en.wikipedia.org/wiki/Elastic_net_regularization): a linear combination of L1-norm (LASSO), $|| \boldsymbol{w} ||_1$, and L2-norm (ridge), $|| \boldsymbol{w} ||_2^2$ regularizations.
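/// Concretely, with $\lambda_1$ and $\lambda_2$ denoting the L1 and L2 regularization hyperparameters, the training objective takes the standard elastic-net form (shown here schematically; the exact loss bookkeeping in the implementation may differ):
/// $$\min_{\boldsymbol{w}, b} \sum_{i=1}^m \log\left(1 + e^{-y_i \hat{y}\left(\boldsymbol{x}_i\right)}\right) + \lambda_1 || \boldsymbol{w} ||_1 + \lambda_2 || \boldsymbol{w} ||_2^2,$$
/// where $y_i \in \{-1, +1\}$ is the $i$-th example's label and $\boldsymbol{x}_i$ its feature vector.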
/// L1-norm and L2-norm regularizations have different effects and uses that are complementary in certain respects.
/// Using the L1-norm can increase the sparsity of the trained $\boldsymbol{w}$.
/// When working with high-dimensional data, it shrinks the small weights of irrelevant features to 0, so no resources are spent on those features when making predictions.
/// If L1-norm regularization is used, the training algorithm is [OWL-QN](http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.68.5260).
/// L2-norm regularization is preferable for data that is not sparse, and it largely penalizes the existence of large weights.
///
/// Aggressive regularization (that is, assigning large coefficients to the L1-norm or L2-norm regularization terms) can harm predictive capacity by excluding important variables from the model.
/// Therefore, choosing the right regularization coefficients is important when applying logistic regression.
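///
/// For example, a minimal sketch of tuning both terms through the simple catalog overload (the regularization values are arbitrary placeholders, not recommendations):
/// ```csharp
/// using Microsoft.ML;
///
/// var mlContext = new MLContext();
///
/// // Larger l1Regularization pushes weights toward exact zeros (sparsity);
/// // larger l2Regularization shrinks all weights toward zero.
/// var trainer = mlContext.BinaryClassification.Trainers.LbfgsLogisticRegression(
///     labelColumnName: "Label",
///     featureColumnName: "Features",
///     l1Regularization: 0.1f,
///     l2Regularization: 0.5f);
/// ```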
src/Microsoft.ML.StandardTrainers/StandardTrainersCatalog.cs (+4 −4)
@@ -518,11 +518,11 @@ public static OnlineGradientDescentTrainer OnlineGradientDescent(this Regression
 }

 /// <summary>
-/// Predict a target using a linear binary classification model trained with the <see cref="Trainers.LbfgsLogisticRegressionBinaryTrainer"/> trainer.
+/// Create <see cref="Trainers.LbfgsLogisticRegressionBinaryTrainer"/>, which predicts a target using a linear binary classification model trained over boolean label data.
-/// <param name="labelColumnName">The name of the label column.</param>
-/// <param name="featureColumnName">The name of the feature column.</param>
+/// <param name="labelColumnName">The name of the label column. The column data must be <see cref="System.Boolean"/>.</param>
+/// <param name="featureColumnName">The name of the feature column. The column data must be a known-sized vector of <see cref="System.Single"/>.</param>
 /// <param name="exampleWeightColumnName">The name of the example weight column (optional).</param>
 /// <param name="l1Regularization">The L1 <a href='https://en.wikipedia.org/wiki/Regularization_(mathematics)'>regularization</a> hyperparameter. Higher values will tend to lead to more sparse model.</param>
@@ -552,7 +552,7 @@ public static LbfgsLogisticRegressionBinaryTrainer LbfgsLogisticRegression(this
 }

 /// <summary>
-/// Predict a target using a linear binary classification model trained with the <see cref="Trainers.LbfgsLogisticRegressionBinaryTrainer"/> trainer.
+/// Create <see cref="Trainers.LbfgsLogisticRegressionBinaryTrainer"/> with advanced options, which predicts a target using a linear binary classification model trained over boolean label data.