Adding xml style documentation for trainers (dotnet#393)

* Adding xml style documentation for the lbfgs, sdca and averagedPerceptron trainers, to improve what's currently on docs.microsoft.com
* Regenerating the C#Api file
* Removing the control characters from the description when generating the ep_list.tsv, so that each entry point has one line. Formatting.
* Spaces
* The epList.tsv file and the manifest should not have platform-specific newline characters.
* Merge fix
        internal const string SDCADetailedSummary = @"This classifier is a trainer based on the Stochastic Dual Coordinate
Ascent (SDCA) method, a state-of-the-art optimization technique for convex objective functions.
The algorithm can be scaled for use on large out-of-memory data sets due to a semi-asynchronized implementation
that supports multi-threading.
Convergence is underwritten by periodically enforcing synchronization between primal and dual updates in a separate thread.
Several choices of loss functions are also provided.
The SDCA method combines several of the best properties and capabilities of logistic regression and SVM algorithms.
For more information on SDCA, see:
<see href='https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf'>Scaling Up Stochastic Dual Coordinate Ascent</see>.
<see href='http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf'>Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization</see>.
Note that SDCA is a stochastic and streaming optimization algorithm.
The results depend on the order of the training data. For reproducible results, it is recommended that one set `shuffle` to
`False` and `NumThreads` to `1`.
Elastic net regularization can be specified by the l2_weight and l1_weight parameters. Note that the l2_weight has an effect on the rate of convergence.
In general, the larger the l2_weight, the faster SDCA converges.";
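The dual coordinate update that the summary above alludes to can be sketched for the hinge loss. This is a minimal, single-threaded sketch following the closed-form update from the Shalev-Shwartz & Zhang paper linked above; the function and variable names are illustrative and are not ML.NET API:

```python
import random

def sdca_hinge(X, y, lam=0.01, epochs=50, seed=0):
    """Minimal SDCA sketch for an L2-regularized linear SVM (hinge loss).

    Uses the closed-form dual coordinate update from Shalev-Shwartz &
    Zhang (2013). Illustrative only: the real trainer adds multi-threading,
    shuffling control, and other loss functions.
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    alpha = [0.0] * n  # one dual variable per training example
    w = [0.0] * d      # primal weights, kept in sync with alpha
    for _ in range(epochs):
        for i in rng.sample(range(n), n):  # one pass in random order
            xi, yi = X[i], y[i]
            norm2 = sum(v * v for v in xi)
            if norm2 == 0:
                continue
            margin = yi * sum(wj * vj for wj, vj in zip(w, xi))
            # closed-form maximizer of the dual objective in coordinate i
            q = (1.0 - margin) * lam * n / norm2 + alpha[i] * yi
            new_alpha = yi * max(0.0, min(1.0, q))
            delta = new_alpha - alpha[i]
            alpha[i] = new_alpha
            for j in range(d):  # primal update mirrors the dual step
                w[j] += delta * xi[j] / (lam * n)
    return w

# Tiny linearly separable problem: the sign of the first feature decides the label.
X = [[1.0, 0.2], [0.9, -0.1], [-1.0, 0.3], [-0.8, -0.2]]
y = [1, 1, -1, -1]
w = sdca_hinge(X, y)
preds = [1 if sum(wj * vj for wj, vj in zip(w, xi)) >= 0 else -1 for xi in X]
```

Fixing the random seed and iterating sequentially is what makes this sketch reproducible, mirroring the `shuffle`/`NumThreads` advice in the summary.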
// The order of these matters, since they are used as indices into arrays.
src/Microsoft.ML.StandardLearners/Standard/LogisticRegression/LbfgsPredictorBase.cs (+22)

@@ -94,6 +94,28 @@ public abstract class ArgumentsBase : LearnerInputBaseWithWeight
        public bool EnforceNonNegativity = false;
    }

        internal const string DetailedSummary = @"Logistic Regression is a classification method used to predict the value of a categorical dependent variable from its relationship to one or more independent variables assumed to have a logistic distribution.
If the dependent variable has only two possible values (success/failure), then the logistic regression is binary.
If the dependent variable has more than two possible values (blood type given diagnostic test results), then the logistic regression is multinomial.
The optimization technique used for LogisticRegressionBinaryClassifier is the limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method.
Both the L-BFGS and regular BFGS algorithms use quasi-Newtonian methods to estimate the computationally intensive Hessian matrix in the equation used by Newton's method to calculate steps.
But the L-BFGS approximation uses only a limited amount of memory to compute the next step direction, so it is especially suited for problems with a large number of variables.
The memory_size parameter specifies the number of past positions and gradients to store for use in the computation of the next step.
This learner can use elastic net regularization: a linear combination of L1 (lasso) and L2 (ridge) regularizations.
Regularization is a method that can render an ill-posed problem more tractable by imposing constraints that provide information to supplement the data and that prevent overfitting by penalizing models with extreme coefficient values.
This can improve the generalization of the model learned by selecting the optimal complexity in the bias-variance tradeoff. Regularization works by adding the penalty that is associated with coefficient values to the error of the hypothesis.
An accurate model with extreme coefficient values would be penalized more, but a less accurate model with more conservative values would be penalized less. L1 and L2 regularization have different effects and uses that are complementary in certain respects.
l1_weight: can be applied to sparse models, when working with high-dimensional data. It pulls small weights associated with features that are relatively unimportant towards 0.
l2_weight: is preferable for data that is not sparse. It pulls large weights towards zero.
Adding the ridge penalty to the regularization overcomes some of lasso's limitations. It can improve its predictive accuracy, for example, when the number of predictors is greater than the sample size. If x = l1_weight and y = l2_weight, ax + by = c defines the linear span of the regularization terms.
The default values of x and y are both 1.
An aggressive regularization can harm predictive capacity by excluding important variables from the model, so choosing the optimal values for the regularization parameters is important for the performance of the logistic regression model.";
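The elastic net penalty described in the summary above can be sketched as a single function. The helper name is illustrative (not ML.NET API); `l1_weight` and `l2_weight` mirror the parameters named in the text, with both defaulting to 1 as stated:

```python
def elastic_net_penalty(weights, l1_weight=1.0, l2_weight=1.0):
    """Penalty added to the training loss:
    l1_weight * ||w||_1 + l2_weight * ||w||_2^2.

    Larger coefficients are penalized more; the L1 term pushes small,
    unimportant weights to exactly 0, while the L2 term shrinks large
    weights smoothly toward zero.
    """
    l1 = sum(abs(w) for w in weights)       # lasso term
    l2 = sum(w * w for w in weights)        # ridge term
    return l1_weight * l1 + l2_weight * l2

# With weights [0.5, -2.0, 0.0]: L1 = 2.5, L2 = 4.25, penalty = 6.75
p = elastic_net_penalty([0.5, -2.0, 0.0], l1_weight=1.0, l2_weight=1.0)
```

Setting `l2_weight=0` recovers pure lasso, and `l1_weight=0` recovers pure ridge, which is the "linear span" of the two terms that the text refers to.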
src/Microsoft.ML.StandardLearners/Standard/LogisticRegression/MulticlassLogisticRegression.cs (+1 -1)

@@ -961,7 +961,7 @@ public IRow GetStatsIRowOrNull(RoleMappedSchema schema)
    /// </summary>
    public partial class LogisticRegression
    {
-       [TlcModule.EntryPoint(Name = "Trainers.LogisticRegressionClassifier", Desc = "Train a logistic regression multi class model", UserName = MulticlassLogisticRegression.UserNameValue, ShortName = MulticlassLogisticRegression.ShortName)]
        internal const string Summary = "Perceptron is a binary classification algorithm that makes its predictions based on a linear function.";

        internal const string DetailedSummary = @"Perceptron is a classification algorithm that makes its predictions based on a linear function.
I.e., for an instance with feature values f0, f1, ..., f_D-1, the prediction is given by the sign of sigma[0, D-1] (w_i * f_i), where w_0, w_1, ..., w_D-1 are the weights computed by the algorithm.
Perceptron is an online algorithm, i.e., it processes the instances in the training set one at a time.
The weights are initialized to be 0, or some random values. Then, for each example in the training set, the value of sigma[0, D-1] (w_i * f_i) is computed.
If this value has the same sign as the label of the current example, the weights remain the same. If they have opposite signs,
the weights vector is updated by either subtracting or adding (if the label is negative or positive, respectively) the feature vector of the current example,
multiplied by a factor 0 < a <= 1, called the learning rate. In a generalization of this algorithm, the weights are updated by adding the feature vector multiplied by the learning rate,
and by the gradient of some loss function (in the specific case described above, the loss is hinge loss, whose gradient is 1 when it is non-zero).
In Averaged Perceptron (AKA voted-perceptron), the weight vectors are stored,
together with a weight that counts the number of iterations it survived (this is equivalent to storing the weight vector after every iteration, regardless of whether it was updated or not).
The prediction is then calculated by taking the weighted average of all the sums sigma[0, D-1] (w_i * f_i) over the different weight vectors.";

        public class Arguments : AveragedLinearArguments
        {

@@ -91,7 +102,7 @@ public override LinearBinaryPredictor CreatePredictor()
        [TlcModule.EntryPoint(Name = "Trainers.AveragedPerceptronBinaryClassifier", Desc = "Train an averaged perceptron.", UserName = UserNameValue, ShortName = ShortName)]
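The averaged perceptron update rule described in the summary above can be sketched as follows. This is a minimal illustration, not the ML.NET implementation; instead of storing every intermediate weight vector with its survival count, it keeps a running sum of the weights after each example, which is mathematically equivalent:

```python
def averaged_perceptron(X, y, lr=1.0, epochs=10):
    """Sketch of the averaged perceptron: online mistake-driven updates,
    with the final weights averaged over every step of training."""
    d = len(X[0])
    w = [0.0] * d    # current weights, initialized to 0
    acc = [0.0] * d  # running sum of the weight vector after each example
    count = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):  # online: one instance at a time
            score = sum(wj * vj for wj, vj in zip(w, xi))
            if score * yi <= 0:   # prediction sign disagrees with the label
                for j in range(d):
                    # add (positive label) or subtract (negative label)
                    # the feature vector, scaled by the learning rate
                    w[j] += lr * yi * xi[j]
            for j in range(d):
                acc[j] += w[j]
            count += 1
    return [a / count for a in acc]  # averaged weights

# Tiny separable problem: the sign of the first feature decides the label.
X = [[1.0, 0.5], [0.8, -0.3], [-1.0, 0.2], [-0.7, -0.6]]
y = [1, 1, -1, -1]
w = averaged_perceptron(X, y)
preds = [1 if sum(wj * vj for wj, vj in zip(w, xi)) >= 0 else -1 for xi in X]
```

The averaging is what distinguishes this from the plain perceptron: weight vectors that survive many examples unchanged dominate the average, which tends to make the final predictor more stable.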
        [TlcModule.EntryPoint(Name = "Trainers.StochasticDualCoordinateAscentClassifier", Desc = "Train an SDCA multi class model", UserName = SdcaMultiClassTrainer.UserNameValue, ShortName = SdcaMultiClassTrainer.ShortName)]
src/Microsoft.ML.StandardLearners/Standard/SdcaRegression.cs (+1 -1)

@@ -131,7 +131,7 @@ protected override Float TuneDefaultL2(IChannel ch, int maxIterations, long rowC
    /// </summary>
    public static partial class Sdca
    {
-       [TlcModule.EntryPoint(Name = "Trainers.StochasticDualCoordinateAscentRegressor", Desc = "Train an SDCA regression model", UserName = SdcaRegressionTrainer.UserNameValue, ShortName = SdcaRegressionTrainer.ShortName)]