src/Microsoft.ML.StandardTrainers/Standard/SdcaBinary.cs (+14 −14)
@@ -2170,7 +2170,7 @@ private protected override void CheckLabel(RoleMappedData examples, out int weig
     }

     /// <summary>
-    /// The<see cref="IEstimator{TTransformer}"/> for training logistic regression using a parallel stochastic gradient method.
+    /// The <see cref="IEstimator{TTransformer}"/> for training logistic regression using a parallel stochastic gradient method.
     /// The trained model is <a href='https://en.wikipedia.org/wiki/Calibration_(statistics)'>calibrated</a> and can produce probability by feeding the output value of the
     /// linear function to a <see cref="PlattCalibrator"/>.
     /// </summary>
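The calibration mentioned in this summary maps the raw linear score to a probability through a fitted sigmoid. As a language-neutral illustration (Python, not ML.NET's `PlattCalibrator` code; the slope `a` and offset `b` are placeholder values, not fitted parameters):

```python
import math

def platt_probability(raw_score, a=-1.0, b=0.0):
    """Platt-style calibration: squash a raw linear score into (0, 1).

    `a` and `b` are illustrative placeholders; a real calibrator fits
    them to held-out scores and labels.
    """
    return 1.0 / (1.0 + math.exp(a * raw_score + b))
```

With these placeholder parameters a score of zero maps to 0.5, and larger scores map monotonically toward 1.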
@@ -2179,7 +2179,7 @@ private protected override void CheckLabel(RoleMappedData examples, out int weig
     /// To create this trainer, use [SgdCalibrated](xref:Microsoft.ML.StandardTrainersCatalog.SgdCalibrated(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,System.String,System.String,System.String,System.Int32,System.Double,System.Single))
     /// or [SgdCalibrated(Options)](xref:Microsoft.ML.StandardTrainersCatalog.SgdCalibrated(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.SgdCalibratedTrainer.Options)).
@@ -2191,14 +2191,14 @@ private protected override void CheckLabel(RoleMappedData examples, out int weig
     ///
     /// ### Training Algorithm Details
     /// The Stochastic Gradient Descent (SGD) is one of the popular stochastic optimization procedures that can be integrated
-    /// into several machine learning tasks to achieve state-of-the-art performance. This trainer implements the Hogwild SGD for binary classification
-    /// that supports multi-threading without any locking. If the associated optimization problem is sparse, Hogwild SGD achieves a nearly optimal
-    /// rate of convergence. For more details about Hogwild SGD can be found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
+    /// into several machine learning tasks to achieve state-of-the-art performance. This trainer implements Hogwild Stochastic Gradient Descent for binary classification
+    /// that supports multi-threading without any locking. If the associated optimization problem is sparse, Hogwild Stochastic Gradient Descent achieves a nearly optimal
+    /// rate of convergence. More details about Hogwild Stochastic Gradient Descent can be found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
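The Hogwild scheme these doc comments describe — worker threads applying sparse SGD updates to a shared weight vector with no locking — can be sketched as follows. This is an illustrative Python sketch of the idea from the linked paper, not ML.NET's implementation; the hinge-loss update and the toy sparse-dict example format are assumptions made for the example.

```python
import threading

def hogwild_sgd(samples, dim, n_threads=4, epochs=5, lr=0.1):
    """Hogwild-style SGD: threads update a shared weight vector without
    any locks; when examples are sparse, updates rarely touch the same
    coordinates, so the occasional lost write barely hurts convergence."""
    w = [0.0] * dim  # shared model state, deliberately unsynchronized

    def worker(chunk):
        for _ in range(epochs):
            for x, y in chunk:  # x is sparse: {feature_index: value}; y in {-1, +1}
                margin = y * sum(w[i] * v for i, v in x.items())
                if margin < 1.0:  # hinge-loss subgradient step
                    for i, v in x.items():
                        w[i] += lr * y * v  # racy, lock-free update

    size = max(1, (len(samples) + n_threads - 1) // n_threads)
    threads = [threading.Thread(target=worker, args=(samples[i:i + size],))
               for i in range(0, len(samples), size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w
```

On a toy separable problem (feature 0 active only in positive examples, feature 1 only in negative ones), the learned weights separate the classes even though no thread ever takes a lock.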
@@ ... @@
     /// To create this trainer, use [SgdNonCalibrated](xref:Microsoft.ML.StandardTrainersCatalog.SgdNonCalibrated(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,System.String,System.String,System.String,Microsoft.ML.Trainers.IClassificationLoss,System.Int32,System.Double,System.Single))
     /// or [SgdNonCalibrated(Options)](xref:Microsoft.ML.StandardTrainersCatalog.SgdNonCalibrated(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.SgdNonCalibratedTrainer.Options)).
@@ ... @@
     /// | Required NuGet in addition to Microsoft.ML | None |
     ///
     /// ### Training Algorithm Details
-    /// The Stochastic Gradient Descent (SGD) is one of the popular stochastic optimization procedures that can be integrated
-    /// into several machine learning tasks to achieve state-of-the-art performance. This trainer implements the Hogwild SGD for binary classification
-    /// that supports multi-threading without any locking. If the associated optimization problem is sparse, Hogwild SGD achieves a nearly optimal
-    /// rate of convergence. For more details about Hogwild SGD can be found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
+    /// Stochastic Gradient Descent is one of the popular stochastic optimization procedures that can be integrated
+    /// into several machine learning tasks to achieve state-of-the-art performance. This trainer implements Hogwild Stochastic Gradient Descent for binary classification
+    /// that supports multi-threading without any locking. If the associated optimization problem is sparse, Hogwild Stochastic Gradient Descent achieves a nearly optimal
+    /// rate of convergence. More details about Hogwild Stochastic Gradient Descent can be found [here](http://arxiv.org/pdf/1106.5730v2.pdf).
@@ ... @@
     /// <param name="labelColumnName">The name of the label column, or dependent variable. The column data must be <see cref="System.Boolean"/>.</param>
-    /// <param name="featureColumnName">The features, or independent variables. The column data must be a known-sized vector of <see cref="System.Single"/></param>
+    /// <param name="featureColumnName">The features, or independent variables. The column data must be a known-sized vector of <see cref="System.Single"/>.</param>
     /// <param name="exampleWeightColumnName">The name of the example weight column (optional).</param>
     /// <param name="numberOfIterations">The maximum number of passes through the training dataset; set to 1 to simulate online learning.</param>
     /// <param name="learningRate">The initial learning rate used by SGD.</param>
@@ -49,7 +49,7 @@ public static SgdCalibratedTrainer SgdCalibrated(this BinaryClassificationCatalo
     }

     /// <summary>
-    /// Creates a <see cref="Trainers.SgdCalibratedTrainer"/> that predicts a target using a linear classification model and advanced options.
+    /// Create <see cref="Trainers.SgdCalibratedTrainer"/> with advanced options, which predicts a target using a linear classification model.
     /// Stochastic gradient descent (SGD) is an iterative algorithm that optimizes a differentiable objective function.
@@ ... @@
     /// <param name="labelColumnName">The name of the label column, or dependent variable. The column data must be <see cref="System.Boolean"/>.</param>
-    /// <param name="featureColumnName">The features, or independent variables. The column data must be a known-sized vector of <see cref="System.Single"/></param>
+    /// <param name="featureColumnName">The features, or independent variables. The column data must be a known-sized vector of <see cref="System.Single"/>.</param>
     /// <param name="exampleWeightColumnName">The name of the example weight column (optional).</param>
     /// <param name="lossFunction">The <a href="https://en.wikipedia.org/wiki/Loss_function">loss</a> function minimized in the training process. Using, for example, <see cref="HingeLoss"/> leads to a support vector machine trainer.</param>
     /// <param name="numberOfIterations">The maximum number of passes through the training dataset; set to 1 to simulate online learning.</param>
@@ -106,7 +106,7 @@ public static SgdNonCalibratedTrainer SgdNonCalibrated(this BinaryClassification
     }

     /// <summary>
-    /// Creates a <see cref="Trainers.SgdNonCalibratedTrainer"/> that predicts a target using a linear classification model and advanced options.
+    /// Create <see cref="Trainers.SgdNonCalibratedTrainer"/> with advanced options, which predicts a target using a linear classification model.
     /// Stochastic gradient descent (SGD) is an iterative algorithm that optimizes a differentiable objective function.
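The summaries above describe SGD as an iterative algorithm optimizing a differentiable objective function. A generic sketch of that loop (illustrative Python, not the trainer's code), fitting y = w·x by least squares on a tiny hand-made dataset:

```python
import random

def sgd_minimize(grad, w0, data, lr=0.05, epochs=100, seed=0):
    """Generic SGD loop: repeatedly step against the gradient of the
    objective evaluated on one example at a time, in random order."""
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):  # one shuffled pass
            w -= lr * grad(w, x, y)
    return w

# Fit y = w * x by minimizing squared error 0.5 * (w*x - y)**2; true w is 2.
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
w_hat = sgd_minimize(lambda w, x, y: (w * x - y) * x, 0.0, data)
```

Each update shrinks the error `w - 2` by a factor `1 - lr * x**2`, so with this learning rate the iterates converge to the true weight.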