@@ -2175,16 +2175,34 @@ private protected override void CheckLabel(RoleMappedData examples, out int weig
 /// linear function to a <see cref="PlattCalibrator"/>.
 /// </summary>
 /// <remarks>
-/// The Stochastic Gradient Descent (SGD) is one of the popular stochastic optimization procedures that can be integrated
-/// into several machine learning tasks to achieve state-of-the-art performance. This trainer implements the Hogwild SGD for binary classification
-/// that supports multi-threading without any locking. If the associated optimization problem is sparse, Hogwild SGD achieves a nearly optimal
-/// rate of convergence. For more details about Hogwild SGD, please refer to http://arxiv.org/pdf/1106.5730v2.pdf.
+/// <format type="text/markdown"><![CDATA[
+/// or [SgdCalibrated(Options)](xref:Microsoft.ML.StandardTrainersCatalog.SgdCalibrated(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.SgdCalibratedTrainer.Options)).
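For context, a minimal sketch of what the SgdCalibrated catalog method referenced in the xref above looks like in use; the column names and hyperparameter values are illustrative and not taken from this change.

using Microsoft.ML;

var mlContext = new MLContext(seed: 0);

// Simple overload: documented column names plus basic hyperparameters.
var calibratedSgd = mlContext.BinaryClassification.Trainers.SgdCalibrated(
    labelColumnName: "Label",
    featureColumnName: "Features",
    numberOfIterations: 20,
    learningRate: 0.01);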
-/// <see cref="SgdNonCalibratedTrainer"/> can train a linear classification model by minimizing any loss function
-/// which implements <see cref="IClassificationLoss"/>.
+/// The <see cref="IEstimator{TTransformer}"/> for training logistic regression using a parallel stochastic gradient method.
 /// </summary>
+/// <remarks>
+/// <format type="text/markdown"><![CDATA[
+/// or [SgdNonCalibrated(Options)](xref:Microsoft.ML.StandardTrainersCatalog.SgdNonCalibrated(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.SgdNonCalibratedTrainer.Options)).
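Similarly, a hedged sketch of the SgdNonCalibrated catalog method named above; it accepts any loss implementing IClassificationLoss, and LogLoss here is only an illustrative choice.

using Microsoft.ML;
using Microsoft.ML.Trainers;

var mlContext = new MLContext(seed: 0);

// Non-calibrated SGD: the loss can be any IClassificationLoss implementation.
var nonCalibratedSgd = mlContext.BinaryClassification.Trainers.SgdNonCalibrated(
    labelColumnName: "Label",
    featureColumnName: "Features",
    lossFunction: new LogLoss());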
-/// <param name="labelColumnName">The name of the label column, or dependent variable.</param>
-/// <param name="featureColumnName">The features, or independent variables.</param>
+/// <param name="labelColumnName">The name of the label column, or dependent variable. The column data must be <see cref="System.Boolean"/>.</param>
+/// <param name="featureColumnName">The features, or independent variables. The column data must be a known-sized vector of <see cref="System.Single"/>.</param>
 /// <param name="exampleWeightColumnName">The name of the example weight column (optional).</param>
 /// <param name="numberOfIterations">The maximum number of passes through the training dataset; set to 1 to simulate online learning.</param>
 /// <param name="learningRate">The initial learning rate used by SGD.</param>
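The parameter descriptions above translate directly into call sites. A small sketch of the online-learning setting the numberOfIterations doc mentions (a single pass over the data), using a hypothetical "Weight" column for the optional example weights:

using Microsoft.ML;

var mlContext = new MLContext();

// One pass over the training data approximates online learning.
// "Weight" is a hypothetical per-example weight column.
var onlineSgd = mlContext.BinaryClassification.Trainers.SgdCalibrated(
    labelColumnName: "Label",
    featureColumnName: "Features",
    exampleWeightColumnName: "Weight",
    numberOfIterations: 1);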
@@ -49,7 +49,7 @@ public static SgdCalibratedTrainer SgdCalibrated(this BinaryClassificationCatalo
 }
 
 /// <summary>
-/// Predict a target using a linear classification model trained with <see cref="SgdCalibratedTrainer"/> and advanced options.
+/// Create <see cref="Trainers.SgdCalibratedTrainer"/> with advanced options, which predicts a target using a linear classification model.
 /// Stochastic gradient descent (SGD) is an iterative algorithm that optimizes a differentiable objective function.
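A sketch of the advanced-options overload this summary describes. The option property names below follow the usual ML.NET trainer Options pattern and are assumptions to check against the current API, not content from this diff.

using Microsoft.ML;
using Microsoft.ML.Trainers;

var mlContext = new MLContext();

var options = new SgdCalibratedTrainer.Options
{
    LabelColumnName = "Label",
    FeatureColumnName = "Features",
    NumberOfIterations = 30,   // assumed property name
    LearningRate = 0.005,      // assumed property name
    L2Regularization = 1e-6f   // assumed property name
};

var trainer = mlContext.BinaryClassification.Trainers.SgdCalibrated(options);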
-/// <param name="labelColumnName">The name of the label column, or dependent variable.</param>
-/// <param name="featureColumnName">The features, or independent variables.</param>
+/// <param name="labelColumnName">The name of the label column, or dependent variable. The column data must be <see cref="System.Boolean"/>.</param>
+/// <param name="featureColumnName">The features, or independent variables. The column data must be a known-sized vector of <see cref="System.Single"/>.</param>
 /// <param name="exampleWeightColumnName">The name of the example weight column (optional).</param>
 /// <param name="lossFunction">The <a href="https://en.wikipedia.org/wiki/Loss_function">loss</a> function minimized in the training process. Using, for example, <see cref="HingeLoss"/> leads to a support vector machine trainer.</param>
 /// <param name="numberOfIterations">The maximum number of passes through the training dataset; set to 1 to simulate online learning.</param>
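The lossFunction remark above (HingeLoss yields a support-vector-machine-style trainer) in sketch form; the iteration count is illustrative.

using Microsoft.ML;
using Microsoft.ML.Trainers;

var mlContext = new MLContext();

// Plugging in HingeLoss makes non-calibrated SGD behave like a linear SVM.
var svmLikeSgd = mlContext.BinaryClassification.Trainers.SgdNonCalibrated(
    lossFunction: new HingeLoss(),
    numberOfIterations: 20);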
@@ -106,7 +106,7 @@ public static SgdNonCalibratedTrainer SgdNonCalibrated(this BinaryClassification
 }
 
 /// <summary>
-/// Predict a target using a linear classification model trained with <see cref="SgdNonCalibratedTrainer"/> and advanced options.
+/// Create <see cref="Trainers.SgdNonCalibratedTrainer"/> with advanced options, which predicts a target using a linear classification model.
 /// Stochastic gradient descent (SGD) is an iterative algorithm that optimizes a differentiable objective function.
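And a matching sketch for the non-calibrated advanced-options overload; the LossFunction property name is assumed by analogy with the simple overload's lossFunction parameter and may differ in the actual Options type.

using Microsoft.ML;
using Microsoft.ML.Trainers;

var mlContext = new MLContext();

var options = new SgdNonCalibratedTrainer.Options
{
    LabelColumnName = "Label",
    FeatureColumnName = "Features",
    LossFunction = new HingeLoss(),   // assumed property name
    NumberOfIterations = 20           // assumed property name
};

var trainer = mlContext.BinaryClassification.Trainers.SgdNonCalibrated(options);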