src/Microsoft.ML.Mkl.Components/MklComponentsCatalog.cs (+2 -2)
@@ -69,7 +69,7 @@ public static OlsTrainer Ols(
 }

 /// <summary>
-/// Create an <see cref="SymbolicSgdLogisticRegressionBinaryTrainer"/> with advanced options, which predicts a target using a linear binary classification model trained over boolean label data.
+/// Create <see cref="SymbolicSgdLogisticRegressionBinaryTrainer"/>, which predicts a target using a linear binary classification model trained over boolean label data.
 /// Stochastic gradient descent (SGD) is an iterative algorithm that optimizes a differentiable objective function.
 /// The <see cref="SymbolicSgdLogisticRegressionBinaryTrainer"/> parallelizes SGD using <a href="https://www.microsoft.com/en-us/research/project/project-parade/#!symbolic-execution">symbolic execution</a>.
 /// </summary>
@@ -102,7 +102,7 @@ public static SymbolicSgdLogisticRegressionBinaryTrainer SymbolicSgdLogisticRegr
 }

 /// <summary>
-/// Create an<see cref= "SymbolicSgdLogisticRegressionBinaryTrainer" />, which predicts a target using a linear binary classification model trained over boolean label data.
+/// Create <see cref= "SymbolicSgdLogisticRegressionBinaryTrainer" /> with advanced options, which predicts a target using a linear binary classification model trained over boolean label data.
 /// Stochastic gradient descent (SGD) is an iterative algorithm that optimizes a differentiable objective function.
 /// The <see cref="SymbolicSgdLogisticRegressionBinaryTrainer"/> parallelizes SGD using <a href="https://www.microsoft.com/en-us/research/project/project-parade/#!symbolic-execution">symbolic execution</a>.
 /// </summary>
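For orientation, here is a minimal sketch of how the two overloads whose summaries are edited above are invoked. The column names and iteration count are illustrative, and the Options property names (LabelColumnName, FeatureColumnName, NumberOfIterations) are assumptions based on the trainer's options pattern, not taken from this diff.

```csharp
using Microsoft.ML;
using Microsoft.ML.Trainers;

var mlContext = new MLContext();

// Simple overload: (labelColumnName, featureColumnName, numberOfIterations),
// matching the (System.String, System.String, System.Int32) xref signature.
var trainer = mlContext.BinaryClassification.Trainers.SymbolicSgdLogisticRegression(
    labelColumnName: "Label",
    featureColumnName: "Features",
    numberOfIterations: 30);

// "With advanced options" overload: configured through the trainer's Options type.
// Property names here are assumed, not confirmed by this diff.
var advancedTrainer = mlContext.BinaryClassification.Trainers.SymbolicSgdLogisticRegression(
    new SymbolicSgdLogisticRegressionBinaryTrainer.Options
    {
        LabelColumnName = "Label",
        FeatureColumnName = "Features",
        NumberOfIterations = 30
    });
```

This is exactly the distinction the wording fix restores: the string-parameter overload is the plain form, and the Options overload is the one "with advanced options".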
src/Microsoft.ML.Mkl.Components/SymSgdClassificationTrainer.cs (+9 -8)
@@ -36,8 +36,8 @@ namespace Microsoft.ML.Trainers
 /// </summary>
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
-/// or [SymbolicStochasticGradientDescent(Options)](xref:Microsoft.ML.MklComponentsCatalog.SymbolicSgdLogisticRegression(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.SymbolicSgdLogisticRegressionBinaryTrainer.Options).
+/// To create this trainer, use [SymbolicStochasticGradientDescent](xref:Microsoft.ML.MklComponentsCatalog.SymbolicSgdLogisticRegression(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,System.String,System.String,System.Int32))
+/// or [SymbolicStochasticGradientDescent(Options)](xref:Microsoft.ML.MklComponentsCatalog.SymbolicSgdLogisticRegression(Microsoft.ML.BinaryClassificationCatalog.BinaryClassificationTrainers,Microsoft.ML.Trainers.SymbolicSgdLogisticRegressionBinaryTrainer.Options)).
@@ -50,11 +50,12 @@
 /// | Required NuGet in addition to Microsoft.ML |Microsoft.ML.Mkl.Components |
+///
 /// ### Training Algorithm Details
-/// The symbolic SGD is a classification algorithm that makes its predictions by finding a separating hyperplane.
+/// The symbolic stochastic gradient descent is an algorithm that makes its predictions by finding a separating hyperplane.
 /// For instance, with feature values $f0, f1,..., f_{D-1}$, the prediction is given by determining what side of the hyperplane the point falls into.
-/// That is the same as the sign of the feautures' weighted sum, i.e. $\sum_{i = 0}^{D-1} (w_i * f_i)$, where $w_0, w_1,..., w_{D-1}$ are the weights computed by the algorithm.
+/// That is the same as the sign of the feature's weighted sum, i.e. $\sum_{i = 0}^{D-1} (w_i * f_i)$, where $w_0, w_1,..., w_{D-1}$ are the weights computed by the algorithm.
 ///
-/// While most of SGD algorithms is inherently sequential - at each step, the processing of the current example depends on the parameters learned from previous examples.
+/// While most symbolic stochastic gradient descent algorithms are inherently sequential - at each step, the processing of the current example depends on the parameters learned from previous examples.
 /// This algorithm trains local models in separate threads and probabilistic model cobminer that allows the local models to be combined
-/// to produce the same result as what a sequential SGD would have produced, in expectation.
+/// to produce the same result as what a sequential symbolic stochastic gradient descent would have produced, in expectation.
 ///
 /// For more information see [Parallel Stochastic Gradient Descent with Sound Combiners](https://arxiv.org/abs/1705.08030).
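The prediction rule described in these remarks, classifying by which side of the separating hyperplane a point falls on, i.e. the sign of $\sum_{i = 0}^{D-1} (w_i * f_i)$, is simple enough to spell out. This is an illustrative helper under that formula, not code from the repository; the bias term is an assumption for completeness.

```csharp
// Illustrative sketch of the prediction rule from the remarks above. With
// weights w_0..w_{D-1} learned by the trainer, classify a point by the sign
// of the features' weighted sum (plus an assumed bias term).
static bool PredictPositive(float[] weights, float bias, float[] features)
{
    float score = bias;
    for (int i = 0; i < weights.Length; i++)
        score += weights[i] * features[i]; // w_i * f_i
    // The side of the separating hyperplane determines the predicted class.
    return score > 0;
}
```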