/// | Required NuGet in addition to Microsoft.ML | None |
///
/// ### Training Algorithm Details
/// This trainer is based on the Stochastic Dual Coordinate Ascent (SDCA) method, a state-of-the-art optimization technique for convex objective functions.
/// The algorithm scales to large out-of-memory data sets thanks to a semi-asynchronous implementation that supports multi-threading.
/// Convergence is underwritten by periodically enforcing synchronization between primal and dual updates in a separate thread.
/// Several choices of loss functions are also provided. The SDCA method combines several of the best properties and capabilities of logistic regression and SVM algorithms.
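/// For reference, a sketch of the formulation from the Shalev-Shwartz and Zhang paper cited below (their notation; the trainer's internal formulation may differ in details): given examples $x_1, \dots, x_n$, convex losses $\phi_i$, and L2 regularization $\lambda$, SDCA maximizes the dual objective
///
/// $$ D(\alpha) = \frac{1}{n} \sum_{i=1}^{n} -\phi_i^*(-\alpha_i) - \frac{\lambda}{2} \left\| \frac{1}{\lambda n} \sum_{i=1}^{n} \alpha_i x_i \right\|^2 $$
///
/// one dual coordinate $\alpha_i$ at a time, maintaining the primal iterate $w(\alpha) = \frac{1}{\lambda n} \sum_i \alpha_i x_i$. The duality gap $P(w(\alpha)) - D(\alpha)$, with primal objective $P(w) = \frac{1}{n} \sum_i \phi_i(w^T x_i) + \frac{\lambda}{2} \|w\|^2$, upper-bounds the primal suboptimality and gives a natural convergence check.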
/// Note that SDCA is a stochastic and streaming optimization algorithm. The results depend on the order of the training data.
/// For reproducible results, it is recommended that one set 'Shuffle' to False and 'NumThreads' to 1.
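/// As an illustration, a minimal sketch of a reproducible configuration. This assumes the options-based overload of the SDCA binary trainer from the ML.NET preview API; the trainer and option names below follow this documentation and may differ across versions:
///
/// ```csharp
/// using Microsoft.ML;
/// using Microsoft.ML.Trainers;
///
/// var mlContext = new MLContext(seed: 0);
///
/// // Single-threaded, unshuffled training makes the streaming SDCA updates deterministic.
/// var trainer = mlContext.BinaryClassification.Trainers.StochasticDualCoordinateAscent(
///     new SdcaBinaryTrainer.Options
///     {
///         Shuffle = false, // fix the order in which training examples are visited
///         NumThreads = 1   // avoid non-deterministic interleaving of multi-threaded updates
///     });
/// ```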
/// Elastic net regularization can be specified by the 'L2Const' and 'L1Threshold' parameters. Note that 'L2Const' affects the rate of convergence.
/// In general, the larger the 'L2Const', the faster SDCA converges.
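/// For orientation, the usual elastic net penalty takes the form
///
/// $$ \frac{\lambda_2}{2} \|w\|_2^2 + \lambda_1 \|w\|_1, $$
///
/// where, loosely speaking, 'L2Const' controls the strength of the $\lambda_2$ term and 'L1Threshold' that of the sparsity-inducing $\lambda_1$ term; the exact internal parameterization is an assumption here and may differ.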
/// For more information, see: [Scaling Up Stochastic Dual Coordinate Ascent](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/main-3.pdf) and
/// [Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization](http://www.jmlr.org/papers/volume14/shalev-shwartz13a/shalev-shwartz13a.pdf).