copt.fmin_SAGA

copt.fmin_SAGA(f, g, x0, step_size=-1, n_jobs=1, max_iter=100, tol=1e-06, verbose=False, callback=None, trace=False)

Stochastic average gradient augmented (SAGA) algorithm.
The SAGA algorithm can solve optimization problems of the form

    argmin_x f(x) + g(x)

A plain-NumPy sketch of the corresponding update is given after the Returns description below.

Parameters:
- f, g – loss functions. g can be None.
- x0 (ndarray) – Starting point.
Returns: opt – The optimization result, represented as a scipy.optimize.OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the optimizer exited successfully; and message, which describes the cause of termination. See scipy.optimize.OptimizeResult for a description of other attributes.

Return type: OptimizeResult
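The update performed at each iteration can be illustrated with a short plain-NumPy sketch. This is a simplified illustration of the SAGA update from Defazio et al. (2014), not copt's implementation; the function name saga_sketch and the callables grad_i and prox_g are hypothetical, standing in for the gradient of the i-th term of f and the proximal operator of g.

import numpy as np


def saga_sketch(grad_i, prox_g, x0, n_samples, step_size, max_iter=100, seed=0):
    """Plain-NumPy sketch of the SAGA update (Defazio et al., 2014).

    ``grad_i(x, i)`` returns the gradient of the i-th term of f at x and
    ``prox_g(x, step)`` is the proximal operator of the penalty g; both are
    hypothetical callables used only to illustrate the update rule.
    """
    x = np.asarray(x0, dtype=float).copy()
    # Table of the most recently evaluated gradient for every sample.
    memory = np.array([grad_i(x, i) for i in range(n_samples)])
    avg = memory.mean(axis=0)  # running average of the stored gradients
    rng = np.random.RandomState(seed)
    for _ in range(max_iter * n_samples):
        i = rng.randint(n_samples)
        grad_new = grad_i(x, i)
        # Unbiased gradient estimate: fresh gradient minus stored one plus average.
        v = grad_new - memory[i] + avg
        x = prox_g(x - step_size * v, step_size)
        # Refresh the running average and the gradient table for sample i.
        avg += (grad_new - memory[i]) / n_samples
        memory[i] = grad_new
    return x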
References
Defazio, Aaron, Francis Bach, and Simon Lacoste-Julien. “SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives.” Advances in Neural Information Processing Systems. 2014.
Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien. “ASAGA: Asynchronous parallel SAGA”. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS). 2017.
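Examples

A minimal usage sketch, assuming the copt package is installed. The copt.utils.LogLoss and copt.utils.L1Norm helpers used to build f and g are assumptions made for illustration and are not documented above; g may also be passed as None.

import numpy as np
import copt

# Synthetic logistic-regression data.
rng = np.random.RandomState(0)
A = rng.randn(100, 10)
b = (rng.rand(100) > 0.5).astype(float)
x0 = np.zeros(A.shape[1])  # starting point

# f is the data-fit loss, g an optional penalty (may be None).
# LogLoss and L1Norm are assumed helper objects, used here only for illustration.
f = copt.utils.LogLoss(A, b)
g = copt.utils.L1Norm(0.1)

res = copt.fmin_SAGA(f, g, x0, max_iter=100, tol=1e-6)
print(res.x)        # solution array
print(res.success)  # Boolean flag: did the optimizer exit successfully?
print(res.message)  # cause of termination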