Dec, 2021
Local Quadratic Convergence of Stochastic Gradient Descent with Adaptive Step Size
Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler
TL;DR
This work studies a stochastic gradient descent method with an adaptive step size that achieves local quadratic convergence on problems such as matrix inversion. Using only first-order information, it offers a fast-converging route for applying such optimization methods in deep learning.
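As a rough illustration of the idea (not the paper's exact algorithm), the sketch below runs SGD with a Polyak-style adaptive step size on a matrix-inversion objective. The step-size rule, the column-sampling setup, and all names here are illustrative assumptions.

```python
# Minimal sketch, assuming a Polyak-style adaptive step size
# (eta = per-sample loss / squared gradient norm); this is an
# illustration of adaptive-step SGD for matrix inversion, not the
# paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned matrix
X = np.zeros((n, n))                             # current estimate of A^{-1}

for t in range(5000):
    i = rng.integers(n)                  # sample one column of the identity
    e_i = np.eye(n)[:, i]
    r = A @ X[:, i] - e_i                # residual for the sampled column
    loss_i = 0.5 * r @ r                 # per-sample loss f_i(X)
    g = A.T @ r                          # stochastic gradient w.r.t. X[:, i]
    denom = g @ g
    if denom < 1e-18:                    # column already solved
        continue
    eta = loss_i / denom                 # adaptive (Polyak-style) step size
    X[:, i] -= eta * g                   # SGD update on the sampled column

print("||A X - I||_F =", np.linalg.norm(A @ X - np.eye(n)))
```

The adaptive rule needs no tuning because each per-sample loss can be driven to zero at the exact inverse, so the loss value itself calibrates the step length; whether this matches the paper's step-size rule is an assumption here.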
Abstract
Establishing a fast rate of convergence for optimization methods is crucial to their applicability in practice. With the increasing popularity of deep learning over the past decade, …