BriefGPT.xyz
Apr, 2023
Pointwise convergence theorem of generalized mini-batch gradient descent in deep neural network
Tsuyoshi Yoneda
TL;DR
By constructing a deep neural network and training a ReLU-DNN on a non-smooth indicator function with mini-batch gradient descent, this paper proves pointwise convergence of the network.
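The setting described above can be sketched numerically: a ReLU network trained by mini-batch gradient descent to fit a non-smooth indicator function. This is a minimal illustration, not the paper's construction; the one-hidden-layer architecture, the specific indicator target on [0, 1], and all hyperparameters (width, step size, batch size) are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: non-smooth indicator of the interval (0.25, 0.75).
def indicator(x):
    return ((x > 0.25) & (x < 0.75)).astype(float)

# One-hidden-layer ReLU network f(x) = W2 @ relu(W1*x + b1) + b2
# (illustrative architecture, not the paper's exact DNN).
H = 32                                # hidden width (assumed)
W1 = rng.normal(0.0, 1.0, (H, 1))
b1 = rng.normal(0.0, 1.0, H)
W2 = rng.normal(0.0, 0.1, H)
b2 = 0.0

def forward(x):                       # x: (B,) batch of scalar inputs
    z = W1 * x + b1[:, None]          # pre-activations, shape (H, B)
    a = np.maximum(z, 0.0)            # ReLU
    return W2 @ a + b2, z, a

X = rng.uniform(0.0, 1.0, 2048)       # training inputs
Y = indicator(X)

lr, B = 0.05, 64                      # step size and mini-batch size (assumed)
losses = []
for step in range(2000):
    idx = rng.integers(0, X.size, B)  # sample a mini-batch
    x, y = X[idx], Y[idx]
    pred, z, a = forward(x)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation for the mean-squared loss; (z > 0) is the
    # subgradient taken through the non-smooth ReLU.
    gW2 = a @ err / B
    gb2 = err.mean()
    da = np.outer(W2, err) * (z > 0)  # shape (H, B)
    gW1 = (da @ x)[:, None] / B
    gb1 = da.mean(axis=1)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"mini-batch loss: {losses[0]:.3f} -> {np.mean(losses[-100:]):.3f}")
```

The mini-batch loss should decrease as the network approximates the indicator pointwise away from its jumps; the theorem concerns convergence in this non-smooth regime, which plain smooth-optimization arguments do not cover.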
Abstract
The theoretical structure of deep neural network (DNN) has been clarified gradually. Imaizumi-Fukumizu (2019) and Suzuki (2019) clarified that the learning ability of DNN is superior to the previous theories when …