Oct, 2020
Adam with Bandit Sampling for Deep Learning
Rui Liu, Tianyi Wu, Barzan Mozafari
TL;DR
This paper proposes a general optimization method called Adambs, which adapts to the varying importance of different training examples as the model converges, thereby accelerating convergence. Experiments show that Adambs converges quickly across a range of models and datasets.
Abstract
Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called Adambs.
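The idea described in the TL;DR — sampling training examples according to their current importance via a bandit, while still performing Adam updates — can be illustrated with a minimal sketch. This is not the paper's implementation: the function name `adam_bandit_fit`, the EXP3-style weight update, and the use of gradient magnitude as the bandit reward are all assumptions made for illustration; gradients are importance-weighted by 1/(n·p_i) to keep the estimate unbiased.

```python
import numpy as np

def adam_bandit_fit(X, y, steps=200, batch=4, lr=0.1,
                    beta1=0.9, beta2=0.999, eps=1e-8, eta=0.05, seed=0):
    """Illustrative sketch: Adam on least squares, with an EXP3-style
    bandit choosing which training examples to sample each step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    m = np.zeros(d)           # Adam first moment
    v = np.zeros(d)           # Adam second moment
    logits = np.zeros(n)      # per-example bandit weights (log space)
    for t in range(1, steps + 1):
        p = np.exp(logits - logits.max())
        p /= p.sum()
        p = 0.9 * p + 0.1 / n          # mix with uniform for exploration
        idx = rng.choice(n, size=batch, replace=True, p=p)
        g = np.zeros(d)
        for i in idx:
            r = X[i] @ w - y[i]
            gi = 2.0 * r * X[i]        # per-example squared-error gradient
            g += gi / (n * p[i])       # importance weighting -> unbiased
            # bandit reward: examples with large gradients get sampled more
            logits[i] += eta * np.linalg.norm(gi) / (n * p[i])
        g /= batch
        # standard Adam update with bias correction
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        mhat = m / (1 - beta1 ** t)
        vhat = v / (1 - beta2 ** t)
        w -= lr * mhat / (np.sqrt(vhat) + eps)
    return w
```

A possible usage: fit a small linear-regression problem and check that the loss drops, e.g. `w = adam_bandit_fit(X, y, steps=500, lr=0.05)`. The bandit here concentrates sampling on examples with large recent gradients, which is one plausible reading of "adapting to the importance of different training examples"; the actual Adambs reward and regret analysis are in the paper.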