Apr, 2020
Regularizing Meta-Learning via Gradient Dropout
Hung-Yu Tseng, Yi-Wen Chen, Yi-Hsuan Tsai, Sifei Liu, Yen-Yu Lin...
TL;DR
This paper proposes a simple yet effective method for alleviating the overfitting risk of gradient-based meta-learning: during the inner-loop optimization, the gradient of each parameter is randomly dropped, which improves the generalization of deep neural networks to new tasks. Experiments and analyses on a wide range of computer vision tasks demonstrate that gradient dropout regularization mitigates overfitting and improves the performance of various gradient-based meta-learning frameworks.
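To make the mechanism concrete, below is a minimal PyTorch sketch of gradient dropout in a MAML-style inner loop, assuming a Bernoulli mask rescaled by 1/(1 - p) so the expected gradient is unchanged. The function names (`drop_grad`, `inner_loop_step`) and hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch

def drop_grad(grad: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    # Keep each gradient entry with probability 1 - p; zero it otherwise.
    # Rescaling by 1 / (1 - p) keeps the expected gradient unchanged,
    # as in standard dropout. (Illustrative variant, not the paper's code.)
    if p <= 0.0:
        return grad
    mask = torch.bernoulli(torch.full_like(grad, 1.0 - p))
    return grad * mask / (1.0 - p)

def inner_loop_step(params, loss, lr=0.01, p=0.1):
    # One MAML-style adaptation step, applying gradient dropout to the
    # inner-loop gradient of each parameter before the update.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [w - lr * drop_grad(g, p) for w, g in zip(params, grads)]
```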
Abstract
With the growing attention on learning-to-learn new tasks using only a few examples, meta-learning has been widely used in numerous problems such as few-shot classification, reinforcement learning, and domain generalization. However, meta-learning models are prone to overfitting when there are not sufficient training tasks for the meta-learners to generalize.