BriefGPT.xyz
Dec, 2023
Accelerating Meta-Learning by Sharing Gradients
Oscar Chang, Hod Lipson
TL;DR
Through an inner-loop regularization mechanism, gradient information is shared across tasks and each contribution is scaled by a meta-learned parameter. This gradual gradient sharing accelerates meta-training and enables higher inner-loop learning rates.
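A minimal sketch of the idea described above, assuming a simple mixing rule: each task's inner-loop gradient is blended with the mean gradient across tasks, and the blending coefficient `alpha` stands in for the meta-learned sharing parameter. The function name, shapes, and the exact mixing formula are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def shared_inner_step(params, task_grads, lr=0.1, alpha=0.5):
    """One inner-loop gradient step per task with gradient sharing.

    params:     (n_tasks, dim) per-task parameters
    task_grads: (n_tasks, dim) per-task gradients
    alpha:      sharing coefficient in [0, 1]; stands in for the
                meta-learned scaling parameter (assumption)
    """
    # Average gradient across all tasks in the batch.
    mean_grad = task_grads.mean(axis=0, keepdims=True)
    # Blend each task's own gradient with the shared mean gradient.
    mixed = (1.0 - alpha) * task_grads + alpha * mean_grad
    return params - lr * mixed

# Toy usage with three tasks and two parameters each.
params = np.zeros((3, 2))
grads = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_params = shared_inner_step(params, grads, lr=0.1, alpha=0.5)
```

With `alpha = 0` each task updates independently (standard inner loop); with `alpha = 1` every task receives the same shared update. Annealing or meta-learning `alpha` gives the gradual sharing the TL;DR refers to.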
Abstract
The success of gradient-based meta-learning is primarily attributed to its ability to leverage related tasks to learn task-invariant information. However, the absence of interactions between different tasks in th…