BriefGPT.xyz
Feb, 2021
Meta-strategy for Learning Tuning Parameters with Guarantees
Dimitri Meunier, Pierre Alquier
TL;DR
This paper proposes a meta-learning strategy that learns, by minimizing regret bounds, the initialization and step size of online gradient methods, as well as the prior distribution or learning rate of exponentially weighted aggregation. A regret analysis determines whether meta-learning actually improves learning on each individual task.
Abstract
Online gradient methods, like the online gradient algorithm (OGA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario, and we propose a meta-strategy …
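To make the tuning parameters concrete, here is a minimal sketch of the online gradient algorithm with the two parameters the meta-strategy learns, an initialization and a step size. The losses, names, and values below are illustrative assumptions, not the authors' setup.

```python
def oga(grad_fns, theta0, eta):
    """Online gradient algorithm: theta_{t+1} = theta_t - eta * grad_t(theta_t).

    theta0 (initialization) and eta (step size) are exactly the tuning
    parameters that are hard to set in practice and that a meta-strategy
    could learn across tasks.
    """
    theta = theta0
    iterates = [theta]
    for grad in grad_fns:          # gradients revealed one round at a time
        theta = theta - eta * grad(theta)
        iterates.append(theta)
    return iterates

# Illustrative task: squared losses l_t(theta) = (theta - y_t)^2,
# whose gradient is 2 * (theta - y_t).
ys = [1.0, 0.5, 1.5, 1.0]
grads = [lambda th, y=y: 2.0 * (th - y) for y in ys]
path = oga(grads, theta0=0.0, eta=0.25)
```

A poor choice of `theta0` or `eta` inflates the regret on every task, which is why learning them across related tasks, as the paper studies, can help.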