BriefGPT.xyz
Feb, 2019
A Unifying Bayesian View of Continual Learning
Sebastian Farquhar, Yarin Gal
TL;DR
This paper introduces a new Bayesian-derived loss function for continual learning. Rather than relying solely on the posterior distributions from earlier tasks, it adapts the model by changing the likelihood term, bringing prior-focused and likelihood-focused approaches together under a single framework.
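To make the idea concrete, here is a minimal sketch of a prior-focused Bayesian continual-learning loss of the kind the summary alludes to: the current task's negative log-likelihood plus a Gaussian penalty that treats the approximate posterior from previous tasks as the new prior. The function name, shapes, and diagonal-precision assumption are illustrative, not the paper's actual implementation.

```python
import numpy as np

def continual_loss(nll_current, params, prev_means, prev_precisions):
    """Current-task NLL plus a quadratic penalty anchoring the parameters
    to the (Gaussian, diagonal-precision) posterior of earlier tasks.

    This is a generic prior-focused sketch, not the paper's exact loss.
    """
    prior_term = 0.5 * np.sum(prev_precisions * (params - prev_means) ** 2)
    return nll_current + prior_term

# Toy usage: two parameters, unit precisions carried over from earlier tasks.
params = np.array([1.0, -0.5])
prev_means = np.array([0.8, -0.3])
prev_precisions = np.ones(2)
loss = continual_loss(nll_current=2.0, params=params,
                      prev_means=prev_means, prev_precisions=prev_precisions)
# loss = 2.0 + 0.5 * (0.2**2 + 0.2**2) = 2.04
```

The penalty keeps parameters that were important for earlier tasks (high precision) close to their old values, while leaving low-precision directions free to fit the new data.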
Abstract
Some machine learning applications require continual learning - where data comes in a sequence of datasets, each of which is used for training and then permanently discarded. From a …