BriefGPT.xyz
May, 2020
Generalization Bounds via Information Density and Conditional Information Density
Fredrik Hellström, Giuseppe Durisi
TL;DR
Using an approach based on an exponential inequality, we study bounds on the generalization error of randomized learning algorithms and on its probability distribution. For sub-Gaussian loss functions, we provide new bounds in terms of the information density between the training data and the output hypothesis, and we extend the approach to the setting in which the training data are a randomly selected subset.
Abstract
We present a general approach, based on an exponential inequality, to derive bounds on the generalization error of randomized learning algorithms. Using this approach, we provide bounds on the average …
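For context, bounds of this type refine the classical mutual-information generalization bound of Xu and Raginsky (2017), which is stated below; this is a closely related result from the literature, not necessarily the exact bound derived in the paper.

```latex
% Classical mutual-information bound (Xu & Raginsky, 2017):
% if the loss \ell(w, Z) is \sigma-sub-Gaussian under Z \sim P_Z for every w,
% then the expected generalization error of a randomized algorithm P_{W|S},
% trained on n i.i.d. samples S = (Z_1, \dots, Z_n), satisfies
\left| \mathbb{E}\!\left[ L_{P_Z}(W) - L_{S}(W) \right] \right|
  \le \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)}
```

Here \(L_{P_Z}(W)\) is the population risk, \(L_S(W)\) the empirical risk, and \(I(W;S)\) the mutual information between the output hypothesis and the training data; the paper's exponential-inequality approach yields bounds in terms of the information density, whose expectation is this mutual information.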