January 2019
Tightening Mutual Information Based Bounds on Generalization Error
Yuheng Bu, Shaofeng Zou, Venugopal V. Veeravalli
TL;DR
An information-theoretic upper bound on the generalization error of supervised learning algorithms is derived; it holds under more general conditions on the loss function and, when applied to noisy and iterative algorithms, gives a tighter characterization of the generalization error than existing results.
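For a sense of the bound family the title refers to, the following LaTeX sketch contrasts the classical full-dataset mutual-information bound with a per-sample refinement. The setup (i.i.d. samples Z_1, …, Z_n, hypothesis W drawn from P_{W|S}, a σ-sub-Gaussian loss) and the exact form of the inequalities are reconstructed from the standard literature on mutual-information generalization bounds, not quoted from the paper, so the precise conditions and constants may differ.

% Setup (assumed): S = (Z_1, \dots, Z_n) drawn i.i.d. from \mu, hypothesis W \sim P_{W|S},
% and a loss \ell(w, z) that is \sigma-sub-Gaussian under Z \sim \mu for every fixed w.

% Classical full-dataset mutual-information bound (Xu & Raginsky, 2017):
\bigl| \mathrm{gen}(\mu, P_{W|S}) \bigr| \;\le\; \sqrt{\frac{2\sigma^{2}\, I(W; S)}{n}}

% Per-sample refinement of the kind the title suggests: replace I(W; S) by the
% mutual information between W and each individual sample, then average.
\bigl| \mathrm{gen}(\mu, P_{W|S}) \bigr| \;\le\; \frac{1}{n} \sum_{i=1}^{n} \sqrt{2\sigma^{2}\, I(W; Z_i)}

The per-sample form can be strictly smaller because each I(W; Z_i) may remain finite even when the joint quantity I(W; S) diverges, which is consistent with the TL;DR's claim of a tighter characterization for noisy and iterative algorithms.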
Abstract
A mutual information based upper bound on the generalization error of a supervised learning algorithm is derived in this paper. The bound is constructed in terms of the …