May, 2019
Distributionally Robust Optimization and Generalization in Kernel Methods
Matthew Staib, Stefanie Jegelka
TL;DR
This paper studies DRO with uncertainty sets measured by the maximum mean discrepancy (MMD), shows that MMD DRO is essentially equivalent to regularization by the Hilbert norm and reveals deep connections to classical results in statistical learning, and uses DRO to prove a generalization bound for Gaussian kernel ridge regression, yielding a new regularizer.
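As a rough sketch of the claimed equivalence between MMD DRO and Hilbert-norm regularization (the notation below is illustrative, not necessarily the paper's, and it assumes the loss ℓ_f of a hypothesis f itself lies in the RKHS \(\mathcal{H}\)): since \(\mathrm{MMD}(Q,\hat P_n)=\sup_{\|g\|_{\mathcal H}\le 1}\mathbb{E}_Q[g]-\mathbb{E}_{\hat P_n}[g]\), any distribution \(Q\) within MMD radius \(\epsilon\) of the empirical distribution \(\hat P_n\) can raise the expected loss by at most \(\epsilon\,\|\ell_f\|_{\mathcal H}\):

\[
\sup_{\mathrm{MMD}(Q,\hat P_n)\le\epsilon}\ \mathbb{E}_{Q}\big[\ell_f(z)\big]
\;\le\;
\mathbb{E}_{\hat P_n}\big[\ell_f(z)\big] + \epsilon\,\|\ell_f\|_{\mathcal H}.
\]

The worst-case (DRO) risk is thus bounded by the empirical risk plus an \(\epsilon\)-weighted Hilbert-norm penalty, which is the sense in which MMD DRO behaves like Hilbert-norm regularization.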
Abstract
Distributionally robust optimization (DRO) has attracted attention in machine learning due to its connections to regularization, generalization, and robustness. Existing work has considered uncertainty sets based …