BriefGPT.xyz
Mar, 2020
Efficient improper learning for online logistic regression
Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi
TL;DR
This paper studies online logistic regression and proposes an efficient improper algorithm that avoids constants exponential in B while retaining logarithmic regret. By combining regularized empirical risk minimization with a surrogate loss, the new algorithm achieves a regret bound scaling as O(B log(Bn)), with per-round time complexity O(d^2).
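To make the setting concrete, the following is a minimal sketch of the online logistic regression protocol using a *proper* baseline (projected online gradient descent, which keeps its iterate inside the 2-ball of radius B). This is illustrative only and is not the paper's improper algorithm; the function names and the step size `eta` are assumptions of this sketch.

```python
import numpy as np

def logistic_loss(w, x, y):
    # log(1 + exp(-y <w, x>)), with labels y in {-1, +1}
    return np.log1p(np.exp(-y * x.dot(w)))

def project_ball(w, B):
    # Euclidean projection onto the 2-ball of radius B
    norm = np.linalg.norm(w)
    return w if norm <= B else (B / norm) * w

def online_gd_logistic(stream, d, B, eta=0.1):
    """Proper baseline: predict with w_t, suffer the logistic loss,
    take a gradient step, and project back into the B-ball."""
    w = np.zeros(d)
    total_loss = 0.0
    for x, y in stream:
        total_loss += logistic_loss(w, x, y)
        # gradient of the logistic loss at w
        g = -y * x / (1.0 + np.exp(y * x.dot(w)))
        w = project_ball(w - eta * g, B)
    return w, total_loss
```

The regret of such a learner is measured against the best fixed comparator in the same B-ball; the point of the paper is that improper learners (which may predict outside the ball) can achieve logarithmic regret without the exponential-in-B constant that proper ones incur.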
Abstract
We consider the setting of online logistic regression and consider the regret with respect to the 2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm which has logarithmic …