Zheng Qu, Peter Richtárik, Martin Takáč, Olivier Fercoq
TL;DR: We propose a new approach to minimizing regularized empirical loss via the Stochastic Dual Newton Ascent algorithm. The method updates a random subset of the dual variables and can exploit all of the curvature information in the model, yielding clear improvements in practice, especially for quadratic loss functions.
Abstract
We propose a new algorithm for minimizing regularized empirical loss: Stochastic Dual Newton Ascent (SDNA). Our method is dual in nature: in each iteration we update a random subset of the →
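The block-wise dual update described above, a Newton step on a random subset of dual coordinates, can be sketched on a toy concave quadratic dual objective. This is a hypothetical illustration under simplifying assumptions (a synthetic SPD curvature matrix M and block size tau chosen for the example), not the authors' SDNA implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy concave dual objective D(alpha) = b^T alpha - 0.5 alpha^T M alpha,
# with M symmetric positive definite. M, b, n, and tau are illustrative
# stand-ins, not quantities from the paper.
n, tau = 50, 5
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)   # well-conditioned SPD curvature matrix
b = rng.standard_normal(n)

alpha = np.zeros(n)
for _ in range(500):
    # Sample a random subset S of dual coordinates
    S = rng.choice(n, size=tau, replace=False)
    # Partial gradient of D restricted to the block S
    grad_S = b[S] - M[S, :] @ alpha
    # Exact Newton step on the block: solve with the tau-by-tau
    # curvature sub-block, i.e. use all curvature information in S
    alpha[S] += np.linalg.solve(M[np.ix_(S, S)], grad_S)

# alpha approaches the maximizer M^{-1} b, so the residual M alpha - b shrinks
print(np.linalg.norm(M @ alpha - b))
```

The key design point, in contrast to plain coordinate ascent, is that the update inverts the full sub-block of the curvature matrix indexed by S rather than only its diagonal, which is what lets such methods exploit second-order information within each sampled block.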