Feb, 2018
Distributed Stochastic Optimization via Adaptive Stochastic Gradient Descent
Ashok Cutkosky, Robert Busa-Fekete
TL;DR
This paper proposes an efficient distributed stochastic optimization method that combines adaptivity with variance reduction, making it possible to parallelize any serial online learning algorithm. The method achieves the optimal convergence rate without prior knowledge of the smoothness parameter, and an implementation on the Spark distributed framework handles large-scale logistic regression problems efficiently.
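To make the summary concrete, below is a minimal sketch of the general pattern it describes: workers compute variance-reduced gradient corrections around an anchor point in parallel (map-reduce style, as one would do over Spark partitions), and the step size adapts to observed gradient magnitudes instead of requiring a known smoothness constant. This is an illustrative sketch only, not the paper's exact algorithm; the AdaGrad-style step-size rule, the function names, and the `shards` data layout are assumptions for the example.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of the average logistic loss on one data shard (labels in {-1, +1})."""
    z = y * (X @ w)
    return -(X.T @ (y / (1.0 + np.exp(z)))) / len(y)

def distributed_vr_step(w, w_anchor, full_grad_anchor, shards, lr):
    """One variance-reduced step: each shard's SVRG-style correction is averaged.
    In a Spark job this list comprehension would be a map over partitions."""
    corrections = [
        logistic_grad(w, X, y) - logistic_grad(w_anchor, X, y) + full_grad_anchor
        for X, y in shards
    ]
    return w - lr * np.mean(corrections, axis=0)

def train(shards, dim, rounds=20, inner_steps=10):
    w = np.zeros(dim)
    grad_sq_sum = 1e-12  # accumulator for the adaptive (AdaGrad-style) step size
    for _ in range(rounds):
        w_anchor = w.copy()
        # Full gradient at the anchor point, computed in parallel over shards.
        full_grad_anchor = np.mean(
            [logistic_grad(w_anchor, X, y) for X, y in shards], axis=0)
        grad_sq_sum += np.dot(full_grad_anchor, full_grad_anchor)
        lr = 1.0 / np.sqrt(grad_sq_sum)  # adaptive: no smoothness constant needed
        for _ in range(inner_steps):
            w = distributed_vr_step(w, w_anchor, full_grad_anchor, shards, lr)
    return w
```

The sketch only shows the combination of parallel gradient aggregation, variance reduction, and an adaptive step size; the paper's actual guarantees concern wrapping arbitrary serial online learning algorithms, which this simplified loop does not capture.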
Abstract
Stochastic convex optimization algorithms are the most popular way to train machine learning models on large-scale data. Scaling up the training process of these models is crucial in many applications, but the mo…