Variance-reduction techniques such as SVRG provide simple and fast algorithms for optimizing a convex finite-sum objective. For nonconvex objectives, these techniques can also find a first-order stationary point.
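To make the variance-reduction idea concrete, here is a minimal SVRG sketch for a finite-sum objective f(w) = (1/n) Σᵢ fᵢ(w). The function name `svrg`, the signature, and the defaults (inner-loop length m = 2n, constant step size) are illustrative choices, not taken from any of the papers summarized here.

```python
import numpy as np

def svrg(grad_i, w0, n, step=0.1, epochs=20, m=None):
    """Minimal SVRG sketch for f(w) = (1/n) * sum_i f_i(w).

    grad_i(w, i) returns the gradient of the i-th component at w.
    All parameter names and defaults are illustrative.
    """
    rng = np.random.default_rng(0)
    m = m or 2 * n                 # inner-loop length (a common heuristic)
    w = w0.copy()
    for _ in range(epochs):
        w_snap = w.copy()
        # full gradient at the snapshot point
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = rng.integers(n)
            # variance-reduced stochastic gradient: unbiased, and its
            # variance shrinks as w approaches the snapshot w_snap
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= step * g
    return w
```

For a least-squares objective fᵢ(w) = ½(aᵢ·w − bᵢ)², `grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]` recovers the exact solution to high accuracy after a few dozen epochs, which is the linear convergence the finite-sum convex setting promises.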
This paper proposes a zeroth-order optimization algorithm (ZO-SVRG) together with a corresponding fast-converging variance-reduction scheme for applications that require zeroth-order optimization, such as black-box material classification and generating adversarial examples against black-box deep neural network models. Our theoretical analysis reveals an essential difficulty of ZO-SVRG, and we propose two accelerated variants of ZO-SVRG that exploit variance-reduced gradient estimators and attain the best known rate for ZO stochastic optimization (in terms of iteration count). Compared with other state-of-the-art ZO algorithms, our approach strikes a balance between function-query complexity and convergence rate.
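The zeroth-order setting above replaces true gradients with estimates built purely from function evaluations. A common building block is the random-direction (two-point) estimator, sketched below; the function name `zo_gradient` and the parameter names `mu` (smoothing radius) and `q` (number of random probes) are illustrative assumptions, not the papers' notation.

```python
import numpy as np

def zo_gradient(f, w, mu=1e-3, q=10, rng=None):
    """Random-direction zeroth-order gradient estimator (sketch).

    Averages q finite-difference probes along random Gaussian
    directions; only function values of f are used, never its
    gradient. Parameter names mu and q are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    d = w.size
    g = np.zeros(d)
    for _ in range(q):
        u = rng.normal(size=d)          # random probe direction
        # two-point finite difference along u, scaled back onto u
        g += (f(w + mu * u) - f(w)) / mu * u
    return g / q
```

The estimator is unbiased for a smoothed version of f, but its variance grows with the dimension; this is exactly why plugging it into SVRG-style updates needs the careful analysis and the accelerated variants the abstract describes.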
CheapSVRG is proposed as a new stochastic variance-reduction optimization scheme that achieves a linear convergence rate via a surrogate gradient computation while keeping the overall computational complexity in check.