In this paper, we focus on a novel optimization problem in which the
objective function is a black-box and can only be evaluated through a ranking
oracle. This problem is common in real-world applications, particularly in
cases where the function is assessed by human judges, with reinforcement learning from human feedback (RLHF) as a prominent example.
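The ranking-oracle setting can be made concrete with a toy sketch. Everything below (the names rank_oracle and rank_based_step, the best-of-m update rule, and all hyperparameters) is an illustrative assumption, not the method proposed in the paper:

```python
import numpy as np

def rank_oracle(points, f):
    # Stand-in for a ranking oracle (e.g. a human judge): only the ordering
    # of candidates under the hidden objective f is revealed, never f itself.
    return np.argsort([f(p) for p in points])

def rank_based_step(x, f, rng, sigma=0.1, m=4, lr=0.5):
    # Hypothetical descent step using only ranking feedback: sample m
    # Gaussian perturbations of x, ask the oracle to rank them, and move
    # toward the best-ranked candidate.
    candidates = [x + sigma * rng.standard_normal(x.shape) for _ in range(m)]
    best = candidates[rank_oracle(candidates, f)[0]]
    return x + lr * (best - x)

# Usage: minimize a quadratic while observing only rankings of candidates.
f = lambda p: float(np.sum(p ** 2))
rng = np.random.default_rng(0)
x = np.ones(3)
for _ in range(300):
    x = rank_based_step(x, f, rng)
```

Even this naive rule makes progress because a ranking among nearby points carries directional information about the objective, which is the premise the paper builds on.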
This paper extends the learning-to-learn (L2L) framework to the zeroth-order (ZO) optimization setting, where no explicit gradient information is available. The learned optimizer is modeled as a recurrent neural network (RNN) that approximates gradients with a ZO gradient estimator and exploits knowledge from previous iterations to produce parameter updates; a second RNN is further introduced to learn a Gaussian sampling rule and dynamically guide the sampling of query directions. The learned optimizer achieves superior convergence rates and final solutions on both synthetic and real-world ZO optimization tasks, most notably black-box adversarial attacks.
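A minimal NumPy sketch of the kind of ZO gradient estimate such a learned optimizer consumes: the Gaussian two-point estimator, which averages directional finite differences over random query directions. The function name and hyperparameters here are my own choices for illustration, not the paper's code:

```python
import numpy as np

def zo_gradient(f, x, rng, sigma=1e-3, num_dirs=100):
    # Two-point randomized zeroth-order gradient estimator: average the
    # directional finite difference (f(x+su) - f(x-su)) / (2s) * u over
    # Gaussian directions u. E[u u^T] = I makes it unbiased for smooth f
    # as sigma -> 0; variance shrinks with more query directions.
    grad = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.shape)
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / num_dirs

# Usage: for f(p) = sum(p^2) the true gradient at x is 2x.
f = lambda p: float(np.sum(p ** 2))
x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(f, x, np.random.default_rng(0))
```

In the paper's setup, the second RNN would replace the isotropic Gaussian here with a learned sampling distribution over query directions.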
DeepZero is a zeroth-order (ZO) optimization framework that makes ZO optimization viable for training deep neural networks, achieving performance comparable to first-order training, through three main innovations: (1) coordinate-wise gradient estimation (CGE), which outperforms randomized vector-wise estimation in both training accuracy and computational efficiency; (2) a sparsity-induced ZO training protocol that extends model pruning to exploit sparse deep-learning priors; and (3) feature reuse and forward parallelization, which improve the practical implementation of ZO training.
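The CGE idea can be sketched as follows; the helper name cge_gradient, the forward-difference form, and the coords argument are illustrative assumptions rather than DeepZero's actual implementation:

```python
import numpy as np

def cge_gradient(f, x, mu=1e-4, coords=None):
    # Coordinate-wise gradient estimation (CGE): perturb one coordinate at
    # a time and take a finite difference. `coords` restricts estimation to
    # a subset of coordinates, mirroring the pruning-induced sparsity that
    # keeps the query count manageable for large models.
    coords = range(x.size) if coords is None else coords
    fx = f(x)  # the unperturbed evaluation is computed once and reused
    grad = np.zeros_like(x)
    for i in coords:
        e = np.zeros_like(x)
        e[i] = mu
        grad[i] = (f(x + e) - fx) / mu
    return grad

# Usage: for f(p) = sum(p^2) the true gradient at x is 2x.
f = lambda p: float(np.sum(p ** 2))
x = np.array([1.0, -2.0, 0.5])
g = cge_gradient(f, x)
```

Because each coordinate's query is independent, these forward evaluations are also easy to batch, which is the intuition behind the forward-parallelization point above.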