BriefGPT.xyz
May, 2024
Adversarial Training of Two-Layer Polynomial and ReLU Activation Networks via Convex Optimization
Daniel Kuelbs, Sanjay Lall, Mert Pilanci
TL;DR
Training neural networks which are robust to adversarial attacks remains an important problem in deep learning, especially as heavily overparameterized models are adopted in safety-critical settings.
Abstract
Training neural networks which are robust to adversarial attacks remains an important problem in deep learning, especially as heavily overparameterized models are adopted in safety-critical settings. Drawing from …
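To make "robust to adversarial attacks" concrete, here is a minimal sketch of a standard gradient-based (FGSM-style) adversarial perturbation against a toy two-layer ReLU network. This illustrates the attack model only; it is not the paper's convex-optimization training method, and all names, dimensions, and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network: f(x) = w2 @ relu(W1 @ x)
# (dimensions chosen arbitrarily for illustration)
d, m = 5, 8          # input dimension, hidden width
W1 = rng.normal(size=(m, d))
w2 = rng.normal(size=m)

def f(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def input_grad(x, y):
    """Gradient of the squared loss (f(x) - y)^2 with respect to the input x."""
    z = W1 @ x
    relu_mask = (z > 0).astype(float)     # derivative of ReLU at each hidden unit
    df_dx = W1.T @ (relu_mask * w2)       # chain rule through the hidden layer
    return 2.0 * (f(x) - y) * df_dx

x = rng.normal(size=d)
y = 1.0
eps = 0.1

# FGSM-style attack: perturb each input coordinate by eps in the
# direction (sign of the gradient) that increases the loss
x_adv = x + eps * np.sign(input_grad(x, y))

loss = lambda x_: (f(x_) - y) ** 2
print("clean loss:", loss(x))
print("adversarial loss:", loss(x_adv))
```

Robust (adversarial) training then minimizes the loss on such worst-case perturbed inputs rather than on the clean inputs alone; the perturbation is bounded in infinity-norm by eps by construction.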