May, 2023
Dynamic Gradient Balancing for Enhanced Adversarial Attacks on Multi-Task Models
Lijun Zhang, Xiao Liu, Kaleel Mahmood, Caiwen Ding, Hui Guan
TL;DR
This paper studies single-task adversarial attacks on multi-task learning models and proposes the Dynamic Gradient Balancing Attack (DGBA), which is based on the average relative loss change across tasks. The attack is evaluated extensively on two popular multi-task learning benchmarks. The results show that parameter sharing improves task accuracy but does not correspondingly improve model robustness.
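The summary above says DGBA balances per-task gradients using the average relative loss change across tasks. As a rough illustration only, the following sketch probes each task's loss change under a small signed step, weights the task gradients by those relative changes, and takes an FGSM-style step on the weighted gradient. The toy losses, probing rule, and normalization here are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def task_losses(x):
    # Two toy differentiable "task losses" over the input x
    # (hypothetical stand-ins for real multi-task model losses).
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x - 3.0) ** 2)])

def task_grads(x):
    # Analytic gradients of the toy losses w.r.t. the input x.
    return np.stack([2.0 * (x - 1.0), 2.0 * (x - 3.0)])

def dgb_attack_step(x, step=0.1):
    # Probe each task with a signed step along its own gradient and
    # measure the relative loss change that step induces; tasks whose
    # loss can still be moved get larger weights in the combined
    # attack direction.
    base = task_losses(x)
    grads = task_grads(x)
    rel_change = np.empty(len(base))
    for i, g in enumerate(grads):
        probed = task_losses(x + step * np.sign(g))
        rel_change[i] = (probed[i] - base[i]) / (abs(base[i]) + 1e-12)
    weights = rel_change / (np.sum(np.abs(rel_change)) + 1e-12)
    combined = np.tensordot(weights, grads, axes=1)
    # FGSM-style signed step on the balanced gradient.
    return x + step * np.sign(combined)

x = np.zeros(3)
for _ in range(5):
    x = dgb_attack_step(x)
print(task_losses(x))  # both toy task losses grow under the attack
```

In this toy setup the two gradients point the same way, so the balanced step degrades both tasks at once; with conflicting tasks, the relative-change weighting is what decides which task the perturbation favors.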
Abstract
Multi-task learning (MTL) creates a single machine learning model, called a multi-task model, to simultaneously perform multiple tasks. Although the security of single-task classifiers has been extensively studied, …