BriefGPT.xyz
Oct, 2024
Unitary Multi-Margin BERT for Robust Natural Language Processing
Hao-Yuan Chang, Kang L. Wang
TL;DR
This work tackles the vulnerability of deep learning to adversarial attacks, specifically in natural language processing (NLP) systems. By combining unitary weights with a multi-margin loss, it proposes a new, general-purpose technique that significantly improves the robustness of BERT models, raising post-attack classification accuracy by 5.3% to 73.8% while maintaining competitive pre-attack accuracy.
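The summary names a "multi-margin loss" but does not give the authors' exact formulation. As a hedged sketch, the standard multi-class margin (hinge) loss is shown below: every incorrect class whose logit comes within a fixed margin of the true class's logit is penalized linearly. The function name and margin value are illustrative, not from the paper.

```python
import numpy as np

def multi_margin_loss(logits, labels, margin=1.0):
    """Multi-class hinge (margin) loss, summed over classes and
    averaged over the batch.

    Each incorrect class whose logit comes within `margin` of the
    true class's logit contributes a linear penalty; widening the
    margin pushes decision boundaries away from the data, which is
    the intuition behind margin-based robustness.
    """
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]            # true-class logits, shape (n,)
    penalties = np.maximum(0.0, margin - (correct[:, None] - logits))
    penalties[np.arange(n), labels] = 0.0             # the true class is not penalized
    return penalties.sum(axis=1).mean()

# Well-separated logits incur zero loss; exact ties incur the full margin.
easy = multi_margin_loss(np.array([[3.0, 0.0], [0.0, 3.0]]), np.array([0, 1]))
tied = multi_margin_loss(np.array([[1.0, 1.0]]), np.array([0]))
```

PyTorch ships an equivalent built-in (`torch.nn.MultiMarginLoss`), which additionally averages over the number of classes.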
Abstract
Recent developments in Adversarial Attacks on deep learning leave many mission-critical Natural Language Processing (NLP) systems at risk of exploitation. To address the lack of computationally efficient adversar…
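The summary's other ingredient is unitary weights. The paper's exact mechanism for maintaining unitarity is not given in this excerpt; one standard way to impose the constraint, shown here as an assumption-laden sketch, is to project a weight matrix onto the nearest orthogonal/unitary matrix via its polar decomposition. A unitary matrix preserves vector norms, so it cannot amplify small (adversarial) input perturbations.

```python
import numpy as np

def nearest_unitary(w):
    """Return the orthogonal/unitary matrix closest to `w` in the
    Frobenius norm: take the SVD, drop the singular values, and keep
    only the orthogonal factors (the polar decomposition's unitary part)."""
    u, _, vt = np.linalg.svd(w)
    return u @ vt

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))   # an unconstrained weight matrix
q = nearest_unitary(w)            # q @ q.T is the identity, so ||q x|| == ||x||
```

In training, such a projection (or a parameterization like the matrix exponential of a skew-symmetric matrix) would be applied to keep the constrained layers unitary after each update.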