Jun, 2024
Cross-Lingual Unlearning of Selective Knowledge in Multilingual Language Models
Minseok Choi, Kyunghyun Min, Jaegul Choo
TL;DR
This paper proposes a novel approach to machine unlearning for multilingual language models: it selectively erases targeted information across different languages while preserving overall model performance, effectively countering attacks that exploit low-resource languages and setting a new standard for secure, adaptable multilingual language models.
Abstract
Pretrained language models memorize vast amounts of information, including private and copyrighted data, raising significant safety concerns. Retraining these models after excluding sensitive data is prohibitively expensive, making …