BriefGPT.xyz
Aug, 2024
Compromesso! Italian Many-Shot Jailbreaks Undermine the Safety of Large Language Models
Fabio Pernisi, Dirk Hovy, Paul Röttger
TL;DR
This study investigates safety vulnerabilities of large language models (LLMs) in non-English settings, addressing a gap in current research. We construct a new dataset of unsafe Italian question-answer pairs and find that Italian-language LLMs exhibit clearly unsafe behavior under many-shot jailbreaking prompts; even a small number of unsafe demonstrations rapidly amplifies this unsafe tendency.
Abstract
As diverse linguistic communities and users adopt Large Language Models (LLMs), assessing their safety across languages becomes critical. Despite ongoing efforts to make LLMs safe, they can still be made to behave…