Feb, 2024
Fine-Grained Detoxification via Instance-Level Prefixes for Large Language Models
Xin Yi, Linlin Wang, Xiaoling Wang, Liang He
TL;DR
The fine-grained detoxification method adds one positive prefix and multiple negative prefixes to construct fine-grained sub-toxicity vectors; when the original prompt is supplied, these vectors detoxify collaboratively, enabling controllable generation that steers away from toxic text.
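The mechanism the TL;DR describes can be pictured as logit-level steering: the model scores the next token under a positive (detoxifying) prefix and under several negative prefixes covering different toxicity subtypes, and the gap between them nudges decoding away from toxic tokens. Below is a minimal NumPy sketch of that idea; the combination rule, weights, and prefix contents here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def detox_logits(base, positive, negatives, alpha=1.0):
    """Illustrative steering rule (not the paper's exact formula):
    push the base next-token logits toward the positive-prefix
    distribution and away from the averaged negative-prefix ones."""
    neg_mean = np.mean(negatives, axis=0)
    return base + alpha * (positive - neg_mean)

# Toy 4-token vocabulary; token index 2 stands in for a toxic word.
base = np.array([1.0, 0.5, 2.0, 0.2])           # logits from the raw prompt
positive = np.array([1.2, 0.6, 0.5, 0.3])       # logits under a detoxifying prefix
negatives = [np.array([0.8, 0.4, 3.0, 0.1]),    # logits under prefixes for
             np.array([0.9, 0.3, 2.8, 0.2])]    # distinct toxicity subtypes

steered = detox_logits(base, positive, negatives)
print(int(np.argmax(base)), int(np.argmax(steered)))  # → 2 0
```

With these toy numbers the raw model prefers the "toxic" token (index 2), while the steered logits prefer a benign one, showing how multiple negative prefixes can jointly suppress a toxicity direction.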
Abstract
Impressive results have been achieved in natural language processing (NLP) tasks through the training of large language models (LLMs). However, these models occasionally produce toxic content such as insults and threats…