June 2023
Evaluating the Zero-shot Robustness of Instruction-tuned Language Models
Jiuding Sun, Chantal Shaib, Byron C. Wallace
TL;DR
This work proposes a simple method for improving the robustness of instruction-tuned models: introduce "soft prompt" embedding parameters and optimize them to maximize the similarity between representations of semantically equivalent instructions (a sketch of this objective appears below the abstract).
Abstract
Instruction fine-tuning has recently emerged as a promising approach for improving the zero-shot capabilities of large language models (LLMs) on new tasks. This technique has shown particular strength in improving the performance of modestly sized LLMs, sometimes inducing performance competitive with much larger model variants.
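To make the TL;DR concrete, below is a minimal sketch of the described objective: a frozen encoder, a trainable soft prompt prepended to the instruction token embeddings, and a loss that pushes representations of semantically equivalent instruction phrasings together. The model choice (google/flan-t5-base), mean pooling, the cosine-similarity loss, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a soft-prompt robustness objective: train only a soft prompt so
# that paraphrased instructions map to similar encoder representations.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_name = "google/flan-t5-base"  # illustrative instruction-tuned model
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name).encoder
encoder.requires_grad_(False)  # freeze the model; only the soft prompt trains

n_soft, d_model = 10, encoder.config.d_model
soft_prompt = torch.nn.Parameter(torch.randn(n_soft, d_model) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def encode(instruction: str) -> torch.Tensor:
    """Prepend the soft prompt to the instruction embeddings and mean-pool."""
    ids = tokenizer(instruction, return_tensors="pt").input_ids
    tok_emb = encoder.embed_tokens(ids)                       # (1, T, d)
    inputs = torch.cat([soft_prompt.unsqueeze(0), tok_emb], dim=1)
    hidden = encoder(inputs_embeds=inputs).last_hidden_state  # (1, n_soft+T, d)
    return hidden.mean(dim=1)                                 # (1, d)

# Pairs of semantically equivalent instruction phrasings (illustrative).
pairs = [
    ("Classify the sentiment of this review.",
     "Decide whether the following review is positive or negative."),
]

for step in range(100):
    optimizer.zero_grad()
    # Maximize cosine similarity between paraphrase representations.
    loss = torch.stack([
        1.0 - F.cosine_similarity(encode(a), encode(b)).mean()
        for a, b in pairs
    ]).mean()
    loss.backward()
    optimizer.step()
```

Because gradients reach only `soft_prompt` while the encoder stays fixed, this procedure is cheap relative to full fine-tuning, which is consistent with the TL;DR's framing of the method as simple.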