Jan, 2025
From Superficial Patterns to Semantic Understanding: Fine-Tuning Language Models on Contrast Sets
Daniel Petrov
TL;DR
This work addresses the poor performance of language models on contrast sets by introducing more complex contrast sets during training to improve model robustness. The study finds that this approach substantially strengthens the models' grasp of linguistic patterns, raising their accuracy on contrast sets to nearly 90% and underscoring the importance of diverse, challenging training data.
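The augmentation idea described above can be sketched in a few lines: each original NLI example is paired with a minimally edited variant whose gold label flips, and the union of both sets is used for fine-tuning. This is a hypothetical illustration of the general contrast-set technique, not the authors' code; the data format and helper name are assumptions.

```python
# Minimal sketch (assumed data format) of augmenting an NLI training set
# with contrast examples: small edits to the hypothesis that change the label.

def make_contrast_example(example, edited_hypothesis, new_label):
    """Return a copy of an NLI example with a minimally perturbed
    hypothesis and the correspondingly flipped gold label."""
    contrast = dict(example)
    contrast["hypothesis"] = edited_hypothesis
    contrast["label"] = new_label
    return contrast

train_set = [
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A man is performing music.",
     "label": "entailment"},
]

# Hand-written minimal edit: the perturbed hypothesis contradicts the premise.
contrast_set = [
    make_contrast_example(train_set[0],
                          "A man is sleeping backstage.",
                          "contradiction"),
]

# Fine-tuning would then run on the combined, more challenging data.
augmented_train_set = train_set + contrast_set
```

In the paper's setting, the combined data is what the model is fine-tuned on; the contrast examples force it to rely on semantics rather than surface cues shared by premise and hypothesis.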
Abstract
Large-scale pre-trained Language Models have demonstrated high performance on standard datasets for Natural Language Inference (NLI) tasks. Unfortunately, these evaluations can be misleading, as although the mode…