Mar, 2022
Striking a Balance: Alleviating Inconsistency in Pre-trained Models for Symmetric Classification Tasks
Ashutosh Kumar, Aditya Joshi
TL;DR
This work uses a consistency loss function to address the inconsistency of model predictions on symmetric classification tasks in NLP, and tests and validates the method on six datasets spanning symmetric and non-symmetric tasks.
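The idea of a consistency loss for a symmetric task can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the model emits class logits for an input pair (a, b) and for the swapped pair (b, a), and penalizes divergence between the two prediction distributions (here via symmetric KL) on top of the usual cross-entropy. All function names and the symmetric-KL choice are assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_ab, logits_ba):
    # Penalize disagreement between predictions for (a, b) and the
    # swapped pair (b, a), which a symmetric task (e.g. paraphrase
    # detection) should treat identically. Symmetric KL divergence,
    # averaged over the batch.
    p = softmax(logits_ab)
    q = softmax(logits_ba)
    eps = 1e-12
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1)
    return 0.5 * (kl_pq + kl_qp).mean()

def total_loss(logits_ab, logits_ba, labels, lam=1.0):
    # Standard cross-entropy on the (a, b) direction plus the
    # consistency term, weighted by a hyperparameter lam.
    p = softmax(logits_ab)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return ce + lam * consistency_loss(logits_ab, logits_ba)
```

When the two directions produce identical logits the consistency term vanishes, so the objective reduces to plain cross-entropy; the weight `lam` trades off task accuracy against prediction symmetry.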
Abstract
While fine-tuning pre-trained models for downstream classification is the conventional paradigm in NLP, often task-specific nuances may no …