BriefGPT.xyz
Oct, 2022
Are Sample-Efficient NLP Models More Robust?
Nelson F. Liu, Ananya Kumar, Percy Liang, Robin Jia
TL;DR
This study finds that although pre-trained models exhibit higher out-of-distribution robustness, the robustness gains of zero-shot models diminish as they are fine-tuned on more in-domain data. Motivated by this, the study examines the relationship between sample efficiency and robustness across models, and its case studies show that better sample efficiency may or may not translate into higher robustness, depending on the dataset and the modeling techniques involved.
Abstract
Recent work has observed that pre-trained models have higher out-of-distribution (OOD) robustness when they are exposed to less in-distribution (ID) training data (Radford et al., 2021). In particular, zero-shot models