TL;DR: This paper proposes LMTurk, a method that treats pretrained language models (PLMs) as crowdsourcing workers for task annotation. It uses active learning to reduce the number of queries to the PLMs, improving the efficiency and accuracy of annotations produced by the few-shot learners, thereby making current PLMs more effective to use while lowering computational cost.
Abstract
Vast efforts have been devoted to creating high-performance few-shot learners, i.e., models that perform well with little training data. Training large-scale pretrained language models (PLMs) has incurred signifi