Apr, 2022
A Comparative Study of Pre-trained Encoders for Low-Resource Named Entity Recognition
Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig
TL;DR
This study compares pre-trained encoders under different training strategies for named entity recognition in low-data settings. The results show that encoder performance varies significantly, so encoders should be evaluated and selected with the specific scenario in mind.
Abstract
Pre-trained language models (PLMs) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or