BriefGPT.xyz
Jan, 2021
Evaluation of BERT and ALBERT Sentence Embedding Performance on Downstream NLP Tasks
Hyunjin Choi, Judong Kim, Seongho Joe, Youngjune Gwon
TL;DR
This work explores ways of deriving sentence embeddings from BERT and ALBERT; experiments show that ALBERT clearly outperforms BERT on tasks over the STS and NLI datasets.
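A common way to turn a model's contextualized token representations into a single sentence embedding is mean pooling over the non-padding tokens (the approach popularized by Sentence-BERT style models). The paper does not specify its pooling in this excerpt, so the snippet below is only an illustrative sketch of mask-aware mean pooling using NumPy, with toy embeddings standing in for real BERT/ALBERT outputs:

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring padding positions.

    token_embeddings: (seq_len, dim) array of contextualized vectors.
    attention_mask:   (seq_len,) array of 1s (real tokens) and 0s (padding).
    """
    mask = attention_mask[:, None].astype(float)      # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)    # sum over real tokens
    counts = mask.sum()                               # number of real tokens
    return summed / counts

# Toy example: 2 real tokens plus 1 padding position.
emb = np.array([[1.0, 2.0],
                [3.0, 4.0],
                [0.0, 0.0]])
mask = np.array([1, 1, 0])
sentence_vec = mean_pool(emb, mask)  # → [2.0, 3.0]
```

The same pooling applies unchanged to BERT or ALBERT outputs of shape `(seq_len, hidden_dim)`; only the hidden dimension differs.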
Abstract
Contextualized representations from a pre-trained language model are central to achieving high performance on downstream NLP tasks. The pre-trained →