Jun, 2019
Efficient Adaptation of Pretrained Transformers for Abstractive Summarization
Andrew Hoang, Antoine Bosselut, Asli Celikyilmaz, Yejin Choi
TL;DR
This paper explores using pretrained Transformer language models for abstractive text summarization, proposing an approach based on source embeddings and domain-adaptive training. Evaluated on three summarization datasets, the method achieves new state-of-the-art results on two of them. The results show that the approach produces more focused summaries, with the effect most pronounced on the more abstractive datasets.
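To make the "source embeddings" idea concrete, here is a minimal sketch of how a learned embedding can mark each token as belonging to the source document or the summary before feeding a pretrained decoder. This is an illustration under assumptions, not the authors' implementation; the names `SourceEmbeddingAdapter` and `segment_ids` are hypothetical.

```python
import torch
import torch.nn as nn

class SourceEmbeddingAdapter(nn.Module):
    """Sketch: augment a pretrained decoder's token embeddings with a
    learned 'source embedding' that tags each position as document (0)
    or summary (1), so the model can distinguish the two segments."""

    def __init__(self, pretrained_embed: nn.Embedding, d_model: int):
        super().__init__()
        self.token_embed = pretrained_embed            # pretrained lookup table
        self.source_embed = nn.Embedding(2, d_model)   # 0 = document, 1 = summary

    def forward(self, token_ids: torch.Tensor, segment_ids: torch.Tensor):
        # token_ids, segment_ids: (batch, seq_len) -> (batch, seq_len, d_model)
        return self.token_embed(token_ids) + self.source_embed(segment_ids)

# Usage with stand-in sizes (GPT-2-like vocabulary and hidden width):
vocab, d_model = 50257, 768
adapter = SourceEmbeddingAdapter(nn.Embedding(vocab, d_model), d_model)
tokens = torch.randint(0, vocab, (2, 10))
segments = torch.cat([torch.zeros(2, 6, dtype=torch.long),   # document span
                      torch.ones(2, 4, dtype=torch.long)], dim=1)  # summary span
out = adapter(tokens, segments)  # (2, 10, 768)
```

The summed embeddings would then be passed through the pretrained transformer as usual; domain-adaptive training, the paper's other component, refers to continuing language-model training on in-domain text before fine-tuning for summarization.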
Abstract
Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks. Whether they can be effectively adapted for summarization, however, has been less explored.