Sep, 2021
XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge
Xiaoze Jiang, Yaobo Liang, Weizhu Chen, Nan Duan
TL;DR
This paper proposes XLM-K, a cross-lingual language model that incorporates multilingual knowledge into pre-training and extends it with two knowledge tasks; results show that XLM-K achieves superior performance across multiple tasks.
Abstract
Cross-lingual pre-training has achieved great successes using monolingual and bilingual plain text corpora. However, existing pre-trained models neglect multilingual knowledge.
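The TL;DR's idea of augmenting multilingual pre-training with knowledge tasks can be made concrete. Below is a minimal, hypothetical sketch (not the paper's actual code or its exact objectives): a shared multilingual encoder trained jointly on masked language modeling and a toy masked-entity-prediction knowledge task. All names, model sizes, and the equal loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions, chosen for illustration only.
VOCAB, ENTITIES, HIDDEN = 30_000, 10_000, 256

class KnowledgeAugmentedEncoder(nn.Module):
    """Hypothetical sketch: one shared encoder with two heads, one for
    standard masked-token prediction and one for a knowledge task that
    predicts a knowledge-base entity id for a masked mention."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(HIDDEN, VOCAB)        # masked language modeling
        self.entity_head = nn.Linear(HIDDEN, ENTITIES)  # knowledge task head

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))         # (batch, seq, hidden)
        return self.mlm_head(h), self.entity_head(h)

model = KnowledgeAugmentedEncoder()
ids = torch.randint(0, VOCAB, (2, 16))                  # toy batch of token ids
mlm_logits, ent_logits = model(ids)

# Joint objective: MLM loss plus the knowledge-task loss. Random targets
# stand in for real masked tokens and linked entities in this toy example.
mlm_loss = F.cross_entropy(mlm_logits.reshape(-1, VOCAB),
                           torch.randint(0, VOCAB, (2 * 16,)))
ent_loss = F.cross_entropy(ent_logits[:, 0],            # naive first-token pooling
                           torch.randint(0, ENTITIES, (2,)))
loss = mlm_loss + ent_loss                               # equal weighting: an assumption
loss.backward()
```

In this kind of setup, the auxiliary head shares the encoder with the MLM objective, so entity-level supervision can shape the same multilingual representations used for downstream transfer; how the paper actually defines and weights its two knowledge tasks is not specified in this summary.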