Oct 2019
Is Multilingual BERT Fluent in Language Generation?
Samuel Rönnqvist, Jenna Kanerva, Tapio Salakoski, Filip Ginter
TL;DR
This paper examines how the multilingual BERT model performs on sentence encoding, syntactic probing, and language generation tasks, finding that it underperforms monolingual models and in many cases cannot substitute for them, with notably weak results on the Nordic languages.
Abstract
The multilingual BERT model is trained on 104 languages and meant to serve as a universal language model and tool for encoding sentences. We explore how well the model performs on several languages across several …
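As a concrete illustration of the kind of cloze-style probing described above, the sketch below queries multilingual BERT's masked language modelling head via the HuggingFace transformers pipeline. The library usage is standard, but the example sentence and the choice to probe Finnish are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of a cloze probe against multilingual BERT,
# assuming the HuggingFace `transformers` library is installed.
from transformers import pipeline

# bert-base-multilingual-cased is the publicly released
# 104-language model the paper evaluates.
fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

# Hypothetical Finnish example sentence (not from the paper):
# ask the model to fill in the masked token.
for pred in fill("Helsinki on Suomen [MASK]."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```

Each prediction is a candidate token with its probability; comparing such fills between the multilingual model and a monolingual one (e.g. a dedicated Finnish BERT) is one way to surface the quality gap the authors report.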