Jan, 2021
Cross-lingual Visual Pre-training for Multimodal Machine Translation
Ozan Caglayan, Menekse Kuyu, Mustafa Sercan Amac, Pranava Madhyastha, Erkut Erdem...
TL;DR
This paper combines cross-lingual and visual pre-training, using a three-way parallel vision-and-language corpus for pre-training, and shows that the learned visually grounded cross-lingual representations deliver state-of-the-art performance for multimodal machine translation.
Abstract
Pre-trained language models have been shown to substantially improve performance in many natural language tasks. Although the early focus of such models was single-language pre-training, recent advances have resulted in cross-lingual and visual pre-training methods. …