BriefGPT.xyz
Jun, 2019
Self-Attentional Models for Lattice Inputs
Matthias Sperber, Graham Neubig, Ngoc-Quan Pham, Alex Waibel
TL;DR
Extends previous recurrent-neural-network approaches to lattice (graph-structured) inputs with self-attention, encoding the ambiguity of upstream systems in natural language processing tasks and thereby effectively improving performance on speech translation.
Abstract
Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent …
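The core idea described above — self-attention over a lattice rather than a linear sequence — can be sketched by restricting each node's attention to nodes connected to it by a lattice path. The following is a minimal illustrative sketch, not the paper's actual implementation: the helper names (`reachability`, `masked_self_attention`), the single-head attention without learned projections, and the symmetric forward/backward mask are all simplifying assumptions made here.

```python
import numpy as np

def reachability(edges, n):
    """Transitive closure: reach[i, j] is True if a path i -> j exists (or i == j)."""
    adj = np.eye(n, dtype=bool)
    for a, b in edges:
        adj[a, b] = True
    for _ in range(n):                 # repeated boolean squaring reaches fixpoint
        adj = adj | (adj @ adj)
    return adj

def masked_self_attention(x, mask):
    """Single-head self-attention where node i may attend to j only if mask[i, j].

    Query/key/value projections are omitted for brevity (assumption, not the
    paper's architecture); x is a (num_nodes, dim) matrix of node embeddings.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)     # block attention off lattice paths
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ x

# Diamond lattice: two alternative paths 0 -> 1 -> 3 and 0 -> 2 -> 3,
# e.g. two competing recognition hypotheses sharing start and end nodes.
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
x = np.random.RandomState(0).randn(4, 8)
reach = reachability(edges, 4)
out = masked_self_attention(x, reach | reach.T)   # attend to ancestors and descendants
```

Note how nodes 1 and 2 lie on mutually exclusive paths, so the mask keeps them from attending to each other — this is how the lattice's ambiguity structure constrains the attention pattern.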