BriefGPT.xyz
Oct, 2023
Leveraging Multilingual Self-Supervised Pretrained Models for Sequence-to-Sequence End-to-End Spoken Language Understanding
Pavel Denisov, Ngoc Thang Vu
TL;DR
Using pretrained models in a multilingual setting, the authors propose a unified method for end-to-end spoken language understanding, including slot filling. By pretraining on available large-scale speech recognition data, the method achieves significant performance gains across multiple datasets and on cross-lingual tasks.
Abstract
A number of methods have been proposed for end-to-end spoken language understanding (E2E-SLU) using pretrained models; however, their evaluation often lacks …