Oct, 2020
Attention is All You Need in Speech Separation
Cem Subakan, Mirco Ravanelli, Samuele Cornell, Mirko Bronzi, Jianyuan Zhong
TL;DR
This paper introduces the SepFormer, an RNN-free deep neural network based on Transformers. It uses a multi-scale approach to learn both short- and long-term dependencies, achieving state-of-the-art results on speech separation while running faster and requiring less memory.
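The multi-scale idea can be pictured as chunking the encoded sequence and alternating attention within chunks (short-term dependencies) and across chunks (long-term dependencies). The PyTorch sketch below illustrates only that pattern; the module name `DualScaleBlock`, the chunk size, and the layer sizes are illustrative assumptions, not the paper's exact SepFormer configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualScaleBlock(nn.Module):
    """Sketch of the multi-scale pattern: intra-chunk attention models
    short-term structure, inter-chunk attention models long-term structure.
    Sizes and names are illustrative, not the paper's configuration."""

    def __init__(self, dim=256, n_heads=8, chunk_size=250):
        super().__init__()
        self.chunk_size = chunk_size
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, dim_feedforward=1024, batch_first=True
        )
        self.intra = nn.TransformerEncoder(make_layer(), num_layers=2)  # within chunks
        self.inter = nn.TransformerEncoder(make_layer(), num_layers=2)  # across chunks

    def forward(self, x):                      # x: (batch, time, dim)
        b, t, d = x.shape
        pad = (-t) % self.chunk_size
        x = F.pad(x, (0, 0, 0, pad))           # pad time so it splits evenly
        n = x.shape[1] // self.chunk_size

        # Short-term: attend within each chunk.
        x = x.reshape(b * n, self.chunk_size, d)
        x = self.intra(x)

        # Long-term: attend across chunks at the same intra-chunk position.
        x = x.reshape(b, n, self.chunk_size, d).transpose(1, 2)
        x = x.reshape(b * self.chunk_size, n, d)
        x = self.inter(x)

        x = x.reshape(b, self.chunk_size, n, d).transpose(1, 2)
        return x.reshape(b, n * self.chunk_size, d)[:, :t]


# Example: a batch of encoded mixtures with 2000 frames of 256 features each.
feats = torch.randn(2, 2000, 256)
out = DualScaleBlock()(feats)                  # same shape as the input
```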
Abstract
Recurrent neural networks (RNNs) have long been the dominant architecture in sequence-to-sequence learning. RNNs, however, are inherently sequential models that do not allow parallelization of their computations.
Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism.
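As a rough illustration of the parallelization point (generic PyTorch, not code from the paper): an RNN must unroll its recurrence step by step, whereas multi-head self-attention relates all time steps in one batched operation.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 1000, 256)   # (batch, time steps, features); sizes are arbitrary

# Recurrence: the 1000 time steps are processed one after another,
# since each hidden state depends on the previous one.
rnn = nn.LSTM(input_size=256, hidden_size=256, batch_first=True)
rnn_out, _ = rnn(x)

# Multi-head self-attention: every step attends to every other step
# in one batched matrix operation, so the time axis is processed in parallel.
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)
attn_out, _ = attn(x, x, x)
```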