Transfer learning is a vital technique that generalizes models trained for
one setting or task to other settings or tasks. For example, in speech
recognition, an acoustic model trained for one language can be used to
recognize speech in another language with little or no re-training data.
Using a transfer learning approach based on model adaptation, the Wav2Letter convolutional neural network, originally built for English automatic speech recognition, was adapted to train a German ASR model. Under constraints on GPU memory, throughput, and training data, this enabled faster training on consumer-grade hardware while reducing the amount of training data required, thereby lowering the cost of training ASR models in other languages. Small adjustments to the network layers were sufficient to achieve good performance.
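The adaptation described above can be sketched in code. The following is a minimal, hypothetical illustration (not the actual Wav2Letter implementation): a toy 1-D convolutional acoustic model stands in for the real network, its feature layers are copied from an "English" model into a "German" one, and only a new output layer, resized for the German grapheme inventory, is left trainable. All layer names, sizes, and the `TinyWav2Letter` class are assumptions for demonstration.

```python
import torch
import torch.nn as nn

# Hypothetical miniature stand-in for a Wav2Letter-style 1-D conv acoustic
# model; the real network is much deeper. Sizes here are illustrative only.
class TinyWav2Letter(nn.Module):
    def __init__(self, n_labels):
        super().__init__()
        # Convolutional feature extractor over 40-dim acoustic features.
        self.features = nn.Sequential(
            nn.Conv1d(40, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Per-frame output layer mapping to the language's label set.
        self.classifier = nn.Conv1d(64, n_labels, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# Pretend this model was trained on English transcripts (28 labels).
english = TinyWav2Letter(n_labels=28)

# Adapt to German: reuse the feature layers, replace only the output layer
# so it matches the German grapheme inventory (umlauts, eszett, etc.).
german = TinyWav2Letter(n_labels=32)
german.features.load_state_dict(english.features.state_dict())

# Freeze the transferred layers; only the new output layer is trained,
# which is what keeps GPU memory, throughput, and data needs low.
for p in german.features.parameters():
    p.requires_grad = False

trainable = [n for n, p in german.named_parameters() if p.requires_grad]
print(trainable)
```

With the feature layers frozen, an optimizer built over `filter(lambda p: p.requires_grad, german.parameters())` updates only the small output layer, which is why fine-tuning of this kind fits on consumer-grade hardware with limited data.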