Large language models, the dominant starting point for Natural Language
Processing (NLP) applications, fail at a higher rate for speakers of English
dialects other than Standard American English (SAE). Prior work addresses this
with task-specific data or synthetic data augmentation.
By enhancing linguistic diversity with aligned data augmentation and adapting to dialects with deep prefix tuning, Tallinn University of Technology (TalTech) achieved substantial improvements in both tracks of the ASRU MADASR 2023 Challenge, attaining the lowest word error rates among the participating teams.
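The core idea behind deep prefix tuning is to keep the base model frozen and learn only a small set of prefix vectors that are prepended at every layer ("deep"), so the real tokens can attend to them. The sketch below is a minimal illustration of that mechanism, not TalTech's implementation: the tiny attention stack, all dimension names, and the random initialization are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, prefix_len, d_model, seq_len = 2, 4, 8, 6

# Trainable part: one prefix per layer ("deep" prefix tuning prepends a
# prefix at every layer, not only at the input embedding).
prefixes = [rng.standard_normal((prefix_len, d_model)) * 0.1
            for _ in range(n_layers)]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(h):
    # Parameter-free stand-in for a frozen transformer block: every position
    # mixes information from all positions, including the prefix rows.
    scores = h @ h.T / np.sqrt(d_model)
    return softmax(scores) @ h

def forward(x):
    h = x
    for p in prefixes:
        # Prepend the layer's trainable prefix, run the frozen block,
        # then keep only the real-token positions for the next layer.
        mixed = self_attention(np.concatenate([p, h], axis=0))
        h = mixed[prefix_len:]
    return h

x = rng.standard_normal((seq_len, d_model))
out = forward(x)  # shape (seq_len, d_model); conditioned on the prefixes
```

Because the prefix rows participate in attention at every layer, tuning only `n_layers * prefix_len * d_model` parameters steers the frozen model's behavior, which is what makes the method cheap to adapt per dialect.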