BriefGPT.xyz
Apr, 2024
Additive Margin in Contrastive Self-Supervised Frameworks to Learn Discriminative Speaker Representations
Theo Lepage, Reda Dehak
TL;DR
By improving the NT-Xent-AM loss in the SimCLR method and using a symmetric contrastive loss, we achieve better performance, reaching a 7.85% equal error rate on the VoxCeleb1-O dataset and outperforming other equivalent methods.
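The NT-Xent-AM loss mentioned above adds an additive margin to the standard NT-Xent contrastive objective: the margin is subtracted from each positive-pair similarity before the softmax, forcing embeddings of the same speaker to be closer than the margin-shifted boundary. A minimal NumPy sketch, assuming cosine similarity and a cross-entropy formulation; the function name and hyperparameter values are illustrative, not taken from the paper:

```python
import numpy as np

def nt_xent_am(z_a, z_b, margin=0.1, temperature=0.07):
    """NT-Xent loss with an additive margin on positive pairs (sketch).

    z_a, z_b: (N, D) embeddings of two augmented views of N utterances;
    row i of z_a and row i of z_b come from the same speaker.
    """
    # L2-normalize so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = z_a @ z_b.T                               # (N, N) similarity matrix
    n = sim.shape[0]
    # Additive margin: penalize the positives (diagonal) by `margin`
    sim[np.arange(n), np.arange(n)] -= margin
    logits = sim / temperature
    # Cross-entropy with the matching index i as the target class
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), np.arange(n)].mean()
```

Because the margin is subtracted only from the true-class logit, the loss with a positive margin is strictly larger than without it for the same embeddings, which is what drives the extra inter-speaker separation during training.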
Abstract
Self-supervised learning (SSL) frameworks became the standard for learning robust class representations by benefiting from large unlabeled datasets. For