Apr, 2022
How Different are Pre-trained Transformers for Text Ranking?
David Rau, Jaap Kamps
TL;DR
This study analyzes BERT-based cross-encoders against traditional BM25 ranking on the passage retrieval task, finding substantive differences in their notions of relevance, with the aim of encouraging future research on improvements.
Abstract
In recent years, large pre-trained transformers have led to substantial gains in performance over traditional retrieval models and feedback approaches. However, these results are primarily based on the MS MARCO/TREC Deep Learning Track […]