Oct 2021
Probing as Quantifying the Inductive Bias of Pre-trained Representations
Alexander Immer, Lucas Torroba Hennigen, Vincent Fortuin, Ryan Cotterell
TL;DR
This work uses a Bayesian framework to quantify the amount of inductive bias a representation provides and, by probing contextual embeddings, compares how fastText and BERT perform across a range of tasks.
Abstract
Pre-trained contextual representations have led to dramatic performance improvements on a range of downstream tasks. This has motivated researchers to quantify and understand the linguistic information encoded in them. …