May, 2018
Uncertainty-Aware Attention for Reliable Interpretation and Prediction
Jay Heo, Hae Beom Lee, Saehoon Kim, Juho Lee, Kwang Joon Kim...
TL;DR
By introducing input-dependent uncertainty, so that the model learns larger variance for instances whose inputs are uncertain, we learn an uncertainty-aware attention mechanism via variational inference and validate its effectiveness on a range of high-risk prediction tasks. Further evaluation shows that our model generates attention that agrees with clinicians' interpretations and, by learning the variance, provides richer explanations.
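The idea above — attention weights drawn from a distribution whose mean and variance both depend on the input, sampled with the reparameterization trick — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, the linear parameterization of the mean and variance, and the per-feature sigmoid attention are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Smooth positive transform, keeps the standard deviation > 0
    return np.log1p(np.exp(x))

def uncertainty_aware_attention(features, W_mu, W_sigma):
    """Sample stochastic attention weights from a Gaussian whose mean and
    variance are both functions of the input (hypothetical parameterization)."""
    mu = features @ W_mu                   # input-dependent mean
    sigma = softplus(features @ W_sigma)   # input-dependent std: larger for uncertain inputs
    eps = rng.standard_normal(mu.shape)
    scores = mu + sigma * eps              # reparameterization trick (differentiable sample)
    alpha = 1.0 / (1.0 + np.exp(-scores))  # squash to (0, 1) attention weights
    return alpha, mu, sigma

# Toy usage: 4 timesteps with 3-dimensional features
x = rng.standard_normal((4, 3))
W_mu = rng.standard_normal((3, 1))
W_sigma = rng.standard_normal((3, 1))

alpha, mu, sigma = uncertainty_aware_attention(x, W_mu, W_sigma)
context = (alpha * x).sum(axis=0)  # attention-weighted summary of the sequence
```

In training, the sampled `alpha` would feed the predictor while a KL term between the attention distribution and its prior is added to the loss (the variational-inference part); at test time, the learned `sigma` indicates how much the model's attention can be trusted on that input.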
Abstract
The attention mechanism is effective in both focusing deep learning models on relevant features and interpreting them. However, attentions may be unreliable since the networks that generate them are often trained…