November 2021
Mixtures of Laplace Approximations for Improved Post-Hoc Uncertainty in Deep Learning
Runa Eschenhagen, Erik Daxberger, Philipp Hennig, Agustinus Kristiadi
TL;DR
This paper proposes predicting with a Gaussian mixture model posterior, constructed as a weighted sum of Laplace approximations of independently trained deep neural networks, to mitigate the overconfident predictions deep neural networks make on outliers; the method is compared against state-of-the-art baselines on standard uncertainty quantification benchmarks.
Abstract
Deep neural networks are prone to overconfident predictions on outliers. Bayesian neural networks and deep ensembles have both been shown to mitigate this problem to some extent. In this work, we aim to combine the benefits of both approaches by predicting with a Gaussian mixture model posterior formed as a weighted sum of Laplace approximations of independently trained deep neural networks. The method can be applied post hoc to any set of pre-trained networks, and we compare it against state-of-the-art baselines on standard uncertainty quantification benchmarks.
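The abstract describes predicting with a Gaussian mixture posterior over the weights of several independently trained networks. Below is a minimal sketch of what that predictive step could look like, assuming diagonal Laplace posteriors (per-parameter variances around each network's MAP weights) have already been fitted and weighting the mixture components uniformly; the function name `mola_predict` and its interface are illustrative and not the authors' code.

```python
# Sketch only: Monte Carlo prediction with a mixture of Laplace approximations.
# Each trained network contributes a Gaussian posterior centered at its MAP
# weights; the predictive averages softmax outputs over samples from all
# mixture components. Diagonal posterior variances are assumed precomputed.

import copy
import torch
import torch.nn.functional as F


@torch.no_grad()
def mola_predict(models, posterior_vars, x, n_samples=20):
    """Average predictive probabilities over a mixture of Laplace approximations.

    models:         list of trained torch.nn.Module classifiers (MAP estimates)
    posterior_vars: list of dicts mapping parameter names to diagonal posterior
                    variances, one dict per model (assumed fitted beforehand)
    x:              input batch of shape (B, ...)
    """
    probs = []
    for model, variances in zip(models, posterior_vars):
        for _ in range(n_samples):
            sampled = copy.deepcopy(model)
            # Draw one weight sample from this component's Gaussian posterior.
            for name, param in sampled.named_parameters():
                std = variances[name].sqrt()
                param.add_(torch.randn_like(param) * std)
            probs.append(F.softmax(sampled(x), dim=-1))
    # Uniformly weighted mixture over components and their Monte Carlo samples.
    return torch.stack(probs).mean(dim=0)
```

In practice the per-network Laplace fits could come from a library such as laplace-torch, and the uniform mixture weights used here stand in for the weighted sum mentioned in the abstract.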