Uncertainty estimated by approximate posteriors in Bayesian neural networks is prone to miscalibration, which leads to overconfident predictions in critical tasks that carry clearly asymmetric costs or significant losses. Here,
we extend approximate inference for the loss-calibrated Bayesian framework to dropweight-based Bayesian neural networks by maximising the expected utility over the model posterior.
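To make the decision rule concrete, the following is a minimal sketch of loss-calibrated prediction with Monte Carlo sampling, assuming a PyTorch classifier whose stochastic layers stay active at test time. The utility matrix, function names, and the toy model are illustrative assumptions, not the paper's implementation; in particular, `nn.Dropout` drops activations and is only a stand-in for a true dropweight (DropConnect-style) layer that drops weights.

```python
import torch
import torch.nn.functional as F

# Hypothetical utility matrix U[d, y]: utility of decision d when the true
# class is y. Asymmetric entries encode that a missed positive costs more
# than a false alarm (values are illustrative).
utility_matrix = torch.tensor([[1.0, 0.0],   # decide "negative": correct / missed positive
                               [0.2, 1.0]])  # decide "positive": false alarm / correct

def mc_predictive(model, x, num_samples=50):
    """Monte Carlo predictive distribution from a stochastic BNN.

    Keeping the model in train mode leaves the stochastic (dropweight-style)
    layers active, so each forward pass draws a different sub-network.
    """
    model.train()
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1)
                             for _ in range(num_samples)])
    return probs.mean(dim=0)  # approximate posterior predictive p(y | x)

def loss_calibrated_decision(p_pred, U=utility_matrix):
    """Bayes-optimal decision: maximise expected utility under p(y | x)."""
    expected_utility = p_pred @ U.T   # shape: (batch, num_decisions)
    return expected_utility.argmax(dim=-1)

if __name__ == "__main__":
    # Toy model; nn.Dropout stands in for a weight-level dropout layer.
    model = torch.nn.Sequential(
        torch.nn.Linear(10, 32), torch.nn.ReLU(),
        torch.nn.Dropout(0.5),
        torch.nn.Linear(32, 2),
    )
    x = torch.randn(4, 10)
    p = mc_predictive(model, x)
    print(loss_calibrated_decision(p))
```

Under this rule, the asymmetric utilities shift the decision threshold away from the raw argmax of the predictive probabilities, which is the mechanism by which a loss-calibrated posterior counters overconfident predictions in cost-sensitive tasks.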