How Good is the Bayes Posterior in Deep Neural Networks Really?

Dear colleagues,

I am looking at how UQ and the Bayesian update are connected and how they can be applied to neural networks.

I found this interesting manuscript: "How Good is the Bayes Posterior in Deep Neural Networks Really?"

Abstract: During the past five years the Bayesian deep learning community has developed increasingly accurate and efficient approximate inference procedures that allow for Bayesian inference in deep neural networks. However, despite this algorithmic progress and the promise of improved uncertainty quantification and sample efficiency there are—as of early 2020—no publicized deployments of Bayesian neural networks in industrial practice. In this work we cast doubt on the current understanding of Bayes posteriors in popular deep neural networks: we demonstrate through careful MCMC sampling that the posterior predictive induced by the Bayes posterior yields systematically worse predictions compared to simpler methods including point estimates obtained from SGD. Furthermore, we demonstrate that predictive performance is improved significantly through the use of a "cold posterior" that overcounts evidence. Such cold posteriors sharply deviate from the Bayesian paradigm but are commonly used as heuristics in Bayesian deep learning papers. We put forward several hypotheses that could explain cold posteriors and evaluate the hypotheses through experiments. Our work questions the goal of accurate posterior approximations in Bayesian deep learning: If the true Bayes posterior is poor, what is the use of more accurate approximations? Instead, we argue that it is timely to focus on understanding the origin of the improved performance of cold posteriors.
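
For context, the "cold posterior" in the abstract means tempering the posterior with a temperature T < 1, i.e. sampling from a distribution proportional to exp(log p(theta | D) / T), which overcounts the evidence relative to the standard Bayes posterior at T = 1. Below is a minimal sketch (not the paper's code, and assuming PyTorch) of a tempered stochastic gradient Langevin dynamics (SGLD) step; the names `sgld_step` and `log_post` are illustrative only.

```python
import torch

def sgld_step(params, log_posterior, lr=1e-2, T=1.0):
    """One SGLD step at temperature T.

    T = 1 targets the Bayes posterior; T < 1 gives a "cold" posterior.
    The injected Gaussian noise is scaled by sqrt(2 * lr * T).
    """
    potential = -log_posterior(params)                   # U(theta) = -log p(theta | D)
    grad, = torch.autograd.grad(potential, params)
    noise = torch.randn_like(params) * (2.0 * lr * T) ** 0.5
    new_params = params - lr * grad + noise
    return new_params.detach().requires_grad_(True)

# Toy usage: sample a 1-D standard normal "posterior" at a cold temperature.
theta = torch.zeros(1, requires_grad=True)
log_post = lambda t: -0.5 * (t ** 2).sum()               # log N(0, 1) up to a constant
for _ in range(1000):
    theta = sgld_step(theta, log_post, lr=1e-2, T=0.5)   # T < 1: sharper than Bayes
```

At T < 1 the stationary distribution concentrates around the posterior modes; the paper's observation is that this non-Bayesian choice often yields better predictions than sampling the true posterior at T = 1.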


Dear @Litvinenko,

Thanks for starting this interesting discussion.
I think @olaf.klein could answer this question.

Best regards
Ali