Papers

A variational neural Bayes framework for inference on intractable posterior distributions

E Maceda, EC Hector, A Lenzi, BJ Reich - arXiv preprint arXiv:2404.10899, 2024 - arxiv.org
Statistics paper (stat.CO)

Classic Bayesian methods with complex models are frequently infeasible due to an intractable likelihood. Simulation-based inference methods, such as Approximate …

Link to paper: http://arxiv.org/abs/2404.10899v1

BibTeX

@article{2404.10899v1,
Author = {Elliot Maceda and Emily C. Hector and Amanda Lenzi and Brian J. Reich},
Title = {A variational neural Bayes framework for inference on intractable
posterior distributions},
Eprint = {2404.10899v1},
ArchivePrefix = {arXiv},
PrimaryClass = {stat.CO},
Abstract = {Classic Bayesian methods with complex models are frequently infeasible due to
an intractable likelihood. Simulation-based inference methods, such as
Approximate Bayesian Computing (ABC), calculate posteriors without accessing a
likelihood function by leveraging the fact that data can be quickly simulated
from the model, but converge slowly and/or poorly in high-dimensional settings.
In this paper, we propose a framework for Bayesian posterior estimation by
mapping data to posteriors of parameters using a neural network trained on data
simulated from the complex model. Posterior distributions of model parameters
are efficiently obtained by feeding observed data into the trained neural
network. We show theoretically that our posteriors converge to the true
posteriors in Kullback-Leibler divergence. Our approach yields computationally
efficient and theoretically justified uncertainty quantification, which is
lacking in existing simulation-based neural network approaches. Comprehensive
simulation studies highlight our method's robustness and accuracy.},
Year = {2024},
Month = {Apr},
Url = {http://arxiv.org/abs/2404.10899v1},
File = {2404.10899v1.pdf}
}
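As a rough illustration of the framework the abstract describes (not the authors' implementation), the sketch below trains a neural network on (parameter, data) pairs simulated from a toy model, then feeds "observed" data through the trained network to read off an approximate posterior. The diagonal-Gaussian posterior family, the toy model y ~ N(theta, 1) with a N(0, 1) prior, and all sizes here are assumptions made for the example; the paper's variational family and architecture may differ.

# Minimal sketch of amortized neural posterior estimation (assumptions noted above).
import torch
import torch.nn as nn

def simulate(n_sims: int, n_obs: int = 10):
    """Draw theta from the prior N(0, 1), then a data set y | theta ~ N(theta, 1)."""
    theta = torch.randn(n_sims, 1)          # prior draws
    y = theta + torch.randn(n_sims, n_obs)  # simulated data sets
    return theta, y

# Network maps a data set to (mean, log-variance) of a Gaussian posterior.
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta, y = simulate(n_sims=256)
    mu, log_var = net(y).chunk(2, dim=1)
    # Negative Gaussian log-density of the true theta under the predicted
    # posterior (constants dropped); minimizing it fits the amortized posterior.
    loss = 0.5 * ((theta - mu) ** 2 / log_var.exp() + log_var).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Observed" data: one forward pass through the trained network gives the posterior.
with torch.no_grad():
    y_obs = 1.5 + torch.randn(1, 10)
    mu, log_var = net(y_obs).chunk(2, dim=1)
    print(f"posterior mean ~= {mu.item():.3f}, sd ~= {log_var.exp().sqrt().item():.3f}")

For this conjugate toy model the exact posterior is N(n*ybar/(n+1), 1/(n+1)), so with n = 10 the printed standard deviation should settle near 0.30, a quick sanity check on the amortized fit.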
