
Compressing Bayesian Inference with Information Maximization

Authors
Stefan Radev
Rensselaer Polytechnic Institute, Cognitive Science
Abstract

Amortized deep learning methods are transforming the field of simulation-based inference (SBI). However, most amortized methods rely solely on simulated data to refine their global approximations. We investigate a method that jointly compresses both simulated and observed exchangeable sequences of varying size and uses the compressed representations for downstream Bayesian tasks. We employ information-maximizing variational autoencoders (VAEs), which we augment with normalizing flows for more expressive representation learning. We showcase the ability of our method to learn informative embeddings on toy examples and in two real-world modeling scenarios.
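The abstract does not specify the encoder architecture, but compressing exchangeable sequences of varying size into fixed-size embeddings is commonly done with a permutation-invariant (deep-set style) encoder. The sketch below is purely illustrative and uses hypothetical random weights (`W_phi`, `W_rho`); it only demonstrates the invariance property such an embedding must have, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny set encoder (illustrative only).
W_phi = rng.normal(size=(1, 8))   # shared per-element feature map
W_rho = rng.normal(size=(8, 4))   # post-pooling projection to the embedding

def embed(x):
    """Permutation-invariant embedding of a 1-D exchangeable sequence.

    Each element passes through a shared nonlinearity, the results are
    mean-pooled (so sequences of any length map to a fixed size), and a
    final linear map yields the embedding.
    """
    h = np.tanh(x[:, None] @ W_phi)   # (n, 8) element-wise features
    pooled = h.mean(axis=0)           # (8,) order- and size-invariant
    return pooled @ W_rho             # (4,) fixed-size embedding

x = rng.normal(size=20)
z1 = embed(x)
z2 = embed(rng.permutation(x))        # same multiset, different order
print(np.allclose(z1, z2))            # → True: order does not matter
```

Because the pooling step is a mean, the same encoder accepts sequences of any length, which matches the abstract's requirement of handling exchangeable data of varying size.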

Keywords

Deep learning
simulation-based inference
Bayesian estimation
de Finetti
model comparison
Cite this as:

Radev, S. T. (2023, July). Compressing Bayesian Inference with Information Maximization. Abstract published at MathPsych/ICCM/EMPG 2023. Via mathpsych.org/presentation/964.