Compressing Bayesian Inference with Information Maximization
Amortized deep learning methods are transforming the field of simulation-based inference (SBI). However, most amortized methods rely solely on simulated data to refine their global approximations. We investigate a method that jointly compresses both simulated and actually observed exchangeable sequences of varying size, and uses the compressed representations for downstream Bayesian tasks. We employ information-maximizing variational autoencoders (VAEs), which we augment with normalizing flows for more expressive representation learning. We showcase the ability of our method to learn informative embeddings on toy examples and in two real-world modeling scenarios.