Automating experiments with neural information estimators

Authors
Prof. Noah Goodman
Stanford University
Abstract

The idea that good experiments maximize information gain has a long history (e.g., Lindley, 1956). A key obstacle to using this insight to automate experiment design has been the difficulty of estimating information gain. In the last few years, new methods have become available for casting information estimation as an optimization problem. Using these ideas, we describe variational bounds on expected information gain. Deep neural networks and gradient-based optimization then allow efficient optimization of these bounds, even for complex data models. We apply these ideas to experiment design, adaptive surveys, and computer adaptive testing.
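
To make the abstract's central idea concrete, here is a minimal sketch of one variational bound of this kind: the Barber–Agakov (posterior) lower bound on expected information gain (EIG), optimized with gradients. Everything below is an illustrative assumption rather than the talk's actual code: the toy linear-Gaussian model (theta ~ N(0, 1), y | theta, d ~ N(d·theta, sigma²)), the PosteriorNet architecture, and all names are hypothetical, and PyTorch stands in for whatever framework the talk used.

```python
import math
import torch
import torch.nn as nn

SIGMA = 1.0  # assumed observation noise (illustrative)

class PosteriorNet(nn.Module):
    # Amortized Gaussian approximation q(theta | y, d): a small MLP
    # maps (y, d) to the mean and log-std of a Gaussian over theta.
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, y, d):
        out = self.net(torch.stack([y, d.expand_as(y)], dim=-1))
        return out[..., 0], out[..., 1]  # mean, log-std

def eig_lower_bound(q, d, n_samples=256):
    # Barber-Agakov bound: EIG(d) >= E_{theta, y|d}[log q(theta|y, d)] + H[p(theta)].
    theta = torch.randn(n_samples)                   # prior: theta ~ N(0, 1)
    y = d * theta + SIGMA * torch.randn(n_samples)   # simulate: y ~ N(d*theta, SIGMA^2)
    mu, log_std = q(y, d)
    log_q = (-0.5 * ((theta - mu) / log_std.exp()) ** 2
             - log_std - 0.5 * math.log(2 * math.pi))
    prior_entropy = 0.5 * math.log(2 * math.pi * math.e)  # entropy of N(0, 1)
    return log_q.mean() + prior_entropy

q = PosteriorNet()
d_raw = torch.tensor(0.2, requires_grad=True)  # unconstrained design parameter
opt = torch.optim.Adam(list(q.parameters()) + [d_raw], lr=1e-2)

for step in range(2000):
    # Constrain the design to [-1, 1]; in this toy model EIG grows with |d|,
    # so an unconstrained design would diverge.
    d = torch.tanh(d_raw)
    opt.zero_grad()
    loss = -eig_lower_bound(q, d)  # ascend the bound over q and d jointly
    loss.backward()
    opt.step()

print(f"optimized design: {torch.tanh(d_raw).item():.3f}")
```

The design choice worth noting is that the same gradient step tightens the bound (by improving q) and improves the design d, since the bound is differentiable in both; this joint ascent is what makes the approach amenable to the deep-network, gradient-based optimization the abstract describes.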

Tags

Keywords

OED
variational inference
mutual information
adaptive experiments

Topics

Probabilistic Models
Other
Study design
Discussion
Two questions (last updated 3 years ago)

Hi Noah, nice talk. Two questions: 1) Do you have any results or ideas on whether the performance of the bounds generalizes to different problems for which you have tractable optimal benchmarks? 2) Could various benchmarks be pooled to create a better estimate? I guess this depends on whether they bracket the true optimal, which will depen...

— Michael Lee
Cite this as:

Goodman, N. (2020, July). Automating experiments with neural information estimators. Paper presented at Virtual MathPsych/ICCM 2020. Via mathpsych.org/presentation/65.