Machine Learning approaches to estimating and comparing models of intertemporal choice
Subjective value has long been measured with binary choice experiments to assess individual differences in intertemporal preferences. Dynamic, stochastic models of choice permit meaningful inferences about cognition from process-level data, explaining value in terms of underlying mechanisms in a way that simpler, static models cannot. However, the usability of complex generative models is severely limited by the technical difficulty of model fitting and model comparison, along with the computational power these steps require. In this talk, we develop and test an approach that uses deep neural networks to estimate the parameters of three behavioral models and to compare how well each accounts for intertemporal choice. The models differ in their complexity and in the theoretical assumptions they make about preference: the traditional, static hyperbolic discount and hyperboloid functions are compared with a probabilistic attribute-wise model built from direct and relative differences in delay and payoff. Once trained, the neural networks permit accurate, near-instantaneous parameter estimation and model comparison, whereas traditional methods can take hours or, in some cases, days. We compare different network architectures, show that they accurately recover the true intertemporal preferences encoded in each model's parameters, and then compare the models on their ability to predict individual choices. The models were applied to a large data set of substance users in protracted abstinence from Sofia, Bulgaria, who completed a short, 27-question choice task. The results illustrate the utility of machine learning approaches for wider adoption and integration of cognitive and economic models, providing efficient methods for quantifying meaningful differences in intertemporal preferences from simple experiments.
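For concreteness, the two static discount functions named above have standard forms in the discounting literature, and a probabilistic choice rule can be attached to turn subjective values into choice probabilities. The sketch below is illustrative only: the function names, the temperature parameter, and the use of a logistic rule are assumptions on our part, not the exact specification used in the talk.

```python
import math

def hyperbolic(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k*D).
    Larger k means steeper discounting (more impatience)."""
    return amount / (1.0 + k * delay)

def hyperboloid(amount, delay, k, s):
    """Hyperboloid discounting: V = A / (1 + k*D)**s.
    The extra exponent s scales sensitivity to delay;
    s = 1 reduces to the hyperbolic form."""
    return amount / (1.0 + k * delay) ** s

def p_choose_delayed(a_now, d_now, a_later, d_later, k, temp=1.0):
    """Logistic choice rule (an illustrative assumption): probability
    of taking the larger-later option, given hyperbolic subjective
    values; temp controls choice sensitivity / stochasticity."""
    v_now = hyperbolic(a_now, d_now, k)
    v_later = hyperbolic(a_later, d_later, k)
    return 1.0 / (1.0 + math.exp(-temp * (v_later - v_now)))
```

For example, with k = 0.1 a payoff of $100 in 30 days is worth 100 / (1 + 0.1 * 30) = $25 now, so an immediate $50 dominates it and the delayed-choice probability falls well below 0.5. Simulated choices from rules like this are what a neural network can be trained on to map observed response patterns back to model parameters.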
Cite this as:
Kvam, P., &