Evidence accumulation
Prof. Andrew Heathcote
Dr. Jim Sauer
Matt Palmer
Dr. Adam Osth
Accurate decisions tend to be both confident and fast. Nonetheless, relatively few models can simultaneously address this three-way relationship, especially for single-stage decisions in which participants indicate both their choice and their confidence. Building on a common decision architecture, the linear ballistic accumulator framework, two models have been proposed: 1) a Multiple Threshold Race model, which instantiates the Balance-of-Evidence hypothesis in which confidence is determined by the difference between the accumulated evidence for competing options (e.g., Reynolds, Osth, Kvam, & Heathcote, in revision), and 2) a newly developed Confidence Accumulator model, which assumes that confidence itself is accumulated independently for each confidence option. To test these two confidence architectures, we ran two experiments that manipulated the length of the confidence rating scale (2, 4, or 6 options) in a recognition memory task and a perceptual task. We compared models that made different allowances for how the length of the confidence scale affected model parameters. In both model classes, thresholds were affected by the length of the scale, whereas drift rates were only minimally affected. Implications for models of confidence and response time will be discussed.
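As a rough illustration of the Balance-of-Evidence idea only (this is a generic linear ballistic accumulator sketch, not the Multiple Threshold Race or Confidence Accumulator models fitted here), the code below simulates a two-accumulator race and reads confidence off the gap between the winning and losing accumulators at the moment of decision; all parameter values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(drifts, b=1.0, A=0.5, s=0.25, t0=0.2):
    """Simulate one linear ballistic accumulator race.

    Each accumulator starts at k ~ Uniform(0, A), rises linearly at a
    trial-specific rate d ~ Normal(drift, s) (floored to stay positive),
    and a response is made when the first accumulator reaches threshold b.
    All parameter values here are illustrative, not fitted.
    """
    k = rng.uniform(0.0, A, size=len(drifts))       # start points
    d = np.maximum(rng.normal(drifts, s), 1e-6)     # trial-specific rates
    finish = (b - k) / d                            # time for each to reach b
    winner = int(np.argmin(finish))
    rt = t0 + finish[winner]
    # Balance-of-evidence style confidence: gap between the winner's evidence
    # (which equals b at the decision time) and the best loser's evidence.
    evidence = k + d * finish[winner]
    confidence = b - np.max(np.delete(evidence, winner))
    return winner, rt, confidence

choices, rts, confs = zip(*(lba_trial(np.array([1.2, 0.8])) for _ in range(1000)))
print(np.mean(choices), np.mean(rts), np.mean(confs))
```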
Dr. Jamal Amani Rad
Recently, the circular diffusion model (CDM; Smith, 2016) was developed to handle both choices and response times for decisions in a continuous option space. It assumes that evidence accumulation follows a Brownian motion within a circle and terminates whenever the accumulator reaches any point on the perimeter, at which point a decision is made. Although the model captures a range of continuous behavioral phenomena well, it has not yet been widely adopted and tested by decision psychologists because of its mathematical complexity. Here we propose a simple method for estimating the CDM parameters that only requires evaluating straightforward formulas based on a few statistics of the data. The method is based on the traditional method of moments. Parameter recovery with this method is shown to be nearly as accurate as maximum likelihood estimation.
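To convey the kind of quantities such a moment-based estimator works with (this is not the proposed estimator itself, only generic circular summary statistics), the sketch below computes the sample moments one would typically start from: the circular mean of the response angles, which estimates the drift direction, the mean resultant length, and the mean decision time. The moment equations that map these statistics onto the remaining CDM parameters are not reproduced here.

```python
import numpy as np

def circular_moments(theta, rt):
    """Sample moments commonly used for circular response data.

    theta : response angles in radians; rt : decision times in seconds.
    Returns the circular mean direction, the mean resultant length, and the
    mean decision time -- the kind of statistics a method-of-moments
    estimator for the circular diffusion model could plug into its moment
    equations (the equations themselves are not shown here).
    """
    z = np.exp(1j * np.asarray(theta))
    mean_direction = np.angle(z.mean())    # estimates the drift direction
    resultant_length = np.abs(z.mean())    # concentration of the responses
    mean_rt = float(np.mean(rt))
    return mean_direction, resultant_length, mean_rt

# Toy data: responses clustered around 0.5 rad with ~0.8 s decision times.
rng = np.random.default_rng(1)
theta = rng.vonmises(0.5, 4.0, size=500)
rt = rng.gamma(4.0, 0.2, size=500)
print(circular_moments(theta, rt))
```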
Dr. Peter Kvam
Tim Pleskac
Dr. Jerome Busemeyer
Sequential sampling models have provided accurate accounts of people’s choice, response time, and preference strength in value-based decision-making tasks. Conventionally, these models are developed as Markov-type processes (such as random walks or diffusion processes) following the Kolmogorov axioms. Quantum probability theory has been proposed as an alternative framework upon which to develop models of cognition, including quantum random walk models. When modeling people’s behavior during decision-making tasks, previous work has demonstrated that the Markov and quantum models each have their respective strengths. Recently, the open system model, a hybrid of the Markov and quantum models, has been shown to provide a more accurate account of preference strength than either the Markov or quantum model in isolation. In this work, we extend the open system model to make predictions about pairwise choice and response time, and we compare it with the Markov and quantum random walk models.
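For orientation, open-system dynamics of this kind are usually expressed as a master equation for the density matrix \(\rho\) that combines coherent (quantum) evolution with dissipative (Markov-like) evolution. The generic Lindblad form below is shown only as a sketch of that structure, not as the specific parameterization used in this work:

\[
\frac{d\rho}{dt} = -i\,[H,\rho] + \sum_{k} \gamma_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k,\; \rho \right\} \right),
\]

where the commutator term generates unitary (quantum) dynamics, the dissipator terms generate classical, Markov-like mixing, and the weights \(\gamma_k\) control how strongly the dissipative component contributes; switching the dissipative terms off recovers a purely quantum walk, while letting them dominate approximates a classical Markov process.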
Gaurav Malhotra
Casimir Ludwig
Models of decision-making assume an accumulation-to-threshold mechanism, whereby an individual pre-selects a single decision threshold that balances the speed and accuracy of their responses. However, when the goal is to maximise an outcome variable (such as reward rate), the decision maker will very likely keep adjusting the initially adopted threshold until satisfactory performance is reached. The standard assumption of stationarity leads to a threshold estimate that reflects averaged performance, which need not be representative of the participant’s strategy at any one time. We analysed data from an expanded judgment task in which the goal was to maximise reward rate, and estimated the height and slope of the decision threshold in a sliding window of trials, as well as over all trials. The overall best-fitting threshold parameters of a participant were often not representative of the thresholds estimated in smaller windows of trials at any point during the experiment. This is largely because the majority of participants explored the threshold parameter space throughout the task, rather than settling on a specific threshold early on. Importantly, this exploration was not driven by the reward rate that a particular threshold yielded; in fact, the exploration often resulted in a lower average reward rate late in the experiment relative to early trial windows. As such, participants failed to approach the threshold parameters that were optimal with respect to the task, i.e., those that would maximise reward rate. Our findings indicate that participants sample various distinct decision thresholds during a reward-optimisation task, rather than adopting a single threshold, as is frequently assumed by models of decision-making. These results also raise the question of whether such exploration is random, or whether it is modulated by a variable other than reward rate that decision-makers prioritise instead.
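To make the sliding-window analysis concrete (a generic sketch, not the authors' fitting code; the fit_threshold routine, window size, and step size are placeholders), one can fit the threshold parameters separately within overlapping blocks of trials and compare them with the overall fit:

```python
import numpy as np

def sliding_window_fits(trials, fit_threshold, window=100, step=25):
    """Fit threshold parameters within overlapping windows of trials.

    fit_threshold is a placeholder for whatever routine estimates the height
    and slope of the decision threshold from a block of trials; the window
    and step sizes are illustrative choices.
    """
    fits = []
    for start in range(0, max(len(trials) - window, 0) + 1, step):
        block = trials[start:start + window]
        fits.append((start, fit_threshold(block)))
    return fits

# Toy usage: 'trials' is an array of per-trial summaries and the "fit" is a
# stand-in (here simply the block mean), just to show the windowing logic.
trials = np.random.default_rng(4).normal(size=500)
print(sliding_window_fits(trials, fit_threshold=np.mean)[:3])
```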
Dr. Nathan J Evans
Dr. Jamal Amani Rad
Recently, Lévy flight models have attracted much attention. The main reason for their outstanding performance in modeling human behavior is that they assume a heavy-tailed distribution for the noise of the accumulation process, which produces occasional sudden jumps during accumulation. However, as the noise distribution becomes more heavy-tailed, low noise values become less likely, so the accumulation process between two jumps is less noisy than in the diffusion model. Consequently, in Lévy flight models, large sudden jumps and low-level noise cannot occur at the same time. This is not very realistic, because accumulation plausibly involves both low levels of noise and occasional jumps. In contrast to previous evidence accumulation models, which include only one noise distribution, the Lévy-Brownian model uses Gaussian and Lévy white noise simultaneously: the noise of the accumulation process is a weighted sum of a Gaussian component (with weight lambda) and a Lévy component (with weight 1). This model is therefore a general form of the Lévy flight model, which is recovered when lambda equals zero. Such a hybrid noise distribution yields an accumulation process that has both low-level noise and occasional sudden large jumps. We tested the performance of this model on perceptual and lexical decision tasks, and it outperformed both the Lévy flight and diffusion models.
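A minimal simulation sketch of this hybrid-noise idea is given below, assuming a single accumulator between two boundaries whose increments mix Gaussian noise (weight lambda) with symmetric alpha-stable (Lévy) noise (weight 1); the stability exponent, step size, and threshold are illustrative choices, not the values fitted to the reported experiments.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(2)

def levy_brownian_trial(v=0.8, a=1.0, lam=0.5, alpha=1.6, sigma=1.0,
                        dt=0.001, t0=0.3, max_t=5.0):
    """Simulate one two-boundary trial with hybrid Gaussian + Lévy noise.

    Each increment mixes Gaussian noise (weight lam) with symmetric
    alpha-stable noise (weight 1); lam = 0 recovers a pure Lévy flight.
    All parameter values here are arbitrary illustrations.
    """
    x, t = 0.0, 0.0
    while abs(x) < a / 2 and t < max_t:
        gauss = sigma * np.sqrt(dt) * rng.standard_normal()
        levy = dt ** (1.0 / alpha) * levy_stable.rvs(alpha, 0.0, random_state=rng)
        x += v * dt + lam * gauss + levy
        t += dt
    choice = int(x > 0)          # upper vs. lower boundary
    return choice, t0 + t

print([levy_brownian_trial() for _ in range(5)])
```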
Dr. Henrik Singmann
The Ratcliff diffusion decision model (DDM) is the most prominent model for jointly modelling binary responses and their associated response times. The DDM decomposes behavioural data into cognitive processes that are assumed to underlie performance in binary decision-making tasks. However, the DDM is notorious for being difficult to fit to such data. We have developed an R package, fddm (https://cran.r-project.org/package=fddm), that simplifies fitting the 5-parameter DDM with drift rate, drift-rate variability, boundary separation, start point, and non-decision time. fddm allows the DDM to be fit using the R formula interface. Its ddm() function fits the DDM to any number of conditions, where the mapping of conditions to DDM parameters can be specified through the formula interface separately for each parameter. In addition, fddm comes with the methods commonly available for fitted model objects, such as print() and summary(), integrating it into the R ecosystem; this makes it easy to perform likelihood-ratio tests and similar model-comparison procedures. fddm uses a newly developed mechanism for selecting the faster of two equivalent formulations of the probability density function. Furthermore, it uses the analytical gradient of the full 5-parameter DDM when numerically optimising the log-likelihood function. As a result, fitting with fddm is usually faster than with comparable packages.
Prof. Elliot Ludvig
M. Francois Rivest
Animals are very efficient at estimating elapsing time intervals, maximizing their responding around the time that reward typically occurs. To probe the psychological mechanisms underlying this timing ability, one strategy has been to examine behavior when a gap, or pause, is inserted into the stimulus that signals the time to reward. In such a gap procedure, animals generally show delayed responding after the gap, peaking after the time at which reward would have occurred. The standard response pattern in the gap procedure shows two periods of increased responding, one on either side of the gap. Most current models of interval timing cannot simulate the gap procedure, and it remains an open question whether the animal's sense of time decays or resets during the gap. To date, only gaps that occur before the usual time of reward have been examined. Using an unpublished dataset from rats in which gaps occurred 10 and 30 seconds after the time of reward, two new observations emerge. First, there is no secondary increase in responding when the gap occurs after the rat's peak response (or its expected time of reward). Second, the rat's behavior in response to a gap differs depending on whether its subjective estimate of the time to reward has elapsed or not. Existing models, such as Scalar Expectancy Theory (SET) and the Time-adaptive Drift Diffusion Model (TDDM), have difficulty modeling the gap data. Here, we extend the recently published Probabilistic-Response Drift-Diffusion Model (PRDDM) to the gap procedure and show that it reproduces the data well. In all cases, PRDDM's performance on gaps after the usual time of reward is more realistic than that of competing models, and it is the only model that allows responding to stop after a late gap. PRDDM's behavior after the estimated time to reward is novel in that the DDM's accumulator decreases towards zero, simulating the animal's declining interest in the stimulus. Unlike other models, it does not produce a secondary peak when simulating gaps after the time to reward. In addition, we evaluate the PRDDM with four possible mechanisms operating during the gap: Decay, Probabilistic Reset, Decay plus Probabilistic Reset, and a Hybrid model with different behaviors before and after the gap. We use these models to simulate the gap procedure and attempt to answer the question of what occurs during the gap. Preliminary results show that these variants are practically indistinguishable, suggesting that further empirical data will be needed to pinpoint the psychological mechanism at play.
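As a toy illustration of how the gap-mechanism variants differ (a generic accumulator sketch, not the PRDDM itself), the code below shows how a timing accumulator's state might be transformed by a gap under decay, probabilistic reset, or both; the decay rate and reset probability are assumed values.

```python
import numpy as np

rng = np.random.default_rng(3)

def apply_gap(state, gap_duration, mechanism, decay_rate=0.05, p_reset=0.5):
    """Transform a timing accumulator's state across a gap in the stimulus.

    'decay'       : the state leaks exponentially over the gap.
    'reset'       : the state returns to zero with probability p_reset.
    'decay+reset' : both operations are applied.
    Parameter values are illustrative, not fitted.
    """
    if mechanism in ("decay", "decay+reset"):
        state = state * np.exp(-decay_rate * gap_duration)
    if mechanism in ("reset", "decay+reset"):
        if rng.random() < p_reset:
            state = 0.0
    return state

for m in ("decay", "reset", "decay+reset"):
    print(m, apply_gap(state=12.0, gap_duration=10.0, mechanism=m))
```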