The memory measurement model framework (M3; Oberauer & Lewandowsky, 2018) comprises a collection of cognitive measurement models that isolate parameters associated with distinct working memory processes in widely used tasks such as simple or complex span and memory updating. By transforming the activation of item categories into recall probabilities, the framework estimates the contributions of individual memory processes to working memory performance. Using our own hierarchical implementation in Stan, we assessed subject-level parameter recovery for the complex span extended encoding and updating models, with a particular focus on the time-dependent parameters for removal, extended encoding, and extended updating, and the time-independent parameter for immediate deletion. Simulations revealed difficulties in recovering the time-dependent removal parameter in both models, whereas recovery of the extended encoding and extended updating parameters was sufficient under certain conditions. The time-independent immediate deletion parameter could be recovered sufficiently only with a disproportionate amount of data. Based on these simulation results, the current state of the extended encoding and updating models in the M3 framework does not allow subject-level parameters to be estimated with reasonable efficacy and precision. We provide recommendations for experimental designs that maximize parameter recovery and discuss possible workarounds to improve subject-level parameter recovery.
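As a rough illustration of the activation-to-probability step described above, the following Python sketch normalizes hypothetical category activations into recall probabilities via a Luce-style choice rule. The parameter names and values (b, c, a) are illustrative assumptions, not the exact M3 specification used in the study.

    import numpy as np

    def recall_probabilities(activations):
        # M3-style idea: normalize category activations into recall
        # probabilities (Luce choice rule); a minimal sketch, not the
        # full model specification.
        act = np.asarray(activations, dtype=float)
        return act / act.sum()

    # Hypothetical activation components (names are illustrative):
    b = 0.1   # background noise, reaches all response categories
    c = 1.0   # general activation of all presented items
    a = 2.0   # item-specific activation of the cued item

    activations = [b + c + a,  # correct item in the cued position
                   b + c,      # other items from the current list
                   b]          # never-presented distractors
    print(recall_probabilities(activations))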
The detection of careless responses in questionnaire data is of increasing importance in a time of online surveys and web-based experiments. Many statistics aim at detecting careless responses in already collected data, and these indices have been tested and compared from the perspectives of classical test theory and item response theory. Until now, however, no evaluation has been available from the perspective of knowledge structure theory. We compared representatives from various classes of indices in a simulation study based on knowledge structure theory. For two subscales of the Freiburg Personality Inventory (Fahrenberg, Hampel, & Selg, 2001), derived from the respective normative sample, knowledge states and response patterns were simulated. Careless responses were characterized by increased careless error and lucky guess rates, or by systematic responding (e.g., answering "no" throughout). The number of careless responders and the extent of their carelessness were varied. Signal detection theory was used to evaluate the performance of the indices.
References
Fahrenberg, J., Hampel, R., & Selg, H. (2001). FPI-R. Das Freiburger Persönlichkeitsinventar (7. Aufl.).
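As a hedged illustration of the signal-detection evaluation mentioned above, the Python sketch below scores a hypothetical carelessness index by computing hit and false-alarm rates and d'; the index values and criterion are invented for illustration and are not the study's indices.

    import numpy as np
    from scipy import stats

    def sdt_scores(index_careless, index_careful, criterion):
        # Flag a respondent as careless when the index exceeds the
        # criterion; hits are correctly flagged careless responders,
        # false alarms are flagged careful responders.
        hit = np.mean(np.asarray(index_careless) > criterion)
        fa = np.mean(np.asarray(index_careful) > criterion)
        hit, fa = np.clip([hit, fa], 0.01, 0.99)  # avoid infinite z-scores
        d_prime = stats.norm.ppf(hit) - stats.norm.ppf(fa)
        return hit, fa, d_prime

    # Hypothetical index values for simulated careless vs. careful respondents:
    print(sdt_scores([0.8, 0.6, 0.9], [0.2, 0.4, 0.1], criterion=0.5))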
Humans often need to make choices that trade off a benefit against a small chance of extinction (e.g., death or even human extinction). We developed a novel risky-choice gambling task, the "Extinction Gambling Task", to study how people reason about these types of events. In the Extinction Gambling Task, participants decide between a risky gamble (higher expected payoff but a small chance of extinction) and a safe gamble (lower expected payoff but no chance of extinction) across a series of trials (100 in our study). The task has two possible payoff structures for modeling extinction risk: In the "complete extinction" scenario, drawing the extinction option wipes out all past earnings and prevents any earnings in future trials. In the "opportunity cost extinction" scenario, extinction merely means that the participant cannot earn additional money in trials after the extinction event but keeps the earnings from previous trials. We derived optimal decision strategies for both scenarios and validated them against simulations. In the complete extinction case, the optimal strategy considers only the probability of the risky choice and the number of trials, not the order in which choices are made. In the opportunity cost case, the optimal strategy considers both the probability of risky choices and the order in which choices are made. Compared to the complete extinction scenario, the optimal number of risky choices is higher in the opportunity cost case. Furthermore, the optimal strategy in the opportunity cost case involves first playing safe and then switching to solely playing the risky gamble towards the end of the experiment. We compared participants' performance across both scenarios in a between-participants design in which each participant played one round of 100 trials. We found that (1) people are far too risk-seeking in early trials, which leads a large proportion of participants to become extinct relatively soon, and (2) participants in the opportunity cost condition make their first risky choice later than participants in the complete extinction condition, indicating some understanding of the different affordances of the two scenarios. Further, participants in the opportunity cost condition qualitatively follow the optimal strategy by increasing the proportion of risky choices towards the end of the experiment, whereas participants in the complete extinction condition do not show this pattern. We will present results from a mixture model describing different groups of participants. The Extinction Gambling Task is a promising approach that can shed light on human decision processes with important real-world implications.
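A minimal sketch of the complete extinction logic, under assumed payoff values (the study's actual payoffs may differ): with per-trial safe payoff s, risky payoff r, and per-risky-choice extinction probability p, the expected payoff depends only on the number of risky choices k, since extinction wipes out everything regardless of when it occurs. The optimal k can then be found by enumeration.

    import numpy as np

    # E[payoff | k] = (1 - p)**k * (k*r + (n - k)*s):
    # the participant keeps the earnings only if all k risky
    # draws avoid extinction; order is irrelevant.
    def expected_payoff(k, n=100, s=1.0, r=2.0, p=0.05):
        return (1 - p) ** k * (k * r + (n - k) * s)

    ks = np.arange(101)
    payoffs = [expected_payoff(k) for k in ks]
    print("optimal number of risky choices:", ks[np.argmax(payoffs)])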
The Ornstein-Uhlenbeck (OU) model represents time series data as mean-reverting stochastic processes that gravitate towards a particular level. The OU model is widely used in fields such as finance, physics, and biology to model the dynamics of non-linear time series data. The model has three main parameters, namely the attractor, the elasticity, and the volatility, which are interpreted as the steady-state level of the variable, the speed of reversion to the mean, and the intensity of random fluctuations around the mean, respectively. Hierarchical Bayesian implementations of the OU model allow for flexible and robust data analysis by incorporating population-level parameters and individual-level heterogeneity. They also allow flexibility in the structure of the model, so that we can include time-varying parameters, latent class indicators, and relevant prior information. We apply a Bayesian hierarchical OU model to data from a mobile health intervention study aimed at promoting psychological well-being in college students. The model allows us to estimate the effectiveness of the intervention on psychological well-being over time and the persistence of the effect after the intervention, and to identify individual-level features of the response to the intervention.
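A minimal Python sketch of the OU process itself, using an Euler-Maruyama discretization of dX = theta*(mu - X)dt + sigma*dW with assumed parameter values (not those estimated in the study):

    import numpy as np

    def simulate_ou(mu, theta, sigma, x0=0.0, dt=0.1, n=500, seed=0):
        # mu: attractor (steady-state level), theta: elasticity
        # (speed of mean reversion), sigma: volatility (size of
        # random fluctuations around the mean).
        rng = np.random.default_rng(seed)
        x = np.empty(n)
        x[0] = x0
        for t in range(1, n):
            drift = theta * (mu - x[t - 1]) * dt
            shock = sigma * np.sqrt(dt) * rng.standard_normal()
            x[t] = x[t - 1] + drift + shock
        return x

    series = simulate_ou(mu=5.0, theta=0.5, sigma=1.0)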
This paper presents multiple models of strategies that people may apply to find faults in a complex circuit. Previous researchers have modeled how, when, and what is learned in a simple fault-finding task and have started to explore individual differences in strategies. We continued modeling multiple strategies for tasks with higher complexity, moving from a simple circuit to a more complex circuit with five subcircuits. The multiple strategies that participants may use are implemented in a novel approach that combines hierarchical task analysis, the Keystroke-Level Model (KLM), and the ACT-R learning equations. We compared the time spent to finish tasks in Sessions 1 and 5 between participants and each model. This research provides insights into why we sometimes failed to predict behaviors well: it is not a problem of the strategies being modeled but of the variation and range of participants' strategies.
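One ingredient named above, the ACT-R learning equations, can be illustrated with the standard base-level learning equation; the sketch below shows the generic textbook form with invented practice times, not the authors' full KLM-based model.

    import numpy as np

    def base_level_activation(use_times, now, d=0.5):
        # ACT-R base-level learning: B = ln(sum_j t_j**(-d)), where
        # t_j is the time since the j-th past use of a chunk and d is
        # the decay parameter (0.5 is the conventional default).
        lags = now - np.asarray(use_times, dtype=float)
        return np.log(np.sum(lags ** -d))

    # A chunk practiced at 10 s, 100 s, and 500 s, evaluated at 600 s:
    print(base_level_activation([10.0, 100.0, 500.0], now=600.0))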