Fast talk session
Diffusion-based models have been successfully used to model response time distributions in psychological decision-making experiments (see Ratcliff et al. (2016) for a review). van der Maas et al. (2011) proposed an item response theory-based extension of the diffusion model (Q-diffusion) designed to incorporate item-specific characteristics. Kang et al. (2022) and van der Maas et al. (2011) successfully used Bayesian posterior sampling methods to estimate Q-diffusion model response time distributions from a mental rotation dataset and demonstrated model convergence even in the presence of non-informative prior distributions. The current study empirically investigated how the posterior distribution of response times in the Q-diffusion model is affected by different choices of the mean of a person-specific log-normal prior distribution. Both small and large perturbations of the log-normal mean were chosen to represent situations where a baseline posterior mean is either within or outside the high-probability zone of the prior distribution, representing "data-prior conflict" (see Clarke and Gustafson (1998)). Sensitivity was assessed using the Ruggeri and Sivaganesan (2000) relative sensitivity metric Rπ, defined as the squared difference between the posterior means under the baseline and perturbed priors, divided by the posterior variance under the perturbed prior. Small perturbations yielded 0.01 < Rπ < 0.02, while large perturbations yielded 0.2 < Rπ < 0.5. These results suggest that the posterior distribution of the Q-diffusion model is sensitive to poor choices of the prior distribution but more robust to appropriate ones.
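As an illustration of the sensitivity metric described above, the Rπ calculation can be sketched as follows (the posterior summaries are hypothetical, not the study's actual estimates):

```python
# Relative sensitivity metric R_pi (Ruggeri & Sivaganesan, 2000):
# squared difference between posterior means under the baseline and
# perturbed priors, divided by the posterior variance under the
# perturbed prior. All numbers below are hypothetical.

def r_pi(mean_baseline, mean_perturbed, var_perturbed):
    """Relative sensitivity of the posterior mean to a prior perturbation."""
    return (mean_baseline - mean_perturbed) ** 2 / var_perturbed

# Small perturbation: posterior mean barely moves relative to its spread.
small = r_pi(mean_baseline=1.00, mean_perturbed=1.01, var_perturbed=0.01)
# Large perturbation ("data-prior conflict"): the mean shifts far.
large = r_pi(mean_baseline=1.00, mean_perturbed=1.06, var_perturbed=0.01)
```

With these toy values, the small perturbation gives Rπ ≈ 0.01 and the large one Rπ ≈ 0.36, mirroring the ranges reported above.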
Prof. Pernille Hemmer
Temporal Binding (TB) is standardly regarded as an implicit measure of the sense of agency (Haggard, 2017), though an underlying mechanism has not been agreed upon (Hoerl et al., 2020). Here we propose a memory process as an explanation for the effect observed in two publicly available datasets (Weller et al., 2020), which consisted of two experiments manipulating ‘action type’ and the length of timing intervals. Replotting the data, we found a classic memory pattern (regression to the mean) in both experiments. We simulated the behavioral patterns using a simple Bayesian model of memory (Hemmer & Steyvers, 2009), which assumes memory to be a combination of episodic and semantic memory. The model provided a good qualitative fit in all but one experimental condition. Adjusting the prior mean for the ‘action’ condition resulted in an improved fit. Next, we evaluated whether systematic variation in memory noise values follows Weber’s law. We hypothesized that the increased perceptual noise at longer time intervals also influences memory noise and would manifest as a non-linear regression pattern (Huttenlocher et al., 2000), as observed in this dataset. We calculated an overall Weber fraction constant (K) and scaled memory noise by K. The simulation remained ‘too linear’ compared to participant responses. We then tested various memory noise values, each scaled by K. Finally, we calculated a K per timing interval and used these values to scale the memory noise at each interval. While the memory model provided a good fit to the empirical data, the qualitative fits varied across simulations, indicating that the underlying mechanism might be more complex. We discuss the results in the context of Weber’s law and TB. Our findings suggest the TB effect may arise, at least in part, from cognitive processes other than experienced agency.
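To make the mechanism concrete, here is a minimal sketch of this kind of Bayesian memory model with Weber-scaled noise (all parameter values are hypothetical, not fitted to the Weller et al. data):

```python
import random

def recalled_interval(true_ms, prior_mean, prior_sd, K, rng):
    """One simulated recall: a noisy episodic trace combined with a semantic
    prior via a conjugate normal update. Memory noise is scaled by a Weber
    fraction K, so noise grows with interval length."""
    mem_sd = K * true_ms                                # Weber-scaled noise
    trace = rng.gauss(true_ms, mem_sd)                  # noisy episodic trace
    w = prior_sd ** 2 / (prior_sd ** 2 + mem_sd ** 2)   # weight on the trace
    return w * trace + (1 - w) * prior_mean             # regression to prior

rng = random.Random(1)
# Short intervals are over-estimated and long ones under-estimated:
# the regression-to-the-mean signature discussed above.
short = sum(recalled_interval(250, 600, 150, 0.2, rng) for _ in range(5000)) / 5000
long_ = sum(recalled_interval(950, 600, 150, 0.2, rng) for _ in range(5000)) / 5000
```

Because the Weber-scaled noise is larger at long intervals, long intervals are pulled more strongly toward the prior mean, which is one way a non-linear regression pattern can arise.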
Mr. Ashwin Somasundaram
Mr. Ritesh Malaiya
Prof. Richard Golden
Cognitive Diagnostic Models (CDMs) are widely used psychometric models which assume that the probability an exam item is correctly answered is functionally dependent upon the examinee’s binary-valued latent skills. The skill requirement is formalized by the examiner in the form of a Q-matrix, which specifies the skills required to answer an exam item successfully with high probability. Given that the Q-matrix may not always be known a priori, several studies have evaluated ways to retrofit a Q-matrix to existing assessments (see Ravand and Baghaei, 2019, for a review). In the current experiment, we examined the model fit of two different approaches for constructing the Q-matrix for an undergraduate course (n = 79). In the top-down approach, each course-level learning objective is used as a skill by itself or broken into subcategories, and groups of exam items are then associated with the relevant subcategories. In the bottom-up approach, skills associated with individual exam items are identified and only the most frequently used skills are included in the final analysis. Using a bootstrap simulation methodology, three model selection criteria were used to compare model fits between the two Q-matrices: the Generalized Akaike Information Criterion (GAICTIC), the Bayesian Information Criterion (BIC), and the Cross-Entropy Bayesian Information Criterion (XBIC) (Golden, 2020). Across different variations in sample size and regularization, all three measures consistently selected the bottom-up model as the better model. The results have implications for guiding the development of methods for Q-matrix specification (i.e., skill to exam item mappings).
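Of the three criteria above, BIC has the simplest closed form; a minimal sketch (with hypothetical log-likelihoods and parameter counts, not the study's actual fits) shows how a more parsimonious Q-matrix can win such a comparison:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: smaller is better; each extra
    parameter costs log(n) on the penalty term."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# Hypothetical fits: the bottom-up Q-matrix implies fewer skills (fewer
# item parameters) while fitting nearly as well, so BIC prefers it.
bic_top_down = bic(log_likelihood=-1510.0, n_params=40, n_obs=79)
bic_bottom_up = bic(log_likelihood=-1520.0, n_params=25, n_obs=79)
bottom_up_preferred = bic_bottom_up < bic_top_down
```

Here the bottom-up model loses 10 units of log-likelihood but saves 15 parameters, and the penalty savings dominate at n = 79.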
Frank E Ritter
Mr. Jacob Oury
The KRK theory (Kim, Ritter, & Koubek, 2013) describes the learning of a complex task in three stages, and describes specific curves of forgetting that occur depending on the stage of learning. Our study of the learning and retention of a complex task, troubleshooting the Ben Franklin Radar System, predicted that these differing curves could be found by using three learning and three retention periods of different lengths. We measured procedural learning through the completion time of troubleshooting problems, as well as recall- and recognition-based assessments. We found that while these curves were not seen when plotting learning over multiple sessions, learning from the end of training to the beginning of testing did follow these curves. Within a session, completion time for trials decreased along the expected trajectory. Forgetting occurred between sessions and was most clearly seen by comparing performance at the end of the last practice session to performance at the start of the testing session. (We included additional tests to obtain more stable measures, but because learning continued during testing, this stability was not achieved.) These results suggest that the scale at which this theory can be applied may differ depending on task complexity and on whether learning continues within testing.
Participants in categorization experiments usually assign a single stimulus to one of multiple categories. Despite its real-world significance, participants are rarely asked which of multiple options belongs to a single category. In the current experiment, participants selected the stimulus, from a set of 2 or 3, that most likely belongs to a learned category. The results of Experiment 1 (1-dimensional stimuli) suggest a repulsion effect, in which a nearby dominated stimulus reduced the probability of selecting the dominating stimulus. The results of Experiment 2 (2-dimensional stimuli) suggest a small attraction effect, in which the probability of selecting the dominating stimulus is increased. We extend standard exemplar-similarity models (GCM) by incorporating random utility modeling (RUM). The modeling results of both experiments suggest that stimulus utility alone may not be able to account for choice, i.e., the model must also incorporate similarity between choice options, although this finding is tentative for Experiment 2 and may reflect a spatial bias.
When faced with positive and negative outcomes, people seem to use different learning strategies. Facing losses, people tend to explore their environment more thoroughly by alternating between different options. Facing gains, people tend to explore less and instead exploit known options. Different theoretical explanations for this exploratory tendency have been discussed without agreement on any single theory. Some have considered Reinforcement Learning (RL) models, but have ultimately concluded that human exploratory behavior across domains is best described by either a Win-Stay-Lose-Shift (WSLS) heuristic or the so-called Bayesian shrinkage hypothesis, which assumes different prior expectations by domain. In the current study we conduct simulations to test whether any of these three accounts can recreate the increased explorative behavior in the domain of losses found in human data. We demonstrate that, of the three accounts, only a conventional RL model with neutral initial beliefs exhibited the same sort of asymmetric exploratory choice behavior that has been documented in human learners. Neither simulated WSLS-type learners nor Bayesian shrinkage-type learners (formalized as an RL model with domain-specific initial beliefs) exhibited more exploration in the domain of losses. Ultimately, the RL model’s ability to reproduce the exploratory behavior of interest depended on the assumptions made about learners’ pre-experimental expectations. We highlight how these assumptions are particularly relevant for the design of decision-from-experience tasks, especially those in which exploration is the behavior of interest. Overall, the current study advances the ongoing discussion in the literature about which models can account for domain differences in exploratory choice and highlights, yet again, the interdependence of modeling choices and design choices in experiments.
Prof. Arndt Bröder
Semantic space models are powerful tools in semantic memory research, which use the distributional structure of words in large natural language datasets to derive high-dimensional vector representations for the words or concepts in a semantic space. In a recent line of research, these word vectors have been used to predict judgments of similarity, probability, or other quantities. If these spaces capture the structure of human conceptual representations, it should also be possible to predict comparative choices between concepts on nonsensical attributes, as long as the concepts are spatially arranged at sufficiently distinct locations along the attribute dimension. In a first experiment, we presented n = 30 participants with k = 60 nonsensical comparisons in order to investigate the ability of the semantic space model to predict participants’ responses. Overall, an analysis using a Bayesian hierarchical logistic regression model showed that the model could predict participants’ responses above chance level, with an accordance rate between model-predicted and observed responses of θ = 57%. However, the results also showed that while there was only a small difference between participants (θ ranging from 53% to 56%), there were large differences between items in how well the model predicted participants’ actual judgments, with accordance rates ranging from θ = 36% to θ = 89%. That participants’ responses were both similar to one another and as predicted by the semantic space model, at least for some items, might indicate that the derived high-dimensional vector representation of the semantic space incorporates, to some extent, shared aspects of people’s semantic memory.
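One common way to derive such comparative predictions from a semantic space (a hedged sketch with toy 3-dimensional vectors and made-up concept names; real embeddings have hundreds of corpus-derived dimensions) is to project each concept vector onto an attribute axis defined by two pole words:

```python
import math

def project(vec, axis):
    """Scalar projection of a concept vector onto an attribute axis."""
    norm = math.sqrt(sum(a * a for a in axis))
    return sum(v * a for v, a in zip(vec, axis)) / norm

# Toy vectors, purely illustrative; the attribute axis runs from a
# "small" pole to a "big" pole in the semantic space.
big, small = [1.0, 0.2, 0.0], [-0.8, 0.1, 0.3]
elephant, mouse = [0.9, 0.5, 0.1], [-0.5, 0.4, 0.2]

axis = [b - s for b, s in zip(big, small)]   # attribute dimension
predicted_larger = ("elephant" if project(elephant, axis) > project(mouse, axis)
                    else "mouse")
```

The comparison is then predicted by whichever concept lies further along the axis, which is what "spatially arranged at sufficiently distinct locations" requires.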
Adaptive design optimization (ADO) is a state-of-the-art technique for designing experiments for cognitive modeling (Cavagnaro, Myung, Pitt, and Kujala, 2010). ADO dynamically identifies stimuli that, in expectation, yield the most information about the hypothetical construct of interest (e.g., parameters of a cognitive model). To calculate this expectation, ADO leverages the modeler’s existing knowledge, specified in the form of a prior distribution. “Informative” priors, constructed on the basis of domain knowledge or previous data, have the potential to align the prior with the empirical distribution in the participant population, thereby making ADO maximally efficient. However, if the informative prior is inaccurate, i.e., “misinformative,” then ADO may be led astray, leading to wasted trials and lower efficiency. To play it safe, many researchers turn to “uninformative” priors. Yet, priors chosen for their predictive agnosticism rather than for insight are also unlikely to align with the population distribution, possibly making them equally inefficient. In ongoing work, we investigate the consequences of informative, misinformative, and uninformative prior distributions for the efficiency of experiments using ADO.
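The expectation ADO maximizes can be read, for each candidate design, as the mutual information between the parameters and the yet-unobserved outcome under the prior. A minimal discrete sketch (hypothetical two-point prior and Bernoulli outcomes, not an actual ADO implementation) illustrates why a discriminating design is preferred:

```python
import math

def expected_info_gain(prior, success_prob):
    """Mutual information (in nats) between a discrete parameter and a
    binary outcome for one candidate design.
    prior: {theta: p(theta)}; success_prob: {theta: p(success | theta)}."""
    p_success = sum(prior[t] * success_prob[t] for t in prior)
    mi = 0.0
    for outcome_p, marginal in ((lambda t: success_prob[t], p_success),
                                (lambda t: 1.0 - success_prob[t], 1.0 - p_success)):
        for t in prior:
            joint = prior[t] * outcome_p(t)
            if joint > 0.0:
                mi += joint * math.log(joint / (prior[t] * marginal))
    return mi

# Hypothetical two-point prior over ability and two candidate designs.
prior = {"low": 0.5, "high": 0.5}
design_a = {"low": 0.5, "high": 0.9}    # outcome discriminates abilities
design_b = {"low": 0.7, "high": 0.75}   # outcome barely discriminates
best = "a" if expected_info_gain(prior, design_a) > expected_info_gain(prior, design_b) else "b"
```

A misinformative prior distorts `p_success` and hence the ranking of designs, which is exactly the efficiency cost discussed above.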
While flying aircraft, pilots must balance the physical and cognitive requirements of piloting while also communicating with air traffic control (ATC) via radio. Increased workload, such as extraneous information from ATC or additional cognitive tasks, taxes pilots’ limited cognitive resources and subsequently affects performance. To limit this effect, pilots are trained to prioritise the tasks most crucial to safe flight, with verbal communication considered a low priority. Pilot communication has been found to be affected by increased information density from ATC, radio frequency congestion, and higher cognitive workload. However, it is unclear whether this is due to effective task prioritisation or a more general deficit in piloting performance. To examine this issue, avionics data were examined from a previous study in which seventeen pilots participated in a flight simulation experiment. Pilots flew six flights in total: three high-load flights which imposed high workload from different sources (high ATC speech rate, high ATC information density, and a mid-flight fuel calculation, respectively), and three low-load flights which matched the high-load flights’ profiles but lacked the additional workload demands. Flight performance was assessed by comparing the pilots’ compass heading throughout each flight to the heading they were instructed to hold by ATC, and a time series of heading error was calculated. No significant difference in heading error was found between workload levels (high/low) or sources of workload (speech rate/information density/cognitive workload), and Bayesian analysis found evidence against these factors. These results indicate that, while pilots’ communication was negatively affected by increased workload, their flight performance was not similarly affected, implying effective task prioritisation under high cognitive workload.
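A minimal sketch of the heading-error computation (assuming headings in degrees; not the study's actual analysis code) highlights the angle-wrapping detail such a time series requires:

```python
def heading_error(actual_deg, assigned_deg):
    """Signed angular difference in degrees, wrapped to [-180, 180)."""
    return (actual_deg - assigned_deg + 180.0) % 360.0 - 180.0

def mean_abs_error(headings, assigned_deg):
    """Mean absolute heading error across a flight's time series."""
    errors = [abs(heading_error(h, assigned_deg)) for h in headings]
    return sum(errors) / len(errors)

# The wrap-around is the subtle part: a heading of 359 deg against an
# assigned heading of 002 deg is a 3-degree error, not 357 degrees.
print(heading_error(359.0, 2.0))  # -3.0
```

Summaries like the mean absolute error per flight can then be compared across workload conditions.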
Dr. Marija Blagojević
Dr. Parviz Azadfallah
Dr. Piotr Oles
A review of the literature in the psychological sciences shows little research using association rule mining algorithms. Core results have instead relied on classical statistics aimed at hypothesis testing. In practice, there are large recorded datasets in psychology which have been mostly ignored. The purpose of this study is to clarify the importance of association rule mining, which can help discover micro-theories in messy data. Method: The participants in this research were a sample of 325 people (85.3% female and 14.7% male) living in Tehran in 2021, selected by convenience sampling through online platforms. All participants completed scales measuring childhood trauma, social-emotional competence, internalized shame, disability/shame, cognitive flexibility, distress tolerance, and alexithymia (Toronto Alexithymia Scale). The data were analyzed using RStudio 4.1 and the Apriori package. Results: 39,368 rules were initially discovered from the 7 variables, and the top 20 rules were selected, with support ranging from 0.003 to 0.243, confidence from 0.05 to 1, and lift from 0.15 to 3.43. These rules indicated new relationships between the disability/shame schema and the other 6 variables. Each rule contained at least 2 variables. Conclusion: Association rule mining, as a knowledge-driven approach, can be of interest to all mind researchers for exploring hidden patterns in a database; such patterns lead to practical and theoretical knowledge.
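The three rule metrics reported above have simple definitions; here is a minimal sketch on toy transactions with hypothetical item labels (not the study's data):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift for the rule antecedent -> consequent.
    Transactions are sets of items (here: dichotomized variables)."""
    n = len(transactions)
    p_a = sum(antecedent <= t for t in transactions) / n        # P(A)
    p_c = sum(consequent <= t for t in transactions) / n        # P(C)
    p_both = sum((antecedent | consequent) <= t for t in transactions) / n
    support = p_both                  # P(A and C)
    confidence = p_both / p_a         # P(C | A)
    lift = confidence / p_c           # how much A raises the chance of C
    return support, confidence, lift

# Toy transactions, for illustration only.
transactions = [{"trauma", "shame"},
                {"trauma", "shame", "alexithymia"},
                {"flexibility"},
                {"trauma"}]
s, c, l = rule_metrics(transactions, {"trauma"}, {"shame"})
```

Apriori simply enumerates candidate rules efficiently and filters them by thresholds on exactly these quantities.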
Marieke Van Vugt
Making decisions requires the accumulation of evidence, a process described quantitatively by the drift diffusion model (DDM). In most DDM applications, it is assumed that this evidence is driven by a single process. Yet, in reality, accumulation could be driven by multiple distinct sources of information. Here we examine a situation where the accumulation process is driven by orthographic and semantic information, in the service of making lexical decisions. These factors can be neatly separated by using Chinese characters. The DDM was fit to the behavioural data to obtain estimates of its model parameters. We found a decreased drift rate, which reflects the strength of evidence, for non-words relative to actual words. There was a negative correlation between drift rate and subjective word-likeness and familiarity. Although the amplitudes of the N1 (which is related to orthographic processing) and the N400 (which is related to semantic processing) did not differ across word types, when the ERP components were entered as regressors in separate models, the N1 and N400 did help to better estimate the trial-by-trial drift rate in the conditions relevant for orthographic and semantic processing, respectively. Taken together, our study shows how different sources of evidence for lexical decisions are reflected in brain activity and inform the decision-making process.
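A minimal DDM simulation (hypothetical parameter values, not the fitted estimates) illustrates how a lower drift rate, as found for non-words, produces slower and less accurate responses:

```python
import random

def ddm_trial(drift, boundary, dt=0.001, noise=1.0, rng=None):
    """One drift diffusion trial: evidence starts at 0 and accumulates
    with Gaussian increments until it hits +boundary (correct) or
    -boundary (error). Returns (correct, rt_seconds)."""
    rng = rng or random.Random()
    x, t, sqrt_dt = 0.0, 0.0, dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + noise * sqrt_dt * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t

rng = random.Random(7)
words = [ddm_trial(2.0, 1.0, rng=rng) for _ in range(300)]      # high drift
nonwords = [ddm_trial(0.5, 1.0, rng=rng) for _ in range(300)]   # low drift
acc_words = sum(c for c, _ in words) / 300
acc_nonwords = sum(c for c, _ in nonwords) / 300
rt_words = sum(t for _, t in words) / 300
rt_nonwords = sum(t for _, t in nonwords) / 300
```

Trial-by-trial regression of the drift rate on ERP amplitudes amounts to letting `drift` vary with the N1 and N400 on each trial.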
We are sometimes faced with inference tasks in a domain of interest where we do not have sufficient information, but we could use our knowledge from other domains to help solve the problem. We frequently undergo this knowledge transfer process, but what are the underlying mechanisms that enable us to achieve this feat? One possible answer is analogy. This study examines how analogy influences decision-making performance in a new environment. The knowledge transferred to a new environment can include the importance of cues and the strategies used. The experiments in this study investigate analogical transfer from one domain to another in multi-attribute decision-making tasks; specifically, whether knowledge such as cue-criterion correlations and the best-performing strategy can be transferred via analogical mapping. The goal of the modeling is to understand the mechanisms underlying analogical transfer in cue learning and strategy selection. The model has two components, reinforcement learning of strategy selection and analogical transfer, both of which will be implemented in ACT-R because it is a well-established framework for integrating cognitive models.
Mr. Jay Wimsatt
Ms. Abigail Sedziol
Mr. Raghvendra Yadav
Mr. Cody Ross
In a seminal work, Rosch (1973) argued for the existence of structured, non-arbitrary semantic categories in the domain of form which developed around perceptually salient “natural prototypes”. As a working hypothesis, Rosch used the most “ecologically typical” salient forms, which she regarded as the “good forms” (i.e., the square, circle, etc.) of Gestalt Psychology. Based on this idea, categories in which the presumed natural prototypes were central tendencies were constructed to test participants in her experiment. In this talk we perform mathematical reverse engineering and show that the generative natural prototypes in Rosch’s experiments may be determined or accurately predicted from the generated categories using a model and theory of subjective information derived from Generalized Invariance Structure Theory (GIST; Vigo, 2013, 2015) and referred to as Generalized Representational Information Theory (GRIT; Vigo, 2011, 2012, 2015). We also show a natural procedure for n-ary dimensional encoding in GRIT when the number of dimensions involved is insufficient (i.e., when observers conduct dimensional surgery, thereby extending the dimensional space).
Generalization studies typically use a design in which multiple stimuli vary along a single stimulus dimension and a given outcome or response is associated with a single value in the dimension. This is similar to the method of constant stimuli used to characterize psychometric curves in psychophysics, although in many cases measuring continuous rather than discrete responses to each stimulus. Here, we propose a generalization of the signal detection model for the psychometric curve that deals with continuous responses. As in the traditional model, we assume normally distributed decision variables with means and variances that change depending on the presented stimulus. We also assume that a monotonic link function transforms these variables into the measured responses, which are perturbed by random normal noise. The model is a generalization of traditional signal detection models, which are obtained by assuming a staircase link function. We propose an algorithm that uses a combination of quantile functions and monotone spline regression to estimate the parameters of this model from data, and show that the inclusion of a flexible link function allows the model to fit continuous data better than ROC analyses previously proposed for continuous data. Potential applications include the adaptive estimation of generalization curves and application to continuous neural data such as fMRI activity estimates.
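The generative model described above can be sketched as follows (the logistic link and all parameter values are hypothetical choices for illustration; the actual model estimates the link with monotone splines):

```python
import math, random

def simulate_response(mu, sigma, link, response_sd, rng):
    """Generative model: a normal decision variable is passed through a
    monotone link function, then perturbed by normal response noise.
    A staircase link would recover the traditional discrete model."""
    x = rng.gauss(mu, sigma)                 # latent decision variable
    return link(x) + rng.gauss(0.0, response_sd)

# A smooth monotone link mapping the latent axis onto a 0-100 rating scale.
link = lambda x: 100.0 / (1.0 + math.exp(-x))

rng = random.Random(0)
weak = sum(simulate_response(-1.0, 1.0, link, 5.0, rng) for _ in range(4000)) / 4000
strong = sum(simulate_response(1.0, 1.0, link, 5.0, rng) for _ in range(4000)) / 4000
```

Stimuli with larger latent means yield larger mean continuous responses, and the shape of the link determines how the generalization curve bends between them.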
It is increasingly common to use Bayesian modeling techniques that rely on Markov chain Monte Carlo (MCMC) methods, such as the variant of Hamiltonian Monte Carlo implemented by Stan (mc-stan.org). While excellent tools for the processing, visualization, and analysis of output from Stan exist in R (bayesplot, posterior) and Python (ArviZ), few such resources exist for MATLAB users. I created the matstanlib library to fill this gap. In this fast talk, I will demonstrate how matstanlib supports multiple stages of a modern Bayesian modeling workflow in MATLAB. First, I will show how matstanlib automates a full set of computational diagnostic checks, consistent with current best practices for Bayesian sampling methods (e.g., Vehtari et al., 2021; Betancourt, 2018). Next, I’ll review matstanlib’s diagnostic plots, from trace plots to ESS interval plots to parameter recovery plots, which can be used to better understand model performance. Finally, I’ll show how matstanlib can facilitate model-based inference with plotting functions for the visualization of posterior densities and intervals, and analysis functions for the computation of density estimates and model comparison metrics. This fast talk will also serve as a quick-start guide for working with the matstanlib library.
Ms. Peyton Corbi
Robin D. Thomas
Thomas et al. (2019) extended systems factorial technology to nested architectures with respect to the predictions that various mental architectures make for survivor and mean interaction contrasts of response times. In this presentation, we explore the capacity coefficient predictions that plausible nested architectures make and apply them to the case of visual perception of compound sinewave gratings across the visual fields, to address questions of hemispheric differences in global and local processing of information sources.