Fast Talk session
A fundamental question in cognitive science is how people comprehend sentences word by word. An important step in sentence comprehension is determining the syntactic relationships between words (figuring out who did what to whom). Building these syntactic relationships is known to take differing amounts of time depending on the type of sentence and the words it contains. A good theory of sentence comprehension should not only say how syntactic relations are established but also how long it takes to establish them. Here, we analyze a new model that aims to accomplish both goals. At each word in a sentence, the model stochastically explores a network of discrete states. Each state consists of a partial parse of the sentence so far, i.e., some set of dependency links between head words and dependent words. The model can jump between states if they differ by a single link until it reaches a state corresponding to a complete parse of the sentence so far. We use the master equation to analyze this continuous-time random walk. We present formulas for first passage time distributions and splitting probabilities, which are treated as the predicted reading times for that word and the probabilities of building different alternative parses, respectively. We illustrate how we can gain new insights into known phenomena (temporary ambiguities like "the horse raced past the barn fell") using these techniques. The hope is that these quantitative tools will facilitate comparisons with other sentence comprehension models and lead to new theory-driven experiments.
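A minimal sketch (not the authors' implementation) of how a first-passage-time density can be read off the master equation of a small continuous-time random walk over parser states; the three-state rate matrix below is purely hypothetical, with the third state standing in for the completed parse.

```python
# Sketch: first-passage-time density for a continuous-time random walk over parser
# states, analyzed via the master equation. The toy chain has one absorbing state.
import numpy as np
from scipy.linalg import expm

# Generator (rate) matrix Q: off-diagonal entries are jump rates between states
# that differ by a single dependency link; each row sums to zero.
Q = np.array([[-1.0,  0.7, 0.3],
              [ 0.4, -0.9, 0.5],
              [ 0.0,  0.0, 0.0]])        # state 2 is absorbing (complete parse)

S = Q[:2, :2]                            # sub-generator over the transient states
s0 = -S @ np.ones(2)                     # exit rates into the absorbing state
p0 = np.array([1.0, 0.0])                # start in state 0

def fpt_density(t):
    """Density of the first passage time into the complete-parse state (phase-type form)."""
    return float(p0 @ expm(S * t) @ s0)

print([round(fpt_density(t), 4) for t in (0.5, 1.0, 2.0)])
```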
Davis-Stober and Regenwetter (2019; D&R) showed that even if all predictions of a theory hold in separate experiments, not even a single individual may be described by all predictions jointly. To illustrate this 'paradox' of converging evidence, D&R derived upper and lower bounds on the proportion of individuals for whom all predictions of a theory hold. These bounds reflect extreme positive and negative stochastic dependence of individual differences across predictions. However, psychological theories often make more specific and plausible assumptions, such as that true individual differences are independent or show a certain degree of consistency (e.g., due to a common underlying trait). Based on this psychometric perspective, I extend D&R's conceptual framework by developing a multivariate normal model of individual effects. The model mitigates the 'paradox' of converging evidence even though it does not resolve it. Overall, scholars can improve the scope of their theories by assuming that individual effects are highly correlated across predictions.
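A small sketch of the psychometric idea with made-up marginal rates (not D&R's or this model's actual numbers): under a bivariate normal model of individual effects, the proportion of people for whom both predictions hold jointly moves between the extreme bounds as the correlation of individual effects increases.

```python
# Sketch: joint proportion of individuals satisfying two predictions when latent
# individual effects are bivariate normal with correlation rho; p1 and p2 are
# hypothetical marginal rates, and the Frechet bounds give the D&R-style extremes.
import numpy as np
from scipy.stats import norm, multivariate_normal

p1, p2 = 0.8, 0.7                        # proportion satisfying each prediction alone
z1, z2 = norm.ppf(p1), norm.ppf(p2)      # thresholds on the latent effects

def joint_rate(rho):
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0, 0], cov=cov).cdf([z1, z2])

frechet_lower = max(p1 + p2 - 1, 0.0)    # extreme negative dependence
frechet_upper = min(p1, p2)              # extreme positive dependence
print(frechet_lower, round(joint_rate(0.0), 3), round(joint_rate(0.8), 3), frechet_upper)
```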
Dr. Scott Brown
Dr. David Gunawan
Dr. Minh-Ngoc Tran
Dr. Robert Kohn
Many psychological experiments have participants repeat a simple task. This repetition is often necessary in order to gain the statistical precision required to answer questions about quantitative theories of the psychological processes underlying performance. In such experiments, time-on-task can have important and sizable effects on performance, changing the psychological processes under investigation in interesting ways. These changes are often ignored, and the underlying process is treated as static. We propose modern statistical approaches that are based on recent advances in particle Markov chain Monte Carlo (MCMC) to extend a static model of decision-making to account for time-varying changes in a psychologically plausible manner. Using data from three highly-cited experiments we show that there are changes in performance with time-on-task, and that these changes vary substantially over individuals -- both in magnitude and direction. Model-based analysis reveals how different cognitive processes contribute to the observed changes. We find strong evidence in favor of a Markov switching process for the time-based evolution of individual subjects' model parameters. This embodies the psychological theory that participants switch in and out of different cognitive states during the experiment. The central idea of our approach can be applied quite generally to quantitative psychological theories, beyond the model that we investigate and the experimental data that we use.
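As a toy illustration of the Markov switching idea (not the authors' particle-MCMC estimation machinery), the sketch below simulates a single decision-model parameter, such as a drift rate, that switches between two latent cognitive states over trials; the transition probabilities and state-specific values are hypothetical.

```python
# Sketch: two-state Markov switching process for one model parameter across trials.
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.95, 0.05],                 # row s: P(next state | current state s)
              [0.10, 0.90]])
drift_by_state = np.array([2.0, 0.8])       # hypothetical "engaged" vs "disengaged" drifts

n_trials, state = 500, 0
drifts = np.empty(n_trials)
for t in range(n_trials):
    drifts[t] = drift_by_state[state]
    state = rng.choice(2, p=P[state])       # switch in and out of cognitive states
print(drifts[:10])
```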
Prof. Arndt Bröder
Exemplar models are often used in research on multiple-cue judgments to describe the underlying process of participants’ responses. In these experiments, participants are repeatedly presented with the same exemplars (e.g., poisonous bugs) and instructed to memorize these exemplars and their corresponding criterion values (e.g., the toxicity of a bug). We propose that, with this experimental paradigm, the judgments of participants in a multiple-cue judgment experiment are a mixture of two qualitatively distinct cognitive processes: judgment and recall. When participants are presented with one of the trained exemplars in some later block of the experiment, they either have learned the exemplar and its respective criterion value and are thus able to recall the exact value, or they have not learned it and thus have to judge its criterion value, as if it were a new stimulus. However, the analysis procedure and the models usually applied do not differentiate between these processes and the data generated by them. We therefore investigated the effect of disregarding the distinction between these two processes on the parameter recovery and the model fit of one exemplar model. The results of a computer simulation and the reanalysis of five experiments show that the current combination of experimental design and modelling procedure can lead to extreme bias in parameter estimates and thus impaired validity of these parameters, and can also negatively affect the fit and predictive performance of the model. As a remedy, we present a latent-mixture extension of the original model which solves these issues.
Doug Dong
Dr. Ross Otto
A large body of work reveals that in decision-making from experience, our risk preferences are sensitive to both decision frames (i.e., losses vs. gains) and the decision context (i.e., other available options). However, the specific mechanisms underlying our frame-dependent risk preferences remain unclear. One influential account posits that the relative overweighting of extreme events leads to frame-dependent risk preferences, a mechanism known as the extreme-outcome rule. However, this mechanism has yet to be formalized computationally. Critically, current reinforcement-learning models, like the delta rule, rely on learning the expected outcome of options while remaining agnostic to decision frames. Recent work has begun to address this gap by incorporating learned reference points (i.e., the overall expected outcome) to which individual events are compared. Here, we extend these models by overweighting the influence of extreme events (i.e., surprising outcomes relative to the reference point) on learning. Simulating choice behavior in well-characterized decision-making from experience paradigms, we show that the context model, but not the delta rule, can capture the framing effect. Evaluating model fits on participant data, we show the context model outperforms the classic delta rule model. We further probed whether this context model could capture risk preferences in a number of other decision scenarios (i.e., gains only, losses only). Together, our results suggest that the learned reference point and the relative overweighting of extreme events can predict the frame-dependent risk preferences often seen in decisions from experience and offer a computational formalization of the extreme-outcome rule.
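A hedged sketch of the idea, with illustrative parameters rather than the paper's fitted context model: a delta-rule update whose learning is amplified for outcomes that are extreme relative to a learned reference point.

```python
# Sketch: delta rule with a learned reference point; updates are amplified for
# outcomes far from that reference (the extreme-outcome idea). Values are illustrative.
def context_update(q, ref, outcome, alpha=0.2, alpha_ref=0.1, w=0.5):
    """Return updated option value and reference point."""
    extremity = abs(outcome - ref)             # surprise relative to the reference point
    gain = min(alpha * (1.0 + w * extremity), 1.0)   # overweight extreme outcomes, capped at 1
    q_new = q + gain * (outcome - q)
    ref_new = ref + alpha_ref * (outcome - ref)      # reference tracks the overall expected outcome
    return q_new, ref_new

q, ref = 0.0, 0.0
for outcome in [1, 0, 1, 4, 0, 1]:                   # a risky option with a rare extreme payoff
    q, ref = context_update(q, ref, outcome)
print(round(q, 3), round(ref, 3))
```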
Dr. Robert Kohn
Dr. Scott Brown
Guy Hawkins
Dr. Minh-Ngoc Tran
Dr. David Gunawan
Model comparison is the cornerstone of theoretical progress in psychological research. Common practice overwhelmingly relies on tools that evaluate competing models by balancing in-sample descriptive adequacy against model flexibility, with modern approaches advocating the use of marginal likelihood for hierarchical cognitive models. Cross-validation is another popular approach, but its implementation has remained out of reach for cognitive models evaluated in a Bayesian hierarchical framework, with the major hurdle being prohibitive computational cost. To address this issue, we develop novel algorithms that make Variational Bayes (VB) inference for hierarchical models feasible and computationally efficient for complex cognitive models of substantive theoretical interest. It is well known that VB produces good estimates of the first moments of the parameters, which yields good predictive density estimates. We thus develop a novel VB algorithm with Bayesian prediction as a tool to perform model comparison by cross-validation, which we refer to as CVVB. In particular, CVVB can be used as a model screening device that quickly identifies bad models. We demonstrate the utility of CVVB by revisiting a classic question in decision making research: what latent components of processing drive the ubiquitous speed-accuracy trade-off? We demonstrate that CVVB strongly agrees with model comparison via marginal likelihood yet achieves the outcome in much less time. Our approach brings cross-validation within reach of theoretically important psychological models, and makes it feasible to compare much larger families of hierarchically specified cognitive models than has previously been possible.
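The cross-validation logic itself is simple to sketch. The stand-in model below is a plain Gaussian fitted by moments rather than a hierarchical cognitive model fitted by VB, so this only illustrates the held-out scoring scheme, not the authors' algorithm.

```python
# Sketch: K-fold cross-validation scored by held-out log predictive density.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(0.4, 1.0, size=200)            # hypothetical observations

def fit(train):                                  # stand-in for a fast approximate fit
    return train.mean(), train.std(ddof=1)

def cv_score(data, k=5):
    folds = np.array_split(rng.permutation(data), k)
    score = 0.0
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        mu, sd = fit(train)
        score += norm.logpdf(test, mu, sd).sum() # held-out predictive density
    return score

print(round(cv_score(data), 2))                  # models with poor scores can be screened out
```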
Koji Kosugi
Mazar, Shampanier, and Ariely (2017) defined a probabilistic free price promotion as a promotion in which the purchase amount becomes free with some probability through a lottery. A promotion that instead guarantees a lower purchase price was called a sure price promotion. Probabilistic free price promotions are known to have higher selection rates and sales than sure price promotions with equal expected values (Mazar et al., 2017; Lee, Morewedge, Hochman, and Ariely, 2019). Experiment 4 of Mazar et al. (2017) investigated whether participants would choose a probabilistic or a sure price promotion when the amount was controlled and the probability was varied. The study also examined how the selection rate differed among four conditions of the promotion: pen, Amazon gift certificate, monetary gain, and monetary loss. The results showed that participants tended to pursue more risk in the promotion conditions than in the monetary gain condition, even though the expected values were equal. This suggests that people have different risk tolerance depending on what they are willing to pay for. However, we conducted a similar survey in Japan and found a different trend from the previous studies, which we report here.
Shawn Betts
Dr. John Anderson
This talk is concerned with the implementation of period error correction in the Adaptive Control of Thought-Rational (ACT-R) architecture as part of a novel periodic tapping motor extension. Past sensorimotor synchronization models have often implemented error correction via joint phase and period correction mechanisms in the context of synchronization-continuation paradigms (Repp, 2005). Unlike past work, our goal was to model error correction in a self-paced tapping task with discrete feedback. To do so, we designed a new experiment named ChemLab in which players filled rows of 8 beakers by pressing the space bar periodically. In this task, feedback was provided both visually and auditorily. Specifically, taps that were too fast triggered a high-pitched sound and turned on a red light on the screen. Conversely, taps that were too slow triggered a low-pitched sound and turned on a blue light on the screen. We assessed periodic tapping in 4 non-overlapping temporal intervals between 200 and 1,200 ms. For each row of beakers, the temporal interval was set to switch once between the 3rd and the 5th beaker, such that participants either needed to speed up or slow down. In this talk, we show how period correction can be modeled in ACT-R with productions implementing perceptual processing of feedback and a basic motor error-correction mechanism. We conclude by showing that modeling error correction in periodic tapping tasks with discrete feedback requires one to capture task-specific elements of feedback in addition to more general motor mechanisms.
Mr. Daiki Hojo
Mr. Jiro Sakamoto
Dr. Kota Takaoka
In survey design, various options in constructing the survey screen may influence response behavior. When survey designers use slider scales, one of these options is whether or not to present anchors. Adding numerical feedback to slider scales is said to lead to response heaping, in which ratings are concentrated on round numbers such as 5 or 10. One explanation of response heaping, by Furukawa et al. (2021), considered the possibility of satisficing via response granularity; they attempted to examine individual differences in response granularity by modeling with mixture models. This study aimed to examine individual differences in the impact of anchor presentation on response heaping by modeling data collected on a 0-100 slider scale with five-increment anchors presented. We used the same mixture models as the previous study, which assume that respondents do not necessarily rate subjective quantities at a response granularity of 0-100, but rather at coarser levels of response granularity, such as 11 increments (rating in multiples of 10) or five increments (rating in multiples of 25). As a result, we could quantitatively evaluate individual differences in response granularity, as in the previous study. We also found that more respondents were likely to rate in five increments than in the previous study. The results suggest that presenting five-increment anchors may have affected individuals' response granularity for the subjective quantity, thereby leading to differences in response heaping behavior.
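To illustrate the granularity idea (this is not the authors' Stan model), the sketch below generates 0-100 responses from a mixture of respondents answering at 101-, 11-, or 5-point granularity; the mixture weights are hypothetical.

```python
# Sketch: response heaping as a mixture of response granularities on a 0-100 scale.
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)

def respond(latent, granularity):
    """Round a latent 0-100 judgment to the respondent's granularity (step size)."""
    step = {101: 1, 11: 10, 5: 25}[granularity]    # 101-, 11-, or 5-point granularity
    return int(round(latent / step) * step)

mix = {101: 0.3, 11: 0.5, 5: 0.2}                   # hypothetical mixture weights
latents = rng.uniform(0, 100, 1000)
grains = rng.choice(list(mix), p=list(mix.values()), size=1000)
responses = [respond(l, g) for l, g in zip(latents, grains)]
print(Counter(responses).most_common(6))            # heaping at multiples of 10 and 25
```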
Prof. Joe Houpt
Imperfect automation aids can lead to many negative consequences. To help mitigate those consequences, researchers have suggested that users be more vigilant and, in particular, use multiple sources of information when making a decision with an automated aid. Prior research has suggested that people may still rely solely on the aid even when provided with other sources of information, but this research has tended to rely on strong assumptions and may have confounds. To test with more robust methods whether participants use all or only one source of information when provided with an automated aid, we examined automation usage with the Survivor Interaction Contrast from the Systems Factorial Technology framework. Additionally, we tested whether performance incentives and early experience with automation failures during training encourage more exhaustive processing. Participants completed a speeded length-judgment task in which they were provided with a reliable but imperfect aid to assist them in their decision. We found that across all conditions, participants used a serial, first-terminating process, supporting the view that participants use only one source of information. However, results from a logistic regression suggest that participants are likely using both the automated aid and the signal across all trials instead of relying solely on one. Implications of this research highlight a different strategy in which participants may be alternating which source of information they use, which may be beneficial when using an imperfect aid in speeded decisions. This research can inform interface designs that support effective strategies for making speeded decisions with an automated aid.
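For reference, a minimal sketch of how the Survivor Interaction Contrast is computed from factorial response-time conditions; the exponential RT distributions are purely illustrative. Serial first-terminating processing predicts SIC(t) = 0 at all t.

```python
# Sketch: Survivor Interaction Contrast (SIC) from simulated factorial RT data.
import numpy as np

rng = np.random.default_rng(2)

def survivor(rts, t):
    """Empirical survivor function S(t) = P(RT > t), evaluated at a grid of times."""
    return (rts[:, None] > t[None, :]).mean(axis=0)

# Factorial low/high salience manipulations of the two information sources.
rates = {('L', 'L'): 2.0, ('L', 'H'): 3.0, ('H', 'L'): 3.0, ('H', 'H'): 4.0}
rts = {cond: rng.exponential(1.0 / r, size=5000) for cond, r in rates.items()}

t = np.linspace(0.01, 2.0, 200)
sic = (survivor(rts[('L', 'L')], t) - survivor(rts[('L', 'H')], t)) \
    - (survivor(rts[('H', 'L')], t) - survivor(rts[('H', 'H')], t))
print(np.round(sic[:5], 3))
```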
Ms. Berenice López Ventura
Dr. Reyna Xoxocotzi Aguilar
Dr. Alfonso Díaz Cárdenas
The psychological processes of depression, stress, and anxiety have traditionally been measured by indicators and analyzed by dimensional reduction methods (e.g., exploratory factor analysis). Due to some limitations of the results obtained by the classical methods, we considered a Network Analysis approach. In this setup, the symptoms form a complex dynamical system with interactions among them: symptoms can mediate, moderate, increase, or decrease other symptoms. In this study, we built symptom networks to analyze the interactions of the factors of the depression, anxiety, and stress processes in a sample of university students. We used Network Analysis in JASP to estimate the network structure of DASS-21 (Depression Anxiety Stress Scale) symptoms evaluated in 174 university students from the Benemérita Universidad Autónoma de Puebla, Mexico. We estimated the networks with the Gaussian Graphical Model to discriminate edges and selected the model with the lowest EBIC. We measured centrality indices (strength, closeness, and betweenness) as well as clustering. We present the results for students of different areas of knowledge and the corresponding gender networks. Based on the results, appropriate intervention programs could be constructed for the particular symptoms shown in the different groups of participants.
Dr. Kazunori Tobisawa
Ms. Yui Furukawa
Dr. Kota Takaoka
Child sexual abuse (CSA) often lasts for more than a few years. Various kinds of clinical symptoms appear in CSA victims, depending on the persistent damage. Trauma responses such as problematic sexual behavior are a highly specific feature of CSA victims. Nevertheless, it remains unclear how the developmental status of a child relates to the trauma response resulting from CSA. The aim of this study was to (ⅰ) describe accumulation of, and accommodation to, the CSA effect in relation to age and the duration of victimization, (ⅱ) estimate the developmental transition of inhibition function, and (ⅲ) predict the trauma response via a computational conflict model regarding CSA effect and inhibition function. The data were collected in a national survey in Japan (December 2020). Four hundred ninety-two CSA cases met the inclusion criteria. The proposed model was implemented in Stan. All chains were well mixed and converged. The results indicated that (ⅰ) the impact of CSA on trauma response was cumulative over the duration of victimization, (ⅱ) the magnitude of the cumulative added impact was inversely proportional to the duration of victimization, (ⅲ) developmental transitions of inhibition function varied with the trauma responses, and (ⅳ) some types of trauma response might be observed only at a particular age and only for a specific duration of victimization. The proposed conflict model regarding clinical outcomes will be widely applicable and give us interpretable predictions.
Dr. Marc Jekel
Prof. Andreas Glöckner
The integrated coherence-based decisions and search model (iCodes) predicts that participants show a tendency to search for information on the option currently supported by the already available evidence, a prediction coined the attraction search effect. While this search tendency has been shown to be robust, the data also show considerable interindividual variability in the attraction search effect. One explanation could be that the relative strength of the attractiveness influence on search varies between situations and participants. Within iCodes, the relative influence of option attractiveness on the information-search process is represented by the γ parameter. In this project, we experimentally manipulated, between subjects, participants’ awareness of differences in the attractiveness of the choice options by asking the experimental group to rate option attractiveness before search. Indeed, rating options’ attractiveness increased the tendency to search for information on the more attractive option compared with not rating options’ attractiveness. The effect of these ratings was further reflected in individually fitted γ parameters: parameter values of participants who rated option attractiveness showed that their search was influenced more strongly by attractiveness than that of participants in the control group. The results of this project corroborate the role of the γ parameter and show that iCodes is able to capture the effect of a theoretically motivated manipulation of information-search behavior. Thus, this project further validates the assumed information-search process and emphasizes the role of already available evidence in information search, while also taking systematic differences in the size of the effect into account.
Tim Pleskac
When people are asked to estimate the probability that an event will occur, they can make different subjective probability (SP) judgments for different descriptions of the same event. This implies that the evidence or support recruited to make SPs is based on the descriptions or hypotheses rather than the events. To capture this violation of description invariance, descriptive theories like support theory often make a different invariance assumption: the support assigned to a hypothesis is invariant to the hypotheses it is being considered with. Here we examined the support invariance assumption across two studies in which participants were asked to estimate, on a verbal or a numeric scale, the probability that a target bicyclist would win a race. The first study shows that the presence of a distractor (a bicyclist that is objectively dominated by the target) boosts the SP assigned to the target hypothesis on a verbal scale compared to when no distractor is present. The second study shows that the presence of a resembler (a bicyclist that is objectively similar to the target) differentially detracts from the SP assigned to the target regardless of the type of scale. These context effects invalidate the regularity and strong independence assumptions of support theory. This invalidation suggests that the support people recruit for the target hypothesis also depends on the other hypotheses (bicyclists) under consideration.
Guy Hawkins
I have previously applied evidence accumulation models to discriminate among the decision strategies used by participants making multi-attribute choices about products. One limitation of this work is that it has so far been applied only to choices with two attributes. A natural extension of this work is to move towards a higher number of attributes or options; however, model complexity increases exponentially with attributes × options when assessing strategies. I will present an approach, currently being undertaken, that asks participants to assess pairs of options (phones) differing across five attributes. The participants are asked to make two different judgements about each pair of phones: a preference judgement and a similarity judgement. The preference component of the experiment simply asks participants which phone of each pair they would choose. The similarity judgements are over the same set of phone pairs, and participants rate each pair on a 7-point scale from low to high similarity. An initial analysis using multi-dimensional scaling on the similarity data (both average similarity and individual ratings) shows the phones are well represented by two dimensions. The plan is to take each individual's multi-dimensional scaling solution and use it as input to a cognitive model of the preferences. This model will be contrasted with approaches where option utilities are derived from multi-attribute utility theory, to see which better explains preferences.
Hyesue Jang
Dr. Richard Lewis
We investigate the possibility that adult age differences in a choice learning task can be explained by adaptations to age differences in the limits ("bounds") of different components of learning and memory. Learning which choice option is most likely to lead to reward involves both conscious, effortful working memory (WM) and automatic, implicit reinforcement learning (RL) processes (Collins 2018; Collins & Frank, 2018). WM and RL have complementary strengths and weaknesses (WM: fast/accurate but capacity-limited/delay-sensitive; RL: robust but slow). Optimal performance depends on finding the right balance between these systems, based on their relative effectiveness. WM declines more than RL with age, and thus the theoretical concept of bounded optimality (Lewis et al., 2014) predicts that older adults will rely more on RL than WM during the choice-learning task than will young adults. We will explore how a modified version of an existing computational model (Collins & Frank, 2018) might explain individual differences in the performance of young and older adults by deriving the optimal balance between these systems depending on their limitations.
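A hedged sketch of the kind of mixture policy involved, in the spirit of Collins & Frank (2018) but not the authors' modified model; all parameter values are illustrative.

```python
# Sketch: choice policy mixing a capacity-limited working-memory (WM) policy with a
# slower reinforcement-learning (RL) policy, weighted by WM reliability under load.
import numpy as np

def mixed_policy(q_rl, q_wm, set_size, capacity=3.0, rho=0.9, beta=5.0):
    """Weight WM by its reliability given the set size, then softmax the mixture."""
    w = rho * min(1.0, capacity / set_size)       # WM weight shrinks as load exceeds capacity
    q_mix = w * q_wm + (1.0 - w) * q_rl
    expq = np.exp(beta * (q_mix - q_mix.max()))
    return expq / expq.sum()

# Example: two response options under a set size that exceeds WM capacity.
print(mixed_policy(np.array([0.4, 0.6]), np.array([0.0, 1.0]), set_size=6))
```

Lowering the WM weight (e.g., with age-related declines) shifts choices toward the RL values, which is the balance the bounded-optimality analysis is meant to derive.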
Dr. Henrik Singmann
Prof. Arndt Bröder
People's explicit probability judgements often appear to be probabilistically incoherent. The most prominent example of this is the conjunction fallacy (Tversky & Kahneman, 1983). Recently, a growing body of research argues that biases in probability judgements can arise from rational reasoning processes based on mental samples from coherent probability distributions. However, these sample-based normative accounts of probability judgements have mainly been investigated in probability estimation tasks. In the current study, a ranking task is used to study people's explicit probability judgements and, more importantly, to test the sample-based normative accounts of probability judgements. In the ranking task, participants are asked to rank four events, A, not-A, B, and not-B, according to their perceived likelihoods of occurrence. Results show a novel probabilistic reasoning bias: participants often provide logically impossible rankings, violating the complement rule and the transitive rule. Interestingly, one existing sample-based normative account, namely the Probability Theory plus Noise (PT+N) account (Costello & Watts, 2014), can potentially explain the logical inconsistencies in rankings of events. We formally derive the predictions for rankings from the PT+N account. Our predictions suggest that specific qualitative patterns should appear in people's responses if the logically impossible rankings are solely the products of internal sampling processes rather than inconsistent inherent beliefs.
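As an illustration of how sample-based noise can produce logically impossible rankings (a sketch of the PT+N read-out with hypothetical event probabilities, not the formal derivations reported here), each judged probability below is the mean of n mental samples, each misread with probability d.

```python
# Sketch: PT+N-style noisy estimates for A, not-A, B, not-B, and the rate of
# complement-rule violations in the resulting rankings.
import numpy as np

rng = np.random.default_rng(3)

def ptn_estimate(p, d=0.1, n=20):
    """Sample-based estimate: each of n samples is read as the event with prob (1-2d)p + d."""
    reads = rng.random(n) < (p * (1 - d) + (1 - p) * d)
    return reads.mean()

def one_ranking(pA=0.7, pB=0.4):
    est = {'A': ptn_estimate(pA), 'notA': ptn_estimate(1 - pA),
           'B': ptn_estimate(pB), 'notB': ptn_estimate(1 - pB)}
    return sorted(est, key=est.get, reverse=True)   # rank events by judged probability

# Ranking A above B while also ranking not-A above not-B is impossible for coherent beliefs.
violations = sum(
    r.index('A') < r.index('B') and r.index('notA') < r.index('notB')
    for r in (one_ranking() for _ in range(2000))
)
print(violations / 2000)
```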
Bennett L. Schwartz
Fabian Soto
Tip-of-the-tongue states (TOT) and feeling-of-knowing judgments (FOK) are metacognitive experiences about the possibility of future retrieval of information when recall fails. Many studies show that experiencing a TOT or a high FOK increases the possibility of correct retrieval of the missing information, which demonstrates metacognitive sensitivity (see Schwartz & Pournaghdali, 2021). However, evidence for the metacognitive sensitivity of TOT and FOK mainly derives from measures that conflate metacognitive sensitivity with metacognitive bias. In the current study, we used general recognition theory (GRT) to provide bias-free assessments of metacognitive sensitivity for TOT and FOK. We asked participants to answer general-knowledge questions. If recall failed, participants provided metacognitive judgments of TOT and FOK, memory recognition responses, and metacognitive judgments of confidence in those recognition responses. After collecting the behavioral data, we fit two different GRT models to the data to assess the metacognitive sensitivity of TOT and FOK. Using the estimated parameters of the models, we constructed two sensitivity vs. metacognition (SvM) curves, which represent sensitivity in the recognition task as a function of the strength of metacognitive experiences: an SvM curve for TOT and an SvM curve for FOK. According to both SvM analyses, the highest level of recognition sensitivity was accompanied by the highest strength of metacognitive experiences, and as the magnitude of metacognitive experiences dropped, so did recognition sensitivity. However, recognition sensitivity remained above chance when people did not experience a TOT or FOK. These results are the first bias-free indication of the metacognitive sensitivity of TOT and FOK judgments.
Robin D. Thomas
Lauren Davidson
Dr. Allan Collins
Elizabeth Pettit
We use hierarchical estimation of a drift diffusion model (HDDM) in conjunction with neural data (EEG) and individual differences to understand and compare perceptual and value-based choice. For perceptual decisions, participants selected the more horizontally oriented grating of a pair, with orientations across pairs designed to produce easy vs. difficult trials. For value-based choice, participants selected their preference between pairs of gambles with two equiprobable outcomes. Gamble pairs had equal expected values but different outcome ranges (risk), and we varied the difference between their ranges to produce similar vs. different levels of risk. We collected EEG data throughout both tasks and calculated a variety of time-based (N200, CPP) and frequency-based (parietal theta, gamma) measures to serve as continuous regressors in determining the HDDM parameters. Finally, participants self-reported individual difference variables on decision-making styles, impulsivity, and personality. We present results that show the effects of task type, stimulus condition, and EEG signals on model parameters, such as lower drift rates for more difficult perceptual tasks and more similar risk levels. We also provide correlations between individually estimated model parameters and relevant individual difference measures, such as lower thresholds for more intuitive decision makers. In total, we deploy a unique collection of behavioral tasks, physiological data, psychometric variables, and computational modeling to better understand decision processes.
Tim Pleskac
COVID-19 immersed us in a sea of uncertainties, several social: Will people wear masks? Are they wearing them now? Will people vaccinate? We were curious how well the wisdom of the crowd could reduce these uncertainties. Across two studies, we surveyed 1,869 students at the University of Kansas on their likelihood of engaging in health-protective behavior, how likely they assumed others were to engage in that behavior, and their confidence in those estimates. We also asked them to predict how other students would respond and collected numeracy, discounting, and risk-taking propensity measures. We compared predictions from multiple wisdom of the crowd aggregation methods, including simple averaging, weighted averaging, and the surprisingly popular algorithm, which makes use of differences between self- and other-related beliefs. We found that weighting by confidence produced predictions that most closely approximated actual observed data for mask-wearing. However, surprisingly popular predictions also proved accurate. We will discuss the implications of these findings, particularly in the context of identifying the environments when different wisdom of the crowd algorithms will work better or worse, and the challenges in using wisdom of the crowd algorithms to predict human behavior.
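A toy sketch of the three aggregation rules compared, shown for a single binary question; the response vectors below are made up for illustration only.

```python
# Sketch: simple averaging, confidence-weighted averaging, and the surprisingly
# popular rule for one binary item ("will you wear a mask?", yes = 1).
import numpy as np

own = np.array([1, 1, 0, 1, 1, 0, 1, 1])                  # each respondent's own answer
conf = np.array([0.9, 0.6, 0.7, 0.8, 0.5, 0.9, 0.6, 0.7])  # confidence in that answer
pred_others = np.array([0.6, 0.7, 0.8, 0.5, 0.6, 0.9, 0.4, 0.6])  # predicted share of "yes" among others

simple_avg = own.mean()                                   # simple averaging
weighted_avg = (conf * own).sum() / conf.sum()            # confidence-weighted averaging

# Surprisingly popular: endorse "yes" if the actual share of "yes" answers exceeds
# the crowd's average prediction of that share.
surprisingly_popular_yes = own.mean() > pred_others.mean()

print(simple_avg, round(weighted_avg, 3), surprisingly_popular_yes)
```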
Dr. Scott Brown
Prof. Juanita Todd
Our ability to focus on a task whilst remaining sensitive to unexpected changes in the environment is vital to goal-directed behaviour. The distraction task has been widely used in the cognitive neurosciences, especially in people with schizophrenia, to study performance impairments when the environment changes. In the distraction paradigm, participants perform an active task requiring simple responses while task-irrelevant changes occur occasionally. In the current study, the distraction paradigm featured a simple auditory tone-duration judgment task with occasional (irrelevant) changes in tone frequency. In the original study (Schroger & Wolff, 1998), these 'deviant' trials were associated with a distraction effect (slower and more error-prone responding). Simultaneous EEG recording of event-related responses to the sequence of tones has linked the distraction effect to key response components known as the mismatch negativity (MMN), occurring ~150 ms after deviance onset, and the subsequent P300, peaking around 250-350 ms. In the present study, we compared several evidence accumulation models of behavioural response times in the distraction paradigm. These linear ballistic accumulator (LBA) models allowed thresholds and drift rates to vary across a variety of condition combinations. Following this, we incorporated EEG recordings to inform the drift rate parameter in a directed joint-model approach. As expected, the free model provided the best descriptive adequacy of the data; however, the directed model did capture variance in the data. This is promising, as the directed model allows EEG measurements to inform the model by linking latent variables to observable phenomena.
Florence Bockting
The illusory truth effect refers to the phenomenon that participants tend to judge repeated statements as more true than new statements. The effect of repetition on truth judgments is measured as the difference in mean truth ratings between repeated and new statements (the truth effect, TE). An aspect that has received little attention concerns the use of natural-language statements as stimuli. Given that these statements evoke different individual mental representations, the question arises to what extent the TE indeed measures an effect of repetition rather than a difference in prior plausibility between statements. We argue that the appropriateness of the TE depends on the research focus: the group or the individual level. While it is a valid measure of the effect of repetition at the group level when a counterbalanced design is used, it is potentially biased at the individual level. We use a mixed-model approach to formalize our theoretical argument and discuss the implications for the group as well as the individual level. We further support the relevance of these theoretical implications by simulating individual truth effects using extant-data simulations, in which empirical data serve as the basis for realistic simulations of variation in the population. Finally, we discuss consequences for research on individual differences in the illusory truth effect.
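One generic crossed-random-effects specification of the kind such a mixed-model argument rests on (an illustrative sketch, not necessarily the authors' exact parameterization) is

y_{ij} = \beta_0 + \beta_1\,\mathrm{repeated}_{ij} + u_{0i} + u_{1i}\,\mathrm{repeated}_{ij} + w_j + \varepsilon_{ij},

where u_{0i} and u_{1i} are by-participant random intercepts and slopes (the individual truth effects), w_j is a by-statement random intercept capturing prior plausibility, and \varepsilon_{ij} is residual noise; omitting w_j folds statement-level plausibility into the estimated individual effects.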
Dr. Aaron Voelker
Terry Stewart
Chris Eliasmith
Hedderik van Rijn
Keeping track of time is essential for everyday behavior. Theoretical models have proposed a wide variety of neural processes that could tell time, but it is unclear which ones the brain actually uses. Low-level neural models are specific, but rarely explicate how cognitive processes, such as attention and memory, modulate prospective and retrospective timing. Here we develop a neurocomputational model of prospective and retrospective timing, using a spiking recurrent neural network. The model captures behavior of individual spiking neurons and population dynamics when producing and perceiving time intervals, thus bridging low- and high-level phenomena. When interrupting events are introduced, the model delays responding in a similar way to pigeons and rats. Crucially, the model also explains why attending incoming stimuli decreases prospective estimates and increases retrospective estimates of time. In sum, our model offers a neurocomputational account of prospective and retrospective timing, from low-level neural dynamics to high-level cognition.
Dr. Gabriel Wallin
Bayesian item response theory modeling is a complex issue, as it requires the estimation of many parameters (at least one parameter per respondent and one per item). The problem is especially intricate when Bayesian nonparametric item response theory models (BNIRMs) are used, as the number of parameters scales very quickly. Also, to guarantee the identifiability of the model, restrictions on the distribution of the true scores or on the item response function (IRF) are used. The aim of the present study is to develop BNIRMs derived from optimal scoring, a new nonparametric psychometric approach that, similar to Mokken Scale Analysis, uses sum scores as initial guesses for estimating the IRFs. We propose four approaches for estimating the IRFs: the first two use basis expansions (Legendre polynomials and B-splines); the third uses a single-hidden-layer neural network; and the last is a newly proposed form of piecewise regression developed in the present study, which we call the Rademacher basis. The priors for the regression coefficients of the bases follow a normal distribution with mean 0 and standard deviation equal to 1, for L2 regularization. For the priors of the latent true scores, we propose what we call a Kolmogorov-Smirnov prior, which uses the empirical cumulative distribution of the sum scores as an initial estimate of the distribution function. We provide maximum a posteriori estimation with a genetic algorithm, as well as MCMC estimation with a Hit-and-Run algorithm. Comparisons of performance and future studies are discussed.
Prof. Jun Zhang
Bayesian inference has been used in the past to model visual perception (Kersten et al., 2004), accounting for the Helmholtzian view of perception as "unconscious inference." In this paper, we adapt the Bayesian framework to model emotion in accordance with Schachter and Singer's two-factor theory, which argued that emotion is the outcome of cognitive labeling or attribution of a diffuse pattern of autonomic arousal (Schachter & Singer, 1962). In analogy to visual perception, we conceptualize the emotion process, in which emotional labels are constructed, as an instance of unconscious Bayesian inference combining contextual information with a person's physiological arousal pattern. We develop a drift-diffusion model to simulate Schachter and Singer's experimental findings. There, participants who were physiologically aroused (via drug injection, but not informed of the arousal) later reported different emotions (i.e., labeled their arousal pattern differently) based on the nature of their interaction with an experimental confederate they encountered post-injection. In our drift-diffusion modeling, the decision boundaries correspond to the euphoric and angry states experienced by the participants in the experiment, and boundary-crossing constitutes "labeling" in Schachter and Singer's sense. Response time (RT) in the drift-diffusion model is used as a surrogate measure of the self-rated intensity of the emotional state, with high intensity corresponding to a shorter response time. We propose two model scenarios (versions). In the first version, the arousal pattern is used as the prior, and the likelihood function for evidence accumulation models the interaction with the confederate (context). We adopt an unbiased prior, while allowing the drift rate (and its sign) to capture the nature of the interaction with the confederate. In the second version, we use the context as the prior and the physiological arousal pattern as the likelihood function. We expect an initial bias depending on the polarity of the interactive experience with the confederate, but the drift rate has zero mean (a diffuse but polarity-neutral arousal pattern). A comparison between the simulations of the two versions of the Bayesian drift-diffusion model and the original Schachter and Singer (1962) experimental data will be reported.
Dr. Alfonso Díaz Furlong
Research regarding the learning processes of mathematics has focused primarily on pedagogical, didactic, and teaching-practice aspects. On the other hand, researchers have been working on understanding the cognitive processes related to the acquisition of mathematical concepts and methods. The convergence of different areas of knowledge can be especially useful to achieve this objective, tackling it from a multidisciplinary point of view. Cognitive modeling, mathematical psychology, and the neurosciences are necessary approaches to study, research, and predict the phenomena related to mathematical learning and reasoning. From the study of categorization processes, memory, and multitasking, it is possible to glimpse the dynamics involved in the development of mathematical thinking. In this research proposal, we are interested in studying brain activity patterns through the use of an EEG device (CYTON Biosensing board, 8 channels / Emotiv EPOC+, 14 channels), in order to later generate and implement a cognitive model that allows us to understand the process of developing mathematical skills and reasoning, specifically for solving geometry problems, in students in secondary education, high school, and the early college years; this follows the inspiration of past ACT-R work on algebra problem solving. In this fast talk, we present the theoretical and methodological aspects of the research proposal and further applications.
Dr. Yoshihiko Kunisato
Insomnia is a risk factor for various mental and physical diseases. Understanding the information processing that is unique to this disorder will help in its treatment. This study explores whether the severity of insomnia relates to a characteristic learning process that can be distinguished from other symptoms. For this purpose, we used a decision-making task that can dissociate the influence of positive from negative outcomes on choice behavior by estimating dual learning rates. We recruited general participants using a crowdsourcing service. They performed the task online and completed self-report measures of insomnia, anxiety, and depression. The data gathered from 391 participants were analyzed. First, we found a strong correlation between the self-report measures, as predicted. Next, to explore learning processes uniquely associated with insomnia, we applied a reinforcement learning model to the data from the decision-making task and estimated the model parameters. A higher learning rate for positive than for negative outcomes was observed in the sample as a whole and can be used as an index of biased information processing in the learning process. Analyses using linear models revealed that this index is higher in those with higher insomnia scores, which implies that insomnia is related to attention to positive outcomes. Interestingly, higher anxiety scores were associated with this index in the opposite direction. Possible explanations for these results may be differences in cognitive resources and attention biases. We also report other findings on the association between learning processes and mental health.
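A minimal sketch of the dual-learning-rate update used in such models; the parameter values are illustrative, not estimates from this study.

```python
# Sketch: Q-value update with separate learning rates for positive and negative
# prediction errors; the gap alpha_pos - alpha_neg indexes the positivity bias.
def dual_rate_update(q, outcome, alpha_pos=0.4, alpha_neg=0.2):
    """Update an option value with asymmetric learning rates."""
    delta = outcome - q                       # prediction error
    alpha = alpha_pos if delta > 0 else alpha_neg
    return q + alpha * delta

q = 0.5
for outcome in [1, 0, 1, 1, 0]:
    q = dual_rate_update(q, outcome)
print(round(q, 3))
```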
Jeremy B. Caplan
Despite many examples of order-sensitive paired associates (e.g., FISH HOOK), the study of association memory (e.g., AB, CD) has been theoretically isolated from the study of order memory (e.g., ABCD). As a result, formal models of association memory are poor at accounting for within-pair order (AB vs. BA), and predict that order judgments of a retrieved pair should be either at chance or perfect. Behaviour contradicts both predictions: when the pair can be recalled, order judgment is above chance but well below perfect. We tested four separate order-encoding mechanisms that could be added to existing convolution-based models, which otherwise predict chance-level order judgment, in which pair order is encoded as: 1) positional item features, 2) position-specific permutations of item features, 3) position-item associations, and 4) position vectors added to items. All models achieved close fits to aggregate order recognition data without compromising associative symmetry. Although published models are unable to capture the relationship between memory for associations and memory for their constituent order, multiple promising enhancements to convolution models are feasible.
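For concreteness, a small sketch of a convolution-based association augmented with one of the four mechanisms listed (adding position vectors to items); the vectors are random and purely illustrative.

```python
# Sketch: circular-convolution binding of a pair, with position vectors added to the
# items before binding, and approximate unbinding via circular correlation.
import numpy as np

rng = np.random.default_rng(4)
n = 256

def cconv(a, b):
    """Circular convolution (binding)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation (approximate unbinding)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def item():
    return rng.normal(0, 1 / np.sqrt(n), n)

A, B, first, second = item(), item(), item(), item()
A1 = (A + first) / np.sqrt(2)              # item A tagged with the "first" position vector
B2 = (B + second) / np.sqrt(2)             # item B tagged with the "second" position vector
trace = cconv(A1, B2)                      # stored association

retrieved = ccorr(A1, trace)               # cue with A-in-first-position
print(round(retrieved @ B2, 3), round(retrieved @ A1, 3))  # match to B2 should dominate
```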
Among the many remarkable things the mind does, neuroplasticity stands in a league of its own. Central to this quality is the ability to render and infer different cognitive models for different tasks. Recent developments in machine learning have been fairly successful at optimizing for a single task (supervised learning with backpropagation). This, however, is not enough for general intelligence, where the agent is required to form abstractions (Chollet, On the Measure of Intelligence). Common to all the tasks is the fact that we can mathematically and geometrically model each one in a state space S[ɸ] with its state-variable set ɸ. A neural network (NN[task]) is a universal function approximator and can be thought of as mapping a set of state variables along a manifold (M[task]); i.e., given {(X1,Y1),…,(Xn,Yn)}, the NN builds f : X to Y, learned via gradient descent. This approach introduces a new neural network (NN[meta]) that is trained to translate along all M[task] in the state space S[ɸ], learning a new meta-manifold (M[meta]) for traversing between tasks, revealing common parameters and eventually the latent model (l : task_m{x,y} -> task_n{x,y}), where x and y take on different meanings depending on the task (context). Eventually, we are left only with the state variables that optimize either for the tasks or for translation between the tasks. This way the agent performs tasks through learning and switches context through model translation. The geometric interpretation of such a model is an intuitive playground for meta-learners.
Consensus is critical for problems ranging from policy decision-making to expert elicitation, yet research is lacking on methods for helping small groups come to consensus. We take advantage of a proof by Roberts (1980) that the level sets of cardinal fully comparable social welfare functions are cones with vertices at the equal utility point, where the angle of the cone can change depending on the region of the space of utility orders. We propose an approach that leverages an assumption about the relationship between the social welfare function across the n! regions. Specifically, we assume that the social welfare function's local behavior will be similar if the ordering of the utilities is similar across two regions of the order space. We compare the approach against alternative non-parametric and parametric approaches.
Dr. Scott Brown
Guy Hawkins
How extreme can we make the speed-accuracy trade-off and still see adequate performance? At what point does a participant just start guessing? Four hundred participants were assigned to one of eleven speed-accuracy emphasis groups. Each group experienced a different average deadline time throughout the entire experiment, ranging from 200 ms to 2500 ms. One group was used as an approximate control, where every trial had a six-second deadline. Speed-accuracy emphasis was manipulated using implicit deadlines rather than explicit instructions. Response time and accuracy (of attempted trials) increased as the deadline increased, and showed an interaction with trial coherence. The resulting figure looks pretty cool. Miss rate (of all trials) decreased as the deadline increased, reaching nearly 0% for the control group.
Necdet Gurkan
Joshua Peterson
Synthetic portraits are used as a privacy-preserving measure to train machine-learning models, anonymize faces through face replacement, and generate pseudonymous avatars. Here, we argue that while synthetic portraits may protect the privacy of some individuals, they do not protect the privacy of every individual with privacy interests in the images because of the statistical structure of human appearances. In particular, we demonstrate that the collection of actual appearances is so densely arranged in face space that every synthetic portrait will necessarily capture the likeness of at least one actual current, past, or future person.
Jenna Lester
Cara Kneeland
Prof. Joe Houpt
Mario Fific
Most decisions people make depend on multiple sources of information, and a number of models have been posited to explain how people combine those sources as part of their decision-making process. These models include those based on heuristics, such as the “take-the-best” heuristic, and others based on probabilistic inference, such as naïve Bayesian inference. Unfortunately, choice probabilities are often not sufficient to distinguish among these models. In the current work, we describe how Systems Factorial Technology (SFT) can be applied to discriminate among candidate decision-making models under different learning environments that encourage making inferences using either a subset of cues or all cues. Systems Factorial Technology is a framework of nonparametric measures for characterizing information processing from multiple sources of information using response times. In our task, participants made probabilistic inferences comparing two bugs on their poisonousness, based on the bugs’ physical characteristics. We present results from two conditions: (a) the strategy-imposed condition, in which participants are instructed to use specific heuristics, which served to validate the SFT methodology in detecting the underlying decision-making strategies; and (b) the open-strategy condition, in which participants formed their own decision strategy. Overall, the results highlight the importance of applying SFT to diagnose the underlying properties of decision making, which can be used as a model validation tool.
Juanita Guadalajara
Sabrina Esparza
Prof. Joe Houpt
The perception of human facial features closely relates to social categorization processes. In particular, the combination of certain facial features has been found to shape observers' perception of friendliness and hostility, a crucial social task. Additional decisions based on facial feature categorization, such as identifying race, gender, and age, also have important social implications. Townsend et al. (2000) and, more recently, Wang and Busemeyer (2016) demonstrated that when making decisions about group membership and hostility from facial information, many decision-makers exhibited non-contextuality. In those experiments, the non-informative features of the face were fixed. However, extensive face research has indicated that facial features are usually not perceived independently. The goal of our research was to investigate whether varying non-informative facial features would influence the compatibility of a group membership and an individual hostility decision. Our study utilized faces of different skin tones and textures, genders, and ages, but followed the previous studies in basing the participants' task solely on face width and lip thickness. The additional variation did not lead to different patterns of contextuality, despite the fact that it likely influenced the perception of the features. In future research, we plan to explore this decision process through the lens of systems factorial technology to examine how the process of combining information is influenced by these factors.
Jay Wimsatt
Charles Doan
Hick's law aims to predict the time that people take to make a decision when presented with a set of possible choices: roughly speaking, according to the law, decision time is a logarithmic function of the number of choices. However, the evidence suggests that the law is not as effective when used to predict reaction times involving structured sets of alternatives (Vigo, 2014; Vigo & Doan, 2014). In this talk, we give theoretical and empirical justification for a more general and robust law -- derived by Vigo (2014) from the law of invariance for human conceptual behavior -- that can more precisely account for decision reaction times on structured sets. Furthermore, we argue that Hick’s Law is a special case of this more general law of choice reaction times.
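For reference, the classic Hick-Hyman form of the law referred to above can be written as

RT = a + b\,\log_2(n + 1),

where n is the number of equally likely alternatives and a and b are empirically estimated intercept and slope parameters; the more general law derived by Vigo (2014) for structured sets of alternatives is not reproduced here.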
Dr. Kenneth J. Kurtz
In the traditional artificial classification learning paradigm, each training item is typically a single object composed of values along particular object features (e.g., shape, size, shading, length of tail, etc). We investigate an alternative framework for inductive category learning in which stimuli consist of pairs of items and the diagnostic basis for classification is conjoint features: properties of the stimulus that arise from a relative evaluation of the traditional dimension values of the items in the pair. For example, if a pair consisted of a small white circle and a large black circle, the identity match between the items on the shape dimension would be a conjoint feature that might predict the category label. Under what conditions can people learn categories based on such features? Further, to what extent does this ability reflect common or distinct machinery relative to traditional inductive category learning? In a series of experiments, we trained subjects to categorize stimuli consisting of two fish that each varied along one traditional dimension: length of body. Fish pairs of similar length belonged to one category while fish pairs of different lengths belonged to the other. We found that subjects appeared to successfully leverage the conjoint feature based on the relative comparison of alignable stimulus feature values (body length). Further, we tested generalization performance for novel items (previously unseen pairs) and found evidence of both graded and non-graded generalization gradients depending on the category structure that was observed during training. We propose a modeling approach to account for these results in terms of neural networks that incorporate a design principle of simple preprocessing layers to recode the input in terms of pairwise hypotheses such as ‘same-value.’