Poster session
Daniel R. Little
Margaret Webb
Dr. Ami Eidels
Prof. Cheng-Ta Yang
We employ Systems Factorial Technology (Townsend & Nozawa, 1995) to investigate how people combine dual cues in semantic memory search. Our aims are to understand (1) how cues interact during semantic search in convergent thinking, and (2) whether workload capacity (i.e., cue-processing efficiency) is related to the final search result. In two experiments, participants completed a typical convergent thinking test and a word production task. The results reveal that (1) the collective evidence supports a parallel model despite individual differences in workload capacity, and (2) there is a negative correlation between workload capacity and performance on the convergent thinking test. A potential explanation is that, for the creative individual, loading many candidate answers consumes substantial processing resources, which shows up as low workload capacity, but also allows creative individuals to switch more easily from one candidate to another, giving them a higher probability of successfully producing an answer within a limited time. Our results further imply that workload capacity is a significant factor in the semantic search process in convergent thinking and provide new insight into models of semantic search and creativity.
Koji Kosugi
Frequent-shopper programs are a sales promotion strategy used by retailers worldwide. Awarding points is similar to a price discount in the sense that both return a portion of the amount spent. However, awarding points has disadvantages, such as restrictions on use, expiration dates, and the fact that points can only be redeemed on a subsequent purchase. Price discounts have none of these demerits, so from a rational perspective, consumers should prefer discounts to points. Which has the higher perceived value: a price discount or an equivalent amount of points? Nakagawa (2015) showed that the relative perceived value of awarding points and price discounts switches depending on the purchase amount. This phenomenon can be explained by mental accounting theory (Thaler, 1985) and the magnitude effect. Mental accounting theory is based on the value function of prospect theory and predicts that when the purchase amount is small, awarding points has higher perceived value than a price discount. The magnitude effect is the phenomenon whereby decision making and behavior change with the amount of money at stake; when the purchase amount is large, price discounts have higher perceived value than awarding points. We therefore investigated how the perceived value of price discounts and awarding points in a supermarket changes with the purchase amount. This study aims to detect the purchase amount at which subjects' preference switches between price discounts and awarding points by fitting linear and non-linear regression models. As a result, it becomes possible to consider which sales promotion is effective for each purchase amount.
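As a rough illustration of the mental-accounting logic, here is a minimal Python sketch comparing a segregated points gain against an integrated price reduction under a prospect-theory value function. The parameter values and the 5% reward rate are illustrative assumptions, not quantities estimated in this study; whether and where the preference switches depends on those parameters.

```python
# Illustrative sketch of the mental-accounting comparison (Thaler, 1985).
# All parameter values are assumptions for demonstration, not estimates.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function (Tversky & Kahneman, 1992 parameters)."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def points_value(price, reward):
    # Points are segregated from the purchase: a loss of `price`
    # booked together with a separate small gain of `reward`.
    return value(-price) + value(reward)

def discount_value(price, reward):
    # A discount is integrated with the purchase: one reduced loss.
    return value(-(price - reward))

for price in (500, 5000, 50000):
    reward = 0.05 * price  # assume a 5% discount or point-reward rate
    better = ("points" if points_value(price, reward) > discount_value(price, reward)
              else "discount")
    print(f"purchase {price}: {better} has higher perceived value")
```

The study's goal of locating a preference switch amounts to finding the purchase amount at which these two perceived values cross, given empirically fitted value-function parameters.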
Hiroshi Shimizu
Laboratory experiments on social dilemma games have provided robust data suggesting that the initially high cooperation rate declines as the game is repeated. However, the changes in decision-making mechanisms responsible for this decline are not well understood. Although reinforcement learning models can explain changes in the cooperation rate from the perspective of evolution, they cannot explain the high initial cooperation rate and its subsequent decline. In this study, a decision-making model was derived from the social value orientation (SVO) model (Muto, 2006), and the expected utility of cooperation and non-cooperation was integrated into a learning model. A laboratory experiment was then conducted to test the model. Model comparison showed that the data were best explained by the model that considered learning from the perspective of the game's gain structure, including the marginal per capita return (MPCR), and the cooperation of others. The results suggested the following: (1) altruism, one of the parameters of SVO, has a positive main effect on cooperation, whereas equality, the other parameter of SVO, affects cooperation only through its interaction with expectations of cooperation by others; (2) MPCR is estimated to be high at the beginning of the game, and cooperation decreases as MPCR is perceived more accurately; (3) the impact of equality accelerates the decline in cooperation when expectations of cooperation by others fall below 50% as a result of accurately estimating the MPCR.
Matt Ross
Sylvain Chartier
Each day we face the decision of how to maximize our resources by using our current knowledge to learn new things. Should we go to the new restaurant that just opened around the corner or stick to an old, reliable favourite? This is known as the exploration-exploitation dilemma, and it is at the heart of reinforcement learning. The present study looks at the exploitation half of this problem and aims to implement it in a biologically plausible recurrent associative memory model. In the framework of artificial neural networks, exploitation is observed when the network can iterate through many learned responses and stabilize on the correct one to solve a given task, a process akin to switching from a line attractor to a point attractor. More precisely, a Bidirectional Associative Memory (BAM) is used to accomplish such tasks, where the context dictates which attractor the network should converge to. For simple independent tasks, the BAM is sufficient. However, for overlapping tasks, the problem becomes nonlinearly separable, and the BAM needs an extra unsupervised layer to extract unique features from the inputs. These features, combined with the input, are then sent to the BAM, which can learn the different attractors adequately. This network was able to stabilize on the correct responses for tasks involving time series of varying lengths, overlap, and levels of correlation: the variability one would expect from the real world.
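For readers unfamiliar with the BAM, here is a minimal sketch of the classic association step on toy bipolar patterns; the unsupervised feature-extraction layer described above is omitted.

```python
import numpy as np

# Minimal sketch of a classic Bidirectional Associative Memory (BAM).
# The extra unsupervised feature layer described above is not shown.

def sgn(v):
    return np.where(v >= 0, 1, -1)  # bipolar threshold

def train_bam(X, Y):
    """Hebbian weight matrix from bipolar (+1/-1) pattern pairs."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def recall(W, x, n_iter=10):
    """Iterate x -> y -> x so the network settles on an attractor."""
    for _ in range(n_iter):
        y = sgn(W @ x)
        x = sgn(W.T @ y)
    return x, y

X = [np.array([1, -1, 1, -1]), np.array([-1, -1, 1, 1])]
Y = [np.array([1, 1, -1]), np.array([-1, 1, 1])]
W = train_bam(X, Y)
cue = np.array([1, 1, 1, -1])   # X[0] with its second element flipped
print(recall(W, cue)[1])        # settles on the stored response Y[0]
```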
Shawn Betts
Dr. John Anderson
Our research aims to model human motor skill learning using a video game paradigm. Here we evaluate the degree of motor skill transfer across game speeds and introduce changes that need to be made to the ACT-R architecture to model such transfer. This work uses the Auto Orbit game, in which a ship orbits a balloon at a constant speed. The player needs to learn how to adjust the ship's aim and fire missiles at the balloon under temporal constraints. We had subjects learn to play the same game in slow, medium, and fast game speed conditions. We further explored effects of skill transfer across conditions to assess humans' and models' ability to adapt their motor behavior across speeds. To do so, we utilized an ABA experimental design including all 9 A-B pairs of game speeds (out of slow, medium, and fast). Motor skill learning was evaluated using four experimental measures, with a focus on motor timing: keypress sequence regularity (Shannon entropy), motor timing variability (the logarithmic coefficient of variation of inter-shot intervals), and motor timing periodicity and regularity (both extracted from the autocorrelation function of shot times). Based on these measures, we first show that subjects were able to rapidly adapt to each game speed and adjust their firing rate accordingly. We then compare human and model motor skill learning and shot timing across speeds. Finally, we discuss our current model implementation and provide some ideas for future improvements.
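Two of these measures can be sketched directly. The key-sequence and interval formats below are illustrative assumptions, and the logarithmic coefficient of variation is given one plausible reading (the CV of log-transformed intervals).

```python
import numpy as np
from collections import Counter

# Sketches of two of the four measures named above; input formats (a key
# sequence, inter-shot intervals in ms) are illustrative assumptions.

def shannon_entropy(keys):
    """Entropy (bits) of the keypress distribution; lower = more regular."""
    counts = np.array(list(Counter(keys).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def log_cv(intervals_ms):
    """One reading of the logarithmic coefficient of variation:
    the CV of log-transformed inter-shot intervals."""
    x = np.log(np.asarray(intervals_ms, dtype=float))
    return float(x.std() / x.mean())

print(shannon_entropy("adadawdad"))
print(log_cv([420, 400, 450, 390, 440]))
```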
Koji Kosugi
In social psychology, group dynamics is one of the most important topics. To understand group dynamics, it is necessary to study how group network structure changes. There are two main methods for analyzing group network data that evolve over time: TERGM (Hanneke et al., 2010; Krivitsky and Handcock, 2014) and Siena (Snijders et al., 2010). TERGM (Temporal Exponential Random Graph Models) extends ERGM to accommodate intertemporal dependence in longitudinally observed networks; it allows ERGM network terms and statistics to be reused in a dynamic context, understood in terms of the formation and dissolution of edges. From network data at two or more points in time, Siena uses agent-based simulations to estimate how the network structure at an earlier point in time affects changes in the relationships between actors at a later point in time. In this study, we use TERGM and Siena to analyze similar network data and compare the results of each. The advantages and disadvantages of each model will also be identified.
Xianni Wang
Michael Byrne
This paper presents an ACT-R model designed to simulate voting behavior on full-face paper ballots. The model implements a non-standard voting strategy: it votes first from left to right across the ballot, and then from top to bottom. We ran this model on 6,600 randomly generated ballots governed by three variables that affected the visual layout of the ballot. The findings suggest that our model's error behavior is emergent and sensitive to ballot structure. These results represent an important step toward our goal of creating a software tool capable of identifying bad ballot design.
Dr. Jerald D. Kralik
Jaeseung Jeong
The next level in understanding human social cognition is to model it comprehensively. To this end, we have been developing a framework and model that takes as input an event involving someone (focusing on who it was and what they did) and assesses whether the event should change the social accounting among individuals and whether something should be done about it, such as communicating with others. Here, we present the computational development of the model and the results it generates as predictions to be tested empirically: e.g., more communication about those socially close to us when their actions are positive, and more about those with higher status (i.e., celebrities) when negative, as well as the relative merit or egregiousness of a wide range of behaviors. Leveraging what is known about the human social mind and brain, our work aims to provide a comprehensive model of human social cognition.
Dr. Yiyun Shou
Bruce Christensen
The tendency to accept a hypothesis based on fewer than normal pieces of information (the "jumping-to-conclusions" (JTC) bias) is a probabilistic reasoning bias commonly observed in clinical populations with delusions. This tendency can be attributed to a relatively low decision threshold and to overweighting individual pieces of evidence. Whilst some highly anxious individuals demonstrate the JTC bias, the implications of these findings remain contentious. The contention stems from a lack of understanding of how anxiety interacts with these two factors in belief updating. It remains unexplored whether anxious individuals deviate from rationality in belief updating just as much as the healthy population does, or are simply less "over-cautious" in gathering information. The present study adopts a systematic approach utilising a Bayesian graphical model to answer these questions. Based on the classic beads task, the model illustrates how a rational agent would update their prior belief upon receiving new information and at what point that updated belief would cause them to act. We then investigate the impact of anxiety on the decision threshold and evidence weights in the model, and ultimately on how belief updating would change. These steps allow for comparisons between a rational response and those exhibited by both healthy and anxious populations. By clearly illustrating the influence of anxiety on each parameter in the model, we can deepen our understanding of the associations between anxiety and the JTC bias. The properties of the model are demonstrated in a series of simulation studies. The implications of this model for real-life data will also be discussed.
Dr. Yiyun Shou
Michael Smithson
Rating scales are commonly used in psychological surveys to elicit respondents' judgements. However, the presence of response bias threatens the validity of survey results. Response bias (RB) refers to cases where certain response options are chosen disproportionately more often than others. The causes of RB include both respondent factors (such as personality or cultural influences) and context-dependent factors (such as scale format or the nature of the content). Results before and after controlling for RB can be completely different. This study aimed to investigate the influences of RB. A series of simulation studies was carried out to explore the influence of RB on means, variances, and associations across different conditions. The influence of RB on variables was evaluated by several indicators, including bias in estimation and variance ratios. Results showed that the influence of RB depended on the shapes of the distributions of the variables. In addition, we used data from the World Values Survey (WVS) Wave 6 to demonstrate how RB can influence means, variances, and associations among variables in the real world. We found that RB had substantially differing impacts on the means, variances, and distribution shapes of the WVS data across countries. Taken together, the simulation results and WVS findings indicate that RB can be a major challenge for measurement validity and measurement equivalence in studies using rating scales. We discuss implications and recommendations for researchers.
Fabian Soto
Numerous studies have investigated the processing of emotional expression and facial identity and the possible integrality of the two. However, these studies have not reached agreement on whether facial expression and identity are processed integrally or are perceptually separable, which may be due to a general lack of control of stimulus and decisional factors. This makes it necessary to develop experiments that overcome the shortcomings of previous research and may shed light on this debate. In this study, we performed an experiment with highly controlled stimuli using realistic 3-D computer-generated faces for which the discriminability of identities and expressions, the intensity of the expressions, and the low-level features of the faces were controlled. A large number of participants, distributed across twelve experiments, performed identification tasks for the six basic emotional expressions and the neutral expression. General recognition theory with individual differences was used to model the data, which allowed us to dissociate perceptual and decisional processes. Results showed robust violations of perceptual independence and decisional separability, which were consistent across most experiments. Perceptual separability results were inconsistent for most expressions, except for happiness and anger. Anger was exceptional in that it showed perceptual separability from identity, and vice versa. Happiness was perceptually separable from identity, but not vice versa. Interestingly, discriminability of identity was consistently reduced by happiness compared to a neutral expression.
Asli Kilic
Amy H. Criss
The strength-based mirror effect in recognition memory is the finding of an increase in hits and a decrease in false alarms after additional study. When one set of items in a list is strengthened while another set is not, recognition memory performance for the weak items is not negatively affected by being studied along with strong items. This finding is known as the null list-strength effect, and both findings are explained by the differentiation mechanism. The current study examined the list-strength paradigm in source memory by adopting a recognition task, and demonstrated a strength-based mirror effect and a null list-strength effect in source memory. Following these findings in source recognition memory, the predictions of the Retrieving Effectively from Memory model are explored to understand the underlying processes.
Dr. Scott Brown
With recent advances in computational modelling techniques, joint modelling of behavioural tasks has become more accessible. In the current experiment, results from a dual-task cognitive workload paradigm were compared across two groups: student participants and a highly skilled military group who were in a selection program. Both groups completed a multiple object tracking task (MOT) and a simultaneous detection response task (DRT). We then jointly estimated parameters for models corresponding to the decisions in the MOT and responses to the DRT using a Particle Metropolis within Gibbs sampling method, separately for each group. We used the Linear Ballistic Accumulator to model decisions in the MOT and the shifted Wald to fit responses in the DRT. MOT results showed a large difference between the groups in accuracy, with an interaction between group and level of difficulty in response times: military group response times slowed at a greater rate than the student group's. In the DRT, the military group responded faster and with greater accuracy than the student group. Model results indicated that the military group was more cautious than the students and tended to have faster processing speeds. Our findings show the strength of new sampling methodologies not only in explaining decision-making strategies, but also in evaluating correlations between model parameters, within and across tasks.
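For reference, here is a sketch of the shifted-Wald density commonly used for simple detection RTs. The parameterization (threshold, drift, shift) follows common usage and is an assumption here, not taken from the authors' code.

```python
import numpy as np

# Sketch of the shifted-Wald (shifted inverse-Gaussian) RT density.
# alpha: response threshold, gamma: drift, theta: non-decision shift.

def shifted_wald_pdf(t, alpha, gamma, theta):
    s = np.asarray(t, dtype=float) - theta  # decision time after the shift
    with np.errstate(invalid="ignore", divide="ignore"):
        dens = (alpha / np.sqrt(2 * np.pi * s**3)
                * np.exp(-((alpha - gamma * s) ** 2) / (2 * s)))
    return np.where(s > 0, dens, 0.0)       # density is 0 for t <= theta

rt = np.array([0.35, 0.45, 0.60])
print(shifted_wald_pdf(rt, alpha=1.0, gamma=3.5, theta=0.2))
```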
Jon-Paul Cavallaro
Reilly Innes
Caroline Kuhne
Guy Hawkins
Dr. Scott Brown
Hierarchical Bayesian techniques have proven to be a powerful tool for the estimation of model parameters and individual random effects. However, many existing methods for this kind of estimation are extensions of methods developed for other purposes, and are therefore not necessarily efficient for it. I present an implementation in R of a new sampler based on the paper by Gunawan et al. (2020, JMP). This new approach has the benefit of being built for hierarchical estimation from the ground up and is easily parallelised. Additionally, it allows for the estimation of the full parameter covariance matrix, providing the ability to model two tasks jointly and directly estimate correlations between parameters in the two tasks. The poster will provide an introduction to the approach, a brief overview of important use cases for the sampler, and a short tutorial on using the package. References to more detailed information and instructions for accessing the package will also be provided.
Dr. Henrik Singmann
The diffusion decision model (DDM) is the most prominent model for jointly modelling binary responses and associated response times. One hurdle in estimating the DDM is that the probability density function contains an infinite sum for which several different approximations exist. The goal of this project is to compare which of these approximations is the fastest given parameter values that are typically encountered when fitting the DDM. To this end, we implemented all existing approximations as well as some new combinations of existing methods in C++ and provided an interface to R via Rcpp. This enabled us not only to evaluate each approximation in an equal environment but also to utilize the faster C++ language while maintaining the R language interface. Using a benchmark approach, we compared the speed of all approximations against each other (as well as against some existing R implementations). The results of these benchmarks show that approximations that switch between the so-called small-time and large-time approximation based on input response time and parameter values are on average fastest, especially when combined with fast implementations of the small-time approximation. In addition, our new C++ implementations are faster than all existing implementations, even when including variability in the drift rate.
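To make the switching idea concrete, here is a hedged Python sketch following Navarro and Fuss (2009). The benchmarked C++/Rcpp implementations are faster and handle edge cases and term counts more carefully; this sketch conveys only the switching logic.

```python
import numpy as np

# Sketch of the small-time / large-time switching idea (after Navarro &
# Fuss, 2009) for the Wiener first-passage density at the lower boundary.

def dwiener(t, a, v, w, eps=1e-10):
    """t: response time; a: boundary separation; v: drift; w: relative start."""
    tt = t / a**2  # work in scaled time with unit boundary
    # number of series terms needed for accuracy eps (simplified bounds)
    ks = 2 + np.sqrt(max(-2 * tt * np.log(2 * eps * np.sqrt(2 * np.pi * tt)), 0))
    kl = np.sqrt(max(-2 * np.log(np.pi * tt * eps), 1) / (np.pi**2 * tt))
    if ks <= kl:  # small-time expansion needs fewer terms
        K = max(int(np.ceil(ks)), 2)
        k = np.arange(-((K - 1) // 2), (K - 1) // 2 + 1)
        f = ((w + 2 * k) * np.exp(-((w + 2 * k) ** 2) / (2 * tt))).sum()
        f /= np.sqrt(2 * np.pi * tt**3)
    else:         # large-time expansion needs fewer terms
        K = max(int(np.ceil(kl)), 1)
        k = np.arange(1, K + 1)
        f = np.pi * (k * np.exp(-(k**2) * np.pi**2 * tt / 2)
                     * np.sin(k * np.pi * w)).sum()
    # undo the time scaling and reinstate the drift term
    return f * np.exp(-v * a * w - v**2 * t / 2) / a**2

print(dwiener(t=0.6, a=1.0, v=2.0, w=0.5))
```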
Fabian Soto
The hippocampus is a cortical structure involved in a variety of learning and memory tasks. It is most vital, however, for tasks that involve rapid learning of complex stimuli. One such task is the contextual fear conditioning (CFC) paradigm. Although there is a plethora of evidence linking the hippocampus to CFC tasks, the precise mechanistic function of the hippocampus during CFC remains elusive. A close inspection of the distinct input and output pathways of the hippocampus reveals that sub-field CA1 might serve as a critical junction where contextual fear memories are stored and organized. Recent evidence also suggests that the prefrontal cortex exerts top-down cognitive control over memory formation in CA1 via the nucleus reuniens (NR). We present a neurocomputational model of field CA1 that takes into account the various inputs to the region, including NR inputs to inhibitory CA1 interneurons, which control the specificity of memory encoding in CA1. We use spiking neuron and synaptic plasticity equations that are more neurobiologically realistic than those used in previous models. Simulations with the model suggest a distinct role for the nucleus reuniens input in separating representations of highly similar events. Furthermore, the model explains recent experimental results concerning the role of PFC inputs to CA1 in controlling the generalization of fear memories in the CFC paradigm.
Jennifer Trueblood
This project examines how people learn strategies of multi-attribute decision making in an unfamiliar environment where they must learn two important properties of cues: discriminability (i.e., the proportion of occasions on which a cue has different values for a pair of options) and validity (i.e., the probability that a cue identifies the correct option when it discriminates). In the past, most researchers have looked at how known or guessable values of discriminability and validity relate to search and stopping rules. We try to understand how humans formulate search and stopping rules when they do not know the underlying discriminability and validity of cues, but must learn these over time. We model behavior using a Bayesian model in which beliefs about the underlying validity and discriminability of cues are updated after every observation made by the participant (Mistry, Lee, & Newell, 2016). We use the beliefs about discriminability and validity obtained from the Bayesian model to define different search strategies that participants might use in the task. To link beliefs to search strategies, we use sampling procedures in which samples are drawn from the belief distributions and used to order cues for search. We test our models on data collected from human subjects and show that the modeling results map intuitively onto behavioral findings from the experiment.
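The updating scheme can be sketched with independent Beta posteriors per cue; the uniform priors and the interface below are illustrative assumptions, not the exact model specification.

```python
import numpy as np

# Sketch of the belief-updating idea: Beta posteriors over each cue's
# discriminability and validity, updated after every observation
# (in the spirit of Mistry, Lee, & Newell, 2016). Priors are assumed.

class CueBelief:
    def __init__(self):
        self.disc = [1.0, 1.0]  # Beta(a, b) over the discrimination rate
        self.val = [1.0, 1.0]   # Beta(a, b) over validity

    def update(self, discriminated, correct=None):
        self.disc[0 if discriminated else 1] += 1
        if discriminated:  # validity is only observable when the cue
            self.val[0 if correct else 1] += 1  # actually discriminates

    def sample(self, rng):
        """One draw per dimension; cues can be ordered for search by draws."""
        return rng.beta(*self.disc), rng.beta(*self.val)

rng = np.random.default_rng(1)
cue = CueBelief()
cue.update(discriminated=True, correct=True)
cue.update(discriminated=False)
print(cue.sample(rng))
```

Ordering cues by such draws yields stochastic search orders that sharpen as beliefs become more certain, which is one way to link beliefs to search strategies.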
Michael Lee
Standard signal detection theory (SDT) models use an unbiased criterion as a comparison point. But, in some situations, the unbiased criterion is not the right reference point to measure bias in decision making. We consider the context of experts predicting the winning team in a National Football League (NFL) game. An unbiased criterion assumes that the home and away teams have equal probabilities of winning and that any partiality toward the home team over the away team is detrimental. However, the home team advantage exists, as evidenced by the behavior of betting markets and home teams having won 58% of the games throughout the 1981-1996 NFL seasons (Vergin & Sosik, 1999). Altogether, this suggests that experts should have some partiality toward the home team to improve their prediction accuracy. We apply hierarchical SDT models to expert predictions provided by nflpickwatch.com for the 2014-2019 NFL regular seasons to measure various forms of bias in predictions. In particular, we use the SDT framework to evaluate expert bias in terms of home team advantage, the cumulative win-loss record of teams, and herding by making the same prediction as other experts. Applying our model provides a way of measuring the extent to which experts are under- or over-reliant on these different sorts of biases when they make predictions.

Vergin, R. C., & Sosik, J. J. (1999). No place like home: An examination of the homefield advantage in gambling strategies in NFL football. Journal of Economics and Business, 51(1), 21-31. doi:10.1016/s0148-6195(98)00025-3
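The basic, non-hierarchical SDT quantities behind the analysis can be sketched as follows; treating "home team wins" as the signal class, and the counts shown, are assumptions for illustration.

```python
from scipy.stats import norm

# Sketch of basic SDT measures; "home team wins" is treated as the signal
# class here by assumption, and the counts are illustrative.

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    h = hits / (hits + misses)                      # hit rate
    f = false_alarms / (false_alarms + correct_rejections)
    d_prime = norm.ppf(h) - norm.ppf(f)             # discriminability
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))  # c < 0: bias to "home win"
    return d_prime, criterion

# e.g., one expert's picks over a season:
print(sdt_measures(hits=120, misses=40, false_alarms=60, correct_rejections=36))
```

A criterion shifted toward home-win picks is not necessarily an error here; the point of the paper is that the optimal reference point itself is biased toward the home team.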
Bennett L. Schwartz
Fabian Soto
We present a novel model-based analysis of the association between awareness and perceptual processing based on a multidimensional version of signal detection theory (general recognition theory, or GRT). The analysis fits a GRT model to behavioral data and uses the estimated model to construct a sensitivity vs. awareness (SvA) curve, representing sensitivity in the discrimination task at each value of relative likelihood of awareness. This approach treats awareness as a continuum rather than a dichotomy, but also provides an objective benchmark for low likelihood of awareness. In two experiments, we assessed nonconscious facial expression recognition using SvA curves in a condition in which emotional faces (fearful vs. neutral) were rendered invisible using continuous flash suppression (CFS) for 500 (Experiment 1) and 700 (Experiment 2) milliseconds. Participants had to provide subjective awareness reports, expression discrimination responses, and metacognitive judgements of confidence on those discrimination responses. We predicted and found evidence for the nonconscious processing of facial expression, in the form of higher than chance-level sensitivity in the area of low likelihood of awareness. We also found evidence for metacognitive sensitivity in the absence of awareness. The similarity between the pattern of results from perceptual discrimination and metacognitive judgements is in line with the detection-theoretic assumption that both processes are based on the same perceptual evidence variable. To the best of our knowledge, this is the first objective and bias-free demonstration of nonconscious perceptual processing of facial expression.
Prof. Richard Golden
Bloom's Taxonomy (BT; Bloom, 1956) and Bloom's Revised Taxonomy (BRT; Anderson et al., 2001) are widely used to guide the design and evaluation of learning assessments, but few studies have investigated the underlying assumptions of such taxonomies. Data from two undergraduate social psychology multiple-choice exams were analyzed using cognitive diagnostic models (CDMs). One exam consisted of 33 questions and was taken by 86 students; the other consisted of 58 questions and was taken by 47 students. We used key words in the exam questions to sort them into one of the skill categories that constitute the "understanding" rung of BRT's cognitive processes hierarchy: "Explaining" (E), "Classifying/Comparing" (CC), or "Inferring/Interpreting" (II). Next, we specified two Deterministic Input, Noisy "And" gate (DINA) models, which predict the probability of correctly answering an exam question. The "Exclusive Resources" (ER) model assumed each item required only the latent skill corresponding to its category. The second model, a "Shared Resources" (SR) model representative of BRT, additionally specified that all items require a common latent skill. Both the BIC (Bayesian Information Criterion) and its sampling error were estimated using nonparametric bootstrapping, and the Bayes Factor (BF) was calculated from the average BICs. The BF analysis indicated that the ER model was more likely than the SR model for both exams. These findings contradict a foundational assumption of BT and BRT: that higher-order inference involving explaining, classifying/comparing, and inferring/interpreting requires a shared latent skill (e.g., remembering). The relevance of this methodology for evaluating learning taxonomy assumptions using CDMs is discussed.
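For concreteness, here is a sketch of the DINA item-response rule shared by both models; the Q-matrix rows, skill profile, and slip/guess values are illustrative, not the fitted exam values.

```python
import numpy as np

# Sketch of the DINA item-response rule underlying both models.
# Skills are ordered [E, CC, II, shared]; all values are illustrative.

def p_correct(alpha, q, slip, guess):
    """P(correct | skill profile alpha): alpha and q are 0/1 vectors over
    skills; q marks the skills an item requires."""
    eta = int(np.all(alpha[q == 1] == 1))  # masters every required skill?
    return (1 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 0, 1, 0])  # student masters E and II, not the shared skill
q_er = np.array([1, 0, 0, 0])   # ER model: an "E" item needs only skill E
q_sr = np.array([1, 0, 0, 1])   # SR model: the same item also needs "shared"
print(p_correct(alpha, q_er, slip=0.1, guess=0.2))  # 0.9: skill mastered
print(p_correct(alpha, q_sr, slip=0.1, guess=0.2))  # 0.2: shared skill missing
```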
Dr. John Anderson
Questions of strategy selection have been studied in various contexts such as problem solving, text editing, and even dynamic, fast-paced tasks. One way to model the strategy selection process is as a learning and decision problem: with experience, the agent learns the expected utilities of strategies and executes a strategy based on what it has learned. However, the strategies studied in most past research have relatively stable utilities. Even when the task structure is manipulated to change the utilities of strategies, these changes are relatively infrequent. This contrasts with many real-world skills, such as sports and video gaming, where different strategies are optimal at different points in the learner's trajectory. As a learner practices a skill, improvements in the learner's perceptual-motor calibration to the physics of tools and devices interact with the difficulty of executing a strategy to affect the strategy's utility. Furthermore, it is often unknown what the maximum utility of any strategy will be, as this is partly determined by the learner's own general perceptual-motor abilities and prior experiences. How humans learn and select strategies in the face of such variation and uncertainty warrants further investigation. Toward that goal, we present a task and strategy paradigm that captures many of the features of a typical complex skill. We also demonstrate possible interactions between strategy use, perceptual-motor calibration, and task knowledge using past experimental data and model simulations within the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture.
Steven Verheyen
Multidimensional scaling (MDS) is a popular technique for embedding items in a low-dimensional spatial representation from a matrix of the dissimilarities among those items (Shepard, 1962). MDS has been used simply as a visualization aid or dimensionality reduction technique in statistics and machine learning, but in cognitive science it has also been interpreted as a cognitive model of similarity perception or similarity judgment, and is often part of a larger framework for modeling complex behaviors like categorization (Nosofsky, 1992) or generalization (Shepard, 2004). However, a persistent challenge in the application of MDS is selecting the latent dimensionality of the inferred spatial representation; the dimensionality is a hyperparameter that the modeler must specify when fitting MDS. Perhaps the best-known procedure for selecting dimensionality is to construct a scree plot of residual stress (the difference between empirical dissimilarities and the dissimilarities implied by the model) as a function of dimensionality, and then look for an elbow: the dimensionality at which stress has decreased dramatically but then plateaus. This elbow is taken to indicate that extending the space with additional dimensions does not substantially improve the fit of the model to the input dissimilarities. Unfortunately, this procedure is highly subjective. Often such elbows do not exist, and instead the scree plots show a smooth decrease in stress as MDS increasingly overfits to noise at higher dimensionalities. In response, various more principled statistical techniques for model selection have been proposed that account for the trade-off between model complexity (dimensionality) and model fit (stress), including likelihood ratio tests (Ramsay, 1977), BIC (Lee, 2001), and Bayes factors (Gronau & Lee, in press). While such techniques are valuable, they can be prohibitively computationally complex for novice MDS users, and they rely on a number of assumptions that are not necessarily met (e.g., Storms, 1995).

An alternative technique that may avoid such problems is cross-validation. Under this approach, MDS of a given dimensionality would be fit to a subset of the available dissimilarity data, the model's predicted distances for held-out dissimilarity data would be evaluated, and the dimensionality that maximizes performance on the held-out data would be selected. Despite the simplicity and generality of cross-validation as a model selection procedure, it has seen relatively little application to MDS or related methods (Steyvers, 2006; Roads & Mozer, 2019; Gronau & Lee, in press), with no systematic exploration of its capabilities, as there has been for likelihood ratio tests, BIC, and Bayes factors (Ramsay, 1977; Lee, 2001; Gronau & Lee, in press). In the present work, we therefore examine the usefulness of cross-validation over cells of a dissimilarity matrix in simulations and in applications to empirical data.
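A minimal sketch of the cell-wise cross-validation idea follows, using plain gradient descent on raw stress rather than a production MDS routine; the masks, learning rate, and toy data are illustrative assumptions, not the procedure used in the paper.

```python
import numpy as np

# Sketch: fit MDS using only training cells of the dissimilarity matrix,
# then score predicted distances on held-out cells; pick the dimensionality
# with the lowest held-out stress.

def fit_mds(D, train, dim, n_iter=2000, lr=0.005, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(D.shape[0], dim))
    for _ in range(n_iter):
        diff = X[:, None, :] - X[None, :, :]
        dist = np.sqrt((diff**2).sum(-1)) + 1e-9
        resid = np.where(train, dist - D, 0.0)  # gradient from training cells only
        X -= lr * (2 * (resid / dist)[:, :, None] * diff).sum(1)
    return X

def heldout_stress(D, X, test):
    dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    return float(((dist - D)[test] ** 2).sum())

n = 10
rng = np.random.default_rng(1)
points = rng.normal(size=(n, 2))  # toy data with true 2-D structure
D = np.sqrt(((points[:, None] - points[None]) ** 2).sum(-1))
test = (rng.uniform(size=(n, n)) < 0.2) & ~np.eye(n, dtype=bool)
train = ~test & ~np.eye(n, dtype=bool)
for dim in (1, 2, 3, 4):  # held-out stress should bottom out near dim = 2
    print(dim, heldout_stress(D, fit_mds(D, train, dim), test))
```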
Dr. Thom John Owen Griffith
Nathan Lepora
Integration-to-threshold models of two-choice perceptual decision making have guided our understanding of the behaviour and neural processing of humans and animals for decades. Although such models seem to extend naturally to multiple-choice decision making, consensus on a normative framework has yet to emerge, and hence the implications of threshold characteristics for multiple choices have only been partially explored. Here we consider sequential Bayesian inference as the basis for a normative framework together with a conceptualisation of decision making as a particle diffusing in n-dimensions. This framework implies highly choice-interdependent decision thresholds, where boundaries are a function of all choice-beliefs. We show that in general the optimal decision boundaries comprise a degenerate set of complex structures and speed-accuracy tradeoffs, contrary to current 2-choice results. Such boundaries support both stationary and collapsing thresholds as optimal strategies for decision-making, both of which result from stationary complex boundary representations. This casts new light on the interpretation of urgency signals reported in neural recordings of decision making tasks, implying that they may originate from a more complex decision rule, and that the signal as a distinct phenomenon may be misleading as to the true mechanism. Our findings point towards a much-needed normative theory of multiple-choice decision making, provide a characterisation of optimal decision thresholds under this framework, and inform the debate between stationary and dynamic decision boundaries for optimal decision making.
Dr. Thom John Owen Griffith
Nathan Lepora
The theory of decision making has largely been developed as a disembodied open-loop process, however there is growing recognition that ecologically valid scenarios require integration of movement dynamics into current decision making theory, and a revision of what are considered to be core/fundamental decision components. Here we develop the theory of decision making as a closed loop process, first exploring the role of confidence both as a neural computation within the loop, affecting movement dynamics and as a property of the egocentric frame with a causal influence on cognition. Secondly, we consider the relationship between closed-loop components/processing and stability — in embodied systems action is accumulated and so physical restrictions limit volatility, moreover the reciprocal relationship between movement and evidence processing means that this stabilisation may also happen on a neural level in the form of a biased gain during evidence accumulation, improving stability/convergence. Finally, we examine closed-loop embodied decision making in the context of optimality — it is generally accepted that open-loop decision making is optimised to maximise reward via some form of Bayes’ Risk, prescribing a speed-accuracy tradeoff in so doing. For closed-loop decision making however, the form of the ‘objective function’ is unknown, as such we consider higher level, ecologically inspired ideas of optimality such as adaptability to e.g. moving targets or nonstationarity, to explore this fundamental aspect of embodied decision making.
Rosemary Cowell
David Huber
Across 640 training trials, participants using a computer tablet learned to move a cursor that had its movement direction rotated by 90 degrees relative to onscreen visual feedback. These training trials involved either an all-at-once "sudden" rotation to 90 degrees starting at trial 1 or a "gradual" rotation in nine separate increments of 10 degrees. Similar prior work found a larger detrimental aftereffect when transferring back to no rotation following gradual adaptation training. We replicated this effect and crossed these conditions with a speed/accuracy emphasis manipulation. To characterize the nature of learning during training, we applied a simple two-parameter learning model to trial-by-trial errors in motion direction. One parameter captured the learning rate, reflecting trial-to-trial adjustments based on the difference between predicted and observed rotation. The other parameter captured memory, reflecting a tendency to use the estimate of rotation from the previous trial. This simple model captured individual differences, speed/accuracy emphasis, and subtle differences between the sudden and gradual training regimes. Furthermore, the model correctly predicted transfer performance for the gradual condition. However, it grossly over-estimated transfer errors for the sudden condition. We hypothesize that participants in the sudden condition learned that the mapping between movements and visual feedback can abruptly change (i.e., a change of environment, rather than visuomotor adaptation), allowing them to quickly adopt a new visuomotor mapping in the transfer phase when the rotation was removed. This learning-to-learn in the sudden condition may reflect model-based forms of reinforcement learning, in contrast to trial-and-error model-free learning.
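A sketch of one reading of this two-parameter model follows; the exact update equations below are an illustrative reconstruction from the description above, not the authors' code.

```python
# Sketch of a two-parameter trial-by-trial model: a learning-rate parameter
# drives error-based updating of a rotation estimate, and a memory parameter
# weights the rotation experienced on the previous trial. Assumed equations.

def simulate(observed_rotations, learning_rate, memory):
    """Return the model's predicted rotation (degrees) on each trial."""
    estimate = 0.0   # slowly updated internal estimate of the rotation
    prev_obs = 0.0   # rotation experienced on the previous trial
    predictions = []
    for obs in observed_rotations:
        pred = memory * prev_obs + (1 - memory) * estimate
        predictions.append(pred)
        estimate += learning_rate * (obs - pred)  # error-driven adjustment
        prev_obs = obs
    return predictions

# gradual regime: rotation introduced in nine 10-degree increments
schedule = sum(([10 * step] * 71 for step in range(1, 10)), [])
print(simulate(schedule, learning_rate=0.2, memory=0.4)[:3])
```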
Mr. Daniel Brand
Marco Ragni
Feedback on drawn inferences can lead to an adaptation of future responses and underlying cognitive mechanisms. This article presents a reanalysis of recent hypothesis-driven experiments in syllogistic reasoning in which participants were presented with different feedback conditions (no feedback, 1s, 10s). We extend the original analysis, which focused only on no feedback vs. 1s feedback, by including the additional 10s condition. For our analysis, we rely on the data-driven, theory- and hypothesis-agnostic Joint Nonnegative Matrix Factorization, which allows us to contrast datasets by extracting response patterns that reflect common and distinct response behavior. Our results support the previous claims that feedback does not generally boost logical reasoning ability but reduces the influence of biases against the response indicating that nothing logically follows from the premises.
Mr. Nicolas Riesterer
Marco Ragni
Recently, the TransSet model for human syllogistic reasoning was introduced and shown to outperform the previous state of the art in terms of predictive performance. In this article, we pick up the TransSet model and extend it to allow for capturing individual differences with respect to the conclusion "No Valid Conclusion" indicating that no logically correct conclusion can be derived from a problem's premises. Our evaluation is based on a coverage analysis in which a model's ability to capture individuals in terms of its parameters is assessed. We show that TransSet also outperforms state-of-the-art models on the basis of individuals and provide further evidence for the existence of an NVC aversion bias in human syllogistic reasoning.
Dr. Marieke Van Vugt
Partha Pratim Roy
Cognitive science has started to make more and more use of techniques from machine learning to disentangle the neural correlates of cognitive processes. Machine learning can be particularly useful in complex situations where many things are happening at the same time. Here we apply it to investigate the cognitive processes in a rather novel situation: Tibetan monastic debate. Monastic debate is a core practice used in Tibetan monasteries to train precision of reasoning and memorization. In the work presented here, we distinguish between the occurrence of two attentional states: focus and distraction. This gives insight into the cognitive effect of debate training.
Stan Franklin
Activation has become a pervasive concept in many scientific disciplines, including cognitive and neural modeling, and AI. Unfortunately, its applications and functions are so broad and varied that it is difficult for practitioners to discuss the topic in precise and meaningful ways. This is particularly apparent in cognitive architectures, where a wider breadth of activation’s utilities and forms have been explored. To help combat these terminological difficulties, and hopefully facilitate productive discourse and the development of future applications, we introduce (1) a lexicon of activation-related concepts, and (2) a functional taxonomy that enumerates many activation-related “design patterns” that have appeared in cognitive architectures. We demonstrate our taxonomy by applying it to the LIDA cognitive architecture, which includes one of the most varied and comprehensive adoptions of activation-related functionality.
Matthew Danyluik
Yvonne Y. Chen
Jeremy B. Caplan
Brain-activity measures have the potential to provide powerful new constraints on memory models. With classifier-based approaches, one can identify signals, derived from training-set data, that predict memory outcome on test-set data. Advancing beyond descriptive methods, the classifier-based approach can identify brain-activity features that are more likely to be behaviourally relevant, rather than spectator or performance-irrelevant processes. Instead of chasing optimal classification, we take a systematic approach to evaluating this, and to identifying where improvements to classifier approaches could be made. We start with univariate event-related potential measures that have previously been implicated in recognition-memory study and matching processes (study: LPC and slow wave; test: FN400 and LPP). In 64 participants performing old/new verbal recognition, univariate measures predicted memory accuracy with small but significant success (95% CI AUC = study: [0.51 0.54]; test: [0.52 0.55]; chance = 0.5). Multivariate LDA and SVM spatio-temporal classifiers performed better (study: [0.52 0.56]; test: [0.55 0.60]), suggesting the importance of features beyond these previously identified ERP features. Overall success rates remained remarkably low, but this is in line with results from other related published approaches. However, AUC approached 0.7 for high-performing participants. Addressing individual differences in motivation/engagement, or titrating difficulty, may lead to higher classification success. Future approaches should also incorporate the myriad known behavioural factors that determine memory outcome but are absent from brain activity during study or test of a particular item.
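The pipeline's logic can be sketched with scikit-learn on placeholder data of roughly the right shape; the feature dimensions and random data here are assumptions for illustration, not the study's recordings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Sketch of the classifier approach: spatio-temporal EEG features predicting
# later memory outcome, scored by cross-validated AUC. Placeholder data only.

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64 * 20))  # trials x (electrodes * time bins)
y = rng.integers(0, 2, size=300)     # later remembered vs. forgotten

auc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=5, scoring="roc_auc")
print(auc.mean())  # ~0.5 on random data; ~0.52-0.60 reported above for real data
```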
Florian Seitz
We present the software package cognitivemodels, a tool to build, apply, estimate, test, and develop computational cognitive models in R. The free package is designed for coding efficiency, robustness, and flexibility and offers novice modelers a user-friendly front-end to use models and experienced modelers a powerful back-end to write models. Here, we show how the package implements the generalized context model (Nosofsky, 1986) and cumulative prospect theory (Tversky & Kahneman, 1992) and how end users can write further models with the package. We further present the package's variety of goodness-of-fit measures (e.g., binomial or normal log likelihood, mean-squared error, or accuracy), parameter constraints (linear constraints, box constraints, equality constraints, fixed parameters), optimization routines (e.g., Nelder-Mead), and choice rules (e.g., soft-max, epsilon greedy, or Luce's choice rule). We believe the package makes cognitive modeling more widely accessible and adds to robust model development.
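To give a flavor of what the GCM computes, here is a plain-Python sketch of Nosofsky's (1986) model; this illustrates the computation only and is not the cognitivemodels interface itself.

```python
import numpy as np

# Plain sketch of the generalized context model (Nosofsky, 1986); parameter
# names (c, w, r) follow common usage, values are illustrative.

def gcm_prob(probe, exemplars, labels, c=1.0, w=None, r=1):
    """P(category 1 | probe) under the GCM with exponential similarity."""
    X = np.asarray(exemplars, dtype=float)
    w = np.full(X.shape[1], 1 / X.shape[1]) if w is None else np.asarray(w)
    d = (w * np.abs(X - probe) ** r).sum(1) ** (1 / r)  # weighted distance
    s = np.exp(-c * d)                                  # similarity to exemplars
    labels = np.asarray(labels)
    return s[labels == 1].sum() / s.sum()

exemplars = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
labels = [0, 0, 1, 1]
print(gcm_prob([0.75, 0.7], exemplars, labels, c=2.0))  # close to category 1
```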
Ms. Nicole King
Prof. Pernille Hemmer
Julien Musolino
A leading idea in the literature on the cognitive science of religion is that supernatural concepts (e.g., gods, ghosts, spirits) are memorable because they are minimally counterintuitive (MCI)—i.e., they contain a small number of violations of ontological assumptions. These violations increase the salience of the resulting concepts, but because their number remains low, they only minimally complicate the concepts' inferential structure. Consequently, MCI items are regarded as optimal for memory and are therefore prime candidates for cultural transmission. Interestingly, this phenomenon is reminiscent of the von Restorff effect (VR), which describes a pattern of enhanced memorability for outlier items in a homogeneous list. We therefore ask whether the MCI and VR effects may be behavioral manifestations of the same underlying cognitive processes. To permit a meaningful comparison of the two effects, we developed a novel set of stimuli to guard against a number of existing confounds. We objectively measured and normed for a number of relevant parameters by obtaining ratings from a large M-Turk sample. We then conducted an experiment to assess the relative memorability of MCI and VR items compared to intuitive (INT) controls. Results indicate that MCI and VR items are both recalled better than INT concepts, but, crucially, that MCI items do not possess a memorability advantage over VR items. Furthermore, results from additional conditions suggest that the benefit of minimal counterintuitiveness is not confined to supernatural concepts. We argue that this evidence supports a single mechanism underlying both the MCI and VR effects.
Dr. Marieke Van Vugt
In Tibetan monasteries, the education system relies heavily on a very specific style of debating that is at once exhilarating and intellectually rigorous. Relatively little research has been done on the psychological and neural mechanisms of this debate, which may be an interesting method for education around the world. Hence the formation of a theory of this practice is important. Here we present a computational theory of Tibetan monastic debate implemented in the ACT-R cognitive architecture. We complement the ACT-R model with graph theory to represent knowledge and show how we can capture the dynamic flow of a debate in our model. Future research should validate the model in its native population and enrich it with more detailed strategies. Nevertheless, we think it provides an interesting example of how the interactive process of debating can be modelled.
Mr. Shashank Uttrani
Varun Dutt
Prior research in decisions from experience (DFE) has investigated people's consequential decisions after information search both experimentally and computationally. However, prior DFE research has yet to explore how computational cognitive models and their mechanisms could explain the effects of problem framing in experience. The primary objective of this paper is to address this literature gap and develop Instance-Based Learning Theory (IBLT) models of the effects of problem framing. Human data were collected on a modified form of the Asian disease problem, posed in terms of the COVID-19 pandemic, across two between-subjects conditions: gain (N = 40) and loss (N = 40). The COVID-19 problem was presented as "lives saved" in the gain condition and "lives lost" in the loss condition. Results revealed the absence of the classical framing effect: there was no preference reversal between the gain and loss conditions in experience. Next, an IBL model was developed and calibrated to the data obtained in the gain and loss problems. The calibrated model was generalized to the non-calibrated conditions (gain to loss and loss to gain). An IBL model with ACT-R default parameters was also generalized. Results revealed that the IBL model with calibrated parameters explained human choices more accurately than the IBL model with ACT-R default parameters. Participants also showed greater reliance on the recency and frequency of outcomes and less variability in their choices across both gain and loss conditions. We highlight the main implications of our findings for the cognitive modeling community.
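The core IBLT computation (activation, retrieval probability, blending) can be sketched as follows; d and sigma are the ACT-R defaults mentioned above, and the task wrapper, multiple presentations per instance, and fitting code are omitted.

```python
import numpy as np

# Sketch of IBLT blending: noisy activation of past instances determines
# retrieval probabilities, which weight the outcomes into a blended value.
# d and sigma are ACT-R defaults; the instance set is illustrative.

def blended_value(outcomes, timestamps, t_now, d=0.5, sigma=0.25, seed=0):
    rng = np.random.default_rng(seed)
    age = t_now - np.asarray(timestamps, dtype=float)
    u = rng.uniform(1e-6, 1 - 1e-6, size=age.size)
    act = -d * np.log(age) + sigma * np.log((1 - u) / u)  # activation + noise
    tau = sigma * np.sqrt(2)
    p = np.exp(act / tau) / np.exp(act / tau).sum()       # retrieval probs
    return float((p * np.asarray(outcomes)).sum())        # blended outcome

# outcomes observed on past choices of one option, at given trial times:
print(blended_value(outcomes=[10, 0, 10], timestamps=[1, 4, 7], t_now=10))
```

Recency enters through the age term and frequency through the number of instances, which is how the model captures the reliance on recency and frequency reported above.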
Prof. Julia Haaf
Claire E Stevenson
How do people evaluate whether an idea is creative or not? It is commonly assumed that creative ideas have two characteristics: they are original as well as useful. However, research suggests that, overall, people value originality more than utility when they judge whether something is creative. But individuals may also differ in how much they value originality and utility in their creativity judgments. In the extreme, some individuals may take utility into account while others do not at all.

To examine conceptions of creativity in a standardized way and to explore individual differences, we used the creative-or-not (CON) task, a timed two-choice decision-making task. In this task, participants decide whether they find uses for certain objects creative or not (e.g., using a book as a buoy). The different use items vary on the two dimensions ‘originality’ and ‘utility’.

We analyzed the CON task data using a Bayesian hierarchical diffusion model. In a sample of university students (n = 293; 17,806 trials) we found, as expected, that the originality and utility of the use items influence the drift rate of the diffusion model, but that the effect of originality is greater. This suggests that, on average, people take originality and utility into account when they evaluate creativity, but consider originality more important than utility. In addition, we find substantial individual differences: the more individuals took originality into account when evaluating creativity, the less they took utility into account, and vice versa.
Dr. Ami Eidels
Keith Nesbitt
Rachel Heath
James T. Townsend
Analyses (and models) of response times typically rely on data from trial-by-trial designs, in which experimental tasks present participants with a series of trials constructed as a sequence of stimulus presentation, response, and a short break, over and over again. However, real-world behaviours (e.g., driving) often require continuous monitoring of information accompanied by ongoing responses. In these cases, there is no start and end to a trial, and the researcher cannot measure RT, pre-empting many successful approaches to the analysis of RT data (such as Systems Factorial Technology, on which we focus here). We developed and tested a novel technique for converting continuous tracking data to a trial-like form, producing what we call ‘pseudo response times’. These pseudo response times can be conveniently subjected to many existing RT analysis techniques. Participants completed a continuous tracking task. We calculated the absolute tracking error as the distance between the user-controlled needle and the to-be-tracked target. We then converted these data to pseudo RTs by setting a threshold of maximum acceptable tracking error, identifying points in the time series when tracking error crossed this threshold, and calculating the time taken to return to acceptable performance. Analyses of pseudo RTs agreed with equivalent analyses of mean tracking error, albeit with less sensitivity.
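The conversion itself is simple to sketch; the sampling rate, threshold, and toy error series below are illustrative assumptions, not the experiment's values.

```python
import numpy as np

# Sketch of the pseudo-RT conversion described above: find points where
# tracking error crosses the threshold, then time how long error takes to
# return below it. Threshold, sampling rate, and data are illustrative.

def pseudo_rts(error, threshold, hz=60):
    """error: absolute tracking error per sample; returns pseudo RTs in s."""
    above = error > threshold
    rts, start = [], None
    for i in range(1, len(error)):
        if above[i] and not above[i - 1]:          # error exceeded tolerance
            start = i
        elif start is not None and not above[i]:   # back within tolerance
            rts.append((i - start) / hz)
            start = None
    return rts

# toy error series that periodically drifts past the threshold
err = 3 + 3 * np.sin(np.linspace(0, 20 * np.pi, 5000))
print(pseudo_rts(err, threshold=5.0)[:3])
```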
Dr. Prakash Mondal
The goal of the current work is to develop a theoretical model that may account for some of the speech disarticulations that occur among children with Speech Sound Disorders (SSDs). To do so, we propose an interface module, a specialized section within the mental realm, whose nature and functioning may provide useful insights into SSDs. The postulation of an interface module is necessitated by the fact that there are facets of errors in SSDs and in typical populations that cannot be explained simply in terms of articulatory/phonetic factors or of abstract sound representations. This paper therefore presents a detailed theoretical view of the interface: its nature, its relation to other levels in the mental space, and the functions it performs. The results of applying the proposed model to certain types of sound alterations in SSDs are described, with implications for the cognitive representation of speech sounds.
Elisabeth Reid
Robert L. West
We were interested in testing Newell's Micro Strategies hypothesis, as well as assumptions made by both ACT-R and SGOMS theory, using a mobile game and a predictive SGOMS-ACT-R model. The model is designed to predict expert game play. We found that the model predicted the results in most conditions; however, in one condition the player employed an alternative Micro Strategy.
Robert L. West
Jennifer Schellinck
Babak Esfandiari
As machines become autonomous agents acting within society, there will be an increasing need for them to interact with people. For a machine to act within a society free of its creator's supervision, it will also have to have the same capacity for intersubjective behavior as people. This paper presents a design system for creating a moral artificial agent based on cognitive modeling and test-driven development.
Terry Stewart
Jesse Hoey
Social interactions are part of the day-to-day life of most human beings, and affect, decision-making, and behavior are central to them. With the increasing adoption of technology in our society, interaction between humans and artificially intelligent agents is also increasing. Large-scale brain-inspired neural models have been equipped with capabilities to fulfil a variety of tasks, but there has been relatively little focus on making them capable of handling social interaction. In this paper, we present NeuroACT, a neural computational model and implementation of a socio-psychological theory called Affect Control Theory (ACT). This is a step towards building an emotionally intelligent AI agent that can handle interactions. It takes as input a continuous affective interpretation of a perceived event, consisting of an actor, a behavior, and an object, and generates post-event predictions of the next optimal behavior to minimize deflection. The aim is to model the role of affect in guiding decision-making in AI agents, resulting in interactions that are similar to human interactions, while inhibiting some behaviors based on the social context.
Don Morrison
Prof. Andrea Stocco
Mark Orr
Christian Lebiere
ACT-R, a well-established cognitive modeling architecture (Anderson, 2007), has been widely used in cognitive psychology and neuroscience to interpret human cognition, memory formation, and learning. However, the programming difficulties involved in designing a model slow down the progress of cognitive modeling research. Inspired by Reitter and Lebiere's (2010) ACT-UP, a subset implementation of ACT-R declarative memory, we introduce a Python implementation, PyACTUp, and expand its functionality to incorporate more important features from ACT-R. The current version of PyACTUp gives modelers great flexibility to define their own methods while retaining a simplified structure that is friendly to novice programmers.
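As a flavor of the underlying computation, here is a sketch of the ACT-R base-level activation equation that PyACTUp-style declarative modules implement, written out directly rather than through the package API.

```python
import math

# Sketch of ACT-R base-level activation: A_i = ln( sum_j (t_now - t_j)^(-d) ),
# where t_j are past presentation times and d is the decay (default 0.5).

def base_level(presentation_times, t_now, d=0.5):
    return math.log(sum((t_now - t) ** -d for t in presentation_times))

# a chunk rehearsed at times 1, 10, and 40, queried at time 50:
print(base_level([1, 10, 40], t_now=50))
```

Recently and frequently rehearsed chunks receive higher activation and are therefore more retrievable, which is the core dynamic the package exposes to modelers.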
Monica Van Til
Lola Erfourth
Tylor Kistler
Townsend and Fific (2004) published an influential short-term memory (STM) study in which they observed individual differences in serial and parallel STM scanning. The authors employed Systems Factorial Technology, a methodology that provides strong diagnostic tests of cognitive architectures, and presented a new method of manipulating probe-to-memory-item processing speed for memory load N=2. Three variables were manipulated in the experiment: the number of processing elements (N=2), the phonemic dissimilarity of a target to a particular memorized item (high, low), and the duration between the memorized set and the target (short, long). In the original study, 10 subjects participated in about 20 sessions each. In the current research, we conducted a conceptual replication of the original study: two hundred subjects participated in one session each, and a novel memory load condition, N=1, was included. The results add converging evidence for testing serial/parallel processing in short-term memory scanning.
Jacolien van Rij
Dr. Niels Taatgen
The universal flexibility of biological systems needs to be reflected in cognitive architectures. In PRIMs, we attempt to achieve this flexibility through a bottom-up approach. Using contextual learning, the random firing of a set of instantiated primitive operators is gradually organized into context-sensitive operator firing sequences (i.e., primordial "skills"). Based on this implementation, preliminary results show that the model simulates averaged single-pattern processing latencies consistent with infants' differential focusing times in three theoretically controversial artificial language studies, namely Saffran, Aslin, and Newport (1996), Marcus, Vijayan, Rao, and Vishton (1999), and Gomez (2002). In ongoing work, we are analyzing (a) whether the model can arrive at primordial "skills" adaptive to the trained tasks, and (b) whether the learned chunks mirror the trained patterns.
Dr. Chris R. Sims
In behavioral economics, 'rational inattention' (C. A. Sims, 2010) has been proposed as a theory of human decision-making subject to information processing limitations. This theory hypothesizes that decision-makers act so as to optimize a trade-off between the utility of their behavior and the information processing effort required to reach a good decision. Shannon information has been proposed as a means of quantifying this information processing cost. However, existing models in the rational inattention framework do not account for the learning dynamics that underlie human decision-making. In order to incorporate the impact of cognitive limitations on learning, we extend the traditional reinforcement learning objective to incorporate a bound on the Shannon information of the learned policy (see also Lerch & Sims, 2019). Using experimental data from a previously studied learning paradigm (Niv et al., 2015), we show that our method can represent differences in participants' performance as resulting in part from utilizing different capacities for storing and processing information.
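As an illustration of the general idea, the following sketch solves a static, one-step version of the information-bounded objective via a Blahut-Arimoto-style fixed point; the model described above additionally handles learning dynamics, which this sketch deliberately omits, and the utilities below are toy values.

```python
import numpy as np

def ra_policy(U, p_s, beta, iters=200):
    """Fixed point for a one-step rational-inattention problem:
    maximize E[U] - (1/beta) * I(S; A).
    U: |S| x |A| utility matrix; p_s: distribution over states."""
    n_s, n_a = U.shape
    p_a = np.full(n_a, 1.0 / n_a)          # marginal over actions
    for _ in range(iters):
        pi = p_a * np.exp(beta * U)        # pi(a|s) ∝ p(a) exp(beta U)
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = p_s @ pi                     # re-estimate action marginal
    return pi

U = np.array([[1.0, 0.0],                  # toy utilities (assumed)
              [0.0, 1.0]])
p_s = np.array([0.5, 0.5])
print(ra_policy(U, p_s, beta=0.5))         # tight bound: soft policy
print(ra_policy(U, p_s, beta=10.0))        # loose bound: near-greedy
```

Small beta makes information expensive and yields a low-information, nearly state-independent policy; large beta recovers near-greedy behavior, which is how different capacities can map onto different performance levels.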
Terry Stewart
Mary C. Olmstead
We present first steps towards a biologically grounded implementation of the Incentive Sensitization Theory of addiction. We present multiple plausible ways of mapping this theory into a computational model, and examine the resulting behaviour to see whether it accords with standard interpretations of the theory. This is the first step in a larger project to create a computationally tractable and biologically motivated model of addiction to help clarify and ground various terms in the theory.
Julien Musolino
Prof. Pernille Hemmer
The sense of agency (SoA) is a fundamental aspect of the human experience. Intentional binding (IB), the subjective compression of the time interval between a voluntary action and its associated outcome, has been proposed as an implicit measure of SoA. Given the fundamental nature of SoA, one would expect the presence of IB in all healthy individuals. To date, empirical investigations of IB have only reported aggregate data averaged across individuals, and may inappropriately use parametric statistics on non-normally distributed data. We compared aggregate vs. individual data in a study (N=35) using a variation on the standard IB paradigm. Aggregate results replicated the expected effects of action binding (F(1, 28) = 4.44, p = 0.044) and outcome binding (F(1, 35) = 49.12, p < 0.001). Crucially, however, inter-individual analyses revealed that more than half of the participants (N=20) had mean binding values for either action or outcome in the direction opposite to that expected, in line with results from involuntary action conditions in the literature. Moreover, reanalysis of a publicly available dataset shows a similar pattern: the authors reported a replication of the standard IB effect at the aggregate level, but our re-analysis at the individual level revealed that 19 out of 20 participants in certain sub-conditions had mean action or outcome binding values in the direction opposite to that expected. These findings indicate that the IB phenomenon may be another classic example of how averaging can be misleading, and they have important implications for future research in this domain.
Ashley McDermott
What is the effect of the level of simulation fidelity on learning and, subsequently, on performance in the target task? We consider an example of a maintenance training system with two levels of fidelity: a high-fidelity (HiFi) simulation that takes essentially as much time as the real-world task, and a low-fidelity (LoFi) system with minimal delays and many actions removed or reduced in fidelity and time. The LoFi simulation initially takes about one quarter of the time, and thus starts out getting about four times more practice trials in a given time period. The time to perform the task modifies the learning curve for each system: the LoFi curve has a lower intercept and a steeper slope. For a small number of practice trials, this makes a significant difference; over longer time periods, the differences between low and high fidelity get smaller, as sketched below. Learners who move from low to high fidelity appear not to be adversely affected. We note factors, such as the subtasks included, that could influence this transfer, and discuss how this approach could be extended.
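The argument can be made concrete with a small sketch using power-law practice curves, T(n) = a·n^(−b), where T(n) is the time for trial n; the parameter values below are purely illustrative, not fitted to the training systems described above.

```python
# Power-law practice curves: time to perform the task on trial n.
def trial_time(n, a, b):
    return a * n ** (-b)

def trials_in_budget(budget, a, b):
    """How many trials fit in a fixed training-time budget (seconds)."""
    n, elapsed = 0, 0.0
    while True:
        t = trial_time(n + 1, a, b)
        if elapsed + t > budget:
            return n
        n, elapsed = n + 1, elapsed + t

hifi = dict(a=20.0, b=0.2)   # slow trials, shallower curve (assumed)
lofi = dict(a=5.0, b=0.4)    # ~1/4 trial time, steeper curve (assumed)
for label, p in [("HiFi", hifi), ("LoFi", lofi)]:
    n = trials_in_budget(600.0, **p)
    print(label, "trials:", n,
          "final trial time:", round(trial_time(n, **p), 2))
# LoFi packs many more trials into the same budget early on; as the
# budget grows, per-trial times converge and the advantage shrinks.
```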
Anjali Krishnan
The structure of organized categories is argued to be hierarchical, suggestive of the taxonomy of the "superordinate-basic-subordinate" categorization schema (Rosch, 1978). In contrast, the similarity of a member to its category may also be indicative of the extent of transitivity that exists within a category. To investigate the roles that hierarchical and similarity relations play in categorization [adapted from Sloman (1998)], we asked 49 participants to evaluate the probability of a conclusion statement based on a given fact. In condition 1, participants were provided only a fact and a conclusion, while in condition 2 participants were also provided with a hierarchical relation (e.g., All pines are wood; Fact: All 'wood' is fibrous; Conclusion: All 'pine' is fibrous). Condition 1 can be solved using hierarchical relations, while condition 2, an inductive reasoning task, can be solved with similarity or hierarchical relations. We used 20 natural and 20 artificial categories validated by Gruenenfelder (1984), with typical and atypical examples in each category. A factorial ANOVA revealed a main effect of condition, F(1,48) = 69.53, p < 0.001, indicating that providing hierarchical relations increased overall agreement between fact and conclusion. We also found the expected main effect of typicality, F(1,48) = 45.39, p < 0.001. An interaction between condition and typicality was also detected, F(1,48) = 4.79, p = 0.034; however, a metric multidimensional scaling of average ratings per category for conditions 1 and 2 showed that agreement between the fact and conclusion might rely on the type of category rather than on typicality.
Julia Taylor Rayz
In this paper, we propose a methodology for developing a joke recommendation system that analyzes the jokes' text. This exploratory study focuses mainly on the General Theory of Verbal Humor and implements the knowledge resources defined by it to annotate the jokes. These annotations capture the characteristics of the jokes and hence are used to determine how alike the jokes are. We use Lin's similarity metric and Word2vec to calculate the similarity between different jokes. The jokes are then clustered hierarchically based on their similarity values for the recommendation. Finally, for multiple users, we compare our joke recommendations to those obtained by the Eigentaste algorithm, which does not consider the content of the joke in its recommendations.
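The following sketch illustrates the general similarity-then-clustering pipeline on toy data; for brevity it substitutes cosine similarity over averaged Word2vec vectors for the full Lin-plus-Word2vec computation described above, and the "jokes" are invented word lists standing in for annotated jokes.

```python
import numpy as np
from gensim.models import Word2Vec
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy "jokes", each reduced to a bag of annotation/content words.
jokes = [["doctor", "patient", "pun"],
         ["dog", "cat", "wordplay"],
         ["doctor", "nurse", "irony"],
         ["cat", "mouse", "pun"]]

model = Word2Vec(jokes, vector_size=32, min_count=1, seed=1, epochs=50)

def joke_vector(words):
    # One vector per joke: the average of its word vectors.
    return np.mean([model.wv[w] for w in words], axis=0)

vecs = np.array([joke_vector(j) for j in jokes])
unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
dist = 1.0 - unit @ unit.T            # cosine distance between jokes
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # hierarchical clusters
```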
Farnaz Tehranchi
Frank E Ritter
ACT-R has been used to study human-computer interaction; however, until the creation of JSegMan, ACT-R was unable to interact with unmodified interfaces not written in Common Lisp. Working with unmodified interfaces reveals deficiencies in ACT-R's motor module. Currently, ACT-R is capable of queuing rapid keystrokes; however, many programs require multiple keys to be pressed at once, which ACT-R cannot do. This prevents ACT-R from interacting with text editors such as Vim and Emacs. Similarly, ACT-R cannot model people playing many modern video games that require pressing multiple WASD or arrow keys at once while moving the mouse. This paper presents a model that demonstrates this deficiency while playing Desert Bus. Furthermore, new systems that allow parallel motor actions to be learned and requested are proposed, and the implications of running a model over many hours are explored.
Babak Esfandiari
Robert L. West
Our research presents a review of the StarCraft II ecosystem and an analysis of the universal characteristics of the replay data generated by thousands of humans and robots in mixed competitions. In this paper we present the obvious and subtle differences between human and machine tournament play, and demonstrate that we can identify and leverage various aspects of game play to distinguish human from machine.
Ms. Paulina Friemann
Terry Stewart
Marco Ragni
In Dynamic Field Theory (DFT), cognition is modeled as the interaction of components of a complex dynamical system. The connection to the brain is established by smaller parts of this system, neural fields, which mimic the behavior of neuron populations. We reimplemented a spatial reasoning model from DFT in Python using the Nengo framework in order to provide a more flexible implementation and to facilitate future research on a more general comparison between DFT and the Neural Engineering Framework (NEF). Our results show that it is possible to recreate the DFT spatial reasoning model using Nengo, since we were able to duplicate both the behavior of single neural fields and that of the whole model. However, there are statistical differences in performance between the two implementations, and future work is needed to determine the cause of these differences.
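As a rough illustration (not the reimplemented model itself), a single neural field can be approximated in Nengo as an ensemble representing activation over a few discretized locations, with self-excitation and global inhibition applied through a recurrent connection; the field size, gains, and time constants below are all assumptions, and the dynamics are a qualitative simplification of the DFT field equation.

```python
import numpy as np
import nengo

N_LOC = 5  # number of discretized field locations (assumed)

def stim_fn(t):
    u = np.zeros(N_LOC)
    if t < 0.5:
        u[2] = 1.0   # transient input peak at location 2
    return u

def field_dynamics(u):
    # Local self-excitation minus global inhibition, DFT-style.
    return 1.2 * u - 0.4 * np.sum(u)

model = nengo.Network(seed=0)
with model:
    stim = nengo.Node(stim_fn)
    field = nengo.Ensemble(n_neurons=500, dimensions=N_LOC)
    nengo.Connection(stim, field)
    nengo.Connection(field, field, function=field_dynamics, synapse=0.1)
    probe = nengo.Probe(field, synapse=0.03)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
# A peak at location 2 that persists after the stimulus ends indicates
# the self-sustained (working-memory) regime of a dynamic field.
```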
Prof. Junya Morita
Both nature and nurture contribute to language development. In the case of phoneme segmentation, children have the natural ability to recognize continuous sound in various units, but as they grow, they selectively learn to recognize it only in the units used in their mother tongue. This developmental process is supported by an ability called phonological awareness, which allows children to become intentionally aware of units of phonology. It is known that erroneous pronunciations appear while phonological awareness is forming. In this research, we aim to examine the factors that induce and reduce such errors. To do so, we modeled phonological awareness using the cognitive architecture ACT-R and performed simulations that manipulated ACT-R parameters corresponding to both nature and nurture factors. As a result, we confirmed that errors due to a lack of phonological awareness can be modeled with the innate memory retrieval mechanism. We also observed that such errors were reduced when learning factors were added to the model. However, we could not simulate the learning process itself. In future work, we will study an interaction task that enables learning to reduce phonological errors and contributes to the acquisition of phonological awareness.
Prof. Junya Morita
Takatsugu Hirayama
Kenji Mase
Kazunori Yamada
In this study, we developed a photo slideshow system to support reminiscence activity. Compared to a conventional photo slideshow, the developed system has two features: it incorporates a memory model based on the cognitive architecture ACT-R, and it modulates the model's parameters based on the user's feedback. We assume that the first feature enables varied patterns of photo presentation by the system, and that the second feature makes the system adaptive to the user's responses. More importantly, such presentation patterns and feedback can be theoretically designed using the cognitive architecture. In this paper, a preliminary evaluation of the developed system is presented. Through an analysis of subjective evaluations of the system and changes in mental states, we clarified the effect of model-based reminiscence. In addition, heart rate variability (HRV) analysis was conducted to clarify how the model's behavior changed with feedback.
Fiona Kumfor
Olivier Piguet
Frontotemporal dementia (FTD) is an umbrella term for younger-onset dementias with clinical presentations arising from progressive neurodegeneration of the frontal and temporal brain regions. Patients diagnosed with FTD show disturbances of emotion processing due to pathological changes affecting these networks. FTD therefore provides a useful framework for understanding the underlying mechanisms of emotion processing. Furthermore, attempts to establish a clinical diagnosis early in the course of dementia often result in inaccurate diagnoses due to overlapping syndromes. The practical aim of this study is to identify which tests are more specific and sensitive for differentiating FTD from other dementias such as Alzheimer's disease (AD), considering that some AD patients, although having a primarily cognitive problem (i.e., episodic memory), often present emotion processing problems as well. This project applied novel data-driven analysis methods to model emotion processing and contributing factors such as general cognition. Preliminary analysis shows that combining emotion and cognition tests can differentiate bvFTD from AD. Further analysis investigated the distinct patterns of emotion-cognition interactions in FTD and AD. These results give a better understanding of how emotion processing deficits occur in dementia.
Elena Gorina
Mario Martinez-Saito
According to previous studies, people generally do not perform exact Bayesian inference in causal structure learning. However, there are conditions under which a reasonable strategy might be to compute exact posterior probabilities: evidence arrives one piece at a time, learners do not have access to previous information, and the problem at hand has a sufficiently small hypothesis space. We conducted a series of experiments to test whether these conditions would result in a preference for exact inference in humans. A non-deterministic causal system of four binary elements was employed, and participants were sequentially presented with the system's states (evidence). As each new state was presented, participants were asked to indicate the most probable scheme of causal connections (hypothesis) in the system given this state and those demonstrated previously. We used two sizes of hypothesis space: two and six schemes of connections. We compared participants' responses to a limited-memory (LM) Bayesian model (posterior re-estimation based on several recent states) and the ideal Bayesian model. Additionally, participants' responses were compared to an LM "win-stay, lose-sample" (WSLS) model. When the hypothesis space was small, participants preferred the probability-based strategy (described by the ideal Bayesian model, thus matching exact inference), as it was more efficient in terms of memory to retain posterior probabilities rather than evidence. When the hypothesis space size increased, participants often resorted to the evidence-based strategy (described by the LM Bayesian model), as it became difficult to memorize exact probabilities. Additionally, we found that most participants revealed strong WSLS tendencies across all conditions of our experiments.
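The contrast between the two Bayesian strategies can be sketched in a few lines; the likelihoods below are toy values, not the experimental causal system.

```python
import numpy as np

def ideal_bayes(likelihoods, prior):
    """Ideal learner: carries the posterior forward across all evidence.
    likelihoods: T x H matrix of P(state_t | hypothesis_h)."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods:
        post *= lik                 # sequential Bayes update
        post /= post.sum()
    return post

def lm_bayes(likelihoods, prior, k=3):
    """Limited-memory learner: re-estimates the posterior from only
    the k most recent states."""
    post = np.array(prior, dtype=float)
    for lik in likelihoods[-k:]:
        post *= lik
        post /= post.sum()
    return post

# Two-hypothesis example: each row is P(observed state | h1), P(... | h2).
evidence = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.6], [0.9, 0.2]])
prior = [0.5, 0.5]
print(ideal_bayes(evidence, prior), lm_bayes(evidence, prior, k=2))
```

The ideal learner needs to store only the current posterior; the LM learner instead stores recent evidence, which is the trade-off the abstract argues drives the strategy switch as the hypothesis space grows.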
Yamani Yusuke
A vigilance task requires observers to monitor for rare signals over long periods of time. The vigilance decrement is a decrease in detection rate that occurs with time on task, sometimes beginning within 5 minutes. Signal detection analyses have ascribed the decrement to changes of response bias or declines of perceptual sensitivity. However, recent work has suggested that sensitivity losses in vigilance are spurious, and that the decrement instead results from attentional lapses. Analysis of psychometric curves provides a way of isolating changes in bias, sensitivity, and lapse rate. Because signal events are rare and trials are partitioned into brief blocks, though, a standard vigilance task does not provide enough data to fit psychometric curves for individual observers. To circumvent this problem, we used hierarchical Bayesian modeling to combine data from a large number of individuals. Participants (N = 99) performed a 20-min vigilance task that required them to judge whether the gap between two probe dots on each trial exceeded a criterion value. Signal detectability was manipulated via the method of constant stimuli. Hierarchical psychometric curves were fit to data from the first and last 4-minute blocks of trials. Model fits revealed three changes between blocks: a conservative shift of response bias, a decrease in perceptual sensitivity, and an increase in response lapse rate. Results confirm that sensitivity losses are possible in a sustained attention task, but indicate that mental lapses can also contribute to the vigilance decrement.
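A minimal sketch of the kind of lapse-augmented psychometric function fit in such analyses; the parameterization (cumulative-normal core, fixed guess rate) and all parameter values are assumptions for illustration, and the hierarchical priors are not shown.

```python
import numpy as np
from scipy.stats import norm

def psychometric(x, mu, sigma, lapse, guess=0.5):
    """P(respond 'signal' | stimulus x) with a lapse term:
    guess + (1 - guess - lapse) * Phi((x - mu) / sigma).
    mu shifts with response bias, sigma tracks (inverse) sensitivity,
    and lapse is the rate of stimulus-independent errors."""
    return guess + (1.0 - guess - lapse) * norm.cdf((x - mu) / sigma)

x = np.linspace(0.0, 2.0, 5)   # gap sizes (arbitrary units)
early = psychometric(x, mu=1.0, sigma=0.3, lapse=0.02)
late = psychometric(x, mu=1.1, sigma=0.4, lapse=0.08)
# The "late" curve has a conservative bias shift (larger mu), lower
# sensitivity (larger sigma), and a higher lapse rate, mirroring the
# three changes reported between the first and last blocks.
print(np.round(early, 3), np.round(late, 3))
```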
Melike Baykal-Gursoy
Prof. Pernille Hemmer
Terrorist attacks carried out by individuals have increased significantly over the last 20 years. This type of lone-actor (LA) terrorism stands as one of the greatest security threats of our time. While research on LA behavior and characteristics has produced valuable information on demographics and warning signs, the relationships among these characteristics are yet to be addressed. Moreover, the means of radicalization and attacking have changed over the decades. This study conducts an a posteriori analysis of the temporal changes in LA terrorism and of behavioral associations in LAs. We first identify 25 binary behavioral characteristics of LAs and analyze 190 LAs. Next, we classify LAs according to behavioral clusters obtained from the data. Within each class, statistically significant associations and temporal relations are extracted using the Apriori algorithm. The results indicate that while pre-9/11 LAs were mostly radicalized by the people in their environment, post-9/11 LAs are more diverse. Furthermore, association chains for different LA types present unique characteristic pathways to violence and post-attack behavior.
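For illustration, association rules over binary behavioral indicators can be mined with an off-the-shelf Apriori implementation; the indicator names and data below are invented, not the study's dataset, and the thresholds are arbitrary.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy one-hot table: rows are cases, columns are binary behavioral
# indicators (names are hypothetical placeholders).
data = pd.DataFrame(
    [[1, 1, 0, 1], [1, 0, 1, 1], [1, 1, 0, 0], [0, 1, 1, 1]],
    columns=["grievance", "leakage", "isolation", "online_activity"],
).astype(bool)

# Frequent itemsets above a support threshold, then rules above a
# confidence threshold.
itemsets = apriori(data, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence",
                          min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```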
Karina Rodriguez
Gustavo Gasaneo
Alejandra Mendivelzua
Laura García Blanco
Manuela Sanchez
Recent estimates show that 10 percent of the child population of Argentina suffers from Specific Learning Disorder (SLD). Attention disorders and SLD generally present large comorbidity, making it difficult to distinguish them clinically in a quantitative way. In Argentina, the diagnosis of these disorders is performed by neurologists, based on neuropsychological evaluations performed by psychopedagogues. The aim of our studies is to supplement or enhance the existing tools through the inclusion of physiological measures. Eye movements can serve as a direct source of information about what is happening in the brain (Luna, Velanova, & Geier, 2008). Tracking children's eyes while they read yields a great amount of data for studying how reading is processed (Kliegl, Nuthmann, & Engbert, 2006). Statistical analysis of these data yields the number of fixations, fixation durations, and saccade amplitudes, among other metrics (Duchowski, 2002); more sophisticated quantities, such as reading processing speed, can also be derived. In this work, we present the results of studies related to reading. We registered the eye movements of neurotypical and dyslexic children while they read, and processed the data mathematically with the objective of introducing variables that clearly differentiate the two groups.
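A minimal sketch (assumed velocity threshold, toy gaze data) of how such metrics can be computed from raw gaze samples using simple velocity-based event detection:

```python
import numpy as np

def detect_fixations(x, y, t, vel_threshold=30.0):
    """Group samples with gaze velocity below the threshold (deg/s)
    into fixations; returns (start, end) sample-index pairs (I-VT)."""
    vx, vy = np.gradient(x, t), np.gradient(y, t)
    is_fix = np.hypot(vx, vy) < vel_threshold
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            fixations.append((start, i)); start = None
    if start is not None:
        fixations.append((start, len(t)))
    return fixations

def reading_metrics(x, y, t, fixations):
    """Fixation count, mean fixation duration, mean saccade amplitude."""
    durations = [t[e - 1] - t[s] for s, e in fixations]
    centers = [(x[s:e].mean(), y[s:e].mean()) for s, e in fixations]
    amps = [np.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(centers, centers[1:])]
    return len(fixations), np.mean(durations), np.mean(amps)

# Toy demo: two fixations separated by one rightward saccade.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
x = np.where(t < 0.5, 1.0, 5.0) + rng.normal(0.0, 0.01, t.size)
y = np.zeros_like(t)
print(reading_metrics(x, y, t, detect_fixations(x, y, t)))
```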