Poster meetup session
Existing process models of function learning mostly assume that function learning is a gradual and continuous process (polynomial rule model: Koh and Meyer (1991); EXtrapolation Association Model (EXAM): DeLosh et al. (1997); Population Of Linear Experts (POLE): Kalish et al. (2004)). In contrast, Brehmer (1974) proposed a two-stage hypothesis-testing theory of function learning. The first stage involves discovering a suitable rule, and the second is concerned with learning the parameters of that rule. Although this theory has not been quantitatively formalized, it differs from the other theories by positing a discontinuity when the learner transitions from discovering a rule to applying it. In this extended abstract, we present preliminary evidence of such discontinuities. In a replication of McDaniel et al. (2014), we identified a subset of participants who demonstrated abrupt decreases in error over the course of the experiment. Our computational simulations of existing process models further confirmed that gradual update mechanisms are insufficient to account for these observed discontinuities.
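To make the notion of an abrupt decrease concrete, the sketch below contrasts a gradual (linear) account of a participant's per-trial error with a single-changepoint (step) account; both fits have two free parameters, so a better step fit points to a discontinuity. The function name and the exact criterion are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

def abruptness_score(errors):
    """Compare a step-change fit against a gradual (linear) fit for a
    per-trial error series (hypothetical array of absolute errors,
    at least a handful of trials long)."""
    n = len(errors)
    t = np.arange(n)
    # Gradual account: best linear fit (sum of squared residuals).
    slope, intercept = np.polyfit(t, errors, 1)
    sse_linear = np.sum((errors - (slope * t + intercept)) ** 2)
    # Discontinuous account: best single changepoint with two plateaus.
    sse_step = min(
        np.sum((errors[:k] - errors[:k].mean()) ** 2)
        + np.sum((errors[k:] - errors[k:].mean()) ** 2)
        for k in range(2, n - 1)
    )
    return sse_linear - sse_step  # > 0 favors the abrupt (step) account
```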
This study aims to model individual differences using GOMS. In an evaluation of a competence assessment task in natural language, results revealed limitations of a previous GOMS model that was used to design the task (Ismail & Cheng, 2021). The task, Chunk Assessment by Stimulus Matching (CASM), exploits measurements of chunk signals to assess competence in the English language. It was tested with 34 speakers of English as a second language, and the results were compared against the initial GOMS models. The models' predictions were partially supported, showing substantial performance differences between levels of expertise. Contrary to expectations, major differences were also found among participants at the same level of expertise. A refinement of the models was built to coherently capture differences both between and within levels of competence.
Recent findings of Markov violations challenge Markov random walk processes as models of decision making. Quantum walk processes, on the other hand, explain these Markov violations in a natural way, but they have so far been applied only to binary decision making. In this work, we propose a general framework for extending quantum walk processes to multi-alternative decision making. The multi-alternative quantum walk model operates in a direct sum space of the alternatives, with a Hamiltonian built for each pair of alternatives to model context effects. Order effects arise naturally from the non-commutativity of the Hamiltonians. Future work built on this framework can connect the models' parameters to expected utilities.
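The link between non-commutativity and order effects can be illustrated in a minimal two-state sketch (the abstract's model operates in a larger direct sum space; all matrices below are illustrative numbers, not the proposed model's parameters):

```python
import numpy as np
from scipy.linalg import expm

# Two Hermitian Hamiltonians for two processing contexts (toy values).
H_a = np.array([[1.0, 0.5], [0.5, -1.0]])
H_b = np.array([[0.2, 1.0], [1.0, 0.3]])

U_a = expm(-1j * H_a)   # unitary evolution under context A
U_b = expm(-1j * H_b)   # unitary evolution under context B

psi = np.array([1.0, 0.0], dtype=complex)  # initial belief state
M = np.diag([1.0, 0.0])                    # projector for one response

# The response probability depends on the order of the two evolutions
# whenever H_a and H_b do not commute.
p_ab = np.linalg.norm(M @ U_b @ U_a @ psi) ** 2
p_ba = np.linalg.norm(M @ U_a @ U_b @ psi) ** 2
print(p_ab, p_ba)  # the two probabilities differ: an order effect
```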
How difficult is it to simulate an algorithm in one's mind and correctly deduce its outcome? In this paper, we present a predictive modeling task in the domain of algorithmic thinking in a railway environment. We present metrics based either on algorithmic complexity (e.g., lines of code) or on the load an algorithm simulation places on cognitive resources (e.g., context switching). We implement the metrics within a benchmark and evaluate their predictive performance on the individual level by assigning a complexity threshold to each individual. We compare these results to a standard statistical correlation analysis and suggest a different perspective for assessing the predictive power of complexity metrics as models.
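As a minimal sketch of the thresholding idea, one could fit each individual a single complexity threshold and predict a correct simulation whenever an algorithm's metric stays below it; the names and the fitting criterion are illustrative assumptions, not the paper's benchmark implementation.

```python
def predict_correct(metric_value, threshold):
    """Hypothetical per-individual predictor: an algorithm whose
    complexity metric (e.g. lines of code) is at or below the
    individual's threshold is predicted to be simulated correctly."""
    return metric_value <= threshold

def fit_threshold(metrics, outcomes):
    """Choose the threshold maximizing prediction accuracy on one
    individual's observed (metric, correct/incorrect) pairs."""
    def accuracy(th):
        return sum(predict_correct(m, th) == o
                   for m, o in zip(metrics, outcomes))
    return max(sorted(set(metrics)), key=accuracy)
```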
The human capability to form hypotheses from observations has long been a focus of research in psychology and cognitive science. An interesting case is forming hypotheses about the underlying mechanisms of technical systems. This process is called reverse engineering, i.e., identifying how a system works. Research so far has focused on identifying general principles of the underlying reasoning process and has led to the development of at least three general approaches. This paper investigates, for the first time, the predictive power of existing models for each individual reasoner, i.e., can the individual reasoner reverse-engineer Boolean concepts from observations? Towards this goal, we (i) define a modeling task on the individual level, (ii) adapt or re-implement existing models of Boolean concept learning to make predictions on the individual level, (iii) identify baseline models and additional strategies, and (iv) evaluate the models. By focusing on the individual level, we uncover limitations of the current state of the art and discuss possible solutions.
We present a new way to do task analysis that includes learning. The approach starts with a hierarchical task analysis of a troubleshooting strategy and applies a power law of learning to modify operator times, mimicking the ACT-R learning equations. We apply this approach to finding faults in the Ben Franklin Radar (BFR) system, a 35-component system designed to study troubleshooting and learning. In this task, faults are introduced into the BFR, and participants are responsible for finding and fixing them. Previous models in Soar took up to 6-9 months of graduate-student time to create. This model was created more quickly and sits between GOMS and a full cognitive architecture-based model. The predictions will be compared to aggregate and individual data (N = 111), and lessons will be reported.
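A minimal sketch of the core mechanism, assuming each operator time from the task analysis is discounted by a power law of practice; the exponent is a placeholder, not the paper's calibrated value.

```python
def task_time(base_time, n, alpha=0.4):
    """Power law of practice: execution time on the n-th repetition
    of a task-analysis operator. `base_time` is the first-trial time
    from the hierarchical task analysis; alpha is a learning-rate
    parameter (0.4 is an illustrative placeholder)."""
    return base_time * n ** (-alpha)

# Example: an operator that takes 10 s on trial 1
print([round(task_time(10.0, n), 2) for n in range(1, 6)])
# -> [10.0, 7.58, 6.44, 5.74, 5.25]
```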
In the cyber world, deception through honeypots has become prominent in response to modern cyberattacks. Prior cybersecurity research has investigated the effect of probing action costs on adversarial decisions in a deception game. However, little is known about the cognitive mechanisms through which probing action costs influence adversarial decisions. The main objective of this research is to examine how an instance-based learning (IBL) model incorporating recency, frequency, and cognitive noise can predict adversarial decisions under different probing action costs. The experimental study had three probing-cost conditions in the deception game: increasing cost (N = 40), no cost (N = 40), and constant cost (N = 40). Across the three conditions, the cost of probing the honeypot webserver was varied, while the cost of probing the regular webserver was kept the same. The results revealed that the cost of probing had no main effect on probe and attack actions, but that there was a significant interaction between cost condition and regular-webserver probe actions over trials. The human decisions obtained in this experiment were used to calibrate an IBL model; as a baseline, an IBL model with ACT-R default parameters was built. Compared to the baseline, the IBL model with calibrated parameters explained adversary decisions more precisely. The calibrated model also showed higher cognitive noise in the cost-associated conditions than in the no-cost condition. We highlight the main implications of this research for the community.
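A minimal sketch of the IBL ingredients named above, assuming ACT-R style base-level activation (recency and frequency), logistic activation noise, and Boltzmann blending over stored instances; d = 0.5 and s = 0.25 are the ACT-R defaults of the baseline model, and all names are illustrative.

```python
import math, random

def activation(timestamps, t_now, d=0.5, s=0.25):
    """Activation of one instance: base-level term from the recency
    and frequency of its past occurrences, plus logistic noise."""
    base = math.log(sum((t_now - t) ** (-d) for t in timestamps))
    u = random.random() or 1e-9          # avoid log(0) at the boundary
    noise = s * math.log((1 - u) / u)    # logistic activation noise
    return base + noise

def blended_decision(instances, t_now):
    """Blend instance outcomes by Boltzmann retrieval probabilities.
    `instances` is a list of (outcome_value, timestamps) pairs."""
    tau = 0.25 * math.sqrt(2)            # temperature derived from s
    acts = [activation(ts, t_now) for _, ts in instances]
    ws = [math.exp(a / tau) for a in acts]
    z = sum(ws)
    return sum((w / z) * v for w, (v, _) in zip(ws, instances))
```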
How humans reason about spatial beliefs has been investigated for almost a century. However, how humans update their spatial beliefs, and how this can be explained by cognitive models, has not yet been systematically analyzed. This paper explores belief revision by (i) establishing and revisiting theories of belief revision, (ii) instantiating those theories as predictive cognitive models and evaluating them on a benchmark of four different data sets, and (iii) providing an ensemble of all belief revision theories tailored to the individual and comparing its performance to baseline models and an upper modeling bound from the area of machine learning. This allows for analysis on the individual level as well as an investigation of which task characteristics favor the application of a specific belief revision strategy.
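The individually tailored ensemble can be pictured as selecting, for each reasoner, whichever belief revision model best predicted that reasoner's previous responses; this sketch is an illustrative selection rule, not the paper's exact procedure.

```python
def individual_ensemble(models, history):
    """Pick, for one reasoner, the belief-revision model that best
    predicted their past responses, and use it for the next task.
    `models` maps names to predict(task) functions; `history` is a
    list of (task, observed_response) pairs. Names are hypothetical."""
    def score(model):
        return sum(model(task) == response for task, response in history)
    return max(models.values(), key=score)
```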
Deep neural networks (DNNs) are increasingly used as computational models of human vision and higher-level cognition. When DNNs are trained to recognize objects in images, they develop a similarity space, measured by the distance between image pairs in the DNNs' nodes. This similarity space can be made more human-like by pruning redundant nodes, which suggests that DNNs need only a subset of their nodes to model human similarity judgments. Because this pruning method requires supervision by human similarity judgments, which are costly to collect, we investigate whether it is possible to prune DNNs to improve the prediction of human similarity judgments without human data. It has been shown that, after training, DNNs contain many nodes that are not activated (zero) for a majority of images. We hypothesize that because these nodes carry less information they can be pruned. To quantify the effect of pruning, we used the Pearson correlation between two representational similarity matrices (RSMs) as a measure of fit: 1) the RSM from the pruned or un-pruned network, and 2) the RSM from human similarity judgments for the same images. Our results showed that: 1) nodes with mostly zero activations are prevalent but contribute minimally to the DNN's own similarity space, and 2) removing a majority of these nodes does not hurt, and sometimes even improves, the prediction of human similarity judgments. We suggest that these nodes should be considered as a separate class when constructing encoding or decoding models of human cognition.
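A minimal sketch of this evaluation pipeline, assuming a feature matrix of node activations (images × nodes); the zero-fraction cutoff is a placeholder, not the paper's criterion.

```python
import numpy as np
from scipy.stats import pearsonr

def rsm(features):
    """Representational similarity matrix: pairwise correlations
    between the feature vectors (rows) of a set of images."""
    return np.corrcoef(features)

def prune_mostly_zero(features, zero_frac=0.9):
    """Drop nodes (columns) that are zero for at least `zero_frac`
    of the images (illustrative cutoff)."""
    keep = (features == 0).mean(axis=0) < zero_frac
    return features[:, keep]

def fit_to_human(features, human_rsm):
    """Pearson correlation between the upper triangles of the model
    RSM and the human-judgment RSM for the same images."""
    iu = np.triu_indices(len(human_rsm), k=1)
    return pearsonr(rsm(features)[iu], human_rsm[iu])[0]
```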