Mr. Johannes Mannhardt
Dr. Leandra Bucher
Mr. Daniel Brand
Spatial relational descriptions in everyday life sometimes need to be revised in the light of new information. While there are cognitive models for reasoning about spatial descriptions, there are currently no models of belief revision for the spatial domain. This paper approaches this need by (i) revisiting existing models such as the verbal model (Krumnack et al., 2010) and PRISM (Ragni and Knauff, 2013) and adapting them to deal with belief revision tasks, (ii) evaluating these models by testing their predictive accuracy for the individual reasoner on a previously conducted experiment by Bucher et al. (2013), and (iii) providing baseline models and machine learning models, user-based collaborative filtering and content-based filtering methods, and an analysis at the individual level. Implications for predicting the individual reasoner and for identifying strategies and shared reasoning patterns are discussed.
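The user-based collaborative filtering mentioned above predicts an individual reasoner's response from the responses of similar reasoners. A minimal sketch of how such a predictor might work (the response encoding, toy data, and cosine-similarity measure are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def predict_response(responses, target, item, k=2):
    """Predict responses[target, item] from the k most similar reasoners.

    responses: reasoners x tasks matrix of encoded responses (NaN = unobserved).
    """
    mask = ~np.isnan(responses[:, item])
    mask[target] = False  # exclude the target reasoner themself
    candidates = np.where(mask)[0]
    sims = []
    for u in candidates:
        # Cosine similarity over the tasks both reasoners answered.
        both = ~np.isnan(responses[target]) & ~np.isnan(responses[u])
        a, b = responses[target, both], responses[u, both]
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    top = np.argsort(sims)[-k:]           # k nearest neighbours
    w = np.array(sims)[top]
    # Similarity-weighted average of the neighbours' responses to this task.
    return w @ responses[candidates[top], item] / (w.sum() + 1e-12)

# Toy data: 4 reasoners, 4 belief-revision tasks, binary-coded responses.
responses = np.array([[1, 0, 1, np.nan],
                      [1, 0, 1, 1],
                      [0, 1, 0, 0],
                      [1, 0, 0, 1]], dtype=float)
pred = predict_response(responses, target=0, item=3)
```

Content-based filtering would instead predict from features of the task itself; the neighbourhood size `k` and the similarity measure are the main free choices in this family of methods.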
Due to information processing constraints and cognitive limitations, humans necessarily form limited representations of complex visual stimuli when making utility-based decisions. However, it remains unclear what mechanisms humans use to generate representations of visual stimuli that allow them to make predictions of utility. In this paper, we develop a model that seeks to account for the formation of representations in utility-based economic decision making. This model takes the form of a β-variational autoencoder (β-VAE) trained with a novel utility-based learning objective. The proposed model forms representations of visual stimuli that can be used to make utility predictions and that are constrained in their informational complexity. This representation modelling approach shares common features with related methods, but is unique in its connection to utility in economic decision making. We show through simulation that this approach can account for several phenomena in human economic decision making and learning tasks, including risk-averse behaviour and distortion in the calculation of expected utility.
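The training objective described above can be sketched as a standard β-VAE loss augmented with a utility-prediction term. The weighting `lam`, the toy values, and the squared-error form are illustrative assumptions rather than the paper's exact objective:

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """KL divergence between N(mu, diag(exp(log_var))) and N(0, I)."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def utility_beta_vae_loss(x, x_recon, mu, log_var, u_true, u_pred,
                          beta=4.0, lam=1.0):
    recon = np.sum((x - x_recon) ** 2)    # reconstruction error
    kl = kl_diag_gaussian(mu, log_var)    # informational-complexity constraint
    utility = (u_true - u_pred) ** 2      # utility-prediction error
    return recon + beta * kl + lam * utility

# Toy example: a 4-pixel "stimulus" encoded into a 2-dimensional latent code.
x = np.array([0.2, 0.8, 0.5, 0.1])
x_recon = np.array([0.25, 0.7, 0.5, 0.15])
mu = np.array([0.1, -0.2])
log_var = np.array([-0.1, 0.05])
loss = utility_beta_vae_loss(x, x_recon, mu, log_var, u_true=1.0, u_pred=0.8)
```

Raising `beta` tightens the complexity constraint on the representation, which is the lever that β-VAEs use to trade reconstruction fidelity against compactness of the latent code.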
Bespoke cognitive models of mental spatial transformation, like those used in mental rotation tasks, can achieve a very close fit to human data. However, these models usually lack grounding in a common spatial theory. In turn, this makes it difficult to assess their validity and impedes research insights that go beyond task-specific limitations. To alleviate these problems, we introduce a spatial module for the cognitive architecture ACT-R that serves as a framework offering unified mechanisms for mental spatial transformation. This module combines symbolic semantic and spatial information processing for three-dimensional objects, while suggesting constraints on this processing to ensure high theoretical validity and cognitive plausibility. A mental rotation model was created that makes use of this module, avoiding custom-made mechanisms in favor of a generalizable approach. The model reproduces the results of a mental rotation experiment well, including the effects of rotation disparity and of improvement over time on reaction times. On this basis, the spatial module might serve as a stepping stone towards unified, application-oriented research into mental spatial transformation.
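The rotation-disparity effect reproduced by the model is, classically, a roughly linear increase of reaction time with angular disparity (Shepard and Metzler's finding). A toy illustration with assumed intercept and slope parameters, not the module's fitted values:

```python
def predicted_rt(disparity_deg, base_ms=1000.0, rate_ms_per_deg=20.0):
    """Reaction time as a base encoding/response time plus a component
    proportional to the angular disparity to be mentally rotated."""
    return base_ms + rate_ms_per_deg * disparity_deg

# RT grows linearly across disparities of 0, 60, 120, 180 degrees.
rts = [predicted_rt(a) for a in (0, 60, 120, 180)]
```

In an ACT-R account, the linear slope would emerge from the number of incremental transformation steps the module performs, rather than being fitted directly as above.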
In this paper I present a model of aperture passage judgment (judging whether an agent can walk through an aperture, rotating the shoulders as needed) and performance (initiating and terminating shoulder rotation while walking through an aperture) in ACT-R 3D. The model is adapted from Somers (2016) and represents a first attempt to unify findings across multiple experiments with a single model. The cognitive model is embodied in a robotics simulator, with motor movement controlled directly by the cognitive model. The model exhibits an improved fit compared to Somers (2016) on the same experiment, and also exhibits a reasonable fit in additional, exaggerated conditions.
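Aperture passage judgment is commonly treated in the passability literature as a body-scaled threshold: walkers begin rotating their shoulders when the aperture-to-shoulder-width ratio falls below a critical value (often reported near 1.3). The ACT-R 3D model's actual rule is not given here, so the following is only an illustrative sketch with an assumed threshold:

```python
CRITICAL_RATIO = 1.3  # assumed threshold from the literature, not a fitted value

def needs_rotation(aperture_width_m, shoulder_width_m, ratio=CRITICAL_RATIO):
    """True if the agent should rotate its shoulders to pass the aperture."""
    return aperture_width_m / shoulder_width_m < ratio

needs_rotation(0.55, 0.45)  # → True: doorway is narrow relative to shoulders
needs_rotation(0.80, 0.45)  # → False: ample clearance, walk straight through
```

Scaling the judgment by shoulder width rather than absolute aperture size is what lets a single rule cover agents (or conditions) with different body dimensions.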