Posters: Perception, Cognition, Memory, Neuroscience, Language
Prof. Han van der Maas
Dr. Michael D. Nunez
Human nature comprises multilevel complex systems, and we hypothesize that these systems undergo critical changes through cascading transitions. For example, individuals who become extremists are often part of a massive societal shift, such as polarization. To model these complex systems, we aim to develop a general mathematical model of cascading transitions. For this purpose, two simplified cases will be tested: multifigure multistable perception and logical paradoxes. Our work builds on previous models and experimental studies of single multistable figure perception and binocular rivalry. We hypothesize that different cases of multifigure multistable perception and logical paradoxes can be represented as unique instances of the general model of cascading transitions. We will examine fundamental phenomena, create and test new predictions, and employ innovative experimental designs and recently developed psychophysiological measurement methods. In addition, we will apply eye-tracking and EEG techniques to novel situations. We will fit cascading transition models to psychophysiological data to advance our understanding of these models. Furthermore, we will expand this newly developed theory to include logical reasoning and multimodal perception. The expansion of a quantitative theory of cascading transitions will offer tangible societal impact by improving our understanding of psycho-social systems. In conclusion, the core objective of this study is to examine whether the cascading transition model can serve as a thorough explanation for both multifigure multistable perception and logical paradoxes.
Continuous Performance Tasks (CPTs) are widely used for assessing cognitive function in psychological, psychiatric, and neurological disorders. The present study seeks to establish the construct and predictive validity of a commonly used CPT, the Dot Pattern Expectancy Task (DPX), by showing how neural measures relate to specific patterns of behavioural performance. To achieve this, we first fit generative models to parse individual biases and parameters that characterise the evidence accumulation process at a single-trial level. Second, we investigate whether electroencephalographic (EEG) activity recorded during the same task tracks individual differences in the cognitive modelling parameters. Results indicate that evidence accumulation models can, in principle, separate preparatory and corrective mechanisms in the DPX. In addition, different spatiotemporal patterns of evoked activity correlated with different model parameters, allowing a finer-grained, theory-driven perspective on the cognitive and neural processes underpinning variability in CPT performance.
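To illustrate the kind of generative evidence accumulation model referred to above, here is a minimal single-trial drift-diffusion sketch. This is not the authors' model; the parameter names and values are hypothetical placeholders.

```python
import random

def simulate_ddm(drift=0.1, threshold=1.0, dt=0.001, noise=1.0, seed=1):
    """Simulate one trial of a simple drift-diffusion process:
    evidence accumulates at mean rate `drift` plus Gaussian noise
    until it crosses +threshold (one response) or -threshold (the other).
    Returns (choice, response_time)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        # Euler step: deterministic drift plus scaled Gaussian increment
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else -1), t

choice, rt = simulate_ddm()
```

Fitting such a model to data would, as in the abstract, yield per-trial parameters that can then be correlated with EEG measures.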
Marco Raglianti
Alessandro Lazzeri
Fabio Giovannelli
Maria Pia Viggiano
Integrated Information Theory (IIT) is considered the most advanced formal theory of consciousness in the neuroscience literature. However, only limited and indirect empirical evidence supports IIT, and computational, empirical, and theoretical limitations make its predictions hard to test. To verify the hypothesis that higher values of integrated information (PHI) are associated with a higher level of consciousness, we leveraged data collected by two previous studies (Taghia et al., 2018; Huang et al., 2020). Such data are amenable to an IIT analysis employing the PyPhi toolbox (Mayner et al., 2018). Both studies include conditions associated with different levels of consciousness (e.g., sedated participants vs. controls in Huang et al., 2020) and a transition probability matrix between brain states, obtained by means of machine learning techniques. We investigated whether integrated information can predict the level of consciousness based on the state-by-state matrix generated according to the transition probabilities. We observed that PHI values are not related to the conditions in which, according to the neuroscience literature, brain states are characterized by a greater level of consciousness. Finally, we discuss the limitations and future opportunities of our approach.
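The state-by-state transition probability matrices this analysis takes as input can be illustrated with a small sketch. The state sequence below is hypothetical, not data from the cited studies; each row of the resulting matrix gives the probability of moving from one brain state to each other state, so every non-empty row must sum to one.

```python
from collections import Counter

def transition_matrix(states, n_states):
    """Estimate a row-stochastic transition probability matrix
    from an observed sequence of discrete state labels (0..n_states-1)."""
    counts = Counter(zip(states, states[1:]))          # (from, to) pair counts
    tpm = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states):
        row_total = sum(counts[(i, j)] for j in range(n_states))
        for j in range(n_states):
            tpm[i][j] = counts[(i, j)] / row_total if row_total else 0.0
    return tpm

# hypothetical sequence of three brain states observed over time
tpm = transition_matrix([0, 1, 1, 2, 0, 1, 2, 2, 0, 1], 3)
```

A matrix of this shape is what a PyPhi-style analysis would then evaluate for integrated information.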
Dominik Pegler
Prof. Frank Scharnowski
Dr. Filip Melinscak
Understanding the mechanisms of anxiety disorders requires an understanding of how fear-inducing stimuli are mentally represented. Because similarity is central to recognizing objects and structuring representations, similarity judgment data are often used in cognitive models to reveal the psychological dimensions of mental representations. However, both collecting similarity data and predicting the positions of newly added objects within an existing database are resource-intensive. Thus, previous studies have mainly focused on small-scale databases, and the characterization of representations for large sets of fear-relevant stimuli remains limited. In this work, we conducted an online experiment using a large database of 314 spider-relevant images to collect similarity judgments. Participants first completed the Fear of Spiders Questionnaire (FSQ). We then used a rejection sampling method to select participants so that the resulting FSQ scores were uniformly distributed. Next, the selected participants performed the Spatial Arrangement Task, in which they arranged spider images on a 2D canvas according to the subjective similarity between each pair of images. With the collected data, metric multidimensional scaling (MDS) was applied to create low-dimensional embeddings. In a simulation, we compared the Bayesian information criterion and cross-validation as model selection procedures, and both methods were then used to determine the dimensionality of the embeddings. We then reproduced these embeddings and predicted the positions of new images using convolutional neural networks (CNNs). Taken together, this work explores the application of MDS and CNNs to large sets of complex images for the first time, and the methodology employed could be applied to a wide range of stimuli in psychological research.
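The embedding step can be sketched with classical (Torgerson) MDS in NumPy. This is an illustrative implementation, not the study's analysis pipeline; the toy distance matrix below stands in for the distances derived from the arrangement task.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed an n x n distance matrix D
    into k dimensions via double-centering and eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]                # keep the top-k components
    L = np.sqrt(np.clip(w[idx], 0, None))        # guard against tiny negatives
    return V[:, idx] * L                         # n x k embedding

# toy distances between four 'images' lying on a line at 0, 1, 2, 3
pts = np.array([[0.0], [1.0], [2.0], [3.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, k=1)
```

For Euclidean distances, the embedding reproduces the pairwise distances exactly, which is the property the model selection step (BIC vs. cross-validation over k) trades off against parsimony.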
Dr. Catherine Sibert
Over the past decades, a vast number of models and architectures have been developed to address the large-scale organization of the human brain at different levels of abstraction. In an attempt to synthesize ideas from some of the most established existing models of cognitive processing, namely ACT-R, SOAR, and Sigma, the Common Model of Cognition (CMC) has been proposed. It identifies five modules within the brain with discrete functionalities and processing connections between them: modules for Perception, Action, Long-Term Memory, Procedural Memory, and Working Memory. These are considered essential for cognition across different domains and tasks, representing a generalized model of the structure and processing of the mind. Previous work has connected the structure of the CMC to activity in specific brain regions, helping to validate the model and compare it to other models and architectures, such as Hub-and-Spoke and Hierarchical architectures. The CMC was found to outperform these alternatives, being a significantly better match for the experimental data. However, the results also suggested that modifications to the original formulation of the CMC would improve its fit. This is not surprising, as the CMC has a rather basic structure, incorporating only high-level cognitive components, whereas other models consist of larger networks of sub-components, representing real human cognition more accurately. Its modular organization also omits significant aspects of cognitive processing such as metacognition and emotional processing. Moreover, the large-scale parcellation currently used to identify signals associated with each cognitive component will not be sufficient in the future, as the model grows in complexity and additional cognitive components are incorporated.
Better methods are needed for identifying the regions associated with specific cognitive processes and for modeling them and their connections within the CMC. To improve the identification of brain regions, we can use meta-analyses of brain data. Tools like Neurosynth synthesize the results of many neuroimaging studies and allow connectivity analyses to be performed on them. This makes it possible to relate specific brain regions to functions and to investigate the interactions between regions, which can be leveraged to inform the structure of the CMC. Due to the large amount of data and the wide variety of domains covered, such meta-analyses are significantly more powerful than single studies. To validate our methods, we can use fMRI data from the Human Connectome Project, which provides a wide range of brain activity across multiple tasks and allows us to compare different configurations of the CMC using connectivity analysis. We propose leveraging the power of connectivity analyses, with both large-scale fMRI data and meta-analyses of brain data, to create expanded and more robust versions of the CMC. The proposed methodology is as follows: first, examine shortcomings of the current CMC structure and create expanded versions with additional components integrated in a plausible way; then, identify and isolate brain activity associated with those components using the proposed combination of meta-analyses and fMRI data; finally, compare the resulting predictions with those of the current CMC structure.
Relational reasoning is a core cognitive ability necessary for intelligent behaviour, as it evaluates relationships between mental representations. Laboratory-based relational reasoning problems have long been used to investigate how individuals make inferences, with theories of mental models arguing that, to solve such problems, individuals construct an integrated mental model from the provided premises in order to generate or verify conclusions. Computational models of relational reasoning offer insights into how individuals generate such mental models and why some cognitive strategies may be preferred over others. However, many of these models do not directly account for what is often cited as a primary source of problem difficulty: the effects of increased working memory demand. In this paper, we present four ACT-R models that simulate the negative relationship between accuracy and relational problem complexity, and demonstrate how different memory errors of omission and commission can account for qualitatively different reasoning processes. Our cognitive models demonstrate the importance of future work considering individual differences in working memory processing, micro-strategy preferences, and the effects of different memory errors on the reasoning process.
Prof. Arndt Bröder
The sampling framework has been proposed to provide an integrative perspective on how people make probability judgments. It posits that people approximate probabilities by drawing mental samples from memory or mental simulation. Sampling-based models have successfully reproduced a wide range of observed effects in probability judgments. Yet, they have also been criticized for lacking a robust coupling between model terms and psychological processes (Coenen, Nelson, & Gureckis, 2018). We addressed this critique by testing the positive association between an important model term, the sample size of mental sampling, and individual differences in working memory capacity (WMC). Such a relation has been widely assumed in the sampling framework (e.g., Lloyd et al., 2019). Nevertheless, as far as we know, the validity of this assumption has yet to be investigated. Here we use the coherence of people's probability judgments as a proxy for sample size, as larger samples are less vulnerable to sampling variability. Therefore, an empirical examination of the association between WMC and coherence would provide evidence for the assumed positive relation between WMC and sample size. To measure coherence in probability judgments, we adopted the novel event-ranking task proposed by Liu et al. (in prep). In this task, participants are asked to rank different sets of events, each consisting of two pairs of complementary events, {A, not-A, B, not-B}. A logically correct ranking follows the complement rule: when A is ranked above B, not-A is ranked below not-B. The probability of participants providing a logically correct (versus incorrect) ranking then indicates the level of coherence in their probability judgments. The present study critically examines the assumed link between the sample size of mental sampling and WMC, thereby contributing to the theory-testing of the sampling framework.
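The complement-rule check described above can be sketched in a few lines. The event labels and the assumption of strict (tie-free) rankings are illustrative, not details of the actual task.

```python
def coherent(ranking):
    """Check the complement rule for a strict ranking over
    {A, notA, B, notB}: if A is ranked above B, then notA
    must be ranked below notB (and vice versa)."""
    pos = {event: i for i, event in enumerate(ranking)}  # index 0 = ranked highest
    a_above_b = pos["A"] < pos["B"]
    nota_below_notb = pos["notA"] > pos["notB"]
    return a_above_b == nota_below_notb
```

For example, the ranking A > B > notB > notA satisfies the rule, while A > B > notA > notB violates it; the proportion of coherent rankings across sets would serve as the coherence measure.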
Juergen Heller
Can we compare the loudness of a tone to the brightness of a light? The answer is yes: we are intuitively capable of such cross-modal comparisons. Psychophysical researchers such as Stevens have long assumed that these cross-modal comparisons are mediated by a single scale of subjective intensity. Luce (2002) developed a psychophysical theory for physical intensity, making Stevens' assumptions about an underlying scale of perceived intensity explicit and formulating empirically testable conditions for it. He identified cross-modal commutativity as a property through which the theory can be tested. We investigated this property in a cross-modal magnitude production task between auditory and visual stimuli, concerning their loudness and brightness respectively. Participants were presented with the two stimuli and instructed to, for example, “make the tone 3 times as loud [as the visual stimulus appears bright]”. This was in part a replication of Ellermeier et al. (2021), who concluded that cross-modal commutativity holds, whereas we find inconclusive evidence in a Bayesian analysis. More importantly, in a theoretical analysis we find evidence that role-independence of the internal references used in magnitude production is violated. In an expansion of Luce's theory, Heller (2021) concluded that cross-modal commutativity holds if and only if the internal references are role-independent, meaning they do not depend on whether the reference pertains to the standard or the variable stimulus. This means that, if role-independence of the internal references is violated, the assumed intensity scale can hold even if cross-modal commutativity does not. Evidence for this conclusion, as well as its implications, will be discussed.
Dr. Yoolim Kim
This poster presents interim results from an ACT-R-based statistical model of a series of lexical decision experiments in Korean. The model uses two tiers of spreading activation, one representing semantic distance and the other representing the effect of the Hanja writing system on the mental lexicon. Modelling the data requires assumptions about the relationship between the two tiers of spreading activation and about the method of computing semantic association. The poster is supported by an interactive browser interface that allows viewers to vary these assumptions, as well as the standard ACT-R spreading activation parameters, and to explore how this affects the model fit.
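For reference, the standard single-tier ACT-R activation equation that such spreading-activation models build on is (this is the textbook formulation, not the poster's specific two-tier variant):

```latex
A_i = B_i + \sum_{j} W_j \, S_{ji}
```

where $A_i$ is the total activation of chunk $i$, $B_i$ its base-level activation, $W_j$ the attentional weight of source $j$ in the current context, and $S_{ji}$ the strength of association from source $j$ to chunk $i$; a two-tier model would contribute two such association terms.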