Virtual ICCM V
A core inferential problem in the study of natural and artificial systems is the following: given access to a neural network, a stimulus and behaviour of interest, and a method of systematic experimentation, figure out which circuit suffices to generate the behaviour in response to the stimulus. It is often assumed that the main obstacles to this "circuit cracking" are incomplete maps (e.g., connectomes) and limited observability and perturbability. Here we show through complexity-theoretic proofs that even if all these and many other obstacles are removed, an intrinsic and irreducible computational hardness remains. While this may seem to leave open the possibility that the researcher may in practice resort to approximation, we prove the task is inapproximable. We discuss the implications of these findings for implementationist versus functionalist debates on how to approach the study of cognitive systems.
Understanding how individuals deploy attention in multitasking environments helps us develop models that more accurately capture human performance and variability. Here, we implemented a method of measuring subjective workload in an ACT-R model and constrained the model's ability to use bottom-up capture for stimuli outside of a peripheral window (i.e., the perceptual span). Stimuli outside of the perceptual span window could thus only be detected via top-down attention. Our subjective workload metric was based on event frequency and was compared to NASA-TLX reports from AF-MATB multitasking data reported in \citeA{bowers2014effects}. The metric successfully differentiated between Easy and Hard task demands. We then evaluated the performance and eye movements of an ACT-R model with different fixed levels of perceptual span. As expected, when the model was limited to mostly top-down visual attention, performance declined because the model could not directly attend to malfunctions in peripheral vision. Similarly, saccade amplitude decreased and eye movements became more systematic. Interestingly, when comparing the model's simulations to the behavioral data, the perceptual span window size that best matched the data increased as task demands increased, suggesting that participants were using less systematic scans when subjective workload increased. We then implemented this transition in the ACT-R model.
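As a rough illustration of the perceptual-span constraint described above (a hypothetical sketch, not the authors' ACT-R code; the span radius, coordinates, and function names are our own assumptions):

# Hypothetical sketch of the perceptual-span constraint (not the authors' ACT-R implementation).
# Assumptions: a fixed span radius and simple 2-D display coordinates.
import math

PERCEPTUAL_SPAN = 5.0  # assumed radius of the perceptual span window

def can_capture_bottom_up(fixation, stimulus):
    """Bottom-up capture is possible only for stimuli inside the perceptual span."""
    return math.dist(fixation, stimulus) <= PERCEPTUAL_SPAN

def detect(fixation, stimulus, top_down_scan_order):
    """Stimuli outside the span are detected only if a top-down scan fixates near them."""
    if can_capture_bottom_up(fixation, stimulus):
        return "bottom-up"
    for scan_location in top_down_scan_order:
        if math.dist(scan_location, stimulus) <= PERCEPTUAL_SPAN:
            return "top-down"
    return "missed"

# A malfunction appearing in the periphery is found only once the scan reaches it.
print(detect((0.0, 0.0), (12.0, 3.0), [(6.0, 0.0), (12.0, 4.0)]))

Under this constraint, widening the span window lets more peripheral events be captured bottom-up, while narrowing it forces the model to rely on the top-down scan pattern, mirroring the performance and saccade effects reported above.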
Traditional fit indices used in the context of factor analysis are based on the objective function evaluated at the Maximum Likelihood (ML), or modified ML, estimates of the free parameters. Therefore, these indices indicate how well the fitted model describes the observed correlation matrix. However, they do not provide a direct assessment of the validity of the assumed causal relations between the latent and observed variables. The objective of this study is to propose a tetrad fit index (TFI) which reflects how well the assumed causal relations in the model are reflected in the data. The TFI is defined as the complement of the average root-mean-squared difference between the tetrads of the observed correlation matrix and those of the correlation matrix implied by a fitted factor analytic model. A preliminary simulation study provides initial evidence in favor of using the TFI instead of traditional fit indices to identify the correct factor model among competing models.
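One plausible reading of this definition (the tetrad form and averaging scheme below are our interpretation, not taken from the abstract): for a quadruple of observed variables $(i, j, k, l)$, a tetrad is a difference of products of correlations, e.g. $\tau_{ijkl} = \rho_{ij}\rho_{kl} - \rho_{ik}\rho_{jl}$. Writing $\tau_t^{\mathrm{obs}}$ and $\hat{\tau}_t$ for the $t$-th tetrad computed from the observed and model-implied correlation matrices, with $T$ tetrads in total, a TFI of this form could be written as
\[
\mathrm{TFI} = 1 - \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(\tau_t^{\mathrm{obs}} - \hat{\tau}_t\right)^2},
\]
so that the index approaches 1 as the fitted model reproduces the tetrad structure of the data.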
To explain the performance history of individuals over time, particular features of memory are posited, such as the power law of learning, the power law of decay, and the spacing effect. When these features of memory are integrated into a model of learning and retention, the resulting models have been able to account for human performance across a wide range of both applied and laboratory domains. However, these models of learning and retention assume that performance is best accounted for by a continuous performance curve. In contrast to this standard assumption, other researchers have argued that, over time, individuals display sudden discrete shifts in their performance due to changes in strategy and/or memory representation. To compare these two accounts of memory, the standard Predictive Performance Equation (PPE; Walsh, Gluck, Gunzelmann, Jastrzembski, & Krusmark, 2018) was compared to a Change PPE on fits to human performance in a naturalistic data set. We make several hypotheses about the expected characteristics of individual learning curves and the differing abilities of the models to account for human performance. Our results show that the Change PPE not only fit the data better than the Standard PPE, but also that inferred changes in participants' performance were associated with greater learning outcomes.
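To make the contrast between the two accounts concrete, the following is a minimal, hypothetical sketch (it does not reproduce the PPE; the power-law form, the single change point, and the least-squares scoring are our own simplifications): a continuous power-law learning curve is compared against a curve that is allowed one discrete shift, and each is scored by its sum of squared error.

# Illustrative contrast between a continuous account and a discrete-shift account.
# Not the PPE or Change PPE; a simplified power-law fit with and without one change point.
import numpy as np

def power_law(trials, a, b):
    return a * trials ** (-b)

def sse_continuous(rt, trials):
    # Fit log(rt) = log(a) - b*log(trials) by ordinary least squares.
    X = np.column_stack([np.ones_like(trials, dtype=float), -np.log(trials)])
    coef, *_ = np.linalg.lstsq(X, np.log(rt), rcond=None)
    pred = np.exp(X @ coef)
    return np.sum((rt - pred) ** 2)

def sse_with_shift(rt, trials):
    # Allow one discrete shift: fit each segment separately and keep the best split.
    best = np.inf
    for cp in range(2, len(trials) - 2):
        sse = sse_continuous(rt[:cp], trials[:cp]) + sse_continuous(rt[cp:], trials[cp:])
        best = min(best, sse)
    return best

# Simulated learner whose response times drop suddenly after trial 20 (a strategy shift).
trials = np.arange(1, 41)
rt = power_law(trials, 4.0, 0.4)
rt[20:] *= 0.7
rt += np.random.default_rng(0).normal(0, 0.05, size=rt.size)

print(sse_continuous(rt, trials), sse_with_shift(rt, trials))

In this toy setting the change-point variant fits the shifted data more closely, which is the qualitative pattern the abstract reports when comparing the Change PPE to the Standard PPE.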