Individual Differences
Electroencephalography (EEG) is a fundamental tool in neuroscience, offering key insights into the complex workings of the brain. This study introduces a global model for EEG analysis based on a stochastic autoregressive framework derived from established models of neural behavior. While EEG frequency bands are typically thought to emerge from synchronous synaptic activity, the global model of EEG posits that delays in axonal propagation across corticocortical and thalamocortical connections contribute substantially to the variance observed in EEG signals. The present model predicts that spectral peaks in scalp-recorded EEG data can be attributed solely to axonal time delays at various distances. Autoregressive models are notable for a linear structure that efficiently captures temporal relationships within EEG signals, highlighting the impact of axonal propagation delays with greater computational efficiency. The model employs a connectivity atlas to determine the connectivity and distances between brain regions. Additionally, it incorporates distributions of axonal delays and Event-Related Potentials (ERPs) in response to visual stimuli. The approach allows for an accurate reproduction of EEG power spectra, including both resting-state alpha rhythms and ERP peaks. The findings suggest that axonal delay times and neural connectivity within linear predictive models shape EEG dynamics, offering a method to analyze individual cognitive variations through EEG data. In the future, we aim to apply these models alongside cognitive frameworks to draw inferences about individual variations in neurocognition.
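The core idea, that a linear autoregressive process with delayed feedback can by itself produce a spectral peak, can be illustrated with a minimal sketch. This is not the authors' global model; the sampling rate, the 10 Hz "alpha" target, and the AR(2) form are all assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch (not the authors' model): an AR(2) process whose
# delayed feedback produces a spectral peak, loosely analogous to
# delay-driven rhythms in a linear predictive model of EEG.
rng = np.random.default_rng(0)

# Complex poles at radius r and angle theta yield a peak near
# theta / (2*pi) * fs Hz.  fs and the 10 Hz target are assumptions.
fs, f_peak, r = 100.0, 10.0, 0.95
theta = 2 * np.pi * f_peak / fs
a1, a2 = 2 * r * np.cos(theta), -r**2          # true AR coefficients

n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Least-squares estimate of the AR coefficients from the signal alone.
X = np.column_stack([x[1:-1], x[:-2]])
a1_hat, a2_hat = np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Recover the spectral peak from the angle of the estimated pole.
poles = np.roots([1.0, -a1_hat, -a2_hat])
f_hat = np.abs(np.angle(poles[0])) * fs / (2 * np.pi)
print(f"estimated peak: {f_hat:.1f} Hz")
```

The fitted coefficients recover the oscillation frequency from the time series, which is the sense in which a purely linear delay structure can account for a spectral peak such as the alpha rhythm.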
Theoretical models of metacognitive assessment postulate that the correctness of this type of assessment depends on the difference between actual performance and some type of "overconfidence" bias. In contrast, Stanovich's Tripartite model postulates that adequate performance on tasks that require reasoning depends first on cognitive inhibition and then on cognitive reflection. The objective of this study was to test whether actual performance or measures of cognitive reflection are better predictors of metacognitive assessment. Our sample consisted of 120 undergraduate students, with ages ranging from 18 to 58 (M = 27.11, SD = 9.79). To measure performance, we fitted a Signal Detection Theory (SDT) model to the assessment of the validity of 32 syllogisms. To measure cognitive reflection, we fitted a Multinomial Processing Tree (MPT) model to the Cognitive Reflection Test. Metacognitive assessment was measured by asking respondents to rate, on a three-point scale, how confident they were in their responses to the syllogisms. The SDT model generated estimates of discrimination (i.e., the ability to differentiate between logically valid and invalid syllogisms) and criterion (i.e., the bias toward choosing the "valid" or the "invalid" response). The MPT model generated estimates of inhibition and cognitive reflection. A regularized network analysis, using the atan regularization of the correlation matrix of the measures, indicated that the predictability (i.e., a measure of the variance explained by the other variables in the model) of the metacognitive assessments was higher for the MPT measures than for the SDT measures. The implications of these results are discussed.
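The SDT quantities mentioned above can be sketched with an equal-variance model and made-up response counts (the counts below are illustrative, not the study's data): discrimination (d') measures how well valid and invalid syllogisms are told apart, and the criterion (c) measures the bias toward one response.

```python
from statistics import NormalDist

# Illustrative equal-variance SDT sketch with assumed counts, not the
# study's data.  Hits = "valid" responses to valid syllogisms;
# false alarms = "valid" responses to invalid syllogisms.
z = NormalDist().inv_cdf

hits, misses = 13, 3     # responses to 16 valid syllogisms (assumed)
fas, crs = 6, 10         # responses to 16 invalid syllogisms (assumed)

# Log-linear correction guards against rates of exactly 0 or 1.
hr = (hits + 0.5) / (hits + misses + 1)
far = (fas + 0.5) / (fas + crs + 1)

d_prime = z(hr) - z(far)             # discrimination: valid vs. invalid
criterion = -0.5 * (z(hr) + z(far))  # response bias (negative = "valid" bias)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```

In the study these parameters are estimated by fitting the SDT model to all 32 judgments; the point of the sketch is only what the two estimates mean.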
Theories of many cognitive processes can be expressed as dynamical process models. In order to test the hypotheses that the models implement, we must calibrate them to experimental data by fitting free parameters. In this work, we study a version of the SWIFT model of eye-movement control during reading (Engbert et al., 2005, Psych. Rev.; Engbert & Rabe, 2023, under review) to illustrate two related issues that can arise in models with multiple free parameters: parameter identifiability and sloppiness. The parameters of a model are identifiable for a given data set when it is possible to find a finite confidence interval for each parameter (Raue et al., 2009, Bioinformatics). When a parameter is non-identifiable, parameter fitting can be difficult and misleading, even if the fitted model's output looks reasonable. Sloppiness arises when there are large differences in how sensitive the model's output is to changes in different parameters (Brown & Sethna, 2003, Phys. Rev. E). Sloppiness can also hinder model calibration and make interpreting model output challenging, as an analysis of sloppiness often reveals combinations of parameters that vary systematically together with no change in the model's predictions. To our knowledge, parameter identifiability and sloppiness have received little attention in cognitive science, even though the structure of many models is susceptible to these problems. In this talk, we will discuss methods for identifying and addressing parameter non-identifiability and model sloppiness, which can lead to simpler models and more informative fits to experimental data.
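Sloppiness is commonly diagnosed from the eigenvalue spectrum of the sensitivity matrix J^T J, where J is the Jacobian of model predictions with respect to parameters. A minimal sketch with a toy model (a sum of two near-equal exponential decays, a classic sloppy example, not SWIFT) shows the orders-of-magnitude eigenvalue spread that signals a poorly constrained parameter combination:

```python
import numpy as np

# Toy sloppy model (not SWIFT): a * (exp(-k1*t) + exp(-k2*t)) with
# nearly degenerate decay rates.  One parameter combination is well
# constrained; another barely affects the predictions.
def model(theta, t):
    a, k1, k2 = theta
    return a * (np.exp(-k1 * t) + np.exp(-k2 * t))

t = np.linspace(0, 5, 50)
theta0 = np.array([1.0, 1.0, 1.1])   # assumed reference parameters

# Central finite-difference Jacobian of predictions w.r.t. parameters.
eps = 1e-6
J = np.empty((t.size, theta0.size))
for j in range(theta0.size):
    d = np.zeros_like(theta0)
    d[j] = eps
    J[:, j] = (model(theta0 + d, t) - model(theta0 - d, t)) / (2 * eps)

# Eigenvalues of J^T J: a huge spread marks a sloppy direction, and the
# eigenvector of the smallest eigenvalue names the nearly unconstrained
# parameter combination.
eigvals = np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]
spread = eigvals[0] / eigvals[-1]
print(f"eigenvalue spread: {spread:.1e}")
```

The eigenvector attached to the smallest eigenvalue is exactly the kind of "parameters that vary together with no change in predictions" described above; in practice such an analysis can motivate reparameterizing or fixing that combination.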