
Learning linguistic reference biases in the PRIMs cognitive architecture

Abigail Toth
University of Groningen ~ Artificial Intelligence
Prof. Petra Hendriks
University of Groningen ~ Linguistics
Dr. Niels Taatgen
University of Groningen ~ Artificial Intelligence
Jacolien van Rij
University of Groningen, The Netherlands

Language users rely on biases in order to predict upcoming linguistic input. One of these biases is the implicit causality bias, the phenomenon whereby language users assume that certain entities will be re-mentioned in the discourse based on each entity's particular role in an expressed causal event. However, we know very little about how this bias is learned and how it is used by language users during real-time language processing. In order to investigate this, we constructed a reference learning model in the cognitive architecture PRIMs. The model processed simple sentences and made predictions about how the discourse would continue. By utilising PRIMs' context-operator learning -- based on reinforcement learning -- the model was able to pick up on the asymmetries in the input, resulting in biased behaviour that is in line with what is reported in the psycholinguistic literature. The findings demonstrate that complex linguistic behaviour can be captured by domain-general learning and processing mechanisms, and they have implications for psycholinguistic theories of prediction, language learning, and reference processing.



cognitive modelling; PRIMs cognitive architecture; language learning; implicit causality


Cite this as:

Toth, A., Hendriks, P., Taatgen, N., & van Rij, J. (2022, July). Learning linguistic reference biases in the PRIMs cognitive architecture. Paper presented at In-Person MathPsych/ICCM 2022.