
Learning reference biases from language input: a cognitive modelling approach

Authors
Abigail Toth
University of Groningen ~ Artificial Intelligence
Dr. Niels Taatgen
University of Groningen ~ Artificial Intelligence
Jacolien van Rij
University of Groningen, The Netherlands
Prof. Petra Hendriks
University of Groningen ~ Linguistics
Abstract

To gain insight into how people acquire certain reference biases in language and how those biases eventually influence online language processing, we constructed a cognitive model and presented it with a dataset containing reference asymmetries. Via prediction and reinforcement learning, the model was able to pick up on the asymmetries in the input. The model's predictions have implications for various accounts of reference processing and demonstrate that seemingly complex behavior can be explained by simple learning mechanisms.

Keywords

implicit causality
reference
cognitive modelling
Discussion
Input Data (last updated 2 years ago)

This was a great talk and an interesting demonstration! I was wondering if you have a sense of how sensitive the model is to the input dataset. In some developmental language corpora it has been noted that the input contains very sparse examples for certain constructs. Does there seem to be a minimum number of exposures for the model to learn these...

Christopher Adam Stevens
Cite this as:

Toth, A., Taatgen, N., van Rij, J., & Hendriks, P. (2021, July). Learning reference biases from language input: a cognitive modelling approach. Paper presented at Virtual MathPsych/ICCM 2021. Via mathpsych.org/presentation/590.