Learning reference biases from language input: a cognitive modelling approach
To gain insight into how people acquire certain reference biases in language and how those biases eventually influence online language processing, we constructed a cognitive model and presented it with a dataset containing reference asymmetries. Via prediction and reinforcement learning, the model picked up on the asymmetries in the input. The model's predictions have implications for various accounts of reference processing and demonstrate that seemingly complex behavior can be explained by simple learning mechanisms.
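The abstract does not specify the model's learning rule, so the following is only a minimal illustrative sketch of the general idea: an error-driven (prediction-based) update that shifts preference weights toward the referents observed in asymmetric input. The 80/20 subject/object split, the weight values, and the learning rate are all assumptions made for illustration, not details of the authors' model.

```python
# Toy sketch (NOT the authors' model): error-driven learning of a
# reference bias from asymmetric input. Assumed: pronouns refer to the
# preceding subject 80% of the time and to the object 20% of the time.
data = (["subject"] * 4 + ["object"]) * 20  # 100 items, 80/20 asymmetry

# Preference weights over candidate antecedent roles, starting unbiased.
weights = {"subject": 0.5, "object": 0.5}
alpha = 0.1  # assumed learning rate

for outcome in data:
    # Prediction error drives learning: move each role's weight toward
    # 1 if it was the observed referent, toward 0 otherwise.
    for role in weights:
        target = 1.0 if role == outcome else 0.0
        weights[role] += alpha * (target - weights[role])

# After exposure, the input asymmetry is reflected in the weights:
# the subject antecedent is now preferred.
print(weights["subject"] > weights["object"])  # True
```

Even this simple update suffices to internalize the input asymmetry as a graded bias, which is the kind of "simple mechanism, complex-looking behavior" result the abstract describes.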