First-passage statistics for a model of word-by-word sentence parsing
A fundamental question in cognitive science is how people comprehend sentences word by word. An important step in sentence comprehension is determining the syntactic relationships between words (figuring out who did what to whom). Building these syntactic relationships is known to take differing amounts of time depending on the type of sentence and the words it contains. A good theory of sentence comprehension should not only say how syntactic relations are established but also how long it takes to establish them. Here, we analyze a new model that aims to accomplish both goals. At each word in a sentence, the model stochastically explores a network of discrete states. Each state consists of a partial parse of the sentence so far, i.e., some set of dependency links between head words and dependent words. The model can jump between states that differ by a single link until it reaches a state corresponding to a complete parse of the sentence so far. We use the master equation to analyze this continuous-time random walk. We present formulas for first-passage time distributions and splitting probabilities, which are treated as the predicted reading times for a word and the probabilities of building different alternative parses, respectively. Using these techniques, we illustrate how new insight can be gained into known phenomena, such as the temporary ambiguity in "The horse raced past the barn fell." The hope is that these quantitative tools will facilitate comparisons with other sentence comprehension models and lead to new theory-driven experiments.
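The first-passage quantities described in the abstract can be sketched numerically. The sketch below is not the paper's model; it is a minimal, generic continuous-time random walk with a hypothetical network of three transient states (partial parses) and two absorbing states (alternative complete parses), with made-up rate values. For a generator partitioned into a transient block A and absorbing-transition block B, mean first-passage times solve A m = -1, and splitting probabilities solve A P = -B.

```python
import numpy as np

# Hypothetical transient block A of the generator: A[i, j] is the jump
# rate from partial parse i to partial parse j; diagonal entries are set
# so that each full generator row (including rates into B) sums to zero.
A = np.array([
    [-3.0,  1.0,  1.0],
    [ 1.0, -2.5,  0.5],
    [ 0.5,  1.0, -3.0],
])
# B[i, k]: rate from partial parse i into absorbing complete parse k.
B = np.array([
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 0.5],
])

# Mean first-passage times (predicted mean reading times):
# solve A m = -1, so m[i] is the expected time to absorption from state i.
mean_fpt = np.linalg.solve(A, -np.ones(3))

# Splitting probabilities: solve A P = -B, so P[i, k] is the probability
# of ultimately building complete parse k when starting from state i.
splitting = np.linalg.solve(A, -B)

print(mean_fpt)   # all positive
print(splitting)  # each row sums to 1
```

The full first-passage time *distribution* can be obtained from the same blocks via the matrix exponential of A, but the two linear solves above already yield the two summary quantities the abstract highlights.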