Virtual ICCM IV
We provide a quantitative assessment of the part of speech tagging accuracy of the written word recognition subcomponent of the computational implementation of Double R Grammar. Double R Grammar is a cognitively and linguistically motivated, near-human-scale computational cognitive model of the grammatical analysis of written English, focused on the grammatical encoding of two key dimensions of meaning: referential and relational meaning. Cognitively, the model is implemented in the ACT-R cognitive architecture. It contains a mental lexicon, which encodes explicit declarative knowledge about lexical items and grammatical constructions, and a procedural memory, which encodes implicit knowledge about how to grammatically analyze input expressions. With ~100,000 words and multi-word units, the size of the mental lexicon aligns with numerous estimates of the size of the human mental lexicon. The words were primarily drawn from the COCA corpus and are assigned a part of speech specific base-level activation based on their frequency of use in that corpus. The retrieval of lexical items corresponding to input tokens depends on this base-level activation and on activation spread from the lexical, morphological, and grammatical context. Grammatical productions determine how to integrate retrieved lexical items and projected grammatical constructions into grammatical representations. There are ~2,500 manually created productions covering the common grammatical patterns of English. The basic processing mechanism is pseudo-deterministic: it pursues the single best analysis, but is capable of non-monotonically adjusting to the evolving context. The processing mechanism adheres to two well-established cognitive constraints on human language processing: incremental and interactive processing. Linguistically, Double R Grammar aligns with cognitive and construction grammar, and is strongly usage-based.
On a previously unseen sample corpus of book abstracts of spy novels and a few paragraphs of a Clive Cussler book, the computational implementation achieved a 98.48% part of speech tagging accuracy rate over 1,838 tokens. On a second sample corpus of 8 abstracts of books on the topic of self-help and two political biographies, the computational implementation achieved a part of speech tagging accuracy rate of 98.56% over 766 tokens. Although this accuracy rate is not directly comparable to competing machine learning approaches trained over an annotated corpus, or deep learning approaches trained over big data, the current state of the art for part of speech tagging accuracy is in the neighborhood of 98% for systems trained on the annotated Penn Treebank corpus, using the Penn Treebank tagset, which contains 36 atomic parts of speech organized into a flat listing with no internal structure. By comparison, Double R Grammar uses 56 non-atomic parts of speech with internal structure, organized into a multiple inheritance hierarchy.
This study proposes a method of generating body gestures from distributed representations of words. In the method, a size image for each word is computed along an axis whose poles correspond to "small" and "large" word images. The size image is then physically expressed as robot gestures. The proposed method was evaluated in two online surveys. Based on the results, the authors argue for the potential of artifacts that exchange qualitative and quantitative aspects of word representations.
With the development of autonomously operating machines, the demand for machines capable of moral judgment is growing. In our society, it is likely that machines such as self-driving cars will face complex problems involving ethical dilemmas where moral judgments are necessary. To achieve the goal of making an explainable artificial morality for machines, we treat morality as a human thinking system grounded in the two modes of thinking proposed by dual process theory. As a concrete research step, a prototype model combining distributed language representations with the memory activation mechanism of the ACT-R cognitive architecture is presented, using the trolley problem as a case study.
The relationship between hippocampal volume and memory function has produced mixed results in neuroscience research. We propose that an experience-dependent efficient encoding mechanism underlies these varied observations. We present a model that utilizes an autoencoder to prioritize sparseness and transforms the recurrent loop between the cortex and hippocampus into a deep neural network. We trained our model on the Fashion-MNIST dataset with a loss function that modifies synapses via backpropagation of mean squared recall error. The model exhibited experience-dependent efficient encoding, representing frequently repeated objects with fewer neurons and smaller loss penalties, and representing equally repeated objects similarly. Our findings clarify perplexing results from neurodevelopment studies: linking increased hippocampus size and memory impairments in ASD to decreased sparseness, and explaining dementia symptoms of forgetting with varied neuronal integrity. We thus propose a novel model that connects the observed relationships between hippocampus size and memory, contributing to the development of a larger theory of experience-dependent encoding and storage and its failure.
Inspired by previous research in the spatial reasoning domain, in this paper we address the varying interpretations of the premises of syllogistic problems among individuals and the differences in their resulting mental models. We conducted an experiment whose results show that model building is a task humans can perform correctly with relative ease, and that they do in fact have preferred models for most syllogisms, yet these preferences bear no relation to their responses. We report an in-depth analysis of the models' canonicality in order to compare model building behavior in humans to the processes implemented in mReasoner, a cognitive model that implements Mental Model Theory.