Posters: AI & ACT-R
Sonay Duman
Timing accuracy is very important in human behavioral experiments, especially in time perception experiments. In this study, the prospective interval timing experiment conducted by Taatgen, van Rijn, and Anderson (2007) was replicated using the ACT-R Graphical User Interface (AGI) to compare the timing performance of operating systems. Given that almost all psychology experiments today run on operating systems that provide a multitasking environment, understanding the conditions under which such systems provide timing accuracy is important knowledge for researchers to acquire. Ensuring such precision on a computer can be challenging, especially when using multitasking operating systems like Windows, UNIX, or Linux. Therefore, the experiment, developed using the Python programming language and the AGI, was tested on both Windows and Linux to evaluate the duration of the experiment. The original experiment had four conditions; this study uses three of them. In each phase, the task was either Letter or Addition. The three conditions are as follows: the LLL condition with only the letter task, the AAA condition with only the addition task, and the AAL condition with both the addition and letter tasks. Because this study evaluates prospective interval timing performance, timing accuracy is critical. In the original experiment the trial duration is 13 s. However, when the timer duration was set to 13 s in the Python code, the trial lasted almost twice that long. To solve this issue, a mathematical function that calculates the deviation was added to the code. Although this function minimized the deviation, the trial duration was still not precisely 13 s. The likely reason for this problem is the weak timer resolution of the AGI.
Apart from that, the performance and hardware specifications of the computer systems can differ, which can affect how long the code takes to execute. After analyzing the data, the average durations of the AAA, LLL, and AAL conditions on the Windows operating system were found to be 13.35 s, 14.11 s, and 10.76 s, respectively. The average durations of the same conditions on the Linux operating system were 13.28 s, 13.77 s, and 10.6 s. Based on these results, the experiment runs for comparable durations on both operating systems, although the averages suggest it runs slightly faster on Linux. Linux is known for its efficient file system and memory management, which reduce the overhead required to run the operating system; this efficiency allows Linux to run faster and more smoothly, even on older or less powerful hardware. According to the results of the study, although the timer resolution of the AGI is not constant in itself, the experiment developed with Python and the AGI works stably on both operating systems. Considering that the AGI's timing performance depends on many factors, including task complexity and computer hardware, this study shows that the AGI has consistent timing performance across different operating systems.
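The deviation-correction approach described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names and the calibration procedure (measuring the ratio of actual to nominal timer duration, then rescaling the requested duration) are assumptions about how such a correction might be implemented in Python.

```python
import time

def measure_timer_deviation(nominal_s, timer_fn=time.sleep):
    """Run the timer once and return the ratio of actual to nominal duration."""
    start = time.perf_counter()
    timer_fn(nominal_s)
    elapsed = time.perf_counter() - start
    return elapsed / nominal_s

def corrected_duration(target_s, deviation):
    """Scale the requested duration so the actual trial length approaches target_s."""
    return target_s / deviation

# Calibrate on a short interval, then request a corrected 13 s trial duration.
deviation = measure_timer_deviation(0.5)
trial_request = corrected_duration(13.0, deviation)
```

As the abstract notes, such a correction reduces but does not eliminate the deviation, since the underlying timer resolution still limits precision.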
Chris Dancy
ConceptNet is a semantic knowledge graph designed to support drawing conclusions about the relationships between words and expressions. This semantic network takes in information from various databases, largely originating from text gathered from websites; it defines relationships between words based on the contexts in which they are used and assigns a relational strength to each pair of words. However, due to the sources of these datasets and the degree of human influence over the spaces this data is collected from, biases have been detected in the relational aspects of this network. Our work focuses specifically on the racial biases that have proliferated in this environment. By using this network as a declarative memory knowledge source in a cognitive architecture, we can dissect some of these relational values and gain further insight into how the conceptual space of Blackness is treated among these representations and what this means for cognitive processes and behavior. While we are aware of (canonical) ACT-R's capability of representing a semantic knowledge graph, our goal with this model is to create an extended declarative memory that would hold the knowledge that ConceptNet contains, which consists of well over 1 million nodes. We plan to use this extended ACT-R system to understand the socio-cognitive processes used by participants in a human-AI cooperation study by Atkins et al. (2021). That study explicitly explored how (likely implicit) racialization of AI agents might affect human cooperation with those agents during a task. Thus, a cognitive model for the task needs some representation of sociocultural knowledge, particularly knowledge representing the conceptual space of the systems of oppression that result in racial categorization and racialization.
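The mapping from semantic-network edges to declarative memory might be sketched as follows. The sample edges and the chunk format below are illustrative assumptions (ConceptNet edges do carry a start node, relation, end node, and weight, but the chunk schema and the use of weight as activation are hypothetical, not the authors' design).

```python
# Hypothetical sample of ConceptNet-style edges: (start, relation, end, weight).
edges = [
    ("dog", "IsA", "animal", 2.0),
    ("dog", "RelatedTo", "pet", 1.5),
]

def edge_to_chunk(start, relation, end, weight):
    """Convert one semantic edge into an ACT-R-style chunk dictionary."""
    return {
        "isa": "semantic-link",
        "subject": start,
        "relation": relation,
        "object": end,
        # Relational strength could seed base-level activation (assumption).
        "activation": weight,
    }

chunks = [edge_to_chunk(*e) for e in edges]
```

At ConceptNet's scale (well over 1 million nodes), such chunks would presumably be held in an external store and retrieved on demand rather than loaded into canonical ACT-R declarative memory wholesale.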
Tanishca Sanjay Dwivedi
How can we model the ways race-based systems of power and oppression impact how people interact with AI agents? To approach this question, we are developing a computational model of a human-AI interaction study that explores the impact of racialization on such interactions. We are developing an ACT-R model that connects with the existing study infrastructure and code, written in NodeJS, to complete the Pig Chase task (a modified version of the Stag Hunt task). Connecting the existing Pig Chase environment to ACT-R lets us trace and lay out the steps behind the model's decisions without creating another environment, something that can be especially time-consuming in the computational cognitive modeling process. To develop a more complete, higher-resolution simulation of the related human behavior, we are also developing a connection between ConceptNet and ACT-R. Integrating ConceptNet with our model will provide the model with existing historical and sociocultural perspectives and give us a more realistic ability to understand the interaction between the user and the environment once the model has knowledge of the race of the AI agent.
Mr. Colin Halupczok
Dr. Winfried Ilg
Dr. Daniel Haeufle
Dr. Philipp Beckerle
Nele Russwinkel
Interactions between human users and assistive robotic systems in real life often involve both cognitive and physical interaction. To support humans well in their daily lives, a robotic agent needs to be aware of the situation, anticipate the human agent's behavior, and generate human-like behaviors. In this work, we present an ACT-R observer model as a possible implementation on the robotic agent's cognitive level. The model anticipates the human agent's behaviors in an application example: a tea-making task. We discuss how such a model makes it possible to connect cognitive and physical human-robot interaction, and its advantages compared with common state-of-the-art approaches for human intention and behavior prediction. We also discuss how such an individual ACT-R model provides potential for an anticipatory, situation-aware robotic agent in real-life applications, allowing us to resolve ambiguities in the input acquired from various sensors and to gain time for proactive support.
Gaojie Fan
Ms. Peyton Corbi
Robin D. Thomas
Ideally, the capacity of a single channel in a multichannel system should be unaffected by task type (e.g., logical "AND" vs. "OR" tasks). However, Howard et al. (2021) reviewed studies in which capacity estimates for "AND" tasks differ greatly from those for "OR" tasks. The classic definition of capacity does not explicitly consider the absence of a component, so they suggest incorporating processing-time random variables from the no-signal channels into the capacity formulation. We recently collected data from the standard double factorial paradigm that allow us to evaluate the utility of this modification. In one experiment, observers detected the presence or absence of components in Navon-like stimuli (i.e., a global shape composed of local shapes) under both "OR" and "AND" task instructions. Absence of a target feature was signaled by the presence of a neutral distractor; hence, the no-signal channel actually contained shape information. In contrast, a second experiment used complex Gabor patches composed of two sine-wave gratings in the same tasks; here, the absence of one grating does imply that nothing is present on that channel. We show that modifying the classical capacity coefficient to account for empty channels is more effective for the Gabor patches than for the Navon stimuli.
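For the "OR" case, the classical capacity coefficient referenced above is standardly defined as C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -log S(t) is the cumulative hazard of the response-time distribution and the subscripts denote double-target and single-target conditions. A minimal empirical estimator might look like the following; note this is the standard, unmodified coefficient, not the empty-channel modification evaluated in the study, and the estimator is a simple illustrative sketch.

```python
import math

def cumulative_hazard(rts, t):
    """Estimate the cumulative hazard H(t) = -log S(t) from response times."""
    survivor = sum(rt > t for rt in rts) / len(rts)
    return -math.log(survivor)

def capacity_or(rt_double, rt_a, rt_b, t):
    """Classical OR capacity coefficient: C(t) = H_AB(t) / (H_A(t) + H_B(t))."""
    return cumulative_hazard(rt_double, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_b, t)
    )
```

C(t) = 1 indicates unlimited-capacity independent parallel processing; values below or above 1 indicate limited or super capacity, respectively.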
Greg Trafton
Laura Hiatt
The GPT family of Large Language Models has garnered significant attention in the past year. Its ability to digest natural language has opened up previously unsolvable natural language problem domains. We tasked GPT-3 with generating complex cognitive models from plain-text instructions. The quality of the generated models depends on the quality and quantity of fine-tuning samples but is otherwise quite promising, producing executable and correct models in four of six task areas.
Mr. Kosuke Sasaki
Prof. Junya Morita
Mr. Alexis Meneses
Dr. Kazuki Sakai
Prof. Yuichiro Yoshikawa
Human communication is mediated by symbolic (e.g., language) or quantitative (e.g., body movement) representations. For smooth interaction between humans and machines, it is important for machines to have a mechanism for converting between symbolic and quantitative representations. In this study, we construct a model in which a cognitive architecture, as a symbol-processing system, and a robot, as an embodied medium, interact with each other. In this model, we use a simple word game with a human as a test case of communication. The conversion from a symbolic to a quantitative representation corresponds to the robot's posture based on the "size image" of a noun, that is, a general human image of a word derived from its distributional representation. The influence of quantitative representations on symbols in this model is represented by the influence of the robot's posture on the model's next word selection.
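One way such a "size image" could be extracted from a distributional representation is by projecting a noun's vector onto a big-small axis. Everything below is an illustrative assumption (the toy vectors, the anchor words "big"/"small", and the posture mapping are hypothetical, not the authors' method):

```python
import math

# Toy word vectors standing in for a real distributional model (assumption).
vectors = {
    "big":      [1.0, 0.1, 0.0],
    "small":    [-1.0, 0.1, 0.0],
    "elephant": [0.9, 0.2, 0.1],
    "ant":      [-0.8, 0.3, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def size_image(word):
    """Score a noun on a big-small axis; positive means a 'large' image."""
    return cosine(vectors[word], vectors["big"]) - cosine(vectors[word], vectors["small"])

def posture_angle(word, max_deg=45.0):
    """Map the size image onto a robot arm-spread angle (illustrative mapping)."""
    return max_deg * max(-1.0, min(1.0, size_image(word)))
```

In this sketch, a noun with a large size image would drive a wide arm posture, and the resulting posture could in turn bias the model's next word selection toward similarly sized concepts.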