
Optimizing Timing Accuracy Performance of AGI in Experiments

Authors
Behiye Sahin
Toros University ~ Psychology
Sonay Duman
Toros University ~ Software Engineering
Abstract

ACT-R (Adaptive Control of Thought - Rational) is a cognitive architecture that provides a framework for developing computational cognitive models simulating many aspects of human performance, from time perception to decision-making. ACT-R can also be implemented in various programming languages and environments, including graphical user interfaces (GUIs) that facilitate model development, experimentation, and analysis (Anderson, 2007, p. 135). One of these interfaces is the ACT-R Graphical User Interface (AGI). The compile time of the AGI can vary depending on several factors related to its design, the development environment, and the available computational resources (Bothell, 2007). Compilation for these technologies involves translating source code into executable binaries or bytecode that run on different operating systems with different compile times (Şahin & Duman, 2023). The AGI's timing accuracy is therefore critical for obtaining accurate results, especially in time-perception experiments. In this study, a function was developed to let an AGI interval-timing experiment, implemented in the Python programming language, run as close to real time as possible. The function measures the durations of the other functions, based on the ACT-R model, that are called to create the user interface during the experiment, then performs the mathematical operations needed to bring the total as close as possible to the required trial duration (13 s). The experiment includes a letter condition with a letter-recognition task and an addition condition involving the sum of two numbers. Before the function was used, the letter condition, which should take 13 s, took 15.67 s on average and the addition condition took 15.18 s.
After the function was added, the letter condition took 14.11 s on average and the addition condition took 13.3 s on average. With the function included, trial duration was closer to the target duration of 13 s. The study thus demonstrates an improvement in the use of the AGI as a graphical user interface in experiments with human participants, and shows that timing-accuracy functions can be added to Python-based experiment designs.
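The abstract does not give the function's implementation, but the compensation scheme it describes can be sketched as follows. This is a minimal illustration, not the authors' code: `run_trial`, `setup_fn`, and `TRIAL_DURATION` are hypothetical names, and the sketch assumes the correction amounts to measuring the interface-construction overhead and subtracting it from the wait that fills out the trial.

```python
import time

TRIAL_DURATION = 13.0  # target trial length in seconds, as stated in the abstract


def run_trial(setup_fn, *args, trial_duration=TRIAL_DURATION):
    """Run one trial, compensating for time spent building the interface.

    `setup_fn` stands in for the AGI/ACT-R calls that create the trial's
    user interface (hypothetical placeholder). Its elapsed time is measured
    and subtracted from the target duration, so the whole trial stays close
    to `trial_duration` regardless of setup overhead.
    """
    start = time.perf_counter()
    setup_fn(*args)                       # build windows, draw stimuli, etc.
    overhead = time.perf_counter() - start
    remaining = max(0.0, trial_duration - overhead)
    time.sleep(remaining)                 # wait out the rest of the trial
    return time.perf_counter() - start    # actual trial duration achieved
```

Under this scheme, a setup phase that takes 2 s no longer stretches a 13 s trial to roughly 15 s; the sleep is shortened accordingly, which matches the direction of the improvement the study reports (15.67 s → 14.11 s and 15.18 s → 13.3 s).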

Keywords

timing accuracy
AGI
interval timing
Python

Cite this as:

Sahin, B., & Duman, S. (2024, June). Optimizing Timing Accuracy Performance of AGI in Experiments. Paper presented at Virtual MathPsych/ICCM 2024. Via mathpsych.org/presentation/1469.