The Comparison of Operating System Timing Performances in Interval Timing Study Using the AGI

Authors
Behiye Sahin
Toros University ~ Psychology
Sonay Duman
Toros University ~ Software Engineering
Abstract

Timing accuracy is critical in human behavioral experiments, especially in time perception experiments. In this study, the prospective interval timing experiment conducted by Taatgen, van Rijn, and Anderson in 2007 was replicated using the ACT-R Graphical User Interface (AGI) to compare the timing performance of operating systems. Given that almost all psychology experiments today run on operating systems designed for multitasking, knowing under which conditions such systems provide timing accuracy is important practical knowledge for researchers. Ensuring such precision on a computer can be challenging, especially under multitasking operating systems such as Windows, UNIX, or Linux. Therefore, the experiment, developed in the Python programming language with the AGI, was run on both Windows and Linux to evaluate trial duration.

The original experiment had four conditions; three of them were used in this study. In each phase, the task was either a letter task or an addition task. The three conditions are: the LLL condition with only the letter task, the AAA condition with only the addition task, and the AAL condition with both the addition and letter tasks. Because prospective interval timing performance is evaluated in this study, timing accuracy is essential. In the original experiment the trial duration is 13 s. However, when the timer duration was set to 13 s in the Python code, the trial lasted almost twice that long. To address this, a mathematical function that calculates the deviation was added to the code. Although this function minimized the deviation, the trial duration was still not exactly 13 s. The likely cause of this problem is the weak timer resolution of the AGI. In addition, the performance and hardware specifications of the computer systems differ, which can affect how long the code takes to execute.

After analyzing the data, the average durations of the AAA, LLL, and AAL conditions on the Windows operating system were found to be 13.35 s, 14.11 s, and 10.76 s, respectively; on the Linux operating system, the corresponding averages were 13.28 s, 13.77 s, and 10.6 s. Based on these results, the experiment runs for comparable durations on both operating systems, although the averages suggest it runs slightly faster on Linux. Linux is known for its efficient file system and memory management, which reduce operating system overhead; this efficiency allows Linux to run faster and more smoothly even on older or less powerful hardware. Although the timer resolution of the AGI is not itself constant, the results show that the experiment developed with Python and the AGI runs stably on both operating systems. Given that the AGI's timing performance depends on many factors, including task complexity and computer hardware, this study shows that the AGI has consistent timing performance across different operating systems.
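The deviation correction described above can be illustrated with a short sketch. This is not the authors' actual code; the function names and the calibration approach are assumptions, showing only the general idea of measuring how much a software timer overshoots its requested duration and scaling the request accordingly:

    import time

    def measure_deviation(wait_func, requested_s, trials=5):
        # Hypothetical helper: run the timer several times and
        # estimate the average ratio of actual to requested duration.
        ratios = []
        for _ in range(trials):
            start = time.perf_counter()
            wait_func(requested_s)  # e.g. an AGI-driven wait
            elapsed = time.perf_counter() - start
            ratios.append(elapsed / requested_s)
        return sum(ratios) / len(ratios)

    def corrected_request(target_s, deviation_ratio):
        # Scale the requested duration so the actual wall-clock
        # duration lands closer to the target.
        return target_s / deviation_ratio

For example, if a requested 13 s trial actually lasts about 26 s, the measured ratio is roughly 2.0, so requesting 13 / 2.0 = 6.5 s would bring the actual duration close to the intended 13 s.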
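The platform comparison can likewise be probed with the standard library alone. The snippet below (a minimal sketch, independent of the AGI) prints the resolution Python reports for its clocks, which differs between Windows and Linux and helps explain platform-level differences in timing precision:

    import time

    # Report clock resolution and backing implementation on the
    # current operating system; values differ across platforms.
    for name in ("time", "monotonic", "perf_counter"):
        info = time.get_clock_info(name)
        print(f"{name}: resolution={info.resolution} s, "
              f"implementation={info.implementation}")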

Keywords

timing accuracy
AGI
interval timing

Cite this as:

Sahin, B., & Duman, S. (2023, July). The Comparison of Operating System Timing Performances in Interval Timing Study Using the AGI. Paper presented at MathPsych/ICCM/EMPG 2023. Via mathpsych.org/presentation/1038.