Fast and flexible: Human program induction in abstract reasoning tasks

Aysja Johnson
New York University, Psychology
Todd Gureckis
New York University
Wai Keen Vong
New York University, Center for Data Science
Brenden Lake
New York University, Center for Data Science and Psychology

The Abstraction and Reasoning Corpus (ARC) is a challenging program induction dataset recently proposed by Chollet (2019). Here, we report the first set of results from a behavioral study of humans solving a subset of ARC tasks (40 of 1000). Although these tasks vary considerably, humans were able to infer the underlying program and generate the correct test output for a novel test input: participants solved 80% of tasks on average, and 65% of tasks were solved by more than 80% of participants. Additionally, we find interesting patterns of consistency and variability in the action sequences produced during generation, in the natural language descriptions of each task's transformation, and in the errors people made. Our findings suggest that people can quickly and reliably determine the relevant features and properties of a task and compose a correct solution. Future modeling work could incorporate these findings, potentially by connecting the natural language descriptions collected here to the underlying semantics of ARC.



concept learning
abstract reasoning
program induction


Cite this as:

Johnson, A., Gureckis, T., Vong, W. K., & Lake, B. (2021, July). Fast and flexible: Human program induction in abstract reasoning tasks. Paper presented at Virtual MathPsych/ICCM 2021.