ARC Is a Vision Problem!
- URL: http://arxiv.org/abs/2511.14761v1
- Date: Tue, 18 Nov 2025 18:59:49 GMT
- Title: ARC Is a Vision Problem!
- Authors: Keya Hu, Ali Cy, Linlu Qiu, Xiaoman Delores Ding, Runqian Wang, Yeyin Eva Zhu, Jacob Andreas, Kaiming He
- Abstract summary: We formulate ARC within a vision paradigm, framing it as an image-to-image translation problem.
Our framework, termed Vision ARC, achieves 60.4% accuracy on the ARC-1 benchmark.
- Score: 50.59206008530851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Abstraction and Reasoning Corpus (ARC) is designed to promote research on abstract reasoning, a fundamental aspect of human intelligence. Common approaches to ARC treat it as a language-oriented problem, addressed by large language models (LLMs) or recurrent reasoning models. However, although the puzzle-like tasks in ARC are inherently visual, existing research has rarely approached the problem from a vision-centric perspective. In this work, we formulate ARC within a vision paradigm, framing it as an image-to-image translation problem. To incorporate visual priors, we represent the inputs on a "canvas" that can be processed like natural images. It is then natural for us to apply standard vision architectures, such as a vanilla Vision Transformer (ViT), to perform image-to-image mapping. Our model is trained from scratch solely on ARC data and generalizes to unseen tasks through test-time training. Our framework, termed Vision ARC (VARC), achieves 60.4% accuracy on the ARC-1 benchmark, substantially outperforming existing methods that are also trained from scratch. Our results are competitive with those of leading LLMs and close the gap to average human performance.
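The abstract's recipe is concrete enough to sketch. Below is a minimal, illustrative PyTorch sketch of the two ingredients it names: painting a variable-size ARC grid onto a fixed-size "canvas", and a vanilla ViT-style encoder that maps each canvas cell to output-color logits. All names and sizes here (CANVAS, TinyViT, per-cell tokens) are assumptions for illustration, not the authors' released code.

```python
# Hedged sketch of the "canvas" framing (illustrative names and sizes, not
# the authors' code): an ARC grid is painted onto a fixed-size canvas so a
# standard ViT can treat the task as image-to-image translation.
import torch
import torch.nn as nn

CANVAS, COLORS = 32, 10  # assumed canvas size; ARC uses 10 colors

def to_canvas(grid, pad_value=COLORS):  # extra index = "background" token
    canvas = torch.full((CANVAS, CANVAS), pad_value, dtype=torch.long)
    h, w = grid.shape
    canvas[:h, :w] = grid  # top-left placement; the paper may center/scale
    return canvas

class TinyViT(nn.Module):
    """Vanilla ViT mapping a canvas of color tokens to per-cell color logits."""
    def __init__(self, dim=128, depth=4, heads=4):
        super().__init__()
        self.embed = nn.Embedding(COLORS + 1, dim)        # cell colors -> vectors
        self.pos = nn.Parameter(torch.zeros(CANVAS * CANVAS, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, COLORS + 1)            # predict each cell's color

    def forward(self, canvas):                            # canvas: (B, 32, 32) long
        x = self.embed(canvas.flatten(1)) + self.pos      # (B, 1024, dim)
        return self.head(self.encoder(x))                 # (B, 1024, COLORS + 1)

grid = torch.randint(0, COLORS, (5, 7))                   # a dummy ARC input grid
logits = TinyViT()(to_canvas(grid).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 1024, 11])
```

In the paper, generalization to unseen tasks additionally relies on test-time training, i.e., briefly fine-tuning on a new task's demonstration pairs before predicting its test output; the sketch above omits that loop.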
Related papers
- Think Visually, Reason Textually: Vision-Language Synergy in ARC [94.15522924153264]
ARC-AGI is a rigorous testbed for conceptual rule induction and transfer to novel tasks.
Most existing methods treat ARC-AGI as a purely textual reasoning task, overlooking the fact that humans rely heavily on visual abstraction.
We introduce two synergistic strategies: Vision-Language Synergy Reasoning (VLSR) and Modality-Switch Self-Correction (MSSC).
Our findings suggest that unifying visual abstraction with linguistic reasoning is a crucial step toward achieving generalizable, human-like intelligence.
arXiv Detail & Related papers (2025-11-19T18:59:04Z)
- MiCo: Multi-image Contrast for Reinforcement Visual Reasoning [72.81576836419373]
Chain-of-Thought (CoT) reasoning can be used to link visual cues across multiple images.
We adapt rule-based reinforcement learning for Vision-Language Models (VLMs).
Our method achieves significant improvements on multi-image reasoning benchmarks and shows strong performance on general vision tasks.
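For readers unfamiliar with the "rule-based" part, the sketch below shows the typical shape of such a reward: cheap, verifiable checks instead of a learned reward model. It is a generic illustration assuming R1-style `<think>` tags and boxed answers, not MiCo's actual reward.

```python
# Illustrative rule-based RL reward (a sketch under assumed response
# conventions, not the paper's code): responses earn reward from simple
# verifiable rules instead of a learned reward model.
import re

def rule_based_reward(response: str, gold_answer: str) -> float:
    reward = 0.0
    # Format rule: reasoning must appear inside <think>...</think> tags.
    if re.search(r"<think>.*?</think>", response, re.DOTALL):
        reward += 0.1
    # Accuracy rule: the final boxed answer must match the gold label.
    match = re.search(r"\\boxed\{(.+?)\}", response)
    if match and match.group(1).strip() == gold_answer.strip():
        reward += 1.0
    return reward

print(rule_based_reward("<think>compare images</think> \\boxed{yes}", "yes"))  # 1.1
```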
arXiv Detail & Related papers (2025-06-27T17:59:27Z)
- GIFARC: Synthetic Dataset for Leveraging Human-Intuitive Analogies to Elevate AI Reasoning [7.09254962218677]
State-of-the-art models still achieve accuracy rates of merely 40-55% on the 2024 ARC Competition.
We introduce an analogy-inspired ARC dataset, GIFARC.
GIFARC guides AI agents to evaluate the task analogically before engaging in brute-force pattern search.
arXiv Detail & Related papers (2025-05-27T03:42:51Z)
- Tackling the Abstraction and Reasoning Corpus with Vision Transformers: the Importance of 2D Representation, Positions, and Objects [31.926206783846144]
We show that a Vision Transformer (ViT) fails dramatically on most ARC tasks even when trained on one million examples per task.
We propose ViTARC, a ViT-style architecture that unlocks some of the visual reasoning capabilities required by the ARC.
Our task-specific ViTARC models achieve a test solve rate close to 100% on more than half of the 400 public ARC tasks.
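One way to supply the 2D positional information the title emphasizes is to give every grid cell explicit row and column embeddings; the sketch below is illustrative (Pos2D and its sizes are assumptions), not ViTARC's implementation.

```python
# Hedged sketch of 2D positional information for a ViT on grids
# (illustrative, not the paper's code): separate row and column
# embeddings give each cell an explicit 2D coordinate.
import torch
import torch.nn as nn

class Pos2D(nn.Module):
    def __init__(self, height: int, width: int, dim: int):
        super().__init__()
        self.row = nn.Embedding(height, dim)  # one vector per row index
        self.col = nn.Embedding(width, dim)   # one vector per column index
        self.h, self.w = height, width

    def forward(self) -> torch.Tensor:
        r = torch.arange(self.h).unsqueeze(1).expand(self.h, self.w)
        c = torch.arange(self.w).unsqueeze(0).expand(self.h, self.w)
        # (H*W, dim): added to cell embeddings before the transformer layers
        return (self.row(r) + self.col(c)).reshape(-1, self.row.embedding_dim)

print(Pos2D(30, 30, 64)().shape)  # torch.Size([900, 64])
```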
arXiv Detail & Related papers (2024-10-08T22:25:34Z)
- UniFine: A Unified and Fine-grained Approach for Zero-shot Vision-Language Understanding [88.24517460894634]
We propose a unified framework to take advantage of fine-grained information for zero-shot vision-language learning.
Our framework outperforms previous zero-shot methods on VQA and achieves substantial improvement on SNLI-VE and VCR.
arXiv Detail & Related papers (2023-07-03T09:03:12Z)
- LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations [50.431003245201644]
We show that GPT-4 is unable to "reason" perfectly within non-language domains such as the 1D-ARC or a simple ARC subset.
We propose an object-based representation that is obtained through an external tool, resulting in nearly doubling the performance on solved ARC tasks and near-perfect scores on the easier 1D-ARC.
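A common way to obtain such "objects" from an ARC grid is same-color connected components; the sketch below illustrates that idea (the paper's actual external tool may differ).

```python
# Minimal sketch of extracting "objects" from an ARC grid as same-color
# connected components (illustrative; not the paper's external tool).
from collections import deque

def extract_objects(grid):
    h, w = len(grid), len(grid[0])
    seen, objects = set(), []
    for sy in range(h):
        for sx in range(w):
            if (sy, sx) in seen or grid[sy][sx] == 0:  # treat 0 as background
                continue
            color, cells, queue = grid[sy][sx], [], deque([(sy, sx)])
            seen.add((sy, sx))
            while queue:  # BFS over 4-connected neighbours of the same color
                y, x = queue.popleft()
                cells.append((y, x))
                for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                    if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                            and grid[ny][nx] == color):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            objects.append({"color": color, "cells": cells})
    return objects

print(extract_objects([[1, 1, 0],
                       [0, 1, 0],
                       [0, 0, 2]]))
# [{'color': 1, 'cells': [(0, 0), (0, 1), (1, 1)]}, {'color': 2, 'cells': [(2, 2)]}]
```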
arXiv Detail & Related papers (2023-05-26T16:32:17Z)
- Abstract Visual Reasoning Enabled by Language [8.627180519837657]
We propose a general learning-based framework for solving ARC.
It is centered on transforming tasks from the vision to the language domain.
This composition of language and vision allows for pre-trained models to be leveraged at each stage.
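The first stage of such a vision-to-language transformation can be as simple as serializing the grid into text that a pre-trained language model can read; a minimal illustrative sketch:

```python
# Minimal sketch of moving an ARC task from the vision to the language
# domain (illustrative, not the paper's encoding): serialize the grid
# as whitespace-separated color indices, one row per line.
def grid_to_text(grid) -> str:
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

example = [[0, 3, 0],
           [3, 3, 3]]
print(grid_to_text(example))
# 0 3 0
# 3 3 3
```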
arXiv Detail & Related papers (2023-03-07T17:52:46Z)
- Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
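In practice, this kind of zero-shot assessment can be sketched with an antonym prompt pair, reading the softmax weight of the positive prompt as a score. The snippet below uses a Hugging Face CLIP checkpoint as an assumed stand-in; the exact prompts and model are illustrative, not necessarily the paper's.

```python
# Hedged sketch of zero-shot "look" assessment with CLIP (prompt pair and
# checkpoint are illustrative assumptions, not necessarily the paper's):
# compare an image against an antonym prompt pair and read the softmax
# weight of the positive prompt as a quality score.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color=(90, 120, 200))  # stand-in image
inputs = processor(text=["a good photo", "a bad photo"],
                   images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # (1, 2) image-text similarities
score = logits.softmax(dim=-1)[0, 0].item()    # weight on "a good photo"
print(f"quality score: {score:.3f}")
```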
arXiv Detail & Related papers (2022-07-25T17:58:16Z)