Spatial Mental Modeling from Limited Views
- URL: http://arxiv.org/abs/2506.21458v1
- Date: Thu, 26 Jun 2025 16:38:19 GMT
- Title: Spatial Mental Modeling from Limited Views
- Authors: Baiqiao Yin, Qineng Wang, Pingyue Zhang, Jianshu Zhang, Kangrui Wang, Zihan Wang, Jieyu Zhang, Keshigeyan Chandrasegaran, Han Liu, Ranjay Krishna, Saining Xie, Manling Li, Jiajun Wu, Li Fei-Fei
- Abstract summary: Our new MindCube benchmark with 21,154 questions across 3,268 images exposes this critical gap. Using MindCube, we evaluate how well Vision Language Models (VLMs) build robust spatial mental models. We then explore three approaches to help VLMs approximate spatial mental models, including unseen intermediate views, natural language reasoning chains, and cognitive maps.
- Score: 71.57140964322559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Can Vision Language Models (VLMs) imagine the full scene from just a few views, like humans do? Humans form spatial mental models, internal representations of unseen space, to reason about layout, perspective, and motion. Our new MindCube benchmark with 21,154 questions across 3,268 images exposes this critical gap, where existing VLMs exhibit near-random performance. Using MindCube, we systematically evaluate how well VLMs build robust spatial mental models through representing positions (cognitive mapping), orientations (perspective-taking), and dynamics (mental simulation for "what-if" movements). We then explore three approaches to help VLMs approximate spatial mental models, including unseen intermediate views, natural language reasoning chains, and cognitive maps. The most significant improvement comes from a synergistic approach, "map-then-reason", that jointly trains the model to first generate a cognitive map and then reason upon it. By training models to reason over these internal maps, we boosted accuracy from 37.8% to 60.8% (+23.0%). Adding reinforcement learning pushed performance even further to 70.7% (+32.9%). Our key insight is that such scaffolding of spatial mental models, actively constructing and utilizing internal structured spatial representations with flexible reasoning processes, significantly improves understanding of unobservable space.
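The "map-then-reason" scaffolding described in the abstract lends itself to a simple two-stage inference flow. The sketch below is a hedged illustration assuming a generic `vlm(images, prompt)` callable; the prompt wording, map schema, and function name are hypothetical, and the paper's joint training and reinforcement-learning stages are not reproduced here.

```python
# Hypothetical sketch of "map-then-reason" at inference time: the VLM is first
# asked to externalize a cognitive map of the scene, then to answer the spatial
# question conditioned on that map. Prompt wording, map schema, and names are
# illustrative assumptions, not the authors' released code or training recipe.
from typing import Callable, List

def map_then_reason(
    vlm: Callable[[List[bytes], str], str],  # (images, prompt) -> generated text
    views: List[bytes],                      # a few RGB views of the scene
    question: str,                           # spatial question about unseen space
) -> str:
    # Stage 1: elicit an explicit cognitive map (positions and orientations of
    # objects the model can see or infer from the limited views).
    map_prompt = (
        "List the objects you can identify or infer across these views as JSON, "
        'e.g. [{"name": "...", "position": [x, y], "facing": "..."}].'
    )
    cognitive_map = vlm(views, map_prompt)

    # Stage 2: reason over the generated map rather than the raw views alone.
    reason_prompt = (
        f"Cognitive map of the scene:\n{cognitive_map}\n\n"
        f"Using this map, reason step by step and answer: {question}"
    )
    return vlm(views, reason_prompt)
```

Per the abstract, the paper additionally fine-tunes the model to produce the map and the subsequent reasoning jointly, then refines it with reinforcement learning; the sketch only shows how the two stages compose at inference time.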
Related papers
- Enhancing Spatial Reasoning through Visual and Textual Thinking [45.0026939683271]
The spatial reasoning task aims to reason about spatial relationships in 2D and 3D space. Although vision language models (VLMs) have developed rapidly in recent years, they still struggle with spatial reasoning. We introduce a method that enhances spatial reasoning through visual and textual thinking simultaneously.
arXiv Detail & Related papers (2025-07-28T05:24:54Z)
- MindJourney: Test-Time Scaling with World Models for Spatial Reasoning [82.46482433335535]
Spatial reasoning in 3D space is central to human cognition and indispensable for embodied tasks such as navigation and manipulation. We propose MindJourney, a test-time scaling framework that equips a vision-language model with this missing capability. We show that MindJourney achieves an average performance boost of over 8% on the representative spatial reasoning benchmark SAT.
arXiv Detail & Related papers (2025-07-16T17:59:36Z)
- Unfolding Spatial Cognition: Evaluating Multimodal Models on Visual Simulations [61.235500325327585]
Existing AI benchmarks primarily assess verbal reasoning, neglecting the complexities of non-verbal, multi-step visual simulation. We introduce STARE, a benchmark designed to rigorously evaluate multimodal large language models on tasks better solved through visual simulation. Our evaluations show that models excel at reasoning over simpler 2D transformations, but perform close to random chance on more complex tasks.
arXiv Detail & Related papers (2025-06-05T05:09:46Z)
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models [21.311740507694143]
We introduce OmniSpatial, a benchmark for spatial reasoning grounded in cognitive psychology. Through Internet data crawling and careful manual annotation, we construct over 1.5K question-answer pairs.
arXiv Detail & Related papers (2025-06-03T17:58:29Z)
- Jigsaw-Puzzles: From Seeing to Understanding to Reasoning in Vision-Language Models [6.569837864665502]
We introduce Jigsaw-Puzzles, a novel benchmark consisting of 1,100 carefully curated real-world images with high spatial complexity. Based on this dataset, we design five tasks to rigorously evaluate vision-language models' spatial perception, structural understanding, and reasoning capabilities. The results show that even the strongest model, Gemini-2.5-Pro, achieves only 77.14% overall accuracy and performs particularly poorly on the Order Generation task.
arXiv Detail & Related papers (2025-05-27T05:17:41Z)
- Mind the Gap: Benchmarking Spatial Reasoning in Vision-Language Models [14.442394137843923]
We present a detailed analysis that first delineates the core elements of spatial reasoning. We then assess the performance of these models on both synthetic and real-world images.
arXiv Detail & Related papers (2025-03-25T14:34:06Z)
- Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas [52.478956204238315]
We study the spatial reasoning challenge through the lens of mechanistic interpretability. We observe that successful spatial reasoning correlates strongly with the model's ability to align its attention with actual object locations. Motivated by these findings, we propose ADAPTVIS to sharpen the attention on highly relevant regions when the model is confident (a hedged sketch of this idea appears after this list).
arXiv Detail & Related papers (2025-03-03T17:57:03Z)
- Human Cognitive Benchmarks Reveal Foundational Visual Gaps in MLLMs [65.93003087656754]
VisFactor is a benchmark that digitizes 20 vision-centric subtests from a well-established cognitive psychology assessment. We evaluate 20 frontier Multimodal Large Language Models (MLLMs) from the GPT, Gemini, Claude, LLaMA, Qwen, and SEED families. The best-performing model achieves a score of only 25.19 out of 100, with consistent failures on tasks such as mental rotation, spatial relation inference, and figure-ground discrimination.
arXiv Detail & Related papers (2025-02-23T04:21:32Z)
- Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces [34.809309396448654]
We present a novel video-based visual-spatial intelligence benchmark (VSI-Bench) of over 5,000 question-answer pairs. We find that Multimodal Large Language Models (MLLMs) exhibit competitive, though subhuman, visual-spatial intelligence.
arXiv Detail & Related papers (2024-12-18T18:59:54Z)
- SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models [78.06537464850538]
We show that simulations are surprisingly effective at imparting spatial aptitudes that translate to real images. We show that perfect annotations in simulation are more effective than existing approaches of pseudo-annotating real images.
arXiv Detail & Related papers (2024-12-10T18:52:45Z)
- Where Am I and What Will I See: An Auto-Regressive Model for Spatial Localization and View Prediction [60.964512894143475]
We present Generative Spatial Transformer (GST), a novel auto-regressive framework that jointly addresses spatial localization and view prediction.
Our model simultaneously estimates the camera pose from a single image and predicts the view from a new camera pose, effectively bridging the gap between spatial awareness and visual prediction.
arXiv Detail & Related papers (2024-10-24T17:58:05Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
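For the attention-mechanism entry above ("Why Is Spatial Reasoning Hard for VLMs?"), the confidence-gated sharpening idea can be pictured as temperature scaling of attention logits. The following is a minimal sketch under assumed values; the threshold, temperatures, and function name are hypothetical and not the paper's exact ADAPTVIS formulation.

```python
# Minimal sketch of confidence-gated attention sharpening: when the model's
# confidence is high, lower the softmax temperature so attention mass
# concentrates on the most relevant image regions; otherwise keep the
# distribution unchanged. Threshold and temperature values are assumptions.
import numpy as np

def sharpen_attention(attn_logits: np.ndarray,
                      confidence: float,
                      threshold: float = 0.7,
                      sharp_temperature: float = 0.5) -> np.ndarray:
    """Return attention weights, sharpened only when confidence >= threshold."""
    temperature = sharp_temperature if confidence >= threshold else 1.0
    scaled = attn_logits / temperature
    scaled = scaled - scaled.max()          # numerical stability before exp
    weights = np.exp(scaled)
    return weights / weights.sum()
```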