GRAID: Enhancing Spatial Reasoning of VLMs Through High-Fidelity Data Generation
- URL: http://arxiv.org/abs/2510.22118v2
- Date: Tue, 28 Oct 2025 00:53:28 GMT
- Title: GRAID: Enhancing Spatial Reasoning of VLMs Through High-Fidelity Data Generation
- Authors: Karim Elmaaroufi, Liheng Lai, Justin Svegliato, Yutong Bai, Sanjit A. Seshia, Matei Zaharia
- Abstract summary: We present a framework for generating spatial reasoning training data using 2D boxes from standard detectors. We show that models trained on GRAID data learn spatial reasoning concepts that generalize to held-out question types. We also show that models trained on all question types achieve improvements on several existing benchmarks.
- Score: 31.365285503503475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision Language Models (VLMs) achieve strong performance on many vision-language tasks but often struggle with spatial reasoning, a prerequisite for many applications. Empirically, we find that a dataset produced by a current training data generation pipeline has a human validation rate of only 57.6%. This low rate stems from limitations of current approaches: single-image 3D reconstruction introduces cascading modeling errors and requires wide answer tolerances, while caption-based methods require hyper-detailed annotations and suffer from generative hallucinations. We present GRAID, built on the key insight that qualitative spatial relationships can be reliably determined from 2D geometric primitives alone. By operating exclusively on 2D bounding boxes from standard object detectors, GRAID avoids both 3D reconstruction errors and generative hallucinations, yielding datasets of higher quality than those produced by existing tools, as validated by human evaluations. We apply our framework to the BDD100k, NuImages, and Waymo datasets, generating over 8.5 million high-quality VQA pairs with questions spanning spatial relations, counting, ranking, and size comparisons. We evaluate one of the datasets and find it achieves 91.16% human-validated accuracy, compared to 57.6% for a dataset generated by recent work. Critically, we demonstrate that when trained on GRAID data, models learn spatial reasoning concepts that generalize: models fine-tuned on 6 question types improve on over 10 held-out types, with accuracy gains of 47.5% on BDD and 37.9% on NuImages for Llama 3.2 11B, and when trained on all question types, they achieve improvements on several existing benchmarks such as BLINK. The GRAID framework, datasets, and additional information can be found $\href{this https URL}{here}$.
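The abstract does not spell out GRAID's concrete question templates or implementation, but its key insight lends itself to a short illustration. Below is a minimal Python sketch, with hypothetical class and function names and illustrative thresholds (not GRAID's actual code), of how qualitative spatial-relation, counting, and size-comparison QA pairs could be derived purely from 2D detector boxes.

```python
# Minimal sketch: deriving qualitative spatial VQA pairs from 2D bounding boxes.
# Box format, thresholds, and question templates are illustrative assumptions,
# not the GRAID implementation.
from dataclasses import dataclass


@dataclass
class Box:
    label: str
    x1: float  # left edge
    y1: float  # top edge
    x2: float  # right edge
    y2: float  # bottom edge

    @property
    def area(self) -> float:
        # Apparent (image-plane) size of the detection.
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)


def left_of(a: Box, b: Box) -> bool:
    # Qualitative relation: a lies entirely to the left of b in the image.
    return a.x2 < b.x1


def qa_pairs(boxes: list[Box]) -> list[tuple[str, str]]:
    """Generate spatial-relation, size-comparison, and counting questions."""
    pairs: list[tuple[str, str]] = []
    for a in boxes:
        for b in boxes:
            # Skip identical boxes and same-class pairs to keep questions unambiguous.
            if a is b or a.label == b.label:
                continue
            if left_of(a, b):
                pairs.append((f"Is the {a.label} to the left of the {b.label}?", "yes"))
            if a.area > 1.5 * b.area:  # margin threshold is an assumption
                pairs.append((f"Which appears larger, the {a.label} or the {b.label}?", a.label))
    labels = [b.label for b in boxes]
    for label in set(labels):
        pairs.append((f"How many instances of '{label}' are visible?", str(labels.count(label))))
    return pairs


# Example: two detections from an off-the-shelf detector.
detections = [Box("car", 50, 120, 300, 260), Box("pedestrian", 520, 100, 580, 280)]
for question, answer in qa_pairs(detections):
    print(question, "->", answer)
```

Because every answer follows deterministically from detector outputs, this style of generation avoids the 3D reconstruction and captioning steps that the abstract identifies as the main sources of error in prior pipelines.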
Related papers
- VeriSciQA: An Auto-Verified Dataset for Scientific Visual Question Answering [53.662676566188175]
A key bottleneck lies in the lack of public, large-scale, high-quality Scientific Visual Question Answering (SVQA) datasets. We propose a verification-centric Generate-then-Verify framework that first generates QA pairs with figure-associated textual context. We instantiate this framework to curate VeriSciQA, a dataset of 20,351 QA pairs spanning 20 scientific domains and 12 figure types.
arXiv Detail & Related papers (2025-11-25T04:14:52Z)
- VaseVQA-3D: Benchmarking 3D VLMs on Ancient Greek Pottery [14.993425622341917]
We propose the VaseVQA-3D dataset, which serves as the first 3D visual question answering dataset for ancient Greek pottery analysis. We further develop the VaseVLM model, enhancing model performance in vase artifact analysis through domain-adaptive training.
arXiv Detail & Related papers (2025-10-06T04:28:39Z)
- InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation [54.09384502044162]
We introduce InterAct, a large-scale 3D HOI benchmark featuring dataset and methodological advancements. First, we consolidate and standardize 21.81 hours of HOI data from diverse sources, enriching it with detailed textual annotations. Second, we propose a unified optimization framework to enhance data quality by reducing artifacts and correcting hand motions. Third, we define six benchmarking tasks and develop a unified HOI generative modeling perspective, achieving state-of-the-art performance.
arXiv Detail & Related papers (2025-09-11T15:43:54Z)
- BOP Challenge 2024 on Model-Based and Model-Free 6D Object Pose Estimation [55.13521733366838]
The 6th in a series of public competitions organized to capture the state of the art in 6D object pose estimation and related tasks. In 2024, we introduced new model-free tasks, where no 3D object models are available and methods need to onboard objects just from provided reference videos. We defined a new, more practical 6D object detection task where identities of objects visible in a test image are not provided as input.
arXiv Detail & Related papers (2025-04-03T17:55:19Z)
- MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations [55.022519020409405]
This paper builds the largest multi-modal 3D scene dataset and benchmark to date with hierarchical grounded language annotations, MMScan. The resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks.
arXiv Detail & Related papers (2024-06-13T17:59:30Z)
- TextSquare: Scaling up Text-Centric Visual Instruction Tuning [62.878378882175284]
We introduce a new approach for creating a massive, high-quality instruction-tuning dataset, Square-10M. Our model, TextSquare, considerably surpasses previous open-source state-of-the-art text-centric MLLMs. It even outperforms top-tier models like GPT4V and Gemini in 6 of 10 text-centric benchmarks.
arXiv Detail & Related papers (2024-04-19T11:38:08Z)
- Zero-shot detection of buildings in mobile LiDAR using Language Vision Model [0.8192907805418583]
Language Vision Models (LVMs) surpass the existing State-of-the-Art (SOTA) in two-dimensional (2D) computer vision tasks.
LVMs face significant challenges when it comes to point clouds, a common format for representing 3D data.
Our research aims to 1) apply the Grounded SAM through Spherical Projection to transfer 3D to 2D, and 2) experiment with synthetic data to evaluate its effectiveness.
arXiv Detail & Related papers (2024-04-15T16:56:58Z)
- FS6D: Few-Shot 6D Pose Estimation of Novel Objects [116.34922994123973]
6D object pose estimation networks are limited in their capability to scale to large numbers of object instances.
In this work, we study a new open-set problem, few-shot 6D object pose estimation: estimating the 6D pose of an unknown object from a few support views without extra training.
arXiv Detail & Related papers (2022-03-28T10:31:29Z)
- GI-NNet & RGI-NNet: Development of Robotic Grasp Pose Models, Trainable with Large as well as Limited Labelled Training Datasets, under supervised and semi supervised paradigms [0.0]
We use deep learning techniques to help robots learn to generate and execute appropriate grasps quickly.
We developed a Generative Inception Neural Network (GI-NNet) model, capable of generating antipodal robotic grasps on seen as well as unseen objects.
arXiv Detail & Related papers (2021-07-15T16:55:49Z)
- Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D [71.11034329713058]
Existing datasets lack large-scale, high-quality 3D ground truth information.
Rel3D is the first large-scale, human-annotated dataset for grounding spatial relations in 3D.
We propose minimally contrastive data collection -- a novel crowdsourcing method for reducing dataset bias.
arXiv Detail & Related papers (2020-12-03T01:51:56Z)
- Generative Multi-Stream Architecture For American Sign Language Recognition [15.717424753251674]
Training on datasets with low feature-richness for complex applications limits optimal convergence below human performance.
We propose a generative multistream architecture, eliminating the need for additional hardware with the intent to improve feature convergence without risking impracticability.
Our methods have achieved 95.62% validation accuracy with a variance of 1.42% from training, outperforming past models by 0.45% in validation accuracy and 5.53% in variance.
arXiv Detail & Related papers (2020-03-09T21:04:51Z)