SAT: Spatial Aptitude Training for Multimodal Language Models
- URL: http://arxiv.org/abs/2412.07755v1
- Date: Tue, 10 Dec 2024 18:52:45 GMT
- Title: SAT: Spatial Aptitude Training for Multimodal Language Models
- Authors: Arijit Ray, Jiafei Duan, Reuben Tan, Dina Bashkirova, Rose Hendrix, Kiana Ehsani, Aniruddha Kembhavi, Bryan A. Plummer, Ranjay Krishna, Kuo-Hao Zeng, Kate Saenko
- Abstract summary: We introduce SAT, Spatial Aptitude Training, to improve spatial intelligence.
SAT contains 218K question-answer pairs for 22K synthetic scenes across a training and testing set.
We show that SAT instruction-tuning data improves not only dynamic spatial reasoning on SAT, but also zero-shot performance on existing real-image spatial benchmarks.
- Score: 77.47422592453614
- License:
- Abstract: Spatial perception is a fundamental component of intelligence. While many studies highlight that large multimodal language models (MLMs) struggle to reason about space, they only test for static spatial reasoning, such as categorizing the relative positions of objects. Meanwhile, real-world deployment requires dynamic capabilities like perspective-taking and egocentric action recognition. As a roadmap to improving spatial intelligence, we introduce SAT, Spatial Aptitude Training, which goes beyond static relative object position questions to more dynamic tasks. SAT contains 218K question-answer pairs for 22K synthetic scenes across a training and testing set. Generated using a photo-realistic physics engine, our dataset can be arbitrarily scaled and easily extended to new actions, scenes, and 3D assets. We find that even MLMs that perform relatively well on static questions struggle to accurately answer dynamic spatial questions. Further, we show that SAT instruction-tuning data improves not only dynamic spatial reasoning on SAT, but also zero-shot performance on existing real-image spatial benchmarks: $23\%$ on CVBench, $8\%$ on the harder BLINK benchmark, and $18\%$ on VSR. When instruction-tuned on SAT, our 13B model matches larger proprietary MLMs like GPT4-V and Gemini-3-1.0 in spatial reasoning. Our data/code is available at http://arijitray1993.github.io/SAT/.
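As a rough illustration of how question-answer data of this kind is typically consumed for instruction tuning, the sketch below converts hypothetical SAT-style multiple-choice records into a chat-style training example. The field names (image, question, answers, correct_idx) and the prompt template are assumptions for illustration, not the released SAT schema.

```python
# Minimal sketch: turning SAT-style multiple-choice QA records into
# chat-style instruction-tuning examples. Field names are assumed, not
# taken from the released SAT schema.
from typing import Dict


def to_instruction_example(record: Dict) -> Dict:
    """Format one QA record as a single-turn conversation."""
    options = "\n".join(
        f"({chr(ord('A') + i)}) {opt}" for i, opt in enumerate(record["answers"])
    )
    prompt = (
        "<image>\n"
        f"{record['question']}\n"
        f"Choose one option:\n{options}"
    )
    answer_letter = chr(ord("A") + record["correct_idx"])
    answer_text = record["answers"][record["correct_idx"]]
    return {
        "image": record["image"],
        "conversations": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": f"({answer_letter}) {answer_text}"},
        ],
    }


if __name__ == "__main__":
    demo = {
        "image": "scene_0001_step3.png",
        "question": "If you turn left, which object will be directly in front of you?",
        "answers": ["the sofa", "the lamp", "the table"],
        "correct_idx": 1,
    }
    print(to_instruction_example(demo))
```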
Related papers
- Coarse Correspondences Boost Spatial-Temporal Reasoning in Multimodal Language Model [51.83436609094658]
We introduce Coarse Correspondences, a simple lightweight method that enhances MLLMs' spatial-temporal reasoning with 2D images as input.
Our method uses a lightweight tracking model to identify primary object correspondences between frames in a video or across different image viewpoints.
We demonstrate that this simple training-free approach brings substantial gains to GPT4-V/O consistently on four benchmarks.
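As a rough sketch of the underlying idea (not the authors' released implementation), the snippet below assigns persistent object IDs across frames by greedily matching detections with IoU; such IDs could then be overlaid as visual marks before prompting an MLM. The detection input format is an assumption.

```python
# Rough sketch of coarse object correspondences across frames: greedily
# match boxes between consecutive frames by IoU and carry persistent IDs.
# Detection inputs are assumed; this is not the released implementation.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)


def iou(a: Box, b: Box) -> float:
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def track_ids(frames: List[List[Box]], thresh: float = 0.3) -> List[Dict[int, int]]:
    """Return, per frame, a mapping {box index -> persistent object id}."""
    next_id = 0
    assignments: List[Dict[int, int]] = []
    prev: Dict[int, Box] = {}  # id -> last seen box
    for boxes in frames:
        frame_map: Dict[int, int] = {}
        used = set()
        for i, box in enumerate(boxes):
            best_id, best_iou = None, thresh
            for oid, pbox in prev.items():
                if oid in used:
                    continue
                score = iou(box, pbox)
                if score > best_iou:
                    best_id, best_iou = oid, score
            if best_id is None:
                best_id = next_id
                next_id += 1
            used.add(best_id)
            frame_map[i] = best_id
        assignments.append(frame_map)
        prev = {frame_map[i]: boxes[i] for i in frame_map}
    return assignments


if __name__ == "__main__":
    frames = [[(10, 10, 50, 50), (100, 100, 150, 150)],
              [(12, 11, 52, 49), (200, 40, 240, 90)]]
    print(track_ids(frames))  # the first box keeps id 0; the new box gets id 2
```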
arXiv Detail & Related papers (2024-08-01T17:57:12Z)
- MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations [55.022519020409405]
This paper builds the first largest ever multi-modal 3D scene dataset and benchmark with hierarchical grounded language annotations, MMScan.
The resulting multi-modal 3D dataset encompasses 1.4M meta-annotated captions on 109k objects and 7.7k regions as well as over 3.04M diverse samples for 3D visual grounding and question-answering benchmarks.
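A minimal, illustrative container for hierarchical grounded annotations of this kind (scene, region, and object levels) might look like the sketch below; the class and field names are assumptions, not the actual MMScan format.

```python
# Illustrative container for hierarchical grounded annotations
# (scene -> regions -> objects); not the actual MMScan data format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ObjectAnnotation:
    object_id: str
    caption: str          # meta-annotated object-level caption
    bbox_3d: List[float]  # e.g. [cx, cy, cz, dx, dy, dz]


@dataclass
class RegionAnnotation:
    region_id: str
    caption: str          # region-level grounded caption
    objects: List[ObjectAnnotation] = field(default_factory=list)


@dataclass
class SceneAnnotation:
    scene_id: str
    regions: List[RegionAnnotation] = field(default_factory=list)

    def all_object_captions(self) -> List[str]:
        return [o.caption for r in self.regions for o in r.objects]
```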
arXiv Detail & Related papers (2024-06-13T17:59:30Z)
- SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities [59.39858959066982]
Understanding and reasoning about spatial relationships is a fundamental capability for Visual Question Answering (VQA) and robotics.
We develop an automatic 3D spatial VQA data generation framework that scales up to 2 billion VQA examples on 10 million real-world images.
By training a VLM on such data, we significantly enhance its ability on both qualitative and quantitative spatial VQA.
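A toy version of such rule-based spatial VQA generation, working from assumed object names and 3D centroids rather than the paper's real-image perception pipeline, could look like this:

```python
# Sketch of rule-based spatial VQA generation from 3D object centroids,
# in the spirit of automatic data-generation pipelines like SpatialVLM.
# Object names and coordinates are made up for illustration.
import random
from typing import Dict, List, Tuple


def qualitative_qa(a: str, b: str, pa, pb) -> Tuple[str, str]:
    # Camera convention assumed: +x right, +z forward (depth), units in meters.
    rel = "left of" if pa[0] < pb[0] else "right of"
    return f"Is the {a} to the left or right of the {b}?", f"The {a} is to the {rel} the {b}."


def quantitative_qa(a: str, b: str, pa, pb) -> Tuple[str, str]:
    dist = sum((x - y) ** 2 for x, y in zip(pa, pb)) ** 0.5
    return f"Roughly how far apart are the {a} and the {b}?", f"About {dist:.1f} meters."


def generate(objects: Dict[str, Tuple[float, float, float]], n: int = 4) -> List[Tuple[str, str]]:
    names = list(objects)
    qa = []
    for _ in range(n):
        a, b = random.sample(names, 2)
        maker = random.choice([qualitative_qa, quantitative_qa])
        qa.append(maker(a, b, objects[a], objects[b]))
    return qa


if __name__ == "__main__":
    scene = {"chair": (0.5, 0.0, 2.0), "table": (-0.3, 0.0, 2.5), "lamp": (1.2, 0.0, 4.0)}
    for question, answer in generate(scene):
        print(question, "->", answer)
```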
arXiv Detail & Related papers (2024-01-22T18:01:01Z) - What's "up" with vision-language models? Investigating their struggle
with spatial reasoning [76.2406963762722]
Three new corpora quantify model comprehension of basic spatial relations.
We evaluate 18 vision-language (VL) models, finding that all perform poorly.
We conclude by studying causes of this surprising behavior.
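A minimal scoring loop for a benchmark of this shape (an image paired with caption options that differ only in the spatial relation) might look like the sketch below; the score_caption interface is a placeholder, not any particular benchmark's API.

```python
# Minimal scoring loop for a preposition-swap spatial benchmark: each
# item pairs an image with caption options that differ only in the
# spatial relation, and the model must prefer the correct one. The
# score_caption(image, caption) -> float interface is a placeholder.
from typing import Callable, List, Tuple

Item = Tuple[str, List[str], int]  # (image_path, caption_options, correct_idx)


def evaluate(items: List[Item], score_caption: Callable[[str, str], float]) -> float:
    correct = 0
    for image, options, gold in items:
        scores = [score_caption(image, c) for c in options]
        if max(range(len(options)), key=scores.__getitem__) == gold:
            correct += 1
    return correct / max(len(items), 1)


if __name__ == "__main__":
    # Toy stand-in model that scores captions by length (expected ~chance).
    items = [("img1.jpg", ["a mug on a table", "a mug under a table"], 0)]
    print(evaluate(items, lambda img, cap: -len(cap)))
```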
arXiv Detail & Related papers (2023-10-30T17:50:15Z)
- W2SAT: Learning to generate SAT instances from Weighted Literal Incidence Graphs [11.139131079925113]
W2SAT is a framework to generate SAT formulas by learning intrinsic structures and properties from given real-world/industrial instances.
We introduce a novel SAT representation called Weighted Literal Incidence Graph (WLIG), which exhibits strong representation ability and generalizability.
Decoding from WLIG into SAT problems is then modeled as finding overlapping cliques with a novel hill-climbing optimization method.
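A minimal sketch of the graph-construction step, assuming edge weights are simply clause co-occurrence counts (the paper's exact weighting may differ), is shown below.

```python
# Sketch of building a weighted literal incidence graph from a CNF
# formula: nodes are literals, and an edge's weight counts how many
# clauses contain both literals. The exact weighting used by W2SAT may
# differ; this only illustrates the graph-construction idea.
from collections import defaultdict
from itertools import combinations
from typing import Dict, List, Tuple

Clause = List[int]  # DIMACS-style literals, e.g. [1, -2, 3]


def build_wlig(clauses: List[Clause]) -> Dict[Tuple[int, int], int]:
    weights: Dict[Tuple[int, int], int] = defaultdict(int)
    for clause in clauses:
        for a, b in combinations(sorted(set(clause)), 2):
            weights[(a, b)] += 1
    return dict(weights)


if __name__ == "__main__":
    cnf = [[1, -2, 3], [1, 3], [-2, 3, 4]]
    for edge, w in sorted(build_wlig(cnf).items()):
        print(edge, w)
```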
arXiv Detail & Related papers (2023-02-01T06:30:41Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural feature.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
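A standard preprocessing step for such pipelines is back-projecting an RGBD frame into a colored point cloud with pinhole intrinsics; the sketch below shows this step only and is not the paper's neural renderer.

```python
# Back-projecting an RGBD frame into a colored point cloud with pinhole
# intrinsics; a generic preprocessing step, not the paper's renderer.
import numpy as np


def rgbd_to_pointcloud(depth: np.ndarray, rgb: np.ndarray,
                       fx: float, fy: float, cx: float, cy: float):
    """depth: (H, W) in meters, rgb: (H, W, 3). Returns (N, 3) xyz and (N, 3) colors."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0  # drop missing depth values
    z = depth[valid]
    x = (us[valid] - cx) * z / fx
    y = (vs[valid] - cy) * z / fy
    xyz = np.stack([x, y, z], axis=-1)
    colors = rgb[valid].astype(np.float32) / 255.0
    return xyz, colors


if __name__ == "__main__":
    depth = np.full((4, 4), 2.0)
    depth[0, 0] = 0.0  # simulate one missing depth pixel
    rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    pts, cols = rgbd_to_pointcloud(depth, rgb, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
    print(pts.shape, cols.shape)  # (15, 3) (15, 3)
```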
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- StepGame: A New Benchmark for Robust Multi-Hop Spatial Reasoning in Texts [12.254118455438535]
In this paper, we present a new Question-Answering dataset called StepGame for robust multi-hop spatial reasoning in texts.
We also propose a Tensor Product-based Memory-Augmented Neural Network (TP-MANN) specialized for spatial reasoning tasks.
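A toy illustration of the multi-hop composition such questions require, assuming a small relation vocabulary mapped to unit grid offsets (not StepGame's exact vocabulary), is sketched below.

```python
# Toy multi-hop composition of the kind StepGame tests: chain several
# relative-position statements and recover the relation between the
# first and last entity by summing unit offsets on a grid. The relation
# vocabulary here is an assumption, not StepGame's exact one.
from typing import Dict, List, Tuple

OFFSETS: Dict[str, Tuple[int, int]] = {
    "above": (0, 1), "below": (0, -1), "left": (-1, 0), "right": (1, 0),
    "upper-left": (-1, 1), "upper-right": (1, 1),
    "lower-left": (-1, -1), "lower-right": (1, -1),
}


def compose(chain: List[str]) -> str:
    """chain[i] is the relation of entity i+1 with respect to entity i."""
    dx = sum(OFFSETS[r][0] for r in chain)
    dy = sum(OFFSETS[r][1] for r in chain)
    if dx == 0 and dy == 0:
        return "overlapping"
    if dx == 0:
        return "above" if dy > 0 else "below"
    if dy == 0:
        return "right" if dx > 0 else "left"
    return ("upper-" if dy > 0 else "lower-") + ("right" if dx > 0 else "left")


if __name__ == "__main__":
    # B is right of A, C is above B, D is left of C -> where is D relative to A?
    print(compose(["right", "above", "left"]))  # above
```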
arXiv Detail & Related papers (2022-04-18T12:46:46Z)
- SpartQA: A Textual Question Answering Benchmark for Spatial Reasoning [10.810615375345511]
This paper proposes a benchmark for spatial reasoning on natural language text.
We design grammar and reasoning rules to automatically generate a spatial description of visual scenes and corresponding QA pairs.
Experiments show that further pretraining LMs on these automatically generated data significantly improves LMs' capability on spatial understanding.
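A toy rule-based generator in this spirit, with illustrative templates rather than the paper's actual grammar, might look like:

```python
# Toy rule-based generator in the spirit of SpartQA's pipeline: place
# shapes on a line, describe adjacent pairs from simple templates, and
# emit a QA pair whose answer may require chaining the stated relations.
import random
from typing import List, Tuple


def generate_scene(n: int = 3) -> List[Tuple[str, int]]:
    shapes = random.sample(["circle", "square", "triangle", "star"], n)
    positions = random.sample(range(10), n)
    return list(zip(shapes, positions))


def describe(scene: List[Tuple[str, int]]) -> str:
    sents = []
    for (s1, p1), (s2, p2) in zip(scene, scene[1:]):
        rel = "to the left of" if p1 < p2 else "to the right of"
        sents.append(f"The {s1} is {rel} the {s2}.")
    return " ".join(sents)


def make_qa(scene: List[Tuple[str, int]]) -> Tuple[str, str]:
    (s1, p1), (s2, p2) = random.sample(scene, 2)
    answer = "yes" if p1 < p2 else "no"
    return f"Is the {s1} to the left of the {s2}?", answer


if __name__ == "__main__":
    random.seed(0)
    scene = generate_scene()
    print(describe(scene))
    print(make_qa(scene))
```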
arXiv Detail & Related papers (2021-04-12T21:37:18Z)