Chat-3D v2: Bridging 3D Scene and Large Language Models with Object Identifiers
- URL: http://arxiv.org/abs/2312.08168v2
- Date: Fri, 15 Dec 2023 06:15:33 GMT
- Title: Chat-3D v2: Bridging 3D Scene and Large Language Models with Object Identifiers
- Authors: Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang Zhao, Tao Jin, Zhou Zhao
- Abstract summary: We introduce the use of object identifiers to freely reference objects during a conversation.
We propose a two-stage alignment method, which involves learning an attribute-aware token and a relation-aware token for each object.
Experiments conducted on traditional datasets like ScanQA, ScanRefer, and Nr3D/Sr3D showcase the effectiveness of our proposed method.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has evidenced the significant potential of Large Language
Models (LLMs) in handling challenging tasks within 3D scenes. However, current
models are constrained to addressing object-centric tasks, where each
question-answer pair focuses solely on an individual object. In real-world
applications, users may pose queries involving multiple objects or expect
answers that precisely reference various objects. We introduce the use of
object identifiers to freely reference objects during a conversation. While
this solution appears straightforward, it presents two main challenges: 1) How
to establish a reliable one-to-one correspondence between each object and its
identifier? 2) How to incorporate complex spatial relationships among dozens of
objects into the embedding space of the LLM? To address these challenges, we
propose a two-stage alignment method, which involves learning an
attribute-aware token and a relation-aware token for each object. These tokens
capture the object's attributes and spatial relationships with surrounding
objects in the 3D scene. Once the alignment is established, we can fine-tune
our model on various downstream tasks using instruction tuning. Experiments
conducted on traditional datasets like ScanQA, ScanRefer, and Nr3D/Sr3D
showcase the effectiveness of our proposed method. Additionally, we create a 3D
scene captioning dataset annotated with rich object identifiers, with the
assistance of GPT-4. This dataset aims to further explore the capability of
object identifiers in effective object referencing and precise scene
understanding.
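To make the identifier scheme concrete, here is a minimal PyTorch sketch of how each object could contribute an identifier embedding plus the two aligned tokens before the sequence is prepended to the LLM prompt. Everything here (class and layer names, dimensions, the projection design) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; the paper does not publish these exact values here.
N_OBJECTS, D_3D, D_LLM = 32, 512, 4096

class ObjectTokenizer(nn.Module):
    """Sketch: map each detected object to three LLM-space embeddings:
    an identifier embedding (e.g. <OBJ_7>) plus an attribute-aware and a
    relation-aware token, mirroring the two-stage alignment described above."""
    def __init__(self, n_objects: int = N_OBJECTS):
        super().__init__()
        # One learnable identifier embedding per object slot.
        self.id_embed = nn.Embedding(n_objects, D_LLM)
        # Stage 1: attribute-aware token from per-object 3D features.
        self.attr_proj = nn.Linear(D_3D, D_LLM)
        # Stage 2: relation-aware token from features that aggregate spatial
        # relations with surrounding objects (e.g. via attention).
        self.rel_proj = nn.Linear(D_3D, D_LLM)

    def forward(self, obj_feats, rel_feats):
        ids = torch.arange(obj_feats.size(0))
        tokens = torch.stack(
            [self.id_embed(ids), self.attr_proj(obj_feats), self.rel_proj(rel_feats)],
            dim=1,
        )  # (n_objects, 3, D_LLM)
        # Flatten to a token sequence that is prepended to the text prompt,
        # so the LLM can refer back to <OBJ_i> identifiers in its answer.
        return tokens.flatten(0, 1)

tokenizer = ObjectTokenizer()
scene_tokens = tokenizer(torch.randn(N_OBJECTS, D_3D), torch.randn(N_OBJECTS, D_3D))
print(scene_tokens.shape)  # torch.Size([96, 4096])
```

The point of this layout is that the LLM sees each <OBJ_i> identifier adjacent to that object's attribute and relation tokens, so its answers can reference individual objects unambiguously.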
Related papers
- Descrip3D: Enhancing Large Language Model-based 3D Scene Understanding with Object-Level Text Descriptions
Descrip3D is a novel framework that explicitly encodes the relationships between objects using natural language.
It allows for unified reasoning across various tasks such as grounding, captioning, and question answering.
arXiv Detail & Related papers (2025-07-19T09:19:16Z)
- MLLM-For3D: Adapting Multimodal Large Language Model for 3D Reasoning Segmentation
Reasoning segmentation aims to segment target objects in complex scenes based on human intent and spatial reasoning.
Recent multimodal large language models (MLLMs) have demonstrated impressive 2D image reasoning segmentation.
We introduce MLLM-For3D, a framework that transfers knowledge from 2D MLLMs to 3D scene understanding.
arXiv Detail & Related papers (2025-03-23T16:40:20Z)
- Articulate3D: Holistic Understanding of 3D Scenes as Universal Scene Description
3D scene understanding is a long-standing challenge in computer vision and a key component in enabling mixed reality, wearable computing, and embodied AI.
We introduce Articulate3D, an expertly curated 3D dataset featuring high-quality manual annotations on 280 indoor scenes.
We also present USDNet, a novel unified framework capable of simultaneously predicting part segmentation along with a full specification of motion attributes for articulated objects.
arXiv Detail & Related papers (2024-12-02T11:33:55Z)
- Functionality understanding and segmentation in 3D scenes
We introduce Fun3DU, the first approach designed for functionality understanding in 3D scenes.
Fun3DU uses a language model to parse the task description through Chain-of-Thought reasoning.
We evaluate Fun3DU on SceneFun3D, currently the only dataset that benchmarks this task.
arXiv Detail & Related papers (2024-11-25T11:57:48Z)
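As a concrete illustration of Fun3DU's Chain-of-Thought parsing step, the snippet below builds a hypothetical prompt that reduces a free-form task description to the object and functional part to segment; the actual prompts used by Fun3DU may differ.

```python
# Hypothetical prompt template; Fun3DU's real prompts are in its paper/repo.
COT_PROMPT = """Task: "{task}"
Let's think step by step.
1. What is the goal of the task?
2. Which object in the scene must be interacted with to achieve it?
3. Which functional part of that object is used (handle, button, knob, ...)?
Answer with the object and its functional part."""

def build_parsing_prompt(task: str) -> str:
    """Build a Chain-of-Thought prompt asking a language model to reduce a
    free-form task description to the object/part that must be segmented."""
    return COT_PROMPT.format(task=task)

print(build_parsing_prompt("open the top drawer of the nightstand"))
```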
- Grounded 3D-LLM with Referent Tokens
We propose Grounded 3D-LLM to consolidate various 3D vision tasks within a unified generative framework.
The model uses scene referent tokens as special noun phrases to reference 3D scenes.
Per-task instruction-following templates are employed to ensure naturalness and diversity when translating 3D vision tasks into language formats.
arXiv Detail & Related papers (2024-05-16T18:03:41Z)
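A minimal sketch of how scene referent tokens could be registered as special vocabulary in an off-the-shelf LLM, assuming a Hugging Face-style tokenizer; the token format and count here are assumptions, not Grounded 3D-LLM's actual vocabulary.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: Grounded 3D-LLM's real base model and vocabulary differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Register referent tokens that act as noun phrases bound to 3D regions,
# e.g. "the <ref_5> is left of the <ref_2>".
referent_tokens = [f"<ref_{i}>" for i in range(64)]
tokenizer.add_special_tokens({"additional_special_tokens": referent_tokens})
model.resize_token_embeddings(len(tokenizer))

ids = tokenizer("pick up <ref_5> next to <ref_2>", return_tensors="pt").input_ids
print(ids.shape)
```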
- PARIS3D: Reasoning-based 3D Part Segmentation Using Large Multimodal Model
We introduce a novel segmentation task known as reasoning part segmentation for 3D objects.
We output a segmentation mask based on complex and implicit textual queries about specific parts of a 3D object.
We propose a model that is capable of segmenting parts of 3D objects based on implicit textual queries and generating natural language explanations.
arXiv Detail & Related papers (2024-04-04T23:38:45Z)
- Multi3DRefer: Grounding Text Description to Multiple 3D Objects
We introduce the task of localizing a flexible number of objects in real-world 3D scenes using natural language descriptions.
Our dataset contains 61,926 descriptions of 11,609 objects, where each description references zero, one, or multiple target objects.
We develop a stronger baseline that renders proposals online and applies contrastive learning to their 2D CLIP features, outperforming the state of the art on the ScanRefer benchmark.
arXiv Detail & Related papers (2023-09-11T06:03:39Z)
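Multi3DRefer's contrastive baseline can be sketched as a symmetric InfoNCE objective between CLIP embeddings of online-rendered proposals and their matched descriptions; the formulation below is an assumed stand-in, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(proposal_feats, text_feats, temperature=0.07):
    """Sketch of a CLIP-style contrastive objective: proposal_feats are CLIP
    image embeddings of online-rendered 3D proposals, text_feats the CLIP
    embeddings of their matched descriptions (i-th text matches i-th proposal)."""
    p = F.normalize(proposal_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = p @ t.T / temperature                 # (N, N) similarity matrix
    labels = torch.arange(p.size(0))               # positives on the diagonal
    # Symmetric InfoNCE over proposals->texts and texts->proposals.
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2

loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```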
- OpenScene: 3D Scene Understanding with Open Vocabularies
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision.
We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space.
This zero-shot approach enables task-agnostic training and open-vocabulary queries.
arXiv Detail & Related papers (2022-11-28T18:58:36Z)
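Because OpenScene predicts point features directly in CLIP space, an open-vocabulary query reduces to cosine similarity against an encoded text prompt. The sketch below assumes OpenAI's clip package and 512-dimensional ViT-B/32 features; OpenScene's real pipeline differs in how the 3D features are produced.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP; OpenScene co-embeds point features in this space

model, _ = clip.load("ViT-B/32", device="cpu")

@torch.no_grad()
def open_vocab_query(point_feats, query: str):
    """Sketch: point_feats (N, 512) are per-point features predicted in CLIP
    space; score each point against a free-form text query by cosine similarity."""
    text = clip.tokenize([query])
    text_feat = model.encode_text(text).float()
    sims = F.normalize(point_feats, dim=-1) @ F.normalize(text_feat, dim=-1).T
    return sims.squeeze(-1)  # (N,) per-point relevance, no 3D labels needed

scores = open_vocab_query(torch.randn(10_000, 512), "a soft place to sit")
```

No labels or retraining are involved at query time, which is what makes the approach task-agnostic.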
- HyperDet3D: Learning a Scene-conditioned 3D Object Detector
We propose HyperDet3D to explore scene-conditioned prior knowledge for 3D object detection.
Our HyperDet3D achieves state-of-the-art results on the 3D object detection benchmark of the ScanNet and SUN RGB-D datasets.
arXiv Detail & Related papers (2022-04-12T07:57:58Z)
- Point2Seq: Detecting 3D Objects as Sequences
We present a simple and effective framework, named Point2Seq, for 3D object detection from point clouds.
We view each 3D object as a sequence of words and reformulate the 3D object detection task as decoding words from 3D scenes in an auto-regressive manner.
arXiv Detail & Related papers (2022-03-25T00:20:31Z)
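A toy rendering of Point2Seq's idea of decoding an object as a sequence of attribute "words": each word is predicted conditioned on the scene feature and on the words already emitted. The decoder below is a hypothetical stand-in, not Point2Seq's architecture.

```python
import torch
import torch.nn as nn

class ObjectWordDecoder(nn.Module):
    """Sketch: decode one object as a sequence of attribute 'words'
    (center, size, heading, class), each conditioned on the scene feature
    and on the words already emitted, i.e. auto-regressively."""
    WORDS = ["center", "size", "heading", "class"]

    def __init__(self, d: int = 256):
        super().__init__()
        self.step = nn.GRUCell(d, d)   # carries the decoding state forward
        self.heads = nn.ModuleDict({w: nn.Linear(d, d) for w in self.WORDS})

    def forward(self, scene_feat):
        h, out = torch.zeros_like(scene_feat), {}
        for w in self.WORDS:           # each word sees prior words through h
            h = self.step(scene_feat, h)
            out[w] = self.heads[w](h)
        return out

decoder = ObjectWordDecoder()
words = decoder(torch.randn(4, 256))   # 4 scene locations -> 4 object word sets
```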
- LanguageRefer: Spatial-Language Model for 3D Visual Grounding
We develop a spatial-language model for a 3D visual grounding problem.
We show that our model performs competitively on visio-linguistic datasets proposed by ReferIt3D.
arXiv Detail & Related papers (2021-07-07T18:55:03Z)