Chat-3D v2: Bridging 3D Scene and Large Language Models with Object
Identifiers
- URL: http://arxiv.org/abs/2312.08168v2
- Date: Fri, 15 Dec 2023 06:15:33 GMT
- Title: Chat-3D v2: Bridging 3D Scene and Large Language Models with Object
Identifiers
- Authors: Haifeng Huang, Zehan Wang, Rongjie Huang, Luping Liu, Xize Cheng, Yang
Zhao, Tao Jin, Zhou Zhao
- Abstract summary: We introduce the use of object identifiers to freely reference objects during a conversation.
We propose a two-stage alignment method, which involves learning an attribute-aware token and a relation-aware token for each object.
Experiments conducted on traditional datasets like ScanQA, ScanRefer, and Nr3D/Sr3D showcase the effectiveness of our proposed method.
- Score: 62.232809030044116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has demonstrated the significant potential of Large Language
Models (LLMs) in handling challenging tasks within 3D scenes. However, current
models are constrained to addressing object-centric tasks, where each
question-answer pair focuses solely on an individual object. In real-world
applications, users may pose queries involving multiple objects or expect
answers that precisely reference various objects. We introduce the use of
object identifiers to freely reference objects during a conversation. While
this solution appears straightforward, it presents two main challenges: 1) how
to establish a reliable one-to-one correspondence between each object and its
identifier, and 2) how to incorporate complex spatial relationships among
dozens of objects into the LLM's embedding space. To address these challenges,
we
propose a two-stage alignment method, which involves learning an
attribute-aware token and a relation-aware token for each object. These tokens
capture the object's attributes and spatial relationships with surrounding
objects in the 3D scene. Once the alignment is established, we can fine-tune
our model on various downstream tasks using instruction tuning. Experiments
conducted on established benchmarks such as ScanQA, ScanRefer, and Nr3D/Sr3D
showcase the effectiveness of our proposed method. Additionally, we create a 3D
scene captioning dataset annotated with rich object identifiers, with the
assistance of GPT-4. This dataset aims to further explore the capability of
object identifiers in effective object referencing and precise scene
understanding.
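To make the identifier scheme concrete, below is a minimal, hypothetical PyTorch sketch of how a per-object identifier token could be paired with the attribute-aware and relation-aware tokens to form a scene prefix for the LLM. The module name, feature dimensions, and three-tokens-per-object layout are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ObjectTokenEmbedder(nn.Module):
    """Minimal sketch of the object-identifier idea. The module name,
    feature dimensions, and three-tokens-per-object layout are
    illustrative assumptions, not the authors' implementation."""

    def __init__(self, attr_dim=256, rel_dim=256, llm_dim=4096, max_objects=64):
        super().__init__()
        # One learnable embedding per object slot, playing the role of
        # special identifier tokens <obj_0> ... <obj_63>.
        self.id_embed = nn.Embedding(max_objects, llm_dim)
        # Stage 1: project attribute features (color, shape, ...) into the
        # LLM embedding space -> the attribute-aware token.
        self.attr_proj = nn.Linear(attr_dim, llm_dim)
        # Stage 2: project spatial-context features -> the relation-aware token.
        self.rel_proj = nn.Linear(rel_dim, llm_dim)

    def forward(self, attr_feats, rel_feats):
        # attr_feats: (n_objects, attr_dim); rel_feats: (n_objects, rel_dim)
        n = attr_feats.size(0)
        ids = self.id_embed(torch.arange(n, device=attr_feats.device))
        attr_tok = self.attr_proj(attr_feats)
        rel_tok = self.rel_proj(rel_feats)
        # Each object contributes [<obj_i>, attribute token, relation token];
        # the flattened sequence is prepended to the text-token embeddings.
        return torch.stack([ids, attr_tok, rel_tok], dim=1).reshape(n * 3, -1)

# Usage: 12 detected objects -> a 36-token scene prefix for the LLM.
embedder = ObjectTokenEmbedder()
prefix = embedder(torch.randn(12, 256), torch.randn(12, 256))
print(prefix.shape)  # torch.Size([36, 4096])
```

Read this way, the first alignment stage ties each identifier to one object's attributes (challenge 1), and the second stage trains the relation projection so that spatial relationships among dozens of objects survive the mapping into the LLM's embedding space (challenge 2).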
Related papers
- 1st Place Solution for MOSE Track in CVPR 2024 PVUW Workshop: Complex Video Object Segmentation [72.54357831350762]
We propose a semantic embedding video object segmentation model and use the salient features of objects as query representations.
We train our model on a large-scale video object segmentation dataset.
Our model achieves first place (84.45%) on the test set of the Complex Video Object Segmentation challenge.
arXiv Detail & Related papers (2024-06-07T03:13:46Z)
- Multi3DRefer: Grounding Text Description to Multiple 3D Objects [15.54885309441946]
We introduce the task of localizing a flexible number of objects in real-world 3D scenes using natural language descriptions.
Our dataset contains 61,926 descriptions of 11,609 objects, where each description references zero, one, or multiple target objects.
We develop a stronger baseline that leverages 2D CLIP features by rendering proposals online and training with contrastive learning; it outperforms the state of the art on the ScanRefer benchmark.
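As a rough sketch of this baseline (hypothetical code, not the authors'): render each 3D proposal to a 2D view, embed the views and the description with CLIP, and train a multi-positive contrastive loss so that zero, one, or many targets are all expressible. The online rendering step is stubbed out, and all function names are assumptions.

```python
import torch
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def score_proposals(proposal_views, description):
    """proposal_views: (n_proposals, 3, 224, 224), CLIP-preprocessed renders
    of each 3D box (the online renderer itself is omitted here)."""
    img = F.normalize(model.encode_image(proposal_views.to(device)), dim=-1)
    txt = F.normalize(model.encode_text(clip.tokenize([description]).to(device)), dim=-1)
    return (img @ txt.T).squeeze(-1)  # (n_proposals,) cosine similarities

def multi_target_loss(scores, is_target):
    # Binary relaxation of InfoNCE: each proposal is independently pulled
    # toward or pushed away from the description, so descriptions with
    # zero, one, or many target objects are handled uniformly.
    return F.binary_cross_entropy_with_logits(scores / 0.07, is_target.float())
```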
arXiv Detail & Related papers (2023-09-11T06:03:39Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z) - PatchContrast: Self-Supervised Pre-training for 3D Object Detection [14.603858163158625]
We introduce PatchContrast, a novel self-supervised point cloud pre-training framework for 3D object detection.
We show that our method outperforms existing state-of-the-art models on three commonly used 3D detection datasets.
arXiv Detail & Related papers (2023-08-14T07:45:54Z)
- 3DRP-Net: 3D Relative Position-aware Network for 3D Visual Grounding [58.924180772480504]
3D visual grounding aims to localize the target object in a 3D point cloud by a free-form language description.
We propose a relation-aware one-stage framework, named 3D Relative Position-aware Network (3DRP-Net).
arXiv Detail & Related papers (2023-07-25T09:33:25Z)
- CMR3D: Contextualized Multi-Stage Refinement for 3D Object Detection [57.44434974289945]
We propose the Contextualized Multi-Stage Refinement for 3D Object Detection (CMR3D) framework.
Our framework takes a 3D scene as input and strives to explicitly integrate useful contextual information of the scene.
In addition to 3D object detection, we investigate the effectiveness of our framework for the problem of 3D object counting.
arXiv Detail & Related papers (2022-09-13T05:26:09Z)
- ObjectFolder: A Dataset of Objects with Implicit Visual, Auditory, and Tactile Representations [52.226947570070784]
We present ObjectFolder, a dataset of 100 objects that addresses both challenges with two key innovations.
First, ObjectFolder encodes the visual, auditory, and tactile sensory data for all objects, enabling a number of multisensory object recognition tasks.
Second, ObjectFolder employs a uniform, object-centric, and implicit representation for each object's visual textures, acoustic simulations, and tactile readings, making the dataset flexible to use and easy to share.
arXiv Detail & Related papers (2021-09-16T14:00:59Z)
- A Deep Learning Approach to Object Affordance Segmentation [31.221897360610114]
We design an autoencoder that infers pixel-wise affordance labels in both videos and static images.
Our model eliminates the need for object labels and bounding boxes by using a soft-attention mechanism.
We show that our model achieves competitive results compared to strongly supervised methods on SOR3D-AFF.
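For intuition, here is a hypothetical encoder-decoder with a spatial soft-attention map standing in for object labels and boxes; the layer sizes and the number of affordance classes are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AffordanceAutoencoder(nn.Module):
    """Illustrative sketch: encoder-decoder with soft attention for
    pixel-wise affordance labels. Sizes are assumptions, not the
    SOR3D-AFF authors' architecture."""

    def __init__(self, in_ch=3, num_affordances=9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Soft attention: a 1-channel spatial map that re-weights features,
        # standing in for object labels / bounding boxes.
        self.attention = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_affordances, 4, stride=2, padding=1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        attn = self.attention(feats)       # (B, 1, H/4, W/4) in [0, 1]
        return self.decoder(feats * attn)  # per-pixel affordance logits

model = AffordanceAutoencoder()
logits = model(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 9, 128, 128])
```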
arXiv Detail & Related papers (2020-04-18T15:34:41Z)