CLIP-FO3D: Learning Free Open-world 3D Scene Representations from 2D Dense CLIP
- URL: http://arxiv.org/abs/2303.04748v1
- Date: Wed, 8 Mar 2023 17:30:58 GMT
- Title: CLIP-FO3D: Learning Free Open-world 3D Scene Representations from 2D Dense CLIP
- Authors: Junbo Zhang, Runpei Dong, Kaisheng Ma
- Abstract summary: Training a 3D scene understanding model requires complicated human annotations.
Vision-language pre-training models (e.g., CLIP) have shown remarkable open-world reasoning properties.
We propose directly transferring CLIP's feature space to a 3D scene understanding model without any form of supervision.
- Score: 19.66617835750012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a 3D scene understanding model requires complicated human
annotations, which are laborious to collect and result in a model only encoding
closed-set object semantics. In contrast, vision-language pre-training models
(e.g., CLIP) have shown remarkable open-world reasoning properties. To this
end, we propose directly transferring CLIP's feature space to a 3D scene
understanding model without any form of supervision. We first modify CLIP's
input and forwarding process so that it can be adapted to extract dense pixel
features for 3D scene contents. We then project multi-view image features to
the point cloud and train a 3D scene understanding model with feature
distillation. Without any annotations or additional training, our model
achieves promising annotation-free semantic segmentation results on
open-vocabulary semantics and long-tailed concepts. Besides, serving as a
cross-modal pre-training framework, our method can be used to improve data
efficiency during fine-tuning. Our model outperforms previous SOTA methods in
various zero-shot and data-efficient learning benchmarks. Most importantly, our
model successfully inherits CLIP's richly structured knowledge, allowing 3D scene
understanding models to recognize not only object concepts but also open-world
semantics.
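The abstract describes a three-step pipeline: extract dense per-pixel CLIP features from posed multi-view images, project and fuse them onto the point cloud, and distill the fused features into a 3D network whose outputs can then be matched against CLIP text embeddings for annotation-free segmentation. Below is a minimal PyTorch sketch of that pipeline, not the authors' implementation: the averaging-based multi-view fusion, the hypothetical `point_net` backbone, the camera conventions, and the prompt wording are illustrative assumptions.
```python
import torch
import torch.nn.functional as F


def project_features_to_points(points, views):
    """Fuse multi-view dense CLIP features onto a point cloud by averaging.

    points: (N, 3) world-space point cloud.
    views:  list of dicts, each with 'feat' (C, H, W) dense per-pixel CLIP
            features, 'K' (3, 3) intrinsics, 'T_wc' (4, 4) world-to-camera pose.
    Returns (N, C) per-point target features for distillation.
    """
    N = points.shape[0]
    C = views[0]["feat"].shape[0]
    acc, cnt = torch.zeros(N, C), torch.zeros(N, 1)
    homog = torch.cat([points, torch.ones(N, 1)], dim=1)        # (N, 4)
    for view in views:
        cam = (view["T_wc"] @ homog.T).T[:, :3]                 # camera coordinates
        in_front = cam[:, 2] > 1e-3
        pix = (view["K"] @ cam.T).T
        pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-3)          # (N, 2) pixel coords
        _, H, W = view["feat"].shape
        u, v = pix[:, 0].round().long(), pix[:, 1].round().long()
        valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        acc[valid] += view["feat"][:, v[valid], u[valid]].T     # gather (M, C) features
        cnt[valid] += 1
    return acc / cnt.clamp(min=1)


def distillation_loss(point_net, points, target_feats):
    """Feature distillation: pull 3D features toward the projected CLIP features."""
    pred = point_net(points)                                    # (N, C), hypothetical 3D backbone
    return (1 - F.cosine_similarity(pred, target_feats, dim=-1)).mean()


@torch.no_grad()
def zero_shot_segmentation(point_feats, text_feats):
    """Annotation-free labels: assign each point its nearest CLIP text embedding.

    text_feats: (K, C) CLIP text embeddings of class prompts,
                e.g. "a photo of a chair in a scene".
    """
    p = F.normalize(point_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    return (p @ t.T).argmax(dim=-1)                             # (N,) predicted class ids
```
The distilled backbone in this sketch could also serve as initialization for supervised fine-tuning, which is consistent with the abstract's use of the method as a cross-modal pre-training framework for data-efficient learning.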
Related papers
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z) - Cross-Modal Self-Training: Aligning Images and Pointclouds to Learn Classification without Labels [69.55622471172941]
Large-scale 2D vision-language models such as CLIP can be aligned with a 3D encoder to learn generalizable (open-vocabulary) 3D vision models.
We propose an optimization framework, Cross-MoST (Cross-Modal Self-Training), to improve the label-free classification performance of a zero-shot 3D vision model.
arXiv Detail & Related papers (2024-04-15T21:30:50Z) - Leveraging Large-Scale Pretrained Vision Foundation Models for
Label-Efficient 3D Point Cloud Segmentation [67.07112533415116]
We present a novel framework that adapts various foundational models for the 3D point cloud segmentation task.
Our approach involves making initial predictions of 2D semantic masks using different large vision models.
To generate robust 3D semantic pseudo labels, we introduce a semantic label fusion strategy that effectively combines all the results via voting.
arXiv Detail & Related papers (2023-11-03T15:41:15Z) - Weakly Supervised 3D Open-vocabulary Segmentation [104.07740741126119]
We tackle the challenges in 3D open-vocabulary segmentation by exploiting pre-trained foundation models CLIP and DINO in a weakly supervised manner.
We distill the open-vocabulary multimodal knowledge and object reasoning capability of CLIP and DINO into a neural radiance field (NeRF).
A notable aspect of our approach is that it does not require any manual segmentation annotations for either the foundation models or the distillation process.
arXiv Detail & Related papers (2023-05-23T14:16:49Z) - Bridging the Domain Gap: Self-Supervised 3D Scene Understanding with
Foundation Models [18.315856283440386]
Foundation models have achieved remarkable results in 2D and language tasks such as image segmentation, object detection, and vision-language understanding.
Their potential to enrich 3D scene representation learning remains largely untapped due to the domain gap.
We propose Bridge3D, a methodology that addresses this gap by pre-training 3D models using features, semantic masks, and captions sourced from foundation models.
arXiv Detail & Related papers (2023-05-15T16:36:56Z) - CLIP2Scene: Towards Label-efficient 3D Scene Understanding by CLIP [55.864132158596206]
Contrastive Language-Image Pre-training (CLIP) achieves promising results in 2D zero-shot and few-shot learning.
We make the first attempt to investigate how CLIP knowledge benefits 3D scene understanding.
We propose CLIP2Scene, a framework that transfers CLIP knowledge from 2D image-text pre-trained models to a 3D point cloud network.
arXiv Detail & Related papers (2023-01-12T10:42:39Z) - OpenScene: 3D Scene Understanding with Open Vocabularies [73.1411930820683]
Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task with supervision.
We propose OpenScene, an alternative approach where a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space.
This zero-shot approach enables task-agnostic training and open-vocabulary queries.
arXiv Detail & Related papers (2022-11-28T18:58:36Z) - Prompt-guided Scene Generation for 3D Zero-Shot Learning [8.658191774247944]
We propose a prompt-guided 3D scene generation and supervision method that augments 3D data so the network learns better.
First, we merge the point clouds of two 3D models in ways described by a prompt; the prompt acts as the annotation describing each 3D scene.
We achieve state-of-the-art ZSL and generalized ZSL performance on synthetic (ModelNet40, ModelNet10) and real-scanned (ScanObjectNN) 3D object datasets.
arXiv Detail & Related papers (2022-09-29T11:24:33Z)