3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features
- URL: http://arxiv.org/abs/2311.04391v1
- Date: Tue, 7 Nov 2023 23:46:41 GMT
- Title: 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features
- Authors: Chenfeng Xu, Huan Ling, Sanja Fidler, Or Litany
- Abstract summary: 3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
- Score: 70.50665869806188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present 3DiffTection, a state-of-the-art method for 3D object detection
from single images, leveraging features from a 3D-aware diffusion model.
Annotating large-scale image data for 3D detection is resource-intensive and
time-consuming. Recently, pretrained large image diffusion models have become
prominent as effective feature extractors for 2D perception tasks. However,
these features are initially trained on paired text and image data, which are
not optimized for 3D tasks, and often exhibit a domain gap when applied to the
target data. Our approach bridges these gaps through two specialized tuning
strategies: geometric and semantic. For geometric tuning, we fine-tune a
diffusion model to perform novel view synthesis conditioned on a single image,
by introducing a novel epipolar warp operator. This task meets two essential
criteria: the necessity for 3D awareness and reliance solely on posed image
data, which are readily available (e.g., from videos) and do not require
manual annotation. For semantic refinement, we further train the model on
target data with detection supervision. Both tuning phases employ ControlNet to
preserve the integrity of the original feature capabilities. In the final step,
we harness these enhanced capabilities to conduct a test-time prediction
ensemble across multiple virtual viewpoints. Through our methodology, we obtain
3D-aware features that are tailored for 3D detection and excel in identifying
cross-view point correspondences. Consequently, our model emerges as a powerful
3D detector, substantially surpassing previous benchmarks, e.g., Cube-RCNN, a
precedent in single-view 3D detection, by 9.43% in AP3D on the
Omni3D-ARkitscene dataset. Furthermore, 3DiffTection showcases robust data
efficiency and generalization to cross-domain data.
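The epipolar warp operator itself is only defined in the full paper. As a rough illustration of the idea, the sketch below warps a source-view feature map into the target view by averaging features sampled along each target pixel's epipolar line; the shared intrinsics, the uniform sampling in x, and all tensor shapes are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def skew(t):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return torch.tensor([[0., -t[2], t[1]],
                         [t[2], 0., -t[0]],
                         [-t[1], t[0], 0.]])

def epipolar_warp(feat_src, K, R, t, n_samples=16):
    """Average source-view features sampled along each target pixel's
    epipolar line. feat_src: (1, C, H, W); K: shared 3x3 intrinsics;
    R, t: relative pose from the target camera to the source camera.
    Sign conventions and mean aggregation are simplified for illustration."""
    _, C, H, W = feat_src.shape
    Fm = torch.inverse(K).T @ skew(t) @ R @ torch.inverse(K)   # fundamental matrix
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)   # (H, W, 3)
    a, b, c = (pix @ Fm.T).unbind(-1)                          # line a*x + b*y + c = 0
    x_s = torch.linspace(0., W - 1., n_samples)                # sample x across the image
    b_safe = torch.where(b.abs() < 1e-6, torch.full_like(b, 1e-6), b)
    y_s = -(a[..., None] * x_s + c[..., None]) / b_safe[..., None]    # (H, W, S)
    grid = torch.stack([x_s.expand_as(y_s) / (W - 1) * 2 - 1,         # to [-1, 1]
                        y_s / (H - 1) * 2 - 1], dim=-1)
    samples = F.grid_sample(feat_src, grid.reshape(1, H, W * n_samples, 2),
                            align_corners=True)                # zeros outside the image
    return samples.reshape(1, C, H, W, n_samples).mean(-1)     # (1, C, H, W)
```

Per the abstract, the warped features then condition a ControlNet branch during novel view synthesis, so the pretrained diffusion features themselves are left intact.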
Related papers
- Enhancing Single Image to 3D Generation using Gaussian Splatting and Hybrid Diffusion Priors [17.544733016978928]
3D object generation from a single image involves estimating the full 3D geometry and texture of unseen views from an unposed RGB image captured in the wild.
Recent advancements in 3D object generation have introduced techniques that reconstruct an object's 3D shape and texture.
We propose bridging the gap between 2D and 3D diffusion models to address the limitations of these techniques.
arXiv Detail & Related papers (2024-10-12T10:14:11Z)
- OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371]
OV-Uni3DETR achieves state-of-the-art performance across various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
arXiv Detail & Related papers (2024-03-28T17:05:04Z)
- Spice-E: Structural Priors in 3D Diffusion using Cross-Entity Attention [9.52027244702166]
Spice-E is a neural network that adds structural guidance to 3D diffusion models.
We show that our approach supports a variety of applications, including 3D stylization, semantic shape editing and text-conditional abstraction-to-3D.
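The entry names cross-entity attention but gives no formulation; a generic cross-attention block in which the tokens of the entity being denoised attend to a guiding entity's tokens (all names and shapes assumed) might look like:

```python
import torch.nn as nn

class CrossEntityAttention(nn.Module):
    """Inject structural guidance by letting the denoised entity's tokens
    attend to a guiding entity's tokens. A generic sketch; the paper's
    exact design may differ."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_tokens, guide_tokens):
        # Queries from the denoised entity; keys/values from the guide.
        attended, _ = self.attn(self.norm(x_tokens), guide_tokens, guide_tokens)
        return x_tokens + attended   # residual injection of the guidance
```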
arXiv Detail & Related papers (2023-11-29T17:36:49Z)
- DatasetNeRF: Efficient 3D-aware Data Factory with Generative Radiance Fields [68.94868475824575]
This paper introduces a novel approach capable of generating infinite, high-quality 3D-consistent 2D annotations alongside 3D point cloud segmentations.
We leverage the strong semantic prior within a 3D generative model to train a semantic decoder.
Once trained, the decoder efficiently generalizes across the latent space, enabling the generation of infinite data.
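A heavily hypothetical sketch of the described data factory (every interface below is assumed): sample a latent, render a view whose internal generator features are exposed, and decode semantics from them.

```python
import torch

def sample_annotated_view(gen, sem_decoder, camera_pose):
    """gen: pretrained generative radiance field whose renderings expose
    internal features; sem_decoder: the small semantic decoder trained on
    top of them. Both interfaces are placeholders, not the paper's API."""
    z = torch.randn(1, gen.latent_dim)          # a new identity from the latent space
    rgb, feats = gen.render(z, camera_pose)     # image plus per-pixel features
    seg = sem_decoder(feats).argmax(dim=1)      # 3D-consistent 2D annotation
    return rgb, seg
```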
arXiv Detail & Related papers (2023-11-18T21:58:28Z)
- HVPR: Hybrid Voxel-Point Representation for Single-stage 3D Object Detection [39.64891219500416]
3D object detection methods exploit either voxel-based or point-based features to represent 3D objects in a scene.
In this paper, we introduce a novel single-stage 3D detection method that combines the merits of both voxel-based and point-based features.
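One minimal reading of combining the two feature types, with the dense grid layout and all names assumed, is to concatenate each point's own feature with the feature of the voxel it falls into:

```python
import torch

def hybrid_features(points, point_feats, voxel_feats, voxel_size, origin):
    """points: (N, 3) xyz; point_feats: (N, Cp); voxel_feats: (X, Y, Z, Cv).
    The actual HVPR architecture integrates the two streams more tightly;
    this only illustrates the voxel-plus-point idea."""
    idx = ((points - origin) / voxel_size).long()                 # voxel per point
    bounds = torch.tensor(voxel_feats.shape[:3]) - 1
    idx = idx.clamp(min=torch.zeros(3, dtype=torch.long), max=bounds)
    vfeat = voxel_feats[idx[:, 0], idx[:, 1], idx[:, 2]]          # (N, Cv)
    return torch.cat([point_feats, vfeat], dim=-1)                # (N, Cp + Cv)
```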
arXiv Detail & Related papers (2021-04-02T06:34:49Z)
- ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection [78.71826145162092]
We present a new domain adaptive self-training pipeline, named ST3D, for unsupervised domain adaptation on 3D object detection from point clouds.
Our ST3D achieves state-of-the-art performance on all evaluated datasets and even surpasses fully supervised results on KITTI 3D object detection benchmark.
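The core self-training loop can be sketched as below; `train_fn`, `model.predict`, and the fixed confidence threshold are placeholders, and ST3D's curriculum thresholding and pseudo-label memory are omitted.

```python
def self_train(model, train_fn, source_data, target_scans, rounds=5, thresh=0.7):
    """Pseudo-labeling loop in the spirit of ST3D (simplified)."""
    train_fn(model, source_data)                      # pretrain on labeled source
    for _ in range(rounds):
        pseudo = []
        for scan in target_scans:                     # label the unlabeled target
            boxes = [b for b in model.predict(scan) if b.score >= thresh]
            pseudo.append((scan, boxes))              # keep confident boxes only
        train_fn(model, pseudo)                       # retrain on pseudo-labels
    return model
```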
arXiv Detail & Related papers (2021-03-09T10:51:24Z)
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
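A sketch of one refinement episode, with the policy, the parameter order, and the step size as placeholders:

```python
def refine_box(box, feats, policy, steps=10, delta=0.1):
    """Each step changes a single 3D box parameter (x, y, z, w, h, l or yaw),
    so the search moves along one axis at a time. The reward, e.g. 3D IoU
    against the ground truth, is observed only after the episode, which is
    why the policy is trained with reinforcement learning."""
    actions = []
    for _ in range(steps):
        axis, direction = policy(feats, box)   # e.g. (3, +1) -> grow the width
        box[axis] += direction * delta
        actions.append((axis, direction))
    return box, actions
```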
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
- ZoomNet: Part-Aware Adaptive Zooming Neural Network for 3D Object Detection [69.68263074432224]
We present a novel framework named ZoomNet for stereo imagery-based 3D detection.
The pipeline of ZoomNet begins with an ordinary 2D object detection model which is used to obtain pairs of left-right bounding boxes.
To further exploit the abundant texture cues in RGB images for more accurate disparity estimation, we introduce a conceptually straightforward module -- adaptive zooming.
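A minimal sketch of such adaptive zooming (the box format and output resolution are assumptions):

```python
import torch.nn.functional as F

def adaptive_zoom(img_l, img_r, box_l, box_r, out=224):
    """Crop the paired left/right 2D boxes and resize both to a canonical
    resolution, so disparity is estimated on an enlarged, scale-normalized
    object view; the zoom factor must be folded back into the
    disparity-to-depth conversion. Box format: (x1, y1, x2, y2)."""
    def crop(img, b):
        x1, y1, x2, y2 = map(int, b)
        return F.interpolate(img[:, :, y1:y2, x1:x2], size=(out, out),
                             mode="bilinear", align_corners=False)
    zoom = out / (box_l[2] - box_l[0])     # horizontal magnification
    return crop(img_l, box_l), crop(img_r, box_r), zoom
```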
arXiv Detail & Related papers (2020-03-01T17:18:08Z)
- SMOKE: Single-Stage Monocular 3D Object Detection via Keypoint Estimation [3.1542695050861544]
Estimating 3D orientation and translation of objects is essential for infrastructure-less autonomous navigation and driving.
We propose a novel 3D object detection method, named SMOKE, that combines a single keypoint estimate with regressed 3D variables.
Despite its structural simplicity, our proposed SMOKE network outperforms all existing monocular 3D detection methods on the KITTI dataset.
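The keypoint-plus-regression decoding reduces to back-projecting the predicted 2D center; a minimal NumPy sketch (keypoint-offset regression and SMOKE's disentangled loss are omitted):

```python
import numpy as np

def decode_box(keypoint, depth, dims, yaw, K):
    """Lift a projected-center keypoint (u, v) to a 3D box center using the
    regressed depth and intrinsics K, then attach regressed size and yaw."""
    u, v = keypoint
    center = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # back-project
    return {"center": center, "dims": dims, "yaw": yaw}
```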
arXiv Detail & Related papers (2020-02-24T08:15:36Z)