Believing is Seeing: Unobserved Object Detection using Generative Models
- URL: http://arxiv.org/abs/2410.05869v2
- Date: Sun, 24 Nov 2024 23:47:03 GMT
- Title: Believing is Seeing: Unobserved Object Detection using Generative Models
- Authors: Subhransu S. Bhattacharjee, Dylan Campbell, Rahul Shome
- Abstract summary: This study introduces the novel tasks of 2D, 2.5D and 3D unobserved object detection.
We adapt several state-of-the-art pre-trained generative models to address this task.
We show that they can be used to infer the presence of objects that are not directly observed.
- Score: 17.883297093049787
- License:
- Abstract: Can objects that are not visible in an image -- but are in the vicinity of the camera -- be detected? This study introduces the novel tasks of 2D, 2.5D and 3D unobserved object detection for predicting the location of nearby objects that are occluded or lie outside the image frame. We adapt several state-of-the-art pre-trained generative models to address this task, including 2D and 3D diffusion models and vision-language models, and show that they can be used to infer the presence of objects that are not directly observed. To benchmark this task, we propose a suite of metrics that capture different aspects of performance. Our empirical evaluation on indoor scenes from the RealEstate10K and NYU Depth v2 datasets demonstrates results that motivate the use of generative models for the unobserved object detection task.
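As a toy illustration of how predictions of unobserved objects might be scored, the sketch below computes precision and recall over a discretized ground-plane grid. The grid-cell representation and the set comparison are illustrative assumptions, not the metric suite proposed in the paper.

```python
# Illustrative only: a generic grid-based precision/recall check for
# predicted locations of unobserved objects. Cell size, thresholding,
# and the set-based matching are assumptions, not the paper's metrics.

def grid_precision_recall(pred_cells, gt_cells):
    """Compare predicted and ground-truth occupied grid cells.

    pred_cells, gt_cells: sets of (row, col) tuples marking cells where
    an object is predicted / known to be present outside the image frame.
    """
    tp = len(pred_cells & gt_cells)                      # correctly predicted cells
    precision = tp / len(pred_cells) if pred_cells else 0.0
    recall = tp / len(gt_cells) if gt_cells else 0.0
    return precision, recall
```

A real evaluation would also need to match continuous predictions to ground truth (e.g. by distance threshold) before any such counting.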
Related papers
- Open Vocabulary Monocular 3D Object Detection [10.424711580213616]
We pioneer the study of open-vocabulary monocular 3D object detection, a novel task that aims to detect and localize objects in 3D space from a single RGB image.
We introduce a class-agnostic approach that leverages open-vocabulary 2D detectors and lifts 2D bounding boxes into 3D space.
Our approach decouples the recognition and localization of objects in 2D from the task of estimating 3D bounding boxes, enabling generalization across unseen categories.
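The lifting step described above can be sketched with a standard pinhole camera model; the intrinsics (fx, fy, cx, cy), the single depth estimate, and the helper function itself are illustrative assumptions, not the cited method's actual procedure.

```python
# Hypothetical sketch of "lifting" a 2D bounding box into 3D using
# pinhole back-projection. The intrinsics and depth value are assumed
# inputs; the cited paper's exact lifting procedure may differ.

def lift_box_center(box_2d, depth, fx, fy, cx, cy):
    """Back-project the center of a 2D box (u1, v1, u2, v2) in pixels
    to a 3D point (X, Y, Z) in camera coordinates at the given depth."""
    u1, v1, u2, v2 = box_2d
    u, v = (u1 + u2) / 2.0, (v1 + v2) / 2.0   # box center in pixels
    X = (u - cx) * depth / fx                  # pinhole back-projection
    Y = (v - cy) * depth / fy
    return (X, Y, depth)
```

Estimating the full 3D box extent would additionally require object size priors or a depth range, which this sketch omits.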
arXiv Detail & Related papers (2024-11-25T18:59:17Z)
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
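A minimal sketch of the common Dirichlet-based evidential formulation (evidence e_k >= 0, alpha_k = e_k + 1, uncertainty u = K / S with S the sum of the alphas) is shown below; this toy omits the BEV detector entirely and is an assumption about the general technique, not the cited framework's implementation.

```python
# Minimal sketch of evidential uncertainty for a K-way classifier under
# the common Dirichlet formulation. The cited work applies an evidential
# loss to BEV features in a 3D detector; this toy ignores the detector.

def dirichlet_uncertainty(evidence):
    """Return (expected class probabilities, uncertainty mass) from
    non-negative per-class evidences."""
    alpha = [e + 1.0 for e in evidence]    # Dirichlet parameters
    s = sum(alpha)                          # Dirichlet strength
    probs = [a / s for a in alpha]          # expected class probabilities
    u = len(alpha) / s                      # high when total evidence is low
    return probs, u
```

With zero evidence the uncertainty mass is 1.0 (maximal), and it shrinks as evidence for any class accumulates.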
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
- Probing the 3D Awareness of Visual Foundation Models [56.68380136809413]
We analyze the 3D awareness of visual foundation models.
We conduct experiments using task-specific probes and zero-shot inference procedures on frozen features.
arXiv Detail & Related papers (2024-04-12T17:58:04Z)
- 3DiffTection: 3D Object Detection with Geometry-Aware Diffusion Features [70.50665869806188]
3DiffTection is a state-of-the-art method for 3D object detection from single images.
We fine-tune a diffusion model to perform novel view synthesis conditioned on a single image.
We further train the model on target data with detection supervision.
arXiv Detail & Related papers (2023-11-07T23:46:41Z)
- Predict to Detect: Prediction-guided 3D Object Detection using Sequential Images [15.51093009875854]
We propose a novel 3D object detection model, P2D (Predict to Detect), that integrates a prediction scheme into a detection framework.
P2D predicts object information in the current frame using solely past frames to learn temporal motion features.
We then introduce a novel temporal feature aggregation method that attentively exploits Bird's-Eye-View (BEV) features based on predicted object information.
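The attentive aggregation idea can be sketched as a softmax-weighted sum over per-frame feature vectors; the scalar scores and flat feature shapes below are assumptions for illustration, not P2D's actual BEV architecture.

```python
# Toy sketch of attentive temporal aggregation of per-frame features,
# loosely in the spirit of P2D. Real BEV features are spatial maps and
# the attention scores come from predicted object information; here both
# are simplified to flat vectors and given scalars.
import math

def aggregate_frames(features, scores):
    """Weighted sum of per-frame feature vectors using softmax(scores)."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]        # attention weights over frames
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]
```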
arXiv Detail & Related papers (2023-06-14T14:22:56Z)
- DR-WLC: Dimensionality Reduction cognition for object detection and pose estimation by Watching, Learning and Checking [30.58114448119465]
Existing object detection and pose estimation methods mostly adopt the same-dimensional data for training.
DR-WLC, a dimensionality reduction cognitive model, can perform both object detection and pose estimation tasks at the same time.
arXiv Detail & Related papers (2023-01-17T15:08:32Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
First, we present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects.
Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- Shape-Aware Monocular 3D Object Detection [15.693199934120077]
A single-stage monocular 3D object detection model is proposed.
The detection largely avoids interference from irrelevant regions surrounding the target objects.
A novel evaluation metric, namely average depth similarity (ADS) is proposed for the monocular 3D object detection models.
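The exact definition of ADS is not given here, so the hedged sketch below uses one plausible form of a depth-similarity score: a min/max depth ratio averaged over matched detections, equal to 1.0 for perfect depth agreement.

```python
# Hedged sketch of a depth-similarity score in the spirit of the
# proposed "average depth similarity" (ADS). The ratio-based form is
# an assumption; the paper's actual definition may differ.

def average_depth_similarity(pred_depths, gt_depths):
    """Mean of min/max depth ratios over matched detections (1.0 = perfect)."""
    sims = [min(p, g) / max(p, g) for p, g in zip(pred_depths, gt_depths)]
    return sum(sims) / len(sims)
```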
arXiv Detail & Related papers (2022-04-19T07:43:56Z)
- Object DGCNN: 3D Object Detection using Dynamic Graphs [32.090268859180334]
3D object detection often involves complicated training and testing pipelines.
Inspired by recent non-maximum suppression-free 2D object detection models, we propose a 3D object detection architecture on point clouds.
arXiv Detail & Related papers (2021-10-13T17:59:38Z)
- Detecting Invisible People [58.49425715635312]
We re-purpose tracking benchmarks and propose new metrics for the task of detecting invisible objects.
We demonstrate that current detection and tracking systems perform dramatically worse on this task.
We then build dynamic models that explicitly reason in 3D, making use of observations produced by state-of-the-art monocular depth estimation networks.
arXiv Detail & Related papers (2020-12-15T16:54:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.