Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model
- URL: http://arxiv.org/abs/2312.13252v1
- Date: Wed, 20 Dec 2023 18:27:47 GMT
- Title: Zero-Shot Metric Depth with a Field-of-View Conditioned Diffusion Model
- Authors: Saurabh Saxena, Junhwa Hur, Charles Herrmann, Deqing Sun, David J. Fleet
- Abstract summary: Methods for monocular depth estimation have made significant strides on standard benchmarks, but zero-shot metric depth estimation remains unsolved.
Recent work has proposed specialized multi-head architectures for jointly modeling indoor and outdoor scenes.
We advocate a generic, task-agnostic diffusion model, with several advancements such as log-scale depth parameterization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While methods for monocular depth estimation have made significant strides on
standard benchmarks, zero-shot metric depth estimation remains unsolved.
Challenges include the joint modeling of indoor and outdoor scenes, which often
exhibit significantly different distributions of RGB and depth, and the
depth-scale ambiguity due to unknown camera intrinsics. Recent work has
proposed specialized multi-head architectures for jointly modeling indoor and
outdoor scenes. In contrast, we advocate a generic, task-agnostic diffusion
model, with several advancements such as log-scale depth parameterization to
enable joint modeling of indoor and outdoor scenes, conditioning on the
field-of-view (FOV) to handle scale ambiguity and synthetically augmenting FOV
during training to generalize beyond the limited camera intrinsics in training
datasets. Furthermore, by employing a more diverse training mixture than is
common, and an efficient diffusion parameterization, our method, DMD (Diffusion
for Metric Depth), achieves a 25% reduction in relative error (REL) on
zero-shot indoor datasets and a 33% reduction on zero-shot outdoor datasets
over the current SOTA, using only a small number of denoising steps. For an
overview see
https://diffusion-vision.github.io/dmd
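The abstract names three concrete mechanisms: log-scale depth parameterization, conditioning on the FOV, and synthetic FOV augmentation. The sketch below shows one plausible reading of each; the depth bounds, the FOV-as-extra-channel conditioning, and all function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Assumed depth bounds spanning indoor and outdoor scenes (not from the paper).
D_MIN, D_MAX = 0.5, 80.0

def depth_to_model_space(depth_m: np.ndarray) -> np.ndarray:
    """Log-scale depth parameterization: map metric depth (meters) to [-1, 1]
    in log space, so indoor (small-depth) and outdoor (large-depth) scenes
    share one well-conditioned target range."""
    d = np.clip(depth_m, D_MIN, D_MAX)
    t = (np.log(d) - np.log(D_MIN)) / (np.log(D_MAX) - np.log(D_MIN))
    return 2.0 * t - 1.0

def model_space_to_depth(x: np.ndarray) -> np.ndarray:
    """Inverse of the log-scale parameterization."""
    t = (np.clip(x, -1.0, 1.0) + 1.0) / 2.0
    return np.exp(t * (np.log(D_MAX) - np.log(D_MIN)) + np.log(D_MIN))

def fov_channel(h: int, w: int, vfov_deg: float) -> np.ndarray:
    """One plausible form of FOV conditioning: broadcast the normalized
    vertical FOV as an extra input channel for the denoiser."""
    return np.full((h, w, 1), vfov_deg / 180.0, dtype=np.float32)

def crop_fov_augment(image, depth, vfov_deg, crop_frac=0.8):
    """Synthetic FOV augmentation: for a pinhole camera, a center crop by
    factor s narrows the vertical FOV via vfov' = 2*atan(s * tan(vfov/2))."""
    h, w = image.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    new_vfov = 2.0 * np.degrees(
        np.arctan(crop_frac * np.tan(np.radians(vfov_deg) / 2.0)))
    return image[y0:y0 + ch, x0:x0 + cw], depth[y0:y0 + ch, x0:x0 + cw], new_vfov
```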
Related papers
- GVDepth: Zero-Shot Monocular Depth Estimation for Ground Vehicles based on Probabilistic Cue Fusion
Generalizing metric monocular depth estimation presents a significant challenge due to its ill-posed nature.
We propose a novel canonical representation that maintains consistency across varied camera setups.
We also propose a novel architecture that adaptively and probabilistically fuses depths estimated via object size and vertical image position cues.
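As a rough illustration of probabilistic fusion of two depth cues, a precision-weighted (inverse-variance) combination looks like the following; this is a generic Gaussian-fusion sketch, not GVDepth's actual adaptive architecture.

```python
def fuse_depth_cues(d_size, var_size, d_vpos, var_vpos):
    """Precision-weighted fusion of two per-pixel depth estimates: one from
    an object-size cue, one from a vertical-image-position cue, each with a
    predicted variance. Works elementwise on floats or NumPy arrays."""
    w_size = 1.0 / var_size          # precision of the object-size cue
    w_vpos = 1.0 / var_vpos          # precision of the vertical-position cue
    fused = (w_size * d_size + w_vpos * d_vpos) / (w_size + w_vpos)
    fused_var = 1.0 / (w_size + w_vpos)
    return fused, fused_var
```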
arXiv Detail & Related papers (2024-12-08T22:04:34Z) - ScaleDepth: Decomposing Metric Depth Estimation into Scale Prediction and Relative Depth Estimation [62.600382533322325]
We propose a novel monocular depth estimation method called ScaleDepth.
Our method decomposes metric depth into scene scale and relative depth, and predicts them through a semantic-aware scale prediction module.
Our method achieves metric depth estimation for both indoor and outdoor scenes in a unified framework.
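A minimal sketch of the decomposition, assuming a per-image scalar scale and a normalized relative-depth map; the semantic-aware scale prediction module itself is not reproduced here.

```python
import torch

def compose_metric_depth(relative_depth: torch.Tensor,
                         scene_scale: torch.Tensor) -> torch.Tensor:
    """Recompose metric depth from a scale/relative split: a normalized
    relative-depth map of shape (B, 1, H, W) times a predicted per-image
    scene scale of shape (B,) yields metric depth."""
    return scene_scale.view(-1, 1, 1, 1) * relative_depth
```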
arXiv Detail & Related papers (2024-07-11T05:11:56Z) - UniDepth: Universal Monocular Metric Depth Estimation [81.80512457953903]
We propose a new model, UniDepth, capable of reconstructing metric 3D scenes from solely single images across domains.
Our model exploits a pseudo-spherical output representation, which disentangles camera and depth representations.
Thorough evaluations on ten datasets in a zero-shot regime consistently demonstrate the superior performance of UniDepth.
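A hedged sketch of what a pseudo-spherical output space can look like: ray angles (azimuth, elevation) carry the camera geometry while log-depth carries range, disentangling the two; UniDepth's exact parameterization may differ.

```python
import numpy as np

def pseudo_spherical(u, v, depth, fx, fy, cx, cy):
    """Map pixel coordinates (u, v), metric depth, and pinhole intrinsics
    (fx, fy, cx, cy) to a pseudo-spherical triple: the angles depend only
    on the camera ray, the log-depth only on scene range."""
    azimuth = np.arctan2(u - cx, fx)      # horizontal ray angle
    elevation = np.arctan2(v - cy, fy)    # vertical ray angle
    return azimuth, elevation, np.log(depth)
```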
arXiv Detail & Related papers (2024-03-27T18:06:31Z) - SM4Depth: Seamless Monocular Metric Depth Estimation across Multiple Cameras and Scenes by One Model [72.0795843450604]
Current approaches face challenges in maintaining consistent accuracy across diverse scenes.
These methods rely on extensive datasets comprising millions, if not tens of millions, of training samples.
This paper presents SM4Depth, a model that seamlessly works for both indoor and outdoor scenes.
arXiv Detail & Related papers (2024-03-13T14:08:25Z) - The Surprising Effectiveness of Diffusion Models for Optical Flow and
Monocular Depth Estimation [42.48819460873482]
Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity.
We show that they also excel in estimating optical flow and monocular depth, surprisingly, without task-specific architectures and loss functions.
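A generic denoising-diffusion training step for depth, conditioned on the RGB image, illustrates the "no task-specific architecture or loss" point; the linear noise schedule and epsilon-prediction loss are standard DDPM choices assumed here, not details from the paper.

```python
import torch
import torch.nn.functional as F

def diffusion_depth_loss(denoiser, image, depth_target, T=1000):
    """One DDPM training step. `denoiser` is any network mapping
    (noisy depth, conditioning image, timestep) -> noise estimate."""
    b = depth_target.shape[0]
    t = torch.randint(0, T, (b,), device=depth_target.device)
    # Linear beta schedule -> cumulative alpha-bar at the sampled timesteps.
    betas = torch.linspace(1e-4, 0.02, T, device=depth_target.device)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t].view(b, 1, 1, 1)
    noise = torch.randn_like(depth_target)
    noisy = alpha_bar.sqrt() * depth_target + (1 - alpha_bar).sqrt() * noise
    # Standard epsilon-prediction objective, no depth-specific loss terms.
    return F.mse_loss(denoiser(noisy, image, t), noise)
```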
arXiv Detail & Related papers (2023-06-02T21:26:20Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Monocular Depth Estimation using Diffusion Models [39.27361388836347]
We introduce innovations to address problems arising due to noisy, incomplete depth maps in training data.
To cope with the limited availability of data for supervised training, we leverage pre-training on self-supervised image-to-image translation tasks.
Our DepthGen model achieves SOTA performance on the indoor NYU dataset, and near SOTA results on the outdoor KITTI dataset.
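One simple device for noisy, incomplete ground-truth depth is to mask invalid pixels out of the loss; the sketch below is illustrative only, since DepthGen's own handling (e.g., infilling and training innovations) is described in the paper itself.

```python
import torch

def masked_depth_loss(pred_noise, true_noise, valid_mask):
    """Diffusion loss computed only where ground-truth depth exists.
    `valid_mask` is 1 at pixels with valid depth, 0 elsewhere."""
    mask = valid_mask.float()
    err = (pred_noise - true_noise) ** 2 * mask
    return err.sum() / mask.sum().clamp(min=1.0)
```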
arXiv Detail & Related papers (2023-02-28T18:08:21Z) - DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers the unseen scenes on both pixel level and geometry level with only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
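A sketch of what depth-aware sampling can look like: concentrate ray samples in a band around a coarse depth estimate instead of spacing them uniformly over the full near-far range. The band width and sample counts below are assumptions, not DARF's settings.

```python
import torch

def depth_aware_samples(coarse_depth: torch.Tensor,
                        n_samples: int = 16, band: float = 0.2) -> torch.Tensor:
    """Place per-ray sample depths within +/- `band` (relative) of a coarse
    depth estimate, returning shape (..., n_samples)."""
    near = coarse_depth * (1.0 - band)
    far = coarse_depth * (1.0 + band)
    steps = torch.linspace(0.0, 1.0, n_samples, device=coarse_depth.device)
    return near.unsqueeze(-1) + (far - near).unsqueeze(-1) * steps
```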
arXiv Detail & Related papers (2022-12-05T14:00:59Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural
Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
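A sketch of occupancy-grid-aided ray sampling under an assumed grid layout and sample counts: probe a ray coarsely and keep candidate depths whose voxels the OGM marks occupied. The decoupled LiDAR-supervised occupancy MLP and camera-supervised color MLP are not shown.

```python
import torch

def ogm_guided_tvals(ray_o, ray_d, ogm, grid_min, voxel,
                     n_coarse=64, n_keep=16):
    """Return up to `n_keep` sample depths along a ray, preferring depths
    that land in occupied voxels of the occupancy grid `ogm` (a 3D tensor
    of occupancy probabilities; `grid_min` is its corner, `voxel` its size)."""
    t = torch.linspace(0.5, 60.0, n_coarse)           # candidate depths (m)
    pts = ray_o + t[:, None] * ray_d                  # (n_coarse, 3) points
    idx = ((pts - grid_min) / voxel).long()           # voxel indices
    idx = idx.clamp(min=0)
    idx = torch.minimum(idx, torch.tensor(ogm.shape) - 1)
    occ = ogm[idx[:, 0], idx[:, 1], idx[:, 2]] > 0.5  # occupied voxels only
    kept = t[occ]
    return kept[:n_keep] if kept.numel() else t[:n_keep]
```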
arXiv Detail & Related papers (2022-09-02T17:44:50Z)