Depth Map Estimation of Dynamic Scenes Using Prior Depth Information
- URL: http://arxiv.org/abs/2002.00297v1
- Date: Sun, 2 Feb 2020 01:04:27 GMT
- Title: Depth Map Estimation of Dynamic Scenes Using Prior Depth Information
- Authors: James Noraky, Vivienne Sze
- Abstract summary: We propose an algorithm that estimates depth maps using concurrently collected images and a previously measured depth map for dynamic scenes.
Our goal is to balance the acquisition of depth between the active depth sensor and computation, without incurring a large computational cost.
Our approach can obtain dense depth maps at up to real-time (30 FPS) on a standard laptop computer, which is orders of magnitude faster than similar approaches.
- Score: 14.03714478207425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth information is useful for many applications. Active depth sensors are
appealing because they obtain dense and accurate depth maps. However, due to
issues that range from power constraints to multi-sensor interference, these
sensors cannot always be continuously used. To overcome this limitation, we
propose an algorithm that estimates depth maps using concurrently collected
images and a previously measured depth map for dynamic scenes, where both the
camera and objects in the scene may be independently moving. To estimate depth
in these scenarios, our algorithm models the dynamic scene motion using
independent and rigid motions. It then uses the previous depth map to
efficiently estimate these rigid motions and obtain a new depth map. Our goal
is to balance the acquisition of depth between the active depth sensor and
computation, without incurring a large computational cost. Thus, we leverage
the prior depth information to avoid computationally expensive operations like
dense optical flow estimation or segmentation used in similar approaches. Our
approach can obtain dense depth maps at up to real-time (30 FPS) on a standard
laptop computer, which is orders of magnitude faster than similar approaches.
When evaluated using RGB-D datasets of various dynamic scenes, our approach
estimates depth maps with a mean relative error of 2.5% while reducing the
active depth sensor usage by over 90%.
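The core idea of propagating a previously measured depth map under rigid motion can be illustrated with a minimal numpy sketch. This is not the authors' implementation: it assumes a single known rigid motion (R, t) and known camera intrinsics K, whereas the paper estimates multiple independent rigid motions from the images themselves. The function name `warp_depth` is hypothetical.

```python
import numpy as np

def warp_depth(depth_prev, K, R, t):
    """Propagate a previous depth map to the current frame under one
    rigid motion (R, t). Illustrative sketch only: the paper's method
    estimates several independent rigid motions, one per moving object."""
    H, W = depth_prev.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth_prev.ravel()
    valid = z > 0  # pixels with a valid previous depth measurement
    # Back-project all pixels to 3D points in the previous camera frame.
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)], axis=0)
    pts = (np.linalg.inv(K) @ pix) * z            # 3 x N points
    # Apply the rigid motion and project into the current frame.
    pts_cur = R @ pts + t.reshape(3, 1)
    proj = K @ pts_cur
    z_cur = proj[2]
    uu = np.round(proj[0] / z_cur).astype(int)
    vv = np.round(proj[1] / z_cur).astype(int)
    depth_cur = np.zeros_like(depth_prev)
    ok = valid & (z_cur > 0) & (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    # Z-buffer: sort far-to-near so the nearest point wins at each pixel.
    order = np.argsort(-z_cur[ok])
    depth_cur[vv[ok][order], uu[ok][order]] = z_cur[ok][order]
    return depth_cur
```

With the identity motion (R = I, t = 0) every valid pixel maps back onto itself, so the warped map reproduces the input; in the general case the warp leaves holes at disoccluded pixels, which the full method must fill.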
Related papers
- Temporal Lidar Depth Completion [0.08192907805418582]
We show how a state-of-the-art method PENet can be modified to benefit from recurrency.
Our algorithm achieves state-of-the-art results on the KITTI depth completion dataset.
arXiv Detail & Related papers (2024-06-17T08:25:31Z) - SelfReDepth: Self-Supervised Real-Time Depth Restoration for Consumer-Grade Sensors [42.48726526726542]
SelfReDepth is a self-supervised deep learning technique for depth restoration.
It uses multiple sequential depth frames and color data to achieve high-quality depth videos with temporal coherence.
Our results demonstrate our approach's real-time performance on real-world datasets.
arXiv Detail & Related papers (2024-06-05T15:38:02Z) - Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense, complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z) - Manydepth2: Motion-Aware Self-Supervised Multi-Frame Monocular Depth Estimation in Dynamic Scenes [45.092076587934464]
We present Manydepth2, which achieves precise depth estimation for both dynamic objects and static backgrounds.
To tackle the challenges posed by dynamic content, we incorporate optical flow and coarse monocular depth to create a pseudo-static reference frame.
This frame is then utilized to build a motion-aware cost volume in collaboration with the vanilla target frame.
arXiv Detail & Related papers (2023-12-23T14:36:27Z) - FS-Depth: Focal-and-Scale Depth Estimation from a Single Image in Unseen Indoor Scene [57.26600120397529]
It has long been an ill-posed problem to predict absolute depth maps from single images in real (unseen) indoor scenes.
We develop a focal-and-scale depth estimation model to well learn absolute depth maps from single images in unseen indoor scenes.
arXiv Detail & Related papers (2023-07-27T04:49:36Z) - SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for Dynamic Scenes [58.89295356901823]
Self-supervised monocular depth estimation has shown impressive results in static scenes.
However, it relies on the multi-view consistency assumption for training networks, which is violated in dynamic object regions.
We introduce an external pretrained monocular depth estimation model for generating single-image depth prior.
Our model can predict sharp and accurate depth maps, even when training from monocular videos of highly-dynamic scenes.
arXiv Detail & Related papers (2022-11-07T16:17:47Z) - Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach on simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z) - DnD: Dense Depth Estimation in Crowded Dynamic Indoor Scenes [68.38952377590499]
We present a novel approach for estimating depth from a monocular camera as it moves through complex indoor environments.
Our approach predicts absolute scale depth maps over the entire scene consisting of a static background and multiple moving people.
arXiv Detail & Related papers (2021-08-12T09:12:39Z) - Self-Attention Dense Depth Estimation Network for Unrectified Video Sequences [6.821598757786515]
LiDAR and radar sensors are the typical hardware solutions for real-time depth estimation.
Deep learning based self-supervised depth estimation methods have shown promising results.
We propose a self-attention based depth and ego-motion network for unrectified images.
arXiv Detail & Related papers (2020-05-28T21:53:53Z) - Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.