LiDAR Meta Depth Completion
- URL: http://arxiv.org/abs/2307.12761v2
- Date: Wed, 16 Aug 2023 13:03:33 GMT
- Title: LiDAR Meta Depth Completion
- Authors: Wolfgang Boettcher, Lukas Hoyer, Ozan Unal, Ke Li, Dengxin Dai
- Abstract summary: We propose a meta depth completion network that uses data patterns to learn a task network to solve a given depth completion task effectively.
While using a single model, our method yields significantly better results than a non-adaptive baseline trained on different LiDAR patterns.
These advantages allow flexible deployment of a single depth completion model on different sensors.
- Score: 47.99004789132264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth estimation is one of the essential tasks to be addressed when creating
mobile autonomous systems. While monocular depth estimation methods have
improved in recent times, depth completion provides more accurate and reliable
depth maps by additionally using sparse depth information from other sensors
such as LiDAR. However, current methods are specifically trained for a single
LiDAR sensor. As the scanning pattern differs between sensors, every new sensor
would require re-training a specialized depth completion model, which is
computationally inefficient and not flexible. Therefore, we propose to
dynamically adapt the depth completion model to the sensor in use, enabling
LiDAR-adaptive depth completion. Specifically, we propose a meta depth
completion network that uses patterns derived from the input data to learn a
task network that alters the weights of the main depth completion network to
solve a given depth completion task effectively. The method demonstrates a strong
capability to work on multiple LiDAR scanning patterns and can also generalize
to scanning patterns that are unseen during training. While using a single
model, our method yields significantly better results than a non-adaptive
baseline trained on different LiDAR patterns. It outperforms LiDAR-specific
expert models for very sparse cases. These advantages allow flexible deployment
of a single depth completion model on different sensors, which could also prove
valuable to process the input of nascent LiDAR technology with adaptive instead
of fixed scanning patterns.
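The weight-adaptation idea described in the abstract can be sketched roughly as follows. This is an illustrative, FiLM-style approximation, not the paper's actual architecture: the descriptor, network shapes, and function names (`sparsity_descriptor`, `task_network`, `modulated_conv_weights`) are all hypothetical stand-ins for the idea of a small task network conditioning the main network's weights on the LiDAR scanning pattern.

```python
import numpy as np

def sparsity_descriptor(sparse_depth):
    """Summarize a sparse depth map: overall fill rate and per-scanline variability.
    (Hypothetical descriptor; the paper derives its own pattern representation.)"""
    valid = sparse_depth > 0
    density = valid.mean()
    row_density = valid.mean(axis=1)  # per-scanline fill rate
    return np.array([density, row_density.std()])

def task_network(descriptor, w1, w2):
    """Tiny MLP mapping the pattern descriptor to per-channel weight scales."""
    h = np.tanh(descriptor @ w1)
    return 1.0 + 0.1 * np.tanh(h @ w2)  # scales stay near 1.0 for stability

def modulated_conv_weights(base_weights, scales):
    """Rescale each output channel of a conv kernel of shape (C_out, C_in, k, k)."""
    return base_weights * scales[:, None, None, None]

rng = np.random.default_rng(0)
depth = np.zeros((64, 64))
depth[::4, :] = rng.uniform(1.0, 50.0, (16, 64))  # crude 16-beam-like pattern
d = sparsity_descriptor(depth)
w1, w2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 32))
base = rng.normal(size=(32, 3, 3, 3))
mod = modulated_conv_weights(base, task_network(d, w1, w2))
print(mod.shape)  # (32, 3, 3, 3)
```

The key design point this sketch illustrates is that a single set of base weights is specialized per input, so one model can serve scanning patterns of different sparsity without re-training a separate expert per sensor.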
Related papers
- UnCLe: Unsupervised Continual Learning of Depth Completion [5.677777151863184]
UnCLe is a standardized benchmark for Unsupervised Continual Learning of a multimodal depth estimation task.
We benchmark depth completion models under the practical scenario of unsupervised learning over continuous streams of data.
arXiv Detail & Related papers (2024-10-23T17:56:33Z)
- Finetuning Pre-trained Model with Limited Data for LiDAR-based 3D Object Detection by Bridging Domain Gaps [8.897884780881535]
LiDAR-based 3D object detectors often fail to adapt well to target domains with different sensor configurations.
Recent studies suggest that pre-trained backbones can be learned in a self-supervised manner with large-scale unlabeled LiDAR frames.
We propose a novel method, called Domain Adaptive Distill-Tuning (DADT), to adapt a pre-trained model with limited target data.
arXiv Detail & Related papers (2024-10-02T08:22:42Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense, complete depth map from polarization data together with a sensor depth map from one of several depth sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- LRRU: Long-short Range Recurrent Updating Networks for Depth Completion [45.48580252300282]
Long-short Range Recurrent Updating (LRRU) network is proposed to accomplish depth completion more efficiently.
LRRU first roughly fills the sparse input to obtain an initial dense depth map, and then iteratively updates it through learned spatially-variant kernels.
Our initial depth map has coarse but complete scene depth information, which helps relieve the burden of directly regressing the dense depth from sparse ones.
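The iterative refinement step described above can be sketched as a single update pass. This is a minimal illustration of spatially-variant kernels, not LRRU's implementation: each pixel refines its depth with its own 3x3 kernel, and the kernel weights (here random and softmax-normalized) would be predicted by a network in practice.

```python
import numpy as np

def spatially_variant_update(depth, kernels):
    """One refinement pass: depth is (H, W); kernels is (H, W, 3, 3),
    normalized to sum to 1 per pixel, so each output is a convex
    combination of the pixel's 3x3 neighborhood."""
    H, W = depth.shape
    padded = np.pad(depth, 1, mode="edge")
    out = np.zeros_like(depth)
    for dy in range(3):
        for dx in range(3):
            out += kernels[:, :, dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 50.0, (8, 8))        # stand-in for an initial dense map
logits = rng.normal(size=(8, 8, 3, 3))        # would come from a network
kernels = np.exp(logits)
kernels /= kernels.sum(axis=(2, 3), keepdims=True)
refined = spatially_variant_update(depth, kernels)
```

Because the kernels differ per pixel, the update can smooth flat regions while preserving depth discontinuities, which a single shared kernel cannot do.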
arXiv Detail & Related papers (2023-10-13T09:04:52Z)
- Towards Domain-agnostic Depth Completion [28.25756709062647]
Existing depth completion methods are often targeted at a specific sparse depth type and generalize poorly across task domains.
We present a method to complete sparse/semi-dense, noisy, and potentially low-resolution depth maps obtained by various range sensors.
Our method shows superior cross-domain generalization ability against state-of-the-art depth completion methods.
arXiv Detail & Related papers (2022-07-29T04:10:22Z)
- SelfTune: Metrically Scaled Monocular Depth Estimation through Self-Supervised Learning [53.78813049373321]
We propose a self-supervised learning method for the pre-trained supervised monocular depth networks to enable metrically scaled depth estimation.
Our approach is useful for various applications such as mobile robot navigation and is applicable to diverse environments.
arXiv Detail & Related papers (2022-03-10T12:28:42Z)
- Joint Learning of Salient Object Detection, Depth Estimation and Contour Extraction [91.43066633305662]
We propose a novel multi-task and multi-modal filtered transformer (MMFT) network for RGB-D salient object detection (SOD).
Specifically, we unify three complementary tasks: depth estimation, salient object detection and contour estimation. The multi-task mechanism promotes the model to learn the task-aware features from the auxiliary tasks.
Experiments show that it not only significantly surpasses the depth-based RGB-D SOD methods on multiple datasets, but also precisely predicts a high-quality depth map and salient contour at the same time.
arXiv Detail & Related papers (2022-03-09T17:20:18Z)
- End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization [87.56144220508587]
We take a new route to learn to optimize the LiDAR beam configuration for a given application.
We propose a reinforcement learning-based learning-to-optimize framework to automatically optimize the beam configuration.
Our method is especially useful when a low-resolution (low-cost) LiDAR is needed.
arXiv Detail & Related papers (2022-01-11T09:46:31Z)
- An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z)