Disentangling Object Motion and Occlusion for Unsupervised Multi-frame Monocular Depth
- URL: http://arxiv.org/abs/2203.15174v1
- Date: Tue, 29 Mar 2022 01:36:11 GMT
- Title: Disentangling Object Motion and Occlusion for Unsupervised Multi-frame Monocular Depth
- Authors: Ziyue Feng, Liang Yang, Longlong Jing, Haiyan Wang, YingLi Tian, Bing Li
- Abstract summary: Existing dynamic-object-focused methods only partially solved the mismatch problem at the training loss level.
We propose a novel multi-frame monocular depth prediction method to solve these problems at both the prediction and supervision loss levels.
Our method, called DynamicDepth, is a new framework trained via a self-supervised cycle consistent learning scheme.
- Score: 37.021579239596164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional self-supervised monocular depth prediction methods are based on a static environment assumption, which leads to accuracy degradation in dynamic scenes due to the mismatch and occlusion problems introduced by object motions. Existing dynamic-object-focused methods only partially solved the mismatch problem at the training loss level. In this paper, we accordingly propose a novel multi-frame monocular depth prediction method to solve these problems at both the prediction and supervision loss levels. Our method, called DynamicDepth, is a new framework trained via a self-supervised cycle consistent learning scheme. A Dynamic Object Motion Disentanglement (DOMD) module is proposed to disentangle object motions to solve the mismatch problem. Moreover, novel occlusion-aware Cost Volume and Re-projection Loss are designed to alleviate the occlusion effects of object motions. Extensive analyses and experiments on the Cityscapes and KITTI datasets show that our method significantly outperforms the state-of-the-art monocular depth prediction methods, especially in the areas of dynamic objects. Our code will be made publicly available.
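For context, self-supervised methods in this family are trained by warping source frames into the target view using the predicted depth and pose, then penalizing the photometric difference. Below is a minimal PyTorch sketch of the standard reprojection loss (SSIM plus L1, as popularized by Monodepth2); DynamicDepth's occlusion-aware variant additionally masks pixels occluded by disentangled object motion, which is not reproduced here.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM dissimilarity over 3x3 windows, as commonly used
    in self-supervised depth training."""
    mu_x = F.avg_pool2d(x, 3, 1, 1)
    mu_y = F.avg_pool2d(y, 3, 1, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)

def reprojection_loss(target, warped, alpha=0.85):
    """Per-pixel photometric error between the target frame and a source
    frame warped into the target view via predicted depth and pose."""
    l1 = (target - warped).abs().mean(1, keepdim=True)
    return alpha * ssim(target, warped).mean(1, keepdim=True) + (1 - alpha) * l1
```

The per-pixel map returned here is typically reduced with a minimum over source frames or masked before averaging; the occlusion-aware masking is the part that varies by method.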
Related papers
- DynaVINS++: Robust Visual-Inertial State Estimator in Dynamic Environments by Adaptive Truncated Least Squares and Stable State Recovery [11.37707868611451]
We propose a robust VINS framework called DynaVINS++.
Our approach shows promising performance in dynamic environments, including scenes with abruptly dynamic objects.
arXiv Detail & Related papers (2024-10-20T12:13:45Z)
- FlowDepth: Decoupling Optical Flow for Self-Supervised Monocular Depth Estimation [8.78717459496649]
We propose FlowDepth, where a Dynamic Motion Flow Module (DMFM) decouples the optical flow by a mechanism-based approach and warps the dynamic regions, thus solving the mismatch problem.
To address the unfairness of photometric errors caused by high-frequency and low-texture regions, we use Depth-Cue-Aware Blur (DCABlur) at the input level and a Cost-Volume sparsity loss at the loss level.
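A minimal sketch of the flow-based warping that underlies approaches like FlowDepth's DMFM: backward-warping a source frame with a dense flow field via PyTorch's grid_sample. How the dynamic-region flow itself is derived is the paper's contribution and is not modeled here; the utility below is a generic assumption.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(src, flow):
    """Backward-warp a source image with a dense optical flow field.

    src:  (B, C, H, W) source frame
    flow: (B, 2, H, W) per-pixel (dx, dy) displacement in pixels
    """
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(src.device)  # (2, H, W)
    coords = grid.unsqueeze(0) + flow                           # sampling locations
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)       # (B, H, W, 2)
    return F.grid_sample(src, norm_grid, align_corners=True)
```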
arXiv Detail & Related papers (2024-03-28T10:31:23Z)
- Dynamo-Depth: Fixing Unsupervised Depth Estimation for Dynamical Scenes [40.46121828229776]
Dynamo-Depth is an approach that disambiguates dynamical motion by jointly learning monocular depth, 3D independent flow field, and motion segmentation from unlabeled monocular videos.
Our proposed method achieves state-of-the-art performance on monocular depth estimation on the Waymo Open and nuScenes datasets, with significant improvement in the depth of moving objects.
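One plausible reading of this joint formulation: a pixel's total 2D motion decomposes into camera-induced rigid flow plus independent object flow gated by a motion segmentation mask. The sketch below is an assumed composition, not necessarily Dynamo-Depth's exact parameterization.

```python
import torch

def total_flow(rigid_flow, independent_flow, motion_mask):
    """Compose per-pixel flow from camera-induced rigid flow and an
    independent object flow gated by a soft motion segmentation mask.

    rigid_flow, independent_flow: (B, 2, H, W)
    motion_mask: (B, 1, H, W), values in [0, 1]
    """
    return rigid_flow + motion_mask * independent_flow
```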
arXiv Detail & Related papers (2023-10-29T03:24:16Z)
- DeNoising-MOT: Towards Multiple Object Tracking with Severe Occlusions [52.63323657077447]
We propose DNMOT, an end-to-end trainable DeNoising Transformer for multiple object tracking.
Specifically, we augment the trajectories with noise during training so that our model learns the denoising process in an encoder-decoder architecture.
We conduct extensive experiments on the MOT17, MOT20, and DanceTrack datasets, and the experimental results show that our method outperforms previous state-of-the-art methods by a clear margin.
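A hedged sketch of the trajectory-noising idea, loosely in the spirit of DN-DETR-style denoising training: jitter ground-truth track boxes so a decoder can learn to map noisy trajectories back to clean targets. The noise schedule and box parameterization below are assumptions, not DNMOT's exact scheme.

```python
import torch

def add_trajectory_noise(boxes, scale=0.05):
    """Jitter ground-truth track boxes (cx, cy, w, h, normalized to [0, 1])
    so a decoder can be supervised to denoise them back to the targets.

    boxes: (N, 4) tensor of normalized box parameters.
    """
    noise = torch.randn_like(boxes) * scale
    noisy = boxes.clone()
    # Shift centers by a fraction of the box size; scale sizes multiplicatively.
    noisy[:, :2] += noise[:, :2] * boxes[:, 2:]
    noisy[:, 2:] *= (1.0 + noise[:, 2:]).clamp(min=0.1)
    return noisy.clamp(0.0, 1.0)
```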
arXiv Detail & Related papers (2023-09-09T04:40:01Z)
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various adverse factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
- Dyna-DepthFormer: Multi-frame Transformer for Self-Supervised Depth Estimation in Dynamic Scenes [19.810725397641406]
We propose a novel Dyna-DepthFormer framework, which jointly predicts scene depth and the 3D motion field.
Our contributions are two-fold. First, we leverage multi-view correlation through a series of self- and cross-attention layers to obtain enhanced depth feature representations.
Second, we propose a warping-based Motion Network to estimate the motion field of dynamic objects without using semantic prior.
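A minimal sketch of cross-view attention as described: target-frame features attend to source-frame features to aggregate multi-view correlation. It uses PyTorch's nn.MultiheadAttention; the dimensions and residual structure are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Cross-attention from target-frame features (queries) to source-frame
    features (keys/values), one way to aggregate multi-view correlation."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tgt_feats, src_feats):
        # tgt_feats, src_feats: (B, N_tokens, dim) flattened feature maps.
        out, _ = self.attn(query=tgt_feats, key=src_feats, value=src_feats)
        return self.norm(tgt_feats + out)  # residual connection + norm
```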
arXiv Detail & Related papers (2023-01-14T09:43:23Z)
- SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for Dynamic Scenes [58.89295356901823]
Self-supervised monocular depth estimation has shown impressive results in static scenes.
It relies on the multi-view consistency assumption for training networks; however, this assumption is violated in dynamic object regions.
We introduce an external pretrained monocular depth estimation model for generating single-image depth prior.
Our model can predict sharp and accurate depth maps, even when training from monocular videos of highly-dynamic scenes.
arXiv Detail & Related papers (2022-11-07T16:17:47Z)
- Towards Scale-Aware, Robust, and Generalizable Unsupervised Monocular Depth Estimation by Integrating IMU Motion Dynamics [74.1720528573331]
Unsupervised monocular depth and ego-motion estimation has drawn extensive research attention in recent years.
We propose DynaDepth, a novel scale-aware framework that integrates information from vision and IMU motion dynamics.
We validate the effectiveness of DynaDepth by conducting extensive experiments and simulations on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2022-07-11T07:50:22Z)
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Self-Supervised Joint Learning Framework of Depth Estimation via Implicit Cues [24.743099160992937]
We propose a novel self-supervised joint learning framework for depth estimation.
The proposed framework outperforms the state-of-the-art (SOTA) on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2020-06-17T13:56:59Z)