Dynamo-Depth: Fixing Unsupervised Depth Estimation for Dynamical Scenes
- URL: http://arxiv.org/abs/2310.18887v1
- Date: Sun, 29 Oct 2023 03:24:16 GMT
- Title: Dynamo-Depth: Fixing Unsupervised Depth Estimation for Dynamical Scenes
- Authors: Yihong Sun, Bharath Hariharan
- Abstract summary: Dynamo-Depth is an approach that disambiguates dynamical motion by jointly learning monocular depth, 3D independent flow field, and motion segmentation from unlabeled monocular videos.
Our proposed method achieves state-of-the-art performance on monocular depth estimation on Waymo Open and nuScenes with significant improvement in the depth of moving objects.
- Score: 40.46121828229776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised monocular depth estimation techniques have demonstrated
encouraging results but typically assume that the scene is static. These
techniques suffer when trained on dynamical scenes, where apparent object
motion can equally be explained by hypothesizing the object's independent
motion, or by altering its depth. This ambiguity causes depth estimators to
predict erroneous depth for moving objects. To resolve this issue, we introduce
Dynamo-Depth, a unifying approach that disambiguates dynamical motion by
jointly learning monocular depth, 3D independent flow field, and motion
segmentation from unlabeled monocular videos. Specifically, we offer our key
insight that a good initial estimation of motion segmentation is sufficient for
jointly learning depth and independent motion despite the fundamental
underlying ambiguity. Our proposed method achieves state-of-the-art performance
on monocular depth estimation on the Waymo Open and nuScenes datasets with
significant improvement in the depth of moving objects. Code and additional
results are available at https://dynamo-depth.github.io.
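The depth/motion ambiguity described in the abstract can be made concrete with a toy 1-D pinhole example (illustrative only, not the paper's code; the focal length and point coordinates are made up): the same two pixel observations are explained equally well by a static point at one depth or by a nearer point with independent motion, so a photometric reconstruction loss alone cannot tell them apart.

```python
# Hedged sketch of the depth / independent-motion ambiguity,
# using a hypothetical 1-D pinhole camera.
f = 100.0  # focal length in pixels (made up)

def project(x, z):
    """Project a point at lateral offset x and depth z onto the image line."""
    return f * x / z

# Static-world hypothesis: a point at (x=2, z=20); the camera moves one unit
# forward, so the point's relative depth becomes 19 in frame 2.
u1_static = project(2.0, 20.0)   # observation in frame 1
u2_static = project(2.0, 19.0)   # observation in frame 2

# Moving-object hypothesis: a *nearer* point at (x=1, z=10) that also moves
# independently can reproduce the exact same pair of observations.
u1_moving = project(1.0, 10.0)              # frame 1: same pixel as above
z2 = 10.0 - 1.0                             # depth after ego-motion alone
dx = u2_static * z2 / f - 1.0               # lateral object motion that matches frame 2
u2_moving = project(1.0 + dx, z2)

assert abs(u1_static - u1_moving) < 1e-9    # both hypotheses fit frame 1
assert abs(u2_static - u2_moving) < 1e-9    # ... and frame 2
```

Both hypotheses fit the data exactly, which is why the paper argues for an explicit motion-segmentation signal to break the tie.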
Related papers
- Mining Supervision for Dynamic Regions in Self-Supervised Monocular Depth Estimation [23.93080319283679]
Existing methods jointly estimate pixel-wise depth and motion, relying mainly on an image reconstruction loss.
Dynamic regions remain a critical challenge for these methods due to the inherent ambiguity in depth and motion estimation.
This paper proposes a self-supervised training framework exploiting pseudo depth labels for dynamic regions from training data.
arXiv Detail & Related papers (2024-04-23T10:51:15Z) - DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and
Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
Our model delivers superior performance in all evaluated settings.
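The ego-motion/object-motion decomposition described above can be sketched as follows (an assumed form for illustration, not the DO3D code): every pixel's 3D point first follows the rigid ego-motion transform, and pixels inside a moving-object mask additionally receive a per-object 3D motion.

```python
# Hedged sketch of a decomposed scene-flow prediction:
# total motion = ego-motion applied to the back-projected point,
# plus independent object motion where a moving-object mask is active.
import numpy as np

def transform(T, p):
    """Apply a 4x4 rigid transform T to a 3D point p."""
    return (T @ np.append(p, 1.0))[:3]

T_ego = np.eye(4)
T_ego[2, 3] = -1.0                        # camera moves 1 unit forward
obj_motion = np.array([0.5, 0.0, 0.0])    # hypothetical independent object translation

p = np.array([2.0, 0.0, 20.0])            # point back-projected from predicted depth
p_static = transform(T_ego, p)            # static-world prediction: [2, 0, 19]
p_dynamic = p_static + obj_motion         # added only where the object mask is 1
```

Separating the two terms lets the ego-motion network explain the background while the object-motion head absorbs residual motion inside the mask.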
arXiv Detail & Related papers (2024-03-09T12:22:46Z) - SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for
Dynamic Scenes [58.89295356901823]
Self-supervised monocular depth estimation has shown impressive results in static scenes.
It relies on the multi-view consistency assumption to train networks; however, this assumption is violated in dynamic object regions.
We introduce an external pretrained monocular depth estimation model for generating single-image depth prior.
Our model can predict sharp and accurate depth maps, even when trained on monocular videos of highly dynamic scenes.
arXiv Detail & Related papers (2022-11-07T16:17:47Z) - ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving
Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z) - Disentangling Object Motion and Occlusion for Unsupervised Multi-frame
Monocular Depth [37.021579239596164]
Existing dynamic-object-focused methods have only partially addressed the mismatch problem, and only at the training loss level.
We propose a novel multi-frame monocular depth prediction method to solve these problems at both the prediction and supervision loss levels.
Our method, called DynamicDepth, is a new framework trained via a self-supervised cycle consistent learning scheme.
arXiv Detail & Related papers (2022-03-29T01:36:11Z) - Attentive and Contrastive Learning for Joint Depth and Motion Field
Estimation [76.58256020932312]
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z) - Unsupervised Monocular Depth Reconstruction of Non-Rigid Scenes [87.91841050957714]
We present an unsupervised monocular framework for dense depth estimation of dynamic scenes.
We derive a training objective that aims to opportunistically preserve pairwise distances between reconstructed 3D points.
Our method provides promising results, demonstrating its capability of reconstructing 3D from challenging videos of non-rigid scenes.
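The pairwise-distance objective mentioned above can be illustrated with a minimal sketch (an assumed form, not the authors' exact objective): rigid motion of a point set leaves all pairwise Euclidean distances unchanged, so penalizing distance changes constrains depth without assuming a fully static scene.

```python
# Hedged sketch of a pairwise-distance-preservation loss between
# corresponding reconstructed 3D points from two frames.
import numpy as np

def pairwise_distance_loss(pts_a, pts_b):
    """pts_a, pts_b: (N, 3) arrays of corresponding 3D points."""
    da = np.linalg.norm(pts_a[:, None, :] - pts_a[None, :, :], axis=-1)
    db = np.linalg.norm(pts_b[:, None, :] - pts_b[None, :, :], axis=-1)
    return np.abs(da - db).mean()

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
translated = pts + np.array([1.0, 2.0, 3.0])   # rigid motion: zero loss
scaled = pts * 2.0                             # depth-scale error: positive loss

assert pairwise_distance_loss(pts, translated) < 1e-9
assert pairwise_distance_loss(pts, scaled) > 0.1
```

Because translation (and rotation) incur zero loss while depth distortions do not, the objective can supervise non-rigid scenes where photometric consistency alone fails.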
arXiv Detail & Related papers (2020-12-31T16:02:03Z) - Self-Supervised Joint Learning Framework of Depth Estimation via
Implicit Cues [24.743099160992937]
We propose a novel self-supervised joint learning framework for depth estimation.
The proposed framework outperforms the state of the art (SOTA) on the KITTI and Make3D datasets.
arXiv Detail & Related papers (2020-06-17T13:56:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.