Fast Depth Estimation for View Synthesis
- URL: http://arxiv.org/abs/2003.06637v1
- Date: Sat, 14 Mar 2020 14:10:42 GMT
- Title: Fast Depth Estimation for View Synthesis
- Authors: Nantheera Anantrasirichai and Majid Geravand and David Braendler and
David R. Bull
- Abstract summary: Disparity/depth estimation from sequences of stereo images is an important element in 3D vision.
We propose a novel learning-based framework making use of dilated convolution, densely connected convolutional modules, compact decoder and skip connections.
We show that our network outperforms state-of-the-art methods, improving the accuracy of depth estimation and view synthesis by approximately 45% and 34% respectively on average.
- Score: 9.243157709083672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disparity/depth estimation from sequences of stereo images is an important
element in 3D vision. Owing to occlusions, imperfect settings and homogeneous
luminance, accurate estimation of depth remains a challenging problem. Targeting
view synthesis, we propose a novel learning-based framework making use of
dilated convolution, densely connected convolutional modules, compact decoder
and skip connections. The network is shallow but dense, so it is fast and
accurate. Two additional contributions -- a non-linear adjustment of the depth
resolution and the introduction of a projection loss -- reduce the estimation
error by up to 20% and 25% respectively. The results show that our
network outperforms state-of-the-art methods with an average improvement in
accuracy of depth estimation and view synthesis by approximately 45% and 34%
respectively. Where other methods produce estimated depth of comparable
quality, our method runs 10 times faster.
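The abstract does not specify how the non-linear adjustment of the depth resolution is implemented. As a minimal sketch, the standard pinhole stereo relation Z = fB/d can be followed by a power-law remapping that allocates more of the output range to nearby depths; the function names, the `gamma` parameter, and the camera values below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Standard pinhole stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

def nonlinear_depth_adjustment(depth, gamma=0.5):
    """Hypothetical non-linear remapping of the depth range: a power-law
    compression (gamma < 1) that spends more resolution on near depths,
    where view synthesis is most sensitive to depth errors."""
    d_norm = depth / depth.max()
    return d_norm ** gamma

# Illustrative KITTI-like camera: 700 px focal length, 0.54 m baseline.
disp = np.array([1.0, 2.0, 4.0, 8.0])
depth = disparity_to_depth(disp, focal_px=700.0, baseline_m=0.54)
adjusted = nonlinear_depth_adjustment(depth)
```

Doubling the disparity halves the depth, while the adjusted values shrink only by a factor of sqrt(2) per doubling, illustrating the resolution reallocation.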
Related papers
- Depth-aware Volume Attention for Texture-less Stereo Matching [67.46404479356896]
We propose a lightweight volume refinement scheme to tackle the texture deterioration in practical outdoor scenarios.
We introduce a depth volume supervised by the ground-truth depth map, capturing the relative hierarchy of image texture.
Local fine structure and context are emphasized to mitigate ambiguity and redundancy during volume aggregation.
arXiv Detail & Related papers (2024-02-14T04:07:44Z) - RA-Depth: Resolution Adaptive Self-Supervised Monocular Depth Estimation [27.679479140943503]
We propose a resolution adaptive self-supervised monocular depth estimation method (RA-Depth) by learning the scale invariance of the scene depth.
RA-Depth achieves state-of-the-art performance, and also exhibits a good ability of resolution adaptation.
arXiv Detail & Related papers (2022-07-25T08:49:59Z) - Analysis & Computational Complexity Reduction of Monocular and Stereo
Depth Estimation Techniques [0.0]
A high accuracy algorithm may provide the best depth estimation but may consume tremendous compute and energy resources.
Previous work has shown that this trade-off can be improved with a state-of-the-art stereo depth estimation method (AnyNet).
Our experiments with AnyNet show that depth estimation accuracy degrades by no more than 3% (three-pixel error metric) despite a 20% reduction in model size.
arXiv Detail & Related papers (2022-06-18T00:47:33Z) - Depth Refinement for Improved Stereo Reconstruction [13.941756438712382]
Current techniques for depth estimation from stereoscopic images still suffer from a built-in drawback.
A simple analysis reveals that the depth error is quadratically proportional to the object's distance.
We propose a simple but effective method that uses a refinement network for depth estimation.
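The quadratic relation mentioned above follows directly from the pinhole model Z = fB/d: differentiating with respect to the disparity d gives |dZ| = (Z^2 / fB) |dd|, so a fixed disparity error produces a depth error that grows with the square of the distance. A numeric check, with illustrative camera parameters:

```python
# Depth error growth under a constant disparity error, from Z = f*B/d.
f, B = 700.0, 0.54   # focal length (px) and baseline (m); illustrative values
d_err = 0.25         # assumed constant disparity error in pixels

def depth_error(Z):
    """|dZ| = Z^2 / (f*B) * |dd|, obtained by differentiating Z = f*B/d."""
    return Z**2 / (f * B) * d_err

ratio = depth_error(20.0) / depth_error(10.0)  # ≈ 4: doubling distance quadruples the error
```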
arXiv Detail & Related papers (2021-12-15T12:21:08Z) - On the Sins of Image Synthesis Loss for Self-supervised Depth Estimation [60.780823530087446]
We show that improvements in image synthesis do not necessitate improvement in depth estimation.
We attribute this diverging phenomenon to aleatoric uncertainties, which originate from data.
This observed divergence has not been previously reported or studied in depth.
arXiv Detail & Related papers (2021-09-13T17:57:24Z) - Geometry Uncertainty Projection Network for Monocular 3D Object
Detection [138.24798140338095]
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
arXiv Detail & Related papers (2021-07-29T06:59:07Z) - Direct Depth Learning Network for Stereo Matching [79.3665881702387]
A novel Direct Depth Learning Network (DDL-Net) is designed for stereo matching.
DDL-Net consists of two stages: the Coarse Depth Estimation stage and the Adaptive-Grained Depth Refinement stage.
We show that DDL-Net achieves an average improvement of 25% on the SceneFlow dataset and 12% on the DrivingStereo dataset.
arXiv Detail & Related papers (2020-12-10T10:33:57Z) - Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks [87.50632573601283]
We present a novel method for multi-view depth estimation from a single video.
Our method achieves temporally coherent depth estimation results by using a novel Epipolar Spatio-Temporal (EST) transformer.
To reduce the computational cost, inspired by recent Mixture-of-Experts models, we design a compact hybrid network.
arXiv Detail & Related papers (2020-11-26T04:04:21Z) - Adjusting Bias in Long Range Stereo Matching: A semantics guided
approach [14.306250516592305]
We propose a pair of novel depth-based loss functions, one for the foreground and one for the background.
These loss functions are tunable and can balance the inherent bias of the stereo learning algorithms.
Our solution yields substantial improvements in disparity and depth estimation, particularly for objects located at distances beyond 50 meters.
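The abstract does not give the form of these loss functions. One plausible sketch is a per-pixel robust loss split into foreground and background terms, with the background term up-weighting pixels beyond a distance threshold to counter the near-range bias; the smooth-L1 base, the 50 m threshold, and all weights below are assumptions for illustration, not the paper's tuned values.

```python
import numpy as np

def smooth_l1(pred, gt, beta=1.0):
    """Per-pixel smooth-L1 (Huber-style) penalty."""
    diff = np.abs(pred - gt)
    return np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta)

def split_depth_loss(pred, gt, fg_mask, far_weight=2.0, far_thresh=50.0):
    """Separate foreground/background depth losses; the background term
    up-weights ground-truth depths beyond far_thresh (meters)."""
    per_px = smooth_l1(pred, gt)
    fg = per_px[fg_mask].mean() if fg_mask.any() else 0.0
    bg_w = np.where(gt > far_thresh, far_weight, 1.0)
    bg = (per_px * bg_w)[~fg_mask].mean() if (~fg_mask).any() else 0.0
    return fg + bg

# Tiny example: one foreground pixel at 1.5 m, one background pixel at 55 m.
pred = np.array([1.0, 60.0])
gt = np.array([1.5, 55.0])
loss = split_depth_loss(pred, gt, fg_mask=np.array([True, False]))
```

Because the two terms are computed separately, their relative weights can be tuned to balance the learning signal between near and far regions.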
arXiv Detail & Related papers (2020-09-10T01:47:53Z) - Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.