Self-supervised Monocular Depth Estimation for All Day Images using
Domain Separation
- URL: http://arxiv.org/abs/2108.07628v1
- Date: Tue, 17 Aug 2021 13:52:19 GMT
- Title: Self-supervised Monocular Depth Estimation for All Day Images using
Domain Separation
- Authors: Lina Liu, Xibin Song, Mengmeng Wang, Yong Liu and Liangjun Zhang
- Abstract summary: We propose a domain-separated network for self-supervised depth estimation of all-day images.
Our approach achieves state-of-the-art depth estimation results for all-day images on the challenging Oxford RobotCar dataset.
- Score: 17.066753214406525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: DCNN-based self-supervised depth estimation approaches have achieved remarkable results. However, most of them can only handle either day-time or night-time images, and their performance degrades on all-day images due to the large domain shift and the variation of illumination between day and night images. To address these limitations, we propose a domain-separated network for self-supervised depth estimation of all-day images. Specifically, to mitigate the negative influence of disturbing factors (illumination, etc.), we partition the information of day and night image pairs into two complementary sub-spaces: a private domain and an invariant domain, where the former contains the unique information (illumination, etc.) of day and night images and the latter contains the essential shared information (texture, etc.). To guarantee that the day and night images contain the same scene information, the domain-separated network takes day-time images and the corresponding night-time images (generated by a GAN) as input, and the private and invariant feature extractors are learned with orthogonality and similarity losses, so that the domain gap is alleviated and better depth maps can be expected. In addition, reconstruction and photometric losses are utilized to recover the complementary information and estimate depth maps effectively. Experimental results demonstrate that our approach achieves state-of-the-art depth estimation results for all-day images on the challenging Oxford RobotCar dataset, confirming the superiority of the proposed approach.
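The abstract names four loss terms (orthogonality, similarity, reconstruction, photometric) but not their exact formulations. The snippet below is a minimal NumPy sketch of how such terms are commonly implemented on flattened feature maps; the function names, shapes, and the specific inner-product/L1 choices are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def orthogonality_loss(private_feat, invariant_feat):
    """Push private and invariant features into complementary sub-spaces:
    penalize the normalized inner product between them per sample."""
    p = private_feat.reshape(private_feat.shape[0], -1)
    q = invariant_feat.reshape(invariant_feat.shape[0], -1)
    p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    return np.mean(np.sum(p * q, axis=1) ** 2)

def similarity_loss(day_invariant, night_invariant):
    """Pull the invariant (shared) features of a day image and its
    GAN-generated night counterpart together (L1 distance here)."""
    return np.mean(np.abs(day_invariant - night_invariant))

def reconstruction_loss(reconstructed, image):
    """Private + invariant features together should rebuild the input."""
    return np.mean(np.abs(reconstructed - image))

def photometric_loss(warped, target):
    """Usual self-supervised term: the source frame warped with the
    predicted depth and ego-motion should match the target frame."""
    return np.mean(np.abs(warped - target))

# Toy usage with random arrays standing in for network outputs (B, C, H, W).
B, C, H, W = 2, 16, 8, 8
day_priv, day_inv = np.random.rand(B, C, H, W), np.random.rand(B, C, H, W)
night_priv, night_inv = np.random.rand(B, C, H, W), np.random.rand(B, C, H, W)
total = (orthogonality_loss(day_priv, day_inv)
         + orthogonality_loss(night_priv, night_inv)
         + similarity_loss(day_inv, night_inv))
print(total)
```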
Related papers
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- PIG: Prompt Images Guidance for Night-Time Scene Parsing [48.35991796324741]
Unsupervised domain adaptation (UDA) has become the predominant method for studying night scenes.
We propose a Night-Focused Network (NFNet) to learn night-specific features from both target domain images and prompt images.
We conduct experiments on four night-time datasets: NightCity, NightCity+, Dark Zurich, and ACDC.
arXiv Detail & Related papers (2024-06-15T07:06:19Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper tackles a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers them under a unified framework.
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- GlocalFuse-Depth: Fusing Transformers and CNNs for All-day Self-supervised Monocular Depth Estimation [0.12891210250935148]
We propose a two-branch network named GlocalFuse-Depth for self-supervised depth estimation of all-day images.
GlocalFuse-Depth achieves state-of-the-art results for all-day images on the Oxford RobotCar dataset.
arXiv Detail & Related papers (2023-02-20T10:20:07Z)
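GlocalFuse-Depth (above) is summarized only as a two-branch network that fuses Transformer and CNN features. As a rough illustration of that idea, here is a hedged NumPy sketch of a "local + global" fusion: a box-filter stand-in for the CNN branch, a single-head self-attention stand-in for the Transformer branch, and a 1x1 mixing step. All shapes and operations are assumptions, not the paper's architecture.

```python
import numpy as np

def local_branch(x):
    """Cheap stand-in for a CNN branch: 3x3 box filtering per channel."""
    c, h, w = x.shape
    pad = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += pad[:, 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0

def global_branch(x):
    """Single-head self-attention over all spatial positions."""
    c, h, w = x.shape
    tokens = x.reshape(c, h * w).T                 # (N, C) tokens
    attn = tokens @ tokens.T / np.sqrt(c)          # (N, N) similarities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return (attn @ tokens).T.reshape(c, h, w)      # globally mixed features

def fuse(x):
    """Concatenate both branches along channels and mix with 1x1 weights."""
    feats = np.concatenate([local_branch(x), global_branch(x)], axis=0)
    w_mix = np.random.rand(x.shape[0], feats.shape[0]) * 0.1  # 1x1 "conv"
    return np.einsum("oc,chw->ohw", w_mix, feats)

x = np.random.rand(16, 12, 12)   # (C, H, W) feature map
print(fuse(x).shape)             # (16, 12, 12)
```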
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
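The entry above describes two concrete corrections: a per-pixel intensity transformation to compensate for lighting changes between frames, and a per-pixel residual flow that refines the reprojection correspondences. The sketch below shows one plausible way to wire these into a photometric loss, assuming an affine per-pixel intensity model and bilinear resampling; the exact parameterization in the paper may differ.

```python
import numpy as np

def apply_intensity_transform(image, gain, bias):
    """Per-pixel affine change of brightness/contrast: I' = a * I + b."""
    return np.clip(gain * image + bias, 0.0, 1.0)

def bilinear_sample(image, coords_x, coords_y):
    """Sample a (H, W) image at float coordinates with bilinear weights."""
    h, w = image.shape
    x0 = np.clip(np.floor(coords_x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(coords_y).astype(int), 0, h - 2)
    wx = np.clip(coords_x - x0, 0.0, 1.0)
    wy = np.clip(coords_y - y0, 0.0, 1.0)
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x0 + 1]
    bot = (1 - wx) * image[y0 + 1, x0] + wx * image[y0 + 1, x0 + 1]
    return (1 - wy) * top + wy * bot

def corrected_reprojection(src, rigid_x, rigid_y, residual_flow):
    """Add the predicted residual flow to the rigid (depth + ego-motion)
    correspondences, then sample the source frame."""
    return bilinear_sample(src,
                           rigid_x + residual_flow[0],
                           rigid_y + residual_flow[1])

# Toy usage with random stand-ins for network predictions.
h, w = 6, 8
src = np.random.rand(h, w)
ys, xs = np.meshgrid(np.arange(h, dtype=float),
                     np.arange(w, dtype=float), indexing="ij")
residual = 0.1 * np.random.randn(2, h, w)             # predicted residual flow
warped = corrected_reprojection(src, xs, ys, residual)
gain = 1.0 + 0.05 * np.random.randn(h, w)
bias = 0.02 * np.random.randn(h, w)
warped = apply_intensity_transform(warped, gain, bias)
photometric = np.mean(np.abs(warped - np.random.rand(h, w)))  # vs. target frame
print(photometric)
```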
- Cross-Domain Correlation Distillation for Unsupervised Domain Adaptation in Nighttime Semantic Segmentation [17.874336775904272]
We propose a novel domain adaptation framework via cross-domain correlation distillation, called CCDistill.
We extract the content and style knowledge contained in the features and calculate the degree of inherent or illumination difference between two images.
Experiments on Dark Zurich and ACDC demonstrate that CCDistill achieves the state-of-the-art performance for nighttime semantic segmentation.
arXiv Detail & Related papers (2022-05-02T12:42:04Z)
- Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark [20.66405067066299]
We introduce Priors-Based Regularization to learn distribution knowledge from unpaired depth maps.
We also leverage Mapping-Consistent Image Enhancement module to enhance image visibility and contrast.
Our framework achieves remarkable improvements and state-of-the-art results on two nighttime datasets.
arXiv Detail & Related papers (2021-08-09T06:24:35Z)
- Unsupervised Monocular Depth Estimation for Night-time Images using Adversarial Domain Feature Adaptation [17.067988025947024]
We look into the problem of estimating per-pixel depth maps from unconstrained RGB monocular night-time images.
The state-of-the-art day-time depth estimation methods fail miserably when tested with night-time images.
We propose to solve this by posing it as a domain adaptation problem in which a network trained on day-time images is adapted to work on night-time images.
arXiv Detail & Related papers (2020-10-03T17:55:16Z)
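The adversarial domain feature adaptation entry above adapts a day-trained depth network to night images by making night features indistinguishable from day features. Below is a minimal NumPy sketch of the two adversarial objectives using a toy logistic discriminator; the discriminator form, feature dimensions, and label convention are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

def discriminator(features, w, b):
    """Tiny logistic 'domain classifier': P(feature comes from day)."""
    logits = features @ w + b
    return 1.0 / (1.0 + np.exp(-logits))

def bce(pred, label, eps=1e-7):
    """Binary cross-entropy against a constant domain label."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(label * np.log(pred) + (1 - label) * np.log(1 - pred))

rng = np.random.default_rng(0)
dim = 64
w, b = rng.normal(size=dim) * 0.01, 0.0
day_feat = rng.normal(size=(8, dim))     # from the frozen day encoder
night_feat = rng.normal(size=(8, dim))   # from the night encoder being adapted

# Discriminator objective: classify day features as 1 and night features as 0.
d_loss = bce(discriminator(day_feat, w, b), 1.0) + \
         bce(discriminator(night_feat, w, b), 0.0)

# Encoder (generator) objective: fool the discriminator so night looks like day.
g_loss = bce(discriminator(night_feat, w, b), 1.0)
print(d_loss, g_loss)
```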
- Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation [107.33492779588641]
We develop a curriculum framework to adapt semantic segmentation models from day to night without using nighttime annotations.
We also design a new evaluation framework to address the substantial uncertainty of semantics in nighttime images.
arXiv Detail & Related papers (2020-05-28T16:54:38Z)
- DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning [65.94499390875046]
DeFeat-Net is an approach to simultaneously learn a cross-domain dense feature representation.
Our technique is able to outperform the current state-of-the-art with around 10% reduction in all error measures.
arXiv Detail & Related papers (2020-03-30T13:10:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.