Digging Into Uncertainty-based Pseudo-label for Robust Stereo Matching
- URL: http://arxiv.org/abs/2307.16509v1
- Date: Mon, 31 Jul 2023 09:11:31 GMT
- Title: Digging Into Uncertainty-based Pseudo-label for Robust Stereo Matching
- Authors: Zhelun Shen, Xibin Song, Yuchao Dai, Dingfu Zhou, Zhibo Rao, Liangjun Zhang
- Abstract summary: We propose to dig into uncertainty estimation for robust stereo matching.
An uncertainty-based pseudo-label is proposed to adapt the pre-trained model to the new domain.
Our method shows strong cross-domain, adaptation, and joint generalization, and obtains 1st place on the stereo task of Robust Vision Challenge 2020.
- Score: 39.959000340261625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the domain differences and unbalanced disparity distribution across
multiple datasets, current stereo matching approaches are commonly limited to a
specific dataset and generalize poorly to others. Such a domain shift issue is
usually addressed by substantial adaptation on costly target-domain
ground-truth data, which cannot be easily obtained in practical settings. In
this paper, we propose to dig into uncertainty estimation for robust stereo
matching. Specifically, to balance the disparity distribution, we employ a
pixel-level uncertainty estimation to adaptively adjust the next-stage
disparity search space, thereby driving the network to progressively prune
out the space of unlikely correspondences. Then, to address the scarcity of
ground-truth data, an uncertainty-based pseudo-label is proposed to adapt the
pre-trained model to the new domain, where pixel-level and area-level
uncertainty estimation are proposed to filter out the high-uncertainty pixels
of predicted disparity maps and generate sparse yet reliable pseudo-labels to
bridge the domain gap. Experimentally, our method shows strong cross-domain,
adaptation, and joint generalization, and obtains 1st place on the stereo
task of Robust Vision Challenge 2020. Additionally, our uncertainty-based
pseudo-labels can be extended to train monocular depth estimation networks in
an unsupervised way and even achieve performance comparable to supervised
methods. The code will be available at
https://github.com/gallenszl/UCFNet.
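The abstract describes filtering predicted disparity maps with both pixel-level and area-level uncertainty to keep only reliable pixels as pseudo-labels. The sketch below illustrates that idea in NumPy; the function name, thresholds, and patch size are illustrative assumptions, not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

def generate_pseudo_labels(disparity, uncertainty, pixel_thresh=0.5,
                           area_size=8, area_thresh=0.6):
    """Sparse pseudo-label generation by uncertainty filtering (a sketch).

    disparity:   (H, W) predicted disparity map
    uncertainty: (H, W) per-pixel uncertainty, assumed in [0, 1]
    """
    # Pixel-level filter: drop individually high-uncertainty pixels.
    keep = uncertainty < pixel_thresh

    # Area-level filter: drop whole patches whose mean uncertainty is high,
    # which removes regions that are unreliable as a whole.
    h, w = uncertainty.shape
    for y in range(0, h, area_size):
        for x in range(0, w, area_size):
            patch = uncertainty[y:y + area_size, x:x + area_size]
            if patch.mean() > area_thresh:
                keep[y:y + area_size, x:x + area_size] = False

    # NaN marks pixels excluded from the sparse pseudo-label.
    pseudo = np.where(keep, disparity, np.nan)
    return pseudo, keep
```

The resulting sparse map can then supervise the pre-trained model on the new domain, using only the surviving pixels in the loss.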
Related papers
- Domain Adaptive Object Detection via Balancing Between Self-Training and Adversarial Learning [19.81071116581342]
Deep learning based object detectors struggle to generalize to a new target domain with significant variations in object appearance and background.
Current methods align domains by using image or instance-level adversarial feature alignment.
We propose to leverage the model's predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment.
arXiv Detail & Related papers (2023-11-08T16:40:53Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Adapting to Latent Subgroup Shifts via Concepts and Proxies [82.01141290360562]
We show that the optimal target predictor can be non-parametrically identified with the help of concept and proxy variables available only in the source domain.
For continuous observations, we propose a latent variable model specific to the data generation process at hand.
arXiv Detail & Related papers (2022-12-21T18:30:22Z)
- Certainty Volume Prediction for Unsupervised Domain Adaptation [35.984559137218504]
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data.
We propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space.
We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-11-03T11:22:55Z)
- Synergizing between Self-Training and Adversarial Learning for Domain Adaptive Object Detection [11.091890625685298]
We study adapting trained object detectors to unseen domains manifesting significant variations of object appearance, viewpoints and backgrounds.
We propose to leverage model predictive uncertainty to strike the right balance between adversarial feature alignment and class-level alignment.
arXiv Detail & Related papers (2021-10-01T08:10:00Z)
- Tune it the Right Way: Unsupervised Validation of Domain Adaptation via Soft Neighborhood Density [125.64297244986552]
We propose an unsupervised validation criterion that measures the density of soft neighborhoods by computing the entropy of the similarity distribution between points.
Our criterion is simpler than competing validation methods, yet more effective.
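The criterion above computes the entropy of a similarity distribution over soft neighborhoods. A minimal NumPy sketch of that computation follows; the function name and temperature value are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def soft_neighborhood_density(features, temperature=0.05):
    """Mean entropy of each point's softmax similarity distribution
    over all other points (a sketch of an SND-style criterion)."""
    # L2-normalize so dot products are cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity

    # Row-wise softmax over neighbors (max-subtraction for stability).
    logits = sim / temperature
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)

    # Entropy per point, averaged over all points.
    ent = -(p * np.log(p + 1e-12)).sum(axis=1)
    return ent.mean()
```

Higher values indicate denser, more uniform soft neighborhoods, which the paper uses as an unsupervised signal for selecting hyperparameters under domain shift.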
arXiv Detail & Related papers (2021-08-24T17:41:45Z)
- Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection [34.18382705952121]
Unlabelled domain adaptive object detection aims to adapt detectors from a labelled source domain to an unlabelled target domain.
Adversarial learning may impair the alignment of well-aligned samples, as it merely aligns the global distributions across domains.
We design an uncertainty-aware domain adaptation network (UaDAN) that introduces conditional adversarial learning to align well-aligned and poorly-aligned samples separately.
arXiv Detail & Related papers (2021-02-27T15:04:07Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.