Learning Accurate Dense Correspondences and When to Trust Them
- URL: http://arxiv.org/abs/2101.01710v2
- Date: Thu, 1 Apr 2021 16:57:01 GMT
- Title: Learning Accurate Dense Correspondences and When to Trust Them
- Authors: Prune Truong and Martin Danelljan and Luc Van Gool and Radu Timofte
- Abstract summary: We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
- Score: 161.76275845530964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Establishing dense correspondences between a pair of images is an important
and general problem. However, dense flow estimation is often inaccurate in the
case of large displacements or homogeneous regions. For most applications and
downstream tasks, such as pose estimation, image manipulation, or 3D
reconstruction, it is crucial to know when and where to trust the estimated
matches.
In this work, we aim to estimate a dense flow field relating two images,
coupled with a robust pixel-wise confidence map indicating the reliability and
accuracy of the prediction. We develop a flexible probabilistic approach that
jointly learns the flow prediction and its uncertainty. In particular, we
parametrize the predictive distribution as a constrained mixture model,
ensuring better modelling of both accurate flow predictions and outliers.
Moreover, we develop an architecture and training strategy tailored for robust
and generalizable uncertainty prediction in the context of self-supervised
training. Our approach obtains state-of-the-art results on multiple challenging
geometric matching and optical flow datasets. We further validate the
usefulness of our probabilistic confidence estimation for the task of pose
estimation. Code and models are available at
https://github.com/PruneTruong/PDCNet.
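The abstract describes turning a constrained mixture parametrization of the flow's predictive distribution into a pixel-wise confidence map. The snippet below is a minimal NumPy sketch of that general idea only, not the authors' exact formulation (their implementation is at the GitHub link above): it assumes a two-component Laplace mixture per pixel, with one component constrained to a small scale for accurate predictions and a broad component for outliers, and reads off confidence as the probability that the flow error stays within a radius R. The component count, the per-dimension independence assumption, and all numbers are illustrative.

```python
import numpy as np

def laplace_prob_within(radius, scale):
    """P(|e| <= radius) for a zero-mean 1-D Laplace distribution with the given scale."""
    return 1.0 - np.exp(-radius / scale)

def confidence_map(alphas, scales, radius=1.0):
    """
    Per-pixel confidence that the flow error stays within `radius` pixels.

    alphas: (K, H, W) mixture weights, summing to 1 over K.
    scales: (K, H, W) Laplace scale parameters; in the spirit of a
            *constrained* mixture, component 0 is kept at a small scale
            (accurate predictions) and the others at large scales (outliers).
    Returns an (H, W) confidence map with values in [0, 1].
    """
    per_dim = laplace_prob_within(radius, scales)   # (K, H, W), one flow dimension
    per_comp = per_dim ** 2                         # x and y treated as independent (simplification)
    return np.sum(alphas * per_comp, axis=0)        # average over mixture components

# Toy usage: a tight inlier component and a broad outlier component.
H, W = 4, 4
alphas = np.stack([np.full((H, W), 0.8), np.full((H, W), 0.2)])
scales = np.stack([np.full((H, W), 0.5), np.full((H, W), 8.0)])
P_R = confidence_map(alphas, scales, radius=1.0)
print(P_R.shape, float(P_R[0, 0]))  # (4, 4), roughly 0.6 for these toy numbers
```

Thresholding such a map (e.g. keeping pixels with P_R above 0.5) is one simple way to decide which matches to trust in downstream tasks such as pose estimation.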
Related papers
- Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Contents generated by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z)
- Investigating Low Data, Confidence Aware Image Prediction on Smooth Repetitive Videos using Gaussian Processes [25.319133815064557]
We focus on the problem of predicting future images of an image sequence with interpretable confidence bounds from very little training data.
We generate probability distributions over sequentially predicted images, and propagate uncertainty through time to generate a confidence metric for our predictions.
We showcase the capabilities of our approach on real world data by predicting pedestrian flows and weather patterns from satellite imagery.
arXiv Detail & Related papers (2023-07-20T22:35:27Z)
- Birds of a Feather Trust Together: Knowing When to Trust a Classifier via Adaptive Neighborhood Aggregation [30.34223543030105]
We show how NeighborAgg can leverage these two essential sources of information via adaptive neighborhood aggregation.
We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate.
arXiv Detail & Related papers (2022-11-29T18:43:15Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Improving the Reliability for Confidence Estimation [16.952133489480776]
Confidence estimation is a task that aims to evaluate the trustworthiness of the model's prediction output during deployment.
Previous works have outlined two important qualities that a reliable confidence estimation model should possess.
We propose a meta-learning framework that can simultaneously improve upon both qualities in a confidence estimation model.
arXiv Detail & Related papers (2022-10-13T06:34:23Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We propose the Enhanced Probabilistic Dense Correspondence Network, PDC-Net+, capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Calibrated Adversarial Refinement for Stochastic Semantic Segmentation [5.849736173068868]
We present a strategy for learning a calibrated predictive distribution over semantic maps, where the probability associated with each prediction reflects its ground truth correctness likelihood.
We demonstrate the versatility and robustness of the approach by achieving state-of-the-art results on the multigrader LIDC dataset and on a modified Cityscapes dataset with injected ambiguities.
We show that the core design can be adapted to other tasks requiring learning a calibrated predictive distribution by experimenting on a toy regression dataset.
arXiv Detail & Related papers (2020-06-23T16:39:59Z)