Confidence-aware Adversarial Learning for Self-supervised Semantic
Matching
- URL: http://arxiv.org/abs/2008.10902v1
- Date: Tue, 25 Aug 2020 09:15:48 GMT
- Title: Confidence-aware Adversarial Learning for Self-supervised Semantic
Matching
- Authors: Shuaiyi Huang, Qiuyue Wang, Xuming He
- Abstract summary: We introduce a Confidence-Aware Semantic Matching Network (CAMNet)
First, we estimate a dense confidence map for a matching prediction through self-supervised learning.
Second, based on the estimated confidence, we refine initial predictions by propagating reliable matches to the remaining locations on the image plane.
We are the first to exploit confidence during refinement to improve semantic matching accuracy, and we develop an end-to-end self-supervised adversarial learning procedure for the entire matching network.
- Score: 29.132600499226406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we aim to address the challenging task of semantic matching
where matching ambiguity is difficult to resolve even with learned deep
features. We tackle this problem by taking prediction confidence into account
and developing a novel refinement strategy to correct partial matching
errors. Specifically, we introduce a Confidence-Aware Semantic Matching Network
(CAMNet) which instantiates two key ideas of our approach. First, we propose to
estimate a dense confidence map for a matching prediction through
self-supervised learning. Second, based on the estimated confidence, we refine
initial predictions by propagating reliable matches to the remaining locations
on the image plane. In addition, we develop a new hybrid loss that integrates
a semantic alignment loss, a confidence loss, and an adversarial loss that
measures the quality of the semantic correspondence. We are the first to
exploit confidence during refinement to improve semantic matching accuracy,
and we develop an end-to-end self-supervised adversarial learning procedure for
the entire matching network. We evaluate our method on two public benchmarks,
on which we outperform the prior state of the art. We will
release our source code at https://github.com/ShuaiyiHuang/CAMNet.
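To make the refinement idea concrete, here is a minimal sketch, in PyTorch, of confidence-guided propagation: low-confidence correspondences are replaced by a confidence-weighted average of their neighbours, so that reliable matches spread across the image plane. The window size, the soft mixing rule, and the tensor shapes are illustrative assumptions, not CAMNet's actual refinement module; in CAMNet the confidence map itself is learned self-supervisedly, whereas here `conf` is simply assumed to be given.

```python
import torch
import torch.nn.functional as F

def refine_flow(flow, conf, k=5):
    """flow: (B, 2, H, W) initial correspondence field.
    conf: (B, 1, H, W) per-pixel confidence in [0, 1]."""
    B, _, H, W = flow.shape
    pad = k // 2

    # Gather the k x k neighbourhood of every location for both tensors.
    flow_patches = F.unfold(flow, k, padding=pad).view(B, 2, k * k, H, W)
    conf_patches = F.unfold(conf, k, padding=pad).view(B, 1, k * k, H, W)

    # Confidence-weighted average of the neighbouring matches.
    weights = conf_patches / (conf_patches.sum(dim=2, keepdim=True) + 1e-6)
    propagated = (flow_patches * weights).sum(dim=2)

    # Keep confident predictions, overwrite unreliable ones with the
    # propagated estimate.
    return conf * flow + (1.0 - conf) * propagated
```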
Related papers
- Cross Domain Object Detection via Multi-Granularity Confidence Alignment based Mean Teacher [14.715398100791559]
Cross domain object detection learns an object detector for an unlabeled target domain by transferring knowledge from an annotated source domain.
In this study, we find that confidence misalignment of the predictions, including category-level overconfidence, instance-level task confidence inconsistency, and image-level confidence misfocusing, leads to suboptimal performance on the target domain.
arXiv Detail & Related papers (2024-07-10T15:56:24Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, yet largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
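As a concrete illustration of the "confidence gap" this entry refers to, here is a small sketch that measures the gap between the mean confidence on correct and on misclassified predictions; the flat-minima training that enlarges this gap is not shown, and the names below are mine, not the paper's.

```python
import torch
import torch.nn.functional as F

def confidence_gap(logits, labels):
    """logits: (N, C) model outputs; labels: (N,) ground-truth classes.
    Returns the mean confidence on correct predictions minus the mean
    confidence on misclassified ones; a larger gap makes failures easier
    to detect by simply thresholding the confidence."""
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels)
    return conf[correct].mean() - conf[~correct].mean()
```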
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z)
- Dual Focal Loss for Calibration [21.663687352629225]
We propose a new loss function by focusing on dual logits.
By maximizing the gap between these two logits, our proposed dual focal loss can achieve a better balance between over-confidence and under-confidence.
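A rough sketch of the idea of a focal-style loss defined on two probabilities, the ground-truth class and its strongest competitor; the exact weighting below is my own illustrative reading of the summary, not necessarily the paper's formulation.

```python
import torch
import torch.nn.functional as F

def dual_focal_style_loss(logits, target, gamma=2.0):
    """logits: (B, C); target: (B,) integer class labels."""
    probs = F.softmax(logits, dim=1)
    p_gt = probs.gather(1, target.unsqueeze(1)).squeeze(1)   # ground-truth prob
    # Largest probability among the competing (non-ground-truth) classes.
    masked = probs.scatter(1, target.unsqueeze(1), 0.0)
    p_rival = masked.max(dim=1).values
    # Down-weight samples whose gap (p_gt - p_rival) is already large.
    weight = (1.0 - p_gt + p_rival) ** gamma
    return -(weight * torch.log(p_gt.clamp(min=1e-12))).mean()
```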
arXiv Detail & Related papers (2023-05-23T04:19:16Z)
- Birds of a Feather Trust Together: Knowing When to Trust a Classifier
via Adaptive Neighborhood Aggregation [30.34223543030105]
We show how NeighborAgg can leverage the two essential sources of information via adaptive neighborhood aggregation.
We also extend our approach to the closely related task of mislabel detection and provide a theoretical coverage guarantee to bound the false negative rate.
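For flavour, a minimal sketch of the general idea: mix the classifier's own confidence with the label agreement of the k nearest neighbours in feature space. The fixed convex mixing weight is a simplification of the paper's learned, adaptive aggregation, and all names are hypothetical.

```python
import torch

def trust_score(feat, probs, bank_feat, bank_labels, k=10, alpha=0.5):
    """feat: (B, D) query features; probs: (B, C) query softmax outputs.
    bank_feat: (N, D), bank_labels: (N,) labelled reference set."""
    pred = probs.argmax(dim=1)                               # (B,)
    dist = torch.cdist(feat, bank_feat)                      # (B, N)
    knn = dist.topk(k, largest=False).indices                # (B, k)
    neigh_labels = bank_labels[knn]                          # (B, k)
    # Fraction of neighbours that agree with the predicted label.
    agreement = (neigh_labels == pred.unsqueeze(1)).float().mean(dim=1)
    model_conf = probs.max(dim=1).values
    return alpha * model_conf + (1.0 - alpha) * agreement
```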
arXiv Detail & Related papers (2022-11-29T18:43:15Z)
- ConfMix: Unsupervised Domain Adaptation for Object Detection via
Confidence-based Mixing [32.679280923208715]
Unsupervised Domain Adaptation (UDA) for object detection aims to adapt a model trained on a source domain to detect instances from a new target domain for which annotations are not available.
We propose ConfMix, the first method that introduces a sample mixing strategy based on region-level detection confidence for adaptive object detector learning.
arXiv Detail & Related papers (2022-10-20T19:16:39Z)
- An evaluation of word-level confidence estimation for end-to-end
automatic speech recognition [70.61280174637913]
We investigate confidence estimation for end-to-end automatic speech recognition (ASR).
We provide an extensive benchmark of popular confidence methods on four well-known speech datasets.
Our results suggest a strong baseline can be obtained by scaling the logits by a learnt temperature.
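The learned-temperature baseline mentioned here is easy to sketch: fit a single temperature on held-out logits and use the rescaled maximum softmax probability as the confidence. The LBFGS-on-NLL fitting procedure is a common choice I am assuming, not necessarily the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, iters=50):
    """logits: (N, C) held-out logits; labels: (N,) integer targets."""
    log_t = torch.zeros(1, requires_grad=True)   # temperature starts at 1.0
    opt = torch.optim.LBFGS([log_t], lr=0.1, max_iter=iters)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().detach()

def confidence(logits, temperature):
    # Maximum softmax probability of the temperature-scaled logits.
    return F.softmax(logits / temperature, dim=1).max(dim=1).values
```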
arXiv Detail & Related papers (2021-01-14T09:51:59Z)
- Learning Accurate Dense Correspondences and When to Trust Them [161.76275845530964]
We aim to estimate a dense flow field relating two images, coupled with a robust pixel-wise confidence map.
We develop a flexible probabilistic approach that jointly learns the flow prediction and its uncertainty.
Our approach obtains state-of-the-art results on challenging geometric matching and optical flow datasets.
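A minimal sketch of the joint flow-and-uncertainty idea: the network predicts a flow field together with a per-pixel log-variance and is trained with a Gaussian negative log-likelihood, so the predicted variance becomes a learned confidence signal. The single-Gaussian model is a deliberate simplification; the paper's probabilistic formulation is richer.

```python
import torch

def gaussian_nll_flow_loss(pred_flow, log_var, gt_flow):
    """pred_flow, gt_flow: (B, 2, H, W); log_var: (B, 1, H, W) predicted
    per-pixel log-variance. Low predicted variance at a pixel means the
    network is confident in its flow there."""
    sq_err = (pred_flow - gt_flow).pow(2).sum(dim=1, keepdim=True)
    return (0.5 * sq_err * torch.exp(-log_var) + log_var).mean()

# A simple confidence map can then be read off as, e.g., exp(-log_var).
```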
arXiv Detail & Related papers (2021-01-05T18:54:11Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
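To make "positive-confidence classification" concrete, here is a minimal sketch of an empirical Pconf-style risk, trained from positive samples and their confidences r(x) = p(y=+1|x) only; the logistic surrogate and the omission of the class-prior factor and of the paper's skew-correction hyperparameter are my simplifications.

```python
import torch
import torch.nn.functional as F

def pconf_risk(scores, conf, eps=1e-6):
    """scores: (N,) classifier outputs f(x_i) on positive samples only.
    conf: (N,) confidences r_i = p(y=+1 | x_i) in (0, 1]."""
    loss_pos = F.softplus(-scores)            # logistic loss for label +1
    loss_neg = F.softplus(scores)             # logistic loss for label -1
    ratio = (1.0 - conf) / conf.clamp(min=eps)
    return (loss_pos + ratio * loss_neg).mean()
```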
arXiv Detail & Related papers (2020-01-29T00:04:36Z)