ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization
- URL: http://arxiv.org/abs/2208.08631v1
- Date: Thu, 18 Aug 2022 04:37:50 GMT
- Title: ConMatch: Semi-Supervised Learning with Confidence-Guided Consistency Regularization
- Authors: Jiwon Kim, Youngjo Min, Daehwan Kim, Gyuseong Lee, Junyoung Seo, Kwangrok Ryoo, Seungryong Kim
- Abstract summary: We present a novel semi-supervised learning framework, dubbed ConMatch, that intelligently leverages consistency regularization between the model's predictions from two strongly-augmented views of an image, weighted by the confidence of the pseudo-label.
We conduct experiments to demonstrate the effectiveness of our ConMatch over the latest methods and provide extensive ablation studies.
- Score: 26.542718087103665
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We present a novel semi-supervised learning framework, dubbed
ConMatch, that intelligently leverages consistency regularization between the
model's predictions from two strongly-augmented views of an image, weighted by
the confidence of the pseudo-label. While the latest semi-supervised learning
methods use weakly- and strongly-augmented views of an image to define a
directional consistency loss, how to define such a direction for the
consistency regularization between two strongly-augmented views remains
unexplored. To account for this, we present novel confidence measures for the
pseudo-labels from strongly-augmented views, using the weakly-augmented view
as an anchor, in both non-parametric and parametric approaches. In particular,
in the parametric approach, we are the first to learn the confidence of a
pseudo-label within the network, trained together with the backbone model in
an end-to-end manner. In addition, we present a stage-wise training scheme to
boost the convergence of training. When incorporated into existing
semi-supervised learners, ConMatch consistently boosts performance. We conduct
experiments to demonstrate the effectiveness of ConMatch over the latest
methods and provide extensive ablation studies. Code has been made publicly
available at
https://github.com/JiwonCocoder/ConMatch.
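To make the idea above concrete, below is a minimal PyTorch-style sketch of a confidence-weighted consistency term between two strongly-augmented views, with the weakly-augmented view acting as the anchor. This is not the authors' implementation (see the GitHub link above): the cosine-similarity form of the non-parametric confidence, the `ConfidenceHead` architecture, the threshold value, and all names are illustrative assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConfidenceHead(nn.Module):
    """Tiny head that scores the confidence of a strong view's pseudo-label.

    Stands in for the paper's parametric confidence, which is learned jointly
    with the backbone; its inputs and architecture here are assumptions. In the
    parametric case it would be trained end-to-end via an additional
    confidence loss (omitted here).
    """

    def __init__(self, num_classes: int, hidden: int = 128):
        super().__init__()
        # Input: concatenated weak-view and strong-view probability vectors.
        self.net = nn.Sequential(
            nn.Linear(2 * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, probs_weak, probs_strong):
        return self.net(torch.cat([probs_weak, probs_strong], dim=-1)).squeeze(-1)


def confidence_weighted_consistency(logits_weak, logits_s1, logits_s2,
                                    conf_head=None, threshold=0.95):
    """Consistency between two strongly-augmented views, weighted by the
    confidence of their pseudo-labels, with the weak view as an anchor."""
    probs_weak = F.softmax(logits_weak.detach(), dim=-1)
    max_probs, pseudo_labels = probs_weak.max(dim=-1)
    mask = (max_probs >= threshold).float()  # FixMatch-style confidence mask

    probs_s1 = F.softmax(logits_s1, dim=-1)
    probs_s2 = F.softmax(logits_s2, dim=-1)

    if conf_head is None:
        # Non-parametric confidence: agreement of each strong view with the anchor.
        conf_s1 = F.cosine_similarity(probs_s1.detach(), probs_weak, dim=-1)
        conf_s2 = F.cosine_similarity(probs_s2.detach(), probs_weak, dim=-1)
    else:
        # Parametric confidence: predicted by the small learned head.
        conf_s1 = conf_head(probs_weak, probs_s1.detach())
        conf_s2 = conf_head(probs_weak, probs_s2.detach())

    # Weak -> strong pseudo-labeling terms.
    ce = (F.cross_entropy(logits_s1, pseudo_labels, reduction="none")
          + F.cross_entropy(logits_s2, pseudo_labels, reduction="none"))
    loss_ws = (ce * mask).mean()

    # Strong <-> strong consistency: the less confident view is pulled toward
    # the more confident one, scaled by that confidence.
    kl_12 = F.kl_div(F.log_softmax(logits_s1, dim=-1), probs_s2.detach(),
                     reduction="none").sum(-1)
    kl_21 = F.kl_div(F.log_softmax(logits_s2, dim=-1), probs_s1.detach(),
                     reduction="none").sum(-1)
    prefer_2 = (conf_s2 >= conf_s1).float()
    loss_ss = (mask * (prefer_2 * conf_s2.detach() * kl_12
                       + (1.0 - prefer_2) * conf_s1.detach() * kl_21)).mean()

    return loss_ws + loss_ss
```
This sketch ignores the paper's stage-wise training and the exact form of its confidence losses; it is only meant to make the weighting idea concrete. In practice it would be summed with the supervised cross-entropy on the labeled data.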
Related papers
- AstMatch: Adversarial Self-training Consistency Framework for Semi-Supervised Medical Image Segmentation [19.80612796391153]
Semi-supervised learning (SSL) has shown considerable potential in medical image segmentation.
In this work, we propose an adversarial self-training consistency framework (AstMatch).
The proposed AstMatch has been extensively evaluated against cutting-edge SSL methods on three publicly available datasets.
arXiv Detail & Related papers (2024-06-28T04:38:12Z)
- Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching [53.05954114863596]
We propose a brand-new Deep Boosting Learning (DBL) algorithm for image-text matching.
An anchor branch is first trained to provide insights into the data properties.
A target branch is concurrently tasked with more adaptive margin constraints to further enlarge the relative distance between matched and unmatched samples.
arXiv Detail & Related papers (2024-04-28T08:44:28Z)
- Synergistic Anchored Contrastive Pre-training for Few-Shot Relation Extraction [4.7220779071424985]
Few-shot Relation Extraction (FSRE) aims to extract facts from a sparse set of labeled corpora.
Recent studies have shown promising results in FSRE by employing Pre-trained Language Models.
We introduce a novel synergistic anchored contrastive pre-training framework.
arXiv Detail & Related papers (2023-12-19T10:16:24Z)
- Towards Distribution-Agnostic Generalized Category Discovery [51.52673017664908]
Data imbalance and open-ended distribution are intrinsic characteristics of the real visual world.
We propose a Self-Balanced Co-Advice contrastive framework (BaCon).
BaCon consists of a contrastive-learning branch and a pseudo-labeling branch, working collaboratively to provide interactive supervision to resolve the DA-GCD task.
arXiv Detail & Related papers (2023-10-02T17:39:58Z)
- Learning Transferable Adversarial Robust Representations via Multi-view Consistency [57.73073964318167]
We propose a novel meta-adversarial multi-view representation learning framework with dual encoders.
We demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains.
arXiv Detail & Related papers (2022-10-19T11:48:01Z)
- Semi-Supervised Learning of Semantic Correspondence with Pseudo-Labels [26.542718087103665]
SemiMatch is a semi-supervised solution for establishing dense correspondences across semantically similar images.
Our framework generates the pseudo-labels using the model's prediction itself between source and weakly-augmented target, and uses pseudo-labels to learn the model again between source and strongly-augmented target.
In experiments, SemiMatch achieves state-of-the-art performance on various benchmarks, especially on PF-Willow by a large margin.
arXiv Detail & Related papers (2022-03-30T03:52:50Z)
- Revisiting Contrastive Learning through the Lens of Neighborhood Component Analysis: an Integrated Framework [70.84906094606072]
We show a new methodology to design integrated contrastive losses that could simultaneously achieve good accuracy and robustness on downstream tasks.
With the integrated framework, we achieve up to 6% improvement on the standard accuracy and 17% improvement on the adversarial accuracy.
arXiv Detail & Related papers (2021-12-08T18:54:11Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
- A Simple Framework for Uncertainty in Contrastive Learning [11.64841553345271]
We introduce a simple approach that learns to assign uncertainty for pretrained contrastive representations.
We train a deep network from a representation to a distribution in representation space, whose variance can be used as a measure of confidence.
In our experiments, we show that this deep uncertainty model can be used (1) to visually interpret model behavior, (2) to detect new noise in the input to deployed models, (3) to detect anomalies, where we outperform 10 baseline methods across 11 tasks with improvements of up to 14% absolute.
arXiv Detail & Related papers (2020-10-05T14:17:42Z)
- Confidence-aware Adversarial Learning for Self-supervised Semantic Matching [29.132600499226406]
We introduce a Confidence-Aware Semantic Matching Network (CAMNet).
First, we estimate a dense confidence map for a matching prediction through self-supervised learning.
Second, based on the estimated confidence, we refine the initial predictions by propagating reliable matches to the remaining locations on the image plane.
We are the first to exploit confidence during refinement to improve semantic matching accuracy, and we develop an end-to-end self-supervised adversarial learning procedure for the entire matching network.
arXiv Detail & Related papers (2020-08-25T09:15:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.