Interpolation-based Contrastive Learning for Few-Label Semi-Supervised
Learning
- URL: http://arxiv.org/abs/2202.11915v1
- Date: Thu, 24 Feb 2022 06:00:05 GMT
- Title: Interpolation-based Contrastive Learning for Few-Label Semi-Supervised
Learning
- Authors: Xihong Yang, Xiaochang Hu, Sihang Zhou, Xinwang Liu, En Zhu
- Abstract summary: Semi-supervised learning (SSL) has long been proven to be an effective technique for constructing powerful models with limited labels.
Consistency regularization-based methods, which force perturbed samples to have predictions similar to the original ones, have attracted much attention.
We propose a novel contrastive loss that guides the embedding of the learned network to change linearly between samples.
- Score: 43.51182049644767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised learning (SSL) has long been proven to be an effective
technique for constructing powerful models with limited labels. In the existing
literature, consistency regularization-based methods, which force perturbed
samples to have predictions similar to the original ones, have attracted much
attention for their promising accuracy. However, we observe that the
performance of such methods decreases drastically when labels become extremely
limited, e.g., 2 or 3 labels per category. Our empirical study finds that
the main problem lies in the drifting of semantic information during
data augmentation. The problem can be alleviated when enough
supervision is provided. However, when little guidance is available, the
incorrect regularization misleads the network and undermines the
performance of the algorithm. To tackle this problem, we (1) propose an
interpolation-based method to construct more reliable positive sample pairs;
and (2) design a novel contrastive loss to guide the embedding of the learned
network to change linearly between samples, improving the discriminative
capability of the network by enlarging the margin of the decision boundaries. Since no
destructive regularization is introduced, the performance of our proposed
algorithm is largely improved. Specifically, the proposed algorithm outperforms
the second-best algorithm (CoMatch) by 5.3%, achieving 88.73%
classification accuracy when only two labels are available per class on
the CIFAR-10 dataset. Moreover, we further demonstrate the generality of the
proposed method by considerably improving the performance of existing
state-of-the-art algorithms with our strategy.
Related papers
- All Points Matter: Entropy-Regularized Distribution Alignment for
Weakly-supervised 3D Segmentation [67.30502812804271]
Pseudo-labels are widely employed in weakly supervised 3D segmentation tasks where only sparse ground-truth labels are available for learning.
We propose a novel learning strategy to regularize the generated pseudo-labels and effectively narrow the gaps between pseudo-labels and model predictions.
arXiv Detail & Related papers (2023-05-25T08:19:31Z) - Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z) - Deep Active Ensemble Sampling For Image Classification [8.31483061185317]
Active learning frameworks aim to reduce the cost of data annotation by actively requesting the labeling for the most informative data points.
Proposed approaches include uncertainty-based techniques, geometric methods, and implicit combinations of the two.
We present an innovative integration of recent progress in both uncertainty-based and geometric frameworks to enable an efficient exploration/exploitation trade-off in sample selection strategy.
Our framework provides two advantages: (1) accurate posterior estimation, and (2) a tunable trade-off between computational overhead and higher accuracy.
arXiv Detail & Related papers (2022-10-11T20:20:20Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised
Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We show that the core reason for its limitations is the lack of a clustering-friendly property in the embedding space.
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - MaxMatch: Semi-Supervised Learning with Worst-Case Consistency [149.03760479533855]
We propose a worst-case consistency regularization technique for semi-supervised learning (SSL).
We present a generalization bound for SSL consisting of the empirical loss terms observed on labeled and unlabeled training data separately.
Motivated by this bound, we derive an SSL objective that minimizes the largest inconsistency between an original unlabeled sample and its multiple augmented variants.
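The worst-case objective described above can be sketched in a few lines: among a sample's augmented variants, find the prediction that diverges most from the original and take that largest divergence as the loss. This is a minimal numpy illustration of the stated idea, not the MaxMatch implementation; the choice of KL divergence as the inconsistency measure is our assumption.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two probability vectors (assumed inconsistency measure)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def worst_case_consistency(p_orig, p_augs):
    """Largest inconsistency between the original prediction and its
    augmented variants -- the quantity the worst-case objective minimizes."""
    return max(kl(p_orig, p_a) for p_a in p_augs)

# Original prediction and two augmented-variant predictions for one sample.
p = np.array([0.7, 0.2, 0.1])
augs = [np.array([0.6, 0.3, 0.1]),   # mild augmentation, close to p
        np.array([0.2, 0.7, 0.1])]   # strong augmentation, far from p
loss = worst_case_consistency(p, augs)  # driven by the most-divergent variant
```

Training on this loss targets the single most inconsistent augmentation per sample, rather than averaging over all variants as plain consistency regularization does.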
arXiv Detail & Related papers (2022-09-26T12:04:49Z) - Unsupervised feature selection via self-paced learning and low-redundant
regularization [6.083524716031565]
An unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning.
The convergence of the method is proved theoretically and experimentally.
The experimental results show that the proposed method can improve the performance of clustering methods and outperform other compared algorithms.
arXiv Detail & Related papers (2021-12-14T08:28:19Z) - A new weakly supervised approach for ALS point cloud semantic
segmentation [1.4620086904601473]
We propose a deep-learning based weakly supervised framework for semantic segmentation of ALS point clouds.
We exploit potential information from unlabeled data subject to incomplete and sparse labels.
Our method achieves an overall accuracy of 83.0% and an average F1 score of 70.0%, improvements of 6.9% and 12.8%, respectively.
arXiv Detail & Related papers (2021-10-04T14:00:23Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.