Revisiting Consistency Regularization for Semi-Supervised Learning
- URL: http://arxiv.org/abs/2112.05825v1
- Date: Fri, 10 Dec 2021 20:46:13 GMT
- Title: Revisiting Consistency Regularization for Semi-Supervised Learning
- Authors: Yue Fan and Anna Kukleva and Bernt Schiele
- Abstract summary: We propose an improved consistency regularization framework based on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
- Score: 80.28461584135967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency regularization is one of the most widely-used techniques for
semi-supervised learning (SSL). Generally, the aim is to train a model that is
invariant to various data augmentations. In this paper, we revisit this idea
and find that enforcing invariance by decreasing distances between features
from differently augmented images leads to improved performance. However,
encouraging equivariance instead, by increasing the feature distance, further
improves performance. To this end, we propose an improved consistency
regularization framework based on a simple yet effective technique,
FeatDistLoss, which imposes consistency at the classifier level and
equivariance at the feature level. Experimental results show that our
model defines a new
state of the art for various datasets and settings and outperforms previous
work by a significant margin, particularly in low data regimes. Extensive
experiments are conducted to analyze the method, and the code will be
published.
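As a rough illustration of the idea, the sketch below combines a classifier-level consistency loss with a feature-level distance term whose direction selects invariance or equivariance. This is a minimal sketch under assumptions, not the authors' released implementation: the FixMatch-style pseudo-labeling, the margin form of the push-apart term, and all names are illustrative.

import torch
import torch.nn.functional as F

def featdist_style_loss(feat_weak, feat_strong, logits_weak, logits_strong,
                        threshold=0.95, margin=1.0, lambda_feat=1.0,
                        equivariant=True):
    """Classifier-level consistency plus a feature-level distance term.

    equivariant=False pulls features of the two augmented views together
    (invariance); equivariant=True pushes them apart (equivariance).
    Pseudo-labeling setup and margin are assumptions of this sketch.
    """
    # Classifier-level consistency: confident pseudo-labels from the weak
    # view supervise predictions on the strong view.
    with torch.no_grad():
        probs = torch.softmax(logits_weak, dim=-1)
        conf, pseudo = probs.max(dim=-1)
        mask = (conf >= threshold).float()
    loss_cons = (F.cross_entropy(logits_strong, pseudo, reduction="none") * mask).mean()

    # Feature-level distance between the two augmented views.
    dist = F.pairwise_distance(feat_weak, feat_strong).mean()
    if equivariant:
        loss_feat = F.relu(margin - dist)   # encourage dist >= margin
    else:
        loss_feat = dist                    # encourage dist -> 0

    return loss_cons + lambda_feat * loss_feat

In a full SSL pipeline, a term of this kind would be added to the standard supervised cross-entropy on the labeled batch.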
Related papers
- FeTrIL++: Feature Translation for Exemplar-Free Class-Incremental Learning with Hill-Climbing [3.533544633664583]
Exemplar-free class-incremental learning (EFCIL) poses significant challenges, primarily due to catastrophic forgetting.
Traditional EFCIL approaches typically favour either model plasticity, through successive fine-tuning, or stability.
This paper builds upon the foundational FeTrIL framework to examine the efficacy of various oversampling techniques and dynamic optimization strategies.
arXiv Detail & Related papers (2024-03-12T08:34:05Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Regularising for invariance to data augmentation improves supervised learning [82.85692486314949]
We show that using multiple augmentations per input can improve generalisation.
We propose an explicit regulariser that encourages this invariance at the level of individual model predictions; a possible instantiation is sketched below.
arXiv Detail & Related papers (2022-03-07T11:25:45Z)
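One way such a prediction-level invariance regulariser can be instantiated (an assumption for illustration, not necessarily the exact form used in that paper) is to penalise the divergence of each augmented view's predictive distribution from the average over views:

import torch.nn.functional as F

def prediction_invariance_reg(logits_per_aug):
    # logits_per_aug: (num_augs, batch, num_classes) logits for several
    # augmentations of the same inputs.
    log_probs = F.log_softmax(logits_per_aug, dim=-1)
    avg_probs = log_probs.exp().mean(dim=0, keepdim=True)
    # KL(average prediction || per-augmentation prediction), averaged
    # over augmentations and the batch.
    kl = (avg_probs * (avg_probs.clamp_min(1e-8).log() - log_probs)).sum(dim=-1)
    return kl.mean()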
- Contrastively Disentangled Sequential Variational Autoencoder [20.75922928324671]
We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE).
We use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizing the mutual information between the static and dynamic factors (sketched below).
Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics.
arXiv Detail & Related papers (2021-10-22T23:00:32Z)
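In the spirit of that summary, the objective can be sketched (weights and estimators here are assumptions, not the paper's exact formulation) as the sequential VAE evidence lower bound augmented with mutual-information terms:

$$\mathcal{L} \;=\; \mathrm{ELBO}(x) \;+\; \alpha\,\big(I(x; z_s) + I(x; z_d)\big) \;-\; \beta\, I(z_s; z_d),$$

where $z_s$ and $z_d$ are the static and dynamic latent factors, $I(\cdot;\cdot)$ denotes mutual information (estimated contrastively in practice), and $\alpha, \beta > 0$ are weighting coefficients.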
- Exploring Complementary Strengths of Invariant and Equivariant Representations for Few-Shot Learning [96.75889543560497]
In many real-world problems, collecting a large number of labeled samples is infeasible.
Few-shot learning is the dominant approach to address this issue, where the objective is to quickly adapt to novel categories in the presence of a limited number of samples.
We propose a novel training mechanism that simultaneously enforces equivariance and invariance to a general set of geometric transformations.
arXiv Detail & Related papers (2021-03-01T21:14:33Z)
- Squared $\ell_2$ Norm as Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations [76.85274970052762]
Regularizing the distance between embeddings/representations of original samples and their augmented counterparts is a popular technique for improving the robustness of neural networks.
In this paper, we explore these various regularization choices, seeking to provide a general understanding of how we should regularize the embeddings.
We show that the generic approach we identified (squared $\ell_2$ regularized augmentation) outperforms several recent methods, which are each specially designed for one task; a minimal form of this regularizer is sketched below.
arXiv Detail & Related papers (2020-11-25T22:40:09Z)
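A minimal sketch of that generic regularizer, the squared $\ell_2$ distance between embeddings of an original sample and its augmented counterpart (function and variable names are illustrative, not from that paper's code; PyTorch tensors assumed):

def squared_l2_consistency(emb_orig, emb_aug):
    # Mean squared L2 distance between embeddings of original samples
    # and their augmented counterparts, both of shape (batch, dim).
    return ((emb_orig - emb_aug) ** 2).sum(dim=-1).mean()

In practice this term is added to the task loss with a weighting coefficient, e.g. total = task_loss + lam * squared_l2_consistency(f(x), f(aug(x))).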
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.