Contrastive Learning with Consistent Representations
- URL: http://arxiv.org/abs/2302.01541v1
- Date: Fri, 3 Feb 2023 04:34:00 GMT
- Title: Contrastive Learning with Consistent Representations
- Authors: Zihu Wang, Yu Wang, Hanbin Hu, Peng Li
- Abstract summary: This paper proposes Contrastive Learning with Consistent Representations (CoCor)
At the core of CoCor is a new consistency measure, DA consistency, which governs how augmented input data are mapped to the representation space.
The proposed techniques give rise to a semi-supervised learning framework based on bi-level optimization, achieving new state-of-the-art results for image recognition.
- Score: 8.274769259790926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning demonstrates great promise for representation learning.
Data augmentations play a critical role in contrastive learning by providing
informative views of the data without needing the labels. However, the
performance of the existing works heavily relies on the quality of the employed
data augmentation (DA) functions, which are typically hand-picked from a
restricted set of choices. While exploiting a diverse set of data augmentations
is appealing, the intricacies of DAs and representation learning may lead to
performance degradation. To address this challenge and allow for the systematic use
of large numbers of data augmentations, this paper proposes Contrastive
Learning with Consistent Representations (CoCor). At the core of CoCor is a new
consistency measure, DA consistency, which dictates the mapping of augmented
input data to the representation space such that these instances are mapped to
optimal locations in a way consistent with the intensity of the DA applied.
Furthermore, a data-driven approach is proposed to learn the optimal mapping
locations as a function of DA while maintaining a desired monotonic property
with respect to DA intensity. The proposed techniques give rise to a
semi-supervised learning framework based on bi-level optimization, achieving
new state-of-the-art results for image recognition.
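The DA-consistency idea in the abstract can be illustrated with a toy penalty. This is a minimal sketch, not the authors' implementation: `target_fn` is a hypothetical, monotonically decreasing map from DA intensity to a target similarity, and the penalty pushes each augmented view's similarity to the anchor toward that intensity-dependent target.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def da_consistency_penalty(z_anchor, z_augs, intensities, target_fn):
    """Toy DA-consistency penalty (illustrative, not the paper's loss).

    z_anchor    : representation of the anchor view
    z_augs      : representations of augmented views
    intensities : DA intensity of each view (higher = stronger augmentation)
    target_fn   : monotonically decreasing map from intensity to target similarity
    """
    penalty = 0.0
    for z, s in zip(z_augs, intensities):
        # Deviation from the intensity-dependent optimal similarity.
        penalty += (cosine_sim(z_anchor, z) - target_fn(s)) ** 2
    return penalty / len(z_augs)

# A simple monotone target: stronger augmentation -> lower target similarity.
target = lambda s: 1.0 - 0.5 * s
```

The monotonicity of `target` stands in for the paper's learned, monotone mapping from DA intensity to optimal representation-space locations.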
Related papers
- CAD-VAE: Leveraging Correlation-Aware Latents for Comprehensive Fair Disentanglement [24.818829983471765]
Deep generative models may inherit or amplify biases and fairness issues by encoding sensitive attributes alongside predictive features.
We propose CAD-VAE (Correlation-Aware Disentangled VAE), which introduces a correlated latent code to capture the shared information between target and sensitive attributes.
Experiments on benchmark datasets demonstrate that CAD-VAE produces fairer representations, realistic counterfactuals, and improved fairness-aware image editing.
arXiv Detail & Related papers (2025-03-11T00:32:56Z)
- LAC: Graph Contrastive Learning with Learnable Augmentation in Continuous Space [16.26882307454389]
We introduce LAC, a graph contrastive learning framework with learnable data augmentation in an orthogonal continuous space.
To capture the representative information in the graph data during augmentation, we introduce a continuous view augmenter.
We propose an information-theoretic principle named InfoBal and introduce corresponding pretext tasks.
Our experimental results show that LAC significantly outperforms the state-of-the-art frameworks.
arXiv Detail & Related papers (2024-10-20T10:47:15Z)
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
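The idea of inducing symmetries by applying multiple transformations to the input can be sketched as follows; the helper names and the simple flip/rotation set are illustrative assumptions, not the paper's method.

```python
import numpy as np

def augment_views(image):
    """Generate symmetry-inducing views of a 2-D image array:
    the identity, a horizontal flip, and the 90-degree rotations."""
    views = [image, np.fliplr(image)]
    for k in (1, 2, 3):
        views.append(np.rot90(image, k))
    return views

def invariance_gap(model, image):
    """Spread of model outputs across the transformed views; an
    invariance-constrained learner drives this gap toward zero."""
    outs = [model(v) for v in augment_views(image)]
    return max(outs) - min(outs)
```

A model that is already invariant to these transformations (e.g. one that reduces over all pixels symmetrically) has a zero gap, while a non-invariant model is penalized.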
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- Improving GANs with A Dynamic Discriminator [106.54552336711997]
We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task.
A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves the synthesis performance without incurring any additional cost or training objectives.
arXiv Detail & Related papers (2022-09-20T17:57:33Z)
- Robust Representation Learning via Perceptual Similarity Metrics [18.842322467828502]
Contrastive Input Morphing (CIM) is a representation learning framework that learns input-space transformations of the data.
We show that CIM is complementary to other mutual information-based representation learning techniques.
arXiv Detail & Related papers (2021-06-11T21:45:44Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire the future research of self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- CoDA: Contrast-enhanced and Diversity-promoting Data Augmentation for Natural Language Understanding [67.61357003974153]
We propose a novel data augmentation framework dubbed CoDA.
CoDA synthesizes diverse and informative augmented examples by integrating multiple transformations organically.
A contrastive regularization objective is introduced to capture the global relationship among all the data samples.
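One way to picture CoDA-style stacking of transformations is a small compositional pipeline. The token-level transforms below are illustrative stand-ins, not the operators used in the paper.

```python
import random

def token_dropout(tokens, p, rng):
    """Randomly drop tokens with probability p (always keep at least one)."""
    kept = [t for t in tokens if rng.random() >= p]
    return kept or tokens[:1]

def local_swap(tokens, rng):
    """Swap one adjacent token pair, a mild word-order perturbation."""
    if len(tokens) < 2:
        return list(tokens)
    i = rng.randrange(len(tokens) - 1)
    out = list(tokens)
    out[i], out[i + 1] = out[i + 1], out[i]
    return out

def compose(transforms, tokens, rng):
    """Apply a sequence of label-preserving transformations back-to-back,
    in the spirit of integrating multiple transformations organically."""
    for t in transforms:
        tokens = t(tokens, rng)
    return tokens
```

Usage: `compose([lambda t, r: token_dropout(t, 0.2, r), local_swap], tokens, rng)` yields a view that differs from the original in both content and order, which is the kind of diverse augmented example the contrastive regularizer then acts on.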
arXiv Detail & Related papers (2020-10-16T23:57:03Z)
- Learning Unbiased Representations via Rényi Minimization [13.61565693336172]
We propose an adversarial algorithm to learn unbiased representations via the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient.
We empirically evaluate and compare our approach and demonstrate significant improvements over existing works in the field.
arXiv Detail & Related papers (2020-09-07T15:48:24Z)
- Spectrum-Guided Adversarial Disparity Learning [52.293230153385124]
We propose a novel end-to-end knowledge directed adversarial learning framework.
It portrays the class-conditioned intraclass disparity using two competitive encoding distributions and learns the purified latent codes by denoising learned disparity.
The experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods over a set of state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-14T05:46:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.