Concurrent Discrimination and Alignment for Self-Supervised Feature
Learning
- URL: http://arxiv.org/abs/2108.08562v1
- Date: Thu, 19 Aug 2021 09:07:41 GMT
- Title: Concurrent Discrimination and Alignment for Self-Supervised Feature
Learning
- Authors: Anjan Dutta, Massimiliano Mancini, Zeynep Akata
- Abstract summary: Existing self-supervised learning methods learn by means of pretext tasks that are either (1) discriminative, explicitly specifying which features should be separated, or (2) aligning, precisely indicating which features should be pulled close together.
In this work, we combine the positive aspects of discriminative and aligning methods and design a hybrid method that addresses this issue.
Our method specifies the repulsion mechanism through a discriminative predictive task and the attraction mechanism by concurrently maximizing mutual information between paired views.
Our experiments on nine established benchmarks show that the proposed model consistently outperforms existing state-of-the-art results under self-supervised and transfer learning protocols.
- Score: 52.213140525321165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing self-supervised learning methods learn representations by means of pretext tasks that are either (1) discriminative, explicitly specifying which features should be separated, or (2) aligning, precisely indicating which features should be pulled close together, but they ignore the question of how to jointly and in a principled manner define which features should be repelled and which attracted. In this work, we combine the positive aspects of discriminative and aligning methods and design a hybrid method that addresses this issue. Our method specifies the repulsion mechanism through a discriminative predictive task and the attraction mechanism by concurrently maximizing mutual information between paired views that share redundant information. We show, qualitatively and quantitatively, that the proposed model learns features that are more effective across diverse downstream tasks ranging from classification to semantic segmentation. Our experiments on nine established benchmarks show that the proposed model consistently outperforms existing state-of-the-art results under self-supervised and transfer learning protocols.
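To make the two terms concrete, the following is a minimal PyTorch sketch of such a hybrid objective, assuming a rotation-prediction head as the discriminative task and an InfoNCE term as the mutual-information lower bound between paired views; the `encoder`, `clf_head`, `proj_head`, and weight `lam` are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch (not the paper's code): a discriminative predictive
# task plus an InfoNCE-style mutual-information term between views.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE between paired views; minimizing it maximizes a lower
    bound on the mutual information I(z1; z2)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)          # positives on diagonal

def hybrid_loss(encoder, clf_head, proj_head,
                view1, view2, rot_imgs, rot_labels, lam=1.0):
    # Repulsion via a discriminative task (here: 4-way rotation
    # prediction, an assumed stand-in for the paper's predictive task).
    disc = F.cross_entropy(clf_head(encoder(rot_imgs)), rot_labels)
    # Attraction via mutual-information maximization between paired views.
    align = info_nce(proj_head(encoder(view1)), proj_head(encoder(view2)))
    return disc + lam * align
```

The two terms act on the same encoder, so the discriminative head decides what to separate while the InfoNCE term decides what to attract, matching the abstract's division of labor.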
Related papers
- An Attention-based Framework for Fair Contrastive Learning [2.1605931466490795]
We propose a new method for fair contrastive learning that employs an attention mechanism to model bias-causing interactions.
Our attention mechanism avoids bias-causing samples that confound the model and focuses on bias-reducing samples that help learn semantically meaningful representations.
arXiv Detail & Related papers (2024-11-22T07:11:35Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Using Positive Matching Contrastive Loss with Facial Action Units to mitigate bias in Facial Expression Recognition [6.015556590955814]
We propose to mitigate bias by guiding the model's focus towards task-relevant features using domain knowledge.
We show that incorporating task-relevant features via our method can improve model fairness at minimal cost to classification performance.
arXiv Detail & Related papers (2023-03-08T21:28:02Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
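As a rough illustration of that same-class attraction idea, the sketch below treats every same-label instance in a batch as a positive, following the general supervised-contrastive recipe; the temperature and any projection head are assumptions, and this is not necessarily that paper's exact loss.

```python
# Supervised-contrastive sketch: same-label instances are pulled
# together, everything else is pushed apart. Illustrative only.
import torch
import torch.nn.functional as F

def sup_con_loss(z, labels, temperature=0.1):
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                    # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf')) # drop self-pairs
    # Positives: same label, excluding self.
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # Mean log-probability over each anchor's positives; anchors
    # without a same-label partner contribute zero.
    pos_log_prob = log_prob.masked_fill(~pos, 0.0)
    return -(pos_log_prob.sum(1) / pos.sum(1).clamp(min=1)).mean()
```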
- A Joint Representation Learning and Feature Modeling Approach for One-class Recognition [15.606362608483316]
We argue that both of these approaches have their own limitations, and that a more effective solution can be obtained by combining the two.
The proposed approach is based on the combination of a generative framework and a one-class classification method.
We test the effectiveness of the proposed method on three one-class classification tasks and obtain state-of-the-art results.
arXiv Detail & Related papers (2021-01-24T19:51:46Z)
- Self-Supervised Relational Reasoning for Representation Learning [5.076419064097733]
In self-supervised learning, a system is tasked with achieving a surrogate objective by defining alternative targets on unlabeled data.
We propose a novel self-supervised formulation of relational reasoning that allows a learner to bootstrap a signal from information implicit in unlabeled data (see the sketch after this entry).
We evaluate the proposed method following a rigorous experimental procedure, using standard datasets, protocols, and backbones.
arXiv Detail & Related papers (2020-06-10T14:24:25Z)
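One common reading of such a relational-reasoning objective is to train a relation head to decide whether two augmented views come from the same unlabeled image. The sketch below assumes concatenation as the pair aggregator and a generic `relation_head`, so it is an approximation rather than that paper's exact formulation.

```python
# Sketch of a relational-reasoning surrogate task: classify whether
# two augmented views originate from the same image. Illustrative only.
import torch
import torch.nn.functional as F

def relational_loss(relation_head, z_a, z_b):
    """z_a, z_b: (N, D) embeddings of two augmentations of N images."""
    n = z_a.size(0)
    # Build all N*N cross pairs by concatenation; pairs on the
    # diagonal share a source image and get the positive label 1.
    pairs = torch.cat([z_a.unsqueeze(1).expand(n, n, -1),
                       z_b.unsqueeze(0).expand(n, n, -1)],
                      dim=-1).reshape(n * n, -1)
    targets = torch.eye(n, device=z_a.device).reshape(n * n)
    scores = relation_head(pairs).squeeze(-1)    # (N*N,) relation logits
    return F.binary_cross_entropy_with_logits(scores, targets)
```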
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)