Fair Contrastive Learning for Facial Attribute Classification
- URL: http://arxiv.org/abs/2203.16209v1
- Date: Wed, 30 Mar 2022 11:16:18 GMT
- Title: Fair Contrastive Learning for Facial Attribute Classification
- Authors: Sungho Park, Jewook Lee, Pilhyeon Lee, Sunhee Hwang, Dohyung Kim,
Hyeran Byun
- Abstract summary: We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we analyze, for the first time, the unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and works effectively in incompletely supervised settings.
- Score: 25.436462696033846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning high-quality visual representations is essential for image
classification. Recently, a series of contrastive representation learning
methods have achieved preeminent success. Particularly, SupCon outperformed the
dominant methods based on cross-entropy loss in representation learning.
However, we notice that there could be potential ethical risks in supervised
contrastive learning. In this paper, we analyze, for the first time, the unfairness
caused by supervised contrastive learning and propose a new Fair Supervised
Contrastive Loss (FSCL) for fair visual representation learning. Inheriting the
philosophy of supervised contrastive learning, it encourages representations of
the same class to be closer to each other than those of different classes, while
ensuring fairness by penalizing the inclusion of sensitive attribute
information in representation. In addition, we introduce a group-wise
normalization to diminish the disparities of intra-group compactness and
inter-class separability between demographic groups, which lead to unfair
classification. Through extensive experiments on CelebA and UTK Face, we
validate that the proposed method significantly outperforms SupCon and existing
state-of-the-art methods in terms of the trade-off between top-1 accuracy and
fairness. Moreover, our method is robust to the intensity of data bias and
works effectively in incompletely supervised settings. Our code is available at
https://github.com/sungho-CoolG/FSCL.
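The core mechanism described in the abstract, a SupCon-style objective whose positive set is restricted so that pulling same-class samples together does not also encode the sensitive attribute, can be illustrated with a minimal sketch. Assuming PyTorch and hypothetical tensors `z` (embeddings), `y` (target-attribute labels), and `s` (sensitive-attribute labels), one plausible form is shown below; this is an illustration of the idea, not the authors' released FSCL code.

```python
import torch
import torch.nn.functional as F


def fair_supcon_loss(z, y, s, temperature=0.1):
    """SupCon-style loss whose positives are same-class samples from a
    different sensitive group, limiting sensitive-attribute information
    in the representation. Illustrative sketch only, not the official
    FSCL implementation.

    z: (N, D) embeddings, y: (N,) class labels, s: (N,) sensitive labels.
    """
    z = F.normalize(z, dim=1)                      # unit-norm features
    sim = z @ z.t() / temperature                  # (N, N) pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)

    same_class = y.unsqueeze(0) == y.unsqueeze(1)  # same target attribute
    same_group = s.unsqueeze(0) == s.unsqueeze(1)  # same sensitive attribute
    # Positives: same class but a different demographic group, excluding self,
    # so attracting positives does not also attract the sensitive attribute.
    pos_mask = (same_class & ~same_group & ~eye).float()

    # Standard SupCon denominator: log-softmax over all other samples.
    logits = sim.masked_fill(eye, -1e9)            # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                         # anchors with >= 1 positive
    loss = -(log_prob * pos_mask).sum(dim=1)[valid] / pos_counts[valid]
    return loss.mean()
```

The group-wise normalization mentioned in the abstract would further balance these per-anchor terms across demographic groups so that intra-group compactness and inter-class separability become comparable between groups; the exact formulation is given in the paper and the linked repository.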
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification [31.647639786095993]
We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers; a minimal sketch of this loss combination appears after this list.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
arXiv Detail & Related papers (2021-03-26T05:22:36Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
- Spatial Contrastive Learning for Few-Shot Classification [9.66840768820136]
We propose a novel attention-based spatial contrastive objective to learn locally discriminative and class-agnostic features.
With extensive experiments, we show that the proposed method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-26T23:39:41Z)
- Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire future research on self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z)
- FairNN- Conjoint Learning of Fair Representations for Fair Decisions [40.05268461544044]
We propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning.
Our experiments on a variety of datasets demonstrate that such a joint approach is superior to separate treatment of unfairness in representation learning or supervised learning.
arXiv Detail & Related papers (2020-04-05T12:08:30Z)
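Regarding the hybrid design summarized above (a supervised contrastive loss for representations plus a cross-entropy loss for the classifier), the following is a minimal sketch of how such a weighted combination is commonly wired up, with hypothetical module and argument names; it is not that paper's exact code.

```python
import torch.nn as nn
import torch.nn.functional as F


class HybridLoss(nn.Module):
    """Weighted sum of a supervised contrastive term (on the projection head)
    and a cross-entropy term (on the classifier head). Illustrative sketch."""

    def __init__(self, supcon_criterion, alpha=0.5):
        super().__init__()
        self.supcon_criterion = supcon_criterion  # any SupCon-style loss
        self.alpha = alpha                        # trade-off between the two terms

    def forward(self, projections, logits, labels):
        con = self.supcon_criterion(projections, labels)  # representation learning
        ce = F.cross_entropy(logits, labels)              # classifier learning
        return self.alpha * con + (1.0 - self.alpha) * ce
```

The weight `alpha` controls how much the representation-learning term dominates; in practice it is treated as a tunable hyperparameter.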