Conditional Supervised Contrastive Learning for Fair Text Classification
- URL: http://arxiv.org/abs/2205.11485v2
- Date: Mon, 31 Oct 2022 04:21:33 GMT
- Title: Conditional Supervised Contrastive Learning for Fair Text Classification
- Authors: Jianfeng Chi, William Shand, Yaodong Yu, Kai-Wei Chang, Han Zhao, Yuan Tian
- Abstract summary: We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
- Score: 59.813422435604025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive representation learning has gained much attention due to its
superior performance in learning representations from both image and sequential
data. However, the learned representations could potentially lead to
performance disparities in downstream tasks, such as increased silencing of
underrepresented groups in toxicity comment classification. In light of this
challenge, in this work, we study learning fair representations that satisfy a
notion of fairness known as equalized odds for text classification via
contrastive learning. Specifically, we first theoretically analyze the
connections between learning representations with a fairness constraint and
conditional supervised contrastive objectives, and then propose to use
conditional supervised contrastive objectives to learn fair representations for
text classification. We conduct experiments on two text datasets to demonstrate
the effectiveness of our approaches in balancing the trade-offs between task
performance and bias mitigation among existing baselines for text
classification. Furthermore, we also show that the proposed methods are stable
in different hyperparameter settings.
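The core idea above is that equalized odds asks the prediction to be independent of the sensitive attribute given the true label, which motivates conditioning the contrastive objective on the class label. The sketch below is one plausible, simplified instantiation of such a label-conditioned supervised contrastive loss, not the paper's exact objective: within each class, examples from different sensitive groups are treated as positives so the representation carries less group information given the label. The function name and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def conditional_supcon_loss(z, y, a, temp=0.1):
    """Hypothetical sketch of a label-conditioned supervised contrastive loss.

    z: (n, d) array of representations
    y: (n,) class labels; a: (n,) sensitive-group labels
    Conditioning on y, each anchor contrasts only against same-label
    examples, with same-label/different-group examples as positives.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalise rows
    sim = z @ z.T / temp                              # scaled cosine similarities
    n = len(y)
    losses = []
    for i in range(n):
        # condition on the label: candidates share the anchor's class
        cand = [j for j in range(n) if j != i and y[j] == y[i]]
        # positives: same label but a different sensitive-group value
        pos = [j for j in cand if a[j] != a[i]]
        if not pos:
            continue
        logits = sim[i, cand] - sim[i, cand].max()    # stabilise the exponentials
        log_denom = np.log(np.exp(logits).sum())
        idx = {j: k for k, j in enumerate(cand)}
        # mean negative log-probability of the cross-group positives
        losses.append(-np.mean([logits[idx[j]] - log_denom for j in pos]))
    return float(np.mean(losses)) if losses else 0.0
```

Minimising this term pulls same-label, cross-group pairs together relative to other same-label pairs, which is one way to encourage the representation to be uninformative about the group given the label; in practice it would be combined with a task loss, as the trade-off results in the abstract suggest.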
Related papers
- Rethinking Fair Representation Learning for Performance-Sensitive Tasks [19.40265690963578]
We use causal reasoning to define and formalise different sources of dataset bias.
We run experiments across a range of medical modalities to examine the performance of fair representation learning under distribution shifts.
arXiv Detail & Related papers (2024-10-05T11:01:16Z)
- RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank [54.854714257687334]
We propose a novel approach, RankCSE, for unsupervised sentence representation learning.
It incorporates ranking consistency and ranking distillation with contrastive learning into a unified framework.
An extensive set of experiments is conducted on both semantic textual similarity (STS) and transfer (TR) tasks.
arXiv Detail & Related papers (2023-05-26T08:27:07Z)
- Disentangled Representation with Causal Constraints for Counterfactual Fairness [25.114619307838602]
This work theoretically demonstrates that using the structured representations enables downstream predictive models to achieve counterfactual fairness.
We propose the Counterfactual Fairness Variational AutoEncoder (CF-VAE) to obtain structured representations with respect to domain knowledge.
The experimental results show that the proposed method achieves better fairness and accuracy performance than the benchmark fairness methods.
arXiv Detail & Related papers (2022-08-19T04:47:58Z)
- Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning [86.01683892956144]
We propose a novel generative self-supervised learning objective based on phrase reconstruction.
Our generative learning objective can yield sentence representations powerful enough to achieve performance on Semantic Textual Similarity (STS) tasks on par with contrastive learning.
arXiv Detail & Related papers (2022-04-20T10:00:46Z)
- Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
In this paper, we analyze, for the first time, the unfairness caused by supervised contrastive learning.
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
Our method is robust to the intensity of data bias and effectively works in incomplete supervised settings.
arXiv Detail & Related papers (2022-03-30T11:16:18Z)
- Understanding Contrastive Learning Requires Incorporating Inductive Biases [64.56006519908213]
Recent attempts to theoretically explain the success of contrastive learning on downstream tasks prove guarantees that depend on properties of the augmentations and the value of the contrastive loss of the representations.
We demonstrate that such analyses ignore the inductive biases of the function class and training algorithm, and can provably lead to vacuous guarantees in some settings.
arXiv Detail & Related papers (2022-02-28T18:59:20Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.