Learning Fair Classifiers with Partially Annotated Group Labels
- URL: http://arxiv.org/abs/2111.14581v1
- Date: Mon, 29 Nov 2021 15:11:18 GMT
- Title: Learning Fair Classifiers with Partially Annotated Group Labels
- Authors: Sangwon Jung, Sanghyuk Chun, Taesup Moon
- Abstract summary: We consider a more practical scenario, dubbed Algorithmic Fairness with Partially annotated Group labels (Fair-PG).
We propose a simple Confidence-based Group Label assignment (CGL) strategy that is readily applicable to any fairness-aware learning method.
We show that our method design is better than the vanilla pseudo-labeling strategy in terms of fairness criteria.
- Score: 22.838927494573436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, fairness-aware learning has become increasingly crucial, but we
note that most of these methods operate by assuming the availability of fully
annotated group labels. We emphasize that such an assumption is unrealistic for
real-world applications, since group-label annotation is expensive and can
conflict with privacy concerns. In this paper, we consider a more practical
scenario, dubbed Algorithmic Fairness with Partially annotated Group
labels (Fair-PG). We observe that existing fairness methods, which only use
the data with group labels, perform even worse under Fair-PG than vanilla
training, which simply uses the full data with target labels only. To address this
problem, we propose a simple Confidence-based Group Label assignment (CGL)
strategy that is readily applicable to any fairness-aware learning method. Our
CGL utilizes an auxiliary group classifier to assign pseudo group labels, where
random labels are assigned to low-confidence samples. We first theoretically
show that our method design is better than the vanilla pseudo-labeling strategy
in terms of fairness criteria. Then, we empirically show for UTKFace, CelebA
and COMPAS datasets that by combining CGL and the state-of-the-art
fairness-aware in-processing methods, the target accuracies and the fairness
metrics are jointly improved compared to the baseline methods. Furthermore, we
show that CGL naturally enables augmenting the given group-labeled dataset
with external datasets that have only target labels, so that both accuracy and
fairness metrics can be improved. We will release our implementation publicly
so that future research can reproduce our results.
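The assignment rule described in the abstract (keep the auxiliary classifier's prediction for confident samples, fall back to a random group label otherwise) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, threshold value, and uniform-random fallback are assumptions for exposition.

```python
import numpy as np

def assign_group_labels(group_probs, threshold, num_groups, rng=None):
    """Confidence-based group label assignment (CGL), sketched.

    group_probs: (N, num_groups) softmax outputs of an auxiliary group
    classifier on samples lacking group labels. Samples whose top
    confidence reaches `threshold` receive the argmax group as a pseudo
    label; the rest receive a uniformly random group label.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    confidences = group_probs.max(axis=1)          # top softmax score per sample
    pseudo = group_probs.argmax(axis=1)            # most likely group
    random_labels = rng.integers(0, num_groups, size=len(group_probs))
    # Confident samples keep the classifier's prediction; the rest are randomized.
    return np.where(confidences >= threshold, pseudo, random_labels)
```

The resulting labels can then be fed to any fairness-aware in-processing method as if they were true group annotations, which is what makes the strategy method-agnostic.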
Related papers
- Learn to be Fair without Labels: a Distribution-based Learning Framework for Fair Ranking [1.8577028544235155]
We propose a distribution-based fair learning framework (DLF) that does not require labels by replacing the unavailable fairness labels with target fairness exposure distributions.
Our proposed framework achieves better fairness performance while maintaining better control over the fairness-relevance trade-off.
arXiv Detail & Related papers (2024-05-28T03:49:04Z) - Falcon: Fair Active Learning using Multi-armed Bandits [9.895979687746376]
We propose a data-centric approach that improves machine learning model fairness via strategic sample selection.
Experiments show that Falcon significantly outperforms existing fair active learning approaches in terms of fairness and accuracy.
In particular, only Falcon supports a proper trade-off between accuracy and fairness where its maximum fairness score is 1.8-4.5x higher than the second-best results.
arXiv Detail & Related papers (2024-01-23T12:48:27Z) - Channel-Wise Contrastive Learning for Learning with Noisy Labels [60.46434734808148]
We introduce channel-wise contrastive learning (CWCL) to distinguish authentic label information from noise.
Unlike conventional instance-wise contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient features aligned with the authentic labels.
Our strategy is twofold: firstly, using CWCL to extract pertinent features to identify cleanly labeled samples, and secondly, progressively fine-tuning using these samples.
arXiv Detail & Related papers (2023-08-14T06:04:50Z) - Enhancing Label Sharing Efficiency in Complementary-Label Learning with
Label Augmentation [92.4959898591397]
We analyze the implicit sharing of complementary labels on nearby instances during training.
We propose a novel technique that enhances the sharing efficiency via complementary-label augmentation.
Our results confirm that complementary-label augmentation can systematically improve empirical performance over state-of-the-art CLL models.
arXiv Detail & Related papers (2023-05-15T04:43:14Z) - GaussianMLR: Learning Implicit Class Significance via Calibrated
Multi-Label Ranking [0.0]
We propose a novel multi-label ranking method: GaussianMLR.
It aims to learn implicit class significance values that determine the positive label ranks.
We show that our method is able to accurately learn a representation of the incorporated positive rank order.
arXiv Detail & Related papers (2023-03-07T14:09:08Z) - Dist-PU: Positive-Unlabeled Learning from a Label Distribution
Perspective [89.5370481649529]
We propose a label distribution perspective for PU learning in this paper.
Motivated by this, we propose to pursue the label distribution consistency between predicted and ground-truth label distributions.
Experiments on three benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-12-06T07:38:29Z) - BARACK: Partially Supervised Group Robustness With Guarantees [29.427365308680717]
We propose BARACK, a framework to improve worst-group performance on neural networks.
We train a model to predict the missing group labels for the training data, and then use these predicted group labels in a robust optimization objective.
Empirically, our method outperforms the baselines that do not use group information, even when only 1-33% of points have group labels.
arXiv Detail & Related papers (2021-12-31T23:05:21Z) - Weakly Supervised Classification Using Group-Level Labels [12.285265254225166]
We propose methods to use group-level binary labels as weak supervision to train instance-level binary classification models.
We model group-level labels as Class Conditional Noisy (CCN) labels for individual instances and use the noisy labels to regularize predictions of the model trained on the strongly-labeled instances.
arXiv Detail & Related papers (2021-08-16T20:01:45Z) - Boosting Semi-Supervised Face Recognition with Noise Robustness [54.342992887966616]
This paper presents an effective solution to semi-supervised face recognition that is robust to label noise arising from auto-labelling.
We develop a semi-supervised face recognition solution, named Noise Robust Learning-Labelling (NRoLL), which is based on the robust training ability empowered by GN.
arXiv Detail & Related papers (2021-05-10T14:43:11Z) - An Empirical Study on Large-Scale Multi-Label Text Classification
Including Few and Zero-Shot Labels [49.036212158261215]
Large-scale Multi-label Text Classification (LMTC) has a wide range of Natural Language Processing (NLP) applications.
Current state-of-the-art LMTC models employ Label-Wise Attention Networks (LWANs).
We show that hierarchical methods based on Probabilistic Label Trees (PLTs) outperform LWANs.
We propose a new state-of-the-art method which combines BERT with LWANs.
arXiv Detail & Related papers (2020-10-04T18:55:47Z) - Social Adaptive Module for Weakly-supervised Group Activity Recognition [143.68241396839062]
This paper presents a new task named weakly-supervised group activity recognition (GAR).
It differs from conventional GAR tasks in that only video-level labels are available, yet the important persons within each frame are not provided even in the training data.
This allows us to collect and annotate a large-scale NBA dataset, and thus raises new challenges for GAR.
arXiv Detail & Related papers (2020-07-18T16:40:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.