Fairness via Adversarial Attribute Neighbourhood Robust Learning
- URL: http://arxiv.org/abs/2210.06630v1
- Date: Wed, 12 Oct 2022 23:39:28 GMT
- Title: Fairness via Adversarial Attribute Neighbourhood Robust Learning
- Authors: Qi Qi, Shervin Ardeshir, Yi Xu, Tianbao Yang
- Abstract summary: We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
- Score: 49.93775302674591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Improving fairness between privileged and less-privileged sensitive attribute
groups (e.g., {race, gender}) has attracted much attention. To ensure that a model
performs uniformly well across different sensitive attribute groups, we propose a
principled \underline{R}obust \underline{A}dversarial \underline{A}ttribute
\underline{N}eighbourhood (RAAN) loss to debias the classification head and
promote a fairer representation distribution across different sensitive
attribute groups. The key idea of RAAN is to mitigate the differences of biased
representations between different sensitive attribute groups by assigning each
sample an adversarial robust weight, which is defined on the representations of
adversarial attribute neighbors, i.e., the samples from different protected
groups. To enable efficient optimization, we cast RAAN into a sum of coupled
compositional functions and propose SCRAAN, a stochastic algorithm framework
with adaptive (Adam-style) and non-adaptive (SGD-style) variants and provable
theoretical guarantees. Extensive empirical studies on fairness-related
benchmark datasets verify the effectiveness of the proposed method.
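To make the weighting idea concrete, below is a minimal PyTorch sketch of per-sample robust weights computed from similarities to adversarial attribute neighbours, i.e. samples from the other protected groups. The cosine-similarity choice, the temperature `tau`, and the softmax normalization are illustrative assumptions; the paper's coupled compositional objective and the SCRAAN algorithms are not reproduced here.

```python
import torch
import torch.nn.functional as F

def raan_style_weights(z, groups, tau=0.1):
    """Per-sample robust weights built from each sample's similarity to its
    adversarial attribute neighbours (samples of the other protected groups).
    An illustrative reading of the abstract, not the exact RAAN loss."""
    z = F.normalize(z, dim=1)                          # cosine-similarity space
    sim = z @ z.t()                                    # (N, N) similarities
    other = groups.view(-1, 1) != groups.view(1, -1)   # cross-group mask
    # mean similarity of each sample to its cross-group neighbours
    cross = (sim * other).sum(1) / other.sum(1).clamp(min=1)
    # samples far from the other groups (the most group-distinctive
    # representations) receive larger DRO-style weights
    return torch.softmax(-cross / tau, dim=0)

# toy usage: re-weight per-sample losses so the classification head is
# pushed toward uniform behaviour across sensitive attribute groups
z = torch.randn(8, 16)                                 # encoder representations
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])        # sensitive attribute
per_sample_loss = torch.rand(8)                        # e.g. cross-entropy terms
w = raan_style_weights(z, groups)
robust_loss = (w * per_sample_loss).sum()              # re-weighted objective
```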
Related papers
- Provable Optimization for Adversarial Fair Self-supervised Contrastive Learning [49.417414031031264]
This paper studies learning fair encoders in a self-supervised learning setting.
All data are unlabeled and only a small portion of them are annotated with sensitive attributes.
arXiv Detail & Related papers (2024-06-09T08:11:12Z)
- Toward Fair Facial Expression Recognition with Improved Distribution Alignment [19.442685015494316]
We present a novel approach to mitigate bias in facial expression recognition (FER) models.
Our method aims to reduce sensitive attribute information, such as gender, age, or race, in the embeddings produced by FER models.
For the first time, we analyze the notion of attractiveness as an important sensitive attribute in FER models and demonstrate that FER models can indeed exhibit biases towards more attractive faces.
arXiv Detail & Related papers (2023-06-11T14:59:20Z)
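The alignment idea in the entry above can be illustrated with a generic distribution-matching penalty such as maximum mean discrepancy (MMD) between group-conditional embeddings; this is a common choice, not necessarily the paper's objective, and the bandwidth `sigma` and the penalty weight are placeholders.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel between two
    batches of embeddings. A generic alignment penalty; the paper's own
    distribution-alignment objective may differ."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# toy usage: penalize the distance between group-conditional embedding
# distributions so the sensitive attribute becomes hard to recover
emb_g0 = torch.randn(32, 64)        # embeddings of group 0 (e.g. one gender)
emb_g1 = torch.randn(32, 64)        # embeddings of group 1
task_loss = torch.tensor(0.7)       # stand-in for the FER classification loss
total = task_loss + 0.5 * mmd_rbf(emb_g0, emb_g1)   # 0.5 is a placeholder weight
```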
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
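A SAM-style sketch of the weight-perturbation idea in the entry above: evaluate a fairness regularizer at an adversarially perturbed copy of the model weights. The demographic-parity gap, the perturbation radius `eps`, and the linear model are stand-ins, not the paper's RFR construction.

```python
import torch

def dp_gap(model, x, groups):
    """Demographic-parity gap of mean predicted scores; a stand-in
    fairness regularizer, not necessarily the one RFR uses."""
    p = torch.sigmoid(model(x)).squeeze(-1)
    return (p[groups == 0].mean() - p[groups == 1].mean()).abs()

def robust_fairness_reg(model, x, groups, eps=0.05):
    """Evaluate the fairness regularizer at an adversarially perturbed copy
    of the weights, a SAM-style sketch of robustness to weight perturbation."""
    reg = dp_gap(model, x, groups)
    grads = torch.autograd.grad(reg, list(model.parameters()))
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)) + 1e-12
    with torch.no_grad():                    # worst-case weight step
        for p, g in zip(model.parameters(), grads):
            p.add_(eps * g / norm)
    perturbed = dp_gap(model, x, groups)     # regularizer after perturbation
    with torch.no_grad():                    # undo the perturbation
        for p, g in zip(model.parameters(), grads):
            p.sub_(eps * g / norm)
    return perturbed

# toy usage: add lam * robust_fairness_reg(...) to the task loss
model = torch.nn.Linear(10, 1)
x, groups = torch.randn(16, 10), torch.tensor([0, 1] * 8)
reg = robust_fairness_reg(model, x, groups)
```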
- Multiple Attribute Fairness: Application to Fraud Detection [0.0]
We propose a fairness measure relaxing the equality conditions in the popular equal odds fairness regime for classification.
We design an iterative, model-agnostic, grid-based method that calibrates the outcomes per sensitive attribute value to conform to the measure.
arXiv Detail & Related papers (2022-07-28T19:19:45Z)
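The grid-based calibration described above can be sketched as a search for one decision threshold per sensitive attribute value that minimizes the gap in group true-positive rates; the grid, the TPR-only criterion, and the synthetic data are illustrative assumptions rather than the paper's procedure.

```python
import itertools
import numpy as np

def calibrate_thresholds(scores, y, groups, grid=None):
    """Grid-search one decision threshold per sensitive attribute value so
    that group true-positive rates are as close as possible, a relaxed
    equal-odds condition. A generic sketch, not the paper's exact method."""
    grid = np.linspace(0.05, 0.95, 19) if grid is None else grid
    values = np.unique(groups)

    def tpr(th, g):
        pos = (y == 1) & (groups == g)
        return ((scores >= th) & pos).sum() / max(pos.sum(), 1)

    best, best_gap = None, np.inf
    for ths in itertools.product(grid, repeat=len(values)):
        rates = [tpr(t, g) for t, g in zip(ths, values)]
        gap = max(rates) - min(rates)
        if gap < best_gap:
            best, best_gap = dict(zip(values, ths)), gap
    return best, best_gap

# toy usage on synthetic scores with two sensitive attribute values
rng = np.random.default_rng(0)
scores = rng.random(200)
y = (scores + rng.normal(0, 0.2, 200) > 0.5).astype(int)
groups = rng.integers(0, 2, 200)
per_group_thresholds, gap = calibrate_thresholds(scores, y, groups)
```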
- Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation [72.92329724600631]
We propose a pseudo-attribute-based algorithm, coined Spread Spurious Attribute, for improving the worst-group accuracy.
Our experiments on various benchmark datasets show that our algorithm consistently outperforms the baseline methods.
We also demonstrate that the proposed SSA can achieve performance comparable to methods using full (100%) spurious attribute supervision.
arXiv Detail & Related papers (2022-04-05T09:08:30Z)
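Pseudo-attribute methods of this kind typically feed estimated group labels into a worst-group objective. The sketch below shows only that last step, assuming `pseudo_groups` comes from an attribute predictor trained on the small labelled subset; SSA's estimation procedure itself is not reproduced.

```python
import torch

def worst_group_loss(losses, pseudo_groups):
    """Mean loss of the worst pseudo-attribute group, the group-DRO-style
    objective that pseudo-attribute methods typically optimize once group
    labels have been estimated. Not SSA's exact algorithm."""
    group_means = [losses[pseudo_groups == g].mean()
                   for g in pseudo_groups.unique()]
    return torch.stack(group_means).max()

# toy usage: pseudo_groups would come from an attribute predictor trained
# on the small labelled subset and applied to the unlabelled remainder
losses = torch.rand(32)
pseudo_groups = torch.tensor([0, 1] * 16)
loss = worst_group_loss(losses, pseudo_groups)
```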
- Deep Clustering based Fair Outlier Detection [19.601280507914325]
We propose an instance-level weighted representation learning strategy to enhance the joint deep clustering and outlier detection.
Our DCFOD method consistently achieves superior performance on both outlier detection validity and two types of fairness notions in outlier detection.
arXiv Detail & Related papers (2021-06-09T15:12:26Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
arXiv Detail & Related papers (2020-09-09T17:40:24Z)
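One classic differentiable relaxation used as a fairness regularizer is the absolute covariance between the sensitive attribute and the decision score; the sketch below shows that baseline surrogate only, since the paper's provably tighter relaxation is not given in the summary.

```python
import torch

def covariance_relaxation(logits, groups):
    """Absolute covariance between the sensitive attribute and the decision
    score, a classic differentiable surrogate for demographic parity.
    Illustrative baseline only, not the paper's tighter relaxation."""
    g = groups.float()
    return ((g - g.mean()) * logits.squeeze(-1)).mean().abs()

# toy usage: penalize correlation between group membership and scores
logits = torch.randn(64, 1, requires_grad=True)
groups = torch.tensor([0, 1] * 32)
task_loss = torch.tensor(0.9)        # stand-in classification loss
lam = 0.3                            # placeholder regularization weight
total = task_loss + lam * covariance_relaxation(logits, groups)
```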
- Data Augmentation Imbalance For Imbalanced Attribute Classification [60.71438625139922]
We propose a new re-sampling algorithm, data augmentation imbalance (DAI), to explicitly enhance the ability to discriminate the under-represented attributes.
Our DAI algorithm achieves state-of-the-art results on pedestrian attribute datasets.
arXiv Detail & Related papers (2020-04-19T20:43:29Z)
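Re-sampling toward rare attribute values can be sketched as drawing training indices with probability inversely proportional to attribute frequency; this generic scheme is only in the spirit of DAI, whose exact augmentation-based algorithm the summary does not describe.

```python
import numpy as np

def oversample_rare(attrs, rng=None):
    """Draw indices with probability inversely proportional to attribute
    frequency, so rare attribute values appear as often as common ones.
    A generic re-sampling sketch, not the paper's exact DAI algorithm."""
    rng = np.random.default_rng(0) if rng is None else rng
    values, counts = np.unique(attrs, return_counts=True)
    freq = dict(zip(values, counts))
    w = np.array([1.0 / freq[a] for a in attrs], dtype=float)
    return rng.choice(len(attrs), size=len(attrs), replace=True, p=w / w.sum())

# toy usage: a pedestrian-attribute label with one rare value
attrs = np.array([0] * 90 + [1] * 10)
idx = oversample_rare(attrs)         # resampled indices, roughly 50/50
```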