Learning Antidote Data to Individual Unfairness
- URL: http://arxiv.org/abs/2211.15897v3
- Date: Wed, 24 May 2023 04:56:59 GMT
- Title: Learning Antidote Data to Individual Unfairness
- Authors: Peizhao Li, Ethan Xia, Hongfu Liu
- Abstract summary: Individual fairness is a vital notion to describe fair treatment for individual cases.
Previous studies characterize individual fairness as a prediction-invariant problem.
We show our method resists individual unfairness at a minimal or zero cost to predictive utility.
- Score: 23.119278763970037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness is essential for machine learning systems deployed in high-stakes
applications. Among all fairness notions, individual fairness, deriving from a
consensus that "similar individuals should be treated similarly", is a vital
notion describing fair treatment for individual cases. Previous studies
typically characterize individual fairness as a prediction-invariance problem
under perturbations of sensitive attributes on samples, and solve it via the
Distributionally Robust Optimization (DRO) paradigm. However, such adversarial
perturbations along a direction covering sensitive information, as used in DRO,
do not consider the inherent feature correlations or innate data constraints,
and can therefore mislead the model into optimizing at off-manifold, unrealistic
samples. In light of this drawback, we propose to learn and
generate antidote data that approximately follows the data distribution to
remedy individual unfairness. These generated on-manifold antidote data can be
used through a generic optimization procedure along with the original training
data, resulting in a pure pre-processing approach to individual unfairness, or
can also fit well with the in-processing DRO paradigm. Through extensive
experiments on multiple tabular datasets, we demonstrate that our method resists
individual unfairness at minimal or zero cost to predictive utility compared
to baselines.
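A minimal, illustrative sketch of the two ideas above (not the paper's implementation): individual unfairness is probed as prediction invariance under a sensitive-attribute flip, and the pre-processing route simply appends pre-generated antidote samples to the original training data. The antidote generator itself is omitted; the logistic-regression backbone, the binary sensitive attribute, and all function names are assumptions made for illustration.

```python
# Sketch only: a consistency check for individual fairness and the
# pre-processing use of antidote data. Not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prediction_consistency(model, X, sensitive_idx):
    """Fraction of samples whose prediction is unchanged when a binary
    sensitive attribute is flipped; higher means more individually fair."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_idx] = 1 - X_flipped[:, sensitive_idx]
    return float(np.mean(model.predict(X) == model.predict(X_flipped)))

def train_with_antidote(X, y, X_antidote, y_antidote):
    """Pure pre-processing route: append antidote samples (assumed to be
    generated close to the data manifold) to the training set and fit."""
    X_aug = np.vstack([X, X_antidote])
    y_aug = np.concatenate([y, y_antidote])
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```

The same augmented data could equally be fed to an in-processing DRO objective; the sketch only covers the pre-processing path described in the abstract.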
Related papers
- Alpha and Prejudice: Improving α-sized Worst-case Fairness via Intrinsic Reweighting [34.954141077528334]
Worst-case fairness with off-the-shelf demographic groups achieves parity by maximizing the model utility of the worst-off group.
Recent advances have reframed this learning problem by introducing a lower bound on the minimal partition ratio.
arXiv Detail & Related papers (2024-11-05T13:04:05Z)
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose MITA, a Meet-In-The-Middle approach that introduces energy-based optimization to encourage mutual adaptation of the model and the data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
The current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Self-Supervised Dataset Distillation for Transfer Learning [77.4714995131992]
We propose a novel problem of distilling an unlabeled dataset into a set of small synthetic samples for efficient self-supervised learning (SSL).
We first prove that the gradient of synthetic samples with respect to an SSL objective in naive bilevel optimization is biased due to randomness originating from data augmentations or masking.
We empirically validate the effectiveness of our method on various applications involving transfer learning.
arXiv Detail & Related papers (2023-10-10T10:48:52Z)
- Delving into Identify-Emphasize Paradigm for Combating Unknown Bias [52.76758938921129]
We propose an effective bias-conflicting scoring method (ECS) to boost the identification accuracy.
We also propose gradient alignment (GA) to balance the contributions of the mined bias-aligned and bias-conflicting samples.
Experiments are conducted on multiple datasets in various settings, demonstrating that the proposed solution can mitigate the impact of unknown biases.
arXiv Detail & Related papers (2023-02-22T14:50:24Z)
- Simultaneous Improvement of ML Model Fairness and Performance by Identifying Bias in Data [1.76179873429447]
We propose a data preprocessing technique that can detect instances ascribing a specific kind of bias that should be removed from the dataset before training.
In particular, we claim that in problem settings where instances exist with similar features but different labels caused by variation in protected attributes, an inherent bias is induced in the dataset.
arXiv Detail & Related papers (2022-10-24T13:04:07Z)
- Fair mapping [0.0]
We propose a novel pre-processing method based on the transformation of the distribution of protected groups onto a chosen target one.
We leverage recent work on the Wasserstein GAN and AttGAN frameworks to achieve optimal transport of data points.
Our proposed approach preserves the interpretability of the data and can be used without exactly defining the sensitive groups.
arXiv Detail & Related papers (2022-09-01T17:31:27Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (a minimal sketch follows this list).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2021-08-11T21:17:02Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias in the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
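For the ATC entry above, here is a minimal sketch of the thresholding idea, assuming max-softmax probabilities as the confidence score; the quantile-based threshold fit and all names are illustrative assumptions rather than the authors' exact procedure.

```python
# Sketch of Average Thresholded Confidence (ATC): fit a confidence
# threshold on labeled source data, then predict target accuracy as the
# fraction of unlabeled target examples whose confidence exceeds it.
import numpy as np

def fit_atc_threshold(source_probs, source_labels):
    """Choose a threshold t so that the fraction of source examples with
    confidence above t matches the source accuracy."""
    confidence = source_probs.max(axis=1)
    accuracy = np.mean(source_probs.argmax(axis=1) == source_labels)
    return np.quantile(confidence, 1.0 - accuracy)

def predict_target_accuracy(target_probs, threshold):
    """Predicted target-domain accuracy = fraction of unlabeled target
    examples whose confidence exceeds the learned threshold."""
    return float(np.mean(target_probs.max(axis=1) > threshold))
```

By construction the threshold reproduces the source accuracy on the source split; the same threshold is then applied to the unlabeled target predictions.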