Understanding Instance-Level Impact of Fairness Constraints
- URL: http://arxiv.org/abs/2206.15437v1
- Date: Thu, 30 Jun 2022 17:31:33 GMT
- Title: Understanding Instance-Level Impact of Fairness Constraints
- Authors: Jialu Wang and Xin Eric Wang and Yang Liu
- Abstract summary: We study the influence of training examples when fairness constraints are imposed.
We find that training on a subset of weighty data examples leads to lower fairness violations at the cost of some accuracy.
- Score: 12.866655972682254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A variety of fairness constraints have been proposed in the
literature to mitigate group-level statistical bias. Their impact has
largely been evaluated for population groups defined by a set of sensitive
attributes, such as race or gender. Nonetheless, how imposing fairness
constraints fares at the instance level has not been sufficiently explored.
Building on the concept of the influence function, a measure that
characterizes the impact of a training example on the target model and its
predictive performance, this work studies the influence of training
examples when fairness constraints are imposed. We find that under certain
assumptions, the influence function with respect to fairness constraints
can be decomposed into a kernelized combination of training examples. One
promising application of the proposed fairness influence function is to
identify suspicious training examples that may cause model discrimination
by ranking their influence scores. We demonstrate with extensive
experiments that training on a subset of weighty data examples leads to
lower fairness violations at the cost of some accuracy.
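To make the ranking idea concrete, here is a minimal sketch of scoring training examples by their estimated influence on a fairness violation, using a logistic model and the demographic parity gap as an illustrative metric. It is not the authors' implementation (which derives a kernelized closed form under their assumptions); the finite-difference fairness gradient and all names here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_loss_grads(theta, X, y):
    # Gradient of the logistic loss for each training example, shape (n, d).
    p = sigmoid(X @ theta)
    return (p - y)[:, None] * X

def damped_hessian(theta, X, damping=1e-3):
    # Hessian of the mean logistic loss, damped so it is safely invertible.
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    return (X * w[:, None]).T @ X / len(X) + damping * np.eye(X.shape[1])

def fairness_grad(theta, X, a, eps=1e-4):
    # Finite-difference gradient of the demographic parity gap
    # |E[score | a=1] - E[score | a=0]| with respect to theta.
    def gap(t):
        s = sigmoid(X @ t)
        return abs(s[a == 1].mean() - s[a == 0].mean())
    g = np.zeros_like(theta)
    for j in range(theta.size):
        e = np.zeros_like(theta)
        e[j] = eps
        g[j] = (gap(theta + e) - gap(theta - e)) / (2.0 * eps)
    return g

def fairness_influence_scores(theta, X, y, a):
    # Classic influence-function approximation: upweighting example i
    # changes the fairness gap by roughly
    #   -grad_gap(theta)^T  H^{-1}  grad_loss(z_i, theta).
    h_inv_g = np.linalg.solve(damped_hessian(theta, X), fairness_grad(theta, X, a))
    return -per_example_loss_grads(theta, X, y) @ h_inv_g
```

Examples with large positive scores are candidates for the "suspicious" examples the abstract mentions; dropping or down-weighting them before retraining mirrors the subset-training experiment, though the sign convention and the choice of fairness metric are assumptions here.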
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without a given causal model by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- On the Cause of Unfairness: A Training Sample Perspective [13.258569961897907]
We look into the problem through the lens of training data - the major source of unfairness.
We quantify the influence of training samples on unfairness by counterfactually changing samples based on predefined concepts.
Our framework can not only help practitioners understand observed unfairness and mitigate it by repairing their training data, but also supports many other applications.
arXiv Detail & Related papers (2023-06-30T17:48:19Z)
- Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias when training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z)
- Why Fair Labels Can Yield Unfair Predictions: Graphical Conditions for Introduced Unfairness [14.710365964629066]
In addition to reproducing discriminatory relationships in the training data, machine learning systems can also introduce or amplify discriminatory effects.
We refer to this as introduced unfairness, and investigate the conditions under which it may arise.
We propose introduced total variation as a measure of introduced unfairness, and establish graphical conditions under which it may be incentivised to occur.
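The measure itself is defined graphically in the paper; purely as a hedged numerical illustration of the underlying idea, that predictions can be more disparate across groups than the labels they were fit to, one might compare label and prediction disparities directly (`introduced_gap` is an illustrative stand-in, not the paper's estimator):

```python
import numpy as np

def group_gap(values, a):
    # Absolute difference in group means under a binary sensitive attribute a.
    return abs(values[a == 1].mean() - values[a == 0].mean())

def introduced_gap(y_true, y_pred, a):
    # Disparity the model adds on top of what the labels already carry:
    # positive values signal unfairness introduced by the learner.
    return group_gap(y_pred, a) - group_gap(y_true, a)
```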
arXiv Detail & Related papers (2022-02-22T11:16:26Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, where the sample weights are computed via influence functions on a validation set that carries the sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
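A hedged sketch of that two-stage recipe follows; the weight rule is illustrative, and `fit` / `influence_on_val_gap` are placeholders rather than FAIRIF's API:

```python
import numpy as np

def reweight_and_retrain(fit, influence_on_val_gap, X, y, step=0.5):
    # Stage 1: score each training example by its estimated influence on
    # the fairness gap measured on the validation set, then shrink the
    # weights of examples that increase the gap (illustrative rule).
    scores = influence_on_val_gap(X, y)  # signed scores, shape (n,)
    weights = 1.0 - step * scores / (np.abs(scores).max() + 1e-12)
    weights = np.clip(weights, 0.0, 2.0)
    # Stage 2: retrain on the reweighted training loss.
    return fit(X, y, weights)
```

With scikit-learn, `fit` could be `lambda X, y, w: LogisticRegression().fit(X, y, sample_weight=w)`.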
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
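Quantification estimates group prevalences rather than labeling individuals. As a hedged sketch, one standard quantification correction, adjusted classify-and-count, recovers a group's prevalence from a noisy attribute classifier (whether the paper uses this exact estimator is not stated here):

```python
def adjusted_classify_and_count(pred_pos_rate, tpr, fpr):
    # Correct the raw predicted-positive rate by the attribute
    # classifier's true/false positive rates (estimated on held-out
    # data), then clamp to a valid prevalence in [0, 1].
    prevalence = (pred_pos_rate - fpr) / max(tpr - fpr, 1e-12)
    return min(max(prevalence, 0.0), 1.0)
```

Per-group prevalences recovered this way can then stand in for the unobserved sensitive attribute when evaluating a group fairness metric.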
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z)
- Improving Fair Predictions Using Variational Inference In Causal Models [8.557308138001712]
The importance of algorithmic fairness grows with the increasing impact machine learning has on people's lives.
Recent work on fairness metrics shows the need for causal reasoning in fairness constraints.
This research aims to contribute to machine learning techniques which honour our ethical and legal boundaries.
arXiv Detail & Related papers (2020-08-25T08:27:11Z)
- RelatIF: Identifying Explanatory Training Examples via Relative Influence [13.87851325824883]
We use influence functions to identify relevant training examples that one might hope "explain" the predictions of a machine learning model.
We introduce RelatIF, a new class of criteria for choosing relevant training examples by way of an optimization objective that places a constraint on global influence.
In empirical evaluations, we find that the examples returned by RelatIF are more intuitive when compared to those found using influence functions.
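One hedged reading of that global-influence constraint is a normalization, dividing an example's influence on a test prediction by its "self-influence"; the paper's exact objective may differ:

```python
import numpy as np

def relative_influence(inf_on_test, grad_train, h_inv):
    # inf_on_test: (n,) raw influence of each training example on one test
    # prediction; grad_train: (n, d) per-example training-loss gradients;
    # h_inv: (d, d) inverse (damped) Hessian of the training loss.
    # Normalizing by self-influence g_i^T H^{-1} g_i keeps globally
    # dominant examples (e.g., outliers) from topping every ranking.
    self_inf = np.einsum('nd,de,ne->n', grad_train, h_inv, grad_train)
    return inf_on_test / (np.sqrt(np.maximum(self_inf, 0.0)) + 1e-12)
```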
arXiv Detail & Related papers (2020-03-25T20:59:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.