A Novel Approach to Fairness in Automated Decision-Making using
Affective Normalization
- URL: http://arxiv.org/abs/2205.00819v1
- Date: Mon, 2 May 2022 11:48:53 GMT
- Title: A Novel Approach to Fairness in Automated Decision-Making using
Affective Normalization
- Authors: Jesse Hoey and Gabrielle Chan
- Abstract summary: We propose a method for measuring the affective, socially biased, component, thus enabling its removal.
That is, given a decision-making process, these affective measurements are used to remove the affective bias in the decision, rendering it fair across a set of categories defined by the method itself.
- Score: 2.0178765779788495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Any decision, such as one about who to hire, involves two components. First,
a rational component, e.g., the candidate has a good education and speaks clearly.
Second, an affective component, based on observables such as visual features of
race and gender, and possibly biased by stereotypes. Here we propose a method
for measuring the affective, socially biased, component, thus enabling its
removal. That is, given a decision-making process, these affective measurements
are used to remove the affective bias in the decision, rendering it fair across a set of
categories defined by the method itself. We thus propose that this may solve
three key problems in intersectional fairness: (1) the definition of categories
over which fairness is a consideration; (2) an infinite regress into smaller
and smaller groups; and (3) ensuring a fair distribution based on basic human
rights or other prior information. The primary idea in this paper is that
fairness biases can be measured using affective coherence, and that this can be
used to normalize outcome mappings. We aim for this conceptual work to expose a
novel method for handling fairness problems that uses emotional coherence as an
independent measure of bias that goes beyond statistical parity.
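The paper is conceptual, but a minimal sketch may help fix ideas. Assuming the affective, socially biased component has already been quantified as a per-candidate score (the paper proposes affective coherence for this; here the scores are simply given as inputs), normalization can be read as projecting that component out of the decision scores. The names and the regression-residual construction below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of "affective normalization" under strong assumptions: the
# affective-coherence scores are treated as given inputs (hypothetical), and the
# affective component is removed by regressing it out of the decision scores.
import numpy as np

def affective_normalize(decision_scores, affective_scores):
    """Return decision scores with their affectively explained part projected out."""
    decision_scores = np.asarray(decision_scores, dtype=float)
    affective_scores = np.asarray(affective_scores, dtype=float)

    # Least-squares fit of decision on affect (with intercept).
    X = np.column_stack([np.ones_like(affective_scores), affective_scores])
    coef, *_ = np.linalg.lstsq(X, decision_scores, rcond=None)

    # Residual = decision minus its affectively explained component.
    return decision_scores - X @ coef + decision_scores.mean()

# Toy usage: candidates with similar merit but different affective bias.
scores = np.array([0.80, 0.60, 0.75, 0.55])
affect = np.array([0.9, 0.1, 0.8, 0.2])   # hypothetical affective-coherence measurements
print(affective_normalize(scores, affect))
```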
Related papers
- How to be fair? A study of label and selection bias [3.018638214344819]
It is widely accepted that biased data leads to biased and potentially unfair models.
Several measures for bias in data and model predictions have been proposed, as well as bias mitigation techniques.
Despite the myriad of mitigation techniques developed in the past decade, it is still poorly understood which methods work under which circumstances.
arXiv Detail & Related papers (2024-03-21T10:43:55Z)
- Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
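As a concrete instance of the "relatively simple group fairness metrics" mentioned above, the sketch below computes a demographic (statistical) parity difference. Under the paper's causal-context argument a metric of this kind can serve as a practical test; the metric choice and the toy data are assumptions here.

```python
# Hedged sketch: demographic parity difference between two groups, a simple
# group fairness metric of the kind the paper relates to counterfactual fairness.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive prediction rates between group 0 and group 1."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model decisions
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # binary sensitive attribute
print(demographic_parity_difference(y_pred, group))  # 0.5
```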
arXiv Detail & Related papers (2023-10-30T16:07:57Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
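A hedged sketch of what a zero-shot classification probe of this kind can look like with the publicly available CLIP checkpoints, via the Hugging Face transformers interface. The prompts, the placeholder image, and the comparison logic are illustrative assumptions, not the paper's taxonomy or protocol.

```python
# Hedged sketch of a zero-shot CLIP probe; a real audit would compare these label
# probabilities across images of different demographic groups.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a doctor", "a photo of a nurse", "a photo of a criminal"]
image = Image.new("RGB", (224, 224))  # placeholder; real audits use face datasets

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```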
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness [8.367620276482056]
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets reveals significant differences in the behaviors of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
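The sketch below illustrates one phenomenon of the kind FRAME is built to surface: predictive multiplicity, where near-equivalent models disagree on individual outcomes. The bootstrap-and-compare procedure and the ambiguity measure are assumptions for illustration, not FRAME's five dimensions.

```python
# Hedged sketch of predictive multiplicity: equally reasonable models (bootstrap
# refits) that disagree on individual predictions, i.e. arbitrariness of outcomes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)

preds = []
for _ in range(20):
    idx = rng.choice(len(X), size=len(X), replace=True)   # bootstrap resample
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    preds.append(clf.predict(X))
preds = np.stack(preds)

# Ambiguity: fraction of individuals on which at least two models disagree.
ambiguity = np.mean(preds.min(axis=0) != preds.max(axis=0))
print(f"fraction of individuals with model-dependent outcomes: {ambiguity:.2%}")
```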
arXiv Detail & Related papers (2023-02-14T16:53:52Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA)
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
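A rough sketch of the curvature-matching intuition: penalize differences in loss-landscape sharpness between groups so that fairness holds under distribution shift. The gradient-norm proxy and the penalty weight below are assumptions; CUMA's actual formulation is not reproduced here.

```python
# Hedged sketch: gradient norms as a crude per-group curvature/sharpness proxy,
# with a penalty on the gap between groups added to the task loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss()

def grad_norm(loss):
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return torch.sqrt(sum((g ** 2).sum() for g in grads))

# Synthetic batch with a binary group label.
x, y = torch.randn(64, 5), torch.randint(0, 2, (64, 1)).float()
group = torch.randint(0, 2, (64,))

loss_a = loss_fn(model(x[group == 0]), y[group == 0])
loss_b = loss_fn(model(x[group == 1]), y[group == 1])
fairness_penalty = (grad_norm(loss_a) - grad_norm(loss_b)) ** 2

total_loss = loss_a + loss_b + 0.1 * fairness_penalty  # weight is an assumption
total_loss.backward()
```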
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
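A minimal sketch of how the local balance criterion described above could be checked: for each sample, compare the sensitive-attribute rate among its nearest neighbors in representation space to the global rate. The k value and names are illustrative assumptions.

```python
# Hedged sketch of a local-fairness check on a learned representation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_imbalance(representation, sensitive, k=10):
    """Per-sample deviation of the neighborhood sensitive-attribute rate from the global rate."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(representation)
    _, idx = nn.kneighbors(representation)
    neighbor_attrs = sensitive[idx[:, 1:]]          # drop the sample itself
    pos_rate = neighbor_attrs.mean(axis=1)          # neighborhood rate of attribute == 1
    return np.abs(pos_rate - sensitive.mean())

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 8))                       # stand-in learned representation
a = rng.integers(0, 2, size=200)                    # binary sensitive attribute
print("mean local imbalance:", local_imbalance(Z, a).mean())
```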
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
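A hedged sketch of a prediction-sensitivity style measurement: the gradient of the model's output with respect to its (embedded) input features, aggregated over a batch. The plain L2 aggregation below stands in for the paper's accumulation with feature weights and is an assumption.

```python
# Hedged sketch: mean gradient norm of the prediction w.r.t. the input features,
# a simple stand-in for an accumulated prediction-sensitivity score.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

def prediction_sensitivity(model, x):
    x = x.clone().requires_grad_(True)
    pred = model(x).sum()
    (grad,) = torch.autograd.grad(pred, x)
    return grad.norm(dim=1).mean()

x = torch.randn(128, 20)   # stand-in for embedded text features
print("accumulated sensitivity (toy):", prediction_sensitivity(model, x).item())
```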
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
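A minimal sketch of the augmented-discriminator idea: the adversary that predicts the protected attribute also receives the target class, letting it model class-conditional leakage. Layer sizes and the one-hot conditioning are illustrative assumptions.

```python
# Hedged sketch: an adversary conditioned on the gold task label, used during
# adversarial training to scrub protected information from the encoder output.
import torch
import torch.nn as nn

class AugmentedDiscriminator(nn.Module):
    def __init__(self, hidden_dim, n_classes, n_protected):
        super().__init__()
        # Hidden representation concatenated with a one-hot of the target class.
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + n_classes, 64),
            nn.ReLU(),
            nn.Linear(64, n_protected),
        )
        self.n_classes = n_classes

    def forward(self, hidden, target_class):
        class_onehot = nn.functional.one_hot(target_class, self.n_classes).float()
        return self.net(torch.cat([hidden, class_onehot], dim=1))

disc = AugmentedDiscriminator(hidden_dim=128, n_classes=2, n_protected=2)
hidden = torch.randn(32, 128)              # encoder output for a batch
target = torch.randint(0, 2, (32,))        # gold task labels
protected_logits = disc(hidden, target)    # adversary's guess at the protected attribute
```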
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
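The sketch below conveys the "map every group into a single group" objective with the simplest invertible transform, a per-group affine map onto a reference group. The paper itself uses normalizing flows; this affine stand-in is an assumption made for brevity.

```python
# Hedged sketch: per-group affine (invertible) maps that send each group's features
# onto the reference group's distribution, so downstream models see one shared group.
import numpy as np

def to_shared_group(X, group, reference=0):
    X = np.asarray(X, dtype=float)
    ref = X[group == reference]
    mu_ref, sd_ref = ref.mean(axis=0), ref.std(axis=0) + 1e-8
    X_shared = X.copy()
    for g in np.unique(group):
        Xg = X[group == g]
        mu_g, sd_g = Xg.mean(axis=0), Xg.std(axis=0) + 1e-8
        X_shared[group == g] = (Xg - mu_g) / sd_g * sd_ref + mu_ref  # invertible map
    return X_shared

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(2, 3, (100, 4))])
group = np.array([0] * 100 + [1] * 100)
print(to_shared_group(X, group)[100:].mean(axis=0))  # group 1 now matches group 0
```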
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- fairadapt: Causal Reasoning for Fair Data Pre-processing [2.1915057426589746]
This manuscript describes the R-package fairadapt, which implements a causal inference pre-processing method.
We discuss appropriate relaxations which assume certain causal pathways from the sensitive attribute to the outcome are not discriminatory.
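fairadapt is an R package, so the Python sketch below only illustrates the quantile-matching idea behind this style of causal pre-processing for a single variable downstream of the sensitive attribute; it is not the package's API, and real usage additionally requires the causal adjacency matrix.

```python
# Hedged sketch: quantile-preserving adaptation of one descendant variable onto the
# reference group's distribution, the core idea behind causal fair pre-processing.
import numpy as np

def quantile_adapt(values, group, reference=0):
    values = np.asarray(values, dtype=float)
    adapted = values.copy()
    ref_vals = values[group == reference]
    for g in np.unique(group):
        if g == reference:
            continue
        g_vals = values[group == g]
        # Empirical quantile of each value within its own group ...
        q = np.searchsorted(np.sort(g_vals), g_vals, side="right") / len(g_vals)
        # ... mapped to the same quantile of the reference group.
        adapted[group == g] = np.quantile(ref_vals, q)
    return adapted

rng = np.random.default_rng(0)
feature = np.concatenate([rng.normal(60, 5, 100), rng.normal(50, 5, 100)])
group = np.array([0] * 100 + [1] * 100)
print(quantile_adapt(feature, group)[100:].mean())  # shifted toward group 0
```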
arXiv Detail & Related papers (2021-10-19T18:48:28Z)
- Bias in Machine Learning Software: Why? How? What to do? [15.525314212209564]
This paper postulates that the root causes of bias are the prior decisions that affect (a) what data was selected and (b) the labels assigned to those examples.
The Fair-SMOTE algorithm removes biased labels and rebalances internal distributions so that, for each value of the sensitive attribute, examples are equal in both the positive and negative classes.
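A hedged sketch of the rebalancing step described above: grow every (sensitive attribute, class label) subgroup to the same size. Plain resampling with replacement stands in for SMOTE-style synthetic interpolation, and the column names are illustrative assumptions.

```python
# Hedged sketch: oversample so every (sensitive attribute, class label) subgroup
# reaches the size of the largest one; resampling replaces SMOTE interpolation.
import numpy as np
import pandas as pd

def rebalance(df, sensitive_col, label_col, seed=0):
    groups = df.groupby([sensitive_col, label_col])
    target = groups.size().max()                    # grow every subgroup to this size
    parts = []
    for _, part in groups:
        extra = target - len(part)
        if extra > 0:
            parts.append(part.sample(extra, replace=True, random_state=seed))
        parts.append(part)
    return pd.concat(parts, ignore_index=True)

df = pd.DataFrame({
    "sex": [0, 0, 0, 0, 1, 1, 1],                   # sensitive attribute
    "hired": [1, 1, 1, 0, 0, 0, 1],                 # class label
})
balanced = rebalance(df, "sex", "hired")
print(balanced.groupby(["sex", "hired"]).size())    # all four subgroups now equal
```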
arXiv Detail & Related papers (2021-05-25T20:15:50Z)