On the Choice of Fairness: Finding Representative Fairness Metrics for a
Given Context
- URL: http://arxiv.org/abs/2109.05697v1
- Date: Mon, 13 Sep 2021 04:17:38 GMT
- Title: On the Choice of Fairness: Finding Representative Fairness Metrics for a
Given Context
- Authors: Hadis Anahideh, Nazanin Nezami, Abolfazl Asudeh
- Abstract summary: Various notions of fairness have been defined, though choosing an appropriate metric is cumbersome.
Trade-offs and impossibility theorems make such selection even more complicated and controversial.
We propose a framework that automatically discovers the correlations and trade-offs between different pairs of measures for a given context.
- Score: 5.667221573173013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is of critical importance to be aware of the historical discrimination
embedded in the data and to consider a fairness measure to reduce bias
throughout the predictive modeling pipeline. Various notions of fairness have
been defined, though choosing an appropriate metric is cumbersome. Trade-offs
and impossibility theorems make such selection even more complicated and
controversial. In practice, users (perhaps regular data scientists) should
understand each of the measures and (if possible) manually explore the
combinatorial space of different measures before they can decide which
combination is preferred based on the context, the use case, and regulations.
To alleviate the burden of selecting fairness notions for consideration, we
propose a framework that automatically discovers the correlations and
trade-offs between different pairs of measures for a given context. Our
framework dramatically reduces the exploration space by finding a small subset
of measures that represent others and highlighting the trade-offs between them.
This allows users to view unfairness from various perspectives that might
otherwise be ignored due to the sheer size of the exploration space. We
showcase the validity of the proposal using comprehensive experiments on
real-world benchmark data sets.
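The abstract does not spell out the algorithm, so the following Python sketch only illustrates the general idea under stated assumptions: evaluate a few standard group-fairness measures over many candidate models, correlate the measures, and keep one representative per correlated cluster. The metric definitions are standard; the correlation-clustering step and every function name here are illustrative assumptions, not the authors' method.
```python
# Hedged sketch, not the paper's implementation: correlate fairness metrics
# across candidate models and keep one representative metric per cluster.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def statistical_parity_diff(y_pred, group):
    """P(yhat=1 | group=1) - P(yhat=1 | group=0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate difference between the two groups."""
    pos = y_true == 1
    return (y_pred[pos & (group == 1)].mean()
            - y_pred[pos & (group == 0)].mean())

def predictive_equality_diff(y_true, y_pred, group):
    """False-positive-rate difference between the two groups."""
    neg = y_true == 0
    return (y_pred[neg & (group == 1)].mean()
            - y_pred[neg & (group == 0)].mean())

METRICS = {
    "statistical_parity": lambda yt, yp, g: statistical_parity_diff(yp, g),
    "equal_opportunity": equal_opportunity_diff,
    "predictive_equality": predictive_equality_diff,
}

def representative_metrics(y_true, predictions, group, max_clusters=2):
    """Evaluate every metric on each candidate model's predictions, cluster
    metrics whose values move together, and report one group per cluster."""
    # rows: metrics; columns: candidate models (several are needed so the
    # correlations are meaningful)
    values = np.array([[fn(y_true, yp, group) for yp in predictions]
                       for fn in METRICS.values()])
    corr = np.corrcoef(values)          # metric-by-metric correlation
    dist = 1.0 - np.abs(corr)           # strongly correlated => close
    condensed = dist[np.triu_indices_from(dist, k=1)]
    labels = fcluster(linkage(condensed, method="average"),
                      t=max_clusters, criterion="maxclust")
    names = list(METRICS)
    return {c: [n for n, l in zip(names, labels) if l == c]
            for c in set(labels)}
```
Here `predictions` would be a list of binary prediction vectors from different candidate models (or bootstrap resamples), so the correlations reflect how the measures move together in the given context; picking one metric per cluster is what shrinks the exploration space.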
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly [2.002741592555996]
Existing techniques for assessing the discrimination level of machine learning models include commonly used group and individual fairness measures.
We propose a "harmonic fairness measure via manifold (HFM)" based on distances between sets.
Empirical results indicate that the proposed fairness measure HFM is valid and that the proposed ApproxDist is effective and efficient.
arXiv Detail & Related papers (2024-05-15T11:07:40Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the
Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Relational Proxies: Emergent Relationships as Fine-Grained
Discriminators [52.17542855760418]
We propose a novel approach that leverages the relationship between the global and local parts of an object to encode its label.
We design Relational Proxies based on our theoretical findings and evaluate them on seven challenging fine-grained benchmark datasets.
We also experimentally validate our theory and obtain consistent results across multiple benchmarks.
arXiv Detail & Related papers (2022-10-05T11:08:04Z) - Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By taking this perspective, we draw explicit connections between a common fairness criterion - separation - and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z) - Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, seemingly objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features (a rough sketch of this perturbation idea appears after this list).
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Adaptive Data Debiasing through Bounded Exploration and Fairness [19.082622108240585]
Biases in existing datasets used to train algorithmic decision rules can raise ethical, societal, and economic concerns.
We propose an algorithm for sequentially debiasing such datasets through adaptive and bounded exploration.
arXiv Detail & Related papers (2021-10-25T15:50:10Z) - Everything is Relative: Understanding Fairness with Optimal Transport [1.160208922584163]
We present an optimal transport-based approach to fairness that offers an interpretable and quantifiable exploration of bias and its structure.
Our framework is able to recover well known examples of algorithmic discrimination, detect unfairness when other metrics fail, and explore recourse opportunities.
arXiv Detail & Related papers (2021-02-20T13:57:53Z)