Causal Fairness Analysis
- URL: http://arxiv.org/abs/2207.11385v1
- Date: Sat, 23 Jul 2022 01:06:34 GMT
- Title: Causal Fairness Analysis
- Authors: Drago Plecko, Elias Bareinboim
- Abstract summary: We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
- Score: 68.12191782657437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decision-making systems based on AI and machine learning have been used
throughout a wide range of real-world scenarios, including healthcare, law
enforcement, education, and finance. It is no longer far-fetched to envision a
future where autonomous systems will be driving entire business decisions and,
more broadly, supporting large-scale decision-making infrastructure to solve
society's most challenging problems. Issues of unfairness and discrimination
are pervasive when decisions are being made by humans, and remain (or are
potentially amplified) when decisions are made using machines with little
transparency, accountability, and fairness. In this paper, we introduce a
framework for causal fairness analysis with the intent of filling this gap,
i.e., understanding, modeling, and possibly solving issues of fairness in
decision-making settings. The main insight of our approach is to link the
quantification of the disparities present in the observed data with the
underlying, and often unobserved, collection of causal mechanisms that
generate the disparity in the first place, a challenge we call the Fundamental
Problem of Causal Fairness Analysis (FPCFA). In order to solve the FPCFA, we
study the problem of decomposing variations and empirical measures of fairness
that attribute such variations to structural mechanisms and different units of
the population. Our effort culminates in the Fairness Map, which is the first
systematic attempt to organize and explain the relationship between different
criteria found in the literature. Finally, we study which causal assumptions
are minimally needed for performing causal fairness analysis and propose a
Fairness Cookbook, which allows data scientists to assess the existence of
disparate impact and disparate treatment.
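To make the shape of this decomposition concrete: the observed disparity between groups (measured by the total variation, TV) is attributed to direct, indirect, and spurious causal pathways. The identity below is a sketch of this idea under one common sign and conditioning convention; the paper's formal definitions should be consulted for the exact statement.

```latex
% Observed disparity split into causal components (sketch):
\mathrm{TV}_{x_0,x_1}(y) =
    \underbrace{\text{Ctf-DE}_{x_0,x_1}(y \mid x_0)}_{\text{direct}}
  - \underbrace{\text{Ctf-IE}_{x_1,x_0}(y \mid x_0)}_{\text{indirect}}
  - \underbrace{\text{Ctf-SE}_{x_1,x_0}(y)}_{\text{spurious}}
```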
Related papers
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias.
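For intuition on the barycenter step: in one dimension, the Wasserstein-2 barycenter of the per-group feature distributions has a quantile function equal to the average of the group quantile functions, which yields a simple repair map. The NumPy sketch below is our own illustration of that construction, not the authors' implementation:

```python
import numpy as np

def barycenter_repair(x, groups):
    """Map each value onto the 1-D Wasserstein-2 barycenter of the
    per-group distributions by averaging group quantile functions.
    Illustrative sketch, not the paper's implementation."""
    x, groups = np.asarray(x, float), np.asarray(groups)
    gs = np.unique(groups)
    sorted_x = {g: np.sort(x[groups == g]) for g in gs}
    repaired = np.empty_like(x)
    for g in gs:
        mask = groups == g
        # empirical CDF rank of each point within its own group
        u = np.searchsorted(sorted_x[g], x[mask], side="right") / mask.sum()
        # barycenter quantile function = mean of the group quantile functions
        repaired[mask] = np.mean([np.quantile(sorted_x[h], u) for h in gs], axis=0)
    return repaired
```

A predictor trained on the repaired feature sees the same (barycentric) distribution for every group, which is the sense in which fairness is enforced distributionally.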
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find that deployed machine learning is prone to systemic failure, meaning that some users are misclassified by every available model.
These examples demonstrate that ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
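A minimal way to operationalize the systemic-failure notion above is to measure the share of users whom every model in the deployed collection misclassifies. The sketch below assumes hard-label predictions stacked per model (names are illustrative):

```python
import numpy as np

def systemic_failure_rate(predictions, labels):
    """Fraction of users misclassified by *all* deployed models.

    predictions: (n_models, n_users) hard labels, one row per model
    labels:      (n_users,) ground-truth labels
    """
    wrong = np.asarray(predictions) != np.asarray(labels)  # (n_models, n_users)
    return wrong.all(axis=0).mean()
```

A nonzero rate even when every individual model is accurate is the homogeneous-outcome effect the paper highlights.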
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here can be used to assess the fairness of automated decision systems as part of a more extensive accountability assessment.
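The test statistic itself is not spelled out in this summary; the sketch below only illustrates the underlying idea of comparing row-normalized confusion matrices across groups and reporting the largest deviation (function and variable names are ours):

```python
import numpy as np

def confusion_parity_error(y_true, y_pred, groups, n_classes):
    """Illustrative group-confusion comparison: 0 means every group has
    identical row-normalized confusion behavior; larger is less fair.
    Assumes integer labels in [0, n_classes)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    mats = []
    for g in np.unique(groups):
        m = np.zeros((n_classes, n_classes))
        for t, p in zip(y_true[groups == g], y_pred[groups == g]):
            m[t, p] += 1
        m /= np.clip(m.sum(axis=1, keepdims=True), 1, None)  # row-normalize
        mats.append(m)
    mean = np.mean(mats, axis=0)
    return max(np.abs(m - mean).max() for m in mats)
```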
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to make consequential decisions in problems such as job hiring and loan granting.
However, seemingly objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
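Two of the most common notions such surveys cover can be stated in a few lines each; the helper names below are our own shorthand (binary labels are assumed, and every group is assumed to contain both positives and negatives):

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate across groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, groups):
    """Largest TPR or FPR difference across groups."""
    gaps = []
    for y in (1, 0):  # y=1 compares TPRs, y=0 compares FPRs
        rates = [y_pred[(groups == g) & (y_true == y)].mean()
                 for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```

The tension the survey discusses is already visible here: when base rates differ across groups, a non-trivial classifier cannot drive both gaps to zero at once.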
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- The Equity Framework: Fairness Beyond Equalized Predictive Outcomes [0.0]
We study fairness issues that arise when decision-makers use models that deviate from those that depict the physical and social environment.
We formulate an Equity Framework that considers equal access to the model, equal outcomes from the model, and equal utilization of the model.
arXiv Detail & Related papers (2022-04-18T20:49:51Z)
- Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss [11.291571222801027]
We propose a unified loss correction to address issues related to Fairness, Biases and Imbalances (FBI-loss).
The correction capabilities of the proposed approach are assessed on three real-world benchmarks.
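The summary does not spell out the loss itself; as a stand-in, the sketch below shows the generic per-example reweighting pattern that unified corrections of this kind build on. This is not the FBI-loss, only the scaffolding such a correction plugs into:

```python
import numpy as np

def reweighted_bce(p, y, w):
    """Weighted binary cross-entropy. `w` can encode class-imbalance or
    group-level corrections; the actual FBI-loss correction differs."""
    eps = 1e-12
    ce = -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return float(np.mean(w * ce))
```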
arXiv Detail & Related papers (2021-05-13T15:01:14Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.