Parametric Fairness with Statistical Guarantees
- URL: http://arxiv.org/abs/2310.20508v1
- Date: Tue, 31 Oct 2023 14:52:39 GMT
- Title: Parametric Fairness with Statistical Guarantees
- Authors: François Hu and Philipp Ratz and Arthur Charpentier
- Abstract summary: We extend the concept of Demographic Parity to incorporate distributional properties in predictions, allowing expert knowledge to be used in the fair solution.
We illustrate the use of this new metric through a practical example of wages, and develop a parametric method that efficiently addresses practical challenges.
- Score: 0.46040036610482665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic fairness has gained prominence due to societal and regulatory
concerns about biases in Machine Learning models. Common group fairness metrics
like Equalized Odds for classification or Demographic Parity for both
classification and regression are widely used and a host of computationally
advantageous post-processing methods have been developed around them. However,
these metrics often prevent users from incorporating domain knowledge. Even when
traditional fairness criteria are met, they can obscure issues related to
intersectional fairness and even replicate unwanted intra-group biases in the
resulting fair solution. To avoid this narrow perspective, we extend the
concept of Demographic Parity to incorporate distributional properties in the
predictions, allowing expert knowledge to be used in the fair solution. We
illustrate the use of this new metric through a practical example of wages, and
develop a parametric method that efficiently addresses practical challenges
like limited training data and constraints on total spending, offering a robust
solution for real-life applications.
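To make the baseline concrete, the sketch below computes a simple Demographic Parity gap and then applies a quantile-transform post-processing step that maps each group's predictions onto a common, expert-chosen parametric target distribution, in the spirit of the wage example. It is only an illustration under assumptions of this summary (the function names, the log-normal target, and the empirical quantile construction are ours), not the authors' algorithm.

```python
import numpy as np
from scipy.stats import lognorm

def demographic_parity_gap(scores, group):
    """Absolute difference in mean prediction between two groups:
    a simple measure of Demographic Parity violation."""
    scores, group = np.asarray(scores, dtype=float), np.asarray(group)
    return abs(scores[group == 0].mean() - scores[group == 1].mean())

def parametric_fair_transform(scores, group, target_quantile_fn):
    """Map each group's predictions onto a common target distribution
    via a quantile (probability-integral) transform.

    `target_quantile_fn` is a hypothetical argument: the quantile function
    of an expert-chosen parametric distribution. Using the barycenter of
    the group-wise distributions instead would recover the usual
    Demographic Parity post-processing for regression.
    """
    scores, group = np.asarray(scores, dtype=float), np.asarray(group)
    fair = np.empty_like(scores)
    for a in np.unique(group):
        mask = group == a
        # Within-group empirical ranks, mapped into (0, 1) ...
        ranks = (scores[mask].argsort().argsort() + 1) / (mask.sum() + 1)
        # ... then pushed through the target quantile function.
        fair[mask] = target_quantile_fn(ranks)
    return fair

# Toy wage example with a group-dependent bias in the raw predictions.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
wages = rng.normal(loc=30 + 5 * group, scale=8, size=1000)
fair_wages = parametric_fair_transform(wages, group, lognorm(s=0.4, scale=30).ppf)
print(demographic_parity_gap(wages, group))       # large gap before adjustment
print(demographic_parity_gap(fair_wages, group))  # close to zero afterwards
```

Practical constraints mentioned in the abstract, such as limited training data or a cap on total spending, are not addressed by this toy transform.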
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines in debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Individual Fairness under Uncertainty [26.183244654397477]
Algorithmic fairness is an established area in machine learning (ML) algorithms.
We propose an individual fairness measure and a corresponding algorithm that deal with the challenges of uncertainty arising from censorship in class labels.
We argue that this perspective represents a more realistic model of fairness research for real-world application deployment.
arXiv Detail & Related papers (2023-02-16T01:07:58Z) - Fairness in Matching under Uncertainty [78.39459690570531]
The rise of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Fair Inference for Discrete Latent Variable Models [12.558187319452657]
Machine learning models, trained on data without due care, often exhibit unfair and discriminatory behavior against certain populations.
We develop a fair variational inference technique for the discrete latent variables, which is accomplished by including a fairness penalty on the variational distribution.
To demonstrate the generality of our approach and its potential for real-world impact, we then develop a special-purpose graphical model for criminal justice risk assessments.
arXiv Detail & Related papers (2022-09-15T04:54:21Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (see the illustrative sketch after this list).
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework provides practitioners or policymakers great flexibility to select their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering in the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
arXiv Detail & Related papers (2021-08-06T05:20:46Z) - Learning to Generate Fair Clusters from Demonstrations [27.423983748614198]
We show how to identify the intended fairness constraint for a problem based on limited demonstrations from an expert.
We present an algorithm to identify the fairness metric from demonstrations and generate clusters using existing off-the-shelf clustering techniques.
We investigate how to generate interpretable solutions using our approach.
arXiv Detail & Related papers (2021-02-08T03:09:33Z) - Beyond traditional assumptions in fair machine learning [5.029280887073969]
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We relax the assumption that sensitive data is readily available in practice.
arXiv Detail & Related papers (2021-01-29T09:02:15Z)
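The Contrastive Learning for Fair Representations entry above describes its mechanism as encouraging instances that share a class label to have similar representations. A minimal, self-contained sketch of a supervised contrastive loss in that spirit is given below; the exact loss form, names, and temperature value are assumptions for illustration, not necessarily the objective used in that paper.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Encourage instances sharing a class label to have similar
    (unit-normalised) representations; other instances act as negatives."""
    z = F.normalize(embeddings, dim=1)           # project onto the unit sphere
    sim = z @ z.T / temperature                  # pairwise scaled similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-probability of each non-self instance, normalised per anchor.
    denom = torch.logsumexp(sim.masked_fill(self_mask, float("-inf")),
                            dim=1, keepdim=True)
    log_prob = sim - denom

    # Average over each anchor's positives (anchors with none contribute zero).
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -((log_prob * pos_mask).sum(dim=1) / pos_counts).mean()

# Toy usage: 8 random embeddings with two classes.
emb = torch.randn(8, 16)
lab = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(supervised_contrastive_loss(emb, lab))
```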
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.