Procedural Fairness in Machine Learning
- URL: http://arxiv.org/abs/2404.01877v1
- Date: Tue, 2 Apr 2024 12:05:02 GMT
- Title: Procedural Fairness in Machine Learning
- Authors: Ziming Wang, Changwu Huang, Xin Yao
- Abstract summary: We first define the procedural fairness of ML models, and then give formal definitions of individual and group procedural fairness.
We propose a novel metric to evaluate the group procedural fairness of ML models, called $GPF_{FAE}$.
- Score: 8.034966749391902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in machine learning (ML) has received much attention. However, existing studies have mainly focused on the distributive fairness of ML models. The other dimension of fairness, i.e., procedural fairness, has been neglected. In this paper, we first define the procedural fairness of ML models, and then give formal definitions of individual and group procedural fairness. We propose a novel metric to evaluate the group procedural fairness of ML models, called $GPF_{FAE}$, which utilizes a widely used explainable artificial intelligence technique, namely feature attribution explanation (FAE), to capture the decision process of the ML models. We validate the effectiveness of $GPF_{FAE}$ on a synthetic dataset and eight real-world datasets. Our experiments reveal the relationship between procedural and distributive fairness of the ML model. Based on our analysis, we propose a method for identifying the features that lead to the procedural unfairness of the model and propose two methods to improve procedural fairness after identifying unfair features. Our experimental results demonstrate that we can accurately identify the features that lead to procedural unfairness in the ML model, and both of our proposed methods can significantly improve procedural fairness with a slight impact on model performance, while also improving distributive fairness.
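The abstract does not spell out how $GPF_{FAE}$ is computed, but the core idea, comparing feature attribution explanations across groups, can be sketched as follows. This is a minimal illustration, not the paper's metric: the linear model, the coefficient-times-value attribution, and the Euclidean gap between group means are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: 3 features plus a binary group attribute (not a model input).
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Per-sample feature attributions for a linear model: coefficient * feature value.
# (A stand-in for a generic FAE technique such as SHAP or LIME.)
attributions = model.coef_ * X  # shape (n_samples, n_features)

# Compare the mean attribution profile of each group; a small distance
# suggests the model uses features similarly for both groups.
mean_a = attributions[group == 0].mean(axis=0)
mean_b = attributions[group == 1].mean(axis=0)
gap = np.linalg.norm(mean_a - mean_b)
print(f"group attribution gap (illustrative, not the paper's GPF_FAE): {gap:.4f}")
```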
Related papers
- Perceived Fairness of the Machine Learning Development Process: Concept Scale Development [0.0]
Unfairness is triggered by bias in the data, the data curation process, erroneous assumptions, and implicit bias introduced during the development process.
We propose operational attributes of perceived fairness to be transparency, accountability, and representativeness.
The resulting multidimensional framework offers a comprehensive understanding of perceived fairness.
arXiv Detail & Related papers (2025-01-23T06:51:31Z) - Procedural Fairness and Its Relationship with Distributive Fairness in Machine Learning [16.07834804195331]
This paper proposes a novel method to achieve procedural fairness during the model training phase.
The effectiveness of the proposed method is validated through experiments conducted on one synthetic and six real-world datasets.
arXiv Detail & Related papers (2025-01-12T08:42:32Z) - Multi-Output Distributional Fairness via Post-Processing [47.94071156898198]
We introduce a post-processing method for multi-output models to enhance a model's distributional parity, a task-agnostic fairness measure.
Our method employs an optimal transport mapping to move a model's outputs across different groups towards their empirical Wasserstein barycenter.
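For one-dimensional scores the empirical Wasserstein barycenter has a closed form: average the groups' quantile functions. The sketch below illustrates that special case, not the paper's multi-output method; the score distributions and equal group weights are assumptions.

```python
import numpy as np

def barycenter_map(scores_by_group):
    """Map each group's scores toward the equal-weight Wasserstein barycenter.

    For 1D distributions the barycenter's quantile function is the average
    of the groups' quantile functions, so each score is moved to the mean
    of the corresponding quantiles across groups.
    """
    adjusted = {}
    grid = np.linspace(0, 1, 101)
    # Quantile function of each group, evaluated on a common grid.
    quantiles = {g: np.quantile(s, grid) for g, s in scores_by_group.items()}
    bary = np.mean(list(quantiles.values()), axis=0)  # barycenter quantiles
    for g, s in scores_by_group.items():
        # Rank of each score within its group, then read off the barycenter value.
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        adjusted[g] = np.interp(ranks, grid, bary)
    return adjusted

rng = np.random.default_rng(1)
scores = {"a": rng.normal(0.4, 0.1, 500), "b": rng.normal(0.6, 0.15, 500)}
post = barycenter_map(scores)
print({g: round(v.mean(), 3) for g, v in post.items()})  # group means now close
```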
arXiv Detail & Related papers (2024-08-31T22:41:26Z) - Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment [65.15914284008973]
We propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model.
We show that the proposed algorithms converge to the stationary solutions of the IRL problem.
Our results indicate that it is beneficial to leverage reward learning throughout the entire alignment process.
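The summary gives no algorithmic detail, so the sketch below only illustrates the general IRL pattern on a toy bandit: alternately raise reward on demonstrated actions relative to what the current policy prefers, then update the policy toward high reward. Everything here (the action space, learning rates, update rules) is an assumption, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "alignment" setup: 5 discrete responses; demonstrations favor action 3.
n_actions = 5
demos = rng.choice(n_actions, size=200, p=[0.05, 0.05, 0.1, 0.7, 0.1])
demo_freq = np.bincount(demos, minlength=n_actions) / len(demos)

reward = np.zeros(n_actions)   # reward model: one value per action
logits = np.zeros(n_actions)   # policy parameters (softmax)

for _ in range(500):
    policy = np.exp(logits - logits.max())
    policy /= policy.sum()
    # Reward step: raise reward on demonstrated actions, lower it on actions
    # the current policy prefers (a max-entropy-IRL-style contrast).
    reward += 0.1 * (demo_freq - policy)
    # Policy step: softmax policy gradient toward high-reward actions.
    logits += 0.5 * policy * (reward - policy @ reward)

policy = np.exp(logits - logits.max())
policy /= policy.sum()
print("learned policy:", np.round(policy, 2))  # should concentrate on action 3
```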
arXiv Detail & Related papers (2024-05-28T07:11:05Z) - Fairness Reprogramming [42.65700878967251]
We propose a new generic fairness learning paradigm, called FairReprogram, which incorporates the model reprogramming technique.
Specifically, FairReprogram considers the case where the model cannot be changed and appends a set of perturbations, called the fairness trigger, to the input.
We show both theoretically and empirically that the fairness trigger can effectively obscure demographic biases in the output prediction of fixed ML models.
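A minimal sketch of the trigger idea, assuming gradient access to a frozen model: a single learned input offset is optimized to shrink the group score gap while keeping outputs close to the originals. The frozen network, the loss terms, and their weights are illustrative assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2000, 6
X = torch.randn(n, d)
group = (torch.rand(n) < 0.5).long()
X[:, 0] += 1.5 * group.float()              # feature 0 leaks group membership

# Frozen "fixed" classifier: its parameters are never updated.
model = nn.Sequential(nn.Linear(d, 16), nn.Tanh(), nn.Linear(16, 1))
for p in model.parameters():
    p.requires_grad_(False)

with torch.no_grad():
    base = torch.sigmoid(model(X)).squeeze(1)   # original scores

delta = torch.zeros(d, requires_grad=True)      # the input "fairness trigger"
opt = torch.optim.Adam([delta], lr=0.05)

for _ in range(200):
    scores = torch.sigmoid(model(X + delta)).squeeze(1)
    gap = (scores[group == 1].mean() - scores[group == 0].mean()).abs()
    drift = (scores - base).pow(2).mean()       # stay close to original scores
    loss = gap + drift
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"group score gap with trigger: {gap.item():.4f}")
```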
arXiv Detail & Related papers (2022-09-21T09:37:00Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel Model-Agnostic Counterfactual Explanation (MACE) framework.
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
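A hedged sketch of the two-stage idea: a gradient-free search that first flips the prediction, followed by a gradient-less descent that pulls coordinates back toward the query to improve L1 proximity. The random-search phase stands in for the paper's RL-based method; the classifier, step sizes, and iteration counts are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def find_flip(x, target, tries=2000, scale=1.0):
    """Phase 1: gradient-free random search for any input that flips
    the model's prediction to `target`."""
    for _ in range(tries):
        cand = x + rng.normal(scale=scale, size=x.shape)
        if clf.predict(cand[None])[0] == target:
            return cand
    return None

def improve_proximity(x, cf, target, iters=300):
    """Phase 2: gradient-less descent -- move one coordinate of the
    counterfactual halfway back toward x, keeping only moves that
    preserve the flipped prediction (improves L1 proximity)."""
    for _ in range(iters):
        trial = cf.copy()
        j = rng.integers(len(x))
        trial[j] = 0.5 * (trial[j] + x[j])
        if clf.predict(trial[None])[0] == target:
            cf = trial
    return cf

x0 = X[y == 0][0]
cf0 = find_flip(x0, target=1)
assert cf0 is not None, "no flip found; increase tries or scale"
cf = improve_proximity(x0, cf0, target=1)
print("L1 distance of counterfactual:", np.abs(cf - x0).sum())
```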
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, with sample weights computed via influence functions on a validation set that carries sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
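A simplified sketch of influence-based reweighting, assuming a logistic model and dropping the Hessian inverse from the full influence formula: training samples whose gradients align with increasing a validation fairness gap are downweighted before retraining. This is not the paper's exact two-stage algorithm; the data, fairness loss, and weighting rule are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_data(n):
    X = rng.normal(size=(n, 3))
    a = rng.integers(0, 2, n)                    # binary sensitive attribute
    X[:, 0] += a                                 # feature 0 correlates with it
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
    return X, y, a

Xtr, ytr, atr = make_data(2000)
Xva, yva, ava = make_data(500)                   # validation set with attributes

clf = LogisticRegression().fit(Xtr, ytr)
w, b = clf.coef_[0], clf.intercept_[0]

# Gradient of the validation fairness loss (squared group score gap) w.r.t. w.
s_va = sigmoid(Xva @ w + b)
gap = s_va[ava == 1].mean() - s_va[ava == 0].mean()
d_s = (s_va * (1 - s_va))[:, None] * Xva
g_fair = 2 * gap * (d_s[ava == 1].mean(axis=0) - d_s[ava == 0].mean(axis=0))

# First-order influence of upweighting each training sample on that loss
# (the Hessian inverse in the full influence function is dropped here).
s_tr = sigmoid(Xtr @ w + b)
g_train = (s_tr - ytr)[:, None] * Xtr
influence = -(g_train @ g_fair)                  # predicted change in fairness loss

# Downweight samples predicted to increase the gap, then retrain.
weights = np.clip(1.0 - 5.0 * influence, 0.0, None)
fair_clf = LogisticRegression().fit(Xtr, ytr, sample_weight=weights)
s2 = sigmoid(Xva @ fair_clf.coef_[0] + fair_clf.intercept_[0])
gap2 = s2[ava == 1].mean() - s2[ava == 0].mean()
print(f"validation score gap: {abs(gap):.3f} -> {abs(gap2):.3f}")
```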
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Reducing Unintended Bias of ML Models on Tabular and Textual Data [5.503546193689538]
We revisit the framework FixOut, which is inspired by the "fairness through unawareness" approach to building fairer models.
We introduce several improvements such as automating the choice of FixOut's parameters.
We present several experimental results showing that FixOut improves process fairness across different classification settings; a sketch of the core idea follows.
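The sketch below shows only the "fairness through unawareness" ensemble step, under assumed sensitive-feature indices: train one model per dropped sensitive feature and average their probabilities. FixOut's actual pipeline also uses an explainer to decide whether the model relies on those features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
sensitive = [3, 4]   # assumed indices of sensitive columns

# Train one model per sensitive feature, each with that feature removed,
# then average predicted probabilities (the ensemble step of the approach).
members = []
for j in sensitive:
    keep = [c for c in range(X.shape[1]) if c != j]
    members.append((keep, LogisticRegression().fit(X[:, keep], y)))

def ensemble_proba(Xnew):
    probs = [m.predict_proba(Xnew[:, keep])[:, 1] for keep, m in members]
    return np.mean(probs, axis=0)

print(ensemble_proba(X[:5]).round(3))
```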
arXiv Detail & Related papers (2021-08-05T14:55:56Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
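A toy sketch of the two ingredients named in the summary: a finite-difference (zeroth-order) gradient estimate over a learned input offset, and a multi-label mapping from the box's classes to the target labels. The black-box network, the mapping, and all hyperparameters are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(6)

# Query-only "black box": returns class probabilities, never gradients.
W = rng.normal(size=(8, 3))
def black_box(X):
    z = np.tanh(X) @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy target task: binary labels for 8-dimensional inputs.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)

# Multi-label mapping: box classes {0, 1} -> target 0, {2} -> target 1.
def mapped_loss(theta):
    p = black_box(X + theta)                     # reprogrammed queries
    p_t = np.stack([(p[:, 0] + p[:, 1]) / 2, p[:, 2]], axis=1)
    p_t /= p_t.sum(axis=1, keepdims=True)
    return -np.log(p_t[np.arange(len(y)), y] + 1e-9).mean()

theta, mu, lr, q = np.zeros(8), 0.05, 0.5, 10
for _ in range(300):
    grad = np.zeros(8)
    for _ in range(q):                           # two-sided zeroth-order estimate
        u = rng.normal(size=8)
        grad += (mapped_loss(theta + mu * u) - mapped_loss(theta - mu * u)) / (2 * mu) * u
    theta -= lr * grad / q

p = black_box(X + theta)
pred = (p[:, 2] > (p[:, 0] + p[:, 1]) / 2).astype(int)
print("reprogrammed accuracy (toy):", (pred == y).mean())
```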
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
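A minimal stand-in for the idea, assuming the sensitive set is generated by resampling one assumed sensitive coordinate: penalize how much the model's outputs shift under that perturbation. The paper's actual regularizer is transport-based; this squared-shift penalty, the network, and the penalty weight are simplifying assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(1)
n, d = 1000, 4
X = torch.randn(n, d)
y = ((X[:, 0] + X[:, 1]) > 0).float()
SENSITIVE = 2                       # assumed sensitive coordinate

model = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for _ in range(300):
    logits = model(X).squeeze(1)
    # Invariance term: resample the sensitive coordinate and penalize the
    # resulting output shift (a stand-in for the transport-based regularizer).
    X_alt = X.clone()
    X_alt[:, SENSITIVE] = torch.randn(n)
    shift = (model(X_alt).squeeze(1) - logits).pow(2).mean()
    loss = bce(logits, y) + 5.0 * shift
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"output shift under sensitive perturbation: {shift.item():.5f}")
```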
arXiv Detail & Related papers (2020-06-25T04:31:57Z) - Fair Bayesian Optimization [25.80374249896801]
We introduce a general constrained Bayesian optimization (BO) framework to optimize the performance of any machine learning (ML) model while enforcing fairness constraints.
We apply BO with fairness constraints to a range of popular models, including random forests, boosting, and neural networks.
We show that our approach is competitive with specialized techniques that enforce model-specific fairness constraints.
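A hedged sketch using scikit-optimize's gp_minimize as the BO engine: tune a regularization strength and decision threshold, folding the fairness constraint into a penalty term. The paper's framework handles the constraint explicitly inside constrained BO; the dataset, constraint level, and penalty weight here are assumptions.

```python
import numpy as np
from skopt import gp_minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(1500, 4))
a = rng.integers(0, 2, 1500)
X[:, 0] += a                                    # feature correlated with group
y = (X[:, 0] + X[:, 1] > 0.5).astype(int)
Xtr, Xte, ytr, yte, ate = X[:1000], X[1000:], y[:1000], y[1000:], a[1000:]

def objective(params):
    C, thr = params
    clf = LogisticRegression(C=C, max_iter=1000).fit(Xtr, ytr)
    pred = (clf.predict_proba(Xte)[:, 1] > thr).astype(int)
    acc = (pred == yte).mean()
    gap = abs(pred[ate == 1].mean() - pred[ate == 0].mean())
    # Fold the fairness constraint (gap <= 0.1) into a penalty; the paper's
    # framework instead handles the constraint inside the BO acquisition.
    return float(-acc + 10.0 * max(0.0, gap - 0.1))

res = gp_minimize(objective, [(1e-3, 10.0), (0.2, 0.8)], n_calls=25, random_state=0)
print("best (C, threshold):", res.x, "objective:", res.fun)
```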
arXiv Detail & Related papers (2020-06-09T08:31:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.