Convex Fairness Constrained Model Using Causal Effect Estimators
- URL: http://arxiv.org/abs/2002.06501v1
- Date: Sun, 16 Feb 2020 03:40:04 GMT
- Title: Convex Fairness Constrained Model Using Causal Effect Estimators
- Authors: Hikaru Ogura and Akiko Takeda
- Abstract summary: We devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias.
We provide an efficient algorithm for solving FairCEEs in regression and binary classification tasks.
- Score: 6.414055487487486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen much research on fairness in machine learning. Here,
mean difference (MD) or demographic parity is one of the most popular measures
of fairness. However, MD quantifies not only discrimination but also
explanatory bias, which is the difference in outcomes justified by explanatory
features. In this paper, we devise novel models, called FairCEEs, which remove
discrimination while keeping explanatory bias. The models are based on
estimators of causal effect utilizing propensity score analysis. We prove that
FairCEEs with the squared loss theoretically outperform a naive MD constraint
model. We provide an efficient algorithm for solving FairCEEs in regression and
binary classification tasks. In our experiment on synthetic and real-world data
in these two tasks, FairCEEs outperformed an existing model that considers
explanatory bias in specific cases.
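The abstract only sketches the core construction at a high level: replace the raw mean-difference (demographic-parity) constraint with a constraint on a propensity-score-based causal-effect estimator, so that outcome differences explained by legitimate features are not penalized while the optimization problem stays convex. The snippet below is a minimal, illustrative sketch of that idea using an IPW-style estimator (one standard propensity-score estimator); it is not the authors' implementation, it assumes a linear regression model with cvxpy and scikit-learn as tooling, and all variable names (X, s, y, tau, etc.) are assumptions made for the example.
```python
# Minimal sketch (NOT the authors' code): least-squares regression with a
# bound on an IPW (propensity-score) estimate of the sensitive attribute's
# effect on the predictions, instead of a raw mean-difference constraint.
import numpy as np
import cvxpy as cp
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))                           # explanatory features
s = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))       # sensitive attribute, correlated with X
y = X @ rng.normal(size=d) + 0.5 * s + rng.normal(scale=0.1, size=n)

# Propensity scores e(x) = P(s = 1 | x), estimated from the explanatory features.
e = LogisticRegression().fit(X, s).predict_proba(X)[:, 1]

# IPW weights: (1/n) * w @ predictions is a plug-in estimate of the causal
# effect of s on the model output, filtering out explanatory bias.
w = s / e - (1 - s) / (1 - e)

# Convex least-squares fit with |IPW effect| <= tau (tau is an assumed tolerance).
Z = np.column_stack([X, s])
beta = cp.Variable(d + 1)
pred = Z @ beta
tau = 0.05
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - pred) / n),
                     [cp.abs(w @ pred) / n <= tau])
problem.solve()

print("raw mean difference :", pred.value[s == 1].mean() - pred.value[s == 0].mean())
print("IPW effect estimate :", float(w @ pred.value) / n)
```
Because the IPW estimate `(w @ pred) / n` is linear in the regression coefficients, the constraint is convex (linear up to the absolute value), which is what keeps the constrained problem tractable; the same pattern extends to logistic-type losses for binary classification.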
Related papers
- FairGridSearch: A Framework to Compare Fairness-Enhancing Models [0.0]
This paper focuses on binary classification and proposes FairGridSearch, a novel framework for comparing fairness-enhancing models.
The study applies FairGridSearch to three popular datasets (Adult, COMPAS, and German Credit) and analyzes the impacts of metric selection, base estimator choice, and classification threshold on model fairness.
arXiv Detail & Related papers (2024-01-04T10:29:02Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without a given causal model by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Achieving Counterfactual Fairness with Imperfect Structural Causal Model [11.108866104714627]
We propose a novel minimax game-theoretic model for counterfactual fairness.
We also theoretically prove the error bound of the proposed minimax model.
Empirical experiments on multiple real-world datasets illustrate our superior performance in both accuracy and fairness.
arXiv Detail & Related papers (2023-03-26T09:37:29Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed with influence functions using a validation set with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness [7.673007415383724]
We have created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks.
We have applied 7 mitigation techniques on these models and analyzed the fairness, mitigation results, and impacts on performance.
arXiv Detail & Related papers (2020-05-21T23:35:53Z)
- Counterfactual fairness: removing direct effects through regularization [0.0]
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition; a generic sketch of regularization-based fairness appears below.
Our results were found to mitigate unfairness from the predictions with small reductions in model performance.
arXiv Detail & Related papers (2020-02-25T10:13:55Z)
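Several of the entries above, most directly the last one, frame fairness as a regularization term added to a standard loss. The sketch below is a generic illustration of that pattern only: a squared mean-difference penalty on model scores inside a hand-rolled logistic regression. It is not the Controlled Direct Effect regularizer, FairCEEs, or any specific paper's method, and the function name, penalty form, and hyperparameters are assumptions.
```python
# Generic sketch (not any specific paper's method): logistic regression fit by
# gradient descent with a squared mean-difference penalty on the model scores.
import numpy as np

def fit_fair_logreg(X, y, s, lam=1.0, lr=0.1, iters=2000):
    """Fit w for P(y=1|x) = sigmoid(x @ w) with penalty lam * MD(scores)**2."""
    n, d = X.shape
    w = np.zeros(d)
    m1, m0 = (s == 1), (s == 0)
    for _ in range(iters):
        z = X @ w
        p = 1.0 / (1.0 + np.exp(-z))
        grad_loss = X.T @ (p - y) / n                       # logistic-loss gradient
        md = z[m1].mean() - z[m0].mean()                    # mean difference of scores
        grad_md = X[m1].mean(axis=0) - X[m0].mean(axis=0)   # d(md)/dw
        w -= lr * (grad_loss + lam * 2.0 * md * grad_md)    # gradient of loss + lam*md^2
    return w

# Hypothetical usage: w = fit_fair_logreg(X_train, y_train, s_train, lam=5.0)
```
Increasing `lam` trades predictive accuracy for a smaller score gap between groups, the kind of fairness-accuracy trade-off that metrics such as FATE (in the FairAdaBN entry above) aim to quantify.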
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.