Learning Individually Fair Classifier with Path-Specific Causal-Effect
Constraint
- URL: http://arxiv.org/abs/2002.06746v4
- Date: Fri, 26 Feb 2021 04:50:41 GMT
- Title: Learning Individually Fair Classifier with Path-Specific Causal-Effect
Constraint
- Authors: Yoichi Chikahara, Shinsaku Sakaue, Akinori Fujino, Hisashi Kashima
- Abstract summary: In this paper, we propose a framework for learning an individually fair classifier.
We define the probability of individual unfairness (PIU) and solve an optimization problem in which PIU's upper bound, which can be estimated from data, is controlled to be close to zero.
Experimental results show that our method can learn an individually fair classifier at a slight cost in accuracy.
- Score: 31.86959207229775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning is used to make decisions for individuals in various fields,
which require us to achieve good prediction accuracy while ensuring fairness
with respect to sensitive features (e.g., race and gender). This problem,
however, remains difficult in complex real-world scenarios. To quantify
unfairness under such situations, existing methods utilize path-specific
causal effects. However, none of them can ensure fairness for each individual
without making impractical functional assumptions on the data. In this paper,
we propose a far more practical framework for learning an individually fair
classifier. To avoid restrictive functional assumptions, we define the
probability of individual unfairness (PIU) and solve an optimization problem
where PIU's upper bound, which can be estimated from data, is controlled to be
close to zero. We elucidate why our method can guarantee fairness for each
individual. Experimental results show that our method can learn an individually
fair classifier at a slight cost in accuracy.
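To make the abstract's optimization concrete, here is a minimal sketch (not the authors' code) of training a linear classifier while penalizing a smooth surrogate for the PIU upper bound. It assumes each individual's path-specific counterfactual features X_cf are already available (in practice they would come from a fitted causal model); the surrogate, the synthetic data, and the penalty weight are all illustrative.

```python
# Minimal sketch: penalize a smooth stand-in for the PIU upper bound so that
# the classifier's scores agree across factual/counterfactual feature pairs.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
X_cf = X + rng.normal(scale=0.3, size=(n, d))   # stand-in counterfactuals
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, lam=5.0):
    p, p_cf = sigmoid(X @ w), sigmoid(X_cf @ w)
    eps = 1e-9
    log_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    piu_surrogate = np.mean(np.abs(p - p_cf))   # drives the PIU bound toward 0
    return log_loss + lam * piu_surrogate

w_fair = minimize(objective, np.zeros(d)).x
disagree = np.mean((sigmoid(X @ w_fair) > 0.5) != (sigmoid(X_cf @ w_fair) > 0.5))
print(f"empirical PIU estimate: {disagree:.3f}")
```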
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
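As a toy illustration of this setting (our construction, not the paper's algorithm), the sketch below represents a hypothesis as a graph of predicted positive edges and shows why recall is estimable from one-sided feedback while precision is not; the users, items, and edges are made up.

```python
# Toy sketch: a hypothesis as a set of predicted positive (user, item) edges,
# scored against one-sided feedback where only positives are observed.

# hypothesis: edges the model predicts as positive interactions
hypothesis = {("u1", "a"), ("u1", "b"), ("u2", "a"), ("u3", "c")}
# one-sided feedback: positives we happened to observe; negatives are unseen
observed_positives = {("u1", "a"), ("u2", "a"), ("u2", "b")}

# recall is estimable from positives alone: coverage of observed positives
recall = len(hypothesis & observed_positives) / len(observed_positives)

# precision is NOT directly estimable: predicted edges outside the observed
# positives may be unobserved positives or true negatives; the paper's PAC
# framework targets exactly this gap.
unverifiable = hypothesis - observed_positives
print(f"recall = {recall:.2f}; {len(unverifiable)} predictions unverifiable")
```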
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
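A rough sketch of the distribution-estimation idea in this summary: fit a simple per-class distribution of features and sample virtual features for tail classes. The Gaussian model below is our simplifying assumption, not necessarily the paper's distributional choice.

```python
# Sketch: estimate per-class feature statistics, then draw synthetic features
# for minority classes so contrastive pairs are not dominated by the head.
import numpy as np

rng = np.random.default_rng(1)
feats = rng.normal(size=(500, 16))                      # embeddings
labels = rng.choice(3, size=500, p=[0.8, 0.15, 0.05])   # long-tailed labels

class_stats = {}
for c in np.unique(labels):
    fc = feats[labels == c]
    class_stats[c] = (fc.mean(axis=0), fc.std(axis=0) + 1e-6)

def sample_virtual(c, k):
    """Draw k synthetic features for class c from its estimated distribution."""
    mu, sigma = class_stats[c]
    return rng.normal(loc=mu, scale=sigma, size=(k, mu.shape[0]))

virtual_tail = sample_virtual(2, k=100)   # oversample the rarest class
print(virtual_tail.shape)                 # (100, 16)
```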
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Achievable Fairness on Your Data With Utility Guarantees [16.78730663293352]
In machine learning fairness, training models that minimize disparity across different sensitive groups often leads to diminished accuracy.
We present a computationally efficient approach to approximate the fairness-accuracy trade-off curve tailored to individual datasets.
We introduce a novel methodology for quantifying uncertainty in our estimates, thereby providing practitioners with a robust framework for auditing model fairness.
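One hedged way to picture a dataset-specific fairness-accuracy curve: sweep a mitigation strength, retrain, and record (accuracy, parity gap) pairs. The synthetic data and the group-reweighting scheme below are illustrative stand-ins for the paper's estimator.

```python
# Sketch: trace an empirical fairness-accuracy trade-off by sweeping a
# reweighting strength and recording accuracy and demographic-parity gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, size=n)                   # sensitive attribute
X = rng.normal(size=(n, 4)) + group[:, None] * 0.8   # features leak the group
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.4).astype(int)

curve = []
for strength in np.linspace(0.0, 1.0, 6):
    # upweight the disadvantaged group (a simple illustrative mitigation)
    w = 1.0 + strength * (group == 0) * 2.0
    clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)
    pred = clf.predict(X)
    acc = (pred == y).mean()
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    curve.append((strength, acc, gap))

for s, acc, gap in curve:
    print(f"strength={s:.1f}  accuracy={acc:.3f}  parity gap={gap:.3f}")
```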
arXiv Detail & Related papers (2024-02-27T00:59:32Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
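One simplified reading of "influence-guided sampling without group annotations" is sketched below: score each unlabeled candidate by how well its pseudo-labeled gradient aligns with a validation-loss gradient. The linear model and pseudo-labeling are our assumptions, not the paper's exact algorithm.

```python
# Sketch: influence-style acquisition. Adding a point whose gradient aligns
# with the validation-loss gradient should reduce validation loss the most.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X_tr, y_tr = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
X_val, y_val = rng.normal(size=(50, 5)), rng.integers(0, 2, 50)
X_pool = rng.normal(size=(300, 5))     # unlabeled pool, no group labels used

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
w = clf.coef_.ravel()

def grad(X, y):
    """Per-example logistic-loss gradients w.r.t. w: (p - y) * x."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return (p - y)[:, None] * X

g_val = grad(X_val, y_val).mean(axis=0)   # validation-loss gradient
y_pseudo = clf.predict(X_pool)            # pseudo-labels for the pool
scores = grad(X_pool, y_pseudo) @ g_val   # influence-style alignment
acquire = np.argsort(-scores)[:10]        # largest alignment first
print("acquired indices:", acquire)
```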
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Causal Fair Machine Learning via Rank-Preserving Interventional Distributions [0.5062312533373299]
We define individuals as being normatively equal if they are equal in a fictitious, normatively desired (FiND) world.
We propose rank-preserving interventional distributions to define a specific FiND world in which this holds.
We show that our warping approach effectively identifies the most discriminated individuals and mitigates unfairness.
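The rank-preserving idea can be illustrated with quantile warping: each individual keeps their rank within their group, but values are mapped onto a pooled target distribution standing in for the FiND world. This is our illustration, not the authors' code.

```python
# Sketch: rank-preserving warping of one feature toward a shared target
# distribution, so group-level gaps vanish while within-group order survives.
import numpy as np

rng = np.random.default_rng(4)
x_a = rng.normal(loc=0.0, size=500)   # feature in group A
x_b = rng.normal(loc=1.0, size=500)   # same feature, shifted, in group B
pooled = np.sort(np.concatenate([x_a, x_b]))   # assumed FiND-world target

def warp(x, target):
    """Map values of x onto quantiles of target, preserving ranks in x."""
    ranks = np.argsort(np.argsort(x))    # 0..len(x)-1 in rank order
    q = (ranks + 0.5) / len(x)           # mid-quantiles
    return np.quantile(target, q)

x_a_w, x_b_w = warp(x_a, pooled), warp(x_b, pooled)
print(f"group means before: {x_a.mean():.2f} vs {x_b.mean():.2f}")
print(f"group means after:  {x_a_w.mean():.2f} vs {x_b_w.mean():.2f}")
```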
arXiv Detail & Related papers (2023-07-24T13:46:50Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
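For flavor, the sketch below computes a plain HSIC statistic between predictions and a sensitive attribute; FairCOCCO's normalized cross-covariance operator differs in its normalization, so treat this only as the general kernel-dependence approach.

```python
# Sketch: HSIC-style kernel dependence between model scores and a sensitive
# attribute; values near 0 indicate (approximate) independence.
import numpy as np

def rbf_gram(x, gamma=1.0):
    """RBF kernel Gram matrix for a 1-D sample."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d2)

def hsic(x, z):
    """Empirical HSIC between two 1-D samples."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    K, L = rbf_gram(x), rbf_gram(z)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(5)
s = rng.integers(0, 2, 400).astype(float)      # sensitive attribute
fair_pred = rng.normal(size=400)               # independent of s
unfair_pred = s + 0.3 * rng.normal(size=400)   # leaks s
print(f"HSIC(fair):   {hsic(fair_pred, s):.4f}")
print(f"HSIC(unfair): {hsic(unfair_pred, s):.4f}")
```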
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
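The BALD score mentioned here is the mutual information between a point's label and the model parameters, estimated from Monte Carlo predictive samples (e.g., MC dropout). In the sketch below, random "posterior samples" stand in for a real model's stochastic forward passes.

```python
# Sketch: BALD acquisition for binary classification.
# BALD = H[mean prediction] - mean H[per-sample prediction] (a MI estimate).
import numpy as np

rng = np.random.default_rng(6)
T, n_pool = 30, 5                      # MC samples, pool size
# probs[t, i] = P(y=1 | x_i) under posterior sample t (stand-in values)
probs = rng.beta(2, 2, size=(T, n_pool))

def entropy(p):
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

mean_p = probs.mean(axis=0)
bald = entropy(mean_p) - entropy(probs).mean(axis=0)
print("BALD scores:", np.round(bald, 4))
print("query point:", int(np.argmax(bald)))   # most informative candidate
```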
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
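A toy version of fairness-constrained semi-supervised learning as a single optimization problem, in the spirit of this summary (our simplification, not the paper's formulation): supervised loss plus a pseudo-label loss on unlabeled data plus a demographic-parity penalty.

```python
# Sketch: jointly minimize supervised loss, a pseudo-label loss on unlabeled
# data, and a parity penalty over all predictions, for a linear model.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
X_l, s_l = rng.normal(size=(60, 3)), rng.integers(0, 2, 60)
y_l = (X_l[:, 0] > 0).astype(float)
X_u, s_u = rng.normal(size=(300, 3)), rng.integers(0, 2, 300)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, lam_u=0.3, lam_f=2.0):
    eps = 1e-9
    p_l = sigmoid(X_l @ w)
    sup = -np.mean(y_l * np.log(p_l + eps) + (1 - y_l) * np.log(1 - p_l + eps))
    p_u = sigmoid(X_u @ w)
    y_pseudo = (p_u > 0.5).astype(float)   # hard pseudo-labels
    unsup = -np.mean(y_pseudo * np.log(p_u + eps)
                     + (1 - y_pseudo) * np.log(1 - p_u + eps))
    p_all, s_all = np.concatenate([p_l, p_u]), np.concatenate([s_l, s_u])
    fair = abs(p_all[s_all == 0].mean() - p_all[s_all == 1].mean())
    return sup + lam_u * unsup + lam_f * fair

w = minimize(objective, np.zeros(3), method="Nelder-Mead").x
print("learned weights:", np.round(w, 3))
```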
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
- Accuracy and Fairness Trade-offs in Machine Learning: A Stochastic Multi-Objective Approach [0.0]
In the application of machine learning to real-life decision-making systems, the prediction outcomes might discriminate against people with sensitive attributes, leading to unfairness.
The commonly used strategy in fair machine learning is to include fairness as a constraint or a penalization term in the minimization of the prediction loss.
In this paper, we introduce a new approach to handle fairness by formulating a multi-objective optimization problem.
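To see why the multi-objective view matters, the sketch below sweeps weights in a scalarized bi-objective (accuracy loss vs. fairness loss) and keeps the nondominated points; the paper's stochastic multi-objective method is more sophisticated, and everything here is illustrative.

```python
# Sketch: approximate a Pareto front over (accuracy loss, fairness loss) by
# sweeping the scalarization weight and filtering dominated solutions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
X, s = rng.normal(size=(400, 3)), rng.integers(0, 2, 400)
X[:, 0] += 0.7 * s                                 # feature leaks s
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0.3).astype(float)

def losses(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-9
    acc_loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    fair_loss = abs(p[s == 0].mean() - p[s == 1].mean())
    return acc_loss, fair_loss

points = []
for alpha in np.linspace(0.0, 1.0, 8):
    # 10.0 is an arbitrary scale to put both objectives on comparable footing
    f = lambda w: (1 - alpha) * losses(w)[0] + alpha * 10.0 * losses(w)[1]
    w = minimize(f, np.zeros(3)).x
    points.append(losses(w))

# keep nondominated points: no other point is at least as good on both
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print("Pareto front (acc_loss, fair_loss):",
      [(round(a, 3), round(b, 3)) for a, b in pareto])
```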
arXiv Detail & Related papers (2020-08-03T18:51:24Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
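One concrete rule consistent with this summary (our illustration, not necessarily the paper's exact criterion): given an assumed causal graph, drop every feature that is a descendant of the sensitive attribute, since such features may carry its influence. The graph below is made up.

```python
# Sketch: causal feature selection via descendant pruning in an assumed DAG.
dag = {
    "race": ["zip_code"],             # sensitive attribute
    "zip_code": ["credit_score"],
    "income": ["credit_score"],
    "credit_score": [],
}

def descendants(graph, node):
    """All nodes reachable from `node` via directed edges (iterative DFS)."""
    seen, stack = set(), list(graph.get(node, []))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

features = ["zip_code", "income", "credit_score"]
tainted = descendants(dag, "race")
fair_features = [f for f in features if f not in tainted]
print("kept features:", fair_features)   # ['income']
```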
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.