Fairness-Aware Naive Bayes Classifier for Data with Multiple Sensitive
Features
- URL: http://arxiv.org/abs/2202.11499v1
- Date: Wed, 23 Feb 2022 13:32:21 GMT
- Title: Fairness-Aware Naive Bayes Classifier for Data with Multiple Sensitive
Features
- Authors: Stelios Boulitsakis-Logothetis
- Abstract summary: We generalise two-naive-Bayes (2NB) into N-naive-Bayes (NNB) to eliminate the simplification of assuming only two sensitive groups in the data.
We investigate its application on data with multiple sensitive features and propose a new constraint and post-processing routine to enforce differential fairness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness-aware machine learning seeks to maximise utility in generating
predictions while avoiding unfair discrimination based on sensitive attributes
such as race, sex, religion, etc. An important line of work in this field is
enforcing fairness during the training step of a classifier. A simple yet
effective binary classification algorithm that follows this strategy is
two-naive-Bayes (2NB), which enforces statistical parity - requiring that the
groups comprising the dataset receive positive labels with the same likelihood.
In this paper, we generalise this algorithm into N-naive-Bayes (NNB) to
eliminate the simplification of assuming only two sensitive groups in the data
and instead apply it to an arbitrary number of groups.
We propose an extension of the original algorithm's statistical parity
constraint and the post-processing routine that enforces statistical
independence of the label and the single sensitive attribute. Then, we
investigate its application on data with multiple sensitive features and
propose a new constraint and post-processing routine to enforce differential
fairness, an extension of established group-fairness constraints focused on
intersectionalities. We empirically demonstrate the effectiveness of the NNB
algorithm on US Census datasets and compare its accuracy and debiasing
performance, as measured by disparate impact and DF-$\epsilon$ score, with
similar group-fairness algorithms. Finally, we lay out important considerations
users should be aware of before incorporating this algorithm into their
application, and direct them to further reading on the pros, cons, and ethical
implications of using statistical parity as a fairness criterion.
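To make the parity mechanism concrete, the sketch below fits one naive Bayes model per sensitive group (for an arbitrary number of groups) and then post-processes each group's decision threshold so that all groups receive positive labels at roughly the same rate, before reporting multi-group disparate impact and a DF-$\epsilon$-style score. This is a minimal illustration under stated assumptions -- scikit-learn's GaussianNB as the base model, a simple quantile-based threshold shift as the parity step, and synthetic data -- not the paper's exact NNB training or post-processing routine.
```python
# Minimal sketch: per-group naive Bayes + statistical-parity post-processing.
# Assumptions (not taken from the paper): GaussianNB base models, a quantile-based
# threshold shift, and synthetic data; every group must contain both labels.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fit_per_group(X, y, s):
    """Fit one naive Bayes model per sensitive group (arbitrary number of groups)."""
    return {g: GaussianNB().fit(X[s == g], y[s == g]) for g in np.unique(s)}

def parity_predict(models, X, s, target_rate):
    """Shift each group's score threshold so every group's positive-prediction
    rate is approximately target_rate (statistical parity)."""
    y_hat = np.empty(len(X), dtype=int)
    for g, model in models.items():
        idx = (s == g)
        scores = model.predict_proba(X[idx])[:, 1]
        # The (1 - target_rate) quantile labels roughly target_rate of the group positive.
        thr = np.quantile(scores, 1.0 - target_rate)
        y_hat[idx] = (scores >= thr).astype(int)
    return y_hat

def disparate_impact(y_hat, s):
    """Multi-group disparate impact: min over groups of P(yhat=1|g) divided by the max."""
    rates = np.array([y_hat[s == g].mean() for g in np.unique(s)])
    return float(rates.min() / rates.max())

def df_epsilon(y_hat, s, eps=1e-12):
    """DF-epsilon-style score: largest |log P(yhat=y|g_i) - log P(yhat=y|g_j)|
    over all group pairs and both outcomes y in {0, 1} (empirical, unsmoothed)."""
    pos = np.array([y_hat[s == g].mean() for g in np.unique(s)])
    gaps = []
    for rates in (pos, 1.0 - pos):
        logs = np.log(rates + eps)
        gaps.append(logs.max() - logs.min())
    return float(max(gaps))

# Toy usage with synthetic data and three sensitive groups (instead of 2NB's two).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))
s = rng.integers(0, 3, size=600)
y = (X[:, 0] + 0.5 * s + rng.normal(size=600) > 0.5).astype(int)

models = fit_per_group(X, y, s)
y_hat = parity_predict(models, X, s, target_rate=y.mean())
print("disparate impact:", disparate_impact(y_hat, s))
print("DF-epsilon:", df_epsilon(y_hat, s))
```
After the threshold shift, every group's positive rate is close to the overall base rate, so disparate impact approaches 1 and the DF-$\epsilon$-style score approaches 0; the trade-off against accuracy is exactly what the paper evaluates on the US Census datasets.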
Related papers
- Post-processing fairness with minimal changes [5.927938174149359]
We introduce a novel post-processing algorithm that is both model-agnostic and does not require the sensitive attribute at test time.
Our algorithm is explicitly designed to enforce minimal changes between biased and debiased predictions.
arXiv Detail & Related papers (2024-08-27T14:26:56Z)
- Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost" -- a linear combination of the (predicted) group memberships; an illustrative sketch of this re-calibration follows the related-papers list.
arXiv Detail & Related papers (2024-05-07T05:58:44Z)
- Differentially Private Fair Binary Classifications [1.8087157239832476]
We first propose an algorithm for learning a classifier with only fairness guarantee.
We then refine this algorithm to incorporate differential privacy.
Empirical evaluations conducted on the Adult and Credit Card datasets illustrate that our algorithm outperforms the state-of-the-art in terms of fairness guarantees.
arXiv Detail & Related papers (2024-02-23T20:52:59Z)
- Fairness Without Harm: An Influence-Guided Active Sampling Approach [32.173195437797766]
We aim to train models that mitigate group fairness disparity without causing harm to model accuracy.
Current data acquisition methods, such as fair active learning approaches, typically require annotating sensitive attributes.
We propose a tractable active data sampling algorithm that does not rely on training group annotations.
arXiv Detail & Related papers (2024-02-20T07:57:38Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Fast Fair Regression via Efficient Approximations of Mutual Information [0.0]
This paper introduces fast approximations of the independence, separation and sufficiency group fairness criteria for regression models.
It uses such approximations as regularisers to enforce fairness within a regularised risk minimisation framework.
Experiments on real-world datasets indicate that, despite its superior computational efficiency, our algorithm still displays state-of-the-art accuracy/fairness trade-offs.
arXiv Detail & Related papers (2020-02-14T08:50:51Z)
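As referenced in the "Optimal Group Fair Classifiers from Linear Post-Processing" entry above, the fragment below sketches the general recipe described there: a base model's score is re-calibrated by subtracting a fairness cost that is a linear combination of predicted group memberships, then thresholded. The coefficients and membership probabilities here are hand-picked placeholders for illustration, not values produced by that paper's fitting procedure.
```python
# Generic sketch of score re-calibration with a linear "fairness cost".
# The coefficients lam and the group-membership probabilities are placeholders,
# not outputs of the cited paper's algorithm.
import numpy as np

def recalibrated_predict(base_scores, group_probs, lam, threshold=0.5):
    """base_scores: (n,) scores from any base model.
    group_probs:   (n, k) predicted membership probabilities for k sensitive groups.
    lam:           (k,) coefficients of the linear fairness cost."""
    fairness_cost = group_probs @ np.asarray(lam, dtype=float)  # linear combination
    return (base_scores - fairness_cost >= threshold).astype(int)

# Toy usage: two groups, coefficients chosen by hand purely for illustration.
rng = np.random.default_rng(1)
scores = rng.uniform(size=8)
probs = rng.dirichlet(np.ones(2), size=8)  # predicted group memberships
print(recalibrated_predict(scores, probs, lam=[0.1, -0.1]))
```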