Learning Optimal Fair Classification Trees: Trade-offs Between
Interpretability, Fairness, and Accuracy
- URL: http://arxiv.org/abs/2201.09932v5
- Date: Tue, 25 Jul 2023 14:41:05 GMT
- Title: Learning Optimal Fair Classification Trees: Trade-offs Between
Interpretability, Fairness, and Accuracy
- Authors: Nathanael Jo, Sina Aghaei, Andrés Gómez, Phebe Vayanos
- Abstract summary: We propose a mixed integer optimization framework for learning optimal classification trees.
We benchmark our method against state-of-the-art approaches for fair classification on popular datasets.
Our method consistently finds decisions with almost full parity, while other methods rarely do.
- Score: 7.215903549622416
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing use of machine learning in high-stakes domains -- where
people's livelihoods are impacted -- creates an urgent need for interpretable,
fair, and highly accurate algorithms. With these needs in mind, we propose a
mixed integer optimization (MIO) framework for learning optimal classification
trees -- one of the most interpretable models -- that can be augmented with
arbitrary fairness constraints. In order to better quantify the "price of
interpretability", we also propose a new measure of model interpretability
called decision complexity that allows for comparisons across different classes
of machine learning models. We benchmark our method against state-of-the-art
approaches for fair classification on popular datasets; in doing so, we conduct
one of the first comprehensive analyses of the trade-offs between
interpretability, fairness, and predictive accuracy. Given a fixed disparity
threshold, our method has a price of interpretability of about 4.2 percentage
points in terms of out-of-sample accuracy compared to the best performing,
complex models. However, our method consistently finds decisions with almost
full parity, while other methods rarely do.
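To make the two headline quantities concrete, here is a minimal Python sketch, not the authors' MIO code: it measures statistical parity disparity (the largest gap in positive-prediction rates across protected groups) for a fitted classifier, and counts a tree's branching nodes as a rough stand-in for the paper's decision-complexity measure. The synthetic data, the group variable, and the node-count proxy are all assumptions for illustration.
```python
# Illustrative sketch only: statistical (demographic) parity disparity and a
# simple tree-size proxy for the paper's "decision complexity" measure.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def statistical_parity_disparity(y_pred, group):
    """Max gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
group = (X[:, 0] > 0).astype(int)  # hypothetical binary protected attribute
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
y_hat = tree.predict(X_te)

disparity = statistical_parity_disparity(y_hat, g_te)
# Branching (internal) nodes as a crude decision-complexity proxy.
n_internal = (tree.tree_.children_left != -1).sum()

print(f"out-of-sample accuracy: {tree.score(X_te, y_te):.3f}")
print(f"statistical parity disparity: {disparity:.3f}")
print(f"decision complexity proxy (branching nodes): {n_internal}")
```
In the paper's framework, the disparity would instead be imposed as a hard constraint inside the MIO, with the disparity threshold fixed by the modeler.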
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes both accuracy and fairness objectives.
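As a rough illustration of the bilevel idea, not the FairBiNN architecture itself, the sketch below alternates Stackelberg-style updates: a leader parameter block takes gradient steps on the logistic loss, and a follower block then responds by shrinking a demographic-parity gap. The data, the per-group offsets, and the step sizes are all illustrative assumptions.
```python
# Hypothetical Stackelberg-style alternation: leader minimizes classification
# loss; follower best-responds by reducing a demographic-parity penalty.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
g = (X[:, 0] > 0).astype(int)            # synthetic protected group
y = (X @ rng.normal(size=d) + 0.5 * g > 0).astype(float)

w = np.zeros(d)      # leader: accuracy parameters
b = np.zeros(2)      # follower: per-group offsets targeting fairness
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    p = sigmoid(X @ w + b[g])
    w -= 0.1 * X.T @ (p - y) / n         # leader: logistic-loss step
    p = sigmoid(X @ w + b[g])            # follower observes leader's move
    gap = p[g == 1].mean() - p[g == 0].mean()
    b[1] -= 0.5 * gap                    # follower: close the parity gap
    b[0] += 0.5 * gap

p = sigmoid(X @ w + b[g])
print("parity gap:", abs(p[g == 1].mean() - p[g == 0].mean()))
```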
arXiv Detail & Related papers (2024-10-21T18:53:39Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms that retain strong predictive performance while generalizing better to unseen data are needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
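The entry's adaptive priority scheme is not reproduced here; as a simpler, well-known stand-in, the sketch below applies classic frequency-based reweighing (in the spirit of Kamiran and Calders), weighting each (group, label) cell so that group membership and label become independent in the weighted training set.
```python
# Classic frequency-based reweighing: weight each (group, label) cell so that
# group and label are independent under the weighted empirical distribution.
import numpy as np

def reweigh(group, y):
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for c in np.unique(y):
            mask = (group == g) & (y == c)
            expected = (group == g).mean() * (y == c).mean()
            w[mask] = expected / mask.mean()
    return w

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y = (rng.random(1000) < 0.3 + 0.3 * group).astype(int)  # biased labels
w = reweigh(group, y)
for g in (0, 1):
    m = group == g
    # Weighted positive rates are now equal across groups.
    print(g, np.average(y[m], weights=w[m]))
```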
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
arXiv Detail & Related papers (2022-11-03T00:04:04Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
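HyperImpute ships its own implementation; to see the basic iterative, column-wise idea with an off-the-shelf tool, the sketch below uses scikit-learn's IterativeImputer (an analogue chosen here for illustration, not the paper's code), which fits a regressor per column in round-robin fashion.
```python
# Column-wise iterative imputation with scikit-learn (analogue of the idea,
# not the HyperImpute package itself).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = 2 * X[:, 0] + rng.normal(scale=0.1, size=200)  # correlated column
mask = rng.random(X.shape) < 0.2
X_missing = X.copy()
X_missing[mask] = np.nan

imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X_missing)
print("RMSE on imputed cells:",
      np.sqrt(np.mean((X_filled[mask] - X[mask]) ** 2)))
```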
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show superior empirical accuracy vs. fairness trade-offs over prior work.
arXiv Detail & Related papers (2022-04-15T22:11:25Z)
- Group-Aware Threshold Adaptation for Fair Classification [9.496524884855557]
We introduce a novel post-processing method to optimize over multiple fairness constraints.
Our method provably achieves a tighter near-optimality upper bound than existing methods under the same conditions.
Experimental results demonstrate that our method outperforms state-of-the-art methods.
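As a minimal illustration of group-aware post-processing, not the paper's algorithm or its optimality guarantees, the sketch below picks a separate score threshold per group so that every group's positive-prediction rate hits a common target.
```python
# Toy group-aware thresholding: choose a per-group score threshold so that each
# group's positive-prediction rate matches a common target (demographic parity).
import numpy as np

def group_thresholds(scores, group, target_rate):
    thresholds = {}
    for g in np.unique(group):
        s = scores[group == g]
        # The (1 - target_rate) quantile flags the top target_rate fraction.
        thresholds[g] = np.quantile(s, 1.0 - target_rate)
    return thresholds

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
scores = rng.beta(2 + group, 2, size=1000)  # group 1 gets higher scores

th = group_thresholds(scores, group, target_rate=0.3)
th_vec = np.array([th[0], th[1]])
y_hat = scores >= th_vec[group]
for g in (0, 1):
    print(g, y_hat[group == g].mean())  # both close to 0.3
```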
arXiv Detail & Related papers (2021-11-08T04:36:37Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because it is inherently a one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
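A crude empirical version of this idea, offered as an illustration rather than the paper's tractable algorithms: fit a family of models, keep those within a small accuracy tolerance of the best, and report the range of parity gaps attained across that "good" set. The one-percentage-point tolerance and the model family are assumptions.
```python
# Empirical sketch: range of disparities attainable across near-optimal models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
group = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)

results = []
for depth in range(2, 8):
    for leaf in (1, 5, 20, 50):
        clf = DecisionTreeClassifier(max_depth=depth, min_samples_leaf=leaf,
                                     random_state=0).fit(X_tr, y_tr)
        y_hat = clf.predict(X_te)
        gap = abs(y_hat[g_te == 1].mean() - y_hat[g_te == 0].mean())
        results.append((clf.score(X_te, y_te), gap))

best_acc = max(acc for acc, _ in results)
good = [gap for acc, gap in results if acc >= best_acc - 0.01]  # 1% tolerance
print(f"parity gaps over the good set: {min(good):.3f} to {max(good):.3f}")
```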
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
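To make "fairness relaxation as a regularization term" concrete, here is a hedged numpy sketch (a generic surrogate, not the paper's provably tighter relaxation): a logistic model trained on cross-entropy plus lambda times the squared gap in mean predicted probabilities between groups, a quantity that is differentiable in the weights.
```python
# Training with a differentiable demographic-parity surrogate as a regularizer:
# loss = cross-entropy + lam * (mean_p_group1 - mean_p_group0)^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
g = X[:, 0] > 0
y = (X @ rng.normal(size=d) > 0).astype(float)

w, lam, lr = np.zeros(d), 5.0, 0.1
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    gap = p[g].mean() - p[~g].mean()
    # d(gap)/dw via the chain rule: dp_i/dw = p_i (1 - p_i) x_i.
    dgap = (X[g] * (p[g] * (1 - p[g]))[:, None]).mean(axis=0) \
         - (X[~g] * (p[~g] * (1 - p[~g]))[:, None]).mean(axis=0)
    grad = X.T @ (p - y) / n + lam * 2 * gap * dgap
    w -= lr * grad

p = 1.0 / (1.0 + np.exp(-(X @ w)))
print("parity gap:", abs(p[g].mean() - p[~g].mean()))
```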
arXiv Detail & Related papers (2020-09-09T17:40:24Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)