Fair Training of Decision Tree Classifiers
- URL: http://arxiv.org/abs/2101.00909v1
- Date: Mon, 4 Jan 2021 12:04:22 GMT
- Title: Fair Training of Decision Tree Classifiers
- Authors: Francesco Ranzato, Caterina Urban, Marco Zanella
- Abstract summary: We study the problem of formally verifying individual fairness of decision tree ensembles.
In our approach, fairness verification and fairness-aware training both rely on a notion of stability of a classification model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of formally verifying individual fairness of decision
tree ensembles, as well as training tree models which maximize both accuracy
and individual fairness. In our approach, fairness verification and
fairness-aware training both rely on a notion of stability of a classification
model, which is a variant of standard robustness under input perturbations used
in adversarial machine learning. Our verification and training methods leverage
abstract interpretation, a well established technique for static program
analysis which is able to automatically infer assertions about stability
properties of decision trees. By relying on a tool for adversarial training of
decision trees, our fairness-aware learning method has been implemented and
experimentally evaluated on the reference datasets used to assess fairness
properties. The experimental results show that our approach is able to train
tree models exhibiting a high degree of individual fairness w.r.t. the natural
state-of-the-art CART trees and random forests. Moreover, as a by-product,
these fair decision trees turn out to be significantly more compact, thus enhancing
the interpretability of their fairness properties.
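The stability notion underlying the approach can be illustrated with a minimal sketch: for individual fairness, a tree is stable on an input if changing only the sensitive attribute never changes the predicted label. The tree encoding and helper names below are illustrative assumptions, not the paper's abstract-interpretation-based verifier.

```python
def predict(tree, x):
    """Evaluate a tree given as nested dicts; leaves are class labels."""
    while isinstance(tree, dict):
        branch = "left" if x[tree["feature"]] <= tree["threshold"] else "right"
        tree = tree[branch]
    return tree

def is_individually_fair(tree, x, sensitive, values):
    """True if flipping only the sensitive feature never changes the label."""
    base = predict(tree, x)
    return all(predict(tree, dict(x, **{sensitive: v})) == base for v in values)

# Toy trees: the first splits on income only, the second on the sensitive feature.
fair_tree = {"feature": "income", "threshold": 50, "left": "deny", "right": "grant"}
biased_tree = {"feature": "group", "threshold": 0, "left": "deny", "right": "grant"}
x = {"income": 60, "group": 0}
print(is_individually_fair(fair_tree, x, "group", [0, 1]))    # True
print(is_individually_fair(biased_tree, x, "group", [0, 1]))  # False
```

The brute-force check above enumerates sensitive values explicitly; the paper's contribution is to certify this property symbolically via abstract interpretation instead.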
Related papers
- Uncertainty-Aware Fairness-Adaptive Classification Trees
This paper introduces a new classification tree algorithm using a novel splitting criterion that incorporates fairness adjustments into the tree-building process.
We show that our method effectively reduces discriminatory predictions compared to traditional classification trees, without significant loss in overall accuracy.
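As a rough illustration of such a fairness-adjusted splitting criterion (the paper's exact criterion may differ), a candidate split can be scored by its Gini gain minus a penalty proportional to the group disparity it induces in each child node; all function names and the weight `lam` below are illustrative.

```python
def gini(labels):
    """Binary-class Gini impurity of a list of 0/1 labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def fairness_penalty(groups, labels):
    """Disparity |P(y=1 | group=0) - P(y=1 | group=1)| on a node's data."""
    rates = []
    for g in (0, 1):
        ys = [y for grp, y in zip(groups, labels) if grp == g]
        rates.append(sum(ys) / len(ys) if ys else 0.0)
    return abs(rates[0] - rates[1])

def split_score(feature, threshold, X, groups, labels, lam=0.5):
    """Gini gain of the split minus lam times the induced group disparity."""
    left = [i for i, row in enumerate(X) if row[feature] <= threshold]
    right = [i for i in range(len(X)) if i not in left]
    n = len(X)
    gain, penalty = gini(labels), 0.0
    for side in (left, right):
        ys = [labels[i] for i in side]
        gs = [groups[i] for i in side]
        gain -= len(side) / n * gini(ys)
        penalty += len(side) / n * fairness_penalty(gs, ys)
    return gain - lam * penalty  # higher is better
```

With `lam=0` this reduces to ordinary Gini gain; larger `lam` trades accuracy for less discriminatory splits.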
arXiv Detail & Related papers (2024-10-08T08:42:12Z)
- An Interpretable Client Decision Tree Aggregation process for Federated Learning
We propose an Interpretable Client Decision Tree aggregation process for Federated Learning scenarios.
This model is based on aggregating multiple decision paths of the decision trees and can be used on different decision tree types, such as ID3 and CART.
We carry out experiments on four datasets, and the analysis shows that the tree built with the model improves the local models and outperforms the state-of-the-art.
arXiv Detail & Related papers (2024-04-03T06:53:56Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark, called ARES-Bench, on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Optimal Decision Diagrams for Classification
We study the training of optimal decision diagrams from a mathematical programming perspective.
We introduce a novel mixed-integer linear programming model for training.
We show how this model can be easily extended for fairness, parsimony, and stability notions.
arXiv Detail & Related papers (2022-05-28T18:31:23Z)
- Beyond Robustness: Resilience Verification of Tree-Based Classifiers
We introduce a new measure called resilience and we focus on its verification.
We discuss how resilience can be verified by combining a traditional robustness verification technique with a data-independent stability analysis.
Our results show that resilience verification is useful and feasible in practice, yielding a more reliable security assessment of both standard and robust decision tree models.
arXiv Detail & Related papers (2021-12-05T23:07:22Z)
- Estimating and Improving Fairness with Adversarial Learning
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Genetic Adversarial Training of Decision Trees
We put forward a novel learning methodology for ensembles of decision trees, based on a genetic algorithm, which trains a decision tree to maximize both its accuracy and its robustness to adversarial perturbations.
We implemented this genetic adversarial training algorithm in a tool called Meta-Silvae (MS) and we experimentally evaluated it on some reference datasets used in adversarial training.
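A minimal sketch of the genetic-adversarial-training idea, assuming a single decision stump, a symmetric perturbation radius `eps`, and a simplified mutation-only GA (none of which are taken from Meta-Silvae itself):

```python
import random

def stump_predict(threshold, x):
    """A one-feature decision stump: predict 1 iff x exceeds the threshold."""
    return int(x > threshold)

def robust_fitness(threshold, X, y, eps=0.1):
    """Fraction of points classified correctly at x-eps, x, and x+eps alike."""
    ok = 0
    for x, label in zip(X, y):
        preds = {stump_predict(threshold, x + d) for d in (-eps, 0.0, eps)}
        if preds == {label}:
            ok += 1
    return ok / len(X)

def genetic_train(X, y, pop_size=20, generations=30, eps=0.1):
    """Evolve stump thresholds: keep the fittest half, refill by mutation."""
    rng = random.Random(0)  # fixed seed for reproducibility
    pop = [rng.uniform(min(X), max(X)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: robust_fitness(t, X, y, eps), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [t + rng.gauss(0, 0.05) for t in elite]
    return max(pop, key=lambda t: robust_fitness(t, X, y, eps))

X = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]
y = [0, 0, 0, 1, 1, 1]
best = genetic_train(X, y)
```

Because fitness rewards correctness over a whole perturbation interval rather than at a single point, the evolved threshold is pushed away from the training points, which is the adversarial-robustness effect the entry describes.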
arXiv Detail & Related papers (2020-12-21T14:05:57Z)
- Beyond Individual and Group Fairness
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning
We propose a knowledge-distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criterion and stopping condition of standard decision trees, which allows training with soft labels.
We then train the ReDT based on the soft label distilled from a well-trained teacher model through a novel jackknife-based method.
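The soft-label idea can be sketched as follows, assuming teacher-provided class probabilities and an entropy-of-the-mean impurity; this is one simple instantiation, not necessarily the paper's exact jackknife-based criterion:

```python
import math

def mean_soft_label(soft_labels):
    """Average the teacher's class-probability vectors at a node."""
    k, n = len(soft_labels[0]), len(soft_labels)
    return [sum(s[c] for s in soft_labels) / n for c in range(k)]

def soft_entropy(soft_labels):
    """Entropy of the mean soft label; plays the role of node impurity."""
    p = mean_soft_label(soft_labels)
    return -sum(pc * math.log(pc) for pc in p if pc > 0)

def soft_split_gain(X, soft_labels, feature, threshold):
    """Impurity reduction of a split, computed on soft labels, not hard ones."""
    left = [s for x, s in zip(X, soft_labels) if x[feature] <= threshold]
    right = [s for x, s in zip(X, soft_labels) if x[feature] > threshold]
    n = len(X)
    gain = soft_entropy(soft_labels)
    for side in (left, right):
        if side:
            gain -= len(side) / n * soft_entropy(side)
    return gain
```

Replacing hard labels with the teacher's probabilities lets the split criterion see how confident the teacher is, which is what distillation transfers into the student tree.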
arXiv Detail & Related papers (2020-08-21T10:45:25Z)
- Causal Feature Selection for Algorithmic Fairness
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.