EiFFFeL: Enforcing Fairness in Forests by Flipping Leaves
- URL: http://arxiv.org/abs/2112.14435v1
- Date: Wed, 29 Dec 2021 07:48:38 GMT
- Title: EiFFFeL: Enforcing Fairness in Forests by Flipping Leaves
- Authors: Seyum Assefa Abebe, Claudio Lucchese, Salvatore Orlando
- Abstract summary: We propose a fairness-enforcing approach called EiFFFeL: Enforcing Fairness in Forests by Flipping Leaves.
Experimental results show that our approach achieves a user-defined group fairness degree without losing a significant amount of accuracy.
- Score: 5.08078625937586
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine Learning (ML) techniques are now extensively adopted in many socially
sensitive systems, which makes it necessary to carefully study the fairness of the decisions
taken by such systems. Many approaches have been proposed to detect and remove bias against
individuals or specific groups, bias that may originate from biased training datasets or from
algorithm design. In this regard, we propose a fairness-enforcing approach called EiFFFeL:
Enforcing Fairness in Forests by Flipping Leaves, which exploits tree-based or leaf-based
post-processing strategies to relabel the leaves of selected decision trees of a given forest.
Experimental results show that our approach achieves a user-defined degree of group fairness
without losing a significant amount of accuracy.
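The abstract describes the approach only at a high level, so the following is a minimal, hedged sketch of the general idea of flipping (relabelling) leaves to meet a group-fairness target, assuming scikit-learn, binary 0/1 labels, a binary sensitive attribute, and demographic parity as the fairness measure. The greedy accept/undo rule and the helper names (demographic_parity_gap, flip_leaf, relabel_forest) are illustrative assumptions, not the authors' EiFFFeL algorithm, whose tree- and leaf-selection strategies are detailed in the paper.

```python
# Illustrative sketch only -- not the authors' EiFFFeL implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def demographic_parity_gap(y_pred, sensitive):
    """|P(y_hat=1 | s=1) - P(y_hat=1 | s=0)| for binary predictions and a binary sensitive attribute."""
    return abs(y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean())


def flip_leaf(tree, leaf):
    """Reverse the class statistics stored at a leaf so its predicted label flips (binary case)."""
    tree.tree_.value[leaf, 0, :] = tree.tree_.value[leaf, 0, ::-1].copy()


def relabel_forest(forest, X, y, sensitive, eps=0.02, max_acc_drop=0.01):
    """Greedily flip leaves until the fairness gap drops below eps, undoing flips that
    do not help or that cost more than max_acc_drop of accuracy (illustrative heuristic)."""
    base_acc = forest.score(X, y)
    for tree in forest.estimators_:
        for leaf in np.unique(tree.apply(X)):          # leaves actually reached by X
            gap = demographic_parity_gap(forest.predict(X), sensitive)
            if gap <= eps:
                return forest
            flip_leaf(tree, leaf)
            worse_gap = demographic_parity_gap(forest.predict(X), sensitive) >= gap
            too_costly = forest.score(X, y) < base_acc - max_acc_drop
            if worse_gap or too_costly:
                flip_leaf(tree, leaf)                  # undo the flip
    return forest


# Hypothetical usage (X, y, s are NumPy arrays; s is the binary sensitive attribute):
# forest = RandomForestClassifier(n_estimators=100).fit(X, y)
# forest = relabel_forest(forest, X, y, s)
```

The in-place flip works because scikit-learn exposes tree_.value as a writable view onto each tree's internal per-node class statistics, so reversing a leaf's class vector changes the prediction of every sample routed to that leaf, both for the single tree and for the forest's soft vote.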
Related papers
- DynFrs: An Efficient Framework for Machine Unlearning in Random Forest [2.315324942451179]
DynFrs is a framework designed to enable efficient machine unlearning in Random Forests.
In experiments, applying DynFrs to Extremely Randomized Trees yields substantial improvements.
arXiv Detail & Related papers (2024-10-02T14:20:30Z)
- A New Random Forest Ensemble of Intuitionistic Fuzzy Decision Trees [5.831659043074847]
We propose a new random forest ensemble of intuitionistic fuzzy decision trees (IFDT).
The proposed method enjoys the power of the randomness from bootstrapped sampling and feature selection.
This study is the first to propose a random forest ensemble based on the intuitionistic fuzzy theory.
arXiv Detail & Related papers (2024-03-12T06:52:24Z)
- Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers [68.76846801719095]
We argue that the current high-level dichotomy into bias- and variance-reduction prevalent in statistics is insufficient to understand tree ensembles.
We show that forests can improve upon trees by three distinct mechanisms that are usually implicitly entangled.
arXiv Detail & Related papers (2024-02-02T15:36:43Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for federated distillation (FD), termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- Contextual Decision Trees [62.997667081978825]
We propose a multi-armed contextual bandit recommendation framework for feature-based selection of a single shallow tree of the learned ensemble.
The trained system, which works on top of the Random Forest, dynamically identifies the base predictor responsible for providing the final output (a rough selection sketch follows this entry).
arXiv Detail & Related papers (2022-07-13T17:05:08Z)
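Since the entry above only names the mechanism, here is a rough, self-contained sketch of per-input selection of one tree from a fitted forest, assuming a generic LinUCB contextual bandit in which each tree is an arm, the sample's feature vector is the context, and the reward is 1 when the chosen tree classifies the sample correctly. The class name and the reward design are illustrative assumptions, not the paper's formulation.

```python
# Illustrative stand-in for contextual, per-input tree selection (generic LinUCB).
import numpy as np
from sklearn.ensemble import RandomForestClassifier


class TreeSelectorLinUCB:
    """Each tree of a fitted forest is an arm; the sample's features are the context."""

    def __init__(self, forest, alpha=1.0):
        self.trees = forest.estimators_
        self.alpha = alpha
        d = forest.n_features_in_
        self.A = [np.eye(d) for _ in self.trees]      # per-arm design matrices
        self.b = [np.zeros(d) for _ in self.trees]    # per-arm reward vectors

    def select(self, x):
        """Pick the arm (tree) with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

    def fit(self, X, y, epochs=1):
        """Online training on NumPy arrays: reward 1 if the chosen tree is correct."""
        for _ in range(epochs):
            for x, label in zip(X, y):
                arm = self.select(x)
                reward = float(self.trees[arm].predict(x.reshape(1, -1))[0] == label)
                self.update(arm, x, reward)
        return self

    def predict(self, X):
        """The dynamically selected single tree provides the final output for each sample."""
        return np.array([self.trees[self.select(x)].predict(x.reshape(1, -1))[0] for x in X])


# Hypothetical usage:
# forest = RandomForestClassifier(n_estimators=50, max_depth=3).fit(X_train, y_train)
# y_pred = TreeSelectorLinUCB(forest).fit(X_train, y_train).predict(X_test)
```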
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Fairness-guided SMT-based Rectification of Decision Trees and Random Forests [14.423550468823152]
Our approach converts any decision tree or random forest into a fair one with respect to a specific data set, fairness criteria, and sensitive attributes.
Our experiments on the well-known adult dataset from UC Irvine demonstrate that FairRepair scales to realistic decision trees and random forests.
Since our fairness-guided repair technique repairs decision trees and random forests obtained from a given (unfair) dataset, it can help to identify and rectify biases in decision-making in an organisation.
arXiv Detail & Related papers (2020-11-22T12:30:27Z)
- Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning [66.01622034708319]
We propose a knowledge-distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criterion and stopping condition of standard decision trees to allow training with soft labels.
We then train the ReDT on soft labels distilled from a well-trained teacher model through a novel jackknife-based method (a simplified distillation sketch follows this entry).
arXiv Detail & Related papers (2020-08-21T10:45:25Z)
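As a rough illustration of training a single tree on soft labels, the sketch below fits a stock multi-output regression tree to a teacher's predicted class probabilities. ReDT itself modifies the splitting criterion and stopping condition and distils the soft labels with a jackknife-based method, none of which is reproduced here; the helper names and the random-forest teacher are assumptions.

```python
# Simplified soft-label distillation into a single tree (not the ReDT criterion).
import numpy as np
from sklearn.ensemble import RandomForestClassifier   # stand-in teacher model
from sklearn.tree import DecisionTreeRegressor


def distill_to_tree(teacher, X, max_depth=4):
    """Fit a regression tree to the teacher's soft labels (per-class probabilities)."""
    soft_labels = teacher.predict_proba(X)             # shape: (n_samples, n_classes)
    student = DecisionTreeRegressor(max_depth=max_depth)
    student.fit(X, soft_labels)                        # multi-output regression on probabilities
    return student


def predict_classes(student, teacher, X):
    """Map the student's predicted probability vectors back to class labels."""
    return teacher.classes_[np.argmax(student.predict(X), axis=1)]


# Hypothetical usage:
# teacher = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# student = distill_to_tree(teacher, X_train)
# y_pred = predict_classes(student, teacher, X_test)
```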
- Learning Representations for Axis-Aligned Decision Forests through Input Perturbation [2.755007887718791]
Axis-aligned decision forests have long been the leading class of machine learning algorithms.
Despite their widespread use and rich history, decision forests to date fail to consume raw structured data.
We present a novel but intuitive proposal to achieve representation learning for decision forests.
arXiv Detail & Related papers (2020-07-29T11:56:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.