An Interpretable Client Decision Tree Aggregation process for Federated Learning
- URL: http://arxiv.org/abs/2404.02510v1
- Date: Wed, 3 Apr 2024 06:53:56 GMT
- Title: An Interpretable Client Decision Tree Aggregation process for Federated Learning
- Authors: Alberto Argente-Garrido, Cristina Zuheros, M. Victoria Luzón, Francisco Herrera
- Abstract summary: We propose an Interpretable Client Decision Tree aggregation process for Federated Learning scenarios.
This model is based on aggregating multiple decision paths of the decision trees and can be used with different decision tree types, such as ID3 and CART.
We carry out experiments on four datasets, and the analysis shows that the tree built with the model improves on the local models and outperforms the state of the art.
- Score: 7.8973037023478785
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Trustworthy Artificial Intelligence solutions are essential in today's data-driven applications, prioritizing principles such as robustness, safety, transparency, explainability, and privacy, among others. This has led to the emergence of Federated Learning as a solution for privacy-preserving, distributed machine learning. Decision trees, as self-explanatory models, are ideal for injecting interpretability into collaborative model training across multiple devices in resource-constrained settings such as federated learning environments. However, the structure of decision trees makes their aggregation in a federated learning environment non-trivial: it requires techniques that can merge their decision paths without introducing bias or overfitting while keeping the aggregated trees robust and generalizable. In this paper, we propose an Interpretable Client Decision Tree Aggregation process for Federated Learning scenarios that preserves both the interpretability and the precision of the base decision trees used for the aggregation. The model is based on aggregating multiple decision paths of the decision trees and can be used with different decision tree types, such as ID3 and CART. We carry out experiments on four datasets, and the analysis shows that the tree built with the model improves on the local models and outperforms the state of the art.
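The summary above does not spell out the aggregation mechanics, but the core idea, extracting root-to-leaf decision paths from client trees and combining them on the server, can be sketched in a few lines. The following is a minimal illustration under stated assumptions (CART-style sklearn trees, a naive pooled-path majority vote); the names and the merging rule are illustrative, not the authors' implementation.

```python
# Minimal sketch of client-side path extraction and naive server-side
# pooling; function names and the majority-vote merge are illustrative,
# not the authors' implementation.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def extract_paths(tree):
    """Return a tree's root-to-leaf paths as (rule, label) pairs,
    where a rule is a list of (feature, threshold, goes_left) tests."""
    t = tree.tree_
    paths = []
    def walk(node, rule):
        if t.children_left[node] == -1:                  # leaf node
            paths.append((list(rule), int(np.argmax(t.value[node]))))
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node],  rule + [(f, thr, True)])
        walk(t.children_right[node], rule + [(f, thr, False)])
    walk(0, [])
    return paths

def rule_matches(rule, x):
    return all((x[f] <= thr) == goes_left for f, thr, goes_left in rule)

def aggregate_predict(pooled_paths, x):
    """Majority vote over the one path per client tree that x follows."""
    votes = [label for rule, label in pooled_paths if rule_matches(rule, x)]
    return max(set(votes), key=votes.count)

X, y = load_iris(return_X_y=True)
idx = np.random.default_rng(0).permutation(len(X))
pooled = []
for client_idx in np.array_split(idx, 4):                # 4 simulated clients
    clf = DecisionTreeClassifier(max_depth=3).fit(X[client_idx], y[client_idx])
    pooled.extend(extract_paths(clf))

print(aggregate_predict(pooled, X[0]), "true:", y[0])
```

A faithful implementation would merge overlapping paths rather than simply pooling them, since the abstract stresses avoiding bias and overfitting during the merge.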
Related papers
- Optimized Feature Generation for Tabular Data via LLMs with Decision Tree Reasoning [53.241569810013836]
We propose a novel framework that utilizes large language models (LLMs) to identify effective feature generation rules.
We use decision trees to convey this reasoning information, as they can be easily represented in natural language.
OCTree consistently enhances the performance of various prediction models across diverse benchmarks.
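As a concrete picture of "decision trees as natural language", sklearn's export_text renders a fitted tree as readable if/else rules that could be dropped into an LLM prompt. This sketch only illustrates the representation; OCTree's actual prompting and feedback loop are not reproduced.

```python
# Render a fitted tree as readable rules suitable for an LLM prompt.
# The prompt wording is purely illustrative, not OCTree's pipeline.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

rules = export_text(clf, feature_names=list(data.feature_names))
prompt = (
    "Given the following decision rules learned from the data:\n"
    f"{rules}\n"
    "Propose one new feature, as an arithmetic expression over the "
    "existing features, that could improve a downstream classifier."
)
print(prompt)
```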
arXiv Detail & Related papers (2024-06-12T08:31:34Z) - Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
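One simple way to make the explainability-versus-accuracy trade-off concrete is to score candidate trees by validation accuracy minus a complexity penalty and keep the best. The fixed penalty weight below is an assumption for the sketch; the paper learns such tuning choices in a data-driven way.

```python
# Illustrative only: select tree depth by maximizing
# accuracy - lambda * n_leaves on a validation split.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

lam = 0.005                                  # interpretability penalty per leaf
best = max(
    (DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_tr, y_tr)
     for d in range(1, 9)),
    key=lambda m: m.score(X_val, y_val) - lam * m.get_n_leaves(),
)
print(best.get_depth(), best.get_n_leaves(), best.score(X_val, y_val))
```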
arXiv Detail & Related papers (2024-05-24T20:10:10Z) - Application of the representative measure approach to assess the reliability of decision trees in dealing with unseen vehicle collision data [0.6571063542099526]
Representative datasets are a cornerstone in shaping the trajectory of artificial intelligence (AI) development.
We investigate the reliability of the $\varepsilon$-representativeness method to assess dataset similarity from a theoretical perspective for decision trees.
We extend the results experimentally in the context of unseen vehicle collision data for XGBoost.
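As a rough sketch of the representativeness idea (a simplification, not the paper's exact $\varepsilon$-representativeness definition), one can check whether every point of a dataset lies within $\varepsilon$ of some point of a candidate subset:

```python
# Epsilon-coverage check: treat S as representative of D if every point
# of D has a neighbor in S within eps. Simplified from the paper's notion.
import numpy as np
from scipy.spatial import cKDTree

def is_eps_representative(S, D, eps):
    dists, _ = cKDTree(S).query(D, k=1)   # nearest S-neighbor of each D point
    return bool(np.all(dists <= eps)), float(dists.max())

rng = np.random.default_rng(0)
D = rng.normal(size=(500, 4))
S = D[rng.choice(len(D), size=100, replace=False)]
ok, worst = is_eps_representative(S, D, eps=1.5)
print(ok, round(worst, 3))
```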
arXiv Detail & Related papers (2024-04-15T08:06:54Z) - Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z) - Modeling Boundedly Rational Agents with Latent Inference Budgets [56.24971011281947]
We introduce a latent inference budget model (L-IBM) that models agents' computational constraints explicitly.
L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors.
We show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty.
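A toy version of the latent-budget idea can be written down directly: let the "budget" be the number of value-iteration sweeps an agent performs before acting softmax-greedily, and recover it from observed actions by maximum likelihood. This enumeration is only illustrative; L-IBMs in the paper are learned models.

```python
# Toy latent-inference-budget model on a small chain MDP. The agent plans
# with a limited number of value-iteration sweeps, acts softmax-greedily,
# and we infer the budget that best explains its actions.
import numpy as np

N, GAMMA = 6, 0.9                    # chain states 0..5, reward on reaching 5

def q_after_sweeps(budget):
    V, Q = np.zeros(N), np.zeros((N, 2))
    for _ in range(budget):
        Q = np.array([[GAMMA * V[max(s - 1, 0)],                         # left
                       (s + 1 == N - 1) + GAMMA * V[min(s + 1, N - 1)]]  # right
                      for s in range(N)])
        V = Q.max(axis=1)
    return Q

def policy(budget, temp=0.3):
    p = np.exp(q_after_sweeps(budget) / temp)
    return p / p.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
true_budget = 2
states = rng.integers(0, N, size=200)
acts = np.array([rng.choice(2, p=policy(true_budget)[s]) for s in states])

# Maximum-likelihood estimate of the latent budget over candidate values.
log_lik = {b: np.log(policy(b)[states, acts]).sum() for b in range(1, 6)}
print("inferred budget:", max(log_lik, key=log_lik.get))
```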
arXiv Detail & Related papers (2023-12-07T03:55:51Z) - Construction of Decision Trees and Acyclic Decision Graphs from Decision
Rule Systems [0.0]
We study the complexity of constructing decision trees and acyclic decision graphs representing decision trees from decision rule systems.
We discuss the possibility of not building the entire decision tree, but instead describing the computation path in the tree for a given input.
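The "describe the path instead of building the tree" idea can be illustrated with a tiny rule system: resolve a single input by checking attributes lazily and recording only the conditions actually used. The rule format and attribute-selection heuristic below are assumptions for the sketch, not the paper's construction algorithms.

```python
# Resolve one input against a decision rule system without materializing
# the full tree, recording the computation path as it is traced.
# Each rule: ({attribute: required_value, ...}, decision).
rules = [
    ({0: 1, 2: 0}, "A"),
    ({0: 1, 2: 1}, "B"),
    ({0: 0},       "C"),
]

def resolve(rules, x):
    path = []                                    # conditions checked so far
    candidates = list(rules)
    while len(candidates) > 1:
        # Pick an attribute still distinguishing the surviving rules.
        attr = next(a for cond, _ in candidates for a in cond
                    if a not in dict(path))
        path.append((attr, x[attr]))
        candidates = [(c, d) for c, d in candidates
                      if c.get(attr, x[attr]) == x[attr]]
    return candidates[0][1], path

decision, path = resolve(rules, x={0: 1, 1: 0, 2: 1})
print(decision, path)
```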
arXiv Detail & Related papers (2023-05-02T18:40:48Z) - Optimal Decision Diagrams for Classification [68.72078059880018]
We study the training of optimal decision diagrams from a mathematical programming perspective.
We introduce a novel mixed-integer linear programming model for training.
We show how this model can be easily extended for fairness, parsimony, and stability notions.
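To make the mathematical-programming view concrete, here is a deliberately tiny integer program in PuLP (assumed available): with the diagram's routing fixed, choose one class label per leaf to minimize misclassifications. The paper's MILP additionally optimizes the topology itself.

```python
# Tiny leaf-labeling subproblem as an integer program; the paper's full
# MILP jointly optimizes the decision diagram structure as well.
import pulp

leaves, classes = [0, 1], [0, 1]
# (leaf the sample reaches, true class) for each training sample
samples = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0)]

y = pulp.LpVariable.dicts("y", (leaves, classes), cat="Binary")
err = [pulp.LpVariable(f"err_{i}", cat="Binary") for i in range(len(samples))]

prob = pulp.LpProblem("leaf_labels", pulp.LpMinimize)
prob += pulp.lpSum(err)                            # minimize misclassifications
for l in leaves:                                   # exactly one label per leaf
    prob += pulp.lpSum(y[l][c] for c in classes) == 1
for i, (l, c) in enumerate(samples):               # err_i >= 1 - y[leaf, class]
    prob += err[i] >= 1 - y[l][c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: next(c for c in classes if y[l][c].value() == 1) for l in leaves})
```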
arXiv Detail & Related papers (2022-05-28T18:31:23Z) - Fed-EINI: An Efficient and Interpretable Inference Framework for
Decision Tree Ensembles in Federated Learning [11.843365055516566]
Fed-EINI is an efficient and interpretable inference framework for federated decision tree models.
We propose to protect the decision path with an efficient additively homomorphic encryption method.
Experiments show that the inference efficiency is improved by over 50% on average.
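The additive homomorphism such schemes rely on is easy to demonstrate with the python-paillier (phe) package, assuming it is installed; Fed-EINI's actual protocol and implementation details are not reproduced here.

```python
# Demonstrates the additive homomorphism underlying Fed-EINI-style
# protocols, using the python-paillier ("phe") package (an assumption).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each party encrypts its share of a decision-path score...
enc_a = public_key.encrypt(0.25)
enc_b = public_key.encrypt(0.50)

# ...and anyone can combine ciphertexts without seeing the plaintexts.
enc_sum = enc_a + enc_b          # E(a) (+) E(b) = E(a + b)
enc_scaled = enc_sum * 2         # ciphertext-by-plaintext multiplication

print(private_key.decrypt(enc_sum))     # 0.75
print(private_key.decrypt(enc_scaled))  # 1.5
```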
arXiv Detail & Related papers (2021-05-20T06:40:05Z) - Fair Training of Decision Tree Classifiers [6.381149074212897]
We study the problem of formally verifying individual fairness of decision tree ensembles.
In our approach, fairness verification and fairness-aware training both rely on a notion of stability of a classification model.
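A lightweight empirical proxy for the individual fairness being verified: predictions of a tree ensemble should be stable when only the sensitive attribute is flipped. The paper performs formal verification; this spot-check is merely illustrative.

```python
# Spot-check individual fairness of a tree ensemble by flipping the
# sensitive attribute and measuring prediction stability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
X[:, 0] = rng.integers(0, 2, size=1000)     # column 0: binary sensitive attribute
y = (X[:, 1] + X[:, 2] > 0).astype(int)     # label independent of column 0

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

X_flip = X.copy()
X_flip[:, 0] = 1 - X_flip[:, 0]             # counterfactual: flip sensitive bit
stable = clf.predict(X) == clf.predict(X_flip)
print(f"individually fair on {stable.mean():.1%} of samples")
```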
arXiv Detail & Related papers (2021-01-04T12:04:22Z) - Growing Deep Forests Efficiently with Soft Routing and Learned
Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, i.e., soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
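Soft routing itself fits in a few lines of numpy: each internal node routes left with a sigmoid gate, and the prediction is the expectation of leaf values over root-to-leaf path probabilities. Parameters are random in this sketch; in the paper they are learned end to end.

```python
# Minimal soft-routing forward pass for a depth-2 probabilistic tree:
# sigmoid gates at 3 internal nodes, a mixture over 4 leaves.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                   # input dimension
W = rng.normal(size=(3, D))             # gate weights for nodes 0, 1, 2
b = rng.normal(size=3)
leaf_values = rng.normal(size=4)        # one scalar output per leaf

def soft_tree_predict(x):
    g = 1.0 / (1.0 + np.exp(-(W @ x + b)))   # P(go left) at each node
    p_leaf = np.array([
        g[0] * g[1],                # left, left
        g[0] * (1 - g[1]),          # left, right
        (1 - g[0]) * g[2],          # right, left
        (1 - g[0]) * (1 - g[2]),    # right, right
    ])
    return p_leaf @ leaf_values     # expectation over root-to-leaf paths

print(soft_tree_predict(rng.normal(size=D)))
```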
arXiv Detail & Related papers (2020-12-29T18:05:05Z) - Rectified Decision Trees: Exploring the Landscape of Interpretable and
Effective Machine Learning [66.01622034708319]
We propose a knowledge-distillation-based decision tree extension, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and the stopping condition of standard decision trees to allow training with soft labels.
We then train the ReDT on soft labels distilled from a well-trained teacher model through a novel jackknife-based method.
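Standard library trees do not accept soft labels directly, so a common stand-in for this kind of distillation is to fit a regression tree to the teacher's class probabilities. The sketch below uses that approximation; ReDT's modified splitting criterion and jackknife-based soft labels are not reproduced.

```python
# Stand-in for ReDT-style distillation: a regression tree fit to the
# soft labels (class probabilities) of a stronger teacher model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X_tr, X_te, y_tr, y_te = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)

teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
soft = teacher.predict_proba(X_tr)[:, 1]          # soft labels in [0, 1]

student = DecisionTreeRegressor(max_depth=4).fit(X_tr, soft)
pred = (student.predict(X_te) >= 0.5).astype(int)
print("student accuracy:", (pred == y_te).mean())
```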
arXiv Detail & Related papers (2020-08-21T10:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.