Genetic Adversarial Training of Decision Trees
- URL: http://arxiv.org/abs/2012.11352v1
- Date: Mon, 21 Dec 2020 14:05:57 GMT
- Title: Genetic Adversarial Training of Decision Trees
- Authors: Francesco Ranzato and Marco Zanella
- Abstract summary: We put forward a novel learning methodology for ensembles of decision trees based on a genetic algorithm which is able to train a decision tree for maximizing its accuracy and its robustness to adversarial perturbations.
We implemented this genetic adversarial training algorithm in a tool called Meta-Silvae (MS) and we experimentally evaluated it on some reference datasets used in adversarial training.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We put forward a novel learning methodology for ensembles of decision
trees, based on a genetic algorithm that trains a decision tree to maximize
both its accuracy and its robustness to adversarial perturbations. This
learning algorithm internally leverages a complete formal verification
technique for robustness properties of decision trees based on abstract
interpretation, a well-known static program analysis technique. We implemented
this genetic adversarial training algorithm in a tool called Meta-Silvae (MS)
and evaluated it experimentally on reference datasets used in adversarial
training. The experimental results show that MS trains robust models that
compete with, and often improve on, the current state of the art in
adversarial training of decision trees, while producing much more compact,
and therefore more interpretable and efficient, tree models.
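The genetic training loop described above, with the interval-based abstract interpretation check folded into the fitness function, can be sketched roughly as follows. This is a minimal illustration, not the Meta-Silvae implementation: the tuple tree encoding, the fitness weight `lam`, the mutation-only evolution scheme, and the toy dataset are all assumptions.

```python
import random

# A tree is ('leaf', label) or ('node', feature, threshold, left, right).

def predict(tree, x):
    if tree[0] == 'leaf':
        return tree[1]
    _, f, t, left, right = tree
    return predict(left, x) if x[f] <= t else predict(right, x)

def reachable_labels(tree, lo, hi):
    """Interval (box) abstract interpretation: labels of all leaves reachable
    when each feature i ranges over [lo[i], hi[i]]."""
    if tree[0] == 'leaf':
        return {tree[1]}
    _, f, t, left, right = tree
    labels = set()
    if lo[f] <= t:            # some point of the box satisfies x[f] <= t
        labels |= reachable_labels(left, lo, hi)
    if hi[f] > t:             # some point of the box satisfies x[f] > t
        labels |= reachable_labels(right, lo, hi)
    return labels

def is_robust(tree, x, eps):
    """Robust iff every point in the L-infinity ball of radius eps around x
    reaches leaves with a single, common label."""
    return len(reachable_labels(tree, [v - eps for v in x],
                                      [v + eps for v in x])) == 1

def fitness(tree, X, y, eps, lam=1.0):
    """Accuracy plus (weighted) fraction of verified-robust training points."""
    acc = sum(predict(tree, x) == yi for x, yi in zip(X, y)) / len(y)
    rob = sum(is_robust(tree, x, eps) for x in X) / len(y)
    return acc + lam * rob

def random_tree(n_feat, depth):
    if depth == 0 or random.random() < 0.3:
        return ('leaf', random.randint(0, 1))
    return ('node', random.randrange(n_feat), random.uniform(0, 1),
            random_tree(n_feat, depth - 1), random_tree(n_feat, depth - 1))

def mutate(tree, n_feat):
    if tree[0] == 'leaf':
        return ('leaf', 1 - tree[1])
    _, f, t, l, r = tree
    roll = random.random()
    if roll < 0.4:                       # jitter the threshold
        return ('node', f, t + random.gauss(0, 0.1), l, r)
    if roll < 0.7:
        return ('node', f, t, mutate(l, n_feat), r)
    return ('node', f, t, l, mutate(r, n_feat))

def evolve(X, y, eps, pop_size=30, gens=40, depth=3):
    n_feat = len(X[0])
    pop = [random_tree(n_feat, depth) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, X, y, eps), reverse=True)
        elite = pop[: pop_size // 3]     # keep the fittest third
        pop = elite + [mutate(random.choice(elite), n_feat)
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda t: fitness(t, X, y, eps))

# Toy data: label 1 iff the first feature exceeds 0.5.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(80)]
y = [int(x[0] > 0.5) for x in X]
best = evolve(X, y, eps=0.05)
print(round(fitness(best, X, y, eps=0.05), 3))
```

Since decision-tree splits are axis-aligned, the box domain loses no precision here: `reachable_labels` reports exactly the labels that an L-infinity perturbation of radius `eps` can reach, which is why an interval analysis can serve as a complete verifier for a single tree.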
Related papers
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z) - Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z) - An Interpretable Client Decision Tree Aggregation process for Federated Learning [7.8973037023478785]
We propose an Interpretable Client Decision Tree aggregation process for Federated Learning scenarios.
This model is based on aggregating multiple decision paths of the decision trees and can be used on different decision tree types, such as ID3 and CART.
We carry out experiments on four datasets, and the analysis shows that the tree built with the model improves the local models and outperforms the state-of-the-art.
arXiv Detail & Related papers (2024-04-03T06:53:56Z) - MIXRTs: Toward Interpretable Multi-Agent Reinforcement Learning via Mixing Recurrent Soft Decision Trees [18.83056365359009]
Multi-agent reinforcement learning (MARL) with a black-box neural network architecture makes decisions in an opaque manner.
Existing interpretable approaches, such as traditional linear models and decision trees, usually suffer from weak expressivity and low accuracy.
We develop a novel interpretable architecture that can represent explicit decision processes via the root-to-leaf path.
arXiv Detail & Related papers (2022-09-15T11:39:59Z) - Optimal Decision Diagrams for Classification [68.72078059880018]
We study the training of optimal decision diagrams from a mathematical programming perspective.
We introduce a novel mixed-integer linear programming model for training.
We show how this model can be easily extended for fairness, parsimony, and stability notions.
arXiv Detail & Related papers (2022-05-28T18:31:23Z) - Social Interpretable Tree for Pedestrian Trajectory Prediction [75.81745697967608]
We propose a tree-based method, termed as Social Interpretable Tree (SIT), to address this multi-modal prediction task.
A path in the tree from the root to leaf represents an individual possible future trajectory.
Despite relying on a hand-crafted tree, the experimental results on the ETH-UCY and Stanford Drone datasets demonstrate that our method matches or exceeds the performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-05-26T12:18:44Z) - POETREE: Interpretable Policy Learning with Adaptive Decision Trees [78.6363825307044]
POETREE is a novel framework for interpretable policy learning.
It builds probabilistic tree policies determining physician actions based on patients' observations and medical history.
It outperforms the state-of-the-art on real and synthetic medical datasets.
arXiv Detail & Related papers (2022-03-15T16:50:52Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Fair Training of Decision Tree Classifiers [6.381149074212897]
We study the problem of formally verifying individual fairness of decision tree ensembles.
In our approach, fairness verification and fairness-aware training both rely on a notion of stability of a classification model.
arXiv Detail & Related papers (2021-01-04T12:04:22Z) - Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning [66.01622034708319]
We propose a knowledge-distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and the ending condition of the standard decision trees, which allows training with soft labels.
We then train the ReDT based on the soft label distilled from a well-trained teacher model through a novel jackknife-based method.
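The extended splitting criterion that "allows training with soft labels" can be illustrated with a small sketch. This is a guess at the general idea, not the ReDT implementation: the entropy-of-the-mean criterion, the single-feature threshold search, and the teacher distributions below are all assumptions.

```python
import math

def soft_entropy(dists):
    """Entropy of the average soft-label distribution at a node."""
    if not dists:
        return 0.0
    k = len(dists[0])
    mean = [sum(d[c] for d in dists) / len(dists) for c in range(k)]
    return -sum(p * math.log2(p) for p in mean if p > 0)

def best_split(xs, dists):
    """Threshold on a single feature minimising the size-weighted
    soft-label entropy of the two children (soft analogue of the
    hard-label information-gain split)."""
    best = (None, float('inf'))
    for thr in sorted(set(xs)):
        left = [d for x, d in zip(xs, dists) if x <= thr]
        right = [d for x, d in zip(xs, dists) if x > thr]
        if not left or not right:
            continue
        n = len(dists)
        score = (len(left) / n) * soft_entropy(left) \
              + (len(right) / n) * soft_entropy(right)
        if score < best[1]:
            best = (thr, score)
    return best

# Soft labels from a hypothetical teacher: confident class 0 below 0.5,
# confident class 1 above.
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
dists = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3],
         [0.2, 0.8], [0.1, 0.9], [0.1, 0.9]]
thr, score = best_split(xs, dists)
print(thr, round(score, 3))  # picks the split at 0.4, where labels flip
```

With hard (one-hot) labels this reduces to the usual entropy criterion, so the extension is strictly more general.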
arXiv Detail & Related papers (2020-08-21T10:45:25Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.