Contextual Decision Trees
- URL: http://arxiv.org/abs/2207.06355v1
- Date: Wed, 13 Jul 2022 17:05:08 GMT
- Title: Contextual Decision Trees
- Authors: Tommaso Aldinucci and Enrico Civitelli and Leonardo di Gangi and
Alessandro Sestini
- Abstract summary: We propose a multi-armed contextual bandit recommendation framework for feature-based selection of a single shallow tree of the learned ensemble.
The trained system, which works on top of the Random Forest, dynamically identifies a base predictor that is responsible for providing the final output.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Focusing on Random Forests, we propose a multi-armed contextual bandit
recommendation framework for feature-based selection of a single shallow tree
of the learned ensemble. The trained system, which works on top of the Random
Forest, dynamically identifies a base predictor that is responsible for
providing the final output. In this way, we obtain local interpretations by
observing the rules of the recommended tree. The experiments carried out reveal
that our dynamic method is superior to an independently fitted CART decision tree
and comparable to the whole black-box Random Forest in terms of predictive
performance.
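No reference implementation accompanies this abstract, so the following is a minimal sketch of the setup it describes, under stated assumptions: a Random Forest of shallow trees is fitted first, and a contextual bandit then treats each tree as an arm, takes the instance's feature vector as context, and earns reward 1 when the recommended tree predicts correctly. LinUCB stands in for whatever bandit policy the authors actually use, and the class name `LinUCBTreeSelector` is illustrative, not theirs.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split


class LinUCBTreeSelector:
    """Contextual bandit whose arms are the shallow trees of a fitted forest."""

    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha
        # One ridge regression per arm: A = I + sum(x x^T), b = sum(reward * x).
        self.A = [np.eye(n_features) for _ in range(n_arms)]
        self.b = [np.zeros(n_features) for _ in range(n_arms)]

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # Estimated reward plus an upper-confidence exploration bonus.
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x


X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A forest of shallow, individually interpretable trees.
forest = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0)
forest.fit(X_tr, y_tr)

bandit = LinUCBTreeSelector(len(forest.estimators_), X.shape[1])

# Fit the selector: reward 1 when the recommended tree predicts correctly.
for x, label in zip(X_tr, y_tr):
    arm = bandit.select(x)
    pred = forest.estimators_[arm].predict(x.reshape(1, -1))[0]
    bandit.update(arm, x, float(pred == label))

# At test time a single recommended tree provides the final output; its
# root-to-leaf rule serves as the local interpretation.
preds = [forest.estimators_[bandit.select(x)].predict(x.reshape(1, -1))[0]
         for x in X_te]
print("single-tree accuracy:", np.mean(np.array(preds) == y_te))
```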
Related papers
- Tree Ensembles for Contextual Bandits [2.9623902973073375]
We propose a new framework for contextual multi-armed bandits based on tree ensembles.
As part of this framework, we propose a novel method of estimating the uncertainty in tree ensemble predictions.
arXiv Detail & Related papers (2024-02-10T14:36:31Z)
- Why do Random Forests Work? Understanding Tree Ensembles as Self-Regularizing Adaptive Smoothers [68.76846801719095]
We argue that the current high-level dichotomy into bias- and variance-reduction prevalent in statistics is insufficient to understand tree ensembles.
We show that forests can improve upon trees by three distinct mechanisms that are usually implicitly entangled (see the smoother-view sketch after this list).
arXiv Detail & Related papers (2024-02-02T15:36:43Z)
- RLET: A Reinforcement Learning Based Approach for Explainable QA with Entailment Trees [47.745218107037786]
We propose RLET, a Reinforcement Learning based Entailment Tree generation framework.
RLET iteratively performs single-step reasoning with sentence selection and deduction generation modules.
Experiments on three settings of the EntailmentBank dataset demonstrate the strength of the RL framework.
arXiv Detail & Related papers (2022-10-31T06:45:05Z)
- Social Interpretable Tree for Pedestrian Trajectory Prediction [75.81745697967608]
We propose a tree-based method, termed Social Interpretable Tree (SIT), to address this multi-modal prediction task.
A path in the tree from the root to leaf represents an individual possible future trajectory.
Despite relying on a hand-crafted tree, experimental results on the ETH-UCY and Stanford Drone datasets demonstrate that our method matches or exceeds the performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-05-26T12:18:44Z)
- An Efficient Dynamic Sampling Policy For Monte Carlo Tree Search [0.0]
We consider Monte Carlo Tree Search (MCTS), a popular tree-based search strategy within the framework of reinforcement learning.
We propose a dynamic sampling tree policy that efficiently allocates limited computational budget to maximize the probability of correct selection of the best action at the root node of the tree.
arXiv Detail & Related papers (2022-04-26T02:39:18Z)
- Explaining random forest prediction through diverse rulesets [0.0]
Local Tree eXtractor (LTreeX) is able to explain the forest prediction for a given test instance with a few diverse rules.
We show that our proposed approach substantially outperforms other explainable methods in terms of predictive performance (see the rule-extraction sketch after this list).
arXiv Detail & Related papers (2022-03-29T12:54:57Z)
- Making CNNs Interpretable by Building Dynamic Sequential Decision Forests with Top-down Hierarchy Learning [62.82046926149371]
We propose a generic model transfer scheme to make Convolutional Neural Networks (CNNs) interpretable.
We achieve this by building a differentiable decision forest on top of CNNs.
We name the transferred model deep Dynamic Sequential Decision Forest (dDSDF).
arXiv Detail & Related papers (2021-06-05T07:41:18Z)
- Improved Weighted Random Forest for Classification Problems [3.42658286826597]
The key to a well-performing ensemble model is the diversity of its base models.
We propose several algorithms that modify the weighting strategy of the regular random forest.
The proposed models introduce significant improvements over the regular random forest (see the weighted-voting sketch after this list).
arXiv Detail & Related papers (2020-09-01T16:08:45Z)
- Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning [66.01622034708319]
We propose a knowledge-distillation-based extension of decision trees, dubbed rectified decision trees (ReDT).
We extend the splitting criteria and the stopping condition of standard decision trees, which allows training with soft labels.
We then train the ReDT on soft labels distilled from a well-trained teacher model through a novel jackknife-based method (see the distillation sketch after this list).
arXiv Detail & Related papers (2020-08-21T10:45:25Z)
- Optimal survival trees ensemble [0.0]
Recent studies have adopted an approach of selecting accurate and diverse trees based on individual or collective performance within an ensemble for classification and regression problems.
This work follows in the wake of these investigations and considers the possibility of growing a forest of optimal survival trees.
In addition to improving predictive performance, the proposed method reduces the number of survival trees in the ensemble compared to other tree-based methods.
arXiv Detail & Related papers (2020-05-18T19:28:16Z)
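On "Why do Random Forests Work?": a regression forest's prediction can be read as an adaptive smoother, i.e. a weighted average of training labels with weights induced by shared leaf membership. The sketch below reconstructs those weights with scikit-learn; it illustrates the smoother view rather than reproducing the paper's code, and uses `bootstrap=False` so the reconstruction is exact.
```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, noise=10.0, random_state=0)
# bootstrap=False so every tree sees all points and the weights are exact.
forest = RandomForestRegressor(n_estimators=50, max_features=0.5,
                               bootstrap=False, random_state=0).fit(X, y)

def smoother_weights(forest, X_train, x):
    """Weights w such that forest.predict(x) == w @ y_train."""
    train_leaves = forest.apply(X_train)              # (n_train, n_trees)
    query_leaves = forest.apply(x.reshape(1, -1))[0]  # (n_trees,)
    w = np.zeros(len(X_train))
    for t in range(train_leaves.shape[1]):
        same_leaf = train_leaves[:, t] == query_leaves[t]
        # Each tree predicts the mean label of the query's leaf.
        w += same_leaf / same_leaf.sum()
    return w / train_leaves.shape[1]

x = X[0]
w = smoother_weights(forest, X, x)
print("weights sum:", w.sum())           # 1.0 -> a proper weighted average
print("smoother view:", w @ y)
print("forest:       ", forest.predict(x.reshape(1, -1))[0])
```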
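On "Explaining random forest prediction through diverse rulesets": the primitive behind LTreeX is the root-to-leaf rule each tree applies to a test instance. A minimal sketch of that extraction with scikit-learn follows; the clustering and selection of a few diverse rules, which is the paper's actual contribution, is omitted.
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=10, max_depth=3,
                                random_state=0).fit(X, y)

def instance_rule(tree, x, feature_names):
    """The conjunction of split conditions along x's root-to-leaf path."""
    t = tree.tree_
    conditions = []
    for node in tree.decision_path(x.reshape(1, -1)).indices:
        if t.children_left[node] == -1:   # leaf reached
            break
        name = feature_names[t.feature[node]]
        threshold = t.threshold[node]
        op = "<=" if x[t.feature[node]] <= threshold else ">"
        conditions.append(f"{name} {op} {threshold:.2f}")
    return " AND ".join(conditions)

names = ["sepal_len", "sepal_wid", "petal_len", "petal_wid"]
x = X[100]
for i, tree in enumerate(forest.estimators_[:3]):  # a few of the forest's rules
    pred = int(tree.predict(x.reshape(1, -1))[0])
    print(f"tree {i}: IF {instance_rule(tree, x, names)} THEN class {pred}")
```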
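On "Improved Weighted Random Forest": the paper studies several weighting strategies; the sketch below implements one plausible variant, held-out-accuracy weights, purely as an assumption, replacing the forest's uniform vote with a weighted average of per-tree class probabilities.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Weight each tree by its held-out accuracy instead of voting uniformly.
weights = np.array([tree.score(X_val, y_val) for tree in forest.estimators_])
weights /= weights.sum()

# Weighted average of per-tree class probabilities.
probas = np.stack([tree.predict_proba(X_val) for tree in forest.estimators_])
weighted = np.tensordot(weights, probas, axes=1)   # (n_samples, n_classes)

print("weighted accuracy:", (weighted.argmax(axis=1) == y_val).mean())
print("uniform accuracy: ", forest.score(X_val, y_val))
```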
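On "Rectified Decision Trees": ReDT modifies the splitting criterion and stopping condition to consume soft labels directly, which scikit-learn does not support, so the sketch below only approximates the distillation idea by regressing a single tree on the teacher's predicted probabilities; the jackknife-based soft-label estimation is omitted.
```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Teacher: a well-trained black-box model.
teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
soft_labels = teacher.predict_proba(X_tr)   # soft labels carry "dark knowledge"

# Student: a single shallow tree fit on the soft labels
# (multi-output regression on the class probabilities).
student = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_tr, soft_labels)

student_acc = (student.predict(X_te).argmax(axis=1) == y_te).mean()
print("distilled tree accuracy:", student_acc)
print("teacher accuracy:       ", teacher.score(X_te, y_te))
```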