GPTree: Towards Explainable Decision-Making via LLM-powered Decision Trees
- URL: http://arxiv.org/abs/2411.08257v1
- Date: Wed, 13 Nov 2024 00:14:09 GMT
- Title: GPTree: Towards Explainable Decision-Making via LLM-powered Decision Trees
- Authors: Sichao Xiong, Yigit Ihlamur, Fuat Alican, Aaron Ontoyin Yin
- Abstract summary: GPTree is a novel framework that combines the explainability of decision trees with the advanced reasoning capabilities of LLMs.
Our decision tree achieved a 7.8% precision rate in identifying "unicorn" startups at the inception stage.
- Abstract: Traditional decision tree algorithms are explainable but struggle with non-linear, high-dimensional data, limiting their applicability in complex decision-making. Neural networks excel at capturing complex patterns but sacrifice explainability in the process. In this work, we present GPTree, a novel framework that combines the explainability of decision trees with the advanced reasoning capabilities of LLMs. GPTree eliminates the need for feature engineering and prompt chaining, requiring only a task-specific prompt and leveraging a tree-based structure to dynamically split samples. We also introduce an expert-in-the-loop feedback mechanism to further enhance performance by enabling human intervention to refine and rebuild decision paths, emphasizing the harmony between human expertise and machine intelligence. Our decision tree achieved a 7.8% precision rate in identifying "unicorn" startups at the inception stage, surpassing gpt-4o with few-shot learning as well as the best human decision-makers (3.1% to 5.6%).
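The abstract does not spell out how the task-specific prompt drives the node splits, so the following is only a minimal sketch of one plausible realization: at each node an LLM proposes a yes/no question, answers it per sample to route the data, and recursion stops at pure or small nodes. The `LLMFn` callable, the stopping thresholds, and the prompt wording are all assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

# Hypothetical LLM wrapper: takes a prompt string and returns the model's text reply.
# Any concrete backend (hosted API, local model, ...) can be plugged in here.
LLMFn = Callable[[str], str]

@dataclass
class Node:
    question: Optional[str] = None   # natural-language split proposed by the LLM
    yes: Optional["Node"] = None     # subtree for samples answered "yes"
    no: Optional["Node"] = None      # subtree for samples answered "no"
    label: Optional[str] = None      # majority label at a leaf

def build_tree(samples: List[dict], task_prompt: str, llm: LLMFn,
               depth: int = 0, max_depth: int = 4, min_samples: int = 10) -> Node:
    labels = [s["label"] for s in samples]
    # Assumed stopping rule: pure node, too few samples, or depth limit reached.
    if len(set(labels)) == 1 or len(samples) < min_samples or depth >= max_depth:
        return Node(label=max(set(labels), key=labels.count))

    # Ask the LLM to propose a yes/no question that separates the samples,
    # using only the task-specific prompt (no hand-crafted features).
    question = llm(
        f"{task_prompt}\nPropose one yes/no question that best separates these "
        "examples by outcome:\n" + "\n".join(s["text"] for s in samples[:20])
    ).strip()

    yes_side, no_side = [], []
    for s in samples:
        answer = llm(f"{task_prompt}\nQuestion: {question}\nExample: {s['text']}\n"
                     "Answer strictly 'yes' or 'no'.")
        (yes_side if answer.strip().lower().startswith("yes") else no_side).append(s)

    if not yes_side or not no_side:  # degenerate split: fall back to a leaf
        return Node(label=max(set(labels), key=labels.count))

    return Node(question=question,
                yes=build_tree(yes_side, task_prompt, llm, depth + 1, max_depth, min_samples),
                no=build_tree(no_side, task_prompt, llm, depth + 1, max_depth, min_samples))
```

Under this reading, the expert-in-the-loop step amounts to letting a human inspect and rewrite the `question` stored at any node and rebuilding the affected subtree.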
Related papers
- Zero-Shot Decision Tree Construction via Large Language Models [2.005837558796176]
We introduce an algorithm for constructing decision trees using large language models (LLMs) in a zero-shot manner based on Classification and Regression Trees (CART) principles.
Our approach leverages LLMs to perform operations essential for decision tree construction, including attribute discretization, probability calculation, and Gini index computation.
arXiv Detail & Related papers (2025-01-27T17:48:48Z)
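The entry above names the CART operations delegated to the LLM: attribute discretization, probability calculation, and Gini index computation. How the LLM is prompted for them is not described here, so the sketch below only shows the conventional numeric Gini computation that such a pipeline would reproduce; the toy `values`/`labels` data is purely illustrative.

```python
from collections import Counter
from typing import Hashable, Sequence

def gini(labels: Sequence[Hashable]) -> float:
    """Gini impurity of a label multiset: 1 - sum_k p_k^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gini(values: Sequence[float], labels: Sequence[Hashable],
               threshold: float) -> float:
    """Weighted Gini impurity after splitting a numeric attribute at `threshold`."""
    left = [y for x, y in zip(values, labels) if x <= threshold]
    right = [y for x, y in zip(values, labels) if x > threshold]
    n = len(labels)
    return (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

# Toy example (values already sorted): CART picks the candidate threshold
# with the lowest weighted impurity.
values = [1.0, 2.0, 3.0, 10.0]
labels = ["no", "no", "yes", "yes"]
best = min({(v1 + v2) / 2 for v1, v2 in zip(values, values[1:])},
           key=lambda t: split_gini(values, labels, t))
print(best, split_gini(values, labels, best))  # 2.5, impurity 0.0
```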
- RL-LLM-DT: An Automatic Decision Tree Generation Method Based on RL Evaluation and LLM Enhancement [82.02155942106877]
We propose RL-LLM-DT, an automatic decision tree generation method based on RL Evaluation and LLM Enhancement.
To evaluate the effectiveness of this integrated approach, we conducted experiments in a curling game.
arXiv Detail & Related papers (2024-12-16T03:33:49Z)
- Enhancing LLM Reasoning with Reward-guided Tree Search [95.06503095273395]
Developing an o1-like reasoning approach is challenging, and researchers have been making various attempts to advance this open area of research.
We present a preliminary exploration into enhancing the reasoning abilities of LLMs through reward-guided tree search algorithms.
arXiv Detail & Related papers (2024-11-18T16:15:17Z)
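The entry above stays high level, so the sketch below only shows the generic skeleton of reward-guided tree search over candidate reasoning steps; `propose_steps` (an LLM step generator) and `reward` (a learned reward model), along with the beam and depth limits, are hypothetical stand-ins rather than the paper's method.

```python
import heapq
from typing import Callable, List, Tuple

# Hypothetical components: an LLM that proposes next reasoning steps for a partial
# solution, and a reward model that scores a partial solution.
ProposeFn = Callable[[str], List[str]]
RewardFn = Callable[[str], float]

def reward_guided_search(question: str, propose_steps: ProposeFn, reward: RewardFn,
                         beam: int = 4, max_depth: int = 6) -> str:
    """Best-first search over reasoning paths, expanding the highest-reward node first."""
    # Max-heap via negated scores; each item is (-score, depth, partial_solution).
    frontier: List[Tuple[float, int, str]] = [(-reward(question), 0, question)]
    best_path, best_score = question, float("-inf")

    while frontier:
        neg_score, depth, path = heapq.heappop(frontier)
        score = -neg_score
        if score > best_score:
            best_path, best_score = path, score
        if depth >= max_depth:
            continue
        # Expand with candidate next steps and keep only the top-`beam` by reward.
        candidates = [path + "\n" + step for step in propose_steps(path)]
        for cand in sorted(candidates, key=reward, reverse=True)[:beam]:
            heapq.heappush(frontier, (-reward(cand), depth + 1, cand))

    return best_path
```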
- Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z)
- Online Learning of Decision Trees with Thompson Sampling [12.403737756721467]
Decision Trees are prominent prediction models for interpretable Machine Learning.
We devise a new Monte Carlo Tree Search algorithm, able to produce optimal Decision Trees in an online setting.
arXiv Detail & Related papers (2024-04-09T15:53:02Z)
- An Interpretable Client Decision Tree Aggregation process for Federated Learning [7.8973037023478785]
We propose an Interpretable Client Decision Tree aggregation process for Federated Learning scenarios.
This model is based on aggregating multiple decision paths of the decision trees and can be used on different decision tree types, such as ID3 and CART.
We carry out experiments on four datasets, and the analysis shows that the tree built with the model improves on the local models and outperforms the state-of-the-art.
arXiv Detail & Related papers (2024-04-03T06:53:56Z)
- Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z)
- Explaining Tree Model Decisions in Natural Language for Network Intrusion Detection [18.5400518912912]
Network intrusion detection (NID) systems which leverage machine learning have been shown to have strong performance in practice when used to detect malicious network traffic.
Decision trees in particular offer a strong balance between performance and simplicity, but require users of NID systems to have background knowledge in machine learning to interpret them.
In this work, we explore the use of large language models (LLMs) to provide explanations and additional background knowledge for decision tree NID systems.
arXiv Detail & Related papers (2023-10-30T15:40:34Z)
- Growing Deep Forests Efficiently with Soft Routing and Learned Connectivity [79.83903179393164]
This paper further extends the deep forest idea in several important aspects.
We employ a probabilistic tree whose nodes make probabilistic routing decisions, a.k.a. soft routing, rather than hard binary decisions.
Experiments on the MNIST dataset demonstrate that our empowered deep forests can achieve performance better than or comparable to [1], [3].
arXiv Detail & Related papers (2020-12-29T18:05:05Z)
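Soft routing, as described in the entry above, replaces the hard test at each internal node with a routing probability, so a sample's prediction is a probability-weighted mixture over leaves. The sigmoid-over-linear-projection parameterization below is a common choice for such probabilistic trees and is only assumed here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

class SoftLeaf:
    def __init__(self, class_probs: np.ndarray):
        self.class_probs = class_probs          # per-class distribution at the leaf

    def predict(self, x: np.ndarray) -> np.ndarray:
        return self.class_probs

class SoftNode:
    """Internal node that routes probabilistically instead of with a hard test."""
    def __init__(self, dim: int):
        self.w = rng.normal(size=dim)           # routing weights (randomly initialized here)
        self.b = 0.0
        self.left = None                        # SoftNode or SoftLeaf
        self.right = None

    def predict(self, x: np.ndarray) -> np.ndarray:
        p_left = sigmoid(self.w @ x + self.b)   # probability of routing left
        return p_left * self.left.predict(x) + (1 - p_left) * self.right.predict(x)

# Tiny depth-1 example on 2-D inputs with 2 classes.
root = SoftNode(dim=2)
root.left = SoftLeaf(np.array([0.9, 0.1]))
root.right = SoftLeaf(np.array([0.2, 0.8]))
print(root.predict(np.array([1.0, -0.5])))      # soft mixture of the two leaves
```

Because every operation in this formulation is differentiable, the routing weights can in principle be trained end-to-end by gradient descent.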
- MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.