VisRuler: Visual Analytics for Extracting Decision Rules from Bagged and Boosted Decision Trees
- URL: http://arxiv.org/abs/2112.00334v5
- Date: Thu, 18 Apr 2024 16:59:06 GMT
- Title: VisRuler: Visual Analytics for Extracting Decision Rules from Bagged and Boosted Decision Trees
- Authors: Angelos Chatzimparmpas, Rafael M. Martins, Andreas Kerren
- Abstract summary: Bagging and boosting are two popular ensemble methods in machine learning (ML) that produce many individual decision trees.
We propose a visual analytics tool that aims to assist users in extracting decisions from such ML models.
- Score: 3.5229503563299915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bagging and boosting are two popular ensemble methods in machine learning (ML) that produce many individual decision trees. Due to the inherent ensemble characteristic of these methods, they typically outperform single decision trees or other ML models in predictive performance. However, numerous decision paths are generated for each decision tree, increasing the overall complexity of the model and hindering its use in domains that require trustworthy and explainable decisions, such as finance, social care, and health care. Thus, the interpretability of bagging and boosting algorithms, such as random forest and adaptive boosting, reduces as the number of decisions rises. In this paper, we propose a visual analytics tool that aims to assist users in extracting decisions from such ML models via a thorough visual inspection workflow that includes selecting a set of robust and diverse models (originating from different ensemble learning algorithms), choosing important features according to their global contribution, and deciding which decisions are essential for global explanation (or locally, for specific cases). The outcome is a final decision based on the class agreement of several models and the explored manual decisions exported by users. We evaluated the applicability and effectiveness of VisRuler via a use case, a usage scenario, and a user study. The evaluation revealed that most users managed to successfully use our system to explore decision rules visually, performing the proposed tasks and answering the given questions in a satisfying way.
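The workflow described in the abstract lends itself to a small illustration. The sketch below is not VisRuler itself (VisRuler is an interactive visual analytics tool, not a script); it only approximates the underlying ingredients with scikit-learn: enumerating the decision paths of bagged and boosted trees as textual rules, ranking features by an averaged impurity-based global contribution, and accepting a final decision only where the two ensembles agree on a class. The dataset, model sizes, and the `extract_rules` helper are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import _tree

X, y = load_iris(return_X_y=True)
feature_names = [f"f{i}" for i in range(X.shape[1])]

# Bagged trees (random forest) and boosted trees (AdaBoost with its default depth-1 stumps).
rf = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)
ada = AdaBoostClassifier(n_estimators=25, random_state=0).fit(X, y)

def extract_rules(tree_clf, names):
    """Enumerate every root-to-leaf decision path of one fitted tree as a textual rule."""
    t = tree_clf.tree_
    rules = []

    def recurse(node, conditions):
        if t.feature[node] == _tree.TREE_UNDEFINED:        # leaf: emit the accumulated rule
            majority_class = int(np.argmax(t.value[node]))
            rules.append((" AND ".join(conditions) or "TRUE") + f" -> class {majority_class}")
            return
        name, thr = names[t.feature[node]], t.threshold[node]
        recurse(t.children_left[node], conditions + [f"{name} <= {thr:.2f}"])
        recurse(t.children_right[node], conditions + [f"{name} > {thr:.2f}"])

    recurse(0, [])
    return rules

# One rule per decision path, collected over all trees of both ensembles.
rf_rules = [r for est in rf.estimators_ for r in extract_rules(est, feature_names)]
ada_rules = [r for est in ada.estimators_ for r in extract_rules(est, feature_names)]
print(len(rf_rules), "forest rules,", len(ada_rules), "boosting rules; e.g.", rf_rules[0])

# Global feature contribution: average impurity-based importance across both ensembles.
global_importance = (rf.feature_importances_ + ada.feature_importances_) / 2
ranking = [feature_names[i] for i in np.argsort(global_importance)[::-1]]
print("features ranked by global contribution:", ranking)

# Final decision by class agreement: keep cases where both ensembles predict the same class,
# leaving disagreements (marked -1) for manual inspection.
rf_pred, ada_pred = rf.predict(X), ada.predict(X)
final = np.where(rf_pred == ada_pred, rf_pred, -1)
print(f"models agree on {np.mean(rf_pred == ada_pred):.0%} of cases")
```

The number of rules grows with ensemble size and tree depth, which is exactly the complexity the paper argues makes manual inspection hard and motivates a visual selection of robust, diverse models and essential decisions.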
Related papers
- Beyond Predictions: A Participatory Framework for Multi-Stakeholder Decision-Making [3.3044728148521623]
We propose a novel participatory framework that redefines decision-making as a multi-stakeholder optimization problem.
Our framework captures each actor's preferences through context-dependent reward functions.
We introduce a synthetic scoring mechanism that exploits user-defined preferences across multiple metrics to rank decision-making strategies.
arXiv Detail & Related papers (2025-02-12T16:27:40Z)
- Modeling Boundedly Rational Agents with Latent Inference Budgets [56.24971011281947]
We introduce a latent inference budget model (L-IBM) that models agents' computational constraints explicitly.
L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors.
We show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty.
arXiv Detail & Related papers (2023-12-07T03:55:51Z)
- One Model Many Scores: Using Multiverse Analysis to Prevent Fairness Hacking and Evaluate the Influence of Model Design Decisions [4.362723406385396]
We show how multiverse analysis can be used to better understand fairness implications of design and evaluation decisions.
Our results highlight how decisions regarding the evaluation of a system can lead to vastly different fairness metrics for the same model.
arXiv Detail & Related papers (2023-08-31T12:32:43Z)
- AVIS: Autonomous Visual Information Seeking with Large Language Model Agent [123.75169211547149]
We propose an autonomous information seeking visual question answering framework, AVIS.
Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools.
AVIS achieves state-of-the-art results on knowledge-intensive visual question answering benchmarks such as Infoseek and OK-VQA.
arXiv Detail & Related papers (2023-06-13T20:50:22Z)
- DeforestVis: Behavior Analysis of Machine Learning Models with Surrogate Decision Stumps [46.58231605323107]
We propose DeforestVis, a visual analytics tool that offers summarization of the behaviour of complex ML models.
DeforestVis helps users to explore the complexity versus fidelity trade-off by incrementally generating more stumps.
We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.
arXiv Detail & Related papers (2023-03-31T21:17:15Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- R(Det)^2: Randomized Decision Routing for Object Detection [64.48369663018376]
We propose a novel approach to combine decision trees and deep neural networks in an end-to-end learning manner for object detection.
To facilitate effective learning, we propose randomized decision routing with node selective and associative losses.
We name this approach randomized decision routing for object detection, abbreviated as R(Det)$^2$.
arXiv Detail & Related papers (2022-04-02T07:54:58Z)
- Exploiting Meta-Cognitive Features for a Machine-Learning-Based One-Shot Group-Decision Aggregation [0.7340017786387767]
Methods that rely on meta-cognitive information, such as confidence-based methods, have shown improvements in various tasks.
We aim to exploit and learn from meta-cognitive information in order to enhance the group's ability to produce a correct answer.
arXiv Detail & Related papers (2022-01-20T15:56:18Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)