DeforestVis: Behavior Analysis of Machine Learning Models with Surrogate Decision Stumps
- URL: http://arxiv.org/abs/2304.00133v5
- Date: Thu, 18 Apr 2024 16:46:45 GMT
- Title: DeforestVis: Behavior Analysis of Machine Learning Models with Surrogate Decision Stumps
- Authors: Angelos Chatzimparmpas, Rafael M. Martins, Alexandru C. Telea, Andreas Kerren
- Abstract summary: We propose DeforestVis, a visual analytics tool that summarizes the behaviour of complex ML models with surrogate decision stumps.
DeforestVis helps users explore the complexity versus fidelity trade-off by incrementally generating more stumps.
We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.
- Score: 46.58231605323107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the complexity of machine learning (ML) models increases and their application in different (and critical) domains grows, there is a strong demand for more interpretable and trustworthy ML. A direct, model-agnostic way to interpret such models is to train surrogate models, such as rule sets and decision trees, that sufficiently approximate the original ones while being simpler and easier to explain. Yet, rule sets can become very lengthy, with many if-else statements, and decision tree depth grows rapidly when accurately emulating complex ML models. In such cases, both approaches can fail to meet their core goal: providing users with model interpretability. To tackle this, we propose DeforestVis, a visual analytics tool that summarizes the behaviour of complex ML models with surrogate decision stumps (one-level decision trees) generated with the Adaptive Boosting (AdaBoost) technique. DeforestVis helps users explore the complexity versus fidelity trade-off by incrementally generating more stumps, create attribute-based explanations with weighted stumps to justify decision making, and analyse the impact of rule overriding on training instance allocation between one or more stumps. An independent test set allows users to monitor the effectiveness of manual rule changes and form hypotheses based on case-by-case analyses. We show the applicability and usefulness of DeforestVis with two use cases and expert interviews with data analysts and model developers.
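To make the core mechanism concrete, the sketch below is a minimal illustration assuming scikit-learn; it is not the authors' DeforestVis implementation, and the dataset, the complex model, and the stump counts are illustrative assumptions. It fits AdaBoost decision stumps to a complex model's predictions and tracks fidelity on a held-out test set as more stumps are added.
```python
# Minimal sketch, assuming scikit-learn; this is NOT the authors' DeforestVis
# implementation, only an illustration of the surrogate-stump idea. The
# dataset, the "complex" model, and the stump counts are all assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The complex model whose behaviour we want to summarize.
complex_model = RandomForestClassifier(n_estimators=300, random_state=0)
complex_model.fit(X_train, y_train)

# The surrogate is trained on the complex model's predictions, not the labels.
y_surrogate = complex_model.predict(X_train)

for n_stumps in (5, 10, 25, 50):
    # AdaBoost's default base learner is a depth-1 decision tree, i.e. a stump.
    surrogate = AdaBoostClassifier(n_estimators=n_stumps, random_state=0)
    surrogate.fit(X_train, y_surrogate)

    # Fidelity: how often the surrogate agrees with the complex model on an
    # independent test set (the complexity versus fidelity trade-off).
    fidelity = accuracy_score(complex_model.predict(X_test),
                              surrogate.predict(X_test))
    print(f"{n_stumps:3d} stumps -> fidelity {fidelity:.3f}")

# Each stump is a one-feature rule (feature index, threshold) with a boosting
# weight: the raw material for attribute-based explanations.
for stump, weight in zip(surrogate.estimators_[:5], surrogate.estimator_weights_[:5]):
    print(f"feature {stump.tree_.feature[0]} <= {stump.tree_.threshold[0]:.3f} "
          f"(weight {weight:.2f})")
```
Training the surrogate on the complex model's predicted labels rather than the ground truth is what makes it a behavioural approximation, and its agreement rate on unseen data plays the role of the fidelity measure discussed above.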
Related papers
- Disentangling Memory and Reasoning Ability in Large Language Models [97.26827060106581]
We propose a new inference paradigm that decomposes the complex inference process into two distinct and clear actions.
Our experiment results show that this decomposition improves model performance and enhances the interpretability of the inference process.
arXiv Detail & Related papers (2024-11-20T17:55:38Z)
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Expert (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique, UNCURL, to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
- LLMs can learn self-restraint through iterative self-reflection [57.26854891567574]
Large Language Models (LLMs) must be capable of dynamically adapting their behavior based on their level of knowledge and uncertainty associated with specific topics.
This adaptive behavior, which we refer to as self-restraint, is non-trivial to teach.
We devise a utility function that can encourage the model to produce responses only when it is confident in them.
arXiv Detail & Related papers (2024-05-15T13:35:43Z)
- Increasing Performance And Sample Efficiency With Model-agnostic Interactive Feature Attributions [3.0655581300025996]
We provide model-agnostic implementations for two popular explanation methods (Occlusion and Shapley values) to enforce entirely different attributions in the complex model.
We show how our proposed approach can significantly improve the model's performance simply by augmenting its training dataset based on corrected explanations.
arXiv Detail & Related papers (2023-06-28T15:23:28Z)
- Interpretability at Scale: Identifying Causal Mechanisms in Alpaca [62.65877150123775]
We use Boundless DAS to efficiently search for interpretable causal structure in large language models while they follow instructions.
Our findings mark a first step toward faithfully understanding the inner workings of our ever-growing and most widely deployed language models.
arXiv Detail & Related papers (2023-05-15T17:15:40Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Using Shape Metrics to Describe 2D Data Points [0.0]
We propose to use shape metrics to describe 2D data to help make analyses more explainable and interpretable.
This is particularly important in applications in the medical community, where the 'right to explainability' is crucial.
arXiv Detail & Related papers (2022-01-27T23:28:42Z)
- Visual Exploration of Machine Learning Model Behavior with Hierarchical Surrogate Rule Sets [13.94542147252982]
We present Hierarchical Surrogate Rules (HSR), an algorithm that generates hierarchical rules based on user-defined parameters.
We also contribute SuRE, a visual analytics (VA) system that integrates HSR and interactive surrogate rule visualizations.
We evaluate the algorithm in terms of parameter sensitivity, time performance, and comparison with surrogate decision trees.
arXiv Detail & Related papers (2022-01-19T17:03:35Z)
- VisRuler: Visual Analytics for Extracting Decision Rules from Bagged and Boosted Decision Trees [3.5229503563299915]
Bagging and boosting are two popular ensemble methods in machine learning (ML) that produce many individual decision trees.
We propose a visual analytics tool that aims to assist users in extracting decisions from such ML models (see the rule-extraction sketch after this list).
arXiv Detail & Related papers (2021-12-01T08:01:02Z)
- Learning Causal Models of Autonomous Agents using Interventions [11.351235628684252]
We extend the analysis of an agent assessment module that lets an AI system execute high-level instruction sequences in simulators.
We show that such a primitive query-response capability is sufficient to efficiently derive a user-interpretable causal model of the system.
arXiv Detail & Related papers (2021-08-21T21:33:26Z)
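As a companion to the VisRuler entry above, the following minimal sketch assumes scikit-learn; it is not VisRuler itself, and the dataset and hyperparameters are illustrative assumptions. It shows that every member of a bagged or boosted ensemble is an ordinary decision tree whose decision rules can be listed directly.
```python
# Minimal sketch, assuming scikit-learn; this is NOT VisRuler itself, only an
# illustration that every member of a bagged or boosted ensemble is a plain
# decision tree whose rules can be listed. Dataset and settings are assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import export_text

data = load_iris()
X, y, feature_names = data.data, data.target, data.feature_names

# A bagging-style ensemble (kept shallow so the printed rules stay readable)
# and a boosted ensemble of decision stumps.
bagged = RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0).fit(X, y)
boosted = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)

# Print the decision rules of the first tree in each ensemble.
for name, ensemble in [("bagging", bagged), ("boosting", boosted)]:
    first_tree = ensemble.estimators_[0]
    print(f"--- first {name} tree ---")
    print(export_text(first_tree, feature_names=feature_names))
```
A tool such as VisRuler goes beyond this by aggregating and visually comparing rules across all trees, which this sketch does not attempt.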
This list is automatically generated from the titles and abstracts of the papers in this site.