SMACE: A New Method for the Interpretability of Composite Decision
Systems
- URL: http://arxiv.org/abs/2111.08749v1
- Date: Tue, 16 Nov 2021 19:37:35 GMT
- Title: SMACE: A New Method for the Interpretability of Composite Decision
Systems
- Authors: Gianluigi Lopardo, Damien Garreau, Frederic Precioso, Greger Ottosson
- Abstract summary: We present SMACE, Semi-Model-Agnostic Contextual Explainer, a novel interpretability method for decision systems.
We show that established model-agnostic approaches produce poor results in this framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretability is a pressing issue for decision systems. Many post hoc
methods have been proposed to explain the predictions of any machine learning
model. However, business processes and decision systems are rarely centered
around a single, standalone model. These systems combine multiple models that
produce key predictions, and then apply decision rules to generate the final
decision. To explain such decisions, we present SMACE, Semi-Model-Agnostic
Contextual Explainer, a novel interpretability method that combines a geometric
approach for decision rules with existing post hoc solutions for machine
learning models to generate an intuitive feature ranking tailored to the end
user. We show that established model-agnostic approaches produce poor results
in this framework.
Related papers
- An Axiomatic Approach to Model-Agnostic Concept Explanations [67.84000759813435]
We propose an approach to concept explanations that satisfy three natural axioms: linearity, recursivity, and similarity.
We then establish connections with previous concept explanation methods, offering insight into their varying semantic meanings.
arXiv Detail & Related papers (2024-01-12T20:53:35Z) - A Meta-heuristic Approach to Estimate and Explain Classifier Uncertainty [0.4264192013842096]
This work proposes a set of class-independent meta-heuristics that can characterize the complexity of an instance in terms of factors that are mutually relevant to both human and machine learning decision-making.
The proposed measures and framework hold promise for improving model development for more complex instances, as well as providing a new means of model abstention and explanation.
arXiv Detail & Related papers (2023-04-20T13:09:28Z) - Explainable Data-Driven Optimization: From Context to Decision and Back
Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z) - HEX: Human-in-the-loop Explainability via Deep Reinforcement Learning [2.322461721824713]
We propose HEX, a human-in-the-loop deep reinforcement learning approach to machine learning explainability (MLX).
Our formulation explicitly considers the decision boundary of the ML model in question, rather than the underlying training data.
Our proposed methods thus synthesize HITL MLX policies that explicitly capture the decision boundary of the model in question for use in limited data scenarios.
arXiv Detail & Related papers (2022-06-02T23:53:40Z) - Adversarial Robustness Verification and Attack Synthesis in Stochastic
Systems [8.833548357664606]
We develop a formal framework for adversarial robustness in systems defined as discrete time Markov chains (DTMCs).
We outline a class of threat models under which adversaries can perturb system transitions, constrained by an $\varepsilon$-ball around the original transition probabilities.
arXiv Detail & Related papers (2021-10-05T15:52:47Z) - Modularity in Reinforcement Learning via Algorithmic Independence in
Credit Assignment [79.5678820246642]
We show that certain action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
We generalize the recently proposed societal decision-making framework as a more granular formalism than the Markov decision process.
arXiv Detail & Related papers (2021-06-28T21:29:13Z) - Dissecting Generation Modes for Abstractive Summarization Models via
Ablation and Attribution [34.2658286826597]
We propose a two-step method to interpret summarization model decisions.
We first analyze the model's behavior by ablating the full model to categorize each decoder decision into one of several generation modes.
After isolating decisions that do depend on the input, we explore interpreting these decisions using several different attribution methods.
arXiv Detail & Related papers (2021-06-03T00:54:16Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, non-convex optimal control problems (a sketch of the Stein variational update follows this entry).
arXiv Detail & Related papers (2020-11-15T22:36:59Z)