Optimal Local Explainer Aggregation for Interpretable Prediction
- URL: http://arxiv.org/abs/2003.09466v2
- Date: Sun, 15 Nov 2020 21:18:10 GMT
- Title: Optimal Local Explainer Aggregation for Interpretable Prediction
- Authors: Qiaomei Li and Rachel Cummings and Yonatan Mintz
- Abstract summary: A key challenge for decision makers incorporating black-box machine-learned models is understanding the predictions these models provide.
One proposed method is training surrogate explainer models that approximate the more complex model.
We propose a novel local explainer algorithm based on information filtering.
- Score: 12.934180951771596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A key challenge for decision makers when incorporating black box machine
learned models into practice is being able to understand the predictions
provided by these models. One proposed set of methods is training surrogate
explainer models which approximate the more complex model. Explainer methods
are generally classified as either local or global, depending on what portion
of the data space they are purported to explain. The improved coverage of
global explainers usually comes at the expense of explainer fidelity. One way
of trading off the advantages of both approaches is to aggregate several local
explainers into a single explainer model with improved coverage. However, the
problem of aggregating these local explainers is computationally challenging,
and existing methods only use heuristics to form these aggregations.
In this paper we propose a local explainer aggregation method which selects
local explainers using non-convex optimization. In contrast to other heuristic
methods, we use an integer optimization framework to combine local explainers
into a near-global aggregate explainer. Our framework allows a decision-maker
to directly tradeoff coverage and fidelity of the resulting aggregation through
the parameters of the optimization problem. We also propose a novel local
explainer algorithm based on information filtering. We evaluate our algorithmic
framework on two healthcare datasets---the Parkinson's Progression Marker
Initiative (PPMI) data set and a geriatric mobility dataset---a choice
motivated by the anticipated need for explainable precision medicine. Our
method outperforms existing local explainer aggregation methods in terms of
both fidelity and coverage of classification and improves on fidelity over
existing global explainer methods, particularly in multi-class settings where
state-of-the-art methods achieve 70% and ours achieves 90%.
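The aggregation idea in the abstract, selecting a subset of local explainers so as to trade off coverage against fidelity through the parameters of an optimization problem, can be illustrated with a toy sketch. The explainer names, neighborhoods, and fidelity scores below are invented for illustration, and the brute-force subset enumeration stands in for the paper's actual integer optimization formulation, which is not reproduced here:

```python
from itertools import combinations

# Hypothetical candidate local explainers: each covers a set of data-point
# indices (its neighborhood) and has a fidelity score on that neighborhood.
explainers = {
    "e1": ({0, 1, 2, 3}, 0.95),
    "e2": ({3, 4, 5}, 0.90),
    "e3": ({5, 6, 7, 8}, 0.80),
    "e4": ({0, 1, 8, 9}, 0.70),
}

def select_explainers(explainers, k, min_fidelity, n_points):
    """Pick at most k explainers, each with fidelity >= min_fidelity,
    maximizing the number of data points covered (brute-force search)."""
    eligible = {name: cov for name, (cov, fid) in explainers.items()
                if fid >= min_fidelity}
    best, best_cov = (), set()
    for r in range(1, k + 1):
        for subset in combinations(eligible, r):
            covered = set().union(*(eligible[s] for s in subset))
            if len(covered) > len(best_cov):
                best, best_cov = subset, covered
    return best, len(best_cov) / n_points

chosen, coverage = select_explainers(explainers, k=2,
                                     min_fidelity=0.75, n_points=10)
print(chosen, coverage)  # e4 is filtered out by the fidelity floor
```

Raising `min_fidelity` shrinks the eligible pool and typically lowers the achievable coverage, which is exactly the fidelity/coverage tradeoff the decision-maker controls.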
Related papers
- GLEAMS: Bridging the Gap Between Local and Global Explanations [6.329021279685856]
We propose GLEAMS, a novel method that partitions the input space and learns an interpretable model within each sub-region.
We demonstrate GLEAMS' effectiveness on both synthetic and real-world data, highlighting its desirable properties and human-understandable insights.
arXiv Detail & Related papers (2024-08-09T13:30:37Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
- Principles and Algorithms for Forecasting Groups of Time Series: Locality and Globality [0.5076419064097732]
We formalize the setting of forecasting a set of time series with local and global methods.
Global models can succeed in a wider range of problems than previously thought.
Purposely naive algorithms derived from these principles result in superior accuracy.
arXiv Detail & Related papers (2020-08-02T10:22:05Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Explaining Predictions by Approximating the Local Decision Boundary [3.60160227126201]
We present a new procedure for local decision boundary approximation (DBA).
We train a variational autoencoder to learn a Euclidean latent space of encoded data representations.
We exploit attribute annotations to map the latent space to attributes that are meaningful to the user.
arXiv Detail & Related papers (2020-06-14T19:12:42Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- A flexible outlier detector based on a topology given by graph communities [0.0]
Anomaly detection is essential for optimal performance of machine learning methods and statistical predictive models.
Topology is computed using the communities of a weighted graph codifying mutual nearest neighbors in the feature space.
Our approach overall outperforms, both, local and global strategies in multi and single view settings.
arXiv Detail & Related papers (2020-02-18T18:40:31Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.