Boundary-Aware Uncertainty for Feature Attribution Explainers
- URL: http://arxiv.org/abs/2210.02419v5
- Date: Mon, 4 Mar 2024 06:20:41 GMT
- Title: Boundary-Aware Uncertainty for Feature Attribution Explainers
- Authors: Davin Hill, Aria Masoomi, Max Torop, Sandesh Ghimire, Jennifer Dy
- Abstract summary: We propose a unified uncertainty estimate combining decision boundary-aware uncertainty with explanation function approximation uncertainty.
We show theoretically that the proposed kernel similarity increases with decision boundary complexity.
Empirical results on multiple datasets show that the GPEC uncertainty estimate improves understanding of explanations as compared to existing methods.
- Score: 4.2130431095114895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Post-hoc explanation methods have become a critical tool for understanding
black-box classifiers in high-stakes applications. However, high-performing
classifiers are often highly nonlinear and can exhibit complex behavior around
the decision boundary, leading to brittle or misleading local explanations.
Therefore, there is a pressing need to quantify the uncertainty of such
explanation methods in order to understand when explanations are trustworthy.
In this work we propose the Gaussian Process Explanation UnCertainty (GPEC)
framework, which generates a unified uncertainty estimate combining decision
boundary-aware uncertainty with explanation function approximation uncertainty.
We introduce a novel geodesic-based kernel, which captures the complexity of
the target black-box decision boundary. We show theoretically that the proposed
kernel similarity increases with decision boundary complexity. The proposed
framework is highly flexible; it can be used with any black-box classifier and
feature attribution method. Empirical results on multiple tabular and image
datasets show that the GPEC uncertainty estimate improves understanding of
explanations as compared to existing methods.
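As a very rough illustration of the overall idea (not the authors' implementation), the sketch below fits a Gaussian process to feature attributions produced by a simple explainer and reads the posterior standard deviation as an explanation-uncertainty estimate; a plain RBF kernel stands in for GPEC's geodesic, boundary-aware kernel, and the classifier, explainer, and all function names are hypothetical.

```python
# Minimal sketch of GP-based explanation uncertainty (illustrative only).
# An RBF kernel is used as a stand-in for GPEC's geodesic, boundary-aware kernel.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Black-box classifier with a nonlinear decision boundary.
X = rng.normal(size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X, y)

def local_attribution(x, eps=1e-2):
    """Crude finite-difference attribution of the positive-class probability."""
    base = clf.predict_proba(x[None])[0, 1]
    grads = []
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        grads.append((clf.predict_proba(xp[None])[0, 1] - base) / eps)
    return np.array(grads)

# Explain a sample of points, then fit one GP per attribution dimension.
X_exp = rng.normal(size=(100, 2))
E = np.array([local_attribution(x) for x in X_exp])

kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_exp, E[:, j])
       for j in range(E.shape[1])]

# Explanation uncertainty at new points = GP posterior standard deviation per feature.
for x in rng.normal(size=(5, 2)):
    stds = [gp.predict(x[None], return_std=True)[1][0] for gp in gps]
    print(np.round(x, 2), "per-feature explanation uncertainty:", np.round(stds, 3))
```

In the actual framework the geodesic kernel ties similarity to the shape of the black-box decision boundary, so the reported uncertainty reflects boundary complexity as well as explainer variability.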
Related papers
- Learning Model Agnostic Explanations via Constraint Programming [8.257194221102225]
Interpretable Machine Learning faces a recurring challenge of explaining predictions made by opaque classifiers in terms that are understandable to humans.
In this paper, the task is framed as a Constraint Optimization Problem, where the constraint solver seeks an explanation of minimum error and bounded size for an input data instance and a set of samples generated by the black box.
We evaluate the approach empirically on various datasets and show that it statistically outperforms the state-of-the-art Anchors method.
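As a rough sketch of that formulation (not the paper's constraint solver), the snippet below brute-forces a bounded-size set of features which, when fixed to the instance's values, best preserves the black-box prediction on generated samples; the toy model, sampling scheme, and the `explain` helper are illustrative assumptions.

```python
# Illustrative brute-force search for a bounded-size, minimum-error explanation.
# (A toy stand-in for the constraint-optimization formulation, not the paper's solver.)
from itertools import combinations
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 4))
y = ((X[:, 0] + X[:, 1] ** 2) > 1).astype(int)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def explain(x, max_size=2, n_samples=200):
    """Return the feature subset (|S| <= max_size) that, when fixed to x's values,
    best preserves the black-box prediction on samples drawn around x."""
    target = black_box.predict(x[None])[0]
    Z = x + rng.normal(scale=1.0, size=(n_samples, len(x)))   # samples around x
    best = (1.0, ())
    for size in range(1, max_size + 1):
        for S in combinations(range(len(x)), size):
            idx = list(S)
            Z_fixed = Z.copy()
            Z_fixed[:, idx] = x[idx]                          # impose the "rule"
            err = np.mean(black_box.predict(Z_fixed) != target)
            best = min(best, (err, S))
    return best  # (error, explaining feature subset)

x0 = X[0]
print("instance:", np.round(x0, 2), "->", explain(x0))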
arXiv Detail & Related papers (2024-11-13T09:55:59Z)
- Spectral Representations for Accurate Causal Uncertainty Quantification with Gaussian Processes [19.449942440902593]
We introduce a method, IMPspec, that addresses limitations via a spectral representation of the Hilbert space.
We show that posteriors in this model can be obtained explicitly, by extending a result in Hilbert space regression theory.
We also learn the spectral representation to optimise posterior calibration.
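IMPspec itself is not reproduced here; purely to illustrate what a spectral representation looks like in this setting, the sketch below computes a Gaussian process posterior through an eigendecomposition of the kernel Gram matrix, so the posterior mean and variance are expressed in the kernel's spectral basis. All data and parameters are made up.

```python
# Generic illustration: GP regression posterior via a spectral (eigen) decomposition
# of the kernel Gram matrix. Not the IMPspec estimator, just the underlying representation.
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)
noise = 0.1 ** 2

K = rbf(X, X)
lam, V = np.linalg.eigh(K)                     # spectral representation K = V diag(lam) V^T
alpha = V @ ((V.T @ y) / (lam + noise))        # (K + noise I)^-1 y in the eigenbasis

X_test = np.linspace(-3, 3, 5)[:, None]
K_star = rbf(X_test, X)
mean = K_star @ alpha
# Posterior variance: k(x,x) - k_*^T (K + noise I)^{-1} k_*
W = (V.T @ K_star.T) / np.sqrt(lam + noise)[:, None]
var = rbf(X_test, X_test).diagonal() - (W ** 2).sum(0)

for x, m, v in zip(X_test[:, 0], mean, var):
    print(f"x={x:+.2f}  mean={m:+.3f}  std={np.sqrt(max(v, 0)):.3f}")
```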
arXiv Detail & Related papers (2024-10-18T14:06:49Z)
- Explaining Predictive Uncertainty by Exposing Second-Order Effects [13.83164409095901]
We present a new method for explaining predictive uncertainty based on second-order effects.
Our method is generally applicable, allowing for turning common attribution techniques into powerful second-order uncertainty explainers.
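The paper's general recipe is not reproduced here; as a toy illustration of what a second-order effect means for uncertainty, the sketch below uses an ensemble of linear models, where the input gradient of the predictive variance reduces to a covariance between member predictions and member gradients, i.e. a product of first-order terms. The setup and names are hypothetical.

```python
# Toy illustration of a "second-order" uncertainty attribution:
# for an ensemble of linear members f_i(x) = w_i.x + b_i, the gradient of the
# predictive variance is a covariance of first-order quantities (predictions x gradients).
import numpy as np

rng = np.random.default_rng(3)
n_members, d = 20, 4
W = rng.normal(size=(n_members, d))      # per-member weights (their first-order gradients)
b = rng.normal(size=n_members)

def members(x):
    return W @ x + b                      # ensemble predictions at x

def predictive_variance(x):
    return members(x).var()

def variance_attribution(x):
    """Analytic gradient of the predictive variance w.r.t. the input:
    2 * Cov_members(f_i(x), w_i) -- a product of first-order effects."""
    f = members(x)
    return 2 * ((f - f.mean())[:, None] * (W - W.mean(0))).mean(0)

x = rng.normal(size=d)
# Sanity check against finite differences.
eps, fd = 1e-5, np.zeros(d)
for j in range(d):
    e = np.zeros(d)
    e[j] = eps
    fd[j] = (predictive_variance(x + e) - predictive_variance(x - e)) / (2 * eps)

print("analytic          :", np.round(variance_attribution(x), 4))
print("finite-difference :", np.round(fd, 4))
```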
arXiv Detail & Related papers (2024-01-30T21:02:21Z)
- Information-Theoretic Safe Exploration with Gaussian Processes [89.31922008981735]
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an unknown (safety) constraint.
Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case.
We propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.
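As a rough sketch of the ingredients (the acquisition below is a simple variance-based proxy, not the paper's information-theoretic criterion), one can keep a GP over the unknown constraint, restrict evaluations to points that are safe with high posterior probability, and query the safe point about which the constraint is most uncertain; the constraint function, thresholds, and seed point are made up.

```python
# Rough sketch of GP-based safe exploration on a 1-D parameter space.
# The acquisition (posterior std among high-probability-safe points) is a simple
# stand-in for the paper's information-theoretic criterion.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def safety(x):                      # unknown constraint; safe where safety(x) >= 0
    return 1.0 - x ** 2

rng = np.random.default_rng(4)
candidates = np.linspace(-2, 2, 201)
X_obs = np.array([0.0])             # start from a known safe seed point
y_obs = safety(X_obs)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)

for step in range(10):
    gp.fit(X_obs[:, None], y_obs)
    mu, sd = gp.predict(candidates[:, None], return_std=True)
    p_safe = norm.cdf(mu / np.maximum(sd, 1e-9))       # P(constraint >= 0)
    safe = p_safe >= 0.95
    # Among high-probability-safe candidates, evaluate where the constraint is least certain.
    score = np.where(safe, sd, -np.inf)
    x_next = candidates[np.argmax(score)]
    X_obs = np.append(X_obs, x_next)
    y_obs = np.append(y_obs, safety(x_next))
    print(f"step {step}: evaluated x={x_next:+.2f}, true safety={safety(x_next):+.2f}")
```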
arXiv Detail & Related papers (2022-12-09T15:23:58Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization [68.15353480798244]
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
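A minimal, hypothetical sketch of the idea (not the paper's algorithm): fit a density model on training features, then split flagged test points into out-of-distribution-style uncertainty (low density) versus boundary-style uncertainty (in-distribution but ambiguous prediction). The data, the Gaussian-mixture density, and the flagging rule are illustrative choices.

```python
# Sketch: categorize "uncertain" examples using a latent density model.
# Low density -> OOD-style uncertainty; high density + ambiguous prediction
# -> boundary-style uncertainty. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X_train = np.vstack([rng.normal([-1.5, 0.0], 1.0, (200, 2)),
                     rng.normal([1.5, 0.0], 1.0, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X_train, y_train)
density = GaussianMixture(n_components=2, random_state=0).fit(X_train)  # density model

# Test points: one far from the data, one near the decision boundary.
X_test = np.array([[0.0, 8.0], [0.0, 0.0]])
proba = clf.predict_proba(X_test)[:, 1]
confidence = np.abs(proba - 0.5)               # low -> flagged as uncertain
log_dens = density.score_samples(X_test)
dens_cutoff = np.quantile(density.score_samples(X_train), 0.05)

for x, c, ld in zip(X_test, confidence, log_dens):
    if c < 0.2:                                 # flagged by a simple UQ rule
        kind = "OOD-type" if ld < dens_cutoff else "boundary-type"
        print(f"{x} flagged: {kind} uncertainty (log-density {ld:.1f})")
```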
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
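The computation-aware construction itself is not reproduced here; the sketch below only illustrates the accounting, with a subset-of-data posterior standing in for a partial computation: its excess variance over the full-data posterior plays the role of computational uncertainty, so the variance it reports covers both limited data and limited computation. All numbers are made up.

```python
# Illustration of separating "mathematical" (data-limited) from "computational"
# (approximation-induced) uncertainty in GP regression. A subset-of-data posterior
# stands in for an approximate/partial computation; not the paper's estimator.
import numpy as np

def rbf(A, B, ls=0.7):
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) ** 2) / ls ** 2)

rng = np.random.default_rng(6)
X = np.sort(rng.uniform(-3, 3, 200))
y = np.sin(2 * X) + 0.1 * rng.normal(size=200)
noise = 0.1 ** 2

def posterior_var(X_train, x_star):
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = rbf(np.atleast_1d(x_star), X_train)
    return 1.0 - (k_star @ np.linalg.solve(K, k_star.T)).diagonal()

x_star = np.array([0.5])
exact_var = posterior_var(X, x_star)[0]              # uses all data (expensive)
approx_var = posterior_var(X[::10], x_star)[0]       # "partial computation": 20 points

computational_uncertainty = approx_var - exact_var   # >= 0 for subset-of-data posteriors
print(f"exact posterior variance       : {exact_var:.5f}")
print(f"approximate posterior variance : {approx_var:.5f}")
print(f"  = mathematical + computational uncertainty "
      f"({exact_var:.5f} + {computational_uncertainty:.5f})")
```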
arXiv Detail & Related papers (2022-05-30T22:16:25Z)
- A Study on Mitigating Hard Boundaries of Decision-Tree-based Uncertainty Estimates for AI Models [0.0]
Uncertainty wrappers use a decision-tree approach to cluster input-quality-related uncertainties, assigning each input strictly to a single uncertainty cluster.
Our objective is to replace this with an approach that mitigates hard decision boundaries while preserving interpretability, runtime complexity, and prediction performance.
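A minimal sketch of the mitigation idea, with made-up numbers: rather than switching between the uncertainty estimates of the clusters on either side of a decision-tree split, blend them with a sigmoid membership over the distance to the split threshold.

```python
# Sketch: softening a hard decision-tree split used by an uncertainty wrapper.
# Two leaf "uncertainty clusters" with different error-rate estimates; membership
# near the split threshold is blended instead of switched. Values are made up.
import numpy as np

threshold = 0.5            # split on a single input-quality feature, e.g. image blur
leaf_uncertainty = {"left": 0.02, "right": 0.30}   # estimated error rates per cluster

def hard_uncertainty(q):
    return leaf_uncertainty["left"] if q <= threshold else leaf_uncertainty["right"]

def soft_uncertainty(q, temperature=0.05):
    """Sigmoid membership in the 'right' cluster; smooth around the threshold."""
    w_right = 1.0 / (1.0 + np.exp(-(q - threshold) / temperature))
    return (1 - w_right) * leaf_uncertainty["left"] + w_right * leaf_uncertainty["right"]

for q in [0.30, 0.48, 0.50, 0.52, 0.70]:
    print(f"quality={q:.2f}  hard={hard_uncertainty(q):.3f}  soft={soft_uncertainty(q):.3f}")
```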
arXiv Detail & Related papers (2022-01-10T10:29:12Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
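As a rough, hypothetical sketch (not the paper's algorithm), an elimination-style kernelized bandit loop can widen its confidence intervals by the assumed misspecification level $\epsilon$, so candidates are only discarded when they are suboptimal even for functions that deviate from the RKHS model by up to $\epsilon$; the reward function, constants, and loop structure are illustrative.

```python
# Sketch of a misspecification-aware kernelized bandit loop (elimination style).
# Confidence intervals are widened by the assumed uniform approximation error
# epsilon; illustrative only, not the paper's exact algorithm.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def reward(x):
    # Smooth "RKHS-like" part plus a small rough deviation (the misspecification).
    return np.sin(3 * x) + 0.05 * np.sign(np.sin(40 * x))

epsilon, beta = 0.05, 2.0          # assumed misspecification level and confidence width
candidates = np.linspace(0, 2, 200)

rng = np.random.default_rng(7)
X_obs = [float(rng.uniform(0, 2))]
y_obs = [reward(X_obs[0]) + 0.01 * rng.normal()]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)

for t in range(15):
    gp.fit(np.array(X_obs)[:, None], np.array(y_obs))
    mu, sd = gp.predict(candidates[:, None], return_std=True)
    lcb, ucb = mu - beta * sd - epsilon, mu + beta * sd + epsilon
    keep = ucb >= lcb.max()                    # keep every point that could still be optimal
    candidates, mu, sd = candidates[keep], mu[keep], sd[keep]
    x_next = candidates[np.argmax(sd)]         # evaluate the most uncertain survivor
    X_obs.append(float(x_next))
    y_obs.append(reward(x_next) + 0.01 * rng.normal())

print(f"{len(candidates)} candidates remain; recommended x = {candidates[np.argmax(mu)]:.3f}")
```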
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
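A minimal sketch of the flavour of such a bound (made-up numbers, not the paper's construction): evaluate GP error bounds over a grid of plausible lengthscales standing in for a hyperparameter confidence region, and report the worst case, so the interval stays valid even if a single point estimate of the hyperparameters is wrong.

```python
# Sketch: robust GP error bound as the worst case over a hyperparameter region.
# The grid of lengthscales stands in for a confidence region over hyperparameters;
# the data, the region, and the scaling factor beta are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(8)
X = rng.uniform(-3, 3, (30, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=30)

x_star = np.array([[0.8]])
beta = 2.0
lengthscale_region = [0.2, 0.5, 1.0, 2.0]     # plausible hyperparameter values

upper_bounds, lower_bounds = [], []
for ls in lengthscale_region:
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=ls),
                                  optimizer=None, alpha=0.05 ** 2)
    gp.fit(X, y)
    mu, sd = gp.predict(x_star, return_std=True)
    upper_bounds.append(mu[0] + beta * sd[0])
    lower_bounds.append(mu[0] - beta * sd[0])

print(f"robust interval at x*=0.8: [{min(lower_bounds):.3f}, {max(upper_bounds):.3f}]")
print(f"true value               : {np.sin(0.8):.3f}")
```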
arXiv Detail & Related papers (2021-09-06T17:10:01Z)
- Stochastic Gradient Descent-Ascent and Consensus Optimization for Smooth Games: Convergence Analysis under Expected Co-coercivity [49.66890309455787]
We introduce the expected co-coercivity condition, explain its benefits, and provide the first last-iterate convergence guarantees of SGDA and SCO.
We prove linear convergence of both methods to a neighborhood of the solution when they use constant step-size.
Our convergence guarantees hold under the arbitrary sampling paradigm, and we give insights into the complexity of minibatching.
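A self-contained toy illustration (the game and all constants are made up): stochastic gradient descent-ascent with a constant step size on a strongly convex-strongly concave quadratic game, whose last iterate settles into a noise ball around the saddle point, the kind of behaviour the guarantee above describes.

```python
# Toy SGDA with constant step size on min_x max_y  (mu/2)||x||^2 + x.A y - (mu/2)||y||^2.
# The iterates converge to a neighbourhood of the saddle point (0, 0) whose size is
# set by the gradient noise and the step size. Illustrative constants.
import numpy as np

rng = np.random.default_rng(9)
d, mu_strong, noise = 5, 1.0, 0.1
A = rng.normal(size=(d, d)) / np.sqrt(d)

x, y = rng.normal(size=d), rng.normal(size=d)
step = 0.05

for t in range(1, 5001):
    gx = mu_strong * x + A @ y + noise * rng.normal(size=d)      # stochastic grad w.r.t. x
    gy = A.T @ x - mu_strong * y + noise * rng.normal(size=d)    # stochastic grad w.r.t. y
    x, y = x - step * gx, y + step * gy                          # simultaneous SGDA update
    if t % 1000 == 0:
        print(f"iter {t:5d}  distance to saddle: {np.sqrt(x @ x + y @ y):.4f}")
```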
arXiv Detail & Related papers (2021-06-30T18:32:46Z)
- Worst-Case Risk Quantification under Distributional Ambiguity using Kernel Mean Embedding in Moment Problem [17.909696462645023]
We propose to quantify the worst-case risk under distributional ambiguity using the kernel mean embedding.
We numerically test the proposed method in characterizing the worst-case constraint violation probability in the context of a constrained control system.
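The moment-problem machinery is beyond a short sketch; the snippet below only illustrates one elementary ingredient, under stated assumptions: with an empirical kernel mean embedding and an RKHS surrogate for the violation indicator, any distribution within an MMD ball of radius rho around the data can shift the estimated risk by at most rho times the surrogate's RKHS norm. The surrogate, kernel, and radius are assumptions for illustration.

```python
# Sketch: a conservative worst-case risk estimate via the kernel mean embedding.
# The violation indicator is replaced by a smooth RKHS surrogate h = sum_i alpha_i k(x_i, .);
# for any distribution Q with MMD(Q, empirical) <= rho,
#   E_Q[h] <= mean_i h(x_i) + rho * ||h||_H   (Cauchy-Schwarz in the RKHS).
# Everything here is illustrative, not the paper's moment-problem formulation.
import numpy as np

def rbf(A, B, ls=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(10)
X = rng.normal(size=(300, 1))                      # samples of the uncertain state
violation = (X[:, 0] > 1.5).astype(float)          # constraint-violation indicator

# Fit an RKHS surrogate h ~ indicator via kernel ridge regression.
K = rbf(X, X)
lam = 1e-2
alpha = np.linalg.solve(K + lam * np.eye(len(X)), violation)

h_vals = K @ alpha                                 # h evaluated on the samples
rkhs_norm = np.sqrt(alpha @ K @ alpha)             # ||h||_H for h = sum_i alpha_i k(x_i, .)

rho = 0.05                                         # assumed MMD ambiguity radius
empirical_risk = h_vals.mean()
worst_case_risk = empirical_risk + rho * rkhs_norm

print(f"empirical (surrogate) violation risk : {empirical_risk:.3f}")
print(f"worst case within MMD radius {rho}   : {worst_case_risk:.3f}")
```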
arXiv Detail & Related papers (2020-03-31T23:51:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.