Outlier Explanation via Sum-Product Networks
- URL: http://arxiv.org/abs/2207.08414v1
- Date: Mon, 18 Jul 2022 07:47:36 GMT
- Title: Outlier Explanation via Sum-Product Networks
- Authors: Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt
- Abstract summary: Outlier explanation is the task of identifying a set of features that distinguish a sample from normal data.
Existing methods are based on beam search in the space of feature subsets.
We propose a novel outlier explanation algorithm based on Sum-Product Networks (SPNs)
- Score: 10.1303427221932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Outlier explanation is the task of identifying a set of features that
distinguish a sample from normal data, which is important for downstream
(human) decision-making. Existing methods are based on beam search in the space
of feature subsets. These quickly become computationally expensive, as they
require running an outlier detection algorithm from scratch for each feature
subset. To alleviate this problem, we propose a novel outlier explanation
algorithm based on Sum-Product Networks (SPNs), a class of probabilistic
circuits. Our approach leverages the tractability of marginal inference in SPNs
to compute outlier scores in feature subsets. By using SPNs, it becomes
feasible to perform backwards elimination instead of the usual forward beam
search, which is less susceptible to missing relevant features in an
explanation, especially when the number of features is large. We empirically
show that our approach achieves state-of-the-art results for outlier
explanation, outperforming recent search-based as well as deep learning-based
explanation methods.
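The key idea above, backward elimination driven by tractable marginal outlier scores, can be sketched as follows. The paper uses an SPN; here a diagonal Gaussian stands in as the density model, since it shares the relevant property that marginals over any feature subset are computed exactly by dropping the other features. The function names and the toy data are illustrative, not from the paper.

```python
import numpy as np

def marginal_log_density(x, mean, var, features):
    """Log-density of x restricted to a feature subset.
    A diagonal Gaussian stands in for the SPN: both allow exact
    marginalization by simply dropping the remaining features."""
    f = list(features)
    d = x[f] - mean[f]
    return -0.5 * np.sum(d * d / var[f] + np.log(2 * np.pi * var[f]))

def backward_elimination(x, mean, var, k):
    """Start from all features and greedily drop the feature whose
    removal keeps the outlier score (negative marginal log-density)
    highest, until only k explanatory features remain."""
    features = set(range(len(x)))
    while len(features) > k:
        drop = max(features,
                   key=lambda j: -marginal_log_density(x, mean, var, features - {j}))
        features.remove(drop)
    return sorted(features)

# toy "normal" data; the outlier deviates strongly in dims 0 and 2
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 5))
mean, var = data.mean(axis=0), data.var(axis=0)
outlier = np.zeros(5)
outlier[0], outlier[2] = 6.0, -5.0
print(backward_elimination(outlier, mean, var, k=2))  # → [0, 2]
```

Because each step only evaluates marginals of one fixed model, no detector is retrained per subset, which is the efficiency gain the abstract describes.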
Related papers
- Generating Likely Counterfactuals Using Sum-Product Networks [2.457872341625575]
We show that the search for the most likely explanations satisfying many common desiderata for counterfactual explanations can be modeled using mixed-integer optimization (MIO)
In the process, we propose an MIO formulation of a Sum-Product Network (SPN) and use the SPN to estimate the likelihood of a counterfactual.
arXiv Detail & Related papers (2024-01-25T11:06:16Z) - Gram-Schmidt Methods for Unsupervised Feature Extraction and Selection [7.373617024876725]
We propose a Gram-Schmidt process over function spaces to detect and map out nonlinear dependencies.
We provide experimental results for synthetic and real-world benchmark datasets.
Surprisingly, our linear feature extraction algorithms are comparable and often outperform several important nonlinear feature extraction methods.
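For intuition, the primitive the paper lifts to function spaces is classical Gram-Schmidt orthogonalization, which also exposes (linear) dependencies: dependent directions vanish during orthogonalization. This is a plain vector-space sketch, not the paper's function-space construction.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Classical Gram-Schmidt: orthonormalize the rows in order,
    discarding rows that become (near-)zero, i.e. rows linearly
    dependent on earlier ones."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        n = np.linalg.norm(w)
        if n > tol:
            basis.append(w / n)
    return np.array(basis)

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 1.0]])   # third row = row0 + row1 (dependent)
Q = gram_schmidt(A)
print(Q.shape)   # → (2, 3): the dependent row is detected and dropped
```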
arXiv Detail & Related papers (2023-11-15T21:29:57Z) - Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement
Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
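The "simple spectral-based" estimation idea can be illustrated with plain truncated SVD: project a noisy observation matrix onto its top singular subspaces to recover a low-rank matrix with reduced error. This is a generic stand-in for the paper's refined estimators, with illustrative data.

```python
import numpy as np

def spectral_estimate(M_noisy, rank):
    """Truncated SVD: keep the top `rank` singular subspaces of the
    noisy matrix, discarding the noise-dominated directions."""
    U, s, Vt = np.linalg.svd(M_noisy, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))  # rank-3 ground truth
noisy = A + 0.05 * rng.normal(size=A.shape)
est = spectral_estimate(noisy, rank=3)

# the spectral estimate is closer to the truth than the raw observation
print(np.linalg.norm(est - A) < np.linalg.norm(noisy - A))  # → True
```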
arXiv Detail & Related papers (2023-10-10T17:06:41Z) - Approximating a RUM from Distributions on k-Slates [88.32814292632675]
We give a polynomial-time algorithm that finds the RUM best approximating the given distribution on average.
Our theoretical result can also be made practical: we obtain an algorithm that is effective and scales to real-world datasets.
arXiv Detail & Related papers (2023-05-22T17:43:34Z) - Efficiently Explaining CSPs with Unsatisfiable Subset Optimization
(extended algorithms and examples) [14.163888405810635]
We build on a recently proposed method for stepwise explaining solutions of Constraint Satisfaction Problems.
An explanation here is a sequence of simple inference steps where simplicity is quantified using a cost function.
arXiv Detail & Related papers (2023-03-21T10:03:36Z) - On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z) - Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z) - Search Methods for Sufficient, Socially-Aligned Feature Importance
Explanations with In-Distribution Counterfactuals [72.00815192668193]
Feature importance (FI) estimates are a popular form of explanation, and they are commonly created and evaluated by computing the change in model confidence caused by removing certain input features at test time.
We study several under-explored dimensions of FI-based explanations, providing conceptual and empirical improvements for this form of explanation.
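The removal-based evaluation described above can be sketched as ablation-style feature importance: replace one feature at a time with a baseline value and record the change in the model's score. The toy linear model and baseline are illustrative; one issue the paper examines is that such ablated inputs can fall off-distribution, motivating in-distribution counterfactual replacements.

```python
import numpy as np

def feature_importance(model, x, baseline):
    """FI by ablation: for each feature, substitute a baseline value
    and measure the drop in the model's output."""
    ref = model(x)
    scores = []
    for j in range(len(x)):
        x_abl = x.copy()
        x_abl[j] = baseline[j]
        scores.append(ref - model(x_abl))
    return np.array(scores)

# toy linear "model" whose output depends mostly on feature 1
model = lambda x: 0.1 * x[0] + 2.0 * x[1] + 0.0 * x[2]
x = np.array([1.0, 1.0, 1.0])
fi = feature_importance(model, x, baseline=np.zeros(3))
print(fi)   # → [0.1  2.0  0.0]: feature 1 dominates
```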
arXiv Detail & Related papers (2021-06-01T20:36:48Z) - Adaptive Sampling for Best Policy Identification in Markov Decision
Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z) - Fibonacci and k-Subsecting Recursive Feature Elimination [2.741266294612776]
Feature selection is a data mining task with the potential of speeding up classification algorithms.
We propose two novel algorithms called Fibonacci- and k-Subsecting Recursive Feature Elimination.
Results show that Fibonacci and k-Subsecting Recursive Feature Elimination are capable of selecting a smaller subset of features much faster than standard RFE.
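Standard RFE, the baseline these variants speed up, can be sketched in a few lines: repeatedly drop the least important features until the desired number remains. The `step` parameter shows where the Fibonacci-/k-Subsecting variants intervene, by eliminating larger, adaptively chosen chunks per round; the correlation-based importance below is a stand-in for a trained model's weights.

```python
import numpy as np

def rfe(X, y, n_keep, step=1):
    """Recursive Feature Elimination: each round, rank the remaining
    features and drop the `step` least important, until n_keep remain.
    Importance here is |correlation with y| for simplicity."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        imp = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in keep]
        order = np.argsort(imp)                    # least important first
        n_drop = min(step, len(keep) - n_keep)
        for i in sorted(order[:n_drop], reverse=True):
            keep.pop(i)                            # pop high indices first
    return sorted(keep)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 3 * X[:, 1] + 2 * X[:, 4] + 0.1 * rng.normal(size=200)
print(rfe(X, y, n_keep=2, step=2))   # → [1, 4]
```

Larger steps mean fewer re-ranking rounds, which is the source of the speed-up the abstract reports.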
arXiv Detail & Related papers (2020-07-29T15:53:04Z) - Best-First Beam Search [78.71330480725668]
We show that the standard implementation of beam search can be made up to 10x faster in practice.
We propose a memory-reduced variant of Best-First Beam Search, which has a similar beneficial search bias in terms of downstream performance.
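The core reordering behind best-first beam search can be sketched with a priority queue: expand the globally highest-scoring hypothesis next instead of sweeping breadth-first by length, while still capping expansions per depth at the beam size. The toy `step_scores` model below is illustrative, not part of the paper.

```python
import heapq
import math

def best_first_beam_search(step_scores, eos, k):
    """Pop hypotheses from a global max-priority queue (stored as a
    min-heap of negated log-probs); the first completed hypothesis
    popped is the best one. A per-depth counter enforces the beam."""
    heap = [(0.0, ())]                  # (negated log-prob, prefix)
    expansions_at_depth = {}
    while heap:
        neg, prefix = heapq.heappop(heap)
        if prefix and prefix[-1] == eos:
            return list(prefix), -neg   # first completion is optimal
        d = len(prefix)
        expansions_at_depth[d] = expansions_at_depth.get(d, 0) + 1
        if expansions_at_depth[d] > k:  # beam: at most k expansions per depth
            continue
        for tok, lp in step_scores(prefix):
            heapq.heappush(heap, (neg - lp, prefix + (tok,)))
    return None

# toy 2-step model under which "a b <eos>" is the most likely sequence
def step_scores(prefix):
    if len(prefix) >= 2:
        return [("<eos>", math.log(1.0))]
    table = {(): [("a", math.log(0.6)), ("b", math.log(0.4))],
             ("a",): [("b", math.log(0.9)), ("a", math.log(0.1))],
             ("b",): [("a", math.log(0.5)), ("b", math.log(0.5))]}
    return table[prefix]

seq, logp = best_first_beam_search(step_scores, "<eos>", k=2)
print(seq)   # → ['a', 'b', '<eos>']
```

The speed-up comes from stopping as soon as the best hypothesis completes, rather than finishing every beam at every length.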
arXiv Detail & Related papers (2020-07-08T05:56:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.