On Deciding Feature Membership in Explanations of SDD & Related
Classifiers
- URL: http://arxiv.org/abs/2202.07553v1
- Date: Tue, 15 Feb 2022 16:38:53 GMT
- Title: On Deciding Feature Membership in Explanations of SDD & Related
Classifiers
- Authors: Xuanxiang Huang, Joao Marques-Silva
- Abstract summary: The paper shows that the feature membership problem (FMP) is hard for $\Sigma_2^P$ for a broad class of classifiers.
The paper proposes propositional encodings for classifiers represented with Sentential Decision Diagrams (SDDs) and for other propositional languages.
- Score: 0.685316573653194
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: When reasoning about explanations of Machine Learning (ML) classifiers, a
pertinent query is to decide whether some sensitive features can serve for
explaining a given prediction. Recent work showed that the feature membership
problem (FMP) is hard for $\Sigma_2^P$ for a broad class of classifiers. In
contrast, this paper shows that for a number of families of classifiers, FMP is
in NP. Concretely, the paper proves that, for any classifier for which an
explanation can be computed in polynomial time, feature membership in an
explanation can be decided with a single call to an NP oracle. The paper
then proposes propositional encodings for classifiers represented with
Sentential Decision Diagrams (SDDs) and for other related propositional
languages. The experimental results confirm the practical efficiency of the
proposed approach.
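The abstract combines two ingredients: a propositional encoding of the classifier, and explanation queries answered by calls to an NP (SAT) oracle. The sketch below illustrates that pattern on a toy CNF-encoded classifier. It is an illustration only, not the paper's SDD encoding: the classifier, the clause set, and the helper names (is_weak_axp, one_axp) are all hypothetical, and PySAT (pip install python-sat) stands in for the NP oracle.

```python
# A minimal sketch, assuming PySAT: a Tseitin CNF for a toy classifier
# plus SAT-oracle calls for explanation queries. Not the paper's encoding.
from pysat.solvers import Glucose3

# Toy classifier o <-> (x1 AND x2) OR x3, encoded as CNF.
# Variables: 1..3 = features, 4 = auxiliary AND gate, 5 = output o.
CLAUSES = [
    [-4, 1], [-4, 2], [4, -1, -2],   # 4 <-> (1 AND 2)
    [-5, 4, 3], [5, -4], [5, -3],    # 5 <-> (4 OR 3)
]
V = {1: True, 2: True, 3: False}     # concrete instance; prediction o = 1

def is_weak_axp(S):
    """One SAT-oracle call: S is a weak abductive explanation (AXp) iff
    fixing the features in S to their values in V, together with the
    negated prediction, is unsatisfiable."""
    assumptions = [(f if V[f] else -f) for f in S] + [-5]
    with Glucose3(bootstrap_with=CLAUSES) as oracle:
        return not oracle.solve(assumptions=assumptions)

def one_axp():
    """Deletion-based extraction of one subset-minimal AXp, using one
    oracle call per feature."""
    X = set(V)                       # start from all features
    for f in sorted(V):
        if is_weak_axp(X - {f}):
            X.remove(f)              # f is redundant, drop it
    return X

print(one_axp())                     # {1, 2}: x1=1 and x2=1 force o=1
```

For tractable languages such as SDDs, the entailment test inside is_weak_axp needs no SAT solver at all, so one explanation is computable in polynomial time; that is the precondition under which the paper places FMP within a single NP oracle call.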
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state spaces.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- Logic-based Explanations for Linear Support Vector Classifiers with Reject Option [0.0]
The Support Vector Classifier (SVC) is a well-known Machine Learning (ML) model for linear classification problems.
We propose a logic-based approach with formal guarantees on the correctness and minimality of explanations for linear SVCs with reject option.
arXiv Detail & Related papers (2024-03-24T15:14:44Z)
- Dense X Retrieval: What Retrieval Granularity Should We Use? [56.90827473115201]
An often-overlooked design choice is the retrieval unit in which the corpus is indexed, e.g., document, passage, or sentence.
We introduce a novel retrieval unit, proposition, for dense retrieval.
Experiments reveal that indexing a corpus by fine-grained units such as propositions significantly outperforms passage-level units in retrieval tasks.
arXiv Detail & Related papers (2023-12-11T18:57:35Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Mitigating Word Bias in Zero-shot Prompt-based Classifiers [55.60306377044225]
We show that matching class priors correlates strongly with the oracle upper bound performance.
We also demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.
arXiv Detail & Related papers (2023-09-10T10:57:41Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- Feature Necessity & Relevancy in ML Classifier Explanations [5.232306238197686]
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
It is also critical to understand whether sensitive features can occur in some explanation, or whether a non-interesting feature must occur in all explanations; the former is exactly the feature membership question (see the brute-force sketch after this list).
arXiv Detail & Related papers (2022-10-27T12:12:45Z)
- On Efficiently Explaining Graph-Based Classifiers [16.199563506727316]
This paper not only shows that decision trees (DTs) may not be interpretable, but also proposes a polynomial-time algorithm for computing one PI-explanation of a DT.
In addition, the paper proposes a polynomial-time algorithm for computing one contrastive explanation.
arXiv Detail & Related papers (2021-06-02T17:55:41Z)
- Efficient Explanations With Relevant Sets [30.296628060841645]
This paper investigates solutions for tackling the practical limitations of $\delta$-relevant sets.
The computation of subset-minimal $\delta$-relevant sets is in NP, and can be solved with a number of calls to an NP oracle.
arXiv Detail & Related papers (2021-06-01T14:57:58Z)
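The membership query itself ("does feature t occur in some explanation?") can be spelled out by brute force on a toy model, as in the hypothetical sketch below. The enumeration is exponential, which is exactly the cost the headline paper's single-NP-oracle-call result avoids; all names here (kappa, V, axps) are illustrative, not taken from any of the papers above.

```python
# Brute-force illustration of the feature membership problem (FMP):
# does feature t occur in SOME subset-minimal abductive explanation (AXp)?
# Toy example only; practical procedures replace this enumeration with
# oracle calls, as discussed in the headline paper.
from itertools import combinations

FEATURES = [0, 1, 2]                 # Boolean features x0, x1, x2

def kappa(x):
    """Hypothetical classifier: predicts 1 iff (x0 AND x1) OR x2."""
    return int((x[0] and x[1]) or x[2])

V = (1, 1, 0)                        # concrete instance; kappa(V) == 1

def is_weak_axp(S):
    """Fixing the features in S to their values in V must force the
    prediction for every completion of the remaining features."""
    free = [f for f in FEATURES if f not in S]
    for bits in range(2 ** len(free)):
        x = list(V)
        for i, f in enumerate(free):
            x[f] = (bits >> i) & 1
        if kappa(x) != kappa(V):
            return False
    return True

def axps():
    """All subset-minimal weak AXps, i.e. the AXps."""
    for k in range(len(FEATURES) + 1):
        for S in combinations(FEATURES, k):
            S = set(S)
            if is_weak_axp(S) and all(not is_weak_axp(S - {f}) for f in S):
                yield S

for t in FEATURES:
    print(f"x{t} in some AXp:", any(t in X for X in axps()))
# Only {x0, x1} is an AXp here, so x0 and x1 are members and x2 is not
# (x2 = 0 in V, so fixing it cannot help force the prediction 1).
```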