Sufficient reasons for classifier decisions in the presence of constraints
- URL: http://arxiv.org/abs/2105.06001v1
- Date: Wed, 12 May 2021 23:36:12 GMT
- Title: Sufficient reasons for classifier decisions in the presence of constraints
- Authors: Niku Gorji, Sasha Rubin
- Abstract summary: Recent work has unveiled a theory for reasoning about the decisions made by binary classifiers.
We propose a more general theory, tailored to taking constraints into account.
We prove that this simple idea results in reasons that are no less (and sometimes more) succinct.
- Score: 9.525900373779395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has unveiled a theory for reasoning about the decisions made by
binary classifiers: a classifier describes a Boolean function, and the reasons
behind an instance being classified as positive are the prime-implicants of the
function that are satisfied by the instance. One drawback of these works is
that they do not explicitly treat scenarios where the underlying data is known
to be constrained, e.g., certain combinations of features may not exist, may
not be observable, or may be required to be disregarded. We propose a more
general theory, also based on prime-implicants, tailored to taking constraints
into account. The main idea is to view classifiers in the presence of
constraints as describing partial Boolean functions, i.e., functions that are undefined
on instances that do not satisfy the constraints. We prove that this simple
idea results in reasons that are no less (and sometimes more) succinct. That
is, not taking constraints into account (e.g., ignoring them, or treating
constraint-violating instances as negative) results in reasons that are
subsumed by reasons that do take constraints into account. We illustrate this
improved parsimony on synthetic
classifiers and classifiers learned from real data.
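To make the prime-implicant view concrete, the following minimal Python sketch (a toy classifier, toy constraint, and greedy minimisation of our own devising, not the authors' algorithm) treats a partial assignment as a sufficient reason when every completion of it is classified positive; under a constraint, completions that violate it are simply skipped, which is the partial-Boolean-function view.

```python
from itertools import product

# Illustrative toy: sufficient reasons as prime implicants, with and
# without a constraint. Classifier, constraint and greedy minimisation
# are our own sketch, not the paper's algorithm.

FEATURES = ["x1", "x2", "x3"]

def completions(term, constraint=None):
    """All total instances extending the partial assignment `term`;
    with a constraint, violating instances are skipped (the
    partial-Boolean-function view)."""
    free = [f for f in FEATURES if f not in term]
    for bits in product([0, 1], repeat=len(free)):
        instance = dict(term, **dict(zip(free, bits)))
        if constraint is None or constraint(instance):
            yield instance

def is_sufficient(term, classifier, constraint=None):
    return all(classifier(i) for i in completions(term, constraint))

def sufficient_reason(instance, classifier, constraint=None):
    """Greedily drop features while the term stays sufficient; the
    result is a prime implicant satisfied by `instance`."""
    term = dict(instance)
    for f in FEATURES:
        candidate = {k: v for k, v in term.items() if k != f}
        if is_sufficient(candidate, classifier, constraint):
            term = candidate
    return term

clf = lambda i: bool(i["x1"] and (i["x2"] or i["x3"]))
con = lambda i: bool(i["x2"] or i["x3"])   # "x2 = x3 = 0 cannot occur"

instance = {"x1": 1, "x2": 1, "x3": 0}
print(sufficient_reason(instance, clf))       # {'x1': 1, 'x2': 1}
print(sufficient_reason(instance, clf, con))  # {'x1': 1}
```

Under the constraint, x1 = 1 alone suffices: the only completions that could falsify it violate the constraint. The constrained reason thus subsumes the unconstrained one (it uses a subset of its literals), matching the abstract's claim.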
Related papers
- Abductive explanations of classifiers under constraints: Complexity and properties [6.629765271909503]
We propose three new types of explanations that take into account constraints.
They can be generated from the whole feature space or from a dataset.
We show that coverage is powerful enough to discard redundant and superfluous AXp's (abductive explanations).
arXiv Detail & Related papers (2024-09-18T17:15:39Z)
- ConstraintChecker: A Plugin for Large Language Models to Reason on Commonsense Knowledge Bases [53.29427395419317]
Reasoning over Commonsense Knowledge Bases (CSKB) has been explored as a way to acquire new commonsense knowledge.
We propose **ConstraintChecker**, a plugin over prompting techniques to provide and check explicit constraints.
arXiv Detail & Related papers (2024-01-25T08:03:38Z)
- CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering [56.592385613002584]
We propose Conceptualization-Augmented Reasoner (CAR) to tackle the task of zero-shot commonsense question answering.
CAR abstracts a commonsense knowledge triple to many higher-level instances, which increases the coverage of CommonSense Knowledge Bases.
CAR more robustly generalizes to answering questions about zero-shot commonsense scenarios than existing methods.
arXiv Detail & Related papers (2023-05-24T08:21:31Z)
- On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms [0.0]
We consider counterfactual explanations: the problem of minimally adjusting the features of a source instance so that a given classifier assigns it a target class.
This has become a topic of recent interest as a way to query a trained model and suggest possible actions to overturn its decision; a brute-force sketch of this search appears after this list.
arXiv Detail & Related papers (2021-03-01T16:04:33Z)
- Logic Embeddings for Complex Query Answering [56.25151854231117]
We propose Logic Embeddings, a new approach to embedding complex queries that uses Skolemisation to eliminate existential variables for efficient querying.
We show that Logic Embeddings are competitively fast and accurate in query answering over large, incomplete knowledge graphs, outperform prior approaches on negation queries, and, in particular, provide improved modeling of answer uncertainty.
arXiv Detail & Related papers (2021-02-28T07:52:37Z)
- On Irrelevant Literals in Pseudo-Boolean Constraint Learning [21.506382989223784]
We show that *irrelevant* literals may lead to inferring constraints that are weaker than they should be.
This suggests that current implementations of PB solvers based on cutting planes should be reconsidered to prevent the generation of irrelevant literals.
arXiv Detail & Related papers (2020-12-08T13:52:09Z)
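For intuition on what an irrelevant literal is, here is a minimal check; the constraint and names are our own illustrative example, not taken from the paper.

```python
from itertools import product

# Hypothetical illustration of an "irrelevant literal" in a pseudo-Boolean
# constraint: in 5*x1 + x2 + x3 >= 5, the values of x2 and x3 never affect
# satisfaction, so the constraint is equivalent to plain x1 >= 1.

def satisfies(assignment):
    x1, x2, x3 = assignment
    return 5 * x1 + x2 + x3 >= 5

for assignment in product([0, 1], repeat=3):
    x1 = assignment[0]
    # The constraint holds exactly when x1 = 1, whatever x2 and x3 are.
    assert satisfies(assignment) == (x1 == 1)

print("5*x1 + x2 + x3 >= 5 is equivalent to x1 >= 1; x2, x3 are irrelevant")
```

Carrying x2 and x3 here adds no logical content, and during cutting-planes learning such literals can accumulate and weaken the constraints a solver derives, which is the paper's concern.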
- Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs [89.51365993393787]
We present BetaE, a probabilistic embedding framework for answering arbitrary first-order logic (FOL) queries over a knowledge graph (KG).
BetaE is the first method that can handle a complete set of first-order logical operations.
We demonstrate the performance of BetaE on answering arbitrary FOL queries on three large, incomplete KGs.
arXiv Detail & Related papers (2020-10-22T06:11:39Z)
- Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop the credal sentential decision diagrams, a generalisation of their probabilistic counterpart that allows for replacing the local probabilities with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z)
- On The Reasons Behind Decisions [11.358487655918676]
We define notions such as sufficient, necessary and complete reasons behind decisions.
We show how these notions can be used to evaluate counterfactual statements.
We present efficient algorithms for computing these notions.
arXiv Detail & Related papers (2020-02-21T13:37:29Z)
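As promised in the counterfactual-explanations entry above, here is a hypothetical brute-force sketch over Boolean features; it illustrates the problem statement only, not the paper's exact, efficient algorithms for oblique decision trees.

```python
from itertools import combinations

# Brute-force counterfactual search over Boolean features (illustrative
# only): find a copy of the instance classified as the target class that
# flips as few features as possible.

def counterfactual(instance, classifier, target=True):
    """Return (changed_instance, flipped_feature_names), trying smaller
    numbers of flips first, or (None, ()) if no counterfactual exists."""
    keys = list(instance)
    for k in range(1, len(keys) + 1):
        for subset in combinations(keys, k):
            candidate = dict(instance)
            for f in subset:
                candidate[f] = 1 - candidate[f]
            if classifier(candidate) == target:
                return candidate, subset
    return None, ()

# Same toy classifier as in the sketch after the abstract.
clf = lambda i: bool(i["x1"] and (i["x2"] or i["x3"]))
source = {"x1": 0, "x2": 0, "x3": 0}      # classified negative
changed, flipped = counterfactual(source, clf)
print(flipped, changed)  # ('x1', 'x2') {'x1': 1, 'x2': 1, 'x3': 0}
```

Overturning the negative decision here requires setting x1 and one of x2 or x3, which is exactly the kind of actionable suggestion the entry describes.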