On The Reasons Behind Decisions
- URL: http://arxiv.org/abs/2002.09284v1
- Date: Fri, 21 Feb 2020 13:37:29 GMT
- Title: On The Reasons Behind Decisions
- Authors: Adnan Darwiche and Auguste Hirth
- Abstract summary: We define notions such as sufficient, necessary and complete reasons behind decisions.
We show how these notions can be used to evaluate counterfactual statements.
We present efficient algorithms for computing these notions.
- Score: 11.358487655918676
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that some common machine learning classifiers can be
compiled into Boolean circuits that have the same input-output behavior. We
present a theory for unveiling the reasons behind the decisions made by Boolean
classifiers and study some of its theoretical and practical implications. We
define notions such as sufficient, necessary and complete reasons behind
decisions, in addition to classifier and decision bias. We show how these
notions can be used to evaluate counterfactual statements such as "a decision
will stick even if ... because ... ." We present efficient algorithms for
computing these notions, which are based on new advances on tractable Boolean
circuits, and illustrate them using a case study.
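To make the notion of a sufficient reason concrete, here is a minimal brute-force Python sketch: it enumerates the minimal subsets of an instance's feature values that by themselves fix a Boolean classifier's decision. This is only an illustration of the definition, not the tractable circuit-based algorithms from the paper, and the toy classifier and helper names are assumptions made for the example.

```python
from itertools import combinations, product

def decision_fixed(f, n, fixed):
    """True if every completion of the partial assignment `fixed`
    (a dict mapping feature index -> value) yields the same decision."""
    free = [i for i in range(n) if i not in fixed]
    outcomes = set()
    for values in product([0, 1], repeat=len(free)):
        point = {**fixed, **dict(zip(free, values))}
        outcomes.add(f([point[i] for i in range(n)]))
    return len(outcomes) == 1

def sufficient_reasons(f, x):
    """Enumerate the minimal subsets of instance x whose values alone
    force f's decision on x (brute force, exponential in len(x))."""
    n, reasons = len(x), []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if any(set(r) <= set(subset) for r in reasons):
                continue  # a smaller sufficient reason exists: not minimal
            if decision_fixed(f, n, {i: x[i] for i in subset}):
                reasons.append(subset)
    return reasons

# Hypothetical classifier: positive iff (feature 0 AND feature 1) OR feature 2.
f = lambda v: int((v[0] and v[1]) or v[2])
print(sufficient_reasons(f, [1, 1, 0]))  # -> [(0, 1)]
```

The counterfactual reading from the abstract then falls out directly: for the instance above, the decision would stick even if the third feature changed, because the first two features alone form a sufficient reason.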
Related papers
- Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z)
- Inverse Decision Modeling: Learning Interpretable Representations of Behavior [72.80902932543474]
We develop an expressive, unifying perspective on inverse decision modeling.
We use this to formalize the inverse problem (as a descriptive model).
We illustrate how this structure enables learning (interpretable) representations of (bounded) rationality.
arXiv Detail & Related papers (2023-10-28T05:05:01Z)
- Empower Nested Boolean Logic via Self-Supervised Curriculum Learning [67.46052028752327]
We find that pre-trained language models, including large language models, behave like random selectors when faced with multi-nested logic.
To empower language models with this fundamental capability, this paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR).
arXiv Detail & Related papers (2023-10-09T06:54:02Z)
- Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
arXiv Detail & Related papers (2023-06-07T20:52:01Z)
- Logic for Explainable AI [11.358487655918676]
A central quest in explainable AI relates to understanding the decisions made by (learned) classifiers.
We discuss in this tutorial a comprehensive, semantical and computational theory of explainability along these dimensions.
arXiv Detail & Related papers (2023-05-09T04:53:57Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- A Set Membership Approach to Discovering Feature Relevance and Explaining Neural Classifier Decisions [0.0]
This paper introduces a novel methodology for discovering which features are considered relevant by a trained neural classifier.
Although feature relevance has received much attention in the machine learning literature, here we reconsider it in terms of nonlinear parameter estimation.
arXiv Detail & Related papers (2022-04-05T14:25:11Z)
- Sufficient reasons for classifier decisions in the presence of constraints [9.525900373779395]
Recent work has unveiled a theory for reasoning about the decisions made by binary classifiers.
We propose a more general theory, tailored to taking constraints into account.
We prove that this simple idea results in reasons that are no less (and sometimes more) succinct.
arXiv Detail & Related papers (2021-05-12T23:36:12Z)
- Beyond traditional assumptions in fair machine learning [5.029280887073969]
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We overcome the assumption that sensitive data is readily available in practice.
arXiv Detail & Related papers (2021-01-29T09:02:15Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- The Tractability of SHAP-Score-Based Explanations over Deterministic and Decomposable Boolean Circuits [2.8682942808330703]
We show that the SHAP-score can be computed in polynomial time over the class of decision trees.
We also establish the computational limits of the notion of SHAP-score (a brute-force illustration of the SHAP-score definition appears after this list).
arXiv Detail & Related papers (2020-07-28T08:04:28Z)
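For the last entry above, the following is a hedged, brute-force Python sketch of the SHAP-score definition itself, taking the expectation over a uniform distribution of Boolean features. It is exponential in the number of features and is not the polynomial-time circuit algorithm established in that paper; the classifier `f` and function names are illustrative assumptions.

```python
from itertools import combinations, product
from math import factorial

def shap_score(f, x, i):
    """Brute-force SHAP-score of feature i for Boolean classifier f on
    instance x (exponential-time reference implementation)."""
    n = len(x)
    others = [j for j in range(n) if j != i]

    def value(S):
        # Expected output of f when features in S are fixed to x's values
        # and the remaining features are drawn uniformly from {0, 1}.
        free = [j for j in range(n) if j not in S]
        total = 0
        for bits in product([0, 1], repeat=len(free)):
            point = {**{j: x[j] for j in S}, **dict(zip(free, bits))}
            total += f([point[j] for j in range(n)])
        return total / (2 ** len(free))

    score = 0.0
    for size in range(n):
        for S in combinations(others, size):
            S = set(S)
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            score += weight * (value(S | {i}) - value(S))
    return score

# Hypothetical classifier: positive iff feature 0 AND (feature 1 OR feature 2).
f = lambda v: int(v[0] and (v[1] or v[2]))
# Per-feature attributions for the decision on instance [1, 1, 0].
print([round(shap_score(f, [1, 1, 0], i), 3) for i in range(3)])
```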