Local Explanations via Necessity and Sufficiency: Unifying Theory and
Practice
- URL: http://arxiv.org/abs/2103.14651v1
- Date: Sat, 27 Mar 2021 01:58:53 GMT
- Title: Local Explanations via Necessity and Sufficiency: Unifying Theory and
Practice
- Authors: David Watson, Limor Gultchin, Ankur Taly, Luciano Floridi
- Abstract summary: Necessity and sufficiency are the building blocks of all successful explanations.
Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence.
We establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework.
- Score: 3.8902657229395907
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Necessity and sufficiency are the building blocks of all successful
explanations. Yet despite their importance, these notions have been
conceptually underdeveloped and inconsistently applied in explainable
artificial intelligence (XAI), a fast-growing research area that is so far
lacking in firm theoretical foundations. Building on work in logic,
probability, and causality, we establish the central role of necessity and
sufficiency in XAI, unifying seemingly disparate methods in a single formal
framework. We provide a sound and complete algorithm for computing explanatory
factors with respect to a given context, and demonstrate its flexibility and
competitive performance against state of the art alternatives on various tasks.
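As a rough illustration of the quantities the abstract refers to (not the authors' sound-and-complete algorithm), the Python sketch below estimates how sufficient and how necessary a feature subset is for a classifier's prediction by splicing feature values between a factual input and reference samples; the model interface, inputs, and subset names are hypothetical placeholders.

```python
# Minimal, illustrative Monte Carlo estimates of sufficiency / necessity of a
# feature subset for keeping a classifier's prediction, under an assumed
# reference distribution. This is NOT the paper's algorithm; all names
# (model, x, subset, references) are hypothetical placeholders.
import numpy as np

def sufficiency(model, x, subset, references):
    """Fraction of reference points that receive the factual prediction
    once the subset's features are set to their factual values."""
    y = model.predict(x.reshape(1, -1))[0]
    patched = references.copy()
    patched[:, subset] = x[subset]               # impose the factual values of X_S
    return np.mean(model.predict(patched) == y)

def necessity(model, x, subset, references):
    """Fraction of reference points whose values, spliced into the factual
    input at the subset, flip the factual prediction."""
    y = model.predict(x.reshape(1, -1))[0]
    patched = np.tile(x, (len(references), 1))
    patched[:, subset] = references[:, subset]   # replace X_S with reference values
    return np.mean(model.predict(patched) != y)

# Example usage with a scikit-learn-style classifier (assumed interface):
# suff = sufficiency(clf, x_factual, [0, 3], X_background)
# nec  = necessity(clf, x_factual, [0, 3], X_background)
```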
Related papers
- Towards a Formal Theory of the Need for Competence via Computational Intrinsic Motivation [6.593505830504729]
We focus on the "need for competence", postulated as a key basic human need within Self-Determination Theory (SDT).
We propose that these inconsistencies may be alleviated by drawing on computational models from the field of reinforcement learning (RL).
Our work can support a cycle of theory development by inspiring new computational models formalising aspects of the theory, which can then be tested empirically to refine the theory.
arXiv Detail & Related papers (2025-02-11T10:03:40Z) - Advancing Interactive Explainable AI via Belief Change Theory [5.842480645870251]
We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations.
We first define a novel, logic-based formalism to represent explanatory information shared between humans and machines.
We then consider real world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated.
arXiv Detail & Related papers (2024-08-13T13:11:56Z) - What Does Evaluation of Explainable Artificial Intelligence Actually Tell Us? A Case for Compositional and Contextual Validation of XAI Building Blocks [16.795332276080888]
We propose a fine-grained validation framework for explainable artificial intelligence systems.
We recognise their inherent modular structure: technical building blocks, user-facing explanatory artefacts and social communication protocols.
arXiv Detail & Related papers (2024-03-19T13:45:34Z) - Igniting Language Intelligence: The Hitchhiker's Guide From
Chain-of-Thought Reasoning to Language Agents [80.5213198675411]
Large language models (LLMs) have dramatically enhanced the field of language intelligence.
LLMs leverage chain-of-thought (CoT) reasoning techniques, which oblige them to formulate intermediate steps en route to deriving an answer.
Recent research endeavors have extended CoT reasoning methodologies to nurture the development of autonomous language agents.
arXiv Detail & Related papers (2023-11-20T14:30:55Z) - A Principled Framework for Knowledge-enhanced Large Language Model [58.1536118111993]
Large Language Models (LLMs) are versatile, yet they often falter in tasks requiring deep and reliable reasoning.
This paper introduces a rigorously designed framework for creating LLMs that effectively anchor knowledge and employ a closed-loop reasoning process.
arXiv Detail & Related papers (2023-11-18T18:10:02Z) - Modeling Hierarchical Reasoning Chains by Linking Discourse Units and
Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse level and the word level as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z) - Categorical Foundations of Explainable AI: A Unifying Theory [8.637435154170916]
This paper presents the first mathematically rigorous definitions of key XAI notions and processes, using the well-founded formalism of category theory.
We show that our categorical framework allows us to: (i) model existing learning schemes and architectures, (ii) formally define the term "explanation", (iii) establish a theoretical basis for XAI, and (iv) analyze commonly overlooked aspects of explaining methods.
arXiv Detail & Related papers (2023-04-27T11:10:16Z) - Measuring algorithmic interpretability: A human-learning-based framework
and the corresponding cognitive complexity score [4.707290877865484]
Algorithmic interpretability is necessary to build trust, ensure fairness, and track accountability.
There is no existing formal measurement method for algorithmic interpretability.
We build upon programming language theory and cognitive load theory to develop a framework for measuring algorithmic interpretability.
arXiv Detail & Related papers (2022-05-20T14:31:06Z) - Active Inference in Robotics and Artificial Agents: Survey and
Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of the actions needed to realise the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations (a generic sketch of counterfactual search follows this list).
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
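As referenced in the CEILS entry above, here is a minimal, generic sketch of counterfactual search in input space; CEILS itself operates on a latent causal representation, so this is only an illustrative approximation with hypothetical names.

```python
# A minimal greedy counterfactual search in input space (NOT the CEILS method,
# which intervenes in a latent causal representation). All names are
# hypothetical placeholders for a scikit-learn-style classifier.
import numpy as np

def greedy_counterfactual(model, x, reference, max_steps=None):
    """Greedily copy reference values into x, one feature at a time,
    until the model's prediction changes; returns the modified input."""
    y = model.predict(x.reshape(1, -1))[0]
    cf = x.copy()
    order = np.argsort(-np.abs(reference - x))   # largest differences first
    for i in order[:max_steps]:
        cf[i] = reference[i]
        if model.predict(cf.reshape(1, -1))[0] != y:
            return cf                            # prediction flipped
    return None                                  # no counterfactual found
```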
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.