Scientific Explanation and Natural Language: A Unified
Epistemological-Linguistic Perspective for Explainable AI
- URL: http://arxiv.org/abs/2205.01809v2
- Date: Thu, 5 May 2022 05:37:58 GMT
- Title: Scientific Explanation and Natural Language: A Unified
Epistemological-Linguistic Perspective for Explainable AI
- Authors: Marco Valentino, André Freitas
- Abstract summary: This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
Through a mixture of quantitative and qualitative methodologies, the study derives five main conclusions on the nature and function of natural language explanations.
- Score: 2.7920304852537536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A fundamental research goal for Explainable AI (XAI) is to build models that
are capable of reasoning through the generation of natural language
explanations. However, the methodologies to design and evaluate
explanation-based inference models are still poorly informed by theoretical
accounts on the nature of explanation. As an attempt to provide an
epistemologically grounded characterisation for XAI, this paper focuses on the
scientific domain, aiming to bridge the gap between theory and practice on the
notion of a scientific explanation. Specifically, the paper combines a detailed
survey of the modern accounts of scientific explanation in Philosophy of
Science with a systematic analysis of corpora of natural language explanations,
clarifying the nature and function of explanatory arguments from both a
top-down (categorical) and a bottom-up (corpus-based) perspective. Through a
mixture of quantitative and qualitative methodologies, the presented study
supports the following main conclusions: (1) Explanations cannot be
entirely characterised in terms of inductive or deductive arguments as their
main function is to perform unification; (2) An explanation must cite causes
and mechanisms that are responsible for the occurrence of the event to be
explained; (3) While natural language explanations possess an intrinsic
causal-mechanistic nature, they are not limited to causes and mechanisms, also
accounting for pragmatic elements such as definitions, properties and taxonomic
relations; (4) Patterns of unification naturally emerge in corpora of
explanations even if not intentionally modelled; (5) Unification is realised
through a process of abstraction, whose function is to provide the inference
substrate for subsuming the event to be explained under recurring patterns and
high-level regularities.
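Conclusions (4) and (5) suggest that unification can be observed empirically as the reuse of the same supporting facts across many explanations. The following is a minimal sketch of that idea, assuming a toy corpus in which each explanation is represented as a set of supporting-fact sentences; the corpus format, the example facts, and the reuse-count heuristic are illustrative assumptions and do not reproduce the corpora or methodology analysed in the paper.

```python
from collections import Counter
from itertools import combinations

# Toy corpus (assumed format): each explanandum maps to the set of
# supporting facts its explanation cites. These examples are illustrative,
# not drawn from the paper's corpora.
explanations = {
    "why does ice float on water?": {
        "ice is less dense than liquid water",
        "less dense substances float on denser substances",
    },
    "why does a helium balloon rise?": {
        "helium is less dense than air",
        "less dense substances float on denser substances",
    },
    "why does oil sit on top of water?": {
        "oil is less dense than water",
        "less dense substances float on denser substances",
    },
}

# Count how many distinct explanations reuse each supporting fact.
fact_reuse = Counter(
    fact for facts in explanations.values() for fact in facts
)

# A fact reused across several explanations acts as a unification pattern:
# it subsumes different events to be explained under one high-level regularity.
for fact, count in fact_reuse.most_common():
    if count > 1:
        print(f"unification pattern ({count} explanations): {fact}")

# Pairwise overlap shows which explananda are unified by shared facts.
for (q1, f1), (q2, f2) in combinations(explanations.items(), 2):
    shared = f1 & f2
    if shared:
        print(f"{q1!r} and {q2!r} share: {sorted(shared)}")
```

In this toy example the abstract regularity about relative density is the fact that recurs across explanations, playing the unifying role that conclusions (4) and (5) attribute to abstraction.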
Related papers
- Causal Abstraction in Model Interpretability: A Compact Survey [5.963324728136442]
Causal abstraction provides a principled approach to understanding and explaining the causal mechanisms underlying model behavior.
This survey paper delves into the realm of causal abstraction, examining its theoretical foundations, practical applications, and implications for the field of model interpretability.
arXiv Detail & Related papers (2024-10-26T12:24:28Z) - Reasoning with Natural Language Explanations [15.281385727331473]
Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation.
An increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference.
arXiv Detail & Related papers (2024-10-05T13:15:24Z) - Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement [92.61557711360652]
Language models (LMs) often fall short on inductive reasoning, despite achieving impressive success on research benchmarks.
We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement.
We reveal several discrepancies between the inductive reasoning processes of LMs and humans, shedding light on both the potentials and limitations of using LMs in inductive reasoning tasks.
arXiv Detail & Related papers (2023-10-12T17:51:10Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Semantics, Ontology and Explanation [0.0]
We discuss the relation between ontological unpacking and other forms of explanation in philosophy and science, as well as in the area of Artificial Intelligence.
arXiv Detail & Related papers (2023-04-21T16:54:34Z) - A Theoretical Framework for AI Models Explainability with Application in
Biomedicine [3.5742391373143474]
We propose a novel definition of explanation that is a synthesis of what can be found in the literature.
We fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's inner workings and decision-making process) and plausibility (i.e., how much the explanation looks convincing to the user).
arXiv Detail & Related papers (2022-12-29T20:05:26Z) - MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z) - Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z) - Abduction and Argumentation for Explainable Machine Learning: A Position
Survey [2.28438857884398]
This paper presents Abduction and Argumentation as two principled forms of reasoning.
It fleshes out the fundamental role that they can play within Machine Learning.
arXiv Detail & Related papers (2020-10-24T13:23:44Z) - Towards Interpretable Natural Language Understanding with Explanations
as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.