Abduction and Argumentation for Explainable Machine Learning: A Position Survey
- URL: http://arxiv.org/abs/2010.12896v1
- Date: Sat, 24 Oct 2020 13:23:44 GMT
- Title: Abduction and Argumentation for Explainable Machine Learning: A Position Survey
- Authors: Antonis Kakas, Loizos Michael
- Abstract summary: This paper presents Abduction and Argumentation as two principled forms for reasoning.
It fleshes out the fundamental role that they can play within Machine Learning.
- Score: 2.28438857884398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents Abduction and Argumentation as two principled forms for
reasoning, and fleshes out the fundamental role that they can play within
Machine Learning. It reviews the state-of-the-art work over the past few
decades on the link of these two reasoning forms with machine learning work,
and from this it elaborates on how the explanation-generating role of Abduction
and Argumentation makes them naturally-fitting mechanisms for the development
of Explainable Machine Learning and AI systems. Abduction contributes towards
this goal by facilitating learning through the transformation, preparation, and
homogenization of data. Argumentation, as a conservative extension of classical
deductive reasoning, offers a flexible prediction and coverage mechanism for
learning -- an associated target language for learned knowledge -- that
explicitly acknowledges the need to deal, in the context of learning, with
uncertain, incomplete and inconsistent data that are incompatible with any
classically-represented logical theory.
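As a reading aid, here is a minimal, illustrative sketch of the two reasoning forms the abstract describes. It is not code from the paper; the rules, atoms, and function names are all hypothetical. The first part computes subset-minimal abductive explanations over a set of definite rules; the second computes the grounded extension of a Dung-style abstract argumentation framework as the least fixpoint of its acceptability operator.

```python
# Minimal, illustrative sketch only -- not code from the paper; the rules,
# atoms, and function names below are hypothetical.
from itertools import combinations

def entails(rules, facts, goal):
    """Forward-chain definite rules (head, body) from `facts`;
    return True if `goal` is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return goal in derived

def abduce(rules, abducibles, observation):
    """Abduction: subset-minimal sets of abducible atoms that, added to
    the rules, entail the observation.  (Definite rules cannot become
    inconsistent, so no separate consistency check is needed here.)"""
    explanations = []
    for k in range(len(abducibles) + 1):
        for hyp in map(frozenset, combinations(abducibles, k)):
            if any(e <= hyp for e in explanations):
                continue  # a strict subset already explains the observation
            if entails(rules, hyp, observation):
                explanations.append(hyp)
    return explanations

def grounded_extension(arguments, attacks):
    """Argumentation: grounded extension of an abstract argumentation
    framework (Dung-style): the least fixpoint of the operator mapping a
    set S to the arguments whose every attacker is attacked by S."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    ext = set()
    while True:
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in ext)
                      for b in attackers[a])}
        if new == ext:
            return ext
        ext = new

if __name__ == "__main__":
    # Two alternative minimal explanations for observing "wet":
    # [frozenset({'rain'}), frozenset({'sprinkler'})]
    rules = [("wet", frozenset({"rain"})), ("wet", frozenset({"sprinkler"}))]
    print(abduce(rules, ["rain", "sprinkler"], "wet"))

    # a attacks b, b attacks c: a is unattacked and defends c,
    # so the grounded extension is {a, c}.
    print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Iterating from the empty set terminates because the acceptability operator is monotone; the grounded semantics is the most skeptical of Dung's semantics, which fits the abstract's emphasis on drawing defensible conclusions from uncertain, incomplete, or inconsistent data.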
Related papers
- Reasoning with Natural Language Explanations [15.281385727331473]
Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation.
An increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference.
arXiv Detail & Related papers (2024-10-05T13:15:24Z)
- Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning [5.159407277301709]
We argue that interpreting machine learning outputs in certain normatively salient domains could require appealing to a third type of explanation.
The relevance of this explanation type is motivated by the fact that machine learning models are not isolated entities but are embedded within and shaped by social structures.
arXiv Detail & Related papers (2024-09-05T15:47:04Z)
- A Mechanistic Interpretation of Syllogistic Reasoning in Auto-Regressive Language Models [13.59675117792588]
Recent studies on logical reasoning in auto-regressive Language Models (LMs) have sparked a debate on whether such models can learn systematic reasoning principles during pre-training.
This paper presents a mechanistic interpretation of syllogistic reasoning in LMs to further our understanding of their internal dynamics.
arXiv Detail & Related papers (2024-08-16T07:47:39Z)
- On the Relationship Between Interpretability and Explainability in Machine Learning [2.828173677501078]
Interpretability and explainability have attracted increasing attention in the field of machine learning.
Since both provide information about predictors and their decision process, they are often seen as two independent means to a single end.
This view has led to a dichotomous literature: explainability techniques designed for complex black-box models on one side, and interpretable approaches that ignore the many available explainability tools on the other.
arXiv Detail & Related papers (2023-11-20T02:31:08Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI [2.7920304852537536]
This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
The study combines quantitative and qualitative methodologies to derive its main conclusions.
arXiv Detail & Related papers (2022-05-03T22:31:42Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of these fact units, capturing both sentence-level interactions (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z)
- Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
arXiv Detail & Related papers (2020-06-11T17:02:20Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.