Abduction and Argumentation for Explainable Machine Learning: A Position Survey
- URL: http://arxiv.org/abs/2010.12896v1
- Date: Sat, 24 Oct 2020 13:23:44 GMT
- Title: Abduction and Argumentation for Explainable Machine Learning: A Position Survey
- Authors: Antonis Kakas, Loizos Michael
- Abstract summary: This paper presents Abduction and Argumentation as two principled forms of reasoning, and fleshes out the fundamental role that they can play within Machine Learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents Abduction and Argumentation as two principled forms of
reasoning, and fleshes out the fundamental role that they can play within
Machine Learning. It reviews the state-of-the-art work over the past few
decades on the link of these two reasoning forms with machine learning work,
and from this it elaborates on how the explanation-generating role of Abduction
and Argumentation makes them naturally-fitting mechanisms for the development
of Explainable Machine Learning and AI systems. Abduction contributes towards
this goal by facilitating learning through the transformation, preparation, and
homogenization of data. Argumentation, as a conservative extension of classical
deductive reasoning, offers a flexible prediction and coverage mechanism for
learning -- an associated target language for learned knowledge -- that
explicitly acknowledges the need to deal, in the context of learning, with
uncertain, incomplete and inconsistent data that are incompatible with any
classically-represented logical theory.
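The explanation-generating role the abstract attributes to abduction can be made concrete with a small sketch: given a set of rules and a set of abducible (assumable) atoms, abductive inference searches for assumption sets that entail an observation. The rule base, atom names, and helper function below are hypothetical illustrations, not drawn from the surveyed paper:

```python
# Minimal sketch of propositional abduction: find sets of abducible
# assumptions that entail an observed goal atom. All names are
# illustrative placeholders.

# head -> list of alternative bodies (each body is a set of atoms
# that jointly entail the head)
RULES = {
    "grass_wet": [{"rained"}, {"sprinkler_on"}],
    "rained": [{"clouds", "low_pressure"}],
}

# Atoms we are allowed to assume without further justification.
ABDUCIBLES = {"sprinkler_on", "clouds", "low_pressure"}

def explain(goal):
    """Return candidate sets of abducible assumptions entailing `goal`."""
    if goal in ABDUCIBLES:
        return [{goal}]
    explanations = []
    for body in RULES.get(goal, []):
        # Combine the explanations of every atom in the body.
        partials = [set()]
        for atom in body:
            partials = [p | e for p in partials for e in explain(atom)]
        explanations.extend(partials)
    return explanations

print(explain("grass_wet"))
# e.g. [{'clouds', 'low_pressure'}, {'sprinkler_on'}]
```

Each returned set is a candidate explanation of the observation, which is exactly the kind of hypothesis-forming step the survey argues can support data preparation and explanation in learning pipelines.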
Related papers
- On the Relationship Between Interpretability and Explainability in Machine Learning [2.828173677501078]
Interpretability and explainability have gained more and more attention in the field of machine learning.
Since both provide information about predictors and their decision process, they are often seen as two independent means for one single end.
This view has led to a dichotomous literature: explainability techniques designed for complex black-box models, or interpretable approaches ignoring the many explainability tools.
arXiv Detail & Related papers (2023-11-20T02:31:08Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Explanatory machine learning for sequential human teaching [5.706360286474043]
We show that sequential teaching of concepts with increasing complexity has a beneficial effect on human comprehension.
We propose a framework for the effects of sequential teaching on comprehension based on an existing definition of comprehensibility.
arXiv Detail & Related papers (2022-05-20T15:23:46Z) - Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI [2.7920304852537536]
This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
Through a mixture of quantitative and qualitative methodologies, the presented study allows deriving the following main conclusions.
arXiv Detail & Related papers (2022-05-03T22:31:42Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Causality in Neural Networks -- An Extended Abstract [0.0]
Causal reasoning is the main learning and explanation tool used by humans.
Introducing the ideas of causality to machine learning helps in providing better learning and explainable models.
arXiv Detail & Related papers (2021-06-03T09:52:36Z) - Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z) - Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z) - Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge [96.92252296244233]
Large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control.
We show that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
Our work paves a path towards open-domain systems that constantly improve by interacting with users who can instantly correct a model by adding simple natural language statements.
arXiv Detail & Related papers (2020-06-11T17:02:20Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.