Explainable Machine Learning with Prior Knowledge: An Overview
- URL: http://arxiv.org/abs/2105.10172v1
- Date: Fri, 21 May 2021 07:33:22 GMT
- Title: Explainable Machine Learning with Prior Knowledge: An Overview
- Authors: Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
- Abstract summary: The complexity of machine learning models has elicited research to make them more explainable.
We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models.
- Score: 1.1045760002858451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This survey presents an overview of integrating prior knowledge into machine
learning systems in order to improve explainability. The complexity of machine
learning models has elicited research to make them more explainable. However,
most explainability methods cannot provide insight beyond the given data,
requiring additional information about the context. We propose to harness prior
knowledge to improve upon the explanation capabilities of machine learning
models. In this paper, we present a categorization of current research into
three main categories which either integrate knowledge into the machine
learning pipeline, into the explainability method or derive knowledge from
explanations. To classify the papers, we build upon the existing taxonomy of
informed machine learning and extend it from the perspective of explainability.
We conclude with open challenges and research directions.
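As a concrete illustration of the second category, integrating prior knowledge into the explainability method, consider the minimal sketch below. It is not a method from the survey or from any of the papers listed here; the toy model, the occlusion-style attribution, and the KNOWN_RELEVANT mask are hypothetical stand-ins for a black-box model, an explanation method, and expert knowledge about which features are plausible.

```python
# Illustrative sketch only: one way prior knowledge could constrain an explanation.
import numpy as np

def toy_model(x: np.ndarray) -> float:
    """Hypothetical stand-in for a black-box model: a fixed linear scorer."""
    weights = np.array([0.8, -0.5, 0.1, 0.0])
    return float(weights @ x)

def occlusion_attribution(x: np.ndarray) -> np.ndarray:
    """Attribute to each feature the score drop observed when it is zeroed out."""
    base = toy_model(x)
    attributions = np.zeros_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = 0.0
        attributions[i] = base - toy_model(occluded)
    return attributions

# Prior knowledge (hypothetical): an expert marks which features are plausible
# causes of the prediction; feature 2 is considered spurious.
KNOWN_RELEVANT = np.array([1.0, 1.0, 0.0, 1.0])

x = np.array([1.0, 2.0, 3.0, 0.5])
raw = occlusion_attribution(x)
informed = raw * KNOWN_RELEVANT  # knowledge-constrained explanation
print("raw attribution:     ", raw)
print("informed attribution:", informed)
```

Read this way, the prior knowledge constrains the explanation itself rather than the model or the training data, which is what separates this category from integrating knowledge into the machine learning pipeline or deriving knowledge from explanations.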
Related papers
- Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning [5.159407277301709]
We argue that interpreting machine learning outputs in certain normatively salient domains could require appealing to a third type of explanation.
The relevance of this explanation type is motivated by the fact that machine learning models are not isolated entities but are embedded within and shaped by social structures.
arXiv Detail & Related papers (2024-09-05T15:47:04Z)
- Retrieval-Enhanced Machine Learning [110.5237983180089]
We describe a generic retrieval-enhanced machine learning framework, which includes a number of existing models as special cases.
REML challenges information retrieval conventions, presenting opportunities for novel advances in core areas, including optimization.
The REML research agenda lays a foundation for a new style of information access research and paves a path towards advancing machine learning and artificial intelligence.
arXiv Detail & Related papers (2022-05-02T21:42:45Z)
- Embedding Knowledge for Document Summarization: A Survey [66.76415502727802]
Previous work has shown that knowledge-embedded document summarizers excel at generating superior digests.
We propose novel taxonomies to recapitulate knowledge and knowledge embeddings under the document summarization view.
arXiv Detail & Related papers (2022-04-24T04:36:07Z)
- Explainability in Machine Learning: a Pedagogical Perspective [9.393988089692947]
We provide a pedagogical perspective on how to structure the learning process to better impart knowledge to students and researchers in machine learning.
We discuss the advantages and disadvantages of various opaque and transparent machine learning models.
We also discuss ways to structure potential assignments to best help students learn to use explainability as a tool alongside any given machine learning application.
arXiv Detail & Related papers (2022-02-21T16:15:57Z)
- Computing Rule-Based Explanations of Machine Learning Classifiers using Knowledge Graphs [62.997667081978825]
We use knowledge graphs as the underlying framework providing the terminology for representing explanations for the operation of a machine learning classifier.
In particular, we introduce a novel method for extracting and representing black-box explanations of its operation, in the form of first-order logic rules expressed in the terminology of the knowledge graph.
arXiv Detail & Related papers (2022-02-08T16:21:49Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Machine Learning Explainability for External Stakeholders [27.677158604772238]
There have been growing calls to open the black box and to make machine learning algorithms more explainable.
We conducted a day-long workshop with academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability.
We provide a short summary of various case studies of explainable machine learning and the lessons learned from them, and we discuss open challenges.
arXiv Detail & Related papers (2020-07-10T14:27:06Z)
- Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.