Semantics of the Black-Box: Can knowledge graphs help make deep learning
systems more interpretable and explainable?
- URL: http://arxiv.org/abs/2010.08660v4
- Date: Fri, 11 Dec 2020 23:03:11 GMT
- Title: Semantics of the Black-Box: Can knowledge graphs help make deep learning
systems more interpretable and explainable?
- Authors: Manas Gaur, Keyur Faldu, Amit Sheth
- Abstract summary: Recent innovations in deep learning (DL) have shown enormous potential to impact individuals and society.
The Black-Box nature of DL models and their over-reliance on massive amounts of data pose challenges for the interpretability and explainability of such systems.
This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning.
- Score: 4.2111286819721485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent series of innovations in deep learning (DL) have shown enormous
potential to impact individuals and society, both positively and negatively.
The DL models utilizing massive computing power and enormous datasets have
significantly outperformed prior historical benchmarks on increasingly
difficult, well-defined research tasks across technology domains such as
computer vision, natural language processing, signal processing, and
human-computer interaction. However, the Black-Box nature of DL models and
their over-reliance on massive amounts of data condensed into labels and dense
representations pose challenges for the interpretability and explainability of
such systems. Furthermore, DL methods have not yet demonstrated an ability to
effectively utilize the relevant domain knowledge and experience critical to
human understanding. This aspect was missing from early data-focused
approaches, and it has necessitated knowledge-infused learning and other
strategies for incorporating computational knowledge. This article demonstrates
how knowledge, provided as a knowledge graph, is incorporated into DL methods
using knowledge-infused learning, one such strategy. We then discuss how this makes a
fundamental difference in the interpretability and explainability of current
approaches, and illustrate it with examples from natural language processing
for healthcare and education applications.
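To make the central idea concrete, below is a minimal sketch of one common knowledge-infusion pattern: fusing pretrained knowledge-graph entity embeddings with a learned text representation before classification. All module names, dimensions, and the concatenation scheme are illustrative assumptions, not the architecture from the paper.
```python
# Minimal sketch of knowledge infusion: concatenating pretrained
# knowledge-graph entity embeddings with a learned text representation.
# All names and dimensions are illustrative assumptions, not the paper's
# actual architecture.
import torch
import torch.nn as nn

class KnowledgeInfusedClassifier(nn.Module):
    def __init__(self, vocab_size, text_dim, kg_dim, num_classes):
        super().__init__()
        # Stand-in for any text encoder (averages token embeddings).
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)
        self.classifier = nn.Linear(text_dim + kg_dim, num_classes)

    def forward(self, token_ids, kg_entity_vecs):
        # kg_entity_vecs: mean of KG embeddings (e.g., from TransE) for
        # entities linked in the input text; this grounds the prediction
        # in explicit graph concepts.
        text_vec = self.text_encoder(token_ids)
        fused = torch.cat([text_vec, kg_entity_vecs], dim=-1)
        return self.classifier(fused)

model = KnowledgeInfusedClassifier(vocab_size=10000, text_dim=64,
                                   kg_dim=32, num_classes=2)
tokens = torch.randint(0, 10000, (4, 12))  # batch of 4 token sequences
entity_vecs = torch.randn(4, 32)           # linked-entity KG embeddings
logits = model(tokens, entity_vecs)        # shape (4, 2)
```
Because the infused vectors correspond to named KG entities, a prediction can be traced back to explicit concepts, which is the interpretability gain the article argues for.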
Related papers
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which combines multiple pieces of knowledge to infer new knowledge (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
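A toy illustration of the OCKR setting described in the entry above: two facts stored separately must be composed to derive a third. The facts and the composition rule are hypothetical examples.
```python
# Toy illustration of out-of-context knowledge reasoning: combining two
# separately stored facts to infer a third. The facts and the
# composition rule are hypothetical examples.
facts = {
    ("Marie", "born_in", "Warsaw"),
    ("Warsaw", "located_in", "Poland"),
}

def infer_birth_country(person, facts):
    """Chain born_in with located_in to derive born_in_country."""
    for (s, r, o) in facts:
        if s == person and r == "born_in":
            for (s2, r2, o2) in facts:
                if s2 == o and r2 == "located_in":
                    return (person, "born_in_country", o2)
    return None

print(infer_birth_country("Marie", facts))
# ('Marie', 'born_in_country', 'Poland')
```
Per the entry above, the reported limitation is that LLMs struggle with exactly this kind of composition when the constituent facts are not presented together in context.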
- Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency and robustness to observational noise and task-distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings (a minimal matching sketch follows this entry).
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
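One hedged reading of "matching regions of images with corresponding semantic embeddings" in the entry above is region-level similarity scoring against class semantic vectors, sketched below; the shapes, cosine scoring, and max-pooling aggregation are illustrative assumptions, not the paper's exact framework.
```python
# Sketch of zero-shot region-to-semantics matching: score each image
# region feature against class semantic embeddings (e.g., attribute or
# KG-derived vectors) and aggregate over regions. Dimensions and the
# max-pooling choice are illustrative assumptions.
import numpy as np

def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

regions = l2_normalize(np.random.randn(36, 128))       # 36 region features
class_embeds = l2_normalize(np.random.randn(50, 128))  # 50 class vectors

sim = regions @ class_embeds.T  # (36, 50) cosine similarities
scores = sim.max(axis=0)        # best-matching region per class
predicted_class = int(scores.argmax())
```
Unseen classes are handled because scoring depends only on the class's semantic vector, not on labeled training images for that class.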
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
A lack of interpretability, robustness, and out-of-distribution generalization is becoming a challenge for existing visual models.
Inspired by the strong inference abilities of human-level agents, researchers have devoted great effort in recent years to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Algebraic Learning: Towards Interpretable Information Modeling [0.0]
This thesis addresses the issue of interpretability in general information modeling and endeavors to ease the problem from two angles.
Firstly, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally.
Secondly, given a trained model, various methods could be applied to extract further insights about the underlying system.
arXiv Detail & Related papers (2022-03-13T15:53:39Z)
- Knowledge Modelling and Active Learning in Manufacturing [0.6299766708197884]
Ontologies and Knowledge Graphs provide means to model and relate a wide range of concepts, problems, and configurations.
Both can be used to generate new knowledge through deductive inference and identify missing knowledge.
Active learning can be used to identify the most informative data instances for which to obtain user feedback, reducing friction and maximizing knowledge acquisition (a minimal uncertainty-sampling sketch follows this entry).
arXiv Detail & Related papers (2021-07-05T22:07:21Z)
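As a concrete instance of the active-learning step described above, here is a minimal uncertainty-sampling loop that queries the unlabeled instances with the highest predictive entropy; the entropy criterion is a standard choice and an assumption here, not necessarily the one used in the paper.
```python
# Minimal uncertainty sampling: query the unlabeled instances whose
# predicted class distribution has the highest entropy. The entropy
# criterion is a standard active-learning choice, assumed here for
# illustration.
import numpy as np

def entropy(p, eps=1e-12):
    return -(p * np.log(p + eps)).sum(axis=1)

def select_queries(probs, k=5):
    """probs: (n_unlabeled, n_classes) model predictions."""
    return np.argsort(-entropy(probs))[:k]  # k most uncertain instances

probs = np.random.dirichlet(alpha=[1, 1, 1], size=100)  # mock predictions
query_idx = select_queries(probs, k=5)  # send these to the user for labels
```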
- EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case [13.833923272291853]
We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations.
The X-NeSyL methodology makes concrete use of two notions of explanation, at inference time and at training time respectively (a hypothetical sketch follows this entry).
We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification, and demonstrate that our approach improves explainability and performance.
arXiv Detail & Related papers (2021-04-24T09:06:08Z)
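A hypothetical sketch of what jointly learning deep and symbolic representations can look like: a shared encoder feeds both a class head and a part/attribute head whose outputs map onto named concepts in an expert knowledge graph, so a prediction can be explained through the parts it detected. This is an illustrative reading of the entry above, not the published X-NeSyL design.
```python
# Hypothetical sketch of jointly learning deep and symbolic
# representations: a shared encoder feeds a class head and a
# part/attribute head whose outputs correspond to named concepts in an
# expert knowledge graph. Not the published X-NeSyL architecture.
import torch
import torch.nn as nn

class DualHeadExplainableNet(nn.Module):
    def __init__(self, in_dim=512, num_parts=10, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.part_head = nn.Linear(256, num_parts)     # symbolic: KG concepts
        self.class_head = nn.Linear(256, num_classes)  # final label

    def forward(self, x):
        h = self.encoder(x)
        return self.class_head(h), torch.sigmoid(self.part_head(h))

net = DualHeadExplainableNet()
x = torch.randn(2, 512)  # mock image features
class_logits, part_probs = net(x)
# Training would combine a classification loss with a part-supervision
# loss so explanations (detected parts) stay faithful to the KG.
```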
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A Heterogeneous Graph with Factual, Temporal and Logical Knowledge for Question Answering Over Dynamic Contexts [81.4757750425247]
We study question answering over a dynamic textual environment.
We develop a graph neural network over the constructed graph and train the model in an end-to-end manner (a minimal message-passing sketch follows below).
arXiv Detail & Related papers (2020-04-25T04:53:54Z)
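To ground the modeling step in the entry above, here is a minimal message-passing layer over a graph given as an adjacency matrix; heterogeneous node/edge typing and the QA-specific heads are omitted, and all shapes are illustrative.
```python
# Minimal graph-neural-network message passing over an adjacency matrix:
# each node averages its neighbors' features and applies a shared linear
# map. Heterogeneous node/edge typing and the QA heads are omitted;
# purely an illustrative sketch.
import numpy as np

def gnn_layer(H, A, W):
    """H: (n, d) node features; A: (n, n) adjacency; W: (d, d') weights."""
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    messages = (A @ H) / deg             # mean over each node's neighbors
    return np.maximum(0, messages @ W)   # ReLU nonlinearity

n, d = 6, 8
H = np.random.randn(n, d)                         # initial node features
A = (np.random.rand(n, n) > 0.6).astype(float)    # mock graph structure
H1 = gnn_layer(H, A, np.random.randn(d, d))       # one propagation round
```
Stacking such layers lets information from facts, timestamps, and rules propagate across the constructed graph before answer prediction.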
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.