EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep
learning representations with expert knowledge graphs: the MonuMAI cultural
heritage use case
- URL: http://arxiv.org/abs/2104.11914v1
- Date: Sat, 24 Apr 2021 09:06:08 GMT
- Title: EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep
learning representations with expert knowledge graphs: the MonuMAI cultural
heritage use case
- Authors: Natalia Díaz-Rodríguez, Alberto Lamas, Jules Sanchez, Gianni
Franchi, Ivan Donadello, Siham Tabik, David Filliat, Policarpo Cruz, Rosana
Montes, Francisco Herrera
- Abstract summary: We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations.
The X-NeSyL methodology involves two notions of explanation, used at inference and training time respectively.
We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification and demonstrate that our approach improves both explainability and performance.
- Score: 13.833923272291853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The latest Deep Learning (DL) models for detection and classification have
achieved an unprecedented performance over classical machine learning
algorithms. However, DL models are black-box methods hard to debug, interpret,
and certify. DL alone cannot provide explanations that can be validated by a
non-technical audience. In contrast, symbolic AI systems that convert concepts
into rules or symbols -- such as knowledge graphs -- are easier to explain.
However, they present lower generalisation and scaling capabilities. A very
important challenge is to fuse DL representations with expert knowledge. One
way to address this challenge, as well as the performance-explainability
trade-off, is to leverage the best of both streams without discarding domain
expert knowledge. We tackle this problem by assuming that the symbolic knowledge
is expressed in the form of a domain expert knowledge graph. We present the
eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn
both symbolic and deep representations, together with an explainability metric
to assess the level of alignment of machine and human expert explanations. The
ultimate objective is to fuse DL representations with expert domain knowledge
during the learning process to serve as a sound basis for explainability.
The X-NeSyL methodology involves two notions of explanation, used at inference
and training time respectively: 1) EXPLANet: Expert-aligned
eXplainable Part-based cLAssifier NETwork Architecture, a compositional CNN
that makes use of symbolic representations, and 2) SHAP-Backprop, an
explainable AI-informed training procedure that guides the DL process to align
with such symbolic representations, expressed in the form of knowledge graphs.
We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade
image classification, and demonstrate that our approach improves both
explainability and performance.
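To make these two components concrete, below is a minimal, hypothetical PyTorch
sketch of (1) an EXPLANet-style part-based classifier, in which an inspectable
part-score layer feeds the final class prediction, and (2) a SHAP-Backprop-style
penalty that discourages attributions the expert knowledge graph does not
support. The layer sizes, the kg_mask encoding of the knowledge graph, and the
exact penalty form are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only -- not the authors' code. Shapes, names and the
# penalty form are assumptions chosen to mirror the two ideas in the abstract.

class PartBasedClassifier(nn.Module):
    """EXPLANet-style idea: a CNN scores interpretable parts (e.g. architectural
    elements) and a small head maps that symbolic part vector to the final class
    (e.g. architectural style)."""
    def __init__(self, num_parts: int, num_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.part_head = nn.Linear(64, num_parts)        # inspectable "symbolic" layer
        self.class_head = nn.Linear(num_parts, num_classes)

    def forward(self, images: torch.Tensor):
        part_scores = self.part_head(self.backbone(images))
        return part_scores, self.class_head(part_scores)

def shap_alignment_penalty(part_shap: torch.Tensor, kg_mask: torch.Tensor) -> torch.Tensor:
    """SHAP-Backprop-style idea: penalise positive attribution on parts that the
    expert knowledge graph does not link to the predicted class. part_shap holds
    per-part attribution values (e.g. SHAP values) for the predicted class;
    kg_mask is 1 where the graph relates a part to that class, 0 otherwise."""
    return ((1.0 - kg_mask) * part_shap.clamp(min=0)).sum(dim=1).mean()

# Toy usage with random data (sizes are placeholders, not MonuMAI's exact counts).
model = PartBasedClassifier(num_parts=15, num_classes=4)
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,))
part_scores, logits = model(images)
part_shap = torch.randn(8, 15)   # in practice: SHAP values computed over the part layer
kg_mask = torch.ones(8, 15)      # in practice: derived from the expert knowledge graph
loss = F.cross_entropy(logits, labels) + 0.1 * shap_alignment_penalty(part_shap, kg_mask)
loss.backward()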
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph
Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z) - SpecXAI -- Spectral interpretability of Deep Learning Models [11.325580593182414]
XAI attempts to develop techniques that temper the impenetrable nature of the models and promote a level of understanding of their behavior.
Here we present our contribution to XAI methods in the form of a framework that we term SpecXAI.
We show how this framework can be used to not only understand the network but also manipulate it into a linear interpretable symbolic representation.
arXiv Detail & Related papers (2023-02-20T12:36:54Z) - Schema-aware Reference as Prompt Improves Data-Efficient Knowledge Graph
Construction [57.854498238624366]
We propose a retrieval-augmented approach, which retrieves schema-aware Reference As Prompt (RAP) for data-efficient knowledge graph construction.
RAP can dynamically leverage schema and knowledge inherited from human-annotated and weak-supervised data as a prompt for each sample.
arXiv Detail & Related papers (2022-10-19T16:40:28Z) - MAML and ANIL Provably Learn Representations [60.17417686153103]
We prove that two well-known meta-learning methods, MAML and ANIL, are capable of learning common representation among a set of given tasks.
Specifically, in the well-known multi-task linear representation learning setting, they are able to recover the ground-truth representation at an exponentially fast rate.
Our analysis illuminates that the driving force causing MAML and ANIL to recover the underlying representation is that they adapt the final layer of their model.
arXiv Detail & Related papers (2022-02-07T19:43:02Z) - Knowledge Graph Augmented Network Towards Multiview Representation
Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z) - RELAX: Representation Learning Explainability [10.831313203043514]
We propose RELAX, which is the first approach for attribution-based explanations of representations.
RELAX explains representations by measuring similarities in the representation space between an input and masked-out versions of itself (a minimal sketch of this masking idea appears after this list).
We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning.
arXiv Detail & Related papers (2021-12-19T14:51:31Z) - Semantics of the Black-Box: Can knowledge graphs help make deep learning
systems more interpretable and explainable? [4.2111286819721485]
Recent innovations in deep learning (DL) have shown enormous potential to impact individuals and society.
Black-Box nature of DL models and over-reliance on massive amounts of data poses challenges for interpretability and explainability of the system.
This article demonstrates how knowledge, provided as a knowledge graph, is incorporated into DL methods using knowledge-infused learning.
arXiv Detail & Related papers (2020-10-16T22:55:23Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aiming to attain Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
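As noted in the RELAX entry above, the masking idea described there can be
sketched in a few lines: compare the representation of the full input with
representations of randomly masked copies, and credit the kept pixels with the
resulting similarity. The encoder callable, the number of masks and the
cosine-similarity choice below are illustrative assumptions; this is not the
RELAX authors' code.

import numpy as np

# Hypothetical sketch of masked-similarity attribution for a representation.
# Assumes encoder maps an H x W x C array to a 1-D embedding.
def masked_similarity_attribution(encoder, image, num_masks=100, keep_prob=0.5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    base = encoder(image)
    base = base / (np.linalg.norm(base) + 1e-8)                 # embedding of the full input
    importance = np.zeros((h, w))
    coverage = np.full((h, w), 1e-8)
    for _ in range(num_masks):
        mask = (rng.random((h, w)) < keep_prob).astype(image.dtype)
        emb = encoder(image * mask[..., None])
        sim = float(emb @ base) / (np.linalg.norm(emb) + 1e-8)  # cosine similarity
        importance += sim * mask                                 # credit the kept pixels
        coverage += mask
    return importance / coverage                                 # mean similarity when kept

# Toy usage: a trivial "encoder" that averages over spatial positions.
heatmap = masked_similarity_attribution(lambda x: x.mean(axis=(0, 1)),
                                         np.random.rand(32, 32, 3), num_masks=20)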