Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks in Perception Tasks
- URL: http://arxiv.org/abs/2201.00572v1
- Date: Mon, 3 Jan 2022 10:35:47 GMT
- Title: Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks in Perception Tasks
- Authors: Gesina Schwalbe, Christian Wirth, Ute Schmid
- Abstract summary: We present a simple, yet effective, approach to verify whether a trained convolutional neural network (CNN) respects specified symbolic background knowledge.
The knowledge may consist of any fuzzy predicate logic rules.
We show that this approach benefits from fuzziness and calibrating the concept outputs.
- Score: 1.2246649738388387
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: One major drawback of deep neural networks (DNNs) for use in sensitive
application domains is their black-box nature. This makes it hard to verify or
monitor complex, symbolic requirements. In this work, we present a simple, yet
effective, approach to verify whether a trained convolutional neural network
(CNN) respects specified symbolic background knowledge. The knowledge may
consist of any fuzzy predicate logic rules. For this, we utilize methods from
explainable artificial intelligence (XAI): First, using concept embedding
analysis, the output of a computer vision CNN is post-hoc enriched by concept
outputs; second, logical rules from prior knowledge are fuzzified to serve as
continuous-valued functions on the concept outputs. These can be evaluated with
little computational overhead. We demonstrate three diverse use-cases of our
method on state-of-the-art object detectors: Finding corner cases, utilizing the
rules for detecting and localizing DNN misbehavior during runtime, and
comparing the logical consistency of DNNs. The latter is used to find related
differences between EfficientDet D1 and Mask R-CNN object detectors. We show
that this approach benefits from fuzziness and calibrating the concept outputs.
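To make the verification step concrete, the following is a minimal sketch of how a fuzzified background-knowledge rule could be evaluated on calibrated concept outputs. The concept names, the example rule, and the chosen fuzzy operators (product t-norm, probabilistic sum, Reichenbach implication) are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (not the authors' code): evaluate a fuzzified rule on
# calibrated concept outputs attached post hoc to a CNN.

def fuzzy_not(a):            # standard negation
    return 1.0 - a

def fuzzy_and(a, b):         # product t-norm
    return a * b

def fuzzy_or(a, b):          # probabilistic sum (dual of the product t-norm)
    return a + b - a * b

def fuzzy_implies(a, b):     # Reichenbach implication: 1 - a + a*b
    return 1.0 - a + a * b

# Hypothetical calibrated concept outputs in [0, 1] for one detection,
# e.g. from linear concept probes on intermediate feature maps.
concept_scores = {"person": 0.92, "head": 0.15, "leg": 0.88}

# Example rule from background knowledge: person(x) -> head(x) OR leg(x)
truth = fuzzy_implies(
    concept_scores["person"],
    fuzzy_or(concept_scores["head"], concept_scores["leg"]),
)
print(f"rule truth value: {truth:.3f}")  # low values flag potential misbehavior
```

In the runtime-monitoring use case, detections whose rule truth value falls below a chosen threshold could be flagged as corner cases or potential DNN misbehavior.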
Related papers
- Unveiling Ontological Commitment in Multi-Modal Foundation Models [7.485653059927206]
Deep neural networks (DNNs) automatically learn rich representations of concepts and respective reasoning.
We propose a method that extracts the learned superclass hierarchy from a multimodal DNN for a given set of leaf concepts.
An initial evaluation study shows that meaningful ontological class hierarchies can be extracted from state-of-the-art foundation models.
arXiv Detail & Related papers (2024-09-25T17:24:27Z)
- GINN-KAN: Interpretability pipelining with applications in Physics Informed Neural Networks [5.2969467015867915]
We introduce the concept of interpretability pipelining, incorporating multiple interpretability techniques to outperform each individual technique.
We evaluate two recent models selected for their potential to incorporate interpretability into standard neural network architectures.
We introduce a novel interpretable neural network GINN-KAN that synthesizes the advantages of both models.
arXiv Detail & Related papers (2024-08-27T04:57:53Z)
- Symbol Correctness in Deep Neural Networks Containing Symbolic Layers [0.0]
We formalize a high-level principle that can guide the design and analysis of NS-DNNs.
We show that symbol correctness is a necessary property for NS-DNN explainability and transfer learning.
arXiv Detail & Related papers (2024-02-06T03:33:50Z)
- SimpleMind adds thinking to deep neural networks [3.888848425698769]
Deep neural networks (DNNs) detect patterns in data and have shown versatility and strong performance in many computer vision applications.
DNNs alone are susceptible to obvious mistakes that violate simple, common sense concepts and are limited in their ability to use explicit knowledge to guide their search and decision making.
This paper introduces SimpleMind, an open-source software framework for Cognitive AI focused on medical image understanding.
arXiv Detail & Related papers (2022-12-02T03:38:20Z)
- Spiking Neural Network Decision Feedback Equalization [70.3497683558609]
We propose an SNN-based equalizer with a feedback structure akin to the decision feedback equalizer (DFE).
We show that our approach clearly outperforms conventional linear equalizers for three different exemplary channels.
The proposed SNN with a decision feedback structure enables the path to competitive energy-efficient transceivers.
arXiv Detail & Related papers (2022-11-09T09:19:15Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed logical neural networks (LNN).
Compared to others, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Reinforcement Learning with External Knowledge by using Logical Neural Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- An Adversarial Approach for Explaining the Predictions of Deep Neural Networks [9.645196221785694]
We present a novel algorithm for explaining the predictions of a deep neural network (DNN) using adversarial machine learning.
Our approach identifies the relative importance of input features in relation to the predictions based on the behavior of an adversarial attack on the DNN.
Our analysis enables us to produce consistent and efficient explanations (see the illustrative sketch after this list).
arXiv Detail & Related papers (2020-05-20T18:06:53Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
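As a companion to the adversarial-explanation entry above, here is a minimal sketch of how per-feature importance might be derived from an adversarial attack on a DNN. The FGSM-style perturbation and the gradient-magnitude scoring are illustrative assumptions, not necessarily the cited paper's method.

```python
# Illustrative sketch only: feature importance derived from an FGSM-style
# attack. The attack choice and the scoring rule are assumptions; the cited
# paper may use a different attack and importance measure.
import torch
import torch.nn.functional as F

def adversarial_feature_importance(model, x, target, eps=0.01):
    """Rank input features by how strongly an adversarial attack exploits them."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    # FGSM perturbs each feature by eps * sign(gradient); features with large
    # gradient magnitude are the ones the attack relies on most, so we use
    # that magnitude (normalized) as the importance score.
    importance = x_adv.grad.abs()
    importance = importance / (importance.max() + 1e-12)
    adversarial_example = (x + eps * x_adv.grad.sign()).detach()
    return importance, adversarial_example
```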