SimpleMind adds thinking to deep neural networks
- URL: http://arxiv.org/abs/2212.00951v1
- Date: Fri, 2 Dec 2022 03:38:20 GMT
- Title: SimpleMind adds thinking to deep neural networks
- Authors: Youngwon Choi, M. Wasil Wahi-Anwar, Matthew S. Brown
- Abstract summary: Deep neural networks (DNNs) detect patterns in data and have shown versatility and strong performance in many computer vision applications.
DNNs alone are susceptible to obvious mistakes that violate simple, common sense concepts and are limited in their ability to use explicit knowledge to guide their search and decision making.
This paper introduces SimpleMind, an open-source software framework for Cognitive AI focused on medical image understanding.
- Score: 3.888848425698769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) detect patterns in data and have shown
versatility and strong performance in many computer vision applications.
However, DNNs alone are susceptible to obvious mistakes that violate simple,
common sense concepts and are limited in their ability to use explicit
knowledge to guide their search and decision making. While overall DNN
performance metrics may be good, these obvious errors, coupled with a lack of
explainability, have prevented widespread adoption for crucial tasks such as
medical image analysis. The purpose of this paper is to introduce SimpleMind,
an open-source software framework for Cognitive AI focused on medical image
understanding. It allows creation of a knowledge base that describes expected
characteristics and relationships between image objects in an intuitive
human-readable form. The SimpleMind framework brings thinking to DNNs by: (1)
providing methods for reasoning with the knowledge base about image content,
such as spatial inferencing and conditional reasoning to check DNN outputs; (2)
applying process knowledge, in the form of general-purpose software agents
that are chained together to accomplish image preprocessing, DNN prediction,
and result post-processing; and (3) performing automatic co-optimization of all
knowledge base parameters to adapt agents to specific problems. SimpleMind
enables reasoning on multiple detected objects to ensure consistency, providing
cross checking between DNN outputs. This machine reasoning improves the
reliability and trustworthiness of DNNs through an interpretable model and
explainable decisions. Example applications are provided that demonstrate how
SimpleMind supports and improves deep neural networks by embedding them within
a Cognitive AI framework.
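To make mechanism (1) concrete, here is a minimal sketch of checking DNN outputs against a human-readable knowledge base. This is not SimpleMind's actual API: the schema, object names, and thresholds below are all hypothetical, and real anatomical constraints would be far richer.

```python
# A minimal sketch of knowledge-base-driven checking of DNN outputs.
# The schema, object names, and thresholds are hypothetical, not
# SimpleMind's actual knowledge base format.
import numpy as np

KNOWLEDGE_BASE = {
    "trachea": {"min_area": 50, "max_area": 2000, "above": "carina"},
    "carina": {"min_area": 10, "max_area": 500},
}

def centroid_row(mask):
    rows, _cols = np.nonzero(mask)
    return rows.mean()

def check_outputs(masks, kb):
    """Return violations of size and spatial constraints in DNN outputs."""
    violations = []
    for name, rules in kb.items():
        mask = masks.get(name)
        if mask is None or mask.sum() == 0:
            violations.append(f"{name}: expected object not detected")
            continue
        area = int(mask.sum())
        if not rules["min_area"] <= area <= rules["max_area"]:
            violations.append(f"{name}: implausible area {area}")
        other = rules.get("above")
        # Spatial inference: 'above' means a smaller row index in image coords.
        if (other in masks and masks[other].sum() > 0
                and centroid_row(mask) >= centroid_row(masks[other])):
            violations.append(f"{name}: expected to lie above {other}")
    return violations
```

An output that fails such a check could trigger a fallback agent or be flagged for review; this is the kind of cross checking between DNN outputs the abstract describes.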
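Mechanism (3), automatic co-optimization of knowledge base parameters, can likewise be sketched as a black-box search that scores the whole pipeline end to end. The naive random search below is only a stand-in for whatever optimizer SimpleMind actually uses, and it reuses the hypothetical schema from the previous sketch:

```python
# Naive random search over knowledge-base parameters; illustrative only.
# SimpleMind's actual co-optimization algorithm is not reproduced here.
import random

def tune_kb(kb, pipeline_score, trials=50, seed=0):
    """Perturb tunable KB parameters and keep the best end-to-end score."""
    rng = random.Random(seed)
    best_kb, best_score = kb, pipeline_score(kb)
    for _ in range(trials):
        candidate = {name: dict(rules) for name, rules in kb.items()}
        for rules in candidate.values():
            # Rescale the hypothetical area bounds from the sketch above.
            rules["min_area"] = max(1, int(rules["min_area"] * rng.uniform(0.5, 1.5)))
            rules["max_area"] = int(rules["max_area"] * rng.uniform(0.5, 1.5))
        score = pipeline_score(candidate)  # e.g., mean Dice on a tuning set
        if score > best_score:
            best_kb, best_score = candidate, score
    return best_kb, best_score
```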
Related papers
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties (a minimal sketch of the binarization itself appears after this list).
arXiv Detail & Related papers (2023-07-29T06:27:28Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - Great Truths are Always Simple: A Rather Simple Knowledge Encoder for
Enhancing the Commonsense Reasoning Capacity of Pre-Trained Models [89.98762327725112]
Commonsense reasoning in natural language is a desired ability of artificial intelligent systems.
For solving complex commonsense reasoning tasks, a typical solution is to enhance pre-trained language models (PTMs) with a knowledge-aware graph neural network (GNN) encoder.
Despite their effectiveness, these approaches are built on heavy architectures and cannot clearly explain how external knowledge resources improve the reasoning capacity of PTMs.
arXiv Detail & Related papers (2022-05-04T01:27:36Z) - Concept Embeddings for Fuzzy Logic Verification of Deep Neural Networks
in Perception Tasks [1.2246649738388387]
We present a simple, yet effective, approach to verify whether a trained convolutional neural network (CNN) respects specified symbolic background knowledge.
The knowledge may consist of any fuzzy predicate logic rules.
We show that this approach benefits from fuzziness and from calibrating the concept outputs (see the fuzzy-rule sketch after this list).
arXiv Detail & Related papers (2022-01-03T10:35:47Z) - Reinforcement Learning with External Knowledge by using Logical Neural
Networks [67.46162586940905]
A recent neuro-symbolic framework called Logical Neural Networks (LNNs) can simultaneously provide key properties of both neural networks and symbolic logic.
We propose an integrated method that enables model-free reinforcement learning from external knowledge sources.
arXiv Detail & Related papers (2021-03-03T12:34:59Z) - Utilizing Explainable AI for Quantization and Pruning of Deep Neural
Networks [0.495186171543858]
Recent efforts to understand and explain AI (Artificial Intelligence) methods have led to a new research area, termed explainable AI.
In this paper, we utilize explainable AI methods, mainly the DeepLIFT method.
arXiv Detail & Related papers (2020-08-20T16:52:58Z) - Noise-Response Analysis of Deep Neural Networks Quantifies Robustness
and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can harbor 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence, orders of magnitude faster than existing approaches.
arXiv Detail & Related papers (2020-07-31T23:52:58Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - An Adversarial Approach for Explaining the Predictions of Deep Neural
Networks [9.645196221785694]
We present a novel algorithm for explaining the predictions of a deep neural network (DNN) using adversarial machine learning.
Our approach identifies the relative importance of input features in relation to the predictions based on the behavior of an adversarial attack on the DNN.
Our analysis enables us to produce consistent and efficient explanations (a perturbation-based sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-05-20T18:06:53Z) - Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.