Wider Vision: Enriching Convolutional Neural Networks via Alignment to
External Knowledge Bases
- URL: http://arxiv.org/abs/2102.11132v1
- Date: Mon, 22 Feb 2021 16:00:03 GMT
- Title: Wider Vision: Enriching Convolutional Neural Networks via Alignment to
External Knowledge Bases
- Authors: Xuehao Liu, Sarah Jane Delany, Susan McKeever
- Abstract summary: We aim to explain and expand CNN models via the mirroring, or alignment, of a CNN to an external knowledge base.
This will allow us to give a semantic context or label for each visual feature.
Our results show that in the aligned embedding space, nodes from the knowledge graph are close to the CNN feature nodes that have similar meanings.
- Score: 0.3867363075280543
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Deep learning models suffer from opaqueness. For Convolutional Neural
Networks (CNNs), current research strategies for explaining models focus on the
target classes within the associated training dataset. As a result, the
understanding of hidden feature map activations is limited by the
discriminative knowledge gleaned during training. The aim of our work is to
explain and expand CNN models via the mirroring, or alignment, of a CNN to an
external knowledge base. This will allow us to give a semantic context or label
for each visual feature. We can match CNN feature activations to nodes in our
external knowledge base. This supports knowledge-based interpretation of the
features associated with model decisions. To demonstrate our approach, we build
two separate graphs. We use an entity alignment method to align the feature
nodes in a CNN with the nodes in a ConceptNet based knowledge graph. We then
measure the proximity of CNN graph nodes to semantically meaningful knowledge
base nodes. Our results show that in the aligned embedding space, nodes from
the knowledge graph are close to the CNN feature nodes that have similar
meanings, indicating that nodes from an external knowledge base can act as
explanatory semantic references for features in the model. We analyse a variety
of graph building methods in order to improve the results from our embedding
space. We further demonstrate that by using hierarchical relationships from our
external knowledge base, we can locate new unseen classes outside the CNN
training set in our embedding space, based on visual feature activations. This
suggests that we can adapt our approach to identify unseen classes based on CNN
feature activations. Our demonstrated approach of aligning a CNN with an
external knowledge base paves the way to reason about and beyond the trained
model, with future adaptations to explainable models and zero-shot learning.
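The central measurement in the abstract — ranking knowledge-graph nodes by their proximity to CNN feature nodes in the aligned embedding space — can be sketched minimally as follows. This is an illustrative assumption, not the authors' code: the function names and toy 2-D vectors are hypothetical, whereas the paper derives its embeddings from entity alignment between a CNN feature graph and a ConceptNet-based knowledge graph.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_concepts(feature_vec, concept_embeddings, k=3):
    """Rank knowledge-graph concepts by proximity to a CNN feature node
    in the shared (aligned) embedding space."""
    scored = [(name, cosine_similarity(feature_vec, vec))
              for name, vec in concept_embeddings.items()]
    return sorted(scored, key=lambda t: -t[1])[:k]

# Toy aligned embedding space (2-D for illustration; real
# entity-alignment embeddings are much higher-dimensional).
concepts = {
    "dog":   np.array([0.9, 0.1]),
    "wheel": np.array([0.1, 0.9]),
    "fur":   np.array([0.8, 0.3]),
}
feature_node = np.array([0.85, 0.2])   # embedding of one CNN feature node
print(nearest_concepts(feature_node, concepts, k=2))
```

If the alignment has worked, the nearest concepts act as semantic labels for the feature node, which is exactly the interpretation the paper proposes.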
Related papers
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
arXiv Detail & Related papers (2024-09-25T12:28:48Z) - Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z) - A novel feature-scrambling approach reveals the capacity of convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are one of the most successful computer vision systems to solve object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z) - Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge adaptation framework via reinforcement learning for dynamic graph neural networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
arXiv Detail & Related papers (2021-01-18T06:38:41Z) - Node2Seq: Towards Trainable Convolutions in Graph Neural Networks [59.378148590027735]
We propose a graph network layer, known as Node2Seq, to learn node embeddings with explicitly trainable weights for different neighboring nodes.
For a target node, our method sorts its neighboring nodes via an attention mechanism and then employs 1D convolutional neural networks (CNNs) to enable explicit weights for information aggregation.
In addition, we propose to incorporate non-local information for feature learning in an adaptive manner based on the attention scores.
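The Node2Seq aggregation idea summarized above — order neighbors by attention score, then convolve over the resulting sequence so each position receives an explicit weight — can be illustrated with a minimal NumPy sketch. The function name, fixed kernel, and mean pooling here are assumptions for illustration; in the actual model the attention scores and convolution weights are learned end to end.

```python
import numpy as np

def node2seq_aggregate(neighbors, attn_weights, conv_kernel):
    """Sketch of Node2Seq-style aggregation: sort neighbors by
    attention score, then apply a 1-D convolution along the sequence
    so each sequence position gets an explicit weight."""
    order = np.argsort(-np.asarray(attn_weights))   # highest attention first
    seq = np.asarray(neighbors, dtype=float)[order]  # (n_neighbors, dim)
    k = len(conv_kernel)
    # valid 1-D convolution along the neighbor axis, shared across feature dims
    out = np.stack([
        sum(conv_kernel[j] * seq[i + j] for j in range(k))
        for i in range(len(seq) - k + 1)
    ])
    return out.mean(axis=0)  # pooled neighborhood message for the target node
```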
arXiv Detail & Related papers (2021-01-06T03:05:37Z) - Decoding CNN based Object Classifier Using Visualization [6.666597301197889]
We visualize what types of features are extracted in different convolution layers of a CNN.
Visualizing heat maps of activations helps us understand how a CNN classifies and localizes different objects in an image.
arXiv Detail & Related papers (2020-07-15T05:01:27Z) - An Information-theoretic Visual Analysis Framework for Convolutional Neural Networks [11.15523311079383]
We introduce a data model to organize the data that can be extracted from CNN models.
We then propose two ways to calculate entropy under different circumstances.
We develop a visual analysis system, CNNSlicer, to interactively explore the amount of information changes inside the model.
arXiv Detail & Related papers (2020-05-02T21:36:50Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing (low-pass) filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
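The smoothing curriculum described above — low-pass filtering feature maps with a strength that is annealed over training — can be sketched as follows. The kernel radius, linear schedule, and function names are illustrative assumptions, not the paper's implementation, which applies the filters inside the network during training.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=2):
    """Normalized 1-D Gaussian kernel for separable low-pass filtering."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_feature_map(fmap, sigma):
    """Low-pass filter a 2-D feature map with a separable Gaussian blur."""
    k = gaussian_kernel1d(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, fmap)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def curriculum_sigma(epoch, total_epochs, sigma0=2.0):
    """Anneal the blur: strong smoothing early in training, almost none at the end."""
    return max(sigma0 * (1 - epoch / total_epochs), 1e-3)
```

Early epochs see heavily smoothed (information-poor) feature maps; as sigma shrinks, progressively more high-frequency detail reaches later layers, matching the curriculum intuition in the summary.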
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.