Understanding Neural Network Systems for Image Analysis using Vector Spaces and Inverse Maps
- URL: http://arxiv.org/abs/2402.00261v1
- Date: Thu, 1 Feb 2024 01:11:15 GMT
- Title: Understanding Neural Network Systems for Image Analysis using Vector Spaces and Inverse Maps
- Authors: Rebecca Pattichis and Marios S. Pattichis
- Abstract summary: We introduce techniques from Linear Algebra to model neural network layers as maps between signal spaces.
We also introduce the concept of invertible networks and an algorithm for computing input images that yield specific outputs.
- Score: 3.069161525997864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is strong interest in developing mathematical methods that can be used
to understand complex neural networks used in image analysis. In this paper, we
introduce techniques from Linear Algebra to model neural network layers as maps
between signal spaces. First, we demonstrate how signal spaces can be used to
visualize weight spaces and convolutional layer kernels. We also demonstrate
how residual vector spaces can be used to further visualize information lost at
each layer. Second, we introduce the concept of invertible networks and an
algorithm for computing input images that yield specific outputs. We
demonstrate our approach on two invertible networks and ResNet18.
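The core idea of an invertible network, computing the input that yields a given output, can be illustrated in the simplest possible setting. The sketch below is not the paper's algorithm; it only shows, for a single made-up 2x2 linear layer y = Wx, that when det(W) is nonzero the layer is invertible and the preimage of a target output can be computed in closed form.

```python
# Minimal toy sketch (not the paper's algorithm): a 2x2 linear layer
# y = W x is invertible when det(W) != 0; the input that yields a
# target output is x = W^{-1} y. Weights are made up for illustration.
W = [[2.0, 1.0],
     [1.0, 3.0]]

def apply(W, x):
    # Forward map: matrix-vector product W x.
    return [W[0][0]*x[0] + W[0][1]*x[1],
            W[1][0]*x[0] + W[1][1]*x[1]]

def invert(W, y):
    # Inverse map via the closed-form 2x2 inverse.
    det = W[0][0]*W[1][1] - W[0][1]*W[1][0]
    assert det != 0, "layer is not invertible"
    return [( W[1][1]*y[0] - W[0][1]*y[1]) / det,
            (-W[1][0]*y[0] + W[0][0]*y[1]) / det]

y_target = [5.0, 10.0]
x = invert(W, y_target)        # input computed for the target output
print(apply(W, x))             # -> [5.0, 10.0]
```

Real convolutional layers are also linear maps and admit the same vector-space analysis, but with non-square weight matrices the preimage is generally a set rather than a single point.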
Related papers
- Investigating Map-Based Path Loss Models: A Study of Feature Representations in Convolutional Neural Networks [20.62701088477552]
We investigate different methods of representing scalar features in convolutional neural networks.
We find that representing scalar features as image channels results in the strongest generalization.
arXiv Detail & Related papers (2025-01-13T18:15:01Z)
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
arXiv Detail & Related papers (2024-09-25T12:28:48Z)
- Half-Space Feature Learning in Neural Networks [2.3249139042158853]
There currently exist two extreme viewpoints for neural network feature learning.
Based on a novel viewpoint, we argue that neither interpretation is likely to be correct.
We use this alternate interpretation to motivate a model, called the Deep Linearly Gated Network (DLGN).
arXiv Detail & Related papers (2024-04-05T12:03:19Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Hamming Similarity and Graph Laplacians for Class Partitioning and Adversarial Image Detection [2.960821510561423]
We investigate the potential for ReLU activation patterns (encoded as bit vectors) to aid in understanding and interpreting the behavior of neural networks.
We utilize Representational Dissimilarity Matrices (RDMs) to investigate the coherence of data within the embedding spaces of a deep neural network.
We demonstrate that bit vectors aid in adversarial image detection, achieving over 95% accuracy in separating adversarial and non-adversarial images.
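The encoding described in this summary can be sketched directly: record a 1 for each ReLU unit whose pre-activation is positive, then compare two inputs by the fraction of matching bits. The sketch below uses toy pre-activation values; it illustrates only the bit-vector encoding and Hamming similarity, not the paper's full RDM analysis.

```python
# Toy sketch of ReLU activation patterns encoded as bit vectors and
# compared by Hamming similarity. Pre-activation values are made up.
def relu_bits(preacts):
    # 1 where the unit fires (positive pre-activation), else 0.
    return [1 if a > 0 else 0 for a in preacts]

def hamming_similarity(b1, b2):
    # Fraction of positions where the two bit vectors agree.
    matches = sum(x == y for x, y in zip(b1, b2))
    return matches / len(b1)

a = relu_bits([0.3, -1.2, 0.0, 2.5])   # -> [1, 0, 0, 1]
b = relu_bits([0.1, -0.4, 1.7, 3.0])   # -> [1, 0, 1, 1]
print(hamming_similarity(a, b))        # 3 of 4 bits match -> 0.75
```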
arXiv Detail & Related papers (2023-05-02T22:16:15Z)
- Extracting Semantic Knowledge from GANs with Unsupervised Learning [65.32631025780631]
Generative Adversarial Networks (GANs) encode semantics in feature maps in a linearly separable form.
We propose a novel clustering algorithm, named KLiSH, which leverages the linear separability to cluster GAN's features.
KLiSH succeeds in extracting fine-grained semantics of GANs trained on datasets of various objects.
arXiv Detail & Related papers (2022-11-30T03:18:16Z)
- Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs).
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied to optimal wireless resource allocation and to a hate speech localization task, offering improved performance over traditional graph neural networks.
arXiv Detail & Related papers (2022-09-23T00:33:04Z)
- Neural Networks as Paths through the Space of Representations [5.165741406553346]
We develop a simple idea for interpreting the layer-by-layer construction of useful representations.
We formalize this intuitive idea of "distance" by leveraging recent work on metric representational similarity.
With this framework, the layer-wise computation implemented by a deep neural network can be viewed as a path in a high-dimensional representation space.
arXiv Detail & Related papers (2022-06-22T11:59:10Z)
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural networks maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
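For the simplest instance of a map from an n-dimensional space to an (n-1)-dimensional one, the preimage of a point is indeed a 1-D set. The sketch below is a toy linear case, not the paper's Riemannian construction: for f(x) = w . x from R^2 to R, the preimage of a value y is the line through a particular solution along the null-space direction of w, and every point on that line maps to y.

```python
# Illustrative toy case (linear, not the paper's Riemannian method):
# for f(x) = w . x mapping R^2 -> R, the preimage of y is the 1-D line
# x_p + t * n, where x_p is a particular solution and w . n = 0.
w = (3.0, 4.0)                 # made-up weights for f(x) = 3*x0 + 4*x1
y = 10.0

norm2 = w[0]**2 + w[1]**2
x_p = (w[0]*y/norm2, w[1]*y/norm2)   # particular (minimum-norm) solution
n = (-w[1], w[0])                    # null-space direction: w . n = 0

for t in (-1.0, 0.0, 1.0):           # every point on the line maps to y
    x = (x_p[0] + t*n[0], x_p[1] + t*n[1])
    print(round(w[0]*x[0] + w[1]*x[1], 6))  # prints 10.0 each time
```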
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Leveraging Sparse Linear Layers for Debuggable Deep Networks [86.94586860037049]
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
The resulting sparse explanations can help to identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks.
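The debuggability benefit of sparsity can be shown with a toy readout. The sketch below is not the paper's training procedure (which fits sparse linear models properly, e.g. with regularization); it only illustrates the payoff: with a made-up feature vector and dense readout, keeping just the largest-magnitude weights leaves the prediction nearly unchanged while naming the handful of features that drive the decision.

```python
# Toy illustration of why a sparse linear readout is easier to audit.
# Feature activations and weights are hypothetical.
features = [0.9, 0.1, 0.8, 0.05, 0.7]        # made-up deep features
dense_w  = [2.0, 0.01, -1.5, 0.02, 1.0]      # made-up dense readout

def predict(w, f):
    return sum(wi * fi for wi, fi in zip(w, f))

# Keep only the k largest-magnitude weights; zero out the rest.
k = 3
keep = sorted(range(len(dense_w)), key=lambda i: abs(dense_w[i]))[-k:]
sparse_w = [w if i in keep else 0.0 for i, w in enumerate(dense_w)]

print(round(predict(dense_w, features), 3))   # dense prediction: 1.302
print(round(predict(sparse_w, features), 3))  # sparse prediction: 1.3
print(sorted(keep))                           # features 0, 2, 4 explain it
```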
arXiv Detail & Related papers (2021-05-11T08:15:25Z)
- Semiotic Aggregation in Deep Learning [0.0]
Convolutional neural networks utilize a hierarchy of neural network layers.
We analyze the saliency maps of these layers from the perspective of semiotics.
We show how the obtained knowledge can be used to explain the neural decision model.
arXiv Detail & Related papers (2021-04-22T08:55:54Z)
- Quiver Signal Processing (QSP) [145.6921439353007]
We state the basics for a signal processing framework on quiver representations.
We propose a signal processing framework that allows us to handle heterogeneous multidimensional information in networks.
arXiv Detail & Related papers (2020-10-22T08:40:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.