Understanding Neural Network Systems for Image Analysis using Vector
Spaces and Inverse Maps
- URL: http://arxiv.org/abs/2402.00261v1
- Date: Thu, 1 Feb 2024 01:11:15 GMT
- Title: Understanding Neural Network Systems for Image Analysis using Vector
Spaces and Inverse Maps
- Authors: Rebecca Pattichis and Marios S. Pattichis
- Abstract summary: We introduce techniques from Linear Algebra to model neural network layers as maps between signal spaces.
We also introduce the concept of invertible networks and an algorithm for computing input images that yield specific outputs.
- Score: 3.069161525997864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is strong interest in developing mathematical methods that can be used
to understand complex neural networks used in image analysis. In this paper, we
introduce techniques from Linear Algebra to model neural network layers as maps
between signal spaces. First, we demonstrate how signal spaces can be used to
visualize weight spaces and convolutional layer kernels. We also demonstrate
how residual vector spaces can be used to further visualize information lost at
each layer. Second, we introduce the concept of invertible networks and an
algorithm for computing input images that yield specific outputs. We
demonstrate our approach on two invertible networks and ResNet18.
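The paper's inversion idea can be illustrated with a minimal numpy sketch. This is not the authors' algorithm or their networks: the two-layer architecture, the random square weight matrices, and the leaky-ReLU activation are all illustrative assumptions chosen only because they make every layer an invertible map, so an input producing a given output can be computed by inverting each layer in reverse order.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5

def leaky(z):
    # Pointwise leaky ReLU: strictly monotone, hence invertible.
    return np.where(z > 0, z, alpha * z)

def leaky_inv(y):
    # Exact inverse of the leaky ReLU above.
    return np.where(y > 0, y, y / alpha)

n = 4  # toy "image" dimension
# Random Gaussian square matrices are invertible with probability 1.
W1 = rng.normal(size=(n, n)); b1 = rng.normal(size=n)
W2 = rng.normal(size=(n, n)); b2 = rng.normal(size=n)

def forward(x):
    # Two invertible layers: affine -> activation -> affine.
    return W2 @ leaky(W1 @ x + b1) + b2

def invert(y):
    # Undo each layer in reverse order to recover the input.
    h = leaky_inv(np.linalg.solve(W2, y - b2))
    return np.linalg.solve(W1, h - b1)

target = rng.normal(size=n)  # a desired output
x = invert(target)           # an input image that yields it
assert np.allclose(forward(x), target)
```

Because every layer here is a bijection, the computed input reproduces the target output exactly; for networks that lose information at some layer, such exact inversion is not available, which is what the paper's residual vector spaces are meant to visualize.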
Related papers
- Half-Space Feature Learning in Neural Networks [2.3249139042158853]
Two extreme viewpoints currently exist for neural network feature learning.
Based on a novel viewpoint, we argue that neither interpretation is likely to be correct.
We use this alternate interpretation to motivate a model called the Deep Linearly Gated Network (DLGN).
arXiv Detail & Related papers (2024-04-05T12:03:19Z)
- Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z)
- Graph Neural Networks Provably Benefit from Structural Information: A Feature Learning Perspective [53.999128831324576]
Graph neural networks (GNNs) have pioneered advancements in graph representation learning.
This study investigates the role of graph convolution within the context of feature learning theory.
arXiv Detail & Related papers (2023-06-24T10:21:11Z)
- Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs)
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied to optimal wireless resource allocation and a hate speech localization task, offering improved performance over traditional graph neural networks.
arXiv Detail & Related papers (2022-09-23T00:33:04Z)
- Neural Networks as Paths through the Space of Representations [5.165741406553346]
We develop a simple idea for interpreting the layer-by-layer construction of useful representations.
We formalize this intuitive idea of "distance" by leveraging recent work on metric representational similarity.
With this framework, the layer-wise computation implemented by a deep neural network can be viewed as a path in a high-dimensional representation space.
arXiv Detail & Related papers (2022-06-22T11:59:10Z)
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
- How and what to learn: The modes of machine learning [7.085027463060304]
We propose a new approach, namely the weight pathway analysis (WPA), to study the mechanism of multilayer neural networks.
WPA shows that a neural network stores and utilizes information in a "holographic" way, that is, the network encodes all training samples in a coherent structure.
It is found that hidden-layer neurons self-organize into different classes in the later stages of the learning process.
arXiv Detail & Related papers (2022-02-28T14:39:06Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
For simplicity, we focus on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Semiotic Aggregation in Deep Learning [0.0]
Convolutional neural networks utilize a hierarchy of neural network layers.
We analyze the saliency maps of these layers from the perspective of semiotics.
We show how the obtained knowledge can be used to explain the neural decision model.
arXiv Detail & Related papers (2021-04-22T08:55:54Z)
- Quiver Signal Processing (QSP) [145.6921439353007]
We state the basics for a signal processing framework on quiver representations.
We propose a signal processing framework that allows us to handle heterogeneous multidimensional information in networks.
arXiv Detail & Related papers (2020-10-22T08:40:15Z)
- Learning Local Complex Features using Randomized Neural Networks for Texture Analysis [0.1474723404975345]
We present a new approach that combines a learning technique and the Complex Network (CN) theory for texture analysis.
This method takes advantage of the representation capacity of CN to model a texture image as a directed network.
This neural network has a single hidden layer and uses a fast learning algorithm, which is able to learn local CN patterns for texture characterization.
arXiv Detail & Related papers (2020-07-10T23:18:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.