Random Graph-Based Neuromorphic Learning with a Layer-Weaken Structure
- URL: http://arxiv.org/abs/2111.08888v1
- Date: Wed, 17 Nov 2021 03:37:06 GMT
- Title: Random Graph-Based Neuromorphic Learning with a Layer-Weaken Structure
- Authors: Ruiqi Mao and Rongxin Cui
- Abstract summary: We transform random graph theory into an NN model with practical meaning, based on clarifying the input-output relationship of each neuron.
With this low-operation-cost approach, neurons are assigned to several groups whose connection relationships can be regarded as uniform representations of the random graphs they belong to.
We develop a joint classification mechanism involving information interaction between multiple RGNNs and realize significant performance improvements in supervised learning for three benchmark tasks.
- Score: 4.477401614534202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A unified understanding of neural networks (NNs) remains difficult because users are puzzled by what rules should be followed to optimize the internal structure of NNs. Considering the potential of random graphs to alter how computation is performed, we demonstrate that they can serve as architecture generators to optimize the internal structure of NNs. To turn random graph theory into an NN model with practical meaning, and based on clarifying the input-output relationship of each neuron, we complete data feature mapping by calculating Fourier Random Features (FRFs). With this low-operation-cost approach, neurons are assigned to several groups whose connection relationships can be regarded as uniform representations of the random graphs they belong to, and a random arrangement fuses those neurons to establish the pattern matrix, markedly reducing manual participation and computational cost without requiring a fixed, deep architecture. Leveraging this single neuromorphic learning model, termed the random graph-based neural network (RGNN), we develop a joint classification mechanism involving information interaction among multiple RGNNs and achieve significant performance improvements in supervised learning on three benchmark tasks, whereby the adverse impact of NNs' limited interpretability on structure design and engineering practice is effectively avoided.
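The abstract names two concrete ingredients, Fourier Random Features (FRFs) for data feature mapping and random graphs as architecture generators, but gives no formulas. The sketch below is a minimal illustration under assumed details: FRFs are taken to be standard random Fourier features (Gaussian frequencies, uniform phases), and the architecture generator is taken to be an Erdős–Rényi graph over neuron groups that yields a block connectivity (pattern) matrix. Function names, group sizes, bandwidth, and the edge probability are hypothetical, not taken from the paper.

```python
# Hypothetical sketch, NOT the paper's exact construction:
# (1) random-Fourier-feature mapping of the input data, and
# (2) an Erdős–Rényi random graph over neuron groups expanded
#     into a neuron-level connectivity ("pattern") mask.
import numpy as np

def fourier_random_features(X, n_features=256, sigma=1.0, rng=None):
    """Map inputs X of shape (n_samples, d) to n_features random Fourier features."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))  # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)      # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def random_graph_mask(n_groups=8, group_size=32, p=0.3, rng=None):
    """Build a block connectivity mask from an Erdős–Rényi graph on neuron groups."""
    rng = np.random.default_rng(rng)
    A = (rng.random((n_groups, n_groups)) < p).astype(float)  # group-level adjacency
    np.fill_diagonal(A, 1.0)                                   # keep within-group links
    # expand each group-level edge into a dense neuron-to-neuron block
    return np.kron(A, np.ones((group_size, group_size)))

# usage sketch: FRF mapping followed by one randomly masked layer
X = np.random.randn(100, 20)                          # toy inputs
Z = fourier_random_features(X, n_features=256)        # (100, 256)
mask = random_graph_mask(n_groups=8, group_size=32)   # (256, 256)
W_out = np.random.randn(256, 256) * mask              # connections only where the graph allows
H = np.tanh(Z @ W_out)                                # hidden representation
```

The usage lines show a single masked layer; the paper's joint classification mechanism combines multiple RGNNs, which this sketch does not attempt to reproduce.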
Related papers
- Exploring Structural Nonlinearity in Binary Polariton-Based Neuromorphic Architectures [0.0]
We show that structural nonlinearity, derived from the network's layout, plays a crucial role in facilitating complex computational tasks.
This shift in focus from individual neuron properties to network architecture could lead to significant advancements in the efficiency and applicability of neuromorphic computing.
arXiv Detail & Related papers (2024-11-09T09:29:46Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Structured Neural Networks for Density Estimation and Causal Inference [15.63518195860946]
Injecting structure into neural networks enables learning functions that satisfy invariances with respect to subsets of inputs.
We propose the Structured Neural Network (StrNN), which injects structure through masking pathways in a neural network.
arXiv Detail & Related papers (2023-11-03T20:15:05Z)
- Equivariant Matrix Function Neural Networks [1.8717045355288808]
We introduce Matrix Function Neural Networks (MFNs), a novel architecture that parameterizes non-local interactions through analytic matrix equivariant functions.
MFNs are able to capture intricate non-local interactions in quantum systems, paving the way to new state-of-the-art force fields.
arXiv Detail & Related papers (2023-10-16T14:17:00Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z)
- Modeling Structure with Undirected Neural Networks [20.506232306308977]
We propose undirected neural networks, a flexible framework for specifying computations that can be performed in any order.
We demonstrate the effectiveness of undirected neural architectures, both unstructured and structured, on a range of tasks.
arXiv Detail & Related papers (2022-02-08T10:06:51Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNNs) have recently emerged as the newest direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)
- Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
arXiv Detail & Related papers (2020-10-22T09:10:16Z)
- A Graph Neural Network Framework for Causal Inference in Brain Networks [0.3392372796177108]
A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static backbone.
We present a graph neural network (GNN) framework to describe functional interactions based on structural anatomical layout.
We show that GNNs are able to capture long-term dependencies in data and also scale up to the analysis of large-scale networks.
arXiv Detail & Related papers (2020-10-14T15:01:21Z)