Transforming to Yoked Neural Networks to Improve ANN Structure
- URL: http://arxiv.org/abs/2306.02157v3
- Date: Thu, 24 Aug 2023 15:51:01 GMT
- Title: Transforming to Yoked Neural Networks to Improve ANN Structure
- Authors: Xinshun Liu and Yizhi Fang and Yichao Jiang
- Abstract summary: Most existing artificial neural networks (ANNs) are designed as a tree structure to imitate biological neural networks.
We propose a model, YNN, to efficiently eliminate such structural bias.
In our model, nodes carry out aggregation and transformation of features, and edges determine the flow of information.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Most existing classical artificial neural networks (ANNs) are
designed as a tree structure to imitate biological neural networks. In this
paper, we argue that tree connectivity is not sufficient to characterize a
neural network: nodes at the same level of a tree cannot be connected with
each other, i.e., these neural units cannot share information with one
another, which is a major drawback of ANNs. Although ANNs have been extended
in recent years to more complex structures, such as the directed acyclic
graph (DAG), these methods still carry a unidirectional, acyclic bias. In
this paper, we propose a method that builds a bidirectional complete graph
over the nodes at the same level of an ANN, yoking those nodes into a neural
module; we call our model YNN for short. YNN significantly promotes
information transfer, which helps improve performance and allows the model
to imitate biological neural networks much better than a traditional ANN. We
analyze the structural bias of existing ANNs and propose the YNN model to
efficiently eliminate it. In our model, nodes carry out aggregation and
transformation of features, while edges determine the flow of information.
We further impose an auxiliary sparsity constraint on the distribution of
connectedness, which encourages the learned structure to focus on critical
connections. Finally, based on the optimized structure, we design a small
neural module structure using the minimum-cut technique to reduce the
computational burden of the YNN model. This learning process is compatible
with existing networks and different tasks. Quantitative experimental
results show that the learned connectivity is superior to the traditional
NN structure.
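The abstract describes three mechanisms: yoking the nodes of one level into a bidirectional complete graph, an auxiliary sparsity penalty on edge connectedness, and a minimum-cut split of the optimized structure into smaller modules. The following PyTorch sketch illustrates the first two under stated assumptions; the class and all names (YokedLayer, edge_logits, sparsity_loss) are hypothetical and are not taken from the authors' code.

```python
# A minimal sketch of a "yoked" level, assuming learnable per-edge
# strengths and an L1-style sparsity penalty. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class YokedLayer(nn.Module):
    """Connects the num_nodes units of one level as a bidirectional
    complete graph: every node aggregates features from every other
    node before applying its own transformation."""

    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        # One learnable strength per directed edge (i -> j), i != j,
        # so information can flow both ways between any pair of nodes.
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        # Per-node feature transformation applied after aggregation.
        self.transform = nn.Linear(dim, dim)
        self.num_nodes = num_nodes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, dim), one feature vector per node.
        # Mask self-loops so a node aggregates only its neighbours.
        mask = ~torch.eye(self.num_nodes, dtype=torch.bool, device=x.device)
        weights = torch.sigmoid(self.edge_logits) * mask
        # Node j receives sum_i weights[i, j] * x[:, i, :].
        aggregated = torch.einsum("ij,bid->bjd", weights, x)
        # Residual term keeps each node's own features in the mix.
        return F.relu(self.transform(x + aggregated))

    def sparsity_loss(self) -> torch.Tensor:
        # Auxiliary penalty pushing the learned connectivity to
        # concentrate on critical edges, as the abstract describes.
        return torch.sigmoid(self.edge_logits).sum()
```

In training, one would add lambda_sparse * layer.sparsity_loss() to the task loss; after convergence, thresholding sigmoid(edge_logits) yields a sparse connectivity graph, and running a minimum cut on that graph is one plausible way to split a level into the smaller neural modules the abstract mentions.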
Related papers
- SPENSER: Towards a NeuroEvolutionary Approach for Convolutional Spiking Neural Networks [0.0]
Spiking Neural Networks (SNNs) have attracted recent interest due to their energy efficiency and biological plausibility.
There is no consensus on the best learning algorithm for SNNs.
In this paper, we propose SPENSER, a framework for SNN generation based on DENSER.
arXiv Detail & Related papers (2023-05-18T14:06:37Z)
- Joint A-SNN: Joint Training of Artificial and Spiking Neural Networks via Self-Distillation and Weight Factorization [12.1610509770913]
Spiking Neural Networks (SNNs) mimic the spiking nature of brain neurons.
We propose a joint training framework of ANN and SNN, in which the ANN can guide the SNN's optimization.
Our method consistently outperforms many other state-of-the-art training methods.
arXiv Detail & Related papers (2023-05-03T13:12:17Z)
- Hybrid Spiking Neural Network Fine-tuning for Hippocampus Segmentation [3.1247096708403914]
Spiking neural networks (SNNs) have emerged as a low-power alternative to artificial neural networks (ANNs).
In this work, we propose a hybrid SNN training scheme and apply it to segment human hippocampi from magnetic resonance images.
arXiv Detail & Related papers (2023-02-14T20:18:57Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves the state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- BScNets: Block Simplicial Complex Neural Networks [79.81654213581977]
Simplicial neural networks (SNN) have recently emerged as the newest direction in graph learning.
We present Block Simplicial Complex Neural Networks (BScNets) model for link prediction.
BScNets outperforms state-of-the-art models by a significant margin while maintaining low costs.
arXiv Detail & Related papers (2021-12-13T17:35:54Z)
- Spiking neural networks trained via proxy [0.696125353550498]
We propose a new learning algorithm to train spiking neural networks (SNNs) using conventional artificial neural networks (ANNs) as a proxy.
We couple an SNN and an ANN, built from integrate-and-fire (IF) and ReLU neurons respectively, with the same network architecture and shared synaptic weights.
By treating the rate-coded IF neuron as an approximation of ReLU, we backpropagate the SNN's error through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN (a minimal sketch of this output swap appears after this list).
arXiv Detail & Related papers (2021-09-27T17:29:51Z)
- Explore the Knowledge contained in Network Weights to Obtain Sparse Neural Networks [2.649890751459017]
This paper proposes a novel learning approach to obtain sparse fully connected layers in neural networks (NNs) automatically.
We design a switcher neural network (SNN) to optimize the structure of the task neural network (TNN).
arXiv Detail & Related papers (2021-03-26T11:29:40Z)
- Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs), which contain only additions, offer a new way of developing deep neural networks with low energy consumption.
There is an accuracy drop when replacing all convolution filters with adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters.
arXiv Detail & Related papers (2020-09-28T03:29:19Z)
- Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
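The proxy-training entry above ("Spiking neural networks trained via proxy") describes a concrete mechanism: run an ANN and a spiking copy with shared weights, then substitute the SNN's output for the ANN's before computing the loss, so the error backpropagates through the differentiable ANN graph. A hedged PyTorch sketch of that output swap follows; proxy_training_step and snn_forward are hypothetical names, not the paper's code, and the straight-through substitution is one plausible implementation.

```python
# A sketch of proxy training under stated assumptions; illustrative only.
import torch
import torch.nn as nn

def proxy_training_step(ann: nn.Module, snn_forward, x, y, optimizer):
    """One optimization step. `ann` is a ReLU network holding the shared
    weights; `snn_forward` is assumed to run an IF-neuron spiking copy of
    the same architecture with those weights and return rate-coded outputs."""
    ann_out = ann(x)                   # differentiable proxy path
    with torch.no_grad():
        snn_out = snn_forward(ann, x)  # non-differentiable spiking path
    # Replace the ANN's final output value with the SNN's while keeping
    # the ANN's computation graph for backpropagation (straight-through).
    out = ann_out + (snn_out - ann_out).detach()
    loss = nn.functional.cross_entropy(out, y)
    optimizer.zero_grad()
    loss.backward()                    # gradients flow through ann_out
    optimizer.step()
    return loss.item()
```

The detached difference makes the forward value equal the SNN's rate-coded output while the gradient is taken with respect to the ANN's output, matching the summary's description of replacing the final output.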
This list is automatically generated from the titles and abstracts of the papers in this site.